Inspiration

Going to a foreign country can be intimidating, especially when ordering at restaurants means pronouncing words and symbols you've never seen before. We shouldn't be limited by language. With this tool you can order at any restaurant with ease, without having to point at what you want and awkwardly mime your order to the waiter or waitress.

What it does

Replaces the text in a photo of a menu with translated text in the user's language of choice. It also works as a general image translator.

How we built it

We first used the Google Cloud Vision API to extract text from images, then parsed the result and translated it with the Google Translate API. Along the way, we kept track of each "block" of text's position, blurred the original text, and wrote the newly translated text over it.
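The blur-and-overwrite step can be sketched with Pillow, which the project uses. This is a minimal illustration, not the project's actual code: `overlay_translation` and its `(left, top, right, bottom)` box argument are assumed names, and the box would come from a Vision API block's bounding polygon in the real pipeline.

```python
from PIL import Image, ImageDraw, ImageFilter

def overlay_translation(img, box, translated):
    """Blur the original text region, then draw the translated text over it.

    box is (left, top, right, bottom), e.g. derived from a Vision API
    block's bounding polygon (hypothetical helper, not the project's code).
    """
    # Blur only the region that contained the original text.
    region = img.crop(box)
    img.paste(region.filter(ImageFilter.GaussianBlur(radius=6)), box)
    # Write the translated text at the block's top-left corner.
    draw = ImageDraw.Draw(img)
    draw.text((box[0], box[1]), translated, fill="black")
    return img

# Demo: a blank image standing in for a menu photo.
menu = Image.new("RGB", (200, 60), "white")
ImageDraw.Draw(menu).text((10, 20), "Soupe du jour", fill="black")
overlay_translation(menu, (5, 10, 190, 50), "Soup of the day")
```

In practice the font size would also be scaled to the block's height so the replacement text fills roughly the same area as the original.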

Challenges I ran into

Implementing the Vision API; building a GUI that takes in an image and sends it to the main script; parsing the googletrans library's output to gather the right information; making a GUI that looks passable; implementing an algorithm that blurs the original text while writing the translated text over it; and supporting a wide range of languages. The hardest part was translating all of the original text in one call while still being able to place each portion of the translated text back at its original position, since we only had position data for "blocks" of text rather than individual sentences.
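One common workaround for that last challenge is to join the blocks with a separator, translate once, and re-split the result. This is a hedged sketch, not the project's code: the stand-in `translate` just uppercases its input so the example is self-contained, where the real version would call googletrans (which generally preserves newlines).

```python
SEP = "\n"

def translate(text):
    # Stand-in for a real API call, e.g. googletrans Translator().translate.
    return text.upper()

def translate_blocks(blocks):
    # One translation call for all blocks instead of one call per block.
    joined = SEP.join(blocks)
    # Re-split so each translated piece pairs with its block's position.
    return translate(joined).split(SEP)

print(translate_blocks(["Soupe du jour", "Plat principal", "Dessert"]))
```

This keeps the number of API calls constant regardless of how many blocks the menu contains, at the cost of trusting the translator not to merge or reorder lines.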

Accomplishments that I'm proud of

Using Google Cloud, creating a reasonable GUI for users, and supporting translation into any language the APIs offer. Parsing the Google Cloud Vision API's output and implementing an algorithm that blurs the original text and writes the translated text over it.

What I learned

The Vision API, parsing library output in Python, setting up google-cloud, overlaying text on images, blurring text, using Pillow, and using Tkinter to create a GUI.

What's next for Limited Language Translator

Show before-and-after images so users can compare the original and translated versions. Create a mobile app that lets users capture an image with their phone and get back a translated version. Clean up the program's output, particularly the blurred background and text outlines. Add a toggle for showing pronunciation that doesn't require the user to reload the image and rerun the program.
