We've always wanted to be able to point our phone at an object and know what that object is in another language. So we built that app.

What it does

Point your phone's camera at an object, and the app identifies it using the Inception neural network. We then translate the object's name from a source language (English) to a target language, usually one the user wants to learn, using the Google Cloud Translation API. Using ARKit, we display the name, in both English and the target language, on top of the object. To help the word stick, we also show a few different ways of using it in a sentence.
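The classification step can be sketched roughly like this in Swift, assuming the Inception v3 Core ML model has been added to the Xcode project (the function and variable names here are illustrative, not the app's actual source):

```swift
import Vision
import CoreML

// Run the Inception v3 Core ML model on a camera frame and hand back
// the top classification label (e.g. "coffee mug").
// Assumes Inceptionv3.mlmodel is bundled with the app.
func classify(pixelBuffer: CVPixelBuffer, completion: @escaping (String) -> Void) {
    guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else { return }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns observations sorted by confidence; take the best one.
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        completion(top.identifier)
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```

The label string returned here is what gets sent off for translation and rendered in the AR scene.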

All in all, the app is a great resource for learning how to pronounce and talk about everyday objects in different languages.

How we built it

We built the frontend mobile app in Swift, used ARKit to place words on top of an object, and used Google Cloud Functions to access the APIs.
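Placing a word on top of an object with ARKit boils down to creating a text node at a hit-test position. A minimal sketch, assuming we already have an `ARHitTestResult` for where the object sits in the scene (identifiers are illustrative):

```swift
import ARKit
import SceneKit

// Pin a text label slightly above a recognized object in the AR scene.
func addLabel(_ text: String, at result: ARHitTestResult, in sceneView: ARSCNView) {
    let geometry = SCNText(string: text, extrusionDepth: 0.5)
    geometry.font = UIFont.systemFont(ofSize: 10)
    let node = SCNNode(geometry: geometry)
    node.scale = SCNVector3(0.002, 0.002, 0.002) // SCNText is huge in world units by default
    let t = result.worldTransform.columns.3      // translation column of the hit transform
    node.position = SCNVector3(t.x, t.y + 0.02, t.z)
    sceneView.scene.rootNode.addChildNode(node)
}
```

In the app the label text would be the classified object name plus its translation.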

Challenges we ran into

Dealing with frame handling in the Swift frontend, and getting API authentication keys to work properly.

Accomplishments that we're proud of

We built an app that looks awesome with ARKit and has great functionality. We took an app idea and worked together to bring it to life.

What we learned

We learned in greater depth how Swift 4 works, how to use ARKit, and how easy Google Cloud Functions make it to offload server-like computation from your app without having to set up a server.
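The offloading pattern looks roughly like the sketch below: the app POSTs to a Cloud Function that wraps the Translation API, so the API key stays on the server and never ships in the app binary. The URL and JSON shape here are hypothetical:

```swift
import Foundation

// Ask a (hypothetical) Cloud Function endpoint to translate a word.
func translate(_ word: String, to target: String, completion: @escaping (String?) -> Void) {
    guard let url = URL(string: "https://us-central1-example.cloudfunctions.net/translate") else { return }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["text": word, "target": target])
    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: String] else {
            completion(nil)
            return
        }
        completion(json["translation"]) // assumed response shape: {"translation": "..."}
    }.resume()
}
```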

What's next for TranslateAR

IPO in December
