Inspiration

Learning a new language has always been time consuming. Traditional methods often involve studying words in a dull, static environment, and our experience has shown that this is not the best or most fun way to make new vocabulary "stick". That is why we wanted to build an on-the-go AR app that anyone can take with them to live-translate, in real time, the words they see most often, personalizing the learning experience for the individual.

What it does

Using iOS ARKit, we created an iOS app that works with an augmented reality headset. We take the live video feed from the camera and run a TensorFlow object-recognition model over the field of view; when the user focuses on an object of their choice, the model identifies it in English. The user must then speak the translated word for the object, and the app determines whether they are correct.
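For illustration, here is a minimal Swift sketch of that recognition loop, assuming the TensorFlow model has been converted to Core ML (the class name and confidence threshold are placeholders, not the project's actual ones): each ARKit camera frame is handed to Vision for classification.

```swift
import ARKit
import Vision

final class ObjectScanner: NSObject, ARSessionDelegate {
    // VNCoreMLModel wrapping the converted TensorFlow classifier;
    // the project's real model name isn't given in the writeup.
    private let request: VNCoreMLRequest

    init(model: VNCoreMLModel) {
        self.request = VNCoreMLRequest(model: model)
        super.init()
    }

    // ARKit calls this for every camera frame in the session.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
        try? handler.perform([request])
        guard let best = (request.results as? [VNClassificationObservation])?.first,
              best.confidence > 0.8 else { return }
        // English label for the object the user is focused on, e.g. "cup".
        print("Identified: \(best.identifier)")
    }
}
```

In practice the classification result would feed the translation and speech-check steps described below rather than just printing.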

How we built it

We leveraged ARKit from the iOS library for the app and integrated a TensorFlow model for object recognition. We created an API server for translated words using StdLib, hosted on Azure; the word translation itself is done through the Microsoft Bing translation service. Our scoring system is a database we created in Firebase.
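As a rough sketch of how the app talks to that server, it could call the StdLib endpoint from Swift like this (the URL and the plain-text response shape are assumptions; the writeup doesn't give the real service path):

```swift
import Foundation

// Hypothetical URL; the actual StdLib service path isn't in the writeup.
let endpoint = URL(string: "https://example.lib.id/translate?word=cup&to=zh-CN")!

// The StdLib function wraps the Microsoft translation call and, in this
// sketch, returns the translated word as plain UTF-8 text.
URLSession.shared.dataTask(with: endpoint) { data, _, error in
    guard error == nil, let data = data,
          let translated = String(data: data, encoding: .utf8) else { return }
    print("Translated: \(translated)")
}.resume()
```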

Challenges we ran into

Working out the field of view for ARKit. Getting translation to work on the app. Speech to text (see the sketch below). Learning JavaScript and integrating StdLib. Voice commands.
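The speech-to-text piece roughly follows Apple's Speech framework. Here is a minimal sketch of the recognize-and-compare step, assuming a recorded clip and a known expected translation (the function name and exact matching rule are ours, not the project's; live microphone input would additionally need an AVAudioEngine tap):

```swift
import Speech

// Compares what the user said against the expected translated word.
// Requires speech-recognition permission via SFSpeechRecognizer.requestAuthorization.
func checkPronunciation(expected: String, audioURL: URL,
                        locale: Locale = Locale(identifier: "zh-CN")) {
    guard let recognizer = SFSpeechRecognizer(locale: locale) else { return }
    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    recognizer.recognitionTask(with: request) { result, _ in
        guard let result = result, result.isFinal else { return }
        let spoken = result.bestTranscription.formattedString
        print(spoken == expected ? "Correct!" : "Heard \"\(spoken)\", try again")
    }
}
```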

Accomplishments that we're proud of

It worked!!

What we learned

How to work with ARKit scenes. How to spin up an API quickly through StdLib. A bit of Mandarin in the process of testing the app!

What's next for Visualingo

Voice commands for app settings (e.g., language), motion commands (head movements), and gamification of the app.
