Inspiration

As an international student, I've struggled with English firsthand, so I wanted to build a tool that makes that struggle easier and more fun to deal with. This app combines the immediacy of pointing your phone at something with the convenience of instantly seeing the word you're looking for.

What it does

This app recognises, names and translates the objects it is looking at, in real time. The target word is first extracted from the live camera feed and then translated into any language through an external API. Both the original (English) word and its translation are then displayed in augmented reality, which allows them to "stick" to the object they identify.
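
To make that first step concrete, here is a minimal sketch of how one camera frame could be classified with Vision and Core ML. It assumes the InceptionV3 model from Apple's model gallery is bundled in the app (Xcode generates the `Inceptionv3` class from the .mlmodel file name); the `classify` helper and its callback shape are illustrative, not the app's actual code.

```swift
import CoreML
import Vision

/// Classify one camera frame and hand back the top label, e.g. "coffee mug".
func classify(pixelBuffer: CVPixelBuffer, completion: @escaping (String) -> Void) {
    // Wrap the Xcode-generated InceptionV3 class in a Vision model.
    guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Observations come back sorted by confidence; take the best guess.
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        // ImageNet labels are often comma-separated synonyms ("notebook, notebook computer").
        let word = top.identifier.components(separatedBy: ", ").first ?? top.identifier
        DispatchQueue.main.async { completion(word) }
    }

    // Vision resizes and crops the pixel buffer to the model's expected input.
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}
```

In the live app, the pixel buffer would come from the AR session's current frame (`sceneView.session.currentFrame?.capturedImage`), throttled so the model doesn't run on every single frame.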

How I built it

The app was built entirely in Swift. The image recognition part of the project uses the InceptionV3 machine learning model, integrated into the iOS app via Apple's Core ML APIs. The translation component is handled by the Google Cloud Translation API. Finally, the augmented reality component was made possible by Apple's new ARKit framework.
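
For the translation step, a hedged sketch of the call to the Cloud Translation REST endpoint (v2) might look like the following; the API key is a placeholder, and the function name is mine:

```swift
import Foundation

/// Translate an English word into the given target language code (e.g. "it", "es").
func translate(_ text: String, to target: String, completion: @escaping (String) -> Void) {
    var components = URLComponents(string: "https://translation.googleapis.com/language/translate/v2")!
    components.queryItems = [
        URLQueryItem(name: "key", value: "YOUR_API_KEY"),  // placeholder key
        URLQueryItem(name: "q", value: text),
        URLQueryItem(name: "source", value: "en"),
        URLQueryItem(name: "target", value: target)
    ]
    URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        // Response shape: {"data": {"translations": [{"translatedText": "..."}]}}
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let payload = json["data"] as? [String: Any],
              let translations = payload["translations"] as? [[String: Any]],
              let translated = translations.first?["translatedText"] as? String
        else { return }
        DispatchQueue.main.async { completion(translated) }
    }.resume()
}
```

And on the AR side, one way to make a word "stick" to an object is to place an SCNText node at a hit-tested point in the scene, with a billboard constraint so it keeps facing the user (again a sketch under assumptions, not the app's exact code):

```swift
import ARKit
import SceneKit
import UIKit

/// Pin a two-line label (e.g. "cup\ntazza") to a world-space position.
func addLabel(_ text: String, at position: SCNVector3, in sceneView: ARSCNView) {
    let geometry = SCNText(string: text, extrusionDepth: 0.5)
    geometry.font = UIFont.boldSystemFont(ofSize: 10)
    geometry.firstMaterial?.diffuse.contents = UIColor.white

    let node = SCNNode(geometry: geometry)
    node.scale = SCNVector3(0.002, 0.002, 0.002)  // SCNText is enormous at unit scale
    node.position = position

    // Keep the label rotated toward the camera as the user walks around.
    let billboard = SCNBillboardConstraint()
    billboard.freeAxes = .Y
    node.constraints = [billboard]

    sceneView.scene.rootNode.addChildNode(node)
}
```

The position itself can come from an ARKit hit test against feature points, e.g. the `worldTransform` of `sceneView.hitTest(screenCenter, types: .featurePoint).first`.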

Challenges I ran into

Although I'd done some iPhone development before, this was my first ever project in Swift, so the learning curve was fairly steep! It was also my first time building an AR app, which proved more challenging than I'd initially expected.

Accomplishments that I'm proud of

I built an arguably useful piece of technology, with a nice-looking UI and a non-trivial technical component.

What I learned

Something great happens when different technologies and disciplines are brought together into one product.

Built With

Swift, ARKit, Core ML (InceptionV3), Google Cloud Translation API