Inspiration

We were inspired not only by the difficulty of acquiring a new language, but also by the doors, opportunities, and personal relationships that our own efforts to learn foreign languages have opened for us.

What it does

LinguaLens makes learning a language a familiar experience by tying it to the world around you. When you start the LinguaLens app (or webpage), the only view is a camera. Pointing the camera at a nearby object fetches an identification and classification of that object from the IBM Watson Image Recognition service. The label is then handed off to the Google Translate API, which converts it into the foreign language of interest.
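
As a rough sketch of that hand-off, assuming the ibm-watson Python SDK's VisualRecognitionV3 client and the google-cloud-translate v2 client (the credentials, version date, and helper name below are placeholders):

```python
# Sketch of the classify-then-translate pipeline; all credentials are placeholders.
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from google.cloud import translate_v2 as translate

visual_recognition = VisualRecognitionV3(
    version="2018-03-19",  # placeholder API version date
    authenticator=IAMAuthenticator("WATSON_API_KEY"),
)
translator = translate.Client()  # reads GOOGLE_APPLICATION_CREDENTIALS from the env

def identify_and_translate(image_file, target_lang="es"):
    """Return (English label, translated label) for the object in focus."""
    result = visual_recognition.classify(images_file=image_file).get_result()
    classes = result["images"][0]["classifiers"][0]["classes"]
    top = max(classes, key=lambda c: c["score"])  # highest-confidence label
    translated = translator.translate(top["class"], target_language=target_lang)
    return top["class"], translated["translatedText"]
```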

The user can then type in the name of the object in focus, but in the language they are currently learning. The correct label is always revealed afterwards, along with the label in the user's native tongue, so they can verify there were no misclassifications. When misclassifications do occur, they are stored locally on the server and later submitted to IBM Watson as training data to improve the model.
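
A minimal sketch of that checking and feedback loop; the log file path and record format are our illustration, and the actual retraining upload is left as a comment:

```python
import json
import time
import unicodedata

MISCLASSIFICATION_LOG = "misclassifications.jsonl"  # hypothetical local store

def normalize(text):
    """Case-fold and strip accents so a guess like 'Arbol' matches 'árbol'."""
    decomposed = unicodedata.normalize("NFKD", text.casefold())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def check_guess(guess, translated_label):
    """True when the learner's answer matches the translated label."""
    return normalize(guess) == normalize(translated_label)

def report_misclassification(image_path, predicted_label, actual_label):
    """Append a flagged Watson error to the local log for later retraining."""
    record = {
        "time": time.time(),
        "image": image_path,
        "predicted": predicted_label,
        "actual": actual_label,
    }
    with open(MISCLASSIFICATION_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
    # Accumulated records could later be zipped into positive examples and
    # pushed back to Watson (e.g. via the SDK's update_classifier call).
```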

How we built it

A Node.js server on Heroku serves a basic webpage with a single webcam view. Snapping a picture on this page submits it to a Python Flask server hosted on AWS. The Flask module receives the image, calls the IBM Watson API to identify the object in focus, and then calls the Google Translate API. Finally, it sends back a response containing only the candidate labels identified by IBM Watson, rendered in the language of interest.
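
A skeleton of that Flask module might look like the following; the route name and response shape are assumptions, and identify_and_translate is the hypothetical helper sketched above:

```python
from flask import Flask, jsonify, request

# Hypothetical module holding the classify-then-translate sketch from above.
from lingualens_pipeline import identify_and_translate

app = Flask(__name__)

@app.route("/classify", methods=["POST"])  # route name is an assumption
def classify():
    """Receive the snapped picture; return the label(s) in the target language."""
    image = request.files["image"]          # multipart upload from the webpage
    lang = request.form.get("lang", "es")   # learner's language of interest
    english, translated = identify_and_translate(image, target_lang=lang)
    # Only labels in the language of interest go back to the Heroku frontend;
    # the English label is kept for the misclassification check.
    return jsonify({"labels": [translated], "english": english})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```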

Challenges we ran into

Our lack of experience in web development was the single greatest blocker of this hackathon. We were not able to reach our final desired product because of limitations we ran into while interfacing Heroku, Flask, and AWS.

Accomplishments that we're proud of

We are proud, nonetheless, of what we were able to accomplish with Node.js and Flask. We set up two servers independently and began making API calls from them. The only thing we missed was the integration, something that could be completed with minimal issues if we team up with someone experienced in web development.

What we learned

- A crash course in web development
- How to use the IBM Watson API
- How to use the Google Translate API
- Source control
- Project management

What's next for LinguaLens

Once LinguaLens overcomes the integration issue, it will be ready to put on the web and on the iPhone for testing. We have a basic iOS app, repurposed from an IBM Watson sample, that also used a single camera view to perform image recognition on waste. We have edited its source code to build and work for our general object classification purposes. This means LinguaLens is not far from bringing users a polished language-learning experience on the App Store!
