Inspiration

Navigating the streets of Peru, we were lost trying to speak the local language. The pocket dictionary did not help much, and the broken grammar coming out of the translation apps on our phones made us look like fools in the local villages. We wondered whether there was a better solution than hiring a local guide...

What it does

Using Clarifai's API, Object Lens applies visual recognition to describe photos taken by the user. Each photo is immediately analyzed and broken down into the objects it contains. The backend then uses Google Translate to render those objects in the language the user is learning. Every picture is saved, and over time this builds a database of objects the user is familiar with. This allows Object Lens to challenge the user by quizzing their vocabulary against objects they have already seen.
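
To make the quiz idea concrete, here is a minimal sketch of how the backend might pick a previously seen object and grade the user's answer, assuming photos are stored in MongoDB with their English tags and translations. The schema, collection name, and helper functions are hypothetical, not our exact code:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical shape of a saved photo record (illustrative, not our exact schema).
interface PhotoDoc {
  tags: string[];        // English tags from Clarifai, e.g. ["llama", "street"]
  translated: string[];  // the same tags in the user's target language
  targetLang: string;    // e.g. "es"
}

const mongo = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");
const photos = mongo.db("objectlens").collection<PhotoDoc>("photos");

// Pick a random object the user has already photographed and quiz them on it.
async function nextQuizQuestion(lang: string) {
  const [doc] = await photos
    .aggregate<PhotoDoc>([
      { $match: { targetLang: lang } },
      { $sample: { size: 1 } }, // random document from the user's history
    ])
    .toArray();
  if (!doc || doc.tags.length === 0) return null;
  const i = Math.floor(Math.random() * doc.tags.length);
  return { prompt: doc.tags[i], answer: doc.translated[i] };
}

// Grade leniently: ignore case, accents, and surrounding whitespace.
function isCorrect(guess: string, answer: string): boolean {
  const norm = (s: string) =>
    s.normalize("NFD").replace(/[\u0300-\u036f]/g, "").trim().toLowerCase();
  return norm(guess) === norm(answer);
}
```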

How we built it

We used a range of technologies to make Object Lens happen. On a MEAN stack, we built a mobile-friendly website that uses Clarifai's API to recognize objects captured by the camera. The backend saves each image along with its associated tags, feeds the tags to Google Translate, and displays the translated information to the user.
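
Here is a minimal sketch of that pipeline as a single Express route. It assumes Clarifai's v2 REST `outputs` endpoint, the Cloud Translation v2 REST API, and Node 18+'s global `fetch`; the route path, model alias, collection name, and environment variables are illustrative rather than our exact code:

```typescript
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json({ limit: "10mb" })); // photos arrive as base64 in the JSON body

const mongo = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");
const photos = mongo.db("objectlens").collection("photos"); // hypothetical collection

// POST /api/recognize  { "imageBase64": "...", "targetLang": "es" }
app.post("/api/recognize", async (req, res) => {
  const { imageBase64, targetLang } = req.body;

  // 1. Ask Clarifai's general model what is in the picture
  //    (v2 REST API; the model alias below is an assumption).
  const clarifaiRes = await fetch(
    "https://api.clarifai.com/v2/models/general-image-recognition/outputs",
    {
      method: "POST",
      headers: {
        Authorization: `Key ${process.env.CLARIFAI_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: [{ data: { image: { base64: imageBase64 } } }] }),
    }
  );
  const clarifai = await clarifaiRes.json();
  const tags: string[] = clarifai.outputs[0].data.concepts
    .filter((c: { value: number }) => c.value > 0.9) // keep only confident concepts
    .map((c: { name: string }) => c.name);

  // 2. Translate the tags into the user's target language (Translation API v2).
  const translateRes = await fetch(
    `https://translation.googleapis.com/language/translate/v2?key=${process.env.GOOGLE_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ q: tags, target: targetLang, format: "text" }),
    }
  );
  const translated: string[] = (await translateRes.json()).data.translations.map(
    (t: { translatedText: string }) => t.translatedText
  );

  // 3. Save the photo with its tags so the quiz can reuse them later.
  await photos.insertOne({ imageBase64, tags, translated, targetLang, takenAt: new Date() });

  res.json({ tags, translated });
});

mongo.connect().then(() => app.listen(3000));
```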

Challenges we ran into

We had some trouble integrating all of our work together, but luckily we were able to finish in time.

Accomplishments that we are proud of

We love that we were able to put the project together this weekend! It's something we would love to work the kinks out of so we can use it in real life.

What we learned

It was the first time some of the team members had used the MEAN stack, and it was also our first time using Clarifai's API.

What's next for Object Lens

We want to improve the tagging of objects and generate relevant example sentences for the user. By drawing on the pictures the user has already taken, we also want to introduce similar pictures into the quiz. This will challenge the user and improve their vocabulary.
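
One lightweight way to find "similar" pictures without any new infrastructure would be to compare Clarifai tag sets. The sketch below uses Jaccard overlap; it is just one possible approach, not something we have built, and the function names are hypothetical:

```typescript
// Jaccard similarity between two photos' Clarifai tag sets: |A ∩ B| / |A ∪ B|.
function tagSimilarity(a: string[], b: string[]): number {
  const setA = new Set(a);
  const setB = new Set(b);
  let shared = 0;
  for (const t of setA) if (setB.has(t)) shared++;
  const union = setA.size + setB.size - shared;
  return union === 0 ? 0 : shared / union;
}

// Rank candidate photos by how much their tags overlap the user's own,
// so the quiz can mix in near-misses that stretch the user's vocabulary.
function similarPhotos<T extends { tags: string[] }>(userTags: string[], candidates: T[], k = 3) {
  return candidates
    .map((c) => ({ photo: c, score: tagSimilarity(userTags, c.tags) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```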
