Our initial idea was to create a health application that let users scan food items they'd eaten and see how many minutes of exercise it would take to burn off those calories. However, we ran into issues reading text on non-standard surfaces with Microsoft's Project Oxford image recognition API...SO WE PIVOTED IN THE LAST COUPLE HOURS OF THE HACKATHON.

We were still interested in using the image recognition API, so we created Read2Me.

The families of military service members move 10 times more often than the average household, almost once every two years. We've created an app that helps solve the translation issues frequently encountered on these moves. Our application gives the user the ability to take a picture of text, translate it, and have their device speak it aloud. With the original text still visible, the user can follow along and learn the language while the application reads the text to them.

Alternatively, the application helps users with poor eyesight understand what is written in a block of text. There are many instances where people are caught without their glasses but still need to get through their everyday life (which occasionally requires reading text). Our solution is an easy-to-use application with large, easy-to-see buttons designed for low vision, to help in exactly these situations.

We used Microsoft's Project Oxford image recognition API, as well as the Bing Translator API.
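
For reference, here is a minimal sketch of the picture → text → translation pipeline. Project Oxford has since been folded into Azure Cognitive Services, so this uses the current REST equivalents of the two APIs we called; the keys, region, and image filename are placeholders, and the final step of speaking the translation aloud is left to the device's native text-to-speech.

```python
import requests

# Placeholder credentials/endpoints -- the original hack used the (now retired)
# Project Oxford OCR API and the Bing Translator API; these are the current
# Azure equivalents of the same two services.
VISION_KEY = "YOUR_COMPUTER_VISION_KEY"
VISION_ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com/vision/v2.0/ocr"
TRANSLATOR_KEY = "YOUR_TRANSLATOR_KEY"
TRANSLATOR_ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def ocr_image(image_path):
    """Send a photo of text to the OCR endpoint and return the extracted string."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            VISION_ENDPOINT,
            params={"language": "unk", "detectOrientation": "true"},
            headers={
                "Ocp-Apim-Subscription-Key": VISION_KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    resp.raise_for_status()
    # The OCR response groups recognized text into regions -> lines -> words.
    lines = []
    for region in resp.json().get("regions", []):
        for line in region.get("lines", []):
            lines.append(" ".join(word["text"] for word in line["words"]))
    return "\n".join(lines)

def translate(text, to_lang="en"):
    """Translate the extracted text with the Translator REST API (v3)."""
    resp = requests.post(
        TRANSLATOR_ENDPOINT,
        params={"api-version": "3.0", "to": to_lang},
        headers={
            "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
            "Ocp-Apim-Subscription-Region": "YOUR_REGION",
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
    )
    resp.raise_for_status()
    return resp.json()[0]["translations"][0]["text"]

if __name__ == "__main__":
    original = ocr_image("menu.jpg")  # hypothetical photo of foreign-language text
    print("Original:  ", original)
    print("Translated:", translate(original, to_lang="en"))
```

In the app, the original OCR output stays on screen while the translated string is handed to the platform's text-to-speech engine, which is what lets the user follow along as the text is read to them.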

Moving forward, we would like to add support for more languages and to polish the user interface into a seamless experience.
