TravelLens

Inspiration

When traveling in a foreign country, it can be nearly impossible to know what to do, where to go, what to see, what to eat, and, most of all, what anyone around you is saying. Many people travel frequently for business, and we believe current travel technology leaves plenty of room for improvement in making those trips as enjoyable as possible.

What it does

Our project pairs Google Glass with several of Microsoft's Cognitive Services APIs to provide live assistance in many different travel situations. Most notably, it uses Microsoft's OCR API and Translator API to read signs and posters in unfamiliar languages and translate them for the user in real time. If the user is unsure how certain words are pronounced, we use Google's text-to-speech functionality to read those words aloud. Finally, we developed a live-subtitle feature for conversations with someone speaking another language.
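As a rough sketch of how the OCR step can be wired up from Java (the endpoint region, subscription-key placeholder, and class name below are illustrative assumptions, not our exact code):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

import org.json.JSONArray;
import org.json.JSONObject;

public class OcrClient {

    // Placeholder; a real key comes from the Cognitive Services portal.
    private static final String VISION_KEY = "YOUR_COGNITIVE_SERVICES_KEY";

    /**
     * Sends a JPEG frame to the Computer Vision v1.0 OCR endpoint and joins
     * the recognized words into a single string. Must be called from a
     * background thread (Android forbids network I/O on the UI thread).
     */
    public static String recognizeText(byte[] jpegBytes) throws Exception {
        URL url = new URL("https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr"
                + "?detectOrientation=true");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", VISION_KEY);
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jpegBytes);
        }
        String body = new Scanner(conn.getInputStream(), "UTF-8")
                .useDelimiter("\\A").next();

        // The OCR response nests text as regions -> lines -> words.
        StringBuilder text = new StringBuilder();
        JSONArray regions = new JSONObject(body).getJSONArray("regions");
        for (int r = 0; r < regions.length(); r++) {
            JSONArray lines = regions.getJSONObject(r).getJSONArray("lines");
            for (int l = 0; l < lines.length(); l++) {
                JSONArray words = lines.getJSONObject(l).getJSONArray("words");
                for (int w = 0; w < words.length(); w++) {
                    text.append(words.getJSONObject(w).getString("text")).append(' ');
                }
            }
        }
        return text.toString().trim();
    }
}
```

The recognized (or translated) string can then be handed to Android's built-in android.speech.tts.TextToSpeech engine for the pronunciation feature, which on Glass is backed by Google's text-to-speech service.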

How we built it

We built our project in Android Studio and used the aforementioned APIs to drive our core features: text recognition, language detection and translation, and object detection. To capture objects or text of interest, we keep a continuous, high-frame-rate camera preview running and take a snapshot when the user taps the side of the Glass. That snapshot is then processed by a mix of these APIs and our own code, and the relevant results are shown on the Glass's display (a skeleton of the capture flow is sketched below).
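Here is that capture skeleton, assuming the KitKat-era android.hardware.Camera API that the Glass runs; the class name and parameter values are illustrative, not our final tuning:

```java
import android.app.Activity;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.KeyEvent;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class CaptureActivity extends Activity implements SurfaceHolder.Callback {

    private Camera camera;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        SurfaceView preview = new SurfaceView(this);
        preview.getHolder().addCallback(this);
        setContentView(preview);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();                  // Glass has a single camera
        try {
            Camera.Parameters params = camera.getParameters();
            // The Glass camera needs an explicit preview FPS range or the
            // feed stalls; 30 fps here is illustrative.
            params.setPreviewFpsRange(30000, 30000);
            camera.setParameters(params);
            camera.setPreviewDisplay(holder);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        camera.startPreview();                   // the constant photo feed
    }

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        // In a Glass immersion, a tap on the touchpad arrives as DPAD_CENTER.
        if (keyCode == KeyEvent.KEYCODE_DPAD_CENTER && camera != null) {
            camera.takePicture(null, null, new Camera.PictureCallback() {
                @Override
                public void onPictureTaken(byte[] jpeg, Camera cam) {
                    // Hand the JPEG off to the OCR/translation pipeline here.
                    cam.startPreview();          // resume the live feed
                }
            });
            return true;
        }
        return super.onKeyDown(keyCode, event);
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        if (camera != null) {
            camera.release();
            camera = null;
        }
    }
}
```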

Challenges we ran into

We ran into several challenges. First and foremost, the Google Glass could not connect to the MHacksGuest network because of port incompatibilities, which caused a lot of trouble early on: we could not even see whether our code was working until the last day of hacking. Since the Glass runs Android 4.4, we also had to rely on older documentation for several standard Android features, and it was not always easy to find. In addition, because the Glass's camera has a very different resolution from most Android devices, adjusting our image-capture parameters for it was a long process that had to be finished before any analysis could be attempted in the rest of the project. All in all, there were several roadblocks along the way, but we learned a great deal from each of them.

Accomplishments that we're proud of

We are very proud of successfully hacking on the Glass for the first time. None of us had ever used this hardware before, and although it was difficult to work with at first, we are proud of what we produced given our level of experience. We are also proud of putting Microsoft's powerful APIs to effective use in speeding up and optimizing our I/O.

What we learned

We learned quite a lot from this project. Beyond everything we picked up about Google Glass and Microsoft's Cognitive APIs, for many of us this was our first time ever coding in Java, let alone for Android. The whole experience was very educational, and we are all happy with the knowledge we were able to take away from it.

What's next for mhacks-fall-2016

We hope to keep working with Google Glass and with augmented reality in general. It is a field we are all deeply interested in and one we would enjoy pursuing in the years to come. Thank you for checking out our project; we hope you enjoy what you see!

Built With

android-studio, google-glass, java, microsoft-cognitive-services