Inspiration

From the times we had to help a visually impaired person on the street and wished we could have done more. From the realization that having to perform and function in an unfamiliar space is anxiety-inducing enough for able-bodied people; facing that challenge with a visual impairment seems next to impossible without guidance.

What it does

Lo-Kate is an Android app that lets users verbally describe the object they are looking for; it then uses the camera to locate the requested object and directs the user toward its location relative to themselves.

How we built it

With caffeine and a lack of sleep. But really, we started with a brainstorming session in which each of us chose two favorite ideas to research and defend. After settling on an idea, we planned out the phases needed to reach our end goal. As for the technologies: we used Android Studio and its speech-to-text conversion library to build an app that lets users speak aloud the object they seek, converts their speech to text, and identifies the name of the object in the request. Next, using the open-source machine learning framework TensorFlow, we altered its Object Detection API to identify only the requested object.
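The step between speech recognition and object detection is pulling the target object's name out of the transcript. The sketch below is a hypothetical illustration of that idea in plain Java, not the actual Lo-Kate code; the class name, method name, and filler-word list are all assumptions:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

/**
 * Hypothetical parser that extracts the requested object's name from a
 * speech-to-text transcript such as "find my water bottle".
 */
public class ObjectRequestParser {

    // Illustrative filler words stripped before treating the rest as the object name.
    private static final List<String> FILLERS = Arrays.asList(
            "find", "locate", "where", "is", "are", "my", "the", "a", "an", "please");

    public static String extract(String transcript) {
        StringBuilder object = new StringBuilder();
        for (String word : transcript.toLowerCase(Locale.ROOT).split("\\s+")) {
            if (!FILLERS.contains(word)) {
                if (object.length() > 0) object.append(' ');
                object.append(word);
            }
        }
        return object.toString();
    }
}
```

For example, `ObjectRequestParser.extract("Find my water bottle")` would yield `"water bottle"`, which could then be matched against the detector's label set.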

Challenges we ran into

The biggest challenge we ran into was a compatibility issue when integrating the object detection and speech recognition software together. Since Android Studio was working on only two laptops, we used a pair programming approach to make efficient use of our resources.

Accomplishments that we are proud of

What seemed like an impossible project to finish within 36 hours, we managed to divide and conquer through efficient teamwork (a workflow split into phases, a task list ordered by priority, etc.). Working with Android Studio was a pain, but we managed to pull through and get the app DONE AND WORKING!

What we learned

How to handle threading in Android using AsyncTask, as well as implementing communication between foreground and background services. Additionally, we learned the importance of dynamic teamwork and collaboration using agile development methodologies.
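The core pattern AsyncTask provides is running heavy work (such as a TensorFlow inference) off the main thread and handing the result back when it finishes. As a minimal sketch of that idea, here is a plain-Java analogue using `ExecutorService`; this is not Android's AsyncTask API itself, and the class and method names are illustrative:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Plain-Java sketch of the background-work-then-result pattern that
 * AsyncTask provides on Android. The Future stands in for the value
 * AsyncTask would deliver to onPostExecute().
 */
public class BackgroundWorker {
    // Single worker thread, mirroring AsyncTask's default serial executor.
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    /** Run the task off the calling thread; the Future carries its result. */
    public Future<String> submit(Callable<String> task) {
        return pool.submit(task);
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

On a real Android app the result would be posted back to the UI thread (e.g. via a `Handler`), since views may only be touched from the main thread.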

What's next for Lo-Kate

The ultimate goal is to integrate the software into a pair of Google Glass with which the user can communicate. It would be able to find not only objects but also buildings and landmarks, and give directions on request. Hopefully, with further development, this software could have a meaningful impact on the lives of those who face challenges greater than our own.

Built With

  • tensorflow
  • android-studio
  • java
  • android-speech-to-text
  • android-text-to-speech