Inspiration

At first, we weren't sure what we were going to create for the hackathon. While brainstorming ideas, one of us started talking about blind people, and we built on that: we decided to create a mobile application that would assist blind users in understanding and staying aware of their surroundings at all times. Nowadays, powerful cloud services and big data are readily available, so why not use them?

What it does

It helps blind people in their daily life by acting as their eyes. The app takes pictures with the phone's front camera at a set frequency and continuously sends them to our server, which analyzes the pictures and sends back JSON objects. The server sends only one JSON object per batch of 5 pictures (one picture every 2 seconds, so 10 seconds per batch), responding asynchronously to the best image of the batch (roughly one response every 6 seconds). This JSON object includes the names of objects recognized in the image as well as a short message describing the scene; only the most "confident" results from the batch are returned. The application uses this data to give a spoken response.
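
To make that concrete, here is a rough sketch of how a response like that could be turned into speech on Android. The "caption" and "tags" field names are illustrative placeholders, not the exact payload our server sends:

```java
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import android.speech.tts.TextToSpeech;

class SpeechResponder {
    // Example payload (hypothetical shape):
    // {"caption": "a person crossing a street", "tags": ["person", "street", "car"]}
    void speakResult(String payload, TextToSpeech tts) throws JSONException {
        JSONObject result = new JSONObject(payload);
        String caption = result.optString("caption", "");
        JSONArray tags = result.optJSONArray("tags");

        StringBuilder speech = new StringBuilder(caption);
        if (tags != null && tags.length() > 0) {
            speech.append(". I can see ");
            for (int i = 0; i < tags.length(); i++) {
                if (i > 0) speech.append(", ");
                speech.append(tags.getString(i));
            }
        }
        // QUEUE_FLUSH replaces any utterance still playing, so stale
        // descriptions do not pile up between batches.
        tts.speak(speech.toString(), TextToSpeech.QUEUE_FLUSH, null, "uaware-result");
    }
}
```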

How we built it

The Android application was built in Java using Android Studio. Third-party libraries were used to ease the programming process: Retrofit was used to send the captured JPEG images over HTTP, and Socket.IO was used to establish a socket between the server and the phone, over which the JSON results are transmitted. The server was built with Node.js and deployed on Heroku. MongoDB was used to store temporary information about the images. Lastly, the Microsoft Cognitive Services API was used to incorporate machine learning; specifically, the Computer Vision API detects objects and describes one's surroundings.
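
For reference, a stripped-down sketch of that client-side plumbing in Java; the /upload route and the "analysis" event name are placeholders rather than our exact endpoint names:

```java
import java.io.File;
import okhttp3.MediaType;
import okhttp3.MultipartBody;
import okhttp3.RequestBody;
import retrofit2.Call;
import retrofit2.Callback;
import retrofit2.Response;
import retrofit2.Retrofit;
import retrofit2.http.Multipart;
import retrofit2.http.POST;
import retrofit2.http.Part;
import io.socket.client.IO;
import io.socket.client.Socket;

// Hypothetical upload endpoint, for illustration only.
interface UploadService {
    @Multipart
    @POST("/upload")
    Call<Void> uploadFrame(@Part MultipartBody.Part image);
}

class Uplink {
    private final UploadService service;
    private final Socket socket;

    // serverUrl is the Heroku app URL, e.g. "https://our-app.herokuapp.com/"
    Uplink(String serverUrl) throws java.net.URISyntaxException {
        service = new Retrofit.Builder()
                .baseUrl(serverUrl)
                .build()
                .create(UploadService.class);

        // The Socket.IO channel carries the JSON results back from the server.
        socket = IO.socket(serverUrl);
        socket.on("analysis", args -> {
            String json = args[0].toString();
            // hand the JSON off to the text-to-speech layer here
        });
        socket.connect();
    }

    // Send one captured JPEG frame over HTTP with Retrofit.
    void sendFrame(File jpeg) {
        RequestBody body = RequestBody.create(MediaType.parse("image/jpeg"), jpeg);
        MultipartBody.Part part =
                MultipartBody.Part.createFormData("image", jpeg.getName(), body);
        service.uploadFrame(part).enqueue(new Callback<Void>() {
            @Override public void onResponse(Call<Void> call, Response<Void> r) { }
            @Override public void onFailure(Call<Void> call, Throwable t) { }
        });
    }
}
```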

Challenges we ran into

In developing Uaware, the biggest challenge was programming the camera to work correctly without any user interaction. Additionally, sending and receiving the right data between client and server (images, JSON, and socket establishment) brought several issues that affected the performance and usability of the application; through testing and iteration, these problems were fixed. Lastly, a smaller challenge we noticed recently is the request limit imposed on Microsoft Cognitive Services API calls: while trying to improve accuracy, we found that the API would block our requests because of their sheer number. This remains the main trade-off between performance and accuracy.
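
As a sketch of one way to work around such a limit, the client could throttle itself to a fixed number of requests per minute and simply skip extra frames. The 20-per-minute figure below is only an example, not the service's actual quota:

```java
import java.util.ArrayDeque;

// Minimal client-side throttle: allow at most maxPerMinute requests per
// minute and drop extra frames instead of being blocked by the API.
class RequestThrottle {
    private final int maxPerMinute;
    private final ArrayDeque<Long> timestamps = new ArrayDeque<>();

    RequestThrottle(int maxPerMinute) {
        this.maxPerMinute = maxPerMinute;
    }

    // Returns true if a request may be sent now, false if this frame should be skipped.
    synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Forget timestamps older than one minute.
        while (!timestamps.isEmpty() && now - timestamps.peekFirst() > 60_000) {
            timestamps.pollFirst();
        }
        if (timestamps.size() >= maxPerMinute) {
            return false;
        }
        timestamps.addLast(now);
        return true;
    }
}

// Usage: RequestThrottle throttle = new RequestThrottle(20);
//        if (throttle.tryAcquire()) { /* call the Computer Vision API */ }
```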

Accomplishments that we're proud of

Our team is proud of ironing out all the bugs that prevented the app from working correctly. Socket.IO and Heroku are known not to play well together, but we managed to adjust the configuration and get everything working smoothly. Above all, we are simply proud to have developed a helpful app for people with disabilities.

What we learned

We learned that with a good team, anything can be built as long as there is a solid goal. The innovation happening in Silicon Valley and Los Angeles can be used to improve everyone's lives, and it is important for students and younger generations to take part in creating what was previously thought impossible. All three of us learned many things, from Socket.IO to the Microsoft Cognitive Services APIs.

What's next for T7 Uaware

T7 has already planned its future. We would love to share with the open-source community all the code we wrote during this amazing hackathon; it will be released on GitHub once we have cleaned it up. Uaware could go further in helping people with disabilities in their daily lives and making those lives easier. Currently, we help blind users avoid hazardous situations by keeping them alerted to their environment; in the future we could integrate more APIs to add other features, such as face recognition and more.
