Although visual impairment is one of the most widespread health issues in the world today, much of the general population remains unaware of its scale. Consequently, the visually impaired have been critically under-served in our society, as not nearly enough effort has been made to improve their quality of life. Because of this, we decided to develop an intuitive, easy-to-use solution that would help make the day-to-day lives of the visually impaired just a little bit easier.

What it does

Essentially, Sixth Sense is a native interface that works entirely and seamlessly through both your smartphone and a digital assistant of your choice, e.g., Amazon's Alexa. To start, all you have to do is take out your phone and snap several pictures of your surroundings. Next, simply ask Alexa basic questions about your environment, such as "Alexa, what's in front of me?" or "Alexa, what's to my right?" It doesn't stop there, however: Sixth Sense progressively provides more information as the user prompts it further. If you aren't satisfied with the information you receive at first, you can ask Alexa additional questions to probe more deeply into your surroundings.
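For illustration only (the function and direction names here are hypothetical, not the actual Alexa skill code), the question-answering step can be thought of as mapping each spoken question to a direction, which is then used to look up the most recent photo of that direction:

```python
# Hypothetical sketch of how a question like "Alexa, what's in front of me?"
# is mapped to a direction; the real skill uses Alexa's intent slots.

DIRECTIONS = ("front", "left", "right", "behind")

def direction_from_question(question):
    """Pick out the direction mentioned in the user's question, if any."""
    q = question.lower()
    for direction in DIRECTIONS:
        if direction in q:
            return direction
    return None
```

A question with no recognizable direction simply returns `None`, which the skill can answer with a re-prompt.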

How we built it

Sixth Sense was built by passing a real-time picture taken on a smartphone seamlessly through the cloud. The cloud then passes this picture file to the machine learning module, which recognizes both the objects in the picture and the contexts those objects are placed in. The analyzed image is then ready for the user to query through Alexa.
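As a rough sketch of that pipeline (the object-recognition step is stubbed out below; Sixth Sense runs a real machine learning module in the cloud, and all names here are illustrative):

```python
# Illustrative sketch of the Sixth Sense pipeline: photo in, recognized
# objects stored per direction, ready for the user's questions.

def recognize_objects(image_bytes):
    """Stub for the ML module: returns object labels found in the image."""
    # A real implementation would run an object-detection model here.
    return ["chair", "table"]

class SceneStore:
    """Holds the latest description for each direction the user photographed."""

    def __init__(self):
        self.scenes = {}

    def ingest(self, direction, image_bytes):
        # The phone uploads a photo tagged with a direction; the cloud
        # recognizes its contents and stores them for later questions.
        self.scenes[direction] = recognize_objects(image_bytes)

    def describe(self, direction):
        # Produce the spoken answer Alexa would read back.
        objects = self.scenes.get(direction)
        if not objects:
            return "I don't have a picture of that direction yet."
        return "I can see " + " and ".join(objects) + "."
```

In this sketch, `ingest` corresponds to the photo upload and recognition steps, and `describe` corresponds to answering a question like "what's in front of me?"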

Challenges we ran into

Integration across four different devices (the phone, Amazon Alexa, the server, and the visualizer) proved to be the most significant challenge we faced as a group, particularly with regard to the custom interfaces we created. Despite the difficulty of this obstacle, ironing out the integration issues gave us a deeper appreciation of what we were trying to accomplish.

Accomplishments that we're proud of

Overall, we're very proud of how seamlessly Sixth Sense integrates with the various devices we used. In particular, we achieved our vision of letting users rely solely on their smartphone cameras, with no external GUI required. Moreover, Sixth Sense's ability to translate visual input (photos) into auditory output (via Alexa) is a critical component of our overarching vision, allowing the visually impaired to navigate their world more easily.

What we learned

As we improved the back-end integration, we developed a stronger understanding of Flask, bash scripting and cross-origin resource sharing (CORS).
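As a minimal sketch of the CORS piece (the route and the wildcard origin are placeholders, not Sixth Sense's actual endpoints or policy), adding the relevant response headers in Flask can look like:

```python
# Minimal Flask CORS sketch: the /scene route and the "*" origin are
# placeholders for illustration only.
from flask import Flask, jsonify

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    # Allow browser-based clients on another origin to call this API.
    response.headers["Access-Control-Allow-Origin"] = "*"
    response.headers["Access-Control-Allow-Headers"] = "Content-Type"
    return response

@app.route("/scene")
def scene():
    return jsonify({"objects": ["chair", "table"]})
```

In practice the `flask-cors` extension handles this (including preflight requests) with less code; the manual version above just shows what the header exchange boils down to.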

What's next for Sixth Sense

Because it requires no graphical user interface, Sixth Sense has huge potential for adoption among the non-technical population. It requires only two simple skills: the ability to use a smartphone camera and the ability to ask Alexa questions. It also has high potential for integration into other apps.

More importantly, however, Sixth Sense has huge potential for the future of AI, as it can help computer systems develop a stronger understanding of their surroundings.
