Freedom. Independence. Two concepts we take for granted in our everyday lives. Reflecting on our experiences volunteering at hospitals during high school, we realized that one commonality we shared was working with visually impaired patients, often guiding them through the hospital. What if we could leverage the power of audio-visual recognition systems? Our goal is to give the visually impaired the freedom and independence to re-engage with society by providing voice-guided indoor navigation and facial sentiment analysis.

What it does

While outdoor navigation is a well-served problem, indoor navigation is often neglected. Our software provides both voice-guided indoor navigation and facial sentiment analysis.

How we built it

We achieved indoor navigation through a mixture of three techniques: QR code recognition, paired-color pattern recognition, and ultrasonic frequency detection. We developed an algorithm that determines the most efficient route to the destination through the hallway network, and used QR codes and paired-color patterns for localization and guidance along the way. For guidance over the final stretch to the destination, we used ultrasonic waves (22 kHz and above), defining a small language of beep patterns to transmit information. For facial sentiment analysis, we uploaded photos taken at set intervals to a Firebase database and processed them on Google Cloud Platform using its sentiment analysis and object detection services. Keeping our stakeholders in mind, we also built a fully audio-based UI to deliver directions and process queries.
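The route-finding step can be sketched as a shortest-path search over the hallway network. This is a minimal illustration, not our exact algorithm: the node names, edge weights, and graph shape below are hypothetical, and it assumes the hallway network is modeled as a weighted graph whose nodes are QR-code waypoints.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's shortest path over a hallway graph.
    graph: {node: [(neighbor, distance_in_metres), ...]}
    Returns (total_distance, [waypoint, ...]) or (inf, []) if unreachable."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            # Reconstruct the route by walking predecessors back to the start.
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    return float("inf"), []

# Hypothetical hallway network: nodes are QR-code waypoints, weights are metres.
halls = {
    "entrance": [("lobby", 5)],
    "lobby": [("entrance", 5), ("ward_a", 12), ("ward_b", 20)],
    "ward_a": [("lobby", 12), ("ward_b", 6)],
    "ward_b": [("lobby", 20), ("ward_a", 6)],
}
```

In this toy network, `shortest_route(halls, "entrance", "ward_b")` prefers the detour through `ward_a` (5 + 12 + 6 = 23 m) over the direct hallway (5 + 20 = 25 m).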
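Detecting the ultrasonic beeps amounts to checking the signal power in a single frequency bin per audio frame. One cheap way to do that (a sketch under assumptions, not our production code; the 22 kHz target, 48 kHz sample rate, and threshold are illustrative) is the Goertzel algorithm, which computes one DFT bin without a full FFT:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Power of a single DFT bin (Goertzel algorithm) -- cheap
    single-tone detection for spotting an ultrasonic beep in a frame."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest bin index
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def beep_present(samples, sample_rate, target_hz=22000.0, threshold=1.0):
    # Hypothetical threshold; real devices need calibration per microphone.
    return goertzel_power(samples, sample_rate, target_hz) > threshold

# A 10 ms frame at 48 kHz containing a pure 22 kHz tone.
rate = 48000
tone = [math.sin(2.0 * math.pi * 22000.0 * i / rate) for i in range(480)]
silence = [0.0] * 480
```

Timing which frames return `True` recovers the beep pattern, which can then be decoded into navigation cues. Note that hearing a 22 kHz tone at all requires a sampling rate above 44 kHz, which is exactly the kind of hardware limitation we ran into.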

Challenges we ran into

React, uploading files to Firebase, hardware limits on audio sampling rates, QR code tolerances, and integrating everyone's work at the end (a consequence of parallel development).

Accomplishments that we're proud of

Ultrasonic guidance, our route-optimization algorithm, and our ability to solve a complex problem with innovative solutions.

What we learned

FFTs, parallel development (and what not to do in it), engineering practices, and optimizing code that deals with large datasets.

What's next for nvsble

Using hardware with fewer limitations to develop the app further, richer sentiment analysis, and natural language processing to better structure and present a scene to the user. We also plan to partner with a local organization to deploy our technology and test its functionality.
