Inspiration

We initially planned on creating an AR/VR experience for dementia patients as a form of rehabilitation, but we decided we wanted to play more. So instead, we ran with the idea of a phone-to-device command system that guides visually impaired people - foreseeing that we would need to work through numerous technical challenges (e.g. adding the automatic text-to-speech function, making the gimbal rotate with real-time human commands, etc.).

What it does

It’s an audio companion for blind people 🔊. With the Gimbal device as the “eye” 👁 and the iOS App as the “companion” 👯‍♀️, visually impaired users can learn about their location 🗺, navigate through their surroundings ⤵️ and understand them (they can ask about the history of a location, what an object is for, etc.) 🧠.

How we built it

Since our tool is designed for visually impaired people, our mobile app automatically reads the interface aloud (text-to-speech) so the user can navigate the app easily - from the login page all the way to map navigation and the bot companion.
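
A minimal sketch of what that automatic read-aloud layer can look like with AVFoundation's AVSpeechSynthesizer (the class and method names here are illustrative, not our exact code):

```swift
import AVFoundation

/// Speaks on-screen text aloud so a visually impaired user can follow the flow
/// from login through map navigation without reading the screen.
final class VoiceGuide {
    private let synthesizer = AVSpeechSynthesizer()

    func announce(_ text: String) {
        // Cut off any in-progress announcement so prompts don't overlap.
        _ = synthesizer.stopSpeaking(at: .immediate)

        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }
}

// Example: announce the login screen when it appears.
// VoiceGuide().announce("Welcome to AdaEye. Double-tap anywhere to log in.")
```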

So how did we do it? When the user audibly asks “What’s that in front of me?”, the app receives the audio command and transcribes it into text. That text is sent as a message to a central broker running on Azure, which delivers it to the Raspberry Pi camera or Jetson Nano so the gimbal can rotate or adjust accordingly and take a picture. The picture is then sent to Google Cloud Vision for recognition, and the app receives the classification back, where the embedded voice command reads the answer out to the user.
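
Here is a rough sketch of the app-side leg of that round trip - the transcribed question goes out to the backend gateway and the classification text comes back. The endpoint URL and the `VisionReply` shape are placeholders for illustration, not the real API:

```swift
import Foundation

struct VisionReply: Decodable {
    let classification: String   // e.g. "a park bench about two meters ahead"
}

/// Sends the transcribed question to the backend gateway (placeholder URL);
/// the backend relays it through the broker to the camera, runs the photo
/// through Cloud Vision, and returns the classification text.
func askAboutSurroundings(_ question: String,
                          completion: @escaping (String?) -> Void) {
    // Placeholder endpoint -- the real gateway route isn't shown in this writeup.
    var request = URLRequest(url: URL(string: "https://example.execute-api.us-west-2.amazonaws.com/ask")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(["question": question])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let reply = try? JSONDecoder().decode(VisionReply.self, from: data) else {
            completion(nil)
            return
        }
        // Hand this string to the text-to-speech layer to read aloud.
        completion(reply.classification)
    }.resume()
}
```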

We used SolidWorks to 3D-design the Gimbal device, Figma to lay the UI design foundation of the app, and Xcode to make the designs testable on the iPhone and weave three cloud services into one pipeline (AWS for the backend gateway API, Azure for RabbitMQ, GCP for Cloud Vision recognition).

Challenges we ran into

🌀 Since it was snowing, the nearby 3D-printing shops in the Seattle area were closed, so we weren’t able to print our gimbal device in time. Instead, we used existing materials lying around the house, taped them together, and created a gimbal from scratch.

🌀 We initially had difficulty figuring out how to get the automatic voice command to work, but eventually did. What was harder was the asynchronous nature of voice recognition: we had to write a locking mechanism so it wouldn’t fire thousands of requests per second (in other words, so it wouldn’t crash and malfunction).
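
A rough sketch of one way such a gate can work, assuming Apple's Speech framework (SFSpeechRecognizer) delivers the recognition results; the type and names are illustrative:

```swift
import Speech

/// Guards against the recognizer's asynchronous results flooding the backend:
/// only one question is in flight at a time, and partial transcriptions are ignored.
final class RecognitionGate {
    private let lock = NSLock()
    private var requestInFlight = false

    /// Called from the speech recognition result handler, which fires
    /// repeatedly and asynchronously as the transcription is refined.
    func handle(result: SFSpeechRecognitionResult, send: (String) -> Void) {
        guard result.isFinal else { return }          // skip partial hypotheses

        lock.lock()
        defer { lock.unlock() }
        guard !requestInFlight else { return }        // a question is already out
        requestInFlight = true

        send(result.bestTranscription.formattedString)
    }

    /// Called when the backend's answer (or an error) comes back.
    func finish() {
        lock.lock()
        requestInFlight = false
        lock.unlock()
    }
}
```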

Accomplishments that we're proud of

🏆 Having a testable iOS app with a functioning backend and a fair UX design to complement it

What we learned

✳️ The Jetson platform is truly much more functional and powerful than the Raspberry Pi

What's next for AdaEye

➡️ 3D print the Gimbal device
➡️ Develop the iOS app further so it can be deployed on the Apple App Store
➡️ Make the app cross-platform (rewriting it with Flutter/Xamarin)
