Inspiration

Both of us wear glasses and have pretty bad vision. Even though we are nowhere near severely visually impaired, problems with eyesight are something we easily relate to. We can only imagine what it feels like to never be able to see properly, because even just taking off our glasses makes so much of our sight disappear, and those moments are genuinely uncomfortable. We have also seen people try to tackle this problem before, but none of the solutions have really been embraced by society. We wanted to give our best shot at changing that.

What it does

Mirage has two parts: the module and the app. The module is a 3D-printed device that mounts onto any standard cane that visually impaired people use. It has a 1080p camera mounted on a gimbal, recognizes street signs, people, roadways, paths, and obstacles in front of the person as they walk, and then sends audio to Bluetooth headphones informing them of all of these things. The module + headphone setup keeps the individual aware of their surroundings at all times, even when they are alone. But Mirage is so much more: it is the perfect cross-section of hardware and software to help the visually impaired. The mobile application can be downloaded by the blind person's caretaker. With the app, they can talk via audio to the visually impaired person's headset and get a live camera feed of the cane's field of view as the person walks. The caretaker can go about whatever they need to do, even when their loved one is away and alone, and still check in whenever needed.

How we built it

We used Autodesk Inventor to design the 3D-printable module from scratch. A Raspberry Pi performs all of the computer vision using Python and OpenCV. We wanted to use only a camera for vision because it gave us the most information with the fewest sensors. Once objects are recognized, text-to-speech (TTS) software generates warnings and commands for the visually impaired person, which are played over a Bluetooth connection from the Raspberry Pi to Apple AirPods.

Since the cane is constantly swinging at different angles, we didn't want the camera feed to be shaky, so we built a gimbal mechanism that uses control algorithms running on an Arduino UNO with a servo and an IMU to stabilize the camera as the person walks. All of these components are powered by a portable battery pack that is also mounted on the cane.

To build the caretaker's app, we used Swift and Xcode. To achieve live-streamed video between Python (Raspberry Pi) and Swift (iPhone), we used the Twilio Video API: a Python program on the Raspberry Pi launches a webpage built with Node.js and HTML/CSS/JS and exposed through Ngrok, which the Swift app on the iPhone connects to for the live stream. Alongside the live stream, we also send the module's GPS coordinates to the app, where they are plotted on a map for the caretaker. Finally, we built secure user authentication for caretakers with Google's Firebase Authentication and Database, so each of them can log in to our app securely and store private information about themselves or their family.
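
To make the detection-to-audio pipeline concrete, here is a minimal sketch in the spirit of what runs on the Pi. It assumes a pretrained MobileNet-SSD model loaded through OpenCV's dnn module and the pyttsx3 TTS library; the model files, label list, and confidence threshold below are illustrative placeholders rather than our exact setup.

```python
# Sketch of the camera -> detection -> spoken-warning loop on the Pi.
# Model files, the label map, and the threshold are placeholders.
import cv2
import pyttsx3

CLASSES = ["background", "person", "car", "stop sign"]  # placeholder label map

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
tts = pyttsx3.init()          # routes speech to the Pi's default audio output
cap = cv2.VideoCapture(0)     # the 1080p camera on the gimbal

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # MobileNet-SSD expects a 300x300, mean-subtracted blob
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        label_id = int(detections[0, 0, i, 1])
        if confidence > 0.5 and label_id < len(CLASSES):
            # Queue a short warning for each confident detection
            tts.say(f"{CLASSES[label_id]} ahead")

    tts.runAndWait()

cap.release()
```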

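The gimbal logic described above is also fairly small at its core: fuse the IMU's gyro and accelerometer readings into a pitch estimate, then command the servo to cancel that pitch. Our version runs on the Arduino, but the math looks roughly like the sketch below (the filter weight, neutral angle, and sample values are illustrative, not our tuned numbers).

```python
# Sketch of the camera-stabilization math: a complementary filter fuses the
# IMU's gyro and accelerometer into a pitch angle, and the servo is driven
# to cancel that angle. Constants and inputs here are stand-in values.
import math

ALPHA = 0.98          # complementary-filter weight on the gyro term
SERVO_NEUTRAL = 90.0  # servo angle (degrees) when the cane is vertical

def fuse_pitch(prev_pitch, gyro_rate, accel_y, accel_z, dt):
    """Blend the integrated gyro rate with the accelerometer's gravity angle."""
    accel_pitch = math.degrees(math.atan2(accel_y, accel_z))
    return ALPHA * (prev_pitch + gyro_rate * dt) + (1 - ALPHA) * accel_pitch

def servo_command(pitch):
    """Drive the servo opposite to the cane's pitch so the camera stays level."""
    return max(0.0, min(180.0, SERVO_NEUTRAL - pitch))

# Example of one control step: cane tilted forward by roughly 8 degrees
pitch = fuse_pitch(prev_pitch=8.0, gyro_rate=20.0,
                   accel_y=0.17, accel_z=0.98, dt=0.01)
print(round(servo_command(pitch), 1))  # servo angle to send, around 82 degrees
```
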
Challenges we ran into

The biggest challenge we ran into was our choice to use a Raspberry Pi to power all of the computer vision and live streaming. Not only had we never programmed a Pi before, but we quickly found out that it was very, very slow. We originally worked with Amazon's AWS Rekognition API to better detect surroundings, but the board wouldn't stay under 85-90 °C with it running on top of everything else. We ended up having to make a lot of compromises in our CV software because of the hardware limitations, so in the future we plan to use a more powerful machine, like an NVIDIA Jetson Nano. Another big challenge was successfully creating a live video stream across multiple platforms (Python, Swift, HTML/CSS/JS). Although we had worked a little with these languages in the past, we struggled a lot getting each individual portion to work on its own, even before we tried to connect everything.
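
For reference, the Python side of that Twilio handshake turned out to be the smallest piece of the puzzle; most of the struggle was on the Swift and browser ends. A minimal sketch of minting a Video room access token with the standard twilio helper library (the credentials, identity, and room name below are placeholders) looks like this:

```python
# Sketch of generating a Twilio Video access token on the Pi so the Swift app
# and the Ngrok-served webpage can join the same room. The environment
# variable names and identity are placeholders, not our real configuration.
import os
from twilio.jwt.access_token import AccessToken
from twilio.jwt.access_token.grants import VideoGrant

def make_cane_token(room_name="mirage-cane"):
    token = AccessToken(
        os.environ["TWILIO_ACCOUNT_SID"],    # placeholder credentials
        os.environ["TWILIO_API_KEY_SID"],
        os.environ["TWILIO_API_KEY_SECRET"],
        identity="mirage-module",            # who this participant is
    )
    token.add_grant(VideoGrant(room=room_name))
    return token.to_jwt()  # JWT handed to whichever client joins the stream

if __name__ == "__main__":
    print(make_cane_token())
```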

Accomplishments that we're proud of

One of the things we are most proud of from this hackathon is how much technology we used that we had never touched before, while still finishing our work in a timely fashion. We both agree that of the few hackathons we have been to, we definitely learned the most at PennApps, and maybe that is thanks to the 36 hours of hacking time. In the past we were known for cutting it close and finishing right at the end, but here we had some extra wiggle room to make modifications and really test everything toward the end.

What we learned

Working on Mirage with Python, OpenCV, and the Raspberry Pi taught us a lot about the whole world of computer vision. We also integrated several APIs throughout this hack, something we had never really done before; after seeing how useful they are, they will certainly make their way into every hack we create in the future. We also learned a lot about Swift, as this was only our second time using Xcode to build a mobile app, and we now feel much more comfortable with the environment and its powerful features. Lastly, our ambition to incorporate so many things into this project taught us a great deal across many different areas this weekend, which is certainly our most valuable takeaway.

What's next for Mirage

First off, we plan to swap out the Raspberry Pi for a much more powerful alternative so our computer vision and live streaming can work seamlessly, without lag and delay. We also plan to improve our voice feedback through the Apple AirPods. Right now, we're like Siri. One day, we'll be like Jarvis.
