Inspiration

We didn't arrive with a fixed idea, but we knew we wanted to improve the community we live in, and in particular the lives of people with disabilities. We decided to focus on the visually impaired and tackle some of the challenges they face. We got in touch with a friend from our school who is partially sighted, and he walked us through the problems he runs into while navigating. One of the biggest is simply knowing what's in front of him: is it a trash can, a soda bottle, a fireplace? We set out to solve that problem.

What it does

We built a mobile app that tells the user whatever is in front of them. It works in real time, so it's fast and accessible. We also built a hardware device that does a similar job and can additionally read text in images in front of the user. The device offloads processing to the cloud, which returns a set of tags describing the scene ahead, and it sends those tags along with the image to the user as a text.

How we built it

We assembled the hardware device from a Raspberry Pi, a breadboard, a button, and a camera. We connected it to AWS for compute and to the Google Cloud Vision API for image recognition and OCR. On the mobile side, we used Xcode and Swift to build an iOS app that uses Core ML and Amazon Rekognition.
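To give a sense of the iOS pipeline, here's a minimal sketch of the on-device classification path. It assumes a MobileNetV2 .mlmodel bundled in the Xcode project (Xcode generates the `MobileNetV2` class from it); the model and thresholds we actually shipped may differ.

```swift
import AVFoundation
import CoreML
import Vision

// Sketch of the app's object-recognition path, assuming a bundled
// MobileNetV2 Core ML model (an assumption for illustration).
final class FrameClassifier {
    private let synthesizer = AVSpeechSynthesizer()
    private let model: VNCoreMLModel

    init() throws {
        model = try VNCoreMLModel(for: MobileNetV2(configuration: MLModelConfiguration()).model)
    }

    /// Classifies one camera frame and speaks the top label aloud.
    func classify(_ pixelBuffer: CVPixelBuffer) {
        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first,
                  top.confidence > 0.5 else { return }
            // Voice feedback: tell the user what the camera sees.
            self.synthesizer.speak(AVSpeechUtterance(string: top.identifier))
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        try? handler.perform([request])   // Vision runs the handler synchronously
    }
}
```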

Challenges we ran into

On the iOS app, we struggled to process objects quickly enough to keep results in sync with the current frame of the video stream. We also ran into Xcode code-signing problems and Swift 3/4 compatibility issues. Finally, the Raspberry Pi wasn't powerful enough to run the recognition pipeline locally, so we worked around it by moving almost all of the processing to the cloud.
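The frame-sync fix boiled down to never letting frames queue up: if a frame arrives while the previous one is still being classified, drop it, so the spoken result always matches what the camera currently sees. A rough sketch of that setup, reusing the FrameClassifier above (camera input wiring omitted):

```swift
import AVFoundation

// Sketch of the frame-sync workaround: a serial delegate queue plus
// alwaysDiscardsLateVideoFrames means frames arriving while a
// classification is still running are dropped, not queued, so results
// never lag behind the live video.
final class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let classifier: FrameClassifier

    init(classifier: FrameClassifier) {
        self.classifier = classifier
        super.init()
        output.alwaysDiscardsLateVideoFrames = true   // the key line
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(output) { session.addOutput(output) }
        // (Adding the camera device input is omitted for brevity.)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        classifier.classify(frame)   // runs synchronously on the serial queue
    }
}
```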

Accomplishments that we're proud of

We're proud that we ended up with a functional toolkit of two devices that genuinely helps the visually impaired navigate the world:

  1. Object recognition that alerts the user to what's in front of them.
  2. Image recognition and OCR that read text in front of the user, from a picture or a wall (see the OCR sketch below).
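The hardware device does its OCR by sending the photo to the Google Cloud Vision API. For illustration only, here's a hedged stand-in showing how the same text-reading capability could be done on-device with Apple's Vision framework (iOS 13+); this is not the exact code we ran.

```swift
import UIKit
import Vision

// Stand-in for the OCR step: the shipped device calls Google Cloud Vision
// over the network, while this sketch uses Apple's on-device recognizer.
func readText(in image: UIImage, completion: @escaping (String) -> Void) {
    guard let cgImage = image.cgImage else { return }
    let request = VNRecognizeTextRequest { request, _ in
        // Join the best candidate from each detected line into one string.
        let lines = (request.results as? [VNRecognizedTextObservation])?
            .compactMap { $0.topCandidates(1).first?.string } ?? []
        completion(lines.joined(separator: " "))
    }
    request.recognitionLevel = .accurate   // favor accuracy over speed
    DispatchQueue.global(qos: .userInitiated).async {
        try? VNImageRequestHandler(cgImage: cgImage).perform([request])
    }
}
```

The returned string can then be spoken with AVSpeechSynthesizer, the same way the app speaks object labels.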

What we learned

We learned a lot about machine learning and about embedded devices like the Raspberry Pi and what they're capable of. We also learned how to take full advantage of Core ML on iOS and of Xcode/Swift development.

What's next for EyeSee

We want to turn this into a professional product. This is just a prototype, so it doesn't look polished yet and several things can be tweaked. What makes it marketable is the design criteria: simple, cheap, and efficient. The hardware costs at most $50, a fraction of the price of products currently on the market, and the iOS app is free for anyone to use. We hope to publish the app on the App Store and add Android compatibility, because a toolkit like this would be a big help to the visually impaired community.

Built With

Amazon Rekognition, AWS, Core ML, Google Cloud Vision, Raspberry Pi, Swift, Xcode
