Inspiration
How do we leverage the power of machine learning to help people with disabilities? We built Aeye.
What it does
Aeye is a prototype system that uses image processing and machine learning to describe the user's surroundings and narrate that description as audio.
How we built it
We used TensorFlow, OpenCV (cv2), and the COCO dataset to recognize objects in a video stream and send the results through a WebSocket server to an iOS application, which then informs the user of their surroundings through audio.
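The server side can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the detector function, model details, and message format are assumptions; only the JSON encoding step is concrete.

```python
import json

def detections_to_message(detections, min_score=0.5):
    """Keep confident detections and encode them as a JSON payload
    the iOS client can turn into speech.

    `detections` is assumed to be a list of dicts like
    {"label": "person", "score": 0.92} produced by a COCO-trained model.
    """
    labels = [d["label"] for d in detections if d["score"] >= min_score]
    return json.dumps({"objects": labels})

# The capture-and-serve loop would look roughly like this (not run here;
# `run_model` is a hypothetical wrapper around the TensorFlow detector):
#
# import cv2, asyncio, websockets
#
# async def serve(websocket):
#     cap = cv2.VideoCapture(0)                # default camera
#     while cap.isOpened():
#         ok, frame = cap.read()
#         if not ok:
#             break
#         detections = run_model(frame)        # COCO-trained detector (assumed)
#         await websocket.send(detections_to_message(detections))
```

Filtering by a confidence threshold before sending keeps the audio from narrating spurious low-score detections.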
Challenges we ran into
We attempted to build our frozen inference graph to detect custom objects, but saw that it was impossible within given time limits. So we decided to use a pre-built model.
We also had to find a way to stream the server's output to the iOS app with minimal latency, and to have the app handle the incoming data correctly.
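On the client side, each incoming message has to be turned into a sentence the app can speak. A hedged sketch of that step (the real app is built with React Native; this Python version and its message shape are assumptions for illustration):

```python
import json

def message_to_sentence(raw):
    """Convert a JSON detection message, e.g. '{"objects": ["person"]}',
    into a sentence suitable for text-to-speech."""
    objects = json.loads(raw).get("objects", [])
    if not objects:
        return "Nothing detected nearby."
    if len(objects) == 1:
        return f"I see a {objects[0]}."
    # Join multiple labels into one spoken sentence.
    return "I see a " + ", a ".join(objects[:-1]) + f", and a {objects[-1]}."
```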
Accomplishments that we're proud of
We are proud of having used technology for good: building a prototype that could help thousands of people with disabilities, and working as a team, each of us bringing a different skill set.
What we learned
We learned how to build applications that involve machine learning, image processing, and web sockets.
What's next for Aeye
Aeye is a small example of what technology can accomplish to improve many people's lives. Using the same tools, it can expand to more accurate datasets that detect many more objects, and it could run on a much smaller computer with added sensors, making it portable, practical, and better at describing your surroundings.
Built With
- node.js
- opencv
- python
- react-native
- tensorflow
- web-sockets