Inspiration
We were inspired by the idea of improving the quality of life of people who are unable to communicate due to loss of movement, such as in ALS, where the condition can prevent any vocal or physical communication other than eye movement. We wanted to give these people a voice.
What it does
Our hack translates blinks and eye movements into speech and text, and can also perform basic automated tasks such as sending a help notification to a carer. Our ultimate goal was to create an inexpensive communication mechanism for people with the disabilities mentioned above. Android devices are now very affordable, so building an Android app seemed like the obvious choice.
How we built it
Our hack is built on top of open-source eye-tracking software. We wrote a Python wrapper to interface with this program: images are uploaded to a server and then analysed by the open-source library, which produces a JSON file containing the corner points of key facial features such as the eyes. Our Python script interprets this output to determine whether the eyes are blinking or looking in a particular direction. From these results, a sequence of eye movements can be built up and interpreted as a phrase or command. An Android app was created to send a video feed to the Python wrapper for analysis.
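The interpretation step in the wrapper can be sketched roughly as below. This is a minimal illustration, not our actual code: every field name (left_corner, pupil, and so on), the thresholds, and the phrase table are assumptions for the sake of the example, since the real JSON schema comes from the eye-tracking library.

```python
import json

# Assumed thresholds; real values would be tuned per camera and face.
BLINK_RATIO = 0.2   # eye-aspect ratio below this counts as a blink
GAZE_OFFSET = 3     # pixels the pupil must sit off-centre to count as a look

# Illustrative mapping from a short eye-movement sequence to a phrase.
PHRASES = {
    ("blink", "blink"): "help",
    ("left", "blink"): "yes",
    ("right", "blink"): "no",
}

def eye_aspect_ratio(eye):
    """Ratio of lid opening to eye width for one eye's landmark points."""
    width = abs(eye["right_corner"][0] - eye["left_corner"][0])
    height = abs(eye["lower_lid"][1] - eye["upper_lid"][1])
    return height / width if width else 0.0

def classify_frame(frame_json):
    """Map one frame of landmark JSON to 'blink', 'left', 'right' or 'centre'."""
    frame = json.loads(frame_json)
    left, right = frame["left_eye"], frame["right_eye"]
    if (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2 < BLINK_RATIO:
        return "blink"
    # Gaze direction: compare the pupil's x position with the eye's midpoint.
    mid = (left["left_corner"][0] + left["right_corner"][0]) / 2
    offset = left["pupil"][0] - mid
    if offset < -GAZE_OFFSET:
        return "left"
    if offset > GAZE_OFFSET:
        return "right"
    return "centre"

def decode(sequence):
    """Turn a recognised sequence of eye movements into a phrase, if any."""
    return PHRASES.get(tuple(sequence), "")
```

A frame classified per image like this can be accumulated into a sequence and looked up in the phrase table, which is how the "sequence of eye movements" becomes a spoken phrase or command.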
Challenges we ran into
We found integration to be the most difficult stage, followed closely by the struggle of compiling the eye-tracking library.
What's next for Eye2Action
Given a bit more time, it would be great to bring the individual parts together and complete our original goal of helping others. We would also love to remove the Python layer and do all of the processing within the Android app; however, this would require more effort, particularly getting the open-source image-processing library to compile correctly on mobile.