Inspiration
We wanted to make working at a computer easier for people with ALS, who may lose the ability to use a standard keyboard or mouse.
What it does
It tracks the user's eye movement and maps their gaze onto a virtual on-screen keyboard, letting them type letters just by looking at the keys. A rough sketch of the gaze-to-key mapping idea is below.
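Here is a minimal sketch of how a predicted gaze point could be mapped to a key. The 4x8 board layout and the normalized-coordinate convention are illustrative assumptions, not the project's exact layout:

    # Minimal sketch: map a predicted gaze point to a key on a virtual board.
    # The 4x8 grid and normalized (x, y) in [0, 1) are assumptions for
    # illustration, not the exact layout used in the project.

    BOARD = [
        "ABCDEFGH",
        "IJKLMNOP",
        "QRSTUVWX",
        "YZ_.,!? ",  # '_' stands in for space
    ]

    def gaze_to_key(x: float, y: float) -> str:
        """Return the character on the board under a normalized gaze point."""
        row = min(int(y * len(BOARD)), len(BOARD) - 1)
        col = min(int(x * len(BOARD[0])), len(BOARD[0]) - 1)
        return BOARD[row][col]

    # e.g. a gaze at the upper-left corner selects 'A'
    assert gaze_to_key(0.05, 0.05) == "A"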
How we built it
We built it with convolutional neural networks closely related to the LeNet architecture. A lot of pre- and post-processing was needed to combine the feeds from the three cameras and to route the predicted text to a working speech generator. The model was trained on an AWS EC2 instance.
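As a rough sketch of what a LeNet-style gaze regressor could look like, here is one in PyTorch. The framework choice, input resolution, layer sizes, and the idea of stacking the three camera frames as input channels are all assumptions; the project only states that the network was "closely related to the LeNet architecture":

    # Sketch of a LeNet-style gaze regressor, assuming PyTorch and that the
    # three camera frames are stacked as input channels. Layer sizes, the
    # 64x64 input resolution, and the (x, y) regression head are assumptions.
    import torch
    import torch.nn as nn

    class GazeLeNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 6, kernel_size=5),   # 3 grayscale frames, one per camera
                nn.ReLU(),
                nn.MaxPool2d(2),                  # 60x60 -> 30x30
                nn.Conv2d(6, 16, kernel_size=5),
                nn.ReLU(),
                nn.MaxPool2d(2),                  # 26x26 -> 13x13
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * 13 * 13, 120),     # sized for 64x64 inputs
                nn.ReLU(),
                nn.Linear(120, 84),
                nn.ReLU(),
                nn.Linear(84, 2),                 # normalized (x, y) gaze point
            )

        def forward(self, x):
            return self.head(self.features(x))

    model = GazeLeNet()
    frames = torch.randn(1, 3, 64, 64)  # one batch of stacked camera frames
    print(model(frames).shape)          # torch.Size([1, 2])

On the output side, an off-the-shelf text-to-speech library could voice the typed letters; the project's actual speech backend isn't specified here.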
Challenges we ran into
We did not have enough time to collect sufficient training data, which made this more of a proof of concept than a fully working prototype.
Accomplishments that we're proud of
The final product looks amazing, and the code architecture works swimmingly.
What we learned
Collecting enough training data is very hard. We also learned a lot about building and training neural networks.
What's next for ALS Keyboard with Neural Networks
We plan to gather more training data and turn the proof of concept into a fully working prototype.