Inspiration: While deciding on topics at the Ideation Clinic, we found that most of our team was drawn to biotechnology-based solutions. While researching prevalent problems in the health industry, we became interested in the case of people who suffer from ALS. In most cases of ALS, administering medication, which we had assumed would be the major pain point, is not actually difficult. Rather, communication, especially in more advanced stages of ALS, turned out to be a severe issue: as motor skills and speech decline, patients are unable to communicate without expensive brain-computer interface equipment or glorified children's "read it out loud" phrase books. We decided there had to be a better way.

What it does: Our project uses the OpenCV library to track the movement of a patient's pupils (or eye centres). Based on the direction of the gaze and its distance from the rest position, it selects letters and builds words. A predictive NLP algorithm then suggests completions, much like the predictive text on a cell phone.
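The gaze-to-selection step could be sketched roughly as follows. This is a minimal illustration, not the project's actual code: it assumes the pupil's rest position has already been calibrated, and maps the offset from that position to one of four selection zones, ignoring small movements inside a dead zone. The function name and thresholds are hypothetical.

```cpp
#include <cmath>

// Hypothetical sketch: classify the pupil's offset (dx, dy) from its
// calibrated rest position into a selection zone on the letter screen.
enum class Zone { Rest, Left, Right, Up, Down };

// deadZone: radius (in pixels) around the rest position treated as
// "no selection", so small involuntary eye movements are ignored.
// Image coordinates are assumed: y grows downward, so dy < 0 means
// the pupil moved up.
Zone classifyGaze(double dx, double dy, double deadZone = 15.0) {
    if (std::hypot(dx, dy) < deadZone) return Zone::Rest;
    // Pick the dominant axis of movement.
    if (std::abs(dx) >= std::abs(dy))
        return dx < 0 ? Zone::Left : Zone::Right;
    return dy < 0 ? Zone::Up : Zone::Down;
}
```

Each zone would then highlight a different group of letters on the selection screen, narrowing down to a single letter over successive gazes.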

How we built it: The eye-tracking component was built with OpenCV in C++; the user interface and letter-selection screens were built in HTML/CSS; and the predictive-text algorithm was written in C++, using word frequencies from a large medical dataset to rank likely completions.
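A frequency-based predictor of this kind can be sketched in a few lines. This is an illustrative simplification under assumed names: the real project builds its counts from a medical dataset, whereas here the dictionary is passed in directly, and `predict` simply returns the most frequent words that start with the typed prefix.

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch of frequency-based word prediction: rank the
// dictionary words beginning with `prefix` by their corpus frequency
// and return up to `maxResults` of them, most frequent first.
std::vector<std::string> predict(const std::map<std::string, int>& freq,
                                 const std::string& prefix,
                                 std::size_t maxResults = 3) {
    std::vector<std::pair<std::string, int>> matches;
    for (const auto& [word, count] : freq)
        if (word.compare(0, prefix.size(), prefix) == 0)  // word starts with prefix
            matches.emplace_back(word, count);
    // Sort by descending frequency so the likeliest completion comes first.
    std::sort(matches.begin(), matches.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
    std::vector<std::string> out;
    for (std::size_t i = 0; i < matches.size() && i < maxResults; ++i)
        out.push_back(matches[i].first);
    return out;
}
```

With counts like `{"patient": 80, "pain": 50, "paint": 10}`, typing "pa" would surface "patient" first, which is the behaviour we wanted from a medically weighted corpus.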

Challenges we ran into: All of us are first-time hackers, or had only attended a hackathon for the learning experience without really "hacking" anything before, so we had to learn the nuances of tools like GitHub and Stack Overflow. We also struggled with installing and configuring the various programs, libraries, and DLLs required to run OpenCV at all; simply getting OpenCV working turned out to be the hard part. The algorithm itself was not as difficult; most of our remaining issues came from integrating the different parts of the project.
