Inspiration

Watching Professor Stephen Hawking speak was an inspiration not only because of his thought-provoking, EYE-opening words, but also because it made us wonder how his equipment worked: how he spoke using sensory signals triggered by his cheek muscle movements, and how seamless it all seemed. This inspired my teammates, Venkat and Anurag, and me to build something for people living with life-altering conditions such as amyotrophic lateral sclerosis (ALS) and tetraplegia: something that helps them do the things they can't, things most other people can do easily, and helps them cross a metaphorical roadblock. Our problem statement therefore became: how can people with such conditions use technology, and laptops in particular? Coming up with a solution to a problem of such social value was not easy. We considered many complex approaches, but then realized that a problem centered on humans deserves a simple, humane, and accessible one. We asked ourselves, "What do people use most in today's world?" The answer was obvious: electronics. So we decided to build software that lets patients with ALS or tetraplegia do anything on a computer screen that anyone else can do with a mouse or keyboard.

What it does and how we built it

Other variants of this solution approach (though not for the same purpose) either rely on hardware in the form of wearables, or use eye movement to type on a virtual keyboard in a way that is not intuitive. We aim to make using technology intuitive and easy for the target audience described above. How do we make that possible? We built software that uses the eye aspect ratio to distinguish fixations (fixed gazes) from saccades (eye movements). These movements are tracked with computer vision (by distinguishing the eye, the pupil, and the fovea), and the cursor moves to wherever the user is pointing their eyes. Eye movement thus acts as the mouse input, and clicking is performed by blinking: a left-eye blink produces a left click, and a right-eye blink opens the drop-down context menu with "Refresh" and other options, just like on a real computer. The EYE mouse can therefore click on icons and perform any manipulation we want. For example, to watch a video on YouTube, we move our eyes to the Firefox icon and left-blink to open the browser, move our gaze to the search bar and left-blink on it, left-blink on the on-screen keyboard icon in the Windows taskbar, type "youtube" by focusing on each letter and left-blinking, left-blink on Enter, left-blink on the link that appears, left-blink on the search bar, and then type out the name of the video, and so on. Before any manipulation starts, each new user goes through an automated calibration process that helps the software learn the user's horizontal visual range.
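As a rough illustration of the blink-to-click mechanism, here is a minimal sketch using dlib's 68-point facial landmarks, OpenCV, and pyautogui; the landmark model file, the threshold value, and the library choices are assumptions for illustration, not our exact implementation.

```python
import cv2
import dlib
import pyautogui
from scipy.spatial import distance as dist

LEFT_EYE = range(36, 42)    # dlib landmark indices for one eye
RIGHT_EYE = range(42, 48)   # dlib landmark indices for the other eye
EAR_THRESHOLD = 0.21        # below this, we treat the eye as closed

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply on a blink
    a = dist.euclidean(pts[1], pts[5])
    b = dist.euclidean(pts[2], pts[4])
    c = dist.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        left_ear = eye_aspect_ratio([pts[i] for i in LEFT_EYE])
        right_ear = eye_aspect_ratio([pts[i] for i in RIGHT_EYE])
        # A wink on one side only is treated as a deliberate click
        if left_ear < EAR_THRESHOLD <= right_ear:
            pyautogui.click(button="left")
        elif right_ear < EAR_THRESHOLD <= left_ear:
            pyautogui.click(button="right")
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break
cap.release()
```

Requiring one eye to stay open while the other blinks is what lets a wink act as a deliberate click rather than a natural two-eyed blink.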

Challenges we ran into

We faced a number of significant challenges along the way. There are five main types of eye movement; of these, saccades are the jerky motions, while movements such as vestibular and optokinetic movements cannot be detected. Moving the cursor up and down with the eyeballs is very difficult because vertical motion combines a microsaccadic movement (owing to its small angular range) with a vestibular movement (which is hard to detect). We also ran into issues with the automated calibration and with finding ranges using the webcam, which can be inaccurate. Furthermore, computing the eye gaze ratio is a challenge because the values fluctuate considerably, but we mitigated this by averaging the values before acting on them. Lastly, working with the eye in OpenCV is a challenging task because of the precision and accuracy it demands.
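To give a flavor of the averaging trick, here is a minimal sketch of a gaze-ratio function plus a rolling-mean smoother; the threshold value, window size, and names are illustrative assumptions rather than our exact code.

```python
from collections import deque

import cv2

def gaze_ratio(eye_gray, thresh_val=70):
    """Ratio of white pixels in the left vs right half of a thresholded,
    cropped grayscale eye image; values near 1 suggest a centered gaze."""
    _, binary = cv2.threshold(eye_gray, thresh_val, 255, cv2.THRESH_BINARY)
    _, w = binary.shape
    left_white = cv2.countNonZero(binary[:, : w // 2])
    right_white = cv2.countNonZero(binary[:, w // 2 :])
    return (left_white + 1) / (right_white + 1)  # +1 avoids division by zero

class GazeSmoother:
    """Rolling mean over the last few ratios to damp frame-to-frame noise."""
    def __init__(self, window=10):
        self.samples = deque(maxlen=window)

    def update(self, ratio):
        self.samples.append(ratio)
        return sum(self.samples) / len(self.samples)
```

A larger window gives a steadier cursor at the cost of responsiveness, which is the trade-off we kept running into.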

Accomplishments that we're proud of

We successfully implemented horizontal movement and manipulation for the project, and this is the first time we have attempted a project of such social impact. The blink-based manipulation shows that the target audience can realistically use this to operate a normal laptop. Also, because of their conditions, these users hold a fairly fixed position, which mitigates the discrepancies we sometimes encounter during demonstrations.
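For context, the horizontal movement boils down to mapping the calibrated gaze-ratio range onto the screen width. The sketch below shows one plausible linear mapping; ratio_min, ratio_max, and the example values are hypothetical.

```python
import pyautogui

def ratio_to_cursor_x(ratio, ratio_min, ratio_max):
    """Clamp the smoothed gaze ratio to the calibrated range, then
    interpolate linearly across the full screen width."""
    screen_w, _ = pyautogui.size()
    t = (ratio - ratio_min) / (ratio_max - ratio_min)
    t = min(max(t, 0.0), 1.0)
    return int(t * (screen_w - 1))

# Hypothetical usage: move horizontally, keeping the current y position
# x = ratio_to_cursor_x(smoothed_ratio, ratio_min=0.6, ratio_max=1.8)
# pyautogui.moveTo(x, pyautogui.position().y)
```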

What we learned

We learned a great deal about the OpenCV framework and its computer vision functions.

What's next for i2i - Project iNavigator

We see a lot of potential for this project in the future: implementing vertical movement and manipulation, incorporating more features, and distinguishing voluntary from involuntary eye actions. It can become a great product for the target audience and bridge their way to using technology the way we all do.

Built With

opencv