Inspiration

The internet should be a place where everyone can participate. Unfortunately, many disabilities make accessing the internet harder than it should be. Plenty of developers care about building things with voice recognition. But who cares about the people who can't use their voice or their hands? We care. They matter.

What it does

Iris Assistant provides a solution for people who cannot use a physical mouse in the usual way. Through machine learning, computer vision and image processing, anyone with working eyes can control the mouse pointer just by moving their iris.

How we built it

The first step is to train the detector with a face database so it recognises the user's face; a blue square is drawn around the leftmost face detected in front of the camera. Next, using an eye database, we locate the user's eyes and mark the position of the left eye. Finally, we find the iris within the eye, as shown in the image above.
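
A minimal sketch of this detection stage is shown below, assuming OpenCV's bundled Haar cascades for faces and eyes; the cascade files, parameters and the "leftmost detection" choices are illustrative and not necessarily the exact ones Iris Assistant uses.

```python
import cv2

# Pre-trained Haar cascades shipped with OpenCV (assumed, not the project's own models).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eye(frame):
    """Return (face_rect, eye_rect) for the leftmost detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Leftmost face = smallest x coordinate; mark it with a blue square.
    x, y, w, h = min(faces, key=lambda r: r[0])
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

    # Search for eyes only inside the face region, then keep one of them.
    roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi)
    if len(eyes) == 0:
        return (x, y, w, h), None
    ex, ey, ew, eh = min(eyes, key=lambda r: r[0])
    return (x, y, w, h), (x + ex, y + ey, ew, eh)
```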

The next step is to compute the coordinates of the centre of the eye and the coordinates of the iris. To move the cursor, the user simply looks in the direction they want it to go. Our program computes the difference between the iris coordinates and the eye-centre coordinates and uses it to decide how the cursor moves; a sketch of this step follows.
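
The sketch below illustrates one way to do this step: the iris is approximated by the centroid of the darkest blob in the eye region, and the iris-to-centre offset drives a relative cursor move. The threshold, dead zone and step size are made-up values, not the ones Iris Assistant actually uses.

```python
import cv2
import numpy as np
import pyautogui

def move_cursor_from_eye(gray_frame, eye_rect, dead_zone=5, step=15):
    ex, ey, ew, eh = eye_rect
    eye = gray_frame[ey:ey + eh, ex:ex + ew]
    eye_center = (ew // 2, eh // 2)

    # The iris/pupil is the darkest region of the eye: threshold it and take
    # the centroid of the largest dark contour as the iris position.
    _, dark = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return
    iris = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

    # The difference between the iris and the eye centre decides the move.
    dx = iris[0] - eye_center[0]
    dy = iris[1] - eye_center[1]
    if abs(dx) > dead_zone or abs(dy) > dead_zone:
        pyautogui.moveRel(int(np.sign(dx)) * step, int(np.sign(dy)) * step)
```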

For clicking we used functions from the pyautogui library. When the eye is not detected for 3 seconds while a face is still detected, we perform a click at the current cursor position. To click with the right button, right-click mode has to be activated by clicking the rightmost point of the screen; after that, all clicks are right-button clicks. Clicking the leftmost point of the screen returns to the default (left-click) mode. In left-click mode, a double click happens when the eye is not detected for 5 seconds while the face is still detected.
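
The following is a simplified sketch of that blink-driven clicking logic; the class, helper names and exact state handling are our own assumptions, and the edge-of-screen mode toggle is approximated by checking the current cursor position when the blink-click fires.

```python
import time
import pyautogui

screen_w, _ = pyautogui.size()

class ClickController:
    def __init__(self):
        self.right_mode = False      # False = default left-click mode
        self.eye_lost_since = None   # when the eye stopped being detected
        self.clicks_done = 0         # clicks performed during this eye closure

    def update(self, face_detected, eye_detected):
        # Reset the timer whenever the eye is visible or the face is lost.
        if not face_detected or eye_detected:
            self.eye_lost_since = None
            self.clicks_done = 0
            return
        if self.eye_lost_since is None:
            self.eye_lost_since = time.time()
            return

        lost_for = time.time() - self.eye_lost_since
        x, _ = pyautogui.position()

        if self.clicks_done == 0 and lost_for >= 3:
            if x >= screen_w - 1:          # rightmost point: enable right-click mode
                self.right_mode = True
            elif x <= 0:                   # leftmost point: back to default mode
                self.right_mode = False
            elif self.right_mode:
                pyautogui.click(button="right")
            else:
                pyautogui.click()          # default: left click
            self.clicks_done = 1
        elif not self.right_mode and self.clicks_done == 1 and lost_for >= 5:
            pyautogui.click()              # second click completes the double click
            self.clicks_done = 2
```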

Challenges we ran into

At first we did the image processing in Matlab. However, we ran into difficulties both when refining eye recognition and when trying to call our Matlab image processing from Python.

As is usual at hackathons, time was scarce, and meeting the deadline was a challenge.

Accomplishments that we're proud of

We built something we find difficult in very little time. We had no previous knowledge of image processing in Python; no one on the team had worked on it before. We have learnt a lot about image processing in both Matlab and Python.

What we learned

We have learned that Matlab is not the best language when it comes to connectivity with other platforms. We have also learned to program with Python's OpenCV library and within Windows.

What's next for Iris Assistant

We intend to keep improving Iris Assistant. Our first priority is to improve the stability of iris recognition and tracking. We also want to try working with higher-resolution cameras.

Built With
