Inspiration

With the ever-rising number of computer users, the accessibility of computers is still, to a great extent, limited to traditional keyboard input. As a result, using a computer is either quite expensive or an unpleasant experience for people with disabilities. We wanted to build an application that uses gestures to perform tasks without any expensive additional hardware.

What It Does

The application uses your eye gestures (more specifically, your blink gesture) to perform a task on the computer. To demonstrate the feature, we built a simple Flappy Bird-style game called TonyHops. A user can control the character's motion just by blinking, which makes it great for people who are unable to use their arms. Besides our game, IDetector also works great with the T-Rex game from Google.

How I Built It

Our application uses Python's OpenCV library to capture a live feed from the user's webcam. We then use the dlib library to locate the user's face, together with a shape-predictor model file provided by dlib, to accurately track the landmarks around the user's eyes. By measuring the ratio of each eye's height to its width (the eye aspect ratio), our program detects when the user blinks. On detecting a blink, the program performs a key press (in our case, the space bar), which triggers the desired action. The key press can be remapped to suit the user's needs.
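
As a rough sketch of that pipeline (the predictor file name, the 0.21 threshold, and the use of pyautogui for the key press are illustrative assumptions here, not necessarily what we shipped):

```python
import cv2
import dlib
import pyautogui
from math import dist  # Python 3.8+

# Illustrative values -- the model file path and threshold are assumptions.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # dlib's 68-point model
EAR_THRESHOLD = 0.21          # eye treated as closed below this ratio
KEY = "space"                 # remappable to suit the user

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def eye_aspect_ratio(p):
    """Eye aspect ratio from the six landmarks of one eye:
    mean vertical opening divided by horizontal width."""
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

cap = cv2.VideoCapture(0)
eye_closed = False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        # In the 68-point model the eyes are landmarks 36-41 and 42-47.
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
        if ear < EAR_THRESHOLD and not eye_closed:
            eye_closed = True
            pyautogui.press(KEY)   # one key press per blink
        elif ear >= EAR_THRESHOLD:
            eye_closed = False
    cv2.imshow("IDetector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```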

Challenges I ran into

I think the biggest challenge for us was getting multiprocessing to work in Python. Playing the game requires reading the webcam feed and running the game loop at the same time, so we needed to run both in parallel. We ran into a lot of errors while trying to do this, which cost our team a lot of valuable time.
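
The pattern we eventually landed on looks roughly like this: one process produces blink events, the main process consumes them in the game loop. The detection loop is stubbed with a timer here so the sketch runs on its own; in the real program it is the OpenCV/dlib loop above.

```python
import time
from multiprocessing import Process, Queue

def blink_detector(events: Queue) -> None:
    """Producer: would run the webcam/dlib loop and report each blink.
    Stubbed with a timer here so the sketch runs stand-alone."""
    while True:
        time.sleep(2)            # replace with the per-frame EAR check
        events.put("blink")

def game_loop(events: Queue) -> None:
    """Consumer: polls for blink events without blocking the frame loop."""
    while True:
        while not events.empty():
            events.get_nowait()
            print("hop!")        # e.g. make the TonyHops character jump
        time.sleep(1 / 60)       # roughly one game frame at 60 fps

if __name__ == "__main__":       # guard required by multiprocessing on Windows/macOS
    events: Queue = Queue()
    Process(target=blink_detector, args=(events,), daemon=True).start()
    game_loop(events)            # game stays in the main process (GUI-friendly)
```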

Accomplishments that I'm proud of

This was the first hackathon for our team. We had little idea of what we would build before we got here. During the first night, our ideas were all over the place and we feared we would not even complete the challenge. But all three of us were dedicated to making it work, so we brainstormed, poured everything we had into it, and seeing the product now is truly satisfying.

What I learned

This project was heavily dependent on the Python OpenCV and dlib libraries. We got to learn a lot about these powerful tools that Python offers for image manipulation, detection, and much more. We also got into simple game development with Python, which was a fun experience for everybody. Using the Tkinter library to build a GUI in Python was also a new experience for us.
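
As a taste of how little Tkinter it takes to get a game window going, here is a hypothetical, stripped-down hop mechanic (not our actual TonyHops code):

```python
import tkinter as tk

# A canvas redraws the character each frame; space (or a blink) makes it hop.
root = tk.Tk()
root.title("TonyHops sketch")
canvas = tk.Canvas(root, width=400, height=300, bg="sky blue")
canvas.pack()
bird = canvas.create_oval(50, 150, 80, 180, fill="orange")
velocity = 0.0

def flap(_event=None):
    global velocity
    velocity = -6.0              # upward impulse on each space press / blink

def tick():
    global velocity
    velocity += 0.5              # gravity pulls the character back down
    canvas.move(bird, 0, velocity)
    root.after(16, tick)         # ~60 fps game loop

root.bind("<space>", flap)
tick()
root.mainloop()
```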

What's Next for IDetector

IDetector is capable of so much more with just some tiny modifications. The dlib library can detect the left eye, right eye, nose, and mouth independently, so different gestures from different parts of the face can each be mapped to their own function. With proper implementation, we could build an efficient interface that lets people with disabilities easily navigate any function on a computer.
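
For example, dlib's standard 68-point model groups landmarks by facial region, so each part of the face can be read out separately (index ranges follow the usual 68-point convention; `region_points` is a hypothetical helper):

```python
# Landmark index ranges in dlib's standard 68-point model.
FACIAL_REGIONS = {
    "right_eye": range(36, 42),
    "left_eye":  range(42, 48),
    "nose":      range(27, 36),
    "mouth":     range(48, 68),
}

def region_points(shape, region):
    """Hypothetical helper: (x, y) landmarks for one region of a dlib shape."""
    return [(shape.part(i).x, shape.part(i).y) for i in FACIAL_REGIONS[region]]
```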

Built With

python, opencv, dlib, tkinter
