Inspiration

To create something original, we decided to use the provided equipment in unconventional ways. Moreover, since many problems can easily be taken care of on the software side, we wanted to solve one that could only be tackled on the hardware side, i.e., with the hardware combination we were given.

Specifically, we decided to use the webcam to create a secondary input device, an alternative to the keyboard for basic, frequently used commands. This would allow users to control their computer without touching the keyboard, which could be useful when their hands are occupied or when they are not within arm's reach of their device.

What it does and why this is a good idea

The webcam uses object tracking technology to detect hand gestures and movements and converts them into computer commands, allowing users to control their device without any physical interaction with it. Additionally, this puts common controls literally at the user's fingertips, providing less constrained, more expressive, and more intuitive interaction with the computer than ordinary input devices.

Unlike existing solutions, which rely on much more complicated hardware (depth or stereo cameras, IMUs, etc.), we use a single camera to achieve similar functionality. This makes our solution more accessible to general users.

How we built it

Our application is built upon existing computer vision libraries that track hand gestures and movements through the webcam. By distinguishing different hand gestures from the measurable parameters the detector exposes, we map low-level information, such as the centre and area of the detected palm, to high-level commands via dedicated detection algorithms; a rough sketch of this low-level layer follows.
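To make that low-level layer concrete, here is a minimal sketch in Python, assuming OpenCV with a naive skin-colour segmentation. The actual library we used exposes similar palm parameters; the function name palm_centre_and_area and the HSV thresholds below are illustrative only, not what we shipped.

```python
import cv2
import numpy as np

def palm_centre_and_area(frame):
    """Return (cx, cy, area) of the largest skin-coloured blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough skin-tone range in HSV; this would need tuning per environment.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    # Centroid and area are the low-level parameters the gesture logic consumes.
    return m["m10"] / m["m00"], m["m01"] / m["m00"], cv2.contourArea(hand)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = palm_centre_and_area(frame)
    if result:
        cx, cy, area = result
        print(f"palm centre=({cx:.0f}, {cy:.0f}) area={area:.0f}")
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```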

For a proof of concept, we built an application that sends "undo" and "redo" commands in response to simple hand movements. It works by injecting the corresponding keyboard commands whenever the captured movement matches certain patterns, as in the sketch below. Using the same concept, other applications, such as mistouch detection, finger monitoring for fingering correction, and other user-defined gesture-triggered functions, could be built.
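The pattern matching step could be sketched as follows, assuming pyautogui as the keystroke-injection backend and treating a fast horizontal swipe of the palm centre as the trigger; the thresholds and the helper on_palm_sample are hypothetical, not our exact implementation.

```python
import time
import pyautogui

SWIPE_PIXELS = 150   # minimum horizontal travel to count as a swipe
SWIPE_WINDOW = 0.5   # seconds within which the travel must happen
COOLDOWN = 1.0       # seconds to ignore input after firing a command

history = []         # recent (timestamp, x) samples of the palm centre
last_fired = 0.0

def on_palm_sample(x):
    """Feed one palm-centre x coordinate per frame; fires undo/redo on swipes."""
    global last_fired
    now = time.time()
    history.append((now, x))
    # Keep only samples inside the sliding time window.
    while history and now - history[0][0] > SWIPE_WINDOW:
        history.pop(0)
    if now - last_fired < COOLDOWN or len(history) < 2:
        return
    dx = history[-1][1] - history[0][1]
    if dx <= -SWIPE_PIXELS:       # swipe left  -> undo
        pyautogui.hotkey("ctrl", "z")
        last_fired = now
        history.clear()
    elif dx >= SWIPE_PIXELS:      # swipe right -> redo
        pyautogui.hotkey("ctrl", "y")
        last_fired = now
        history.clear()
```

The cooldown prevents one long swipe from firing the same command several times in a row, which is the kind of parameter tuning mentioned below.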

Challenges we ran into

As hand gesture detection is still an open problem in computer vision research, the software we could find and integrate was greatly limited. The small number of detectable hand parameters and the instability of their values restricted the variety of gestures we could use in our prototype. We also had limited experience designing computer vision solutions. Lastly, poor programming choices in the gesture tracking library we used made it harder to extend its functionality.

Accomplishments that we are proud of

Despite the challenges mentioned above, we were still able to build a working application that supports our idea. We also tuned the detection parameters to make the user experience as smooth as possible.

What we learned

The difficulties with the gesture tracking library highlighted how a codebase's limited design choices affect its users, and thus showed the importance of following good practices and design principles. This project also exercised our ability to work around the limitations of the available tools and to use the available data to its fullest extent for the best user experience.

What's next for our idea

Given more advanced computer vision algorithms (perhaps from the machine learning domain), the system could be improved with more complex and stable gesture detection enabling additional functionality. Stable fingertip tracking and better gesture recognition in low-contrast environments are critical to some of the features we have imagined. We also want to address more complicated hand movements, such as gestures involving circular motion of the fingers or rotation of the hands.
