AirType is an application that uses computer vision to let you type by lightly touching the keys of a keyboard instead of pressing them down. Typing this way is much quieter, making it possible to work or play games in public spaces without disturbing everyone around you with loud keyboard sounds.

Inspiration

Our team was inspired to create this project when we realized just how loud keyboards are. As coders, fast typists, and keyboard enthusiasts, we are very familiar with the loud, often annoying sounds that mechanical keyboards, and even membrane or laptop keyboards, can make. Because these sounds distract people around us in quiet settings, we would usually opt to type slower, and therefore more quietly. However, this hurt our productivity and broke our flow, whether we were working or gaming, and made doing either in public spaces unenjoyable. We decided to solve this problem by creating AirType.

What it does

AirType allows you to type by just touching the tops of the keys; you never have to press the keys all the way down. This makes typing significantly quieter than on a conventional keyboard, so you can work or play games in public spaces without disturbing everyone around you with loud keyboard sounds.

How we built it

First, we used OpenCV for computer vision so that we could detect and map the user's hands. We then attempted to build a machine learning model, a convolutional neural network, with TensorFlow, Keras, and scikit-learn, but in the end we could not get it working reliably.

Challenges we ran into

We had trouble creating and training the convolutional neural network. Often it was not sensitive enough; we eventually fixed that particular issue, but in the end we were unable to resolve all of our machine learning problems. We also had major latency issues at first, but we were able to heavily optimize both our OpenCV code and our neural network code to reduce camera and processing latency.
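To know whether an optimization actually reduced processing latency, per-frame timing along these lines can be used. The `LatencyMonitor` class and its 60-frame rolling window are hypothetical, shown only to illustrate the measurement, not our actual instrumentation:

```python
import time
from collections import deque

class LatencyMonitor:
    """Rolling average of per-frame processing latency in milliseconds.

    The 60-frame window is an arbitrary choice for this sketch
    (about one second of video at 60 fps).
    """
    def __init__(self, window=60):
        self.samples = deque(maxlen=window)

    def time_frame(self, process, frame):
        """Run `process(frame)`, record how long it took, return its result."""
        start = time.perf_counter()
        result = process(frame)
        self.samples.append((time.perf_counter() - start) * 1000.0)
        return result

    def average_ms(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0
```

Wrapping the whole detect-and-classify step in `time_frame` makes it easy to compare the average before and after each optimization.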

Accomplishments that we're proud of

We are proud that we got the OpenCV computer vision working for detecting and mapping hands and fingers.

What we learned

Our team learned how to use OpenCV, combined with image-processing techniques, to detect, isolate, and map hands and fingers, including their shapes and positions.

What's next for AirType

Our team would like to fix the remaining issues with our machine learning and extend the convolutional neural network to work with different keyboard layouts; it currently only works with the ANSI layout (the standard layout in the USA, Canada, and many other countries). We would also like to implement support for more languages, since AirType currently only supports English. In addition, we want to reduce the latency from roughly 100 milliseconds to 50 milliseconds or less, to make AirType more appealing for gaming and other use cases that benefit from low-latency keyboard input.

Finally, we are interested in polishing the project and selling it online as a product or subscription. AirType has business potential as a product used by companies: in office buildings, the sound of a hundred or more nearby workers typing on their keyboards all day can annoy and distract everyone. Deploying AirType in an office setting could improve worker productivity and time-efficiency by massively reducing keyboard noise.

Built With

OpenCV, TensorFlow, Keras, scikit-learn