Inspiration

We came across a paper written by Microsoft a couple of years ago and noticed that nobody had implemented it.

How it works

Basically, we emit a high-frequency sound from the laptop's speakers, move a hand above the speakers, and record the reflected sound with the microphone. We then measure the change in frequency and amplitude of the wave, that is, the left and right shifts of the tone's spectral peak caused by the Doppler effect, and recognize the user's gesture from those measurements, as sketched below.
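To make the pipeline concrete, here is a minimal C++ sketch of the frequency-analysis step. This is not our actual code: the 44.1 kHz sample rate, the 18 kHz pilot tone, the window length, and the 1.5x energy-ratio threshold are all illustrative assumptions, and the microphone capture is replaced by a synthesized test signal.

```cpp
// doppler_sketch.cpp - a minimal sketch of Doppler-shift detection.
// NOT the project's actual code: sample rate, pilot frequency, window
// length, and thresholds are illustrative assumptions.
#include <cmath>
#include <complex>
#include <cstddef>
#include <iostream>
#include <vector>

constexpr double kPi         = 3.14159265358979323846;
constexpr double kSampleRate = 44100.0;  // assumed capture rate
constexpr double kPilotHz    = 18000.0;  // assumed near-inaudible pilot tone
constexpr std::size_t kN     = 2048;     // analysis window length

// Magnitude of a single DFT bin, computed directly; since only a handful
// of bins around the pilot are needed, no FFT library is required here.
double binMagnitude(const std::vector<double>& x, std::size_t k) {
    std::complex<double> acc(0.0, 0.0);
    for (std::size_t n = 0; n < x.size(); ++n) {
        const double angle = -2.0 * kPi * static_cast<double>(k * n) / x.size();
        acc += x[n] * std::complex<double>(std::cos(angle), std::sin(angle));
    }
    return std::abs(acc);
}

// Compare spectral energy just above vs. just below the pilot bin.
// A hand moving toward the speakers compresses the reflected wave, so
// energy appears above the pilot; a hand moving away stretches it.
int dopplerDirection(const std::vector<double>& window) {
    const std::size_t pilotBin =
        static_cast<std::size_t>(kPilotHz * kN / kSampleRate);
    double above = 0.0, below = 0.0;
    for (std::size_t d = 2; d <= 8; ++d) {  // skip the pilot's own leakage
        above += binMagnitude(window, pilotBin + d);
        below += binMagnitude(window, pilotBin - d);
    }
    if (above > 1.5 * below) return +1;  // hand approaching
    if (below > 1.5 * above) return -1;  // hand receding
    return 0;                            // no significant motion
}

int main() {
    // Synthesize a test window: the pilot tone plus a weaker echo shifted
    // up by 120 Hz, standing in for a real microphone capture.
    std::vector<double> window(kN);
    for (std::size_t n = 0; n < kN; ++n) {
        const double t = n / kSampleRate;
        window[n] = std::sin(2.0 * kPi * kPilotHz * t)
                  + 0.3 * std::sin(2.0 * kPi * (kPilotHz + 120.0) * t);
    }
    std::cout << "direction: " << dopplerDirection(window) << "\n";  // +1
}
```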

Challenges I ran into

We had some problems configuring the Aquila library for sound processing, and a pile of them with the pattern-recognition library and the audio libraries it depends on.

Accomplishments that I'm proud of

We managed to record the "highest peak" of the wave, the point in the spectrum with the highest amplitude, and to measure its left and right shifts.
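Concretely, the measurement looks something like the sketch below: find the highest peak in the magnitude spectrum, then scan outward on each side to see how far the tone's energy extends. This is a hedged illustration rather than our actual code; the 10% amplitude threshold and the spectrum layout are assumptions.

```cpp
// Given a magnitude spectrum, find the pilot peak and measure how far the
// tone's energy extends to either side of it. A hand moving toward the
// microphone widens the right (higher-frequency) side; a hand moving away
// widens the left. The 10% cutoff is an illustrative assumption.
#include <cstddef>
#include <vector>

struct PeakShift {
    std::size_t peakBin;   // bin with the highest amplitude
    std::size_t leftBins;  // bins the tone extends below the peak
    std::size_t rightBins; // bins the tone extends above the peak
};

PeakShift measureShift(const std::vector<double>& magnitude) {
    // Locate the highest peak; the pilot tone dominates the spectrum.
    std::size_t peak = 0;
    for (std::size_t i = 1; i < magnitude.size(); ++i)
        if (magnitude[i] > magnitude[peak]) peak = i;

    // Scan outward until the amplitude falls below 10% of the peak.
    const double floor = 0.1 * magnitude[peak];
    std::size_t left = 0, right = 0;
    while (peak > left + 1 && magnitude[peak - left - 1] > floor) ++left;
    while (peak + right + 1 < magnitude.size() &&
           magnitude[peak + right + 1] > floor) ++right;

    return {peak, left, right};
}
```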

What I learned

I learned to use the SoX and Aquila libraries, picked up some audio pattern recognition, and improved my C++ coding skills.

What's next for Doppler Gesture recognition

Built With

c++, sox, aquila

