Inspiration

A love of integrating software and sensors to build projects that create assistive technology or do social good.

What it does

A home-automation application that enables multi-modal interaction, using the Kinect's gesture and speech recognition capabilities to control devices in a living space. The gesture identifies the device, while the speech command changes the device's state. The user can combine a gesture with a speech command; however, the system is also designed to work with gestures or speech commands alone, to accommodate people with physical or speech impairments.
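The fusion logic described above can be sketched in a few lines. This is an illustrative Python sketch, not the project's C# code; the names (`fuse_command`, `DEVICES`, `ACTIONS`) and the fallback behaviors for single-modality input are assumptions about how such a design could work.

```python
# Hypothetical sketch of the multi-modal fusion described above:
# a gesture identifies the target device, a speech command sets its state,
# and either modality alone still produces a usable command.

DEVICES = {"point_left": "lamp", "point_right": "fan"}   # gesture -> device
ACTIONS = {"turn on": "on", "turn off": "off"}           # phrase  -> state

def fuse_command(gesture=None, phrase=None, last_device=None):
    """Combine a recognized gesture and/or speech phrase into (device, state).

    Returns None when neither modality yields a usable command.
    """
    device = DEVICES.get(gesture) if gesture else None
    state = ACTIONS.get(phrase) if phrase else None

    if device and state:        # gesture + speech: full command
        return (device, state)
    if device:                  # gesture only: toggle the identified device
        return (device, "toggle")
    if state and last_device:   # speech only: act on the last-used device
        return (last_device, state)
    return None
```

For example, `fuse_command("point_left", "turn on")` yields `("lamp", "on")`, while a gesture alone falls back to toggling the device.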

How I built it

C#, Kinect 2.0, the Kinect SDK, the MS Speech SDK, an Arduino, and lots of love.

Challenges I ran into

Integrating gesture and speech recognition into a single app via multithreading. Background noise makes it hard to recognize words. The more gestures we trained our model to recognize, the lower the confidence of gesture recognition became, because similar gestures overlapped with one another.
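One common way to cope with overlapping gestures is to act only on the highest-confidence detection, and only when it clears a minimum threshold. The sketch below is illustrative Python, not the project's code, and the threshold value is an assumption that would be tuned empirically.

```python
# Illustrative handling of overlapping gestures: take the single
# highest-confidence detection, and reject it if it is not confident enough.

CONFIDENCE_THRESHOLD = 0.6  # assumed value; tuned empirically in practice

def best_gesture(detections, threshold=CONFIDENCE_THRESHOLD):
    """detections: dict mapping gesture name -> confidence in [0, 1].

    Returns the winning gesture name, or None if nothing clears the threshold.
    """
    if not detections:
        return None
    name, confidence = max(detections.items(), key=lambda item: item[1])
    return name if confidence >= threshold else None
```

This keeps two similar gestures from both firing: only the stronger detection wins, and ambiguous frames where neither is confident produce no command at all.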

Accomplishments that I'm proud of

Integrating gesture and speech recognition into a single app.

What I learned

It's really hard to integrate gesture and speech recognition into a single app. Integrating an Arduino board with the Kinect. Carefully selecting non-overlapping gestures. Designing data sets for gesture training.
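The Kinect-to-Arduino integration mentioned above typically means encoding each recognized command as a short line of text sent over the Arduino's USB serial port, which a sketch on the board parses to drive a relay. The framing below (`<pin>:<state>\n`), the pin assignments, and the function name are assumptions for illustration, not the project's actual protocol.

```python
# Hypothetical encoding of a (device, state) command for the serial link
# between the PC running the Kinect app and the Arduino.

DEVICE_PINS = {"lamp": 7, "fan": 8}  # illustrative pin assignments

def encode_command(device, state):
    """Encode a command as one newline-terminated ASCII line for the port."""
    if device not in DEVICE_PINS or state not in ("on", "off", "toggle"):
        raise ValueError(f"unknown command: {device} {state}")
    return f"{DEVICE_PINS[device]}:{state}\n".encode("ascii")
```

In the real app, these bytes would then be written to the Arduino's serial connection, where a small sketch reads a line, splits on `:`, and switches the named pin.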

What's next for KinectEd Living

We've demoed a limited set of voice-gesture combinations triggering basic actuators. In the future, more gestures and commands can be added and a myriad of actuators can be triggered; the possibilities are endless.
