Using a computer is clunky and unintuitive. We wanted to change that by letting people manipulate the computer with natural gestures and voice commands.

What it does

Using gestures and voice activation, we inject commands and keyboard events into application windows to manipulate data and applications.
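A minimal sketch of the command-dispatch step: mapping a recognized gesture or voice phrase to the keyboard shortcut to inject. The command names and shortcuts below are illustrative assumptions, not the project's actual bindings.

```python
# Illustrative mapping from recognized commands to key chords.
# These bindings are assumptions for the sketch, not the real config.
COMMAND_MAP = {
    "swipe_left":  ["alt", "left"],    # e.g. browser back
    "swipe_right": ["alt", "right"],   # e.g. browser forward
    "open tab":    ["ctrl", "t"],
    "close tab":   ["ctrl", "w"],
}

def resolve(command: str):
    """Return the key chord for a recognized command, or None."""
    return COMMAND_MAP.get(command.strip().lower())

# Actually injecting the chord could then use a library such as pynput:
#   from pynput.keyboard import Controller, Key
#   kb = Controller()
#   with kb.pressed(Key.ctrl):
#       kb.tap("t")
```

Keeping recognition and injection separate like this makes the dispatch table easy to test without touching the real keyboard.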

How we built it

Using IBM Bluemix and Watson, we hook into the Speech to Text, Text to Speech, and Watson IoT Platform services. We used a Leap Motion for gesture control, with custom gestures trained in TensorFlow, and a Raspberry Pi for control visualization. A headset provided audio input and a Bluetooth speaker provided audio output.
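Before a TensorFlow model can classify custom gestures, each Leap Motion frame has to be reduced to a fixed-size feature vector. The sketch below assumes a simplified frame layout (one palm position plus five fingertip positions); the real Leap SDK exposes a richer object model.

```python
# Sketch: turn a (simplified) Leap Motion frame into a 15-dimensional
# feature vector suitable for a small dense TensorFlow classifier.
# The frame layout here is an assumption, not the Leap SDK's API.

def frame_to_features(palm, fingertips):
    """Flatten fingertip positions relative to the palm.

    palm: (x, y, z) tuple; fingertips: list of five (x, y, z) tuples.
    Palm-relative coordinates make the features translation-invariant,
    so the same gesture looks alike anywhere above the sensor.
    """
    if len(fingertips) != 5:
        raise ValueError("expected five fingertips")
    px, py, pz = palm
    feats = []
    for fx, fy, fz in fingertips:
        feats.extend((fx - px, fy - py, fz - pz))
    return feats  # 15 numbers, one row of training data
```

A sequence of these vectors over time can then be fed to the classifier to distinguish, say, a swipe from a grab.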

Challenges we ran into

Connecting multiple devices so they communicate seamlessly with our application.

What we learned

IBM Bluemix services were relatively easy to use to augment our application. We also learned how to train machine-learned gestures for the Leap Motion with TensorFlow, and how to build Python WebSocket clients and servers.
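The Python WebSocket server we learned to build can be sketched with the third-party `websockets` library and asyncio. The JSON wire format and the acknowledgement behavior below are assumptions for illustration, not the project's actual protocol.

```python
# Minimal sketch of a WebSocket relay between devices, using the
# `websockets` package (pip install websockets). The message format
# is an assumption made for this example.
import asyncio
import json

def make_event(source: str, command: str) -> str:
    """Serialize a device event into the assumed JSON wire format."""
    return json.dumps({"source": source, "command": command})

async def handler(websocket):
    # Acknowledge each incoming device message, tagged with its source.
    async for message in websocket:
        event = json.loads(message)
        await websocket.send(make_event(event["source"], "ack"))

async def main():
    import websockets
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # serve until cancelled

# asyncio.run(main())  # commented out: requires the websockets package
```

A hub like this lets the Leap Motion process, the voice pipeline, and the Raspberry Pi exchange events without knowing about each other directly.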
