The Idea

With the recent success of depth cameras such as the Kinect, gesture and posture recognition has become much easier. Using depth sensors, the 3D locations of body joints can be reliably extracted and fed into any machine learning framework, so specific gestures or postures can be modelled and inferred. Real-world applications in Virtual Reality include yoga, ballet training, golf, and anything else related to activity recognition and proper posture. I also see applications in the Architectural, Engineering, Construction and Manufacturing industries: depth sensor data could be sent to the cloud to identify correct configurations, enabling quality control, anomaly detection and part recognition.

What it does

This is a proof of concept that detects the poses "Y", "M", "C" and "A" and streams the result back to the browser.

How I built it

A posture-recognition device in Virtual Reality: it captures the Kinect skeletal data, visualizes it in Virtual Reality, and sends it to Azure IoT for machine learning.

If this project got you interested in the Kinect, the LattePanda board, Johnny-Five, or Azure services, please click the "Like" button and follow me. Feel free to contact me if you have questions.

Challenges I ran into

Accomplishments that I'm proud of

Getting the Kinect to run on the LattePanda board

What I learned

Virtual Reality on the web, and machine learning

What's next for Posture Recognition, Kinect and WebVR

Built With
