We wanted to create a Dance Dance Revolution-style game using a single RGB camera for 2D pose estimation.

What it does

Using a CNN pre-trained on the MPII dataset, we detect up to 14 joint positions per player and score how closely they match by computing the sum of squared differences between Player 1's and Player 2's poses.
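The scoring idea above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: it assumes each player's pose arrives as 14 (x, y) joint coordinates from the pose network, and the helper name is ours.

```python
import numpy as np

N_JOINTS = 14  # joints detected by the MPII-trained network

def pose_distance(pose_a, pose_b):
    """Sum of squared differences between two players' joint positions.

    Identical poses score 0; larger values mean the players diverge more.
    """
    a = np.asarray(pose_a, dtype=float).reshape(N_JOINTS, 2)
    b = np.asarray(pose_b, dtype=float).reshape(N_JOINTS, 2)
    return float(np.sum((a - b) ** 2))

# Two synthetic poses, offset by (3, -4) pixels at every joint:
player1 = [(100, 200)] * N_JOINTS
player2 = [(103, 196)] * N_JOINTS
print(pose_distance(player1, player2))  # 14 * (3**2 + 4**2) = 350.0
```

In a real game loop this score would be computed once per frame and mapped to points, with lower distances meaning a better imitation.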

How we built it

We used OpenCV to downscale 1920x1080 camera frames to 340x270 and fed them through a pre-trained CNN from the eldar/pose-tensorflow library.

Challenges we ran into

We had trouble connecting the smartphone to the server because we were unfamiliar with the API we were calling. We also ran into accuracy problems with pose-tensorflow: when we tested images using our original idea, the results were not accurate enough, so we had to change our approach.

Accomplishments that we're proud of

Using TensorFlow and OpenCV, we were able to detect player movement.

What we learned

TensorFlow, OpenCV, Expo, Azure

What's next for Movement

Integrate the front end with the back end.

Built With

tensorflow, opencv, expo, azure