Inspiration

We have seen computer vision projects in the past that we thought were interesting and wanted to try building one ourselves.

What it does

It is a UI demo that uses gestures to interact with the application. You strike poses to navigate a trivia game, which encourages a fun learning environment.

How we built it

We used MediaPipe and OpenCV to process the video feed and recognize gestures. We then built a state machine to track which part of the game you were in and used gestures to navigate between its states. For the questions, we pull from an open-source database of trivia questions.
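The game-navigation state machine described above might look something like the sketch below. The screen names, gesture labels, and transition table here are hypothetical placeholders; in the real project, MediaPipe hand/pose landmarks would first be classified into gesture labels before being fed into a machine like this.

```python
from enum import Enum, auto

class Screen(Enum):
    """Hypothetical screens a trivia game might cycle through."""
    MENU = auto()
    QUESTION = auto()
    RESULT = auto()

# Transition table: (current screen, recognized gesture) -> next screen.
# Gesture labels are assumed names, standing in for whatever the
# MediaPipe-based recognizer actually emits.
TRANSITIONS = {
    (Screen.MENU, "thumbs_up"): Screen.QUESTION,   # start the game
    (Screen.QUESTION, "open_palm"): Screen.RESULT, # submit an answer
    (Screen.RESULT, "thumbs_up"): Screen.MENU,     # back to the menu
}

class TriviaStateMachine:
    def __init__(self):
        self.screen = Screen.MENU

    def handle_gesture(self, gesture: str) -> Screen:
        # Unknown or out-of-place gestures leave the screen unchanged,
        # which keeps stray detections from derailing the game.
        self.screen = TRANSITIONS.get((self.screen, gesture), self.screen)
        return self.screen
```

Keeping the transitions in a flat table like this makes it easy to add new screens or remap gestures without touching the recognition code.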

Challenges we ran into

It was very difficult to get the gesture recognition code running smoothly and accurately on our machines. We also wanted the app to feel intuitive and smooth, and fine-tuning those details presented a challenge. In addition, we explored other methods of tracking and interaction using body movements that did not always work well or suit this application, as the multiple branches of the GitHub repository make clear.

Accomplishments that we're proud of

We not only got the application to smoothly recognize gestures, but the app navigation works better than expected. In addition, we learned new libraries that have inspired us for future projects.

What we learned

We learned how to use machine learning and computer vision to process a camera feed and produce meaningful results.

What's next for pose pose revolution

The next step for pose pose revolution is to package the code as a module and build other apps that use our gesture-based UI.

Built With
