Inspiration

We were inspired by our love for K-pop and wanted to build something that could help the average person learn dance moves without needing a coach.

What it does

VibeDance lets users choose a song to dance to alongside a reference video, then tracks the user's motion to measure how well it matches the original choreography. It provides live feedback on what to improve, as well as a longer summary at the end of the practice session.
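At its core, "how well it matches" can be reduced to comparing the user's pose landmarks against the reference dancer's landmarks frame by frame. A minimal sketch of such a scoring function is below; the name `pose_similarity` and the exact normalization are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def pose_similarity(ref: np.ndarray, user: np.ndarray) -> float:
    """Score how closely a user's pose matches a reference pose.

    ref, user: (N, 2) arrays of landmark coordinates for one frame.
    Returns a score in [0, 1], with 1 meaning a perfect match.
    (Hypothetical simplified scorer, not VibeDance's real one.)
    """
    def normalize(p: np.ndarray) -> np.ndarray:
        # Center on the mean landmark and scale by overall spread, so the
        # score is invariant to where the dancer stands and how far they
        # are from the camera.
        centered = p - p.mean(axis=0)
        scale = np.linalg.norm(centered)
        return centered / scale if scale > 0 else centered

    a, b = normalize(ref), normalize(user)
    # Mean per-landmark distance, mapped into a 0..1 score.
    dist = np.linalg.norm(a - b, axis=1).mean()
    return float(max(0.0, 1.0 - dist))
```

Because of the normalization, a dancer who performs the same pose closer to or farther from the camera still scores a perfect match.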

How we built it

We built VibeDance with a Python FastAPI backend, using MediaPipe to recognize anatomical landmarks, OpenAI for practice analysis and feedback generation, and NumPy for intermediate calculations and image processing. On the frontend, we designed the UI in Figma Make and implemented it with React + TypeScript + Vite. The frontend integrates the browser's camera to record the user and sends snapshots of that footage to the backend for practice analysis.
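On the backend side, MediaPipe's Pose solution reports each landmark with normalized x/y/z coordinates plus a visibility score, and NumPy is a natural fit for turning those into arrays for analysis. The helper below is an illustrative sketch of that conversion step (the function name and the simplified tuple input are assumptions); it drops low-visibility landmarks so later averaging can ignore joints the camera never saw.

```python
import numpy as np

def landmarks_to_array(landmarks, min_visibility: float = 0.5) -> np.ndarray:
    """Convert landmarks to an (N, 2) NumPy array for analysis.

    landmarks: list of (x, y, visibility) tuples, a simplified stand-in
    for MediaPipe's per-landmark results (hypothetical helper).
    Landmarks below the visibility threshold become NaN rows, so
    downstream code can skip them with np.nanmean and friends.
    """
    out = np.full((len(landmarks), 2), np.nan)
    for i, (x, y, vis) in enumerate(landmarks):
        if vis >= min_visibility:
            out[i] = (x, y)
    return out
```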

Challenges we ran into

At first we ran into performance issues with the video recording, managing only a few fps because of the real-time tracking. We had to find a more efficient way to poll the landmark recognition so the video would stay smooth, which is essential for an app like this. We also hit a lot of version-control issues that led to time-consuming debugging.

Accomplishments that we're proud of

We're very proud of how professional and complete our UI looks, and of the now-seamless, high-fps video recording.

What we learned

We learned more about how to architect an app around live video recording and anatomical landmark recognition. We also learned a lot about the limits of AI coding tools once a codebase grows complex.

What's next for VibeDance

VibeDance still needs a more complete implementation of the feedback and dance-analysis system, which is currently buggy and inaccurate.

Built With

fastapi, figma, mediapipe, numpy, openai, python, react, typescript, vite
