Inspiration 🤯
As artists, it is in our DNA to create and portray our feelings and emotions through different forms and mediums. Social media has opened the doors for choreographers and dancers to showcase these emotions to broader audiences at a faster rate than ever. Yet amidst this exposure, the true essence of authorship often fades into the background. Many creators see their work celebrated without proper recognition, their unique expressions overshadowed in a sea of content. This is why we created Trace AI: to reclaim authorship and ensure that every dancer receives the acknowledgment they deserve.
What it does ✅
Let us get one thing clear: Trace AI is a functionality, not a standalone application. As mentioned above, Trace AI aims to properly credit choreographers on social media platforms so the general public can enjoy their creations without worrying about whether the artist has been recognized.
How we built it ⚙️
We categorize Trace AI into two parts: the full stack and the analysis. Tackling the full stack first, the front end was built with React and CSS. For the backend, we used both MongoDB and Firebase to handle video data and user data. We decided on two separate backends: MongoDB stores the URLs of the uploaded videos, while Firebase holds the user data and the actual uploaded files.
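The split between the two backends can be sketched with a small helper. This is illustrative only (the field names and `buildVideoRecord` are assumptions, not our exact schema): a hypothetical Firebase upload resolves with a download URL, and only that URL plus lightweight metadata ever lands in MongoDB.

```javascript
// Builds the document we would insert into a MongoDB "videos" collection
// after a hypothetical Firebase Storage upload resolves with a download URL.
// Field names here are illustrative, not the production schema.
function buildVideoRecord(firebaseUrl, userId, title) {
  return {
    url: firebaseUrl,   // points back at the file stored in Firebase
    uploaderId: userId, // Firebase user id, keeping the two stores linked
    title: title,
    uploadedAt: new Date().toISOString(),
  };
}
```

Keeping MongoDB limited to URLs and metadata means the video bytes live in exactly one place (Firebase), so the two stores never disagree about file contents.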
Challenges we ran into 😤
The biggest challenge we ran into, and are still facing, is converting the data collected from the TensorFlow pose estimate into a 3D glTF file that Panda3D can use to carry out the mesh analysis.
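A first step of that still-unsolved conversion can be sketched as follows. This is a simplification under stated assumptions: MoveNet-style keypoints (pixel `x`/`y` plus a confidence `score`), a flat `z = 0` because a 2D estimate carries no depth, and the function name `keypointsToNodes` is hypothetical rather than part of our pipeline.

```javascript
// Turn a TensorFlow pose estimate (MoveNet-style keypoints, in pixels)
// into normalized joint positions that a glTF node hierarchy could use.
function keypointsToNodes(keypoints, width, height) {
  return keypoints
    .filter((kp) => kp.score > 0.3) // drop low-confidence joints
    .map((kp) => ({
      name: kp.name,
      // glTF uses a right-handed, y-up coordinate system, so recenter to
      // roughly [-1, 1] and flip the image-space y axis.
      translation: [
        (kp.x / width) * 2 - 1,
        1 - (kp.y / height) * 2,
        0, // no depth available from a 2D estimate
      ],
    }));
}
```

The hard part that remains is wrapping such nodes into a valid glTF skeleton (parent/child joint hierarchy, inverse bind matrices) that Panda3D will accept.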
Some other challenges that we ran into, but were able to solve, were analyzing 3D animations and comparing joint transformations between two animations. We also struggled with mapping the TensorFlow model onto the "feed" of videos running on our website, but eventually solved it by using the Intersection Observer API to track which elements were in frame and which were off-screen.
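The joint-comparison idea can be sketched as a per-joint distance averaged over frames. The data shape here is an assumption for illustration (each frame maps a joint name to an `[x, y, z]` position), and the function names are ours, not a library API:

```javascript
// Average Euclidean distance between corresponding joints in two frames.
function frameDistance(frameA, frameB) {
  const joints = Object.keys(frameA).filter((j) => j in frameB);
  let total = 0;
  for (const j of joints) {
    const [ax, ay, az] = frameA[j];
    const [bx, by, bz] = frameB[j];
    total += Math.hypot(ax - bx, ay - by, az - bz);
  }
  return joints.length ? total / joints.length : 0;
}

// Lower score = more similar choreography; 0 means identical poses.
function animationSimilarity(animA, animB) {
  const n = Math.min(animA.length, animB.length);
  let total = 0;
  for (let i = 0; i < n; i++) total += frameDistance(animA[i], animB[i]);
  return n ? total / n : 0;
}
```

A real comparison would also need to handle animations of different lengths and tempos (e.g. by time-warping before comparing), which this sketch ignores.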
Accomplishments that we're proud of 🥳
Honestly, we are proud of everything. Our main goal going into this was to create something innovative and to show tangible results. The feed page and the main page are therefore among the big things we are proudest to show off, as they bring together a lot of integrated systems. We are also proud of creating an analysis method using Panda3D, something we hope to build on in the future, since these 3D models could be used with AR glasses and bring the art of dance into our lenses.
What we learned 💭
The major lessons we learned with this project were to iterate fast and to get a bare-minimum version working quickly. This let us focus our efforts on building as many basic functionalities as possible early on and then integrating them, which we predicted would take the majority of our time.
What's next for Trace AI 🚀
The first thing we will tackle is the outstanding problem we are currently facing: integrating the TensorFlow model with the Panda3D mesh analysis. We also want to do a lot of code cleanup and flesh out some of the site's functionality.
In addition, when our team was brainstorming this idea, we realized there are numerous corner cases we must consider around authorship. What if a student uploads choreography that was taught at a workshop, and the platform detects them as the first person to upload the dance? How do we properly track dance groups? How do we differentiate improv, musicality, and freestyle when addressing similarities between dances?
Even with these corner cases to consider, we hope to take this idea further by using the 3D mesh to its full potential and integrating it with AR glasses. Since we create 3D meshes out of these dances, we can project those meshes onto the lenses of AR glasses. Essentially, we imagined bringing the concept of Just Dance to any dance ever created, so you can learn it through AR glasses.