💡 Inspirations
Need instant inspiration for TikTok dance moves? Want to be the first to post a dance video of your favorite hidden music gem? Physically awkward?
Introducing JiveGenie, inspired by the TikTok trend "why are you taking a full body picture of me."
💃 What is JiveGenie?
JiveGenie is a powerful choreography generator that leverages Generative AI to create unique dance moves based on the music clip you choose for your TikTok video.
Our key features include:
- Generates trendy choreography from the music clip uploaded by the user
- Produces motion frames for the generated dance
- Presents results in the JiveGenie web application
- Exports motion frames for pose-transfer models such as MagicPose and DisCo (see the export sketch after this list)
- Opens up endless possibilities, such as social media filters, music video generation, and computer vision research
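As a rough illustration of the export path, the sketch below writes per-frame 2D keypoints in an OpenPose-style JSON layout, which pose-transfer pipelines commonly consume. The array shape and the constant confidence value are assumptions for illustration, not our exact on-disk format:

```python
import json
import os

import numpy as np

def export_openpose_json(keypoints_2d: np.ndarray, out_dir: str) -> None:
    """Write per-frame keypoints, shaped (frames, joints, 2), as OpenPose-style JSON."""
    os.makedirs(out_dir, exist_ok=True)
    for i, frame in enumerate(keypoints_2d):
        # OpenPose flattens each joint into an (x, y, confidence) triple;
        # the constant confidence of 1.0 here is an assumption.
        flat = [v for x, y in frame for v in (float(x), float(y), 1.0)]
        doc = {"version": 1.3, "people": [{"pose_keypoints_2d": flat}]}
        with open(os.path.join(out_dir, f"frame_{i:06d}_keypoints.json"), "w") as f:
            json.dump(doc, f)
```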
🤖 Tech Stack
Frontend
We built the frontend with JavaScript, TypeScript, and React to give users an interactive interface for uploading their preferred portion of a song and generating choreography. We used TikTok's Video Embed API to surface trending dances and soundtracks, so inspiration for which music clips to upload can flow freely. These embedded videos also give users a reference point for judging the choreography our models generate and for providing feedback; a server-side sketch of the embed lookup follows.
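For reference, TikTok exposes a public oEmbed endpoint that returns ready-to-embed markup for a video URL. Our React frontend uses the official embed flow, so this minimal Python sketch is illustrative only:

```python
import requests

def fetch_tiktok_embed(video_url: str) -> str:
    """Return embeddable HTML for a TikTok video via the public oEmbed endpoint."""
    resp = requests.get(
        "https://www.tiktok.com/oembed",
        params={"url": video_url},
        timeout=10,
    )
    resp.raise_for_status()
    # The oEmbed response bundles the blockquote + script markup for embedding.
    return resp.json()["html"]

# Hypothetical usage:
# html = fetch_tiktok_embed("https://www.tiktok.com/@someuser/video/1234567890")
```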
Backend
The backend is a Flask application written in Python. The core of the pipeline combines the open-source EDGE model (with pretrained weights) from the paper "EDGE: Editable Dance Generation From Music" (Jonathan Tseng, Rodrigo Castellon, C. Karen Liu, 2022) with MMHuman3D, an open-source PyTorch-based codebase for 2D and 3D human parametric models. A frozen Jukebox model extracts embeddings from the music, EDGE generates choreography in the form of SMPL-model keypoints, and MMHuman3D converts those keypoints into a 2D skeleton representation that serves as motion frames for pose-transfer models; a simplified sketch of the projection step appears below.
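To make the last step concrete: converting SMPL joints into 2D keypoints is, at its simplest, a camera projection. Below is a minimal weak-perspective sketch, assuming NumPy arrays of shape (frames, joints, 3); the real pipeline relies on MMHuman3D's camera and body-model utilities rather than this hand-rolled version:

```python
import numpy as np

def project_weak_perspective(joints_3d: np.ndarray,
                             scale: float = 1.0,
                             translation: tuple = (0.0, 0.0)) -> np.ndarray:
    """Project SMPL joints, shaped (frames, joints, 3), to 2D keypoints.

    A weak-perspective camera simply drops depth, then scales and shifts the
    result into image coordinates; MMHuman3D handles the full camera and
    body-model machinery that this sketch glosses over.
    """
    xy = joints_3d[..., :2]                      # drop the depth axis
    return scale * xy + np.asarray(translation)  # map into image coordinates
```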
Core Components
- EDGE model from "EDGE: Editable Dance Generation From Music" (Jonathan Tseng, Rodrigo Castellon, C. Karen Liu, 2022)
- MMHuman3D (open-source PyTorch-based codebase for 2D and 3D human parametric models)
- Frozen Jukebox model for music embedding extraction
🔨 Challenges & Reflection
- Building a lightweight pipeline that runs inference and renders dance visualizations smoothly
- Resolving dependency conflicts between models and libraries
- Reconciling different formats and representations of human body models (e.g., SMPL, OpenPose, and DensePose; see the sketch after this list)
- Limited resources for training and fine-tuning the model on a custom dataset (we should have used cloud compute such as GCP or AWS)
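On the representation-reconciliation point, much of the work reduces to reordering, dropping, or regressing joints between skeleton conventions. A minimal sketch of the index-remap core follows; the mapping shown is a placeholder for illustration, not the real SMPL-to-OpenPose table:

```python
import numpy as np

# Skeleton conventions disagree on joint count and ordering, so converting
# between them is largely an index remap (plus joints that must be dropped,
# averaged, or regressed). This table is a PLACEHOLDER, not the real
# SMPL-to-OpenPose mapping.
EXAMPLE_JOINT_MAP = [0, 12, 17, 19, 21, 16, 18, 20, 2, 5, 8, 1, 4, 7]

def remap_skeleton(keypoints: np.ndarray, mapping: list) -> np.ndarray:
    """Select and reorder (frames, joints, 2) keypoints into a target convention."""
    return keypoints[:, mapping, :]
```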
🚀 Next Steps
Make JiveGenie available on more platforms:
- As part of the TikTok application
- As an independent application on iOS and Android
Develop an extension or filter to help users learn the dances:
- Overlay the generated dance on top of TikTok's video recording feature
- Let users follow the overlay to learn the dance
- Offer the option to post a video of themselves dancing alongside the AI-generated figure
