We were inspired by the “Everybody Dance Now” demonstration, which combines image-to-image translation with pose estimation to produce photo-realistic ‘do as I do’ motion transfer. The researchers used roughly 20 minutes of video, shot at 120 fps, of a subject moving through a normal range of body motion. We wanted to make motion transfer more accessible, so we explored reductions to the model and implementation, aiming to quickly produce reasonable-quality motion transfer examples in a live demo.
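To make the pipeline concrete: a pose estimator reduces each source frame to 2D keypoints, which are rasterized into a stick-figure image that conditions a pix2pix-style generator. The sketch below is a toy illustration of that rasterization step only, not the project's actual code; the keypoint names and skeleton are made up for the example.

```python
import numpy as np

def render_pose(keypoints, bones, size=(64, 64)):
    """Rasterize 2D pose keypoints into a stick-figure image.

    In a pix2pix-style motion-transfer pipeline, frames like this
    condition the generator; in the real system the keypoints come
    from a pose estimator such as OpenPose.
    """
    img = np.zeros(size, dtype=np.uint8)
    for a, b in bones:
        (x0, y0), (x1, y1) = keypoints[a], keypoints[b]
        # Sample enough points along the segment to leave no gaps.
        n = max(abs(x1 - x0), abs(y1 - y0)) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        img[ys, xs] = 255  # draw the bone as a 1-px white line
    return img

# Hypothetical toy skeleton: head, hip, and two legs on a 64x64 canvas.
kp = {"head": (32, 8), "hip": (32, 36), "lfoot": (20, 60), "rfoot": (44, 60)}
bones = [("head", "hip"), ("hip", "lfoot"), ("hip", "rfoot")]
frame = render_pose(kp, bones)  # 64x64 uint8 conditioning image
```

In the full system, one such frame per video frame is paired with the corresponding photo of the target subject to train the generator.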
How we built it
We describe our approach in detail on our blog: https://smellslike.ml/posts/everybody-dance-faster/