Inspiration

A few months ago, I came across several interesting research papers on dancing motion transfer and full-body generation, such as Liquid Warping GAN, the First Order Motion Model, and Everybody Dance Now, and I've been itching to try these models out ever since.

What it does

We aim to democratize character design and animation for the general public. No more learning complex software or buying expensive motion-capture gear; just sit back and let the AI do the tedious work for you!

CharacterGAN lets you generate 2D characters with AI and edit attributes such as gender, bulkiness, monstrousness, and sexiness using simple sliders. Additionally, you can animate a generated character by uploading a video of a person moving; the motion is transferred to the character without any expensive motion-capture gear.

How we built it

I scraped around 8k character images from ArtBreeder, then trained StyleGAN2-ADA on the dataset for about a day; the results are surprisingly good. Next, I found useful latent directions with GANSpace and labeled them with attributes like gender, color, etc. For motion transfer, I used Impersonator++ (the Liquid Warping GAN model). Finally, I built a simple, user-friendly interface for the app with Gradio.
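Under the hood, each attribute slider amounts to shifting a latent code along a fixed direction before it goes back through the generator. Below is a minimal sketch of that edit using NumPy stand-ins: the function names, random "directions", and slider values are illustrative assumptions, not the actual project code, where the directions would come from GANSpace and the latents from the StyleGAN2 mapping network.

```python
import numpy as np

def edit_latent(w, directions, sliders):
    """Shift a latent code along labeled attribute directions.

    w          -- (512,) latent vector (e.g. from StyleGAN2's mapping network)
    directions -- dict mapping attribute name -> (512,) direction vector
                  (e.g. principal components discovered with GANSpace)
    sliders    -- dict mapping attribute name -> signed slider strength
    """
    w_edited = w.copy()
    for name, strength in sliders.items():
        w_edited += strength * directions[name]
    return w_edited

# Toy example with random vectors standing in for real GANSpace directions.
rng = np.random.default_rng(0)
w = rng.standard_normal(512)
directions = {"gender": rng.standard_normal(512),
              "bulkiness": rng.standard_normal(512)}
edited = edit_latent(w, directions, {"gender": 1.5, "bulkiness": -0.5})
```

In the app, each Gradio slider value simply becomes one entry of the `sliders` dict, and the edited latent is re-rendered through the generator.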

Challenges we ran into

This was the first time I had to do web scraping, which took some time to learn; I ended up using Selenium to avoid being flagged as a bot. Working with video and FFmpeg was also frustrating at times: sometimes the audio is out of sync, or the trimming is not frame-accurate. Finally, I tried a new video editing tool, Adobe After Effects, and I feel my video editing skills improved a bit.
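One likely cause of the inaccurate trimming: when `-ss` is combined with stream copy (`-c copy`), FFmpeg can only cut on keyframes, so the cut point drifts; re-encoding gives frame-accurate trims at the cost of speed. A hedged sketch that just builds the command line (the helper name and file paths are placeholders, not from the project):

```python
def trim_cmd(src, dst, start, duration, accurate=True):
    """Build an ffmpeg command that trims `duration` seconds from `start`.

    accurate=True  -- re-encode, so the cut lands exactly at `start`
    accurate=False -- stream copy: fast, but snaps to the nearest keyframe
    """
    cmd = ["ffmpeg", "-ss", str(start), "-i", src, "-t", str(duration)]
    if accurate:
        cmd += ["-c:v", "libx264", "-c:a", "aac"]  # re-encode for accuracy
    else:
        cmd += ["-c", "copy"]  # no re-encode; keyframe-aligned cuts only
    cmd.append(dst)
    return cmd

# e.g. subprocess.run(trim_cmd("in.mp4", "out.mp4", 12.5, 3.0), check=True)
```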

Initially, I also wanted to expose this as an API using FastAPI and build a proper front-end app, but I ended up spending too much time on video editing and on generating results for the demo. The motion transfer also takes quite a while to process a video, though that may be because I used a high-resolution video as the input.

Accomplishments that we're proud of

Incorporated multiple models in a short time, and learned web scraping and Adobe After Effects along the way.

What we learned

Learned web scraping and Adobe After Effects.

What's next for CharacterGAN

Making the deployment architecture more scalable and performant, turning the ML code into an API, and building a proper front end. Real-time motion transfer.

Updates

The Colab demo currently has some dependency issues in the character-editing part, but motion transfer should still work. Just skip the "Load Model" and "Character Editing" sections.

Meanwhile, you can try the demos here for now:

Character Editing Demo: https://bit.ly/charactergan-edit
Character Animation Demo: https://bit.ly/charactergan-animate

I'm running these on Colab, so they will only be available for the next 24 hours.