Inspiration

  • The idea for our hackathon project came from the relatively recent popularity of deepfakes in meme culture. When the tracks for this hackathon were unveiled, we decided to make an arcade game with deepfake technology.

What it does

  • A fighting game built in Unity that uses AI-generated sprites created from motion capture and a Liquid Warping GAN.
  • Our project uses the ITERDance GAN to take a video of motion (e.g. dancing or fighting) and map two photos (front and back) of another person onto that motion. We then turn the resulting animation into a sprite sheet to create a character in our game.
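The sprite-sheet step above can be sketched in a few lines. This is an illustrative sketch, not the project's actual code; the frame dimensions and row-major layout are assumptions:

```python
from PIL import Image

def slice_sprite_sheet(sheet: Image.Image, frame_w: int, frame_h: int) -> list:
    """Split a sprite sheet into individual animation frames,
    reading left to right, top to bottom."""
    cols = sheet.width // frame_w
    rows = sheet.height // frame_h
    frames = []
    for r in range(rows):
        for c in range(cols):
            box = (c * frame_w, r * frame_h, (c + 1) * frame_w, (r + 1) * frame_h)
            frames.append(sheet.crop(box))
    return frames
```

In Unity the equivalent slicing is normally done in the Sprite Editor, but a helper like this is handy for inspecting the GAN's output frames beforehand.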

How we built it

  • We used ITERDance for motion capture, transferring the captured motion onto images of people to produce animated sprites for our game.
  • A custom-built Tkinter application removes the background and groups the sprite frames.
  • The Unity game engine powers the game itself, with three different streets, AI combat with incremental difficulty, a multiplayer mode, and a live game preview mode.
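The background-removal step could be as simple as a chroma-key pass over each frame. This is a minimal sketch under the assumption that the GAN renders subjects on a near-uniform background color; the function name and tolerance value are illustrative, not taken from our Tkinter tool:

```python
import numpy as np
from PIL import Image

def remove_background(frame: Image.Image, bg_color=(0, 0, 0), tol=40) -> Image.Image:
    """Make every pixel within `tol` of `bg_color` fully transparent
    (a simple chroma-key), returning an RGBA image."""
    rgba = np.array(frame.convert("RGBA"), dtype=np.int16)
    # Manhattan distance of each pixel's RGB channels from the background color
    dist = np.abs(rgba[..., :3] - np.array(bg_color)).sum(axis=-1)
    rgba[..., 3] = np.where(dist < tol, 0, 255)
    return Image.fromarray(rgba.astype(np.uint8), "RGBA")
```

Frames processed this way can be saved as PNGs with transparency and imported straight into Unity as sprites.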

Challenges we ran into

  • Significant issues getting CUDA to work with PyTorch for ITERDance.
  • Getting used to Unity and learning it on the spot.
  • Repeated merge conflicts between branches implementing different features.
  • Staying awake for the entire night.
  • Figuring out the frames for the animations.

Accomplishments that we're proud of

  • Resolving merge conflicts took a long time despite our efforts to prevent them, but we always made sure each merge worked properly.
  • All the art assets were designed and made by us in Adobe Photoshop, and the streets are based on locations at UCR.
  • Animations and various player states. Processing all the frames for our animations is a heavy workload, but we handled it with PyTorch and can now import any person as a character in about five minutes.
  • Quick prototyping and speedrunning a minimum viable product, with a decently strong AI player.
  • Added post-processing effects to upgrade the graphics.

What we learned

  • Became more familiar with Unity.
  • Ways to prevent merge conflicts on GitHub.
  • Teamwork management.
  • Game balance is challenging.

What's next for R’umble

  • Optimize the animation creation process and add an upload section so players can upload their own pictures as characters.
  • Add more streets on campus.
  • Add more dialogue and an expanded storyline.
  • Add more special player movements and combo attacks.

Works Cited

  • Liu, W., Piao, Z., Tu, Z., Luo, W., Ma, L., & Gao, S. (2021). Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis. IEEE.
  • Liu, W., Piao, Z., Min, J., Luo, W., Ma, L., & Gao, S. (2019). Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis. The IEEE International Conference on Computer Vision (ICCV).
