💡 Inspiration

What if you could meet your favorite fictional character? As avid anime fans since childhood, we dreamed of our favorite anime characters coming to life. Our interactive deep-learning web app, Anime Talk, makes that possible. The idea started when we came across an article exploring how AI is being used to generate hyper-realistic human representations of anime characters, which inspired us to bring anime one step closer to life.

🔎 What it does

Anime Talk gives anime fans an exciting, immersive way to see their favorite characters come to life as realistic human versions of themselves. Using AI, the web app displays deepfakes of anime characters that mimic real human movement, and it also gives fans a way to connect with the wider anime community. Users can pick from an extensive roster of characters drawn from different anime worlds and chat with them through a chatbox: the app takes the user's message and replies with a synthesized audiovisual of the character responding in a natural manner. A community forum lets users meet new people and discuss all things anime.

🔨 How we built it

We started with still images of realistic, AI-generated renderings of anime characters and used Deep Nostalgia, a deep-learning face-animation tool, to turn each character's face into a moving video. We built an audio database from the characters' voice clips and stored it alongside the videos in Google Cloud Storage, then fed both into Wav2Lip, which generated lip-synced audiovisuals combining the deepfakes with the voices. JavaScript, HTML, and CSS tie the audiovisuals into the chat features and select the appropriate response to each user message.
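
The sketch below illustrates one way the chat front end could work under this design: the user's message is matched against a small keyword table and the corresponding pre-generated, lip-synced clip is loaded into the character's video element. The bucket name, file layout, element IDs, and keyword mapping are all illustrative placeholders, not the project's actual assets.

```javascript
// Minimal sketch of the chat-to-clip flow, assuming lip-synced reply clips
// are hosted in a (hypothetical) public Google Cloud Storage bucket.
const BUCKET_URL = "https://storage.googleapis.com/anime-talk-demo";

// Hypothetical keyword-to-clip mapping for one character.
const RESPONSES = [
  { keywords: ["hello", "hi"], clip: "naruto/greeting.mp4" },
  { keywords: ["ramen", "food"], clip: "naruto/ramen.mp4" },
];
const FALLBACK_CLIP = "naruto/idle_reply.mp4";

// Pick the first clip whose keywords appear in the user's message.
function pickClip(message) {
  const text = message.toLowerCase();
  const match = RESPONSES.find((r) => r.keywords.some((k) => text.includes(k)));
  return `${BUCKET_URL}/${match ? match.clip : FALLBACK_CLIP}`;
}

// Swap the character's <video> element to the chosen reply and play it.
function respond(message) {
  const video = document.getElementById("character-video");
  video.src = pickClip(message);
  video.play();
}

document.getElementById("chat-form").addEventListener("submit", (event) => {
  event.preventDefault();
  const input = document.getElementById("chat-input");
  respond(input.value);
  input.value = "";
});
```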

🥇 Accomplishments that we're proud of

Building such an extensive, interactive application is something we are proud of, since most of us were beginners with no prior exposure to deep learning. Users get a smooth, near-instantaneous reply, which was key to making the experience immersive. We also paid close attention to details such as each character's humour and witty comebacks. The community forum is a fun, collaborative feature that we enjoyed adding to the application.

⚠️ Challenges we ran into

We originally planned to use Real-Time Voice Cloning with our characters' audio database to generate text-to-speech, but the synthesized audio came out unclear and messy, so we pivoted to a response system that maps user input to existing voice clips of the anime characters. Finding clips that matched what we needed, and were clean enough without loud background sounds, was difficult; we ultimately pieced together multiple soundboards and YouTube videos to get good-quality results. The lip-synced files produced by Wav2Lip were also blurrier than the original deepfakes, which made the transition into a character's response a bit rough. We addressed this by enhancing video quality and smoothing the transitions from one file to another.
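
One simple way to hide that quality jump, assuming the page keeps two stacked video elements, is to crossfade between the idle deepfake loop and the Wav2Lip reply clip. The element IDs and fade timing below are assumptions for illustration, not the exact fix we shipped.

```javascript
// Crossfade sketch: both <video> elements overlap and share a CSS rule
// such as `transition: opacity 0.4s ease`.
const idleVideo = document.getElementById("idle-video");   // looping idle deepfake
const replyVideo = document.getElementById("reply-video"); // lip-synced Wav2Lip clip

function playReply(clipUrl) {
  replyVideo.src = clipUrl;
  replyVideo.oncanplay = () => {
    replyVideo.play();
    replyVideo.style.opacity = 1; // fade the reply in
    idleVideo.style.opacity = 0;  // fade the idle loop out
  };
  // When the reply finishes, fade back to the idle loop.
  replyVideo.onended = () => {
    idleVideo.style.opacity = 1;
    replyVideo.style.opacity = 0;
  };
}
```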

🧠 What we learned

We gained insight into the world of AI and deep learning, and into how those funny deepfakes of Obama on YouTube are actually made. This was also our first time building a chatbox or community forum feature on a web app, so we are happy with the result of this first attempt.

💭 What's next for Anime Talk

We hope to give users even richer responses by synthesizing a larger database of audiovisuals and extending fully functional chat to more characters on the list. Smoothing the deepfakes' transition from still to talking is another area for future improvement, as are a more customizable profile and a more engaging forum.
