Inspiration
Realizing that many incredible speeches by history's greatest orators have failed to survive the test of time, we decided to bring them back to life using AI and hologram technology.
What it does
This project aims to build a life-like speech-rendering pipeline that revives long-lost speeches. It sits at the intersection of three major technologies: speech synthesis, spectrogram-based audio modelling, and hologram display.
How we built it
Project Resurrect uses deep learning models to synthesize audio and video and sync them together, recreating famous speeches that can then be used to generate holograms, virtual actors, and more. We refine these videos by continually feeding the models new data, aiming to yield seamless video in real time.
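The audio side of a pipeline like this typically works on spectrograms rather than raw waveforms: a model predicts time-frequency frames, and a vocoder converts them back into sound. Below is a minimal, illustrative sketch of computing a magnitude spectrogram with NumPy only; the frame and hop sizes are assumptions for demonstration, not the project's actual parameters, and real systems usually use mel-scaled spectrograms.

```python
# Minimal sketch: magnitude spectrogram of a synthetic signal.
# frame_size / hop_size are illustrative assumptions, not project settings.
import numpy as np

def spectrogram(signal, frame_size=256, hop_size=128):
    """Return |STFT| frames, shape (n_frames, frame_size // 2 + 1)."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop_size):
        frame = signal[start:start + frame_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

# A 440 Hz tone sampled at 16 kHz stands in for synthesized speech.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)
print(spec.shape)  # one row per analysis frame, one column per frequency bin
```

In a speech-synthesis setting, a neural model would output frames like these (on a mel scale) and a vocoder would invert them to audio; this sketch only shows the forward transform.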
Challenges we ran into
Firstly, the scarcity of high-quality audio datasets containing revolutionary speeches is a major barrier. Secondly, hologram technology as popularly imagined doesn't really exist yet; we are experimenting to come up with an alternative that renders the same effect.
Accomplishments that we're proud of
We successfully created a 3D model of the hologram setup in SolidWorks, which produces an "illusion" of a true hologram.
What we learned
We learned to work with several machine learning libraries, and a great deal about holograms.
What's next for SW-D05
We plan to substantially improve our deep learning models, and we may explore Unity to create better-looking 3D models.