Inspiration

This project was inspired by the Green Goblin's iconic mirror scene in Spider-Man (2002), in which Norman Osborn talks to his alter ego.

What it does

The app captures an image of the user through the webcam and processes it. It then displays a generated video of the user delivering Green Goblin lines from the movie.

How we built it

The frontend was built with React. It captures a photo and sends it to the backend via an API call. Due to processing constraints, we use the Gooey API to generate the lipsync; the resulting video is then sent back to the frontend and displayed. There is also a version of the application that uses a modified open-source Python program, Wav2Lip (https://github.com/Rudrabha/Wav2Lip).
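The capture-and-send step can be sketched as a small payload builder on the Python side. The field names ("face_image", "text") and the choice to base64-encode the photo are illustrative assumptions for this sketch, not the actual Gooey API schema:

```python
import base64


def build_lipsync_payload(photo_bytes: bytes, line: str) -> dict:
    """Assemble a JSON-serializable body for a lipsync request.

    The keys below are hypothetical; a real integration would follow
    the lipsync provider's documented schema.
    """
    return {
        # Base64 keeps raw image bytes safe inside a JSON body.
        "face_image": base64.b64encode(photo_bytes).decode("ascii"),
        # The Green Goblin line the generated video should speak.
        "text": line,
    }


payload = build_lipsync_payload(b"fake-jpeg-bytes", "Hello, Spider-Man.")
```

Round-tripping the `face_image` field through `base64.b64decode` recovers the original photo bytes, which is what the backend would do before handing the image to the lipsync step.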

Challenges we ran into

Connecting the frontend and backend was difficult: we had to send a photo to the backend and receive the generated video back, which we eventually accomplished with an API we created. We also underestimated the time it would take to achieve our goals, as we faced numerous rounds of trial and error.
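The photo-in, video-out API described above can be sketched with only the Python standard library. The route, response fields, and placeholder video URL here are assumptions for illustration, not the team's actual implementation:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


class LipsyncHandler(BaseHTTPRequestHandler):
    """Hypothetical endpoint: accepts a POSTed photo, returns a video URL."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        photo = self.rfile.read(length)
        # In the real app, the lipsync generation would happen here;
        # this sketch just echoes a placeholder result.
        body = json.dumps({
            "video_url": "/videos/result.mp4",
            "photo_bytes": len(photo),
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging to keep test output clean.
        pass


def serve_once(port: int = 8891):
    """Serve exactly one request on a background thread, then stop."""
    server = HTTPServer(("127.0.0.1", port), LipsyncHandler)
    thread = threading.Thread(target=server.handle_request, daemon=True)
    thread.start()
    return server, thread
```

A frontend (here simulated with `urllib`) POSTs the photo bytes and reads back a JSON body pointing at the generated video.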

Accomplishments that we're proud of

We completed a relatively stable and easy-to-use application in the timeframe given. This is the first time any of us has worked under such a restrictive timeframe, so we are proud of what we accomplished.

What we learned

Programming in such a short time is hard. Despite our project being simple in concept, errors and lack of experience made tasks that seem easy in hindsight much more difficult.

What's next for Green Goblin

Adding a deep-fake feature so that you can sound and look like Willem Dafoe.
