Inspiration

We were watching a boring corporate video and realized that these videos are all so simple, repetitive, and formulaic that we could write a program to make them for us.

What it does

Ignitify generates a corporate video live as you speak: real-time speech recognition and keyword analysis drive a continuously updating feed of stock video clips matching whatever you're talking about. It also generates an original, unique music track that plays in the background.

How we built it

We used a Python MIDI library to generate the music and Azure Cognitive Services to recognize speech. rake-nltk pulls keywords out of the transcript, we search stock video sites like Pixabay and Shutterstock for clips matching those keywords, and OpenCV displays the resulting feed.
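
Here's a minimal sketch of the speech-to-video half of that pipeline. The API keys and region are hypothetical placeholders, and it assumes the Pixabay video API plus an OpenCV build with FFMPEG support (for streaming clips by URL):

```python
# Sketch of the pipeline: speech -> keywords -> stock clip -> display.
import requests
import cv2
import azure.cognitiveservices.speech as speechsdk
from rake_nltk import Rake  # needs nltk's stopwords + punkt data downloaded

AZURE_KEY, AZURE_REGION, PIXABAY_KEY = "...", "eastus", "..."  # placeholders

def listen_once() -> str:
    """Block until one utterance is recognized and return its transcript."""
    config = speechsdk.SpeechConfig(subscription=AZURE_KEY, region=AZURE_REGION)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config)
    return recognizer.recognize_once().text

def top_keyword(text: str) -> str:
    """Rank candidate phrases with RAKE and return the best one."""
    rake = Rake()
    rake.extract_keywords_from_text(text)
    phrases = rake.get_ranked_phrases()
    return phrases[0] if phrases else text

def find_clip_url(query: str) -> str:
    """Search the Pixabay video API and return the URL of the first hit."""
    resp = requests.get("https://pixabay.com/api/videos/",
                        params={"key": PIXABAY_KEY, "q": query})
    return resp.json()["hits"][0]["videos"]["medium"]["url"]

def play_clip(url: str) -> None:
    """Stream the clip frame by frame with OpenCV."""
    cap = cv2.VideoCapture(url)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("Ignitify", frame)
        if cv2.waitKey(30) & 0xFF == ord("q"):
            break
    cap.release()

if __name__ == "__main__":
    play_clip(find_clip_url(top_keyword(listen_once())))
```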

Challenges we ran into

The Python library for the speech recognition service that we were originally using (Google Cloud) didn't work at all with asyncio, which meant we couldn't listen for speech while displaying videos. We eventually switched to Azure for speech recognition, which worked much better.
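
One pattern that sidesteps the blocking problem, and one that Azure's SDK supports directly, is continuous recognition: the SDK does its work on a background thread and delivers results through callbacks, so the main thread stays free to drive the video display. A sketch of that pattern (credentials again hypothetical):

```python
import queue
import azure.cognitiveservices.speech as speechsdk

transcripts: queue.Queue = queue.Queue()

def on_recognized(evt):
    """Runs on the SDK's worker thread whenever an utterance is finalized."""
    if evt.result.text:
        transcripts.put(evt.result.text)

config = speechsdk.SpeechConfig(subscription="AZURE_KEY", region="eastus")
recognizer = speechsdk.SpeechRecognizer(speech_config=config)
recognizer.recognized.connect(on_recognized)
recognizer.start_continuous_recognition()  # returns immediately

# ...run the OpenCV display loop here, calling transcripts.get_nowait()
# each frame to pick up newly recognized speech...

recognizer.stop_continuous_recognition()
```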

Accomplishments that we're proud of

We spent a lot of time fine-tuning our natural language processing so that the keywords it extracts make good stock-site search queries, and ended up with searches that consistently return relevant clips.
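
One illustrative tuning knob (a sketch, not the exact heuristics we shipped): rake-nltk lets you cap candidate phrase length, and one- or two-word queries tend to match stock-site tags far better than RAKE's longer multi-word phrases do.

```python
from rake_nltk import Rake  # needs nltk's stopwords + punkt data downloaded

rake = Rake(min_length=1, max_length=2)  # prefer 1-2 word search queries
rake.extract_keywords_from_text(
    "Our platform empowers stakeholders to leverage synergy across the enterprise."
)
print(rake.get_ranked_phrases()[:3])  # top-ranked short phrases to use as queries
```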

What we learned

We learned a lot about asynchronous programming in Python while getting all the parts of the program to run at the same time. We also figured out how to effectively parse text for keywords, and learned how to randomly generate music that sounds okay.
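
On the music side, one trick that makes random notes sound okay is sticking to a pentatonic scale, where adjacent notes rarely clash. A sketch using midiutil (an illustrative stand-in for the MIDI library, which this write-up doesn't name):

```python
# Random walk over a C major pentatonic scale, written out as a MIDI file.
import random
from midiutil import MIDIFile

PENTATONIC = [60, 62, 64, 67, 69, 72]  # C, D, E, G, A, C

midi = MIDIFile(1)  # one track
midi.addTempo(track=0, time=0, tempo=110)

beat = 0.0
degree = 2  # start mid-scale
for _ in range(32):
    # Step at most one scale degree at a time so the melody stays smooth.
    degree = max(0, min(len(PENTATONIC) - 1, degree + random.choice([-1, 0, 1])))
    duration = random.choice([0.5, 1.0])  # eighth or quarter note
    midi.addNote(track=0, channel=0, pitch=PENTATONIC[degree],
                 time=beat, duration=duration, volume=90)
    beat += duration

with open("backing_track.mid", "wb") as f:
    midi.writeFile(f)
```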

What's next for Ignitify

We need to hide it from corporations and make sure it never gets used, ever.
