Creating subtitles for videos can be tedious. It often takes so long to generate a transcript, figure out the formatting, and get it loaded into a video editing program that many creators skip it entirely. With the help of Azure AI Speech to Text, this simple program lets anyone get a head start on creating a subtitles file to import into their video editor. Now more than ever it is important to think not just of those who are hard of hearing, who absolutely need closed captioning to consume videos, but also of those who simply prefer to watch with captions turned on. With this tool, you can get a head start on building these files and be more productive during your editing flow, so that no matter where on the Internet your video is consumed, it can be enjoyed by all!
What it does
Generates WebVTT and SRT files using Azure Speech to Text. If you choose to have translated subtitles, it uses Azure Translator to generate those as well.
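The core of the file generation can be sketched roughly as below. This is a simplified illustration, not the project's actual code: the `phrases` shape (`start`/`end` in seconds plus `text`) is a hypothetical stand-in for what the Azure Speech SDK actually returns (it reports offsets in 100-nanosecond ticks).

```javascript
// Format a time in seconds as HH:MM:SS + milliseconds.
// SRT separates milliseconds with ",", WebVTT with ".".
function toTimestamp(seconds, sep) {
  const h = String(Math.floor(seconds / 3600)).padStart(2, "0");
  const m = String(Math.floor((seconds % 3600) / 60)).padStart(2, "0");
  const s = String(Math.floor(seconds % 60)).padStart(2, "0");
  const ms = String(Math.round((seconds % 1) * 1000)).padStart(3, "0");
  return `${h}:${m}:${s}${sep}${ms}`;
}

// SRT: numbered cues, comma before milliseconds.
function toSrt(phrases) {
  return phrases
    .map((p, i) =>
      `${i + 1}\n${toTimestamp(p.start, ",")} --> ${toTimestamp(p.end, ",")}\n${p.text}\n`)
    .join("\n");
}

// WebVTT: "WEBVTT" header, dot before milliseconds, cue numbers optional.
function toVtt(phrases) {
  const cues = phrases
    .map((p) =>
      `${toTimestamp(p.start, ".")} --> ${toTimestamp(p.end, ".")}\n${p.text}\n`)
    .join("\n");
  return `WEBVTT\n\n${cues}`;
}

// Hypothetical recognized phrases for demonstration.
const phrases = [
  { start: 0, end: 2.5, text: "Hello and welcome." },
  { start: 2.5, end: 5, text: "Let's get started." },
];
console.log(toSrt(phrases));
console.log(toVtt(phrases));
```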
How we built it
Azure Speech to Text does the heavy lifting of generating a video transcript. Azure Translator then translates the transcript into other languages if the creator wants multiple languages. FFmpeg is used to prepare the video's audio for Azure. Finally, Node.js ties everything together.
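The FFmpeg preparation step might look like the sketch below, which builds the arguments to extract the audio track as 16 kHz mono 16-bit PCM WAV (a format Azure Speech accepts). The file paths are hypothetical, and for brevity this only prints the command rather than running it; in practice you would pass the args to `child_process.execFile("ffmpeg", …)`.

```javascript
// Build FFmpeg arguments to extract audio suitable for Azure Speech.
function ffmpegArgs(videoPath, wavPath) {
  return [
    "-i", videoPath,
    "-vn",               // drop the video stream
    "-ar", "16000",      // resample to 16 kHz
    "-ac", "1",          // downmix to mono
    "-c:a", "pcm_s16le", // 16-bit PCM WAV
    wavPath,
  ];
}

// Print the full command line (paths are placeholders).
console.log(["ffmpeg", ...ffmpegArgs("input.mp4", "audio.wav")].join(" "));
```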
Challenges we ran into
Understanding how FFmpeg works and the differences between subtitle formats, including WebVTT and SRT.
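The format differences the challenge refers to are small but easy to trip over: SRT numbers its cues and uses a comma before the milliseconds, while WebVTT starts with a `WEBVTT` header, uses a dot, and makes cue numbers optional. A minimal SRT-to-WebVTT converter under those assumptions (illustrative only, not the project's code):

```javascript
// Convert an SRT string to WebVTT: drop cue numbers, swap the
// millisecond separator, and prepend the WEBVTT header.
function srtToVtt(srt) {
  const body = srt
    .replace(/^\d+\s*$/gm, "")                        // drop cue numbers
    .replace(/(\d{2}:\d{2}:\d{2}),(\d{3})/g, "$1.$2") // comma -> dot
    .trim();
  return `WEBVTT\n\n${body}\n`;
}

const srt = `1
00:00:00,000 --> 00:00:02,500
Hello and welcome.
`;
console.log(srtToVtt(srt));
```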
Accomplishments that we're proud of
Creating this simple project that any video editor, myself included, can use to be more productive.
What we learned
How to use Azure Speech to Text, Azure Translator, and FFmpeg, as well as the differences between subtitle formats.
What's next for Subtitles Node JS
Build a plugin for DaVinci Resolve so there is no context switching, and so those who might not feel comfortable with the command line can still use it.