Inspiration
What inspired us was the relatively poor experience we had trying to find transcriptions for podcasts we listened to regularly. That got us thinking: if we have trouble finding transcriptions, how are people who are deaf or hard of hearing supposed to consume this form of media at all? With all of the tools available today, we wondered why there was still no cost-effective, user-friendly solution for making podcasts accessible to everyone.
What it does
hear provides a rich podcast experience for deaf and hard of hearing people (and everyone else too) by offering an automated, on-demand transcription and annotation service for audio files. You can read along or learn more while listening, or skim to your favorite parts of any podcast.
How we built it
We built it using these technologies:
Google Cloud Speech-to-Text, Google Cloud Natural Language, and Wikipedia
Firebase Realtime Database
Node.js, Express.js, and TypeScript, with React hooks on the frontend
Swagger, ESLint, and Prettier
Google Cloud Platform, GitHub, and Figma
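The core flow chains these services in series: transcribe the audio, extract entities from the transcript, then annotate each entity with background information. A minimal sketch of that pipeline is below; `transcribe`, `extractEntities`, and `lookupWikipedia` are illustrative stand-ins (the real Google Cloud and Wikipedia clients require credentials and network access), not the actual client APIs.

```typescript
// Sketch of the transcription/annotation pipeline with the external
// services replaced by stand-in functions. In the real app these would be
// Google Cloud Speech-to-Text, Natural Language, and Wikipedia calls.

interface Annotation {
  entity: string;
  summary: string;
}

// Stand-in for Speech-to-Text: audio file -> transcript text.
async function transcribe(audioUrl: string): Promise<string> {
  return `Transcript of ${audioUrl}`;
}

// Stand-in for Natural Language: transcript -> named entities.
async function extractEntities(transcript: string): Promise<string[]> {
  return [transcript.split(" ")[0]]; // pretend the first word is an entity
}

// Stand-in for a Wikipedia summary lookup per entity.
async function lookupWikipedia(entity: string): Promise<Annotation> {
  return { entity, summary: `Wikipedia summary for ${entity}` };
}

// The APIs run in series: each step consumes the previous step's output.
async function processEpisode(audioUrl: string) {
  const transcript = await transcribe(audioUrl);
  const entities = await extractEntities(transcript);
  const annotations = await Promise.all(entities.map(lookupWikipedia));
  return { transcript, annotations };
}
```

The resulting transcript and annotations are what get stored in the Firebase Realtime Database and served to the reader.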
Challenges we ran into
We ran into time constraints, and into hurdles chaining many APIs in series to produce a presentable, deliverable result. We also ran into scope challenges, wanting to accomplish so much in such a small amount of time.
Accomplishments that we're proud of
We are proud of having delivered a functional demo with a full backend and a working database. We made something we did not think possible in less than 20 hours!
What we learned
We learned how to run many APIs in series, and tackled numerous implementation challenges while trying to deliver a product that is both polished and efficient.
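Running APIs in series here means awaiting each call before starting the next, because each step's input depends on the previous step's output. A tiny sketch of the pattern, with hypothetical placeholder steps standing in for the real API calls:

```typescript
// Two placeholder "API" steps; stepTwo needs stepOne's output, so the
// calls cannot run in parallel and must be awaited in order.
async function stepOne(input: string): Promise<string> {
  return input.toUpperCase();
}

async function stepTwo(input: string): Promise<string> {
  return `${input}!`;
}

async function runInSeries(input: string): Promise<string> {
  const first = await stepOne(input);  // first API call
  const second = await stepTwo(first); // second call consumes the result
  return second;
}
```

Independent calls, by contrast, can be batched with `Promise.all`, which is how we handled lookups that did not depend on each other.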
What's next for hear: podcasts made accessible
We would like to integrate live transcription and language translation, as well as build community features that connect people.