We looked at the challenges on http://www.healthhack.com.au/ and found a need for a subtitling app to improve the cinema experience for deaf people.
The idea involved:
- nudging the subtitle text forwards or backwards
- syncing automatically using some form of voice recognition
- overlaying the subtitle text on a live video feed via an AR system
- loading subtitles in other languages so the film can also be enjoyed by non-deaf people of other nationalities
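The nudge feature from the list above amounts to shifting every subtitle cue by a user-controlled offset. Here is a minimal Python sketch of that idea; the `Cue`, `nudge`, and `active_cue` names are hypothetical illustrations, not code from the project:

```python
from dataclasses import dataclass

@dataclass
class Cue:
    start: float  # seconds from film start
    end: float
    text: str

def nudge(cues, offset):
    """Shift every cue by `offset` seconds (positive = subtitles appear later)."""
    return [Cue(c.start + offset, c.end + offset, c.text) for c in cues]

def active_cue(cues, t):
    """Return the text to display at playback time t, or None if no cue is active."""
    for c in cues:
        if c.start <= t < c.end:
            return c.text
    return None

cues = [Cue(1.0, 2.5, "Hello."), Cue(3.0, 4.0, "Goodbye.")]
shifted = nudge(cues, 0.5)       # user taps "later": everything moves 0.5 s
print(active_cue(shifted, 1.2))  # None - the first cue now starts at 1.5 s
print(active_cue(shifted, 1.6))  # Hello.
```

In a real app the offset would be bound to a pair of on-screen buttons so the viewer can correct drift during the screening.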
## What it does
We didn't have a chance to finish the project, but the main idea was to use audio fingerprinting (python-dejavu) to find the exact point in the movie, then display the relevant subtitles.
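Assuming the fingerprinter reports how many seconds into the film the sampled audio occurs (dejavu's actual API is not shown here), the remaining work is to track the current position and look up the right cue. A rough sketch, with all names hypothetical:

```python
import bisect
import time

def position_now(recognized_offset, recognized_at):
    # The fingerprinter matched a microphone sample occurring
    # `recognized_offset` seconds into the film; add the time elapsed
    # since the sample was captured to estimate the current position.
    return recognized_offset + (time.monotonic() - recognized_at)

def cue_at(starts, texts, t):
    # Binary-search the last cue starting at or before time t.
    # `starts` must be sorted ascending; returns None before the first cue.
    i = bisect.bisect_right(starts, t) - 1
    return texts[i] if i >= 0 else None

starts = [0.0, 2.0, 5.0]
texts = ["First line", "Second line", "Third line"]
print(cue_at(starts, texts, 3.1))  # Second line
```

Re-running the fingerprint match every so often would correct any drift between the phone's clock and the projector.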
## What we learned
We learned how audio fingerprinting works and how to install Docker.
Ana learned how to measure the loudness of sound to trigger an action (e.g. making the phone vibrate).
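One common way to measure loudness (not necessarily the one Ana used) is the root-mean-square amplitude of a block of samples, compared against a threshold:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a block of samples (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def should_vibrate(samples, threshold=0.3):
    """Trigger an action (e.g. phone vibration) when the block is loud enough."""
    return rms(samples) >= threshold

quiet = [0.01] * 1024
loud = [0.5, -0.5] * 512
print(should_vibrate(quiet))  # False
print(should_vibrate(loud))   # True
```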
## What's next for SubSync
Actually build it!