Originally, we were reluctant to do any hacking of our own, but upon seeing the opportunity to use Microsoft's HoloLens, we became inspired to help contribute to the future of this new technology.
What it does
SubtitlesIRL uses Cortana's speech-to-text functionality together with the HoloLens to display real-life subtitles, captioning people in real time as they speak. This app has countless uses, such as aiding those with hearing disabilities, second-language learners, and those who hate the squeals of humans.
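The core loop is simple: the speech engine emits text as it hears speech, and we pipe that text into a floating label in the HoloLens scene. Below is a minimal sketch of that idea, assuming Unity's `DictationRecognizer` (which uses the same Windows speech-to-text engine as Cortana); names like `LiveCaptioner` and `captionText` are illustrative, not our exact implementation.

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Windows.Speech;

public class LiveCaptioner : MonoBehaviour
{
    public Text captionText;              // world-space UI text floating near the speaker
    private DictationRecognizer recognizer;

    void Start()
    {
        recognizer = new DictationRecognizer();

        // Partial guesses arrive while the person is still talking,
        // which is what makes the subtitles feel real-time.
        recognizer.DictationHypothesis += (text) => captionText.text = text;

        // Finalized phrases replace the hypothesis once the engine commits.
        recognizer.DictationResult += (text, confidence) => captionText.text = text;

        recognizer.Start();
    }

    void OnDestroy()
    {
        if (recognizer != null)
        {
            recognizer.Dispose();
        }
    }
}
```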
How we built it
Challenges we ran into
We spent the large majority of the hackathon trying to figure out why our speech-to-text parser was performing sporadically, often outright failing in situations identical to ones that had just worked. We debugged throughout the process, making minor changes across the project and recording each step, and eventually realized that without stable wifi our search for the root of all error was heavily delayed, akin to climbing an Escher staircase. The reason is as follows: SubtitlesIRL uses Microsoft's speech-to-text recognition, and this API must talk to Microsoft's servers to perform the transcription.
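In hindsight, a cheap guard would have pointed us at the wifi much sooner. Here is a hypothetical sketch (not code from our project) that checks network reachability before starting dictation and logs recognizer errors, so a connectivity failure announces itself instead of masquerading as a flaky parser:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class DictationGuard : MonoBehaviour
{
    void Start()
    {
        // DictationRecognizer talks to Microsoft's servers, so with the
        // network down it fails in confusing, intermittent ways.
        if (Application.internetReachability == NetworkReachability.NotReachable)
        {
            Debug.LogWarning("No network: speech-to-text will fail until wifi is stable.");
            return;
        }

        var recognizer = new DictationRecognizer();
        recognizer.DictationError += (error, hresult) =>
            Debug.LogError("Dictation error (check connectivity): " + error);
        recognizer.Start();
    }
}
```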
Accomplishments that we're proud of
We are quite proud of our tenacity throughout the build process. As mentioned above, we ran into reliability issues early in our project's timeline and weren't able to make any headway in that regard until almost 24 hours into the hackathon.
What we learned
This project opened our eyes to the potential the HoloLens has to offer, and made us quite excited about the future of AR. It also introduced us to the Unity ecosystem and C#.
What's next for SubtitlesIRL
Future applications for SubtitlesIRL include real-time language translation using the Bing or Cortana translator APIs (where Cortana beats out Google for languages like Chinese), and machine learning to adapt speech recognition to individual speakers for more detailed transcripts.