As students, it was difficult to take detailed notes in class while also following along with what the professor was explaining. Recording some classes let us pay more attention to the content of the lecture, so we thought an application that does the transcription for us automatically would be a suitable project.
What it does
We built a system that takes a FLAC/WAV file as input and transcribes the speech in it into text, giving us not only a transcript of a meeting or lecture, but also how many people spoke in it and who said what. We also show some analytics, such as who speaks the most in the conversation, along with the agenda of the meeting.
How I built it
Shivangi - I used the IBM Bluemix Speech to Text API to transcribe the dialogue in a conversation into textual form. I also helped set up the front end of the website.
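As a sketch of the transcription step: when the IBM Speech to Text service is called with `timestamps=True` and `speaker_labels=True`, the JSON response contains per-word `[word, start, end]` timestamps plus a `speaker_labels` list of segments. The exact response shape below is an assumption based on that API; the helper names (`words_by_speaker`, `transcript_lines`) are ours.

```python
def words_by_speaker(response):
    """Pair each transcribed word with a speaker id.

    Assumes the IBM Speech to Text response shape when requested with
    timestamps=True and speaker_labels=True: per-word [word, start, end]
    timestamps, plus speaker_labels entries like
    {"from": start, "to": end, "speaker": id}.
    """
    # Map each labeled segment's start time to its speaker id.
    speaker_at = {lbl["from"]: lbl["speaker"]
                  for lbl in response.get("speaker_labels", [])}
    labeled = []
    for result in response["results"]:
        for word, start, _end in result["alternatives"][0]["timestamps"]:
            labeled.append((speaker_at.get(start), word))
    return labeled

def transcript_lines(labeled):
    """Collapse consecutive words from the same speaker into dialogue lines."""
    runs = []
    for speaker, word in labeled:
        if runs and runs[-1][0] == speaker:
            runs[-1][1].append(word)
        else:
            runs.append((speaker, [word]))
    return ["Speaker {}: {}".format(s, " ".join(ws)) for s, ws in runs]
```

The speaker-labeled word list also gives us the raw counts behind the "who speaks the most" analytics for free.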
Sandeep - I worked on the ML and web-dev parts of the project, and on extracting prosody features from the speech to recognize the speaker.
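To illustrate what "prosody features" means here, a minimal toy sketch of two common cues per audio frame, RMS energy and an autocorrelation-based pitch estimate (real pipelines would typically lean on a library such as librosa or openSMILE instead):

```python
import numpy as np

def prosody_features(frame, sr):
    """Return (RMS energy, pitch estimate in Hz) for a mono audio frame."""
    frame = frame - frame.mean()
    rms = float(np.sqrt(np.mean(frame ** 2)))
    # Autocorrelation: the strongest peak within the human pitch range
    # gives the fundamental period in samples.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(sr / 400), int(sr / 60)  # ~60-400 Hz search band
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return rms, sr / lag
```

Per-frame features like these, tracked over time, are what lets a classifier tell one voice apart from another.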
Grishma - I worked on parts of the frontend and backend, which involved using Django. I was also responsible for finding the most frequently used words to build the word cloud.
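The frequent-word step can be sketched as simple stopword-filtered counting over the transcript; the tiny stopword list here is illustrative only (a real build would use a fuller list, such as the ones shipped with NLTK or the wordcloud package):

```python
import re
from collections import Counter

# Illustrative stopword list; swap in a full list for real use.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "we", "i"}

def word_frequencies(transcript, top_n=50):
    """Count content words in a transcript, ready to render as a word cloud."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return dict(counts.most_common(top_n))
```

The resulting dict can be handed to `WordCloud.generate_from_frequencies` from the `wordcloud` package to draw the actual cloud.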
Challenges I ran into
It was difficult to come up with an idea for a product. Since all of us come from a data background, we ended up experimenting with different kinds of datasets. Another challenge was that, after we each finished our respective work, we had to spend significant time merging everything into a single project.
Accomplishments that I'm proud of
Since this was the first hackathon for half of the team, we're proud of the fact that we were able to make a successful submission.
What's next for not.r
We'd like to add session management to the site, so that we can keep track of a user's different sessions. We could also sync it up with Google Calendar to start recording as soon as a meeting begins. It would also be interesting to explore applications of this for persons with disabilities.