Inspiration
Nobody wants the chore of writing boring meeting minutes
Empower people with hearing impairments to actively take part in meetings where decisions are made
What it does
Real-time speech-to-text from up to two audio devices simultaneously
Summarization of the transcripts
Extraction and highlighting of action items
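The action-item step can be pictured with a minimal sketch. This is purely illustrative and not the project's actual pipeline: it assumes a simple keyword-cue approach, and `extract_action_items` is a hypothetical helper name.

```python
import re

# Hypothetical keyword-based action-item detector (illustrative only;
# the real extraction logic is not shown in this write-up).
ACTION_CUES = re.compile(
    r"\b(will|should|must|needs? to|to-?do|action item)\b", re.IGNORECASE
)

def extract_action_items(transcript_lines):
    """Return the transcript lines that look like action items."""
    return [line for line in transcript_lines if ACTION_CUES.search(line)]
```

For example, `extract_action_items(["Alice will send the report by Friday.", "We chatted about lunch."])` keeps only the first line.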
How I built it
The core is in Python, with PyTorch, Kivy, gRPC, and the Azure/Google Cloud clients. The newer GUI is built with Electron.
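The two-device capture can be sketched as one thread per device feeding a shared queue that a single consumer drains. This is a simplified assumption of the architecture, not the actual code: `read_chunks` and `transcribe` are hypothetical stand-ins for the real audio capture and cloud speech-to-text calls.

```python
import queue
import threading

def capture(device_name, read_chunks, transcribe, out_queue):
    """Transcribe chunks from one device, tagging each result with the device."""
    for chunk in read_chunks():
        out_queue.put((device_name, transcribe(chunk)))
    out_queue.put((device_name, None))  # sentinel: this device is done

def run_devices(devices, read_chunks, transcribe):
    """Run one capture thread per device; yield (device, text) pairs."""
    q = queue.Queue()
    threads = [
        threading.Thread(target=capture, args=(name, read_chunks, transcribe, q))
        for name in devices
    ]
    for t in threads:
        t.start()
    done = 0
    while done < len(devices):
        device, text = q.get()
        if text is None:
            done += 1  # one device finished
        else:
            yield device, text
    for t in threads:
        t.join()
```

With a stub transcriber, `list(run_devices(["mic", "loopback"], lambda: ["hello"], str.upper))` interleaves tagged transcripts from both devices.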
Challenges I ran into
Working 15 hours a day for 9 days is hard...
I'm outside my comfort zone with modern JS (Electron), so I tried to learn as much as possible, as fast as possible.
Accomplishments that I'm proud of
The prototype works; I can actually use it.
What I learned
Handling audio devices (real or virtual) isn't easy
How to use parts of Google Cloud and Microsoft Azure
Updated my knowledge of modern JS
What's next for AutoMinutes
Replace the speech-to-text cloud providers with an Idiap module
Make the Electron interface look modern
Evaluate the possibility of cross-platform deployment