What it does
How we built it
It was built using several IDEs, with PyCharm being used predominantly.
Challenges we ran into
We first attempted to run a Python Flask server, but problems arose with multi-way communication, and it proved too complex for such a short period of time. We fixed this by switching to a semi-independent front end and back end. Another problem we encountered was parsing the JSON files provided by Watson. We fixed this by printing the returned structure and redirecting stdout to a file, which could then be renamed to a JSON file.
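The stdout-redirection workaround can also be done directly with Python's json module, since the structure Watson returns is an ordinary dict. A minimal sketch, assuming a response in Watson Speech to Text's "results"/"alternatives" shape (the watson_response dict here is a made-up example, not real output):

```python
import json

# Hypothetical example of the structure returned by Watson Speech to Text;
# real responses follow this "results"/"alternatives" shape.
watson_response = {
    "results": [
        {"alternatives": [
            {"transcript": "hello world ",
             "timestamps": [["hello", 0.0, 0.4], ["world", 0.5, 0.9]]}
        ]}
    ]
}

# Serialize the structure straight to a JSON file instead of redirecting stdout.
with open("transcript.json", "w") as f:
    json.dump(watson_response, f, indent=2)

# Read it back and pull out the transcript text.
with open("transcript.json") as f:
    data = json.load(f)

transcript = "".join(
    alt["transcript"]
    for result in data["results"]
    for alt in result["alternatives"]
)
print(transcript)  # "hello world "
```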
Accomplishments that we're proud of
What we learned
What's next for AudioToTextKeywordSearch
There are a lot of ideas we would like to implement for this project. One would allow the user to submit an audio file through the HTML page to be read by the IBM Watson API. This would make the project more dynamic and versatile, because the user currently has to place the audio files in the back end. Another future improvement would be a heat map generated when the user searches for a word. This heat map would display how frequently the word was spoken, highlighting areas of the speech that used it more often. This was originally part of the scope, but due to challenges and new technology it could not be implemented in time. It would, however, let the user find the area of the speech they are looking for more efficiently, based on where the keyword appears most frequently.
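The heat-map idea could be built on the word timestamps Watson already returns, by bucketing the audio into fixed-length windows and counting keyword hits per window. A rough sketch under that assumption (the timestamps list and keyword_heat helper are hypothetical, not part of the current project):

```python
from collections import Counter

# Hypothetical word timings in Watson's [word, start_sec, end_sec] format.
timestamps = [
    ["budget", 2.1, 2.5], ["plan", 10.0, 10.4], ["budget", 31.6, 32.0],
    ["budget", 33.2, 33.7], ["growth", 58.9, 59.3], ["budget", 61.0, 61.5],
]

def keyword_heat(timestamps, keyword, bucket_seconds=30):
    """Count keyword occurrences per time bucket, giving a simple heat profile."""
    counts = Counter()
    for word, start, _end in timestamps:
        if word.lower() == keyword.lower():
            counts[int(start // bucket_seconds)] += 1
    return dict(counts)

# Bucket 0 covers 0-30s, bucket 1 covers 30-60s, and so on.
heat = keyword_heat(timestamps, "budget")
print(heat)  # {0: 1, 1: 2, 2: 1}
```

The bucket counts could then be rendered in the front end as colored bands over the playback timeline, with darker bands where the count is higher.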