One of the suggested projects was 'Talk it Out', and it seemed very interesting, especially the voice and feature recognition.

What it does

The project records somebody speaking and transcribes the audio. The transcript is then fed into a model that picks out key words and classifies the emotions expressed by the speaker. In parallel, the audio itself is analysed to detect emotions based on its tone and pitch.

The results from the two models are then merged to produce an overall emotion for the audio file.
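The merge step could look something like the sketch below. The emotion labels, the per-model confidence scores, and the 50/50 weighting are assumptions for illustration; the project's actual merging logic may differ.

```python
# Minimal sketch of merging the text-based and audio-based emotion scores.
# Labels, scores, and the equal weighting are illustrative assumptions.

def merge_emotions(text_scores, audio_scores, text_weight=0.5):
    """Blend two {emotion: score} dicts and return the top overall emotion."""
    combined = {}
    for emotion in set(text_scores) | set(audio_scores):
        combined[emotion] = (text_weight * text_scores.get(emotion, 0.0)
                             + (1 - text_weight) * audio_scores.get(emotion, 0.0))
    return max(combined, key=combined.get)

# Example: the text model leans 'joy', the audio model leans 'anger'
text_scores = {"joy": 0.7, "anger": 0.2, "sadness": 0.1}
audio_scores = {"joy": 0.3, "anger": 0.6, "sadness": 0.1}
print(merge_emotions(text_scores, audio_scores))  # -> joy
```

A weighted average keeps the merge simple and lets you tune how much to trust the transcript versus the raw audio.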

How we built it

For speech-to-text we used Google Speech Recognition. The resulting text was then classified into emotions using the IBM Watson API. Emotion detection based on tone analysis was done using openSMILE. A web app was also developed for a friendly user interface.
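openSMILE extracts low-level acoustic features (e.g. pitch and energy statistics) rather than emotions directly, so some classifier or heuristic has to map those features to a label. A toy stand-in for that mapping, with invented feature names (`pitch_mean`, `energy`) and made-up thresholds; the project's real tone model is not described here:

```python
# Toy heuristic mapping acoustic features to a coarse emotion label.
# Feature names and thresholds are invented for illustration; in
# practice a trained classifier would sit on top of openSMILE output.

def tone_emotion(features):
    """features: dict with 'pitch_mean' (Hz) and 'energy' (0..1)."""
    pitch = features.get("pitch_mean", 0.0)
    energy = features.get("energy", 0.0)
    if pitch > 220 and energy > 0.6:
        return "joy"        # high pitch + high energy
    if pitch <= 220 and energy > 0.6:
        return "anger"      # low pitch + high energy
    if energy <= 0.3:
        return "sadness"    # flat, low-energy speech
    return "neutral"

print(tone_emotion({"pitch_mean": 250, "energy": 0.8}))  # -> joy
```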

Challenges we ran into

Getting the different APIs working together was quite challenging. Plus, since no one on the team had worked with speech recognition before, getting the project started was a bit slow.

Accomplishments that we're proud of

We are proud to have a fully working program that can classify the emotions expressed by a speaker.

What we learned

Working with different APIs and learning about speech recognition and processing was very interesting.

What's next for Talkie_Walkie_for_Mental_Health

Stay tuned to find out.
