Inspiration
We were inspired to pursue this topic because most projects work with what a person says, not with how the person actually feels while speaking.
What it does
It analyzes a person's voice and determines their emotional state from the inflections in their speech.
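The writeup does not include the project's code, but one common inflection cue for this kind of emotion detection is pitch (fundamental frequency), since emotional speech often raises or varies F0. As a minimal, hedged sketch (the function and variable names here are illustrative, not the project's actual implementation), a pitch estimate can be computed from raw samples with autocorrelation:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of a frame of audio
    samples by finding the autocorrelation peak in the lag range
    corresponding to typical human speech pitch (fmin..fmax)."""
    n = len(samples)
    # Remove the DC offset so correlation peaks reflect periodicity only.
    mean = sum(samples) / n
    x = [s - mean for s in samples]
    lag_min = int(sample_rate / fmax)   # smallest lag = highest pitch
    lag_max = int(sample_rate / fmin)   # largest lag = lowest pitch
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        corr = sum(x[i] * x[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Sanity check: synthesize a 220 Hz tone and recover its pitch.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr // 10)]
pitch = estimate_pitch(tone, sr)
```

In a full pipeline, per-frame statistics of features like this (pitch mean and variance, energy, MFCCs) would be fed to a classifier trained on labeled emotional speech.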
Challenges we ran into
Many of the modules used in this project were incompatible with one another, which caused numerous dependency errors.
Accomplishments that we're proud of
After extensive experimentation with different models, our accuracy peaked at a little over 90%.
What we learned
We gained a deeper understanding of extracting information from an audio file and applying it to real-life use cases.
What's next for Emotion Detector Using Voice as an Input
This model could be integrated into voice assistants to predict the user's emotions and recommend information based on their mood, allowing for a more personalized experience with such assistants. It could also be applied to interrogations and screening.