Inspiration: I was mainly inspired to participate in this hackathon by the voice emotion recognition theme in AI/ML. I know what it is like to be unable to express yourself clearly, and the confusion that causes for others. So I decided to create an app that addresses this problem by helping people express themselves clearly and confidently.

What it does: It analyzes your voice to detect emotions, offering feedback on vocal tone and suggestions for improvement.

How we built it: We built the app using Hugging Face for emotion recognition and Gradio for the interactive interface, with Python for backend processing.
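The wiring described above can be sketched roughly as follows. This is a minimal illustration, not our exact code: the specific Hugging Face checkpoint (`superb/hubert-large-superb-er`), the helper names, and the feedback wording are all assumptions for the example.

```python
def summarize_emotions(scores):
    """Turn a {label: score} mapping from the classifier into a short
    feedback message (hypothetical helper, not from the project)."""
    top = max(scores, key=scores.get)
    return f"Dominant emotion: {top} ({scores[top]:.0%} confidence)"

def build_app():
    # Heavy dependencies are imported lazily so the helper above can be
    # used and tested without them installed.
    import gradio as gr
    from transformers import pipeline

    # Assumed emotion-recognition checkpoint; the write-up does not name one.
    clf = pipeline("audio-classification",
                   model="superb/hubert-large-superb-er")

    def analyze(audio_path):
        # The pipeline returns a list of {"label": ..., "score": ...} dicts.
        results = clf(audio_path)
        scores = {r["label"]: r["score"] for r in results}
        return summarize_emotions(scores)

    # Gradio records audio from the microphone and passes a file path in.
    return gr.Interface(fn=analyze,
                        inputs=gr.Audio(type="filepath"),
                        outputs="text")

if __name__ == "__main__":
    build_app().launch()
```

Running the script launches a local Gradio page where a user records a clip and receives the feedback string back.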

Challenges we ran into: Integrating accurate speech-to-text transcription and real-time emotion analysis proved to be technically demanding.

Accomplishments that we're proud of: Successfully developed a working prototype that provides real-time feedback on emotional expression from voice recordings.

What we learned: We learned how to integrate emotion detection models, manage audio input, and build interactive user interfaces.

What's next for Aashay: We plan to refine the emotion recognition algorithm, improve speech-to-text accuracy, and expand the app's capabilities for personalized feedback.
