Inspiration
We talk every day, but how we say something often matters as much as what we say. People express feelings through tone, speed, rhythm, and pauses, yet most communication and mental-health apps ignore these non-verbal cues. We apply data science to make that information visible and help people communicate more smoothly.
What it does
Listen beyond words, make emotions visible, and provide support.
How we built it
We built HearU with React and Flask. Users upload audio, which is analyzed on the backend, where our Python code converts speech into insights and stores them in MongoDB; the frontend then displays transcripts, word clouds, and emotional insights.
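As one example of turning speech into insights, here is a minimal sketch of how word-cloud data might be derived from a transcript on the backend. The function name, stop-word list, and return shape are illustrative assumptions, not HearU's actual implementation; the resulting (word, count) pairs are the kind of structure that could be stored in MongoDB and rendered by the frontend.

```python
from collections import Counter
import re

# Illustrative subset of stop words to exclude from the word cloud.
STOP_WORDS = {"the", "a", "an", "and", "or", "but", "i", "you", "it",
              "is", "are", "was", "to", "of", "in", "that", "this"}

def word_cloud_counts(transcript: str, top_n: int = 20) -> list[tuple[str, int]]:
    """Tokenize a transcript and return its most frequent content words.

    Hypothetical helper: the (word, count) pairs could be stored alongside
    the transcript and rendered as a word cloud on the frontend.
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

# Example: a short transcript produces ranked word frequencies.
sample = "I feel happy today, really happy, and the weather is great."
print(word_cloud_counts(sample, top_n=3))
# → [('happy', 2), ('feel', 1), ('today', 1)]
```

Keeping the analysis output as simple ranked pairs makes it easy to serialize into MongoDB documents and send to the React frontend as JSON.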
Challenges we ran into
How do we best capture the user's meaning, and what is the most intuitive way to present the output?
Accomplishments that we're proud of
A friendly UI and easy-to-understand insights converted from conversations.
What we learned
Always put yourself in the users' shoes.
What's next for HearU
Refine HearU into a more polished product and bring it to more people.