About Emotion-Aware Virtual Listener

Inspiration

The Emotion-Aware Virtual Listener was inspired by the need for accessible mental health support. I wanted to create a web app for a hackathon (July 2025) that listens to users’ emotions and responds empathetically, using technology to foster connection.

What it does

The app transcribes user speech (via a mock transcriber or the Web Speech API), displays the transcript, and provides empathetic responses. It features a microphone toggle, a transcript area, and a response section, all styled for a modern, user-friendly experience.
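The response logic itself isn't shown here; as an illustration, a simple keyword-based picker like the following could produce the empathetic replies (the function name, keyword map, and reply strings are all hypothetical, not the app's actual wording):

```javascript
// Hypothetical keyword-to-response map; the real app's replies may differ.
const RESPONSES = [
  { keywords: ["sad", "down", "lonely"], reply: "That sounds really hard. I'm here with you." },
  { keywords: ["stressed", "anxious", "worried"], reply: "It makes sense to feel that way. Take a slow breath." },
  { keywords: ["happy", "excited", "great"], reply: "That's wonderful to hear! Tell me more." },
];

// Pick an empathetic reply based on words found in the transcript.
function pickResponse(transcript) {
  const text = transcript.toLowerCase();
  for (const { keywords, reply } of RESPONSES) {
    if (keywords.some((k) => text.includes(k))) return reply;
  }
  // Neutral fallback when no keyword matches.
  return "Thank you for sharing. I'm listening.";
}
```

A real emotion-detection step (see "What's next") would replace the keyword scan with an NLP API call.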

How we built it

Setup: Initialized a React app with npx create-react-app, then installed Tailwind CSS and Lucide Icons:

    npm install --save-dev tailwindcss postcss autoprefixer
    npm install lucide-react
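After installing, Tailwind needs a config file pointing at the app's source. A minimal tailwind.config.js for a Create React App project might look like this (a sketch of the standard generated file, not necessarily the project's exact config):

```javascript
/** @type {import('tailwindcss').Config} */
module.exports = {
  // Scan CRA's source files and HTML shell for Tailwind class names.
  content: ["./src/**/*.{js,jsx}", "./public/index.html"],
  theme: {
    extend: {},
  },
  plugins: [],
};
```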

Frontend: Built components (MicButton.jsx, TranscriptDisplay.jsx, ResponseArea.jsx) using React hooks (useState, useEffect). Styled with Tailwind CSS:

    className="min-h-screen bg-gradient-to-br from-blue-50 to-purple-50"
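The components themselves aren't listed in full; as a rough, framework-free model of the state those hooks would hold (a listening flag plus the accumulated transcript — all names here are hypothetical):

```javascript
// Minimal model of the UI state that useState would hold (hypothetical shape).
function createListenerState() {
  return { listening: false, transcript: "" };
}

// Toggle the mic; starting a new session clears the old transcript.
function toggleMic(state) {
  const listening = !state.listening;
  return { listening, transcript: listening ? "" : state.transcript };
}

// Append a recognized chunk to the transcript, but only while listening.
function addChunk(state, chunk) {
  if (!state.listening) return state;
  return { ...state, transcript: (state.transcript + " " + chunk).trim() };
}
```

In the React components, these transitions would be wired through useState setters, with useEffect starting and stopping recognition as the listening flag changes.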

Speech-to-Text: Implemented mock transcription with setTimeout, later added the Web Speech API for real-time transcription.

GitHub: Pushed the project to GitHub with:

    git add .
    git commit -m "Initial commit"
    git push -u origin main
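The mock and real transcription paths described above could sit behind a single helper, sketched below (the helper name and canned phrase are hypothetical; the real mock used setTimeout for delay, simplified here to a synchronous callback):

```javascript
// Returns an object with a start() method. Mock mode delivers a canned
// transcript immediately; the real path uses the browser's Web Speech API.
function createRecognizer({ mock, onResult }) {
  if (mock) {
    return {
      start() {
        onResult("This is a mock transcript."); // canned phrase (hypothetical)
      },
    };
  }
  // Browser-only path: Chrome exposes the webkit-prefixed constructor.
  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new SpeechRecognition();
  recognition.continuous = true;
  recognition.interimResults = true;
  recognition.onresult = (event) => {
    // Concatenate all recognized chunks into one transcript string.
    let transcript = "";
    for (const result of event.results) {
      transcript += result[0].transcript;
    }
    onResult(transcript);
  };
  return {
    start() {
      recognition.start();
    },
  };
}
```

Keeping both paths behind one interface lets the UI stay identical whether the browser supports speech recognition or not.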

Challenges we ran into

EJSONPARSE: Fixed invalid package.json syntax.
react-scripts Error: Resolved missing react-scripts with npm install react-scripts@5.0.1.
Missing index.html: Created public/index.html to fix build errors.
Tailwind Error: Corrected TypeError: "" is not a function by fixing tailwind.config.js syntax.
Learning Curve: Navigated React and Tailwind as a beginner, debugging Webpack issues.

Accomplishments that we're proud of

Built a functional React app with a responsive, modern UI.
Overcame multiple build errors to achieve a stable local deployment (http://localhost:3000).
Successfully uploaded to GitHub, including a README.md for clarity.
Implemented mock speech-to-text, with a Web Speech API option for Chrome.

What we learned

React state management and component design.
Tailwind CSS for rapid styling.
Web Speech API for speech recognition.
Git workflows (git init, git push) and debugging npm errors.

What's next for Emotion-Aware Virtual Listener

Integrate real-time emotion detection via NLP APIs.
Deploy to Vercel for a live demo:

    npm run build
    vercel

Enhance accessibility and add user history tracking.
Expand to mobile with a React Native version.

Built With

JavaScript, React, Tailwind CSS, Lucide Icons, Web Speech API

