Elevator Pitch: A friendly web app that listens, understands, and guides users to mental health resources through voice or text input.

Inspiration

Mental health affects millions, yet many people don’t know where to turn when they feel anxious, stressed, or overwhelmed. I wanted to create something simple, accessible, and practical—a tool that could listen to someone’s voice or read their words and provide immediate guidance.

The idea came from real-life struggles people face in expressing emotions and seeking help. I wanted to combine technology and empathy to make mental health support approachable for everyone.

What I Learned

Building this project was a huge learning journey. I gained experience in:

Voice recognition in the browser using webkitSpeechRecognition, the prefixed Web Speech API implementation available in Chrome and Edge.

Dynamic content rendering with JavaScript to provide personalized suggestions.

Rule-based natural language processing for classifying user input into HIGH, MEDIUM, LOW, or NEUTRAL concern levels.

Front-end design principles to create a clean, responsive, and human-friendly interface.

Project workflow and deployment, preparing a live demo and demonstration video.

I also learned the importance of making every interaction feel intuitive and reassuring, especially in a mental health tool.

How I Built the Project

Setup: I built the project using HTML, CSS, and JavaScript, keeping it fully client-side so anyone can run it without installing extra software.

Voice & Text Input: Users can either speak or type their feelings. The voice input supports multiple languages, including English, Hindi, Urdu, Spanish, and French.
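A minimal sketch of how multi-language voice input can be wired up with the Web Speech API. The `LANGUAGES` map, the `startListening` helper, and the callback shape are illustrative assumptions, not the app's actual code; the BCP-47 language tags correspond to the languages listed above.

```javascript
// Map of display names to BCP-47 tags for recognition.lang.
// (Illustrative — the real app may use different codes or a <select> element.)
const LANGUAGES = {
  English: "en-US",
  Hindi: "hi-IN",
  Urdu: "ur-PK",
  Spanish: "es-ES",
  French: "fr-FR",
};

// Hypothetical helper: start recognition in the chosen language and pass
// the transcript (or an error) to a callback.
function startListening(languageName, onResult) {
  // webkitSpeechRecognition is the prefixed constructor Chrome and Edge expose.
  const Recognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  if (!Recognition) {
    onResult(null, new Error("Speech recognition not supported"));
    return null;
  }
  const recognition = new Recognition();
  recognition.lang = LANGUAGES[languageName] || "en-US";
  recognition.interimResults = false;
  recognition.maxAlternatives = 1;
  recognition.onresult = (event) => {
    // Transcript of the best alternative of the first (final) result.
    onResult(event.results[0][0].transcript, null);
  };
  recognition.onerror = (event) => onResult(null, new Error(event.error));
  recognition.start();
  return recognition;
}
```

Since recognition is asynchronous, the callback style lets the same handler feed either the voice transcript or the typed text into the classifier.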

Concern Classification: A rule-based keyword system classifies input into four levels: HIGH, MEDIUM, LOW, or NEUTRAL. Each level triggers different support suggestions tailored to the user’s emotional state.
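The rule-based classification can be sketched as a keyword lookup checked from most to least serious. The keyword lists below are examples for illustration, not the app's actual word lists.

```javascript
// Example keyword lists per concern level (illustrative only).
const KEYWORDS = {
  HIGH: ["hopeless", "panic", "self-harm"],
  MEDIUM: ["anxious", "stressed", "overwhelmed"],
  LOW: ["tired", "worried", "a bit down"],
};

// Classify free text into HIGH, MEDIUM, LOW, or NEUTRAL.
function classifyConcern(text) {
  const lower = text.toLowerCase();
  // Check the most serious level first so a HIGH keyword wins
  // even when MEDIUM or LOW keywords also appear.
  for (const level of ["HIGH", "MEDIUM", "LOW"]) {
    if (KEYWORDS[level].some((kw) => lower.includes(kw))) {
      return level;
    }
  }
  return "NEUTRAL";
}
```

Checking levels in priority order is what keeps an input like "I feel hopeless and anxious" classified as HIGH rather than MEDIUM.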

Dynamic UI: Results are displayed in a color-coded panel, making it easy to understand at a glance. Suggested resources rotate randomly so repeated inputs feel fresh.
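One way to sketch the color-coded panel and the rotating suggestions. The element id, colors, and helper names here are assumptions for illustration, not the app's real markup.

```javascript
// Illustrative level-to-color mapping for the result panel.
const LEVEL_COLORS = {
  HIGH: "#e74c3c",    // red
  MEDIUM: "#f39c12",  // orange
  LOW: "#f1c40f",     // yellow
  NEUTRAL: "#2ecc71", // green
};

// Pick n random suggestions from a pool so repeated inputs feel fresh.
function pickSuggestions(pool, n) {
  const shuffled = [...pool].sort(() => Math.random() - 0.5);
  return shuffled.slice(0, Math.min(n, pool.length));
}

// Render the level and suggestions into a color-coded panel.
// "result-panel" is an assumed element id.
function renderResult(level, suggestions) {
  const panel = document.getElementById("result-panel");
  panel.style.backgroundColor = LEVEL_COLORS[level];
  panel.innerHTML =
    `<strong>${level}</strong><ul>` +
    suggestions.map((s) => `<li>${s}</li>`).join("") +
    "</ul>";
}
```

A simple shuffle-and-slice is enough here; because the suggestion pools are small, a full Fisher-Yates shuffle is not strictly needed for the "feels fresh" effect.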

Polishing: I improved the layout, spacing, and readability, and added icons and a favicon to make the app visually appealing and professional.

Testing: The app was tested in Chrome and Edge, ensuring voice recognition, text input, and resource suggestions worked reliably.

Changes Made During Development

Added multi-language voice recognition.

Implemented randomized support suggestions to avoid repetition.

Enhanced UI readability and responsiveness for mobile and desktop.

Added color-coded result panels for instant visual feedback.

Optimized buttons, text areas, and spacing for a clean user experience.

Technologies Used

HTML5 & CSS3 — Structure and styling

JavaScript — Core logic, dynamic UI, voice recognition

Web Speech API — Voice input

VS Code — Development

OBS Studio — Demo recording

Key Takeaways

Built a fully functional, client-side web app with voice and text input.

Learned how to combine technology with empathy to support mental health.

Improved skills in JavaScript, DOM manipulation, and front-end design.

Delivered a clean, professional UI and a 3-minute demo video for judges.
