Inspiration

SentiSense was inspired by the need for real-time, empathetic support for those dealing with emotional distress. With 1 in 5 adults facing mental health challenges each year, many people struggle to find the support they need. We saw an opportunity to use AI to recognize emotions through speech and text—two key ways people express themselves. By offering compassionate, non-judgmental responses, SentiSense aims to help people feel heard and supported in moments of distress, improving mental health support through technology.

What it does

SentiSense is an AI-based platform that combines video- and speech-based emotion recognition with real-time empathetic responses. It transcribes speech to text to analyze the emotions behind spoken words, runs facial emotion recognition on video to gauge how the user is feeling, and responds with supportive text-based interactions.
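The supportive-response step described above can be sketched as a simple lookup from detected emotion to a compassionate reply. SentiSense generates responses with an AI model in practice; this template table is a simplified, illustrative stand-in, and the names (`empathetic_reply`, `RESPONSES`) are hypothetical.

```python
# Illustrative mapping from a detected emotion to a supportive reply.
# The real system generates responses with an AI model; this is a sketch.
RESPONSES = {
    "sad": "I'm sorry you're going through this. I'm here to listen.",
    "angry": "That sounds really frustrating. Your feelings are valid.",
    "fear": "It makes sense to feel anxious. Let's take it one step at a time.",
}
DEFAULT = "Thank you for sharing that with me. Tell me more about how you feel."

def empathetic_reply(emotion: str) -> str:
    """Pick a non-judgmental response for the detected emotion."""
    return RESPONSES.get(emotion, DEFAULT)

print(empathetic_reply("sad"))
```

The fallback reply matters: when the model is unsure, a neutral, validating prompt keeps the conversation open rather than guessing wrong.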

How we built it

Frontend: We built a simple, easy-to-use TypeScript interface where users can interact with the AI and get real-time emotional feedback.

Backend: Our backend is written in Python. We used Whisper for speech-to-text transcription and emotion recognition models such as DeepFace for facial analysis, combining their output with text-based emotion analysis.
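The backend wiring can be sketched as below. The model calls are injected as callables so the flow can be shown (and run) without loading Whisper or DeepFace; the helper name `analyze_turn` and the stub outputs are illustrative, not our actual API. In production the callables would wrap `whisper_model.transcribe(...)` and `DeepFace.analyze(...)`.

```python
import json

def analyze_turn(transcribe, detect_face_emotions, audio_path, frame):
    """Combine speech transcription with facial emotion scores for one turn."""
    # In production: text = whisper_model.transcribe(audio_path)["text"]
    text = transcribe(audio_path)
    # In production: DeepFace.analyze(frame, actions=["emotion"])[0]["emotion"]
    emotions = detect_face_emotions(frame)
    dominant = max(emotions, key=emotions.get)
    return {"text": text.strip(), "dominant_emotion": dominant, "scores": emotions}

# Stub models stand in for Whisper and DeepFace here.
result = analyze_turn(
    transcribe=lambda path: " I feel overwhelmed today. ",
    detect_face_emotions=lambda frame: {"sad": 0.7, "neutral": 0.2, "happy": 0.1},
    audio_path="turn_01.wav",
    frame=None,
)
print(json.dumps(result))
```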

Challenges we ran into

Speech-to-text accuracy: Getting accurate transcriptions in real time was tough, especially with background noise or varied accents.
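One way to cope with noisy audio is to filter low-confidence segments. Whisper's `transcribe()` result includes per-segment `avg_logprob` and `no_speech_prob` fields; the thresholds below are illustrative assumptions, not tuned values from our system.

```python
def filter_segments(segments, min_avg_logprob=-1.0, max_no_speech_prob=0.6):
    """Keep only segments the transcriber is reasonably confident about."""
    return [
        seg["text"].strip()
        for seg in segments
        if seg["avg_logprob"] >= min_avg_logprob
        and seg["no_speech_prob"] <= max_no_speech_prob
    ]

# Example segments shaped like Whisper's output:
segments = [
    {"text": " I had a rough day.", "avg_logprob": -0.3, "no_speech_prob": 0.1},
    {"text": " [unintelligible]", "avg_logprob": -1.8, "no_speech_prob": 0.2},
    {"text": " hiss", "avg_logprob": -0.5, "no_speech_prob": 0.9},
]
print(filter_segments(segments))  # → ['I had a rough day.']
```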

Emotion recognition: Getting the image- and text-based emotion recognition models to work together smoothly took a lot of fine-tuning.
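One common way to combine the two modalities is simple late fusion: weight and average each model's emotion probabilities, then take the top label. The weight and label set here are illustrative assumptions rather than our tuned configuration.

```python
def fuse_emotions(face_scores, text_scores, face_weight=0.6):
    """Weighted average of per-modality emotion probabilities; returns top label."""
    labels = set(face_scores) | set(text_scores)
    fused = {
        label: face_weight * face_scores.get(label, 0.0)
        + (1 - face_weight) * text_scores.get(label, 0.0)
        for label in labels
    }
    return max(fused, key=fused.get), fused

face = {"sad": 0.5, "neutral": 0.4, "happy": 0.1}
text = {"sad": 0.8, "neutral": 0.1, "happy": 0.1}
label, fused = fuse_emotions(face, text)
print(label)  # → sad
```

Weighting the facial signal slightly higher reflects the assumption that expressions are harder to mask than word choice; in practice the weight would be tuned on validation data.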

Frontend-backend integration: Connecting the frontend with the backend was tricky, especially with large amounts of data coming in from users.

Accomplishments that we're proud of

Real-time emotion analysis: We successfully built a system that can detect emotions from both speech and images in real time.

Empathy-driven interaction: We created a platform that provides users with a sense of being heard and understood through the AI’s responses.

Teamwork: Despite the tight timeline, our team worked together efficiently, each handling different parts of the project.

What we learned

Multimodal AI: We learned how to combine different AI models—like speech-to-text and emotion recognition—into one seamless system.

Frontend-backend integration: The project helped us understand how to connect a Python backend with a TypeScript frontend through APIs and WebSockets.
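The data crossing that WebSocket can be sketched as small JSON events the backend serializes for the frontend. The field names (`type`, `emotion`, `confidence`) are illustrative, not our exact wire format.

```python
import json

def make_emotion_event(text, emotion, confidence):
    """Serialize one analysis result for the TypeScript frontend."""
    return json.dumps({
        "type": "emotion_update",
        "text": text,
        "emotion": emotion,
        "confidence": round(confidence, 2),
    })

msg = make_emotion_event("I feel overwhelmed", "sad", 0.873)
event = json.loads(msg)
print(event["emotion"])  # → sad
```

Keeping a `type` field on every message lets the frontend dispatch on event kind as new message types are added.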

Time management: With limited time, we learned how to prioritize tasks and focus on what was most important for the project.

What's next for SentiSense

Improved accuracy: We want to improve the accuracy of the emotion recognition and expand its features to offer deeper sentiment analysis.

User personalization: We’re looking to add features that personalize the experience based on a user’s emotional history.

Broader access: We plan to make SentiSense accessible to more people, potentially through mobile apps or virtual assistants, to increase its impact.
