Inspiration
HeartEcho was born out of a deeply personal observation while working on my EMG-controlled prosthetic arm. I realized that traditional prosthetics do not respond to emotional fluctuations — a user under stress might apply unintended force, resulting in accidents or frustration. Emotional context matters in human movement, so I set out to build the first system that brings empathy to prosthetic calibration using real-time voice analysis.
What it does
HeartEcho is an emotional-response prosthetic calibration system that analyzes the user’s voice in real time, detects emotional states such as stress, frustration, or calmness, and automatically adjusts the grip sensitivity of a prosthetic hand. Using a clean, user-friendly interface built in Bolt.new, the system connects AI with hardware: Bolt handles the UI and logic, ElevenLabs detects emotional tone, and an ESP8266 microcontroller communicates with an Arduino to update the servo-controlled grip.
How we built it
- Frontend + Logic: Designed and developed entirely on Bolt.new, including UI components, emotion-based grip logic, session history, and assistant interactions.
- Voice AI: Integrated the ElevenLabs API to analyze voice pitch and infer the user’s emotional state.
- Hardware Integration: Used an ESP8266 to receive the grip percentage over Wi-Fi and pass it to an Arduino Nano, which drives the servo motors that adjust the prosthetic’s grip (see the Wi-Fi sketch after this list).
- Data Handling: Session logs are stored in Supabase, enabling scalability and record-keeping (see the logging sketch after this list).
- Tavus: Added an optional video-based AI tutor for usage guidance and encouragement.
- Deployment: Hosted via Netlify with a custom domain for public access, proudly displaying the “Built with Bolt.new” badge.
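
As a rough illustration of the Wi-Fi hand-off described above, the sketch below shows how the frontend could push a grip percentage to the ESP8266. The local IP address, the `/grip` endpoint, and the JSON payload shape are assumptions for illustration, not the actual firmware interface.

```javascript
// Minimal sketch: push a grip percentage (0-100) from the web UI to the ESP8266.
// The device IP and the /grip endpoint are assumed names, not the real firmware API.
const ESP_BASE_URL = "http://192.168.1.42"; // hypothetical local IP of the ESP8266

async function sendGrip(gripPercent) {
  // Clamp to the 0-100 range the firmware expects before sending.
  const grip = Math.min(100, Math.max(0, Math.round(gripPercent)));

  const response = await fetch(`${ESP_BASE_URL}/grip`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ grip }), // e.g. { "grip": 65 }
  });

  if (!response.ok) {
    throw new Error(`ESP8266 rejected grip update: ${response.status}`);
  }
}
```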
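For the session history, a logging call along these lines would work with supabase-js; the `sessions` table name and its columns are placeholders rather than the project’s actual schema.

```javascript
// Minimal sketch of session logging with supabase-js.
// The "sessions" table and its columns are assumed names, not the real schema.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://your-project.supabase.co", // placeholder project URL
  "public-anon-key"                   // placeholder anon key
);

async function logSession({ emotion, stressScore, gripPercent }) {
  const { error } = await supabase.from("sessions").insert({
    emotion,                    // e.g. "stressed", "calm"
    stress_score: stressScore,  // 0-1 score from the voice-analysis step
    grip_percent: gripPercent,  // value sent to the prosthetic
    created_at: new Date().toISOString(),
  });

  if (error) console.error("Failed to log session:", error.message);
}
```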
Challenges we ran into
- Syncing hardware and software: Real-time adjustments from Bolt to physical hardware via Wi-Fi required careful serial and JSON handling between ESP and Arduino.
- Emotion calibration: Mapping emotional stress levels to a meaningful grip sensitivity scale (0-100%) took repeated rounds of trial-and-error calibration (see the mapping sketch after this list).
- Voice ambiguity: Background noise and soft speech sometimes affected emotion detection accuracy, which we addressed using ElevenLabs fine-tuning.
- Time constraints: Integrating multiple APIs and hardware within the hackathon timeline demanded tight coordination.
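
One way to picture the calibration problem above is a mapping like the sketch below, where the stress score is assumed to be normalized to 0-1 and the grip limits are illustrative assumptions, not the tuned values we actually used.

```javascript
// Illustrative mapping from a normalized stress score (0 = calm, 1 = highly stressed)
// to a grip sensitivity percentage. The limits are example values only.
function stressToGrip(stressScore) {
  const s = Math.min(1, Math.max(0, stressScore)); // clamp to 0-1
  const MAX_GRIP = 90; // sensitivity when the user is fully calm
  const MIN_GRIP = 20; // floor so the hand never goes fully limp under stress

  // Linearly reduce sensitivity as stress rises.
  return Math.round(MAX_GRIP - (MAX_GRIP - MIN_GRIP) * s);
}

// e.g. stressToGrip(0.8) -> 34 (a softer, safer grip for a stressed user)
```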
Accomplishments that we're proud of
- Created a functional empathy-driven prosthetic system — likely a first in its category.
- Seamlessly integrated Bolt.new UI, ElevenLabs voice emotion detection, and real-world hardware in under a week.
- Delivered a calming, accessible UI with light/dark modes, AI guidance, and live session history.
- Designed a concept that’s potentially patentable, practical, and impactful.
What we learned
- How to harness emotional AI and apply it beyond software — into real, physical human-assistive technology.
- Real-time IoT control using ESP+Arduino with cloud-connected frontends like Bolt.new.
- Bolt.new’s platform enabled us to go from idea to polished, functional UI without traditional code-heavy delays.
- Integration of multiple APIs (voice, video, hardware, and database) is possible with proper planning and modular thinking.
What's next for HeartEcho
- Clinical Trials: We aim to test HeartEcho with actual prosthetic users to fine-tune grip adjustments based on real emotional feedback.
- Multilingual Support: Add emotion detection for multiple languages and dialects to expand accessibility.
- Mobile Version: Launch a mobile app version using RevenueCat for subscriptions and accessibility.
- Advanced AI Training: Use historical voice patterns to predict emotional states more accurately and proactively adjust prosthetic behavior.
- Open Hardware API: Enable developers to connect other devices or feedback loops into the HeartEcho system.
HeartEcho is not just assistive tech — it’s the beginning of emotionally intelligent robotics.
Built With
- bolt.new
- css
- elevenlabs
- esp8266 + arduino
- html
- javascript
- netlify
- supabase