💡 Inspiration
In a world full of noise, we rarely take the time to truly listen to ourselves.
Between busy schedules and digital distractions, our thoughts often fade into the background.
Echonami was born from a personal need to slow down: to create a space where your voice, your emotions, and your inner dialogue are heard, preserved, and transformed into something meaningful.
⚙️ What Echonami does
Echonami is a voice-first, introspective journaling app.
It allows you to speak freely about your day, your feelings, or your reflections.
In return, the app turns these raw thoughts into emotional audio narratives, played back by an AI-generated voice that mirrors your own.
The result?
A deeply personal, almost meditative experience where your voice echoes back, carrying new meaning, clarity, and calm.
🛠️ How I built it
I built Echonami solo using Bolt.new to accelerate the product development and prototyping process.
The stack includes:
- React + TypeScript (web frontend)
- Supabase (auth, storage, database)
- OpenAI Whisper (voice-to-text transcription)
- ElevenLabs (voice cloning and playback)
- Framer Motion (for micro-interactions)
- TailwindCSS (for clean, responsive design)
I carefully shaped the user experience to feel poetic rather than clinical, prioritizing emotional resonance, clarity, and softness across all devices.
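The core loop behind this stack (record, transcribe with Whisper, narrate with ElevenLabs) can be sketched roughly as follows. This is an illustrative sketch, not the actual Echonami source: the function names are mine, though the endpoints and model names are the public OpenAI and ElevenLabs ones.

```typescript
// Transcribe a recorded journal entry with OpenAI Whisper.
async function transcribe(audio: Blob, apiKey: string): Promise<string> {
  const form = new FormData();
  form.append("file", audio, "entry.webm");
  form.append("model", "whisper-1");
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });
  const data = await res.json();
  return data.text;
}

// Build the ElevenLabs text-to-speech request for a given voice.
// Kept pure so it can be inspected without making a network call.
function ttsRequest(text: string, voiceId: string, apiKey: string) {
  return {
    url: `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    init: {
      method: "POST",
      headers: { "xi-api-key": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
    },
  };
}

// Narrate the transcript back in the user's (cloned) voice.
async function narrate(text: string, voiceId: string, apiKey: string): Promise<Blob> {
  const { url, init } = ttsRequest(text, voiceId, apiKey);
  const res = await fetch(url, init);
  return res.blob();
}
```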
🧱 Challenges I faced
- Supabase Edge limitations forced me to switch from server-side to client-side voice generation.
- Keeping the UI responsive and smooth while handling real-time audio processing.
- Balancing minimal design with emotional depth and expressiveness.
- Time constraints during the final sprint to fine-tune animations and ensure cross-device stability.
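Keeping the UI smooth during real-time audio comes down to feeding the animation a cheap per-frame loudness signal rather than heavy processing on the render path. A common approach (an assumption on my part, not necessarily what Echonami ships) is root-mean-square amplitude over the latest sample window from a Web Audio `AnalyserNode`:

```typescript
// Compute RMS loudness for one window of audio samples, e.g. the buffer
// filled by AnalyserNode.getFloatTimeDomainData() each animation frame.
// The result (0..1 for normalized samples) can drive the mic ripple scale.
function rmsAmplitude(samples: Float32Array): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}
```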
🏆 What I'm proud of
- Successfully integrating ElevenLabs voice cloning with fallback behavior.
- Creating a design system that balances simplicity and emotion.
- Building a fully functional MVP with guest access, secure account creation, and voice journaling in under 10 days.
- Bringing the emotional tone of the app to life with animated mic input and ripple feedback.
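The fallback behavior around voice cloning boils down to a small decision: prefer the user's cloned voice, and fall back to a stock voice when cloning is unavailable. A minimal sketch of that logic, with hypothetical names and a placeholder stock-voice id (not a real ElevenLabs voice):

```typescript
interface VoiceChoice {
  voiceId: string;
  cloned: boolean;
}

// Pick which ElevenLabs voice to narrate with: the user's clone when it
// exists and cloning is still available, otherwise a stock voice.
function chooseVoice(
  clonedVoiceId: string | null,
  cloningUnavailable: boolean
): VoiceChoice {
  const STOCK_VOICE_ID = "stock-default"; // placeholder, not a real voice id
  if (clonedVoiceId && !cloningUnavailable) {
    return { voiceId: clonedVoiceId, cloned: true };
  }
  return { voiceId: STOCK_VOICE_ID, cloned: false };
}
```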
🌱 What I learned
- How to creatively navigate API and platform limitations under pressure.
- The importance of emotional UX, especially for wellness and mental health tools.
- That even small, personalized audio moments can leave a deep emotional impact on users.
🚀 What's next for Echonami
- A React Native version is in the works for journaling on the go.
- A system of daily prompts to guide introspection for first-time users.
- The core vision is to encourage users to create one Echo per day, then later revisit them as a retrospective: a capsule of that period.
- Users will be able to clone their own voice and choose the type of audio feedback they want (e.g. podcast-style, emotional tone, etc.).
Built With
- bolt
- elevenlabs
- netlify
- openai
- react
- supabase
- vite