Inspiration

I built Eliya to see if I could. I'd never written code, never deployed anything. But I wanted to prove I could go from nothing to a working product — by myself — using Bolt. No team, no shortcuts. Just the tools and a deadline.

What it does

Eliya is a voice-first app. You speak a thought. It transcribes what you said and uses GPT-4 to categorize it as an Action, Reminder, Reflection, or Let Go. That’s all. No advice. Just clarity, without friction.
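A minimal sketch of that categorization step, assuming a system prompt that asks GPT-4 for exactly one label plus a small parser that normalizes the reply — the prompt wording and the fallback-to-Reflection behavior are my assumptions here, not Eliya's actual logic:

```javascript
// The four labels Eliya uses, per the description above.
const CATEGORIES = ['Action', 'Reminder', 'Reflection', 'Let Go'];

// Hypothetical system prompt (illustrative wording, not the real one).
const systemPrompt =
  "Classify the user's spoken thought as exactly one of: " +
  CATEGORIES.join(', ') + '. Reply with the label only.';

// Map the model's raw reply onto one of the four categories,
// tolerating stray punctuation, casing, and whitespace.
function parseCategory(reply) {
  const cleaned = reply.trim().toLowerCase().replace(/[."']/g, '');
  const match = CATEGORIES.find((c) => cleaned === c.toLowerCase());
  // Assumption: anything the model returns that we can't parse
  // is treated as a Reflection rather than an error.
  return match ?? 'Reflection';
}
```

The strict "reply with the label only" instruction keeps the parser trivial; anything chattier from the model falls through to the fallback.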

How we built it

Everything was done in Bolt. I used the Web Speech API for voice input, GPT-4 Turbo for categorization, and Supabase to store thoughts. I built the full UI visually using Bolt’s canvas, with no code. I deployed it live using Netlify and added a custom domain through Entri.
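Roughly what the Web Speech API side of that looks like, as a sketch — the `collectFinalTranscript` helper and the plain-object result shape are illustrative, not Eliya's actual code:

```javascript
// Keep only finalized segments and join them into one transcript.
// (Results are plain objects mirroring SpeechRecognitionResult: { isFinal, transcript }.)
function collectFinalTranscript(results) {
  return results
    .filter((r) => r.isFinal)
    .map((r) => r.transcript.trim())
    .join(' ');
}

// Browser wiring — only runs where the Web Speech API actually exists.
if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  const recognition = new window.webkitSpeechRecognition();
  recognition.continuous = true;      // keep listening across pauses
  recognition.interimResults = true;  // stream partial results as the user speaks

  recognition.onresult = (event) => {
    const results = Array.from(event.results).map((r) => ({
      isFinal: r.isFinal,
      transcript: r[0].transcript, // take the top alternative
    }));
    const text = collectFinalTranscript(results);
    if (text) {
      // Hand off to categorization and storage from here.
      console.log('final transcript:', text);
    }
  };

  recognition.start();
}
```

Separating the pure helper from the browser wiring is also what makes this kind of flow debuggable outside the app itself.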

Challenges we ran into

Voice input worked, but the final transcript didn't always trigger submission. I had to debug race conditions and handle timing issues in Bolt logic. I tested silence detection and rewrote parts of the state flow to make it reliable.
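One way to make that kind of timing reliable is to pull the submit decision into a small, testable gate: submit once when a final result arrives, or when silence has lasted past a threshold. This is a sketch of the pattern — the `silenceMs` value and the injectable clock are my assumptions, not the actual Bolt logic:

```javascript
// A submission gate that fires at most once per utterance, either on a
// final recognition result or after a stretch of silence.
function createSubmissionGate({ silenceMs = 1500, now = Date.now } = {}) {
  let lastSpeech = now();
  let submitted = false;
  return {
    // Call whenever any speech event (including interim results) arrives.
    onSpeech() {
      lastSpeech = now();
    },
    // Call when the recognizer reports a final result.
    // Returns true exactly once: the signal to submit.
    onFinalResult() {
      if (submitted) return false;
      submitted = true;
      return true;
    },
    // Poll on an interval; returns true (once) if silence exceeded the threshold.
    checkSilence() {
      if (!submitted && now() - lastSpeech >= silenceMs) {
        submitted = true;
        return true;
      }
      return false;
    },
  };
}
```

Injecting `now` as a parameter means the race can be reproduced deterministically in a test instead of by talking at the microphone.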

Accomplishments that we're proud of

I got the whole thing working in less than a day. Voice input, GPT integration, Supabase, deploy — all live. I didn’t fake anything. I didn’t skip the hard parts. I learned how to fix real bugs inside a visual tool. That’s the part I’m proud of.

What we learned

Bolt can do more than people think. You can go end-to-end with AI, voice input, secure storage, and clean UI — without code. I learned how to design flow, debug logic, write AI prompts that don’t sound robotic, and get something out the door.

What's next for Eliya

The voice pipeline is working. The categories are working. The base is there. I’d like to refine the UX even more, bring in avatar stages later if it feels right, and let users reflect across time. But this version does what I wanted: it listens, it responds, and it helps you feel a little lighter.

Built With

Bolt, Web Speech API, GPT-4 Turbo, Supabase, Netlify, Entri
