About the Project
QuietCompanion™ is a voice-first accessibility platform designed to help people think out loud — and actually feel heard. It's built for blind users, elderly folks, neurodivergent people, or anyone who just needs a gentler way to interact with AI.
This project was born out of a simple truth: most apps are too loud, too fast, or too hard to use if you’re not already comfortable with technology. We wanted to make something different — something quiet, helpful, and emotionally aware.
Inspiration
We built QuietCompanion during the Bolt.new Hackathon, inspired by the accessibility and inclusivity track. Our team has real-life ties to people who are blind, nonverbal, elderly, or navigating grief and neurodivergence. So this wasn’t just a feature checklist — it was personal.
We asked:
– What if AI could softly help people with memory, grief, or journaling?
– What if the interface expected you to talk to it — and responded with real warmth?
– What if your voice mattered more than your clicks?
What It Does
QuietCompanion lets users:
Speak their thoughts into a Voice Journal and download clean transcripts and audio clips.
Ask questions aloud and get smart, spoken AI answers using premium ElevenLabs voices.
Get instant audio summaries of anything they paste in — with optional visual or spoken feedback.
Describe images with voice prompts, or generate new ones with simple spoken ideas.
Manage credits and fall back gracefully to native TTS when resources run out.
Every feature is accessible by voice, downloadable, and designed for emotional clarity.
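The credit-and-fallback behavior above can be reduced to a small selection function. This is a minimal sketch with hypothetical names — in the real app, "premium" maps to ElevenLabs and "native" to the browser's built-in speechSynthesis:

```typescript
// Which text-to-speech backend to use for a given request.
// "premium" = ElevenLabs voices; "native" = the browser's speechSynthesis.
type TtsProvider = "premium" | "native";

interface TtsContext {
  creditsRemaining: number;  // user's remaining premium credits
  premiumAvailable: boolean; // false if the premium API is down or rate-limited
}

function chooseTtsProvider(ctx: TtsContext): TtsProvider {
  // Native TTS is always available, so premium is only chosen when the
  // user still has credits AND the premium service is reachable.
  if (ctx.premiumAvailable && ctx.creditsRemaining > 0) {
    return "premium";
  }
  return "native";
}
```

Keeping the decision in one pure function like this makes the fallback path easy to test without mocking either speech API.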
How We Built It
Framework: React + Vite + Tailwind
AI Models: GPT-4o via OpenAI API
TTS: ElevenLabs voice integration (premium and fallback)
Storage & Auth: Supabase (user auth, credit tracking, RLS protection)
Hosting: Netlify
Dev Tools: Bolt.new + VS Code + Copilot support
We worked quickly but carefully — shipping ~5 major iterations in just under 3 weeks. Each pass focused on reducing friction and increasing dignity.
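The credit tracking mentioned above boils down to a small ledger model. Here is an in-memory sketch (the real version lives in Supabase behind row-level security, which enforces the same invariant server-side):

```typescript
// Minimal credit ledger: spend() either succeeds in full or does nothing,
// and the balance can never go negative.
class CreditLedger {
  balance: number;

  constructor(initial: number) {
    this.balance = initial;
  }

  // Returns true if the charge succeeded, false if credits ran out.
  spend(amount: number): boolean {
    if (amount <= 0 || amount > this.balance) return false;
    this.balance -= amount;
    return true;
  }
}
```

The client can use the boolean result to switch into fallback mode the moment a spend is refused, instead of discovering the failure after an API call.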
Challenges We Faced
Making real-time credit tracking work without glitching
Preventing double charges from rapid clicks (we added smart debouncing)
Preserving premium voice playback even when a user’s last credit is spent
Navigating voice licensing, especially when deciding whether users could download their generated audio (we now only use legally safe, original voices)
Testing accessibility without full access to screen readers (still ongoing)
We also had to say goodbye to a cloned Burt Reynolds voice. It was glorious. But it was time.
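The double-charge guard can be sketched as a single-flight wrapper (an assumed shape — the production version also debounces at the UI layer):

```typescript
// Wrap an async "charge" action so that repeat calls made while one
// request is still in flight are dropped instead of charging twice.
function singleFlight<T>(action: () => Promise<T>): () => Promise<T | null> {
  let inFlight = false;
  return async () => {
    if (inFlight) return null; // a charge is already running; ignore this click
    inFlight = true;
    try {
      return await action();
    } finally {
      inFlight = false; // allow the next charge once this one settles
    }
  };
}
```

This only protects the client; a server-side idempotency check is still needed, since nothing stops a second tab from firing its own request.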
What We Learned
People want emotionally intelligent tools — not just efficiency
Accessibility isn’t a feature — it’s a whole philosophy
Voice-first design changes everything about user interaction flow
AI is most powerful when it slows down and listens
What's Next for QuietCompanion™ – Voice-First AI Tools for Everyone
QuietCompanion™ is just the beginning. Now that we’ve built a working MVP with real users and a polished interface, we’re preparing to:
🌀 Expand Core Tools
Add voice scheduling, reminder prompts, and mood tracking to the Voice Journal.
Build an offline mode for users with limited connectivity.
Integrate multilingual support for non-English speakers.
Refine the AI Companion’s memory to support ongoing conversations.
🧠 Launch QuietCompanion EDU
A focused edition for schools, caregivers, and therapists that includes:
Group-safe voice journaling
Accessibility onboarding
Visual aid integration for students with cognitive disabilities
🫶 Open Access Pools
Introduce free daily credits for blind and low-income users through a public-access pool, fueled by donations and optional supporter plans. Everyone deserves access to dignified tools — not just those who can pay.
🧰 Quiet SDK (Phase 2)
Package the core components into a developer toolkit:
Drop-in React components
Supabase + ElevenLabs templates
Prebuilt voice UX patterns
This would allow others to build their own voice-first, ethical tools without starting from scratch.
💡 Loopkeeper Integration
QuietCompanion is part of a larger framework called The Quiet Systems Lab — a modular ecosystem of AI-powered emotional tools, all designed for sustainable care. Future integrations will include:
Grief support modules
Blind-accessible creative apps
Loopkeeper™ routines for energy management
Built With
- bolt.new
- css
- elevenlabs
- lodash
- netlify
- openai
- react
- supabase
- tailwind
- typescript
- vite


