Inspiration
The inspiration for Sonal AI arose from a personal challenge: frequently asking my wife, "Did you tell me that earlier?" or "Was that what we agreed?" These recurring questions highlighted the need for a reliable system to capture and recall daily conversations and commitments. Recognising that this issue extends beyond personal relationships, I envisioned a solution to assist anyone struggling with memory retention in our fast-paced world.
What it does
Sonal AI is a wearable memory assistant that turns your daily conversations into an organised, searchable, and intelligent database of your life. Here’s how it works:
Wear & Record: A small, stylish device (worn as a pendant, watch, pin, etc.) captures your conversations. Alternatively, you can import an audio file and still enjoy the power of Sonal AI.
Sync & Transcribe: It automatically syncs audio to the mobile app, where it's quickly transcribed into text.
Get AI Insights: The app doesn't just give you a wall of text. It intelligently analyses the conversation to pull out actionable items like to-do lists, shopping reminders, and calendar events.
Search Your Memory: The most powerful feature is the semantic search. You can ask questions in natural language like, "What did my boss say about the project deadline?" and it will find the exact moment in your conversation history, even if the keywords don't match perfectly.
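The search described above can be sketched with a minimal example: rank stored conversation chunks by cosine similarity between their embeddings and the query's embedding. This is an illustrative sketch only, not Sonal AI's actual code; in the real app the embeddings would come from a model such as text-embedding-ada-002 and live in a vector store, while here they are small mock vectors and the type and function names are assumptions.

```typescript
// Illustrative sketch of embedding-based semantic search.
// Embeddings are mocked; in production they would come from an
// embedding model and be stored/queried in a vector database.

interface MemoryChunk {
  id: string;
  text: string;
  embedding: number[]; // precomputed embedding vector
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored chunks by similarity to the query embedding and
// return the top-K matches.
function semanticSearch(
  queryEmbedding: number[],
  chunks: MemoryChunk[],
  topK = 3
): MemoryChunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(queryEmbedding, y.embedding) -
        cosineSimilarity(queryEmbedding, x.embedding)
    )
    .slice(0, topK);
}
```

Because similarity is computed in embedding space rather than on raw keywords, a query like "project deadline" can surface a chunk that says "we need to ship by Friday" even though no words overlap.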
How we built it
I brought this idea to life using Bolt.new: I first told it about the project to generate a product requirements document, then built the screens and features one after another. With Bolt.new I was able to rapidly build and test the entire system, from the landing page at https://sonal-ai.com to the mobile app itself. Adding Supabase and Stripe payments was also handled by Bolt.new.
The Hardware Concept: I designed five wearable concepts using AI tools like Imagen for inspiration, then handed them to a partner manufacturer, printing prototypes to perfect their size, style, and feel before planning for manufacturing in durable PC+ABS and premium metals.
The App & Backend: The mobile app is the central hub, designed to provide a clean and intuitive user experience. It connects via Bluetooth to the wearable.
The AI Engine: This is a multi-step process. First, I tried many speech-to-text models, including Whisper, Deepgram, Moonshot, and some forks of Whisper that support speaker diarisation, before finally settling on AssemblyAI. The transcribed text is then fed into OpenAI's GPT-4 model to identify and categorise tasks, events, and other insights, which are refined further in a RAG system. The semantic search is powered by Supabase vector storage and vector search together with OpenAI's text-embedding-ada-002 embeddings, which helps the app understand the context and intent of a user's query.
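The insight-extraction step of this pipeline can be sketched as follows. Everything here is an assumption for illustration: the prompt text, the `extractInsights` function, and the JSON schema are invented for this sketch, and the model call is a synchronous stub (a real GPT-4 call would be asynchronous) so the shape of the step, prompt in, structured insights out, is the focus.

```typescript
// Sketch of the insight-extraction step. The prompt, names, and JSON
// schema are illustrative assumptions; the model call is stubbed and
// synchronous (a real OpenAI call would be async).

interface Insights {
  todos: string[];
  events: string[];
  reminders: string[];
}

// Stand-in for a chat-completion call that returns the model's raw text.
type ModelCall = (prompt: string) => string;

const EXTRACTION_PROMPT = (transcript: string) =>
  `Extract to-dos, calendar events, and reminders from this transcript. ` +
  `Reply only with JSON {"todos":[],"events":[],"reminders":[]}:\n` +
  transcript;

function extractInsights(transcript: string, callModel: ModelCall): Insights {
  const raw = callModel(EXTRACTION_PROMPT(transcript));
  // Parse defensively: fall back to empty lists for any missing field,
  // since LLM output is not guaranteed to match the schema exactly.
  const parsed = JSON.parse(raw);
  return {
    todos: parsed.todos ?? [],
    events: parsed.events ?? [],
    reminders: parsed.reminders ?? [],
  };
}
```

In the app, the returned lists would then feed the to-do, reminder, and calendar views, and the same transcript chunks would be embedded for the semantic search described above.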
Challenges we ran into
This was definitely a challenging journey. My biggest hurdles were:
Hardware Miniaturisation: Shrinking all the tech—processor, high-quality microphone, and a battery that lasts all day—into a tiny, stylish device was a constant battle between performance and size.
Real-World Accuracy: Getting clean transcriptions from audio with many speakers was hard: auto-assigning speaker labels with correct timestamps, handling people talking over one another, understanding different accents, and finding a model whose pricing was not prohibitive.
Making AI Useful: The AI can pull out a lot of information. The challenge was designing an app interface that presents these insights in a way that feels helpful and organised, not overwhelming or robotic.
Accomplishments that we're proud of
Despite the challenges, I'm incredibly proud of what's been achieved so far:
A Fully Functional End-to-End System: We have a working prototype that successfully captures audio, syncs, transcribes, and extracts insights. The core concept is proven.
The "Magic" of Semantic Search: The first time I searched for a vague memory and the app found the exact moment I was thinking of was a huge "wow" moment. It felt like real magic and validated the entire vision.
Designing for People, Not Just Tech: Creating five distinct wearable forms means Sonal AI isn't a one-size-fits-all gadget. I'm proud of designing a system that people can adapt to their own personal style.
What we learned
The Need is Broader Than I Imagined: What started as a personal memory aid has shown clear potential to be a powerful tool for professionals, a game-changing accessibility device for students with ADHD or dyslexia, a memory assistant for anyone who struggles to remember, and so much more.
Seamless Integration is Key: The technology has to disappear. If the device is clumsy or the app is confusing, people won't use it. Every step of the process has to be effortless.
What's next for Sonal AI
This hackathon is just the beginning. The roadmap for Sonal AI is exciting:
Launch on Kickstarter: The next step is to move from a working prototype to a production-ready device. We plan to launch a Kickstarter campaign to fund the first manufacturing run.
Expand Language Support: While currently focused on English, we plan to add support for multiple languages like Spanish, French, and German.
Refine the AI: We will continuously train and improve the AI models to provide even deeper and more accurate insights.
Develop Accessibility Features: We are passionate about working with educational and accessibility experts to build out features specifically designed to help students and individuals with cognitive challenges.
Built With
- assemblyai
- deepgram
- expo.io
- openai
- react
- react-native
- stripe
- supabase
- whisper