## Inspiration
In a world full of technology, people still feel alone. Most solutions respond only when asked, but real support is about presence. Oki-Doki was inspired by the idea of building something that doesn't just wait for input: it stays, understands, and supports continuously.
## What it does
Oki-Doki is a context-aware AI companion that monitors user behavior, understands emotional signals, and provides real-time support. It goes beyond conversation by turning insights into actionable nudges, helping users improve their mental well-being and daily habits.
## How we built it
We built Oki-Doki as a multi-layered system combining:
- AI models (Gemini) for reasoning, vision, and conversation
- Supabase for backend, real-time sync, and storage
- pgvector for semantic memory and personalization
- A Python-based local agent for device-level awareness
- ESP32 hardware integration for physical interaction
- Speech stack (Whisper + TTS) for voice-based communication
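As a rough illustration of how the pgvector layer could power semantic memory, here is a minimal in-memory sketch using cosine distance. The `recall` helper and the toy store are hypothetical; in the real system this ranking runs inside PostgreSQL via pgvector's `<=>` operator rather than in Python.

```python
import math

def cosine_distance(a, b):
    """Cosine distance between two vectors, analogous to pgvector's <=> operator."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def recall(memories, query_embedding, k=1):
    """Return the k stored memories closest to the query embedding,
    mirroring: SELECT text FROM memories ORDER BY embedding <=> $1 LIMIT k."""
    ranked = sorted(memories, key=lambda m: cosine_distance(m["embedding"], query_embedding))
    return [m["text"] for m in ranked[:k]]

# Toy 3-dimensional embeddings; real ones would come from an embedding model.
memories = [
    {"text": "user felt stressed before deadlines", "embedding": [0.9, 0.1, 0.0]},
    {"text": "user enjoys morning walks",           "embedding": [0.0, 0.2, 0.9]},
]
print(recall(memories, [0.8, 0.2, 0.1]))  # closest memory first
```

The same nearest-neighbor idea is what lets the companion personalize responses: the current context is embedded and matched against past emotional moments.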
We also implemented the Me-Do framework, which bridges emotional understanding (Me) with actionable outcomes (Do).
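The Me-Do bridge can be sketched as a small decision step: an emotional reading ("Me") is translated into a concrete nudge ("Do") only when the signal is confident enough. The state names, nudge texts, and threshold below are illustrative assumptions, not Oki-Doki's actual rules.

```python
# Hypothetical Me-Do mapping: emotional state -> actionable nudge.
NUDGES = {
    "stressed": "Take a 2-minute breathing break.",
    "tired":    "Step away from the screen and stretch.",
    "focused":  "Great flow; holding notifications for 25 minutes.",
}

def me_do(emotional_state, confidence, threshold=0.6):
    """Emit a nudge only when the emotional signal clears a confidence
    threshold, so the companion stays supportive rather than noisy."""
    if confidence < threshold:
        return None  # signal too weak to act on -> stay quiet
    return NUDGES.get(emotional_state)

print(me_do("stressed", 0.8))  # -> Take a 2-minute breathing break.
print(me_do("stressed", 0.3))  # -> None
```

Gating actions on confidence is one way to keep a proactive system from becoming an interruptive one.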
## Challenges we ran into
- Making AI responses feel emotionally accurate and human-like
- Maintaining real-time performance across multiple systems
- Handling privacy-sensitive user data responsibly
- Integrating hardware and software seamlessly within limited time
## Accomplishments that we're proud of
- Built a working prototype with real-time interaction
- Created a system that is proactive, not just reactive
- Successfully integrated AI, hardware, and behavioral logic
- Designed a unique human-centered AI experience
## What we learned
- Technology alone isn't enough; empathy matters
- Users value systems that understand without being asked
- Real impact comes from action, not just insight
- Building meaningful products requires both technical and emotional thinking
## What's next for Oki-Doki
- Enhance emotional intelligence with better AI models
- Add voice and visual awareness for deeper context
- Build a privacy-first architecture
- Launch beta testing with real users
- Scale into a full ecosystem of ambient AI companions
## Demo video
https://drive.google.com/file/d/1n9O97OijyY3VhWEWojuTXZnZ2BTYCbyY/view
## Built With
- api
- arduino
- c++
- computer
- embeddings
- esp32
- gemini
- iot
- natural-language-processing
- pgvector
- platformio
- postgresql
- python
- react
- serial
- speech-to-text
- sql
- sqlite
- supabase
- text-to-speech
- typescript
- vision