Inspiration
Once, while walking on a busy road in India, I noticed a blind man standing near the edge of traffic. He wasn’t moving — not because he didn’t want to, but because he couldn’t tell when it was safe to cross. Cars were honking, people were rushing past, and yet he was completely on his own.
That moment stayed with me. And it led to a simple question: What if AI could give that access back? Not by replacing humans, but by becoming a set of reliable, always-available eyes.
That question became 24/7 AI.
What it does
24/7 AI is an AI-powered assistant that acts as real-time eyes for blind users.
Using a camera and voice interaction, it:
- Describes surroundings in real time
- Detects obstacles, objects, and people
- Reads text like signboards, books, and screens
- Answers follow-up questions conversationally
The user can simply talk to it — and it talks back, explaining the world as it sees it.
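For a sense of how thin the camera layer can be, here is a hedged sketch of a browser front end capturing frames for the model; helper names like `captureFrame` and `startCamera` are ours for illustration, not taken from the project:

```typescript
// Sketch: grab one frame from the user's camera as base64 JPEG,
// ready to send to a vision model for description.
async function captureFrame(video: HTMLVideoElement): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");
  ctx.drawImage(video, 0, 0);
  // Strip the "data:image/jpeg;base64," prefix; vision APIs expect raw base64.
  return canvas.toDataURL("image/jpeg").split(",")[1];
}

async function startCamera(): Promise<HTMLVideoElement> {
  // Prefer the rear camera on phones so the device "looks" where the user faces.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" },
  });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();
  return video;
}
```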
How we built it
We built 24/7 AI using:
- Gemini 3 Pro Preview for in-depth scene reading and recalling what happened earlier
- Live API for real-time guidance
- Gemini 2.5 Flash for map reasoning
The focus was on speed, clarity, and natural conversation, so the experience feels like having a smart companion rather than using an app.
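As a rough illustration of the core call (not the project's actual code), here is a minimal sketch using the `@google/genai` SDK. The `describeScene` helper and the prompt wording are assumptions, and `gemini-2.5-flash` stands in for whichever model handles a given request:

```typescript
import { GoogleGenAI } from "@google/genai";

// API key read from the environment; deployment details here are assumed.
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Hypothetical helper: send one camera frame, get a spoken-style description.
async function describeScene(base64Jpeg: string): Promise<string> {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash", // stand-in ID; the project also used a Gemini 3 Pro preview model
    contents: [
      {
        role: "user",
        parts: [
          { inlineData: { mimeType: "image/jpeg", data: base64Jpeg } },
          {
            text:
              "Describe this scene for a blind pedestrian in one or two " +
              "sentences. Mention hazards first, then landmarks.",
          },
        ],
      },
    ],
  });
  return response.text ?? "";
}
```

The Live API side (streaming microphone audio in and speech out over a persistent session) uses the same SDK but with more involved configuration, so we leave it out of this sketch.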
Challenges we ran into
- Reducing latency so responses feel instant
- Ensuring outputs are context-aware, not robotic
- Designing for blind users, where every interaction must be audio-first
Balancing accuracy, speed, and simplicity was the hardest part.
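On the audio-first point, even the output path needs care: speech must be interruptible so stale guidance never plays over a scene that has already changed. A minimal sketch using the browser's built-in SpeechSynthesis API (the `speak` helper is illustrative, not project code):

```typescript
// Speak one message aloud, cancelling anything already queued so that
// guidance never lags behind reality.
function speak(text: string): Promise<void> {
  return new Promise((resolve) => {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.rate = 1.15; // slightly faster than default; many screen-reader users prefer this
    utterance.onend = () => resolve();
    window.speechSynthesis.cancel(); // drop stale speech before queuing the new message
    window.speechSynthesis.speak(utterance);
  });
}
```

Cancelling stale utterances is one place the latency challenge shows up concretely: an outdated warning is worse than silence.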
Accomplishments that we're proud of
- Built a spatio-temporal scene understander, not just a static image reader (see the memory sketch after this list)
- Achieved natural, conversational feedback instead of plain descriptions
- Created a solution focused on independence and dignity, not just assistance
- Successfully demonstrated how multimodal AI can be used for social good
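To make the "spatio-temporal" idea concrete, here is a hedged sketch of one way to keep short-term scene memory; the `SceneHistory` class and its limits are our illustration, not the project's actual implementation:

```typescript
// Rolling window of recent scene descriptions, so the model can answer
// "what did I just pass?" instead of treating every frame in isolation.
interface SceneMemory {
  timestamp: number;
  description: string;
}

class SceneHistory {
  private readonly maxEntries = 20; // arbitrary cap to bound prompt size
  private entries: SceneMemory[] = [];

  add(description: string): void {
    this.entries.push({ timestamp: Date.now(), description });
    if (this.entries.length > this.maxEntries) this.entries.shift();
  }

  // Render recent context as plain text to prepend to the next model prompt.
  asPromptContext(): string {
    return this.entries
      .map((e) => `[${new Date(e.timestamp).toLocaleTimeString()}] ${e.description}`)
      .join("\n");
  }
}
```

Feeding this rolling context into each new request is what lets the assistant answer questions like "what shop did we just pass?" rather than only describing the current frame.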
What we learned
- Accessibility is not about adding features — it’s about rethinking design entirely
- AI becomes powerful when it understands context, not just data
- Real-world problems need human empathy, not just technology
- Small ideas, when built right, can have life-changing impact
What's next for 24/7 AI
We plan to take 24/7 AI beyond software and turn it into a dedicated hardware device, built in two stages:
Stage 1: Smart Assistive Hardware
A wearable device using cameras and sensors, fully integrated with our current AI software, providing real-time navigation, object detection, and voice interaction.
Stage 2: Seeing in the Mind
We aim to explore advanced sensory feedback, translating vision into structured audio and cognitive patterns so blind users can build a mental picture and "see" the world in their mind.
Our goal is simple but powerful: not just to help blind people live, but to help them experience the world.
Built With
- ai-studio-google
- html
- typescript-xml