Inspiration
Visually impaired individuals face daily challenges in navigation, safety, social interaction, and digital accessibility. Most existing assistive solutions are either expensive, cloud-dependent, or limited to a single function. Through problem analysis and user-focused research, we identified the need for a low-cost, reliable, all-in-one assistive system that works even with poor or no network connectivity. This insight led to the creation of IRIS, aimed at improving independence, safety, and confidence for visually impaired users.
What it does
IRIS is an AI-powered assistive application and wearable system designed to support visually impaired users in everyday activities. It offers real-time object and obstacle detection, text and signboard reading via OCR, face recognition, path navigation, fall detection, and SOS alerts. The system delivers guidance through voice and haptic feedback and operates seamlessly in both online (cloud) and offline (edge) modes to ensure uninterrupted assistance.
How we built it
IRIS 2.0 is built as a multi-platform system consisting of a web application, a mobile application, and an edge-based wearable setup. The web app serves as the primary intelligence layer, where Gemini 3.0 is used to handle tasks such as scene understanding, text interpretation, contextual reasoning, and decision-making. At the current stage, the system does not rely on a dedicated cloud backend; all AI intelligence is accessed directly through Gemini 3.0 APIs.
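Since Gemini handles scene understanding and contextual reasoning for the intelligence layer, a minimal sketch of how detector output might be packaged into a prompt is shown below. The function name, the detection format, and the prompt wording are all illustrative assumptions, not IRIS's actual code; the resulting string would then be sent through the Gemini API.

```python
def build_scene_prompt(detections, ocr_text=None):
    """Compose a text prompt summarising detector output for a Gemini call.

    `detections` is assumed to be a list of (label, confidence) pairs from
    the object detector; `ocr_text` is any text read from signboards.
    Both are hypothetical names used here for illustration only.
    """
    lines = [
        "You assist a visually impaired user. Describe the scene briefly,",
        "mention obstacles first, and give one clear walking instruction.",
    ]
    if detections:
        objs = ", ".join(f"{label} ({conf:.0%})" for label, conf in detections)
        lines.append(f"Detected objects: {objs}.")
    if ocr_text:
        lines.append(f"Text read from signs: {ocr_text!r}.")
    return "\n".join(lines)

# The prompt would then be passed to the Gemini API via the official SDK,
# e.g. model.generate_content(build_scene_prompt(dets, text)) -- the exact
# model name and client setup depend on the team's configuration.
```

Keeping prompt construction in one place like this makes it easy to tune the instruction wording for voice output without touching the detection pipeline.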
The mobile application, developed using React Native, focuses on real-time assistive features such as SOS alerts, face recognition, object detection feedback, navigation guidance, and voice interaction. It is designed with a voice-first, accessibility-focused UI, so the app can be used entirely without visual interaction.
For offline and sensor-based processing, Raspberry Pi 5 is used as the edge device, integrating the camera, ultrasonic sensor, accelerometer, and gyroscope. Lightweight AI models and sensor fusion logic handle critical tasks like obstacle detection and fall detection locally. The system is designed to later support cloud integration, but currently prioritizes local intelligence and API-based AI reasoning for reliability and simplicity.
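The local fall-detection logic described above could be sketched as a simple free-fall-then-impact check over accelerometer samples: a dip in acceleration magnitude followed shortly by a spike suggests a fall. The thresholds, window size, and class below are hypothetical placeholders for illustration, not the tuned values running on the device.

```python
import math
from collections import deque

# Hypothetical thresholds -- real values would be tuned on the hardware.
FREE_FALL_G = 0.4   # magnitude below this (in g) suggests free fall
IMPACT_G = 2.5      # magnitude above this (in g) suggests an impact
WINDOW = 25         # ~0.5 s of history at a 50 Hz sample rate

def accel_magnitude(ax, ay, az):
    """Resultant acceleration in g from one 3-axis accelerometer sample."""
    return math.sqrt(ax * ax + ay * ay + az * az)

class FallDetector:
    """Flags a fall when a free-fall dip is followed by an impact spike."""

    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def update(self, ax, ay, az):
        """Feed one sample; returns True if this sample completes a fall pattern."""
        m = accel_magnitude(ax, ay, az)
        fell = m > IMPACT_G and any(h < FREE_FALL_G for h in self.history)
        self.history.append(m)
        return fell
```

In practice the gyroscope reading would be fused in as a second signal (e.g. checking for a large orientation change around the impact) to cut false positives from the phone or wearable being dropped.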
Challenges we ran into
Ensuring real-time performance and accuracy on affordable hardware was a major challenge. Designing smooth transitions between online and offline modes without affecting user experience required careful system optimization. Additional challenges included battery efficiency, handling low-light environments, and building a voice-first, non-visual UI/UX that remains intuitive and accessible.
Accomplishments that we're proud of
We developed a fully functional prototype that combines AI vision, sensor data, and voice interaction into a single assistive solution. IRIS 2.0 successfully delivers critical features such as offline obstacle detection, face recognition, fall alerts, and SOS, which are rarely available together in cost-effective systems. The solution stands out for its reliability, affordability, and real-world usability, especially in low-connectivity environments.
What we learned
This project helped us understand the importance of inclusive and human-centered design. We gained hands-on experience with edge AI optimization, sensor fusion, and accessibility-focused product design. Most importantly, we learned that assistive technology must prioritize consistency, simplicity, and trust to truly support users in real-world conditions.
What's next for IRIS
We plan to improve model accuracy, enhance battery performance, and further miniaturize the hardware for better portability. Future updates will include regional language support, extended indoor navigation, and large-scale pilot testing through institutions, NGOs, and community partnerships. Our long-term goal is to make IRIS a widely accessible assistive platform that empowers visually impaired users everywhere.
Built With
- deepface
- express.js
- gemini
- indexeddb
- javascript
- mongodb
- node.js
- ocr-interpretation
- opencv
- raspberry-pi
- react-native
- react.js
- reasoning
- tailwind
- tensorflow
- whisper
- yolov8