💡 Inspiration

The idea for CodeNetra-AI came from observing two distinct groups struggling with visual data: visually impaired individuals trying to understand their surroundings, and developers spending hours converting UI designs into code. I realized that a single, powerful multimodal AI could solve both. As an 18-year-old developer who built this entire project on a mobile/tablet setup, I wanted to prove that with the right AI (Gemini 3), hardware limitations cannot stop innovation.

🚀 What it does

CodeNetra-AI is a dual-mode Accessibility and Developer Productivity Super-App:

  • 👁️ Netra Vision Mode (Accessibility): Provides real-time "Vision-to-Voice" for the visually impaired. It features Live Vision narration, Hazard Detection (obstacles/vehicles), Currency Recognition, and PDF reading.
  • 💻 Developer Mode (Productivity): Offers instant "UI-to-Code" generation. Upload a screenshot, and it generates clean Flutter code. It also includes an AI Error Debugger and Repo Chat to query entire codebases.

🛠️ How I built it

I built the cross-platform application using Flutter and Dart. For authentication and secure role-based routing, I used Firebase. The core intelligence is powered by the Google Gemini 3 API for its ultra-fast multimodal (Vision + Text) capabilities. I integrated flutter_tts and speech_to_text to ensure a completely hands-free, voice-first experience for the Netra mode.
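To give a feel for how the pieces connect, here is a minimal sketch of a "Vision-to-Voice" step: a camera frame goes to Gemini and the returned description is spoken via flutter_tts. This is an illustrative sketch only; the model name, prompt, and `narrateScene` helper are assumptions for this example, not CodeNetra-AI's actual source.

```dart
// Sketch only: model name, prompt, and function are illustrative assumptions.
import 'dart:typed_data';

import 'package:flutter_tts/flutter_tts.dart';
import 'package:google_generative_ai/google_generative_ai.dart';

/// Sends one camera frame to Gemini and speaks the description aloud.
Future<void> narrateScene(Uint8List jpegBytes, String apiKey) async {
  final model = GenerativeModel(
    model: 'gemini-1.5-flash', // placeholder model id
    apiKey: apiKey,
  );
  final response = await model.generateContent([
    Content.multi([
      TextPart('Briefly describe this scene for a visually impaired user. '
          'Call out hazards such as vehicles or obstacles first.'),
      DataPart('image/jpeg', jpegBytes),
    ]),
  ]);
  final description = response.text;
  if (description != null) {
    await FlutterTts().speak(description); // hands-free voice output
  }
}
```

In the real app this kind of call would run in a loop over camera frames, with speech_to_text handling voice commands in the other direction.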

⚠️ Challenges I ran into

The biggest challenge was my hardware constraint. Architecting a complex, dual-mode AI application while typing code on a mobile touch screen/tablet made testing and debugging extremely difficult. Additionally, seamlessly handling two entirely different UI flows (Accessibility vs. Developer tools) without tangling their state management required careful planning.

🏆 Accomplishments that I'm proud of

I am incredibly proud of successfully integrating the Gemini API to accurately analyze UI screenshots and return production-ready Flutter code, all while building it on a mobile device. I am also proud that CodeNetra-AI was recently recognized as a National Finalist at another major hackathon, validating its real-world impact.

🔮 What's next for CodeNetra-AI

The next step is integrating smart glasses hardware for a truly hands-free "Netra Mode" experience. For Developer Mode, I plan to expand the UI-to-Code feature to support React Native and add real-time video processing for faster hazard detection.
