Inspiration
The idea for ShieldPath AI was born out of a simple observation: weather alerts are often too loud but not specific enough. When a "Severe Thunderstorm Warning" hits Richardson, most people just see a notification on their phone. They don’t immediately realize that the expensive laptop near the window or the power strip on the floor is at risk from the specific 1-inch hail or flood warnings mentioned in the fine print. I wanted to turn passive alerts into proactive safety actions.
What it does
ShieldPath AI is a Cyber-Physical Safety Auditor that scans your environment through your phone's camera to identify vulnerabilities in real time. By combining local Richardson weather data with Computer Vision, the app:
- Identifies high-value assets: automatically detects electronics, furniture, and infrastructure (e.g., Dell Inspiron laptops, JBL headphones).
- Simulates threats: visually overlays flood or storm simulations to show how rising water would interact with the room.
- Calculates risk: produces a "Safety Score" (e.g., 7.8/10) based on object proximity to windows, floors, and current hazard levels.
- Suggests mitigation: delivers specific, actionable forensic advice to protect both physical property and digital hardware.
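To give a feel for how a proximity-based Safety Score could be computed, here is a minimal sketch. The type names, weights, and thresholds are my own illustrative assumptions, not the app's actual scoring formula:

```typescript
// Hypothetical sketch of a proximity-weighted safety score.
// All names, weights, and thresholds are illustrative assumptions,
// not ShieldPath AI's actual formula.

interface DetectedObject {
  label: string;
  distanceToWindowM: number; // estimated distance to the nearest window (meters)
  heightAboveFloorM: number; // estimated height above the floor (meters)
}

// hazardLevel: 0 (all clear) to 1 (active severe warning)
function safetyScore(objects: DetectedObject[], hazardLevel: number): number {
  if (objects.length === 0) return 10;
  let totalRisk = 0;
  for (const obj of objects) {
    const windowRisk = Math.max(0, 1 - obj.distanceToWindowM / 3); // closer to a window = riskier
    const floodRisk = Math.max(0, 1 - obj.heightAboveFloorM / 1);  // closer to the floor = flood-prone
    totalRisk += (windowRisk + floodRisk) * hazardLevel;
  }
  const avgRisk = totalRisk / objects.length; // 0..2 per object
  const score = 10 - 5 * avgRisk;             // map onto a 0..10 scale
  return Math.round(score * 10) / 10;
}
```

With no active hazard, every room scores a clean 10; a laptop half a meter from a window during a severe warning drags the score down sharply.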
How we built it
ShieldPath AI is a React Native mobile application built on a "Zero-Trust" architecture for environmental data. For Computer Vision, I utilized the Gemini 2.5 Flash model for high-speed image reasoning and coordinate-based object detection (`box_2d`). For geospatial context, the app pulls real-time weather and hazard data specific to the user's location.
Adaptive Thinking: by injecting both the visual detections and the local hazard context into the model, the AI performs a full "Forensic Audit" rather than a simple image classification, identifying objects and calculating risk scores based on current threats.
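A sketch of how the hazard context could be injected into the vision prompt, with the output format pinned down so the model returns structured detections. The field names and prompt wording here are illustrative assumptions, not the app's actual prompt:

```typescript
// Hypothetical prompt builder: injects local hazard context and demands
// strict JSON with box_2d coordinates. Wording is illustrative only.

interface HazardContext {
  location: string;
  alerts: string[]; // e.g., ["Severe Thunderstorm Warning", "Flood Watch"]
}

function buildAuditPrompt(ctx: HazardContext): string {
  return [
    "You are a cyber-physical safety auditor.",
    `Location: ${ctx.location}. Active alerts: ${ctx.alerts.join(", ") || "none"}.`,
    "For each object in the image, return JSON only, as an array of:",
    '{ "label": string, "box_2d": [ymin, xmin, ymax, xmax], "risk": number }',
    "Weigh each object's risk against the active alerts above.",
  ].join("\n");
}
```

Because the alerts are part of the prompt itself, the same laptop gets a different risk number under a Flood Watch than on a clear day.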
Challenges we ran into
- JSON Consistency: Ensuring the AI consistently returned structured data was a major battle. Early on, the model would return a simple list of strings, causing the app to crash when it tried to map objects that lacked coordinate data (`box_2d`).
- The "Undefined" Collision: I faced a persistent TypeError where the UI tried to render detection boxes before the AI response had fully loaded. This required a strict "Safe Fallback" architecture.
- Quota Management: Navigating API rate limits meant optimizing the frequency of calls and handling 429 (Too Many Requests) errors gracefully to ensure a smooth user experience.
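A defensive parse along these lines handles both the string-list regression and the undefined collision: anything that cannot be rendered safely is simply dropped. The function and type names are my own sketch, not the app's source:

```typescript
// Hypothetical "Safe Fallback" parser: tolerate malformed model output
// instead of crashing the UI. Names are illustrative.

interface Detection {
  label: string;
  box_2d: [number, number, number, number]; // [ymin, xmin, ymax, xmax]
}

function parseDetections(raw: string): Detection[] {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return []; // unparseable response -> render nothing, not a crash
  }
  if (!Array.isArray(data)) return [];
  // Keep only entries that actually carry usable coordinates.
  return data.filter(
    (d): d is Detection =>
      typeof d === "object" &&
      d !== null &&
      typeof (d as any).label === "string" &&
      Array.isArray((d as any).box_2d) &&
      (d as any).box_2d.length === 4 &&
      (d as any).box_2d.every((n: unknown) => typeof n === "number")
  );
}
```

The UI then maps over whatever survives the filter, so a partially malformed response degrades to fewer boxes rather than a TypeError.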
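Graceful 429 handling can be as simple as an exponential-backoff wrapper around the request. This is a generic sketch with a placeholder request shape, not a real Gemini client:

```typescript
// Hypothetical retry wrapper with exponential backoff for 429 responses.
// The request/response shapes are placeholders, not a real API client.

async function fetchWithBackoff(
  doRequest: () => Promise<{ status: number; body: string }>,
  maxRetries = 3,
  baseDelayMs = 1000
): Promise<string> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await doRequest();
    if (res.status !== 429) return res.body;
    // Too Many Requests: wait baseDelayMs, 2x, 4x, ... before retrying.
    const delayMs = baseDelayMs * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Rate limited: giving up after retries");
}
```

Throttling the scan frequency on top of this keeps the app inside quota most of the time, with the backoff absorbing the occasional burst.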
Accomplishments that we're proud of
- End-to-End Integration: Successfully connecting a live camera feed to a generative AI reasoning engine with sub-3-second latency.
- Forensic Accuracy: The AI's ability to not just "see" a laptop, but understand that its proximity to a window during a storm warning is a specific risk.
- UI Fluidity: Successfully implementing a persistent, gesture-based bottom sheet that keeps data accessible without hiding the camera visuals.
What we learned
This project taught me the importance of Defensive Programming in AI. You cannot treat an AI's output as a guaranteed constant; you must build "shields" in your code to handle the unpredictability of generative models. I discovered how a few lines of clever prompt engineering can transform a generic image classifier into a life-saving safety auditor.

Visual assets were generated using OpenAI DALL·E.
What's next for ShieldPath
- Cyber-Physical Mapping: Integrating IoT network scanning to audit the digital security of the physical devices detected in the room.
- Gemini Live Integration: Allowing users to perform a "Voice Audit" of their space for real-time safety Q&A.
- Insurance API Sync: Connecting directly to insurance providers to offer premium discounts for users who complete their safety audits.