About the Project
Inspiration
The idea for AI Storm Preparedness Auditor came from my own experience with hurricanes (including Hurricane Melissa) and from seeing how difficult it is for people to understand storm risk before damage happens. Some people did not grasp what a Category 5 hurricane could do. Most tools focus on weather forecasts, not on how a specific environment — a home, a street, or a piece of infrastructure — will actually respond over time. I wanted to build something that could reason visually, spatially, and temporally, helping users prepare instead of react.
This hackathon was the perfect opportunity to explore how Gemini 3’s multimodal reasoning could be used for real-world risk analysis rather than another chat-based experience.
What I Built
AI Storm Preparedness Auditor is a multimodal application that allows users to upload images and videos of their surroundings, combine that data with live storm information, and receive a time-based simulation of how risks evolve across hours or days.
Instead of generating a single answer, the system runs multiple reasoning phases:
- Visual interpretation of uploaded media
- Environmental and structural inference
- Storm interaction modeling
- Risk prioritization and mitigation simulation
The result is a transparent, step-by-step analysis that shows why a risk exists and how it escalates over time.
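The phased analysis above can be sketched as a simple orchestration loop. This is an illustrative sketch, not the app's actual code: the phase names mirror the write-up, but the `AuditTrace` class, the prompt wording, and the `model_call` wrapper are all assumptions introduced here. Each phase sees the output of the phases before it, which is what makes the final answer explainable step by step.

```python
from dataclasses import dataclass, field

# Phase names taken from the write-up; everything else is a hypothetical sketch.
PHASES = [
    "visual_interpretation",
    "environmental_inference",
    "storm_interaction",
    "risk_mitigation",
]

@dataclass
class AuditTrace:
    """Collects each phase's output so the final answer stays explainable."""
    steps: list = field(default_factory=list)

    def record(self, phase: str, output: str) -> None:
        self.steps.append({"phase": phase, "output": output})

def run_audit(media_summary: str, storm_data: str, model_call) -> AuditTrace:
    """Run each phase in order, feeding prior findings into the next prompt."""
    trace = AuditTrace()
    context = f"Media: {media_summary}\nStorm: {storm_data}"
    for phase in PHASES:
        prompt = f"[{phase}] Given:\n{context}\nReport your findings."
        output = model_call(prompt)          # e.g. a Gemini API call in the real app
        trace.record(phase, output)
        context += f"\n{phase}: {output}"    # later phases see earlier findings
    return trace
```

In the real application `model_call` would wrap a Gemini request; the point of the sketch is that the trace, not just the final answer, is what gets surfaced to the user.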
How I Built It
I built the project using Google AI Studio and the Gemini 3 API, leveraging its multimodal capabilities to process images, videos, and structured storm data together.
The application orchestrates multiple Gemini calls rather than a single prompt, enabling:
- Multi-pass reasoning
- Temporal simulation
- Confidence-based risk assessment
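One way to picture the temporal, confidence-based part is a projection of each risk across the forecast window, with the model's stated confidence carried alongside every estimate rather than hidden. The escalation model and all numbers below are illustrative assumptions, not the app's actual formula:

```python
# Hypothetical sketch: project a risk score over the forecast window while
# keeping the model's confidence attached to every data point.
def risk_over_time(base_risk: float, escalation_rate: float,
                   confidence: float, hours: list) -> list:
    """Return one {hour, risk, confidence} entry per forecast hour.

    Risk compounds by escalation_rate per hour and is capped at 1.0;
    both the growth model and the cap are assumptions for illustration.
    """
    timeline = []
    for h in hours:
        score = min(1.0, base_risk * (1 + escalation_rate) ** h)
        timeline.append({
            "hour": h,
            "risk": round(score, 3),
            "confidence": confidence,  # surfaced in the UI, not hidden
        })
    return timeline
```

Showing the trajectory and the confidence together is what lets the UI explain *how* a risk escalates, not just that it exists.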
AI Studio allowed me to rapidly prototype a fully interactive, no-auth experience so judges could immediately use the app without friction.
Challenges I Faced
One of the main challenges was balancing ambition with practicality during a hackathon. Designing a system that reasons across time and modalities required careful prompt structuring and output validation.
Another challenge was that image generation requires a paid API key, which shaped how certain visual outputs were mocked or represented. This constraint pushed me to focus on reasoning transparency and system design rather than purely on visual generation.
What I Learned
This project taught me how powerful explicit, multi-step AI reasoning can be when surfaced through thoughtful UI instead of hidden behind a chat interface. I also learned how to design AI systems that feel trustworthy by showing uncertainty, evidence, and evolution over time.
Most importantly, I learned how to frame AI not as an answer engine, but as a decision-support system for complex, real-world problems.
Built With
- gemini-3
- google-ai-studio