About the Project

Inspiration
During disasters such as floods, cyclones, and heatwaves, information is widely available, but clarity is not. Alerts are often written in technical language, forwarded messages spread fear without context, and many safety instructions assume digital literacy and prior knowledge. For people in vulnerable communities, this confusion often leads to delayed or incorrect decisions.
This project was inspired by a simple question: when an alert arrives, how does a person know what it actually means for them? I wanted to build a system that does not just repeat information, but interprets it, personalizes it, and turns it into clear actions that people can realistically follow under stress.
What I Built
Am I Safe? is a Gemini-powered, multimodal disaster safety assistant that transforms alerts into actionable guidance.
The system allows users to upload real-time photos of their surroundings, paste or screenshot SMS alerts and forwarded messages, and provide text or voice input in multiple languages, including English, Hindi, Malayalam, Bengali, and Telugu. Users can also share contextual information such as their location type (coastal, urban, or rural), housing conditions, and time of day.
Using this information, the system assesses personalized risk and explains what the situation means in simple terms. It provides ordered, step-by-step guidance on what to do immediately, what to prepare, and what to avoid. It includes an Anti-Rumor Trust Score to evaluate the credibility of forwarded messages and an Emergency Explain-to-Family mode that generates calm, ready-to-share messages to reduce panic among loved ones. The system automatically adapts its tone when a user sounds distressed and produces offline-friendly summaries for low-connectivity situations.
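To make the Anti-Rumor Trust Score concrete, here is a minimal, standalone heuristic sketch. In the actual project the credibility assessment is done by Gemini through prompting, not hand-written rules; the function name, signals, and weights below are illustrative assumptions only.

```python
# Illustrative sketch of an "Anti-Rumor Trust Score" heuristic.
# The real system evaluates credibility via Gemini reasoning; this
# standalone version only shows the kind of signals such a score weighs.

def trust_score(message: str) -> int:
    """Return a 0-100 credibility score for a forwarded alert."""
    score = 100
    text = message.lower()
    # Chain-letter urgency phrasing is a common rumor marker.
    for red_flag in ("forward to everyone", "share immediately",
                     "they are hiding", "100% confirmed"):
        if red_flag in text:
            score -= 25
    # Messages with no attributable source lose credibility.
    if not any(src in text for src in ("imd", "ndma", "district collector",
                                       "municipal", "official")):
        score -= 20
    # Excessive exclamation marks suggest panic messaging.
    score -= min(message.count("!"), 5) * 5
    return max(score, 0)
```

A sourced, calmly worded alert scores high, while an unsourced "forward this now!!!" message scores near zero, which is the behavior the feature aims for.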
How I Built It
The project is built entirely using Google AI Studio with Gemini 3, without relying on external backend infrastructure.
Gemini’s multimodal capabilities are used to reason over images, text, and voice inputs simultaneously. The model interprets abstract alerts, analyzes visual evidence from photos, adapts responses based on user context and vulnerability, and generates outputs in the same language and format used by the user. Prompt design focuses on safety, uncertainty awareness, and calm communication, ensuring that the system avoids exaggeration while remaining honest and practical.
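The prompt-design goals described above can be sketched as a small prompt builder. This is a hypothetical reconstruction: the rule wording, function names, and context fields are my own illustrative choices, not the project's actual system prompt (which lives inside Google AI Studio).

```python
# Hypothetical prompt-construction sketch reflecting the stated design
# goals: safety, uncertainty awareness, calm tone, and language matching.
# All names and wording here are assumptions for illustration.

SAFETY_RULES = (
    "Stay calm and factual. Never exaggerate risk.\n"
    "State uncertainty explicitly instead of making absolute claims.\n"
    "Give numbered steps grouped as: do now, prepare, avoid.\n"
    "Answer in the same language and format the user used."
)

def build_prompt(alert_text: str, user_context: dict) -> str:
    """Combine the alert and the user's situation into one grounded prompt."""
    context = ", ".join(f"{k}: {v}" for k, v in user_context.items())
    return (
        f"{SAFETY_RULES}\n\n"
        f"Alert received by the user:\n{alert_text}\n\n"
        f"User context ({context}).\n"
        "Explain what this alert means for THIS user and list ordered actions."
    )
```

Keeping the safety rules in a fixed preamble, with the alert and per-user context appended afterward, is one simple way to ensure every response is both grounded in the alert and personalized.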
Challenges:
One of the main challenges was preventing panic amplification. In emergency situations, overly detailed or alarmist responses can increase fear rather than help. This was addressed by designing structured, step-by-step outputs, automatic tone adjustment for distressed users, and clear explanations of uncertainty instead of absolute claims.
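As a rough illustration of the tone-adjustment idea, here is a minimal keyword-based sketch. The deployed system infers distress through Gemini's reasoning over the user's words and voice, not a fixed word list; the markers and threshold below are assumptions.

```python
# Minimal sketch (assumed keyword heuristic) of distress detection that
# switches the response style; the real system infers tone via Gemini.

DISTRESS_MARKERS = ("help", "scared", "trapped", "can't", "please hurry")

def detect_distress(user_message: str) -> bool:
    """Treat two or more distress markers as a distressed user."""
    text = user_message.lower()
    return sum(marker in text for marker in DISTRESS_MARKERS) >= 2

def choose_style(user_message: str) -> str:
    # Distressed users get short, reassuring steps; others get full detail.
    if detect_distress(user_message):
        return "short reassuring steps"
    return "detailed guidance"
```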
Another challenge was balancing technical sophistication with accessibility. The system needed to work for users with limited literacy, limited connectivity, and high emotional stress, without oversimplifying critical safety information.
What I Learned
This project reinforced the idea that effective AI is not about displaying intelligence, but about reducing cognitive load at critical moments. Working with Gemini 3 highlighted how powerful multimodal reasoning can be when combined with responsible design, multilingual support, and a strong focus on real-world constraints.
Most importantly, I learned that AI can function as public infrastructure during emergencies, acting as a calm, reliable guide that helps people make safer decisions when it matters most.
Built With
- context-aware-ai-reasoning
- google-ai-studio
- google-gemini-3
- multilingual-natural-language-processing
- prompt-engineering
- speech-to-text
- text-to-speech