Inspiration
To save lives and minimize harm by delivering immediate, intelligent, and personalized safety guidance when every second counts.
What it does
When users are unable to contact emergency services directly, AlertHub will be their immediate resource for clear, step-by-step instructions on what to do next. Our aspiration is for AlertHub to assist users in relaying vital details to relevant emergency services when they are otherwise unable, ensuring help is dispatched effectively.
How we built it
The chatbot relies primarily on a Large Language Model to generate dynamic situation reports, answer queries, and draft messages, synthesizing information from its training and the prompts it receives. It supports multiple languages and uses a React front end for an easy-to-use interface.
Contextualization: the user's language and self-reported location (city or postal code) are fed into the AI to localize responses. Prompts are specifically designed to request:
- Weather alerts and natural disaster warnings.
- Critical public health emergencies (e.g., disease outbreaks such as yellow fever).
- Shelter availability and safety guidance.
Direct user input (e.g., descriptions of local hazards) informs AI queries for features like hazard impact assessment. Standard interactions such as initial greetings, language selection, and some fixed prompts use pre-defined application logic for speed and consistency.
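The contextualization step above could be sketched roughly as follows. This is an illustrative example only; the names (buildSituationPrompt, UserContext) are assumptions, not AlertHub's actual code.

```typescript
// Hypothetical sketch: fold user language, location, and optional hazard
// notes into a single localized prompt for the LLM.
interface UserContext {
  language: string;    // e.g. "es"
  location: string;    // city or postal code, e.g. "94103"
  hazardNote?: string; // optional free-text hazard description from the user
}

function buildSituationPrompt(ctx: UserContext): string {
  const lines = [
    `Respond in language code "${ctx.language}".`,
    `The user is located at: ${ctx.location}.`,
    "Include, where relevant:",
    "- weather alerts and natural disaster warnings",
    "- critical public health emergencies (e.g. disease outbreaks)",
    "- shelter availability and safety guidance",
  ];
  if (ctx.hazardNote) {
    // User-supplied hazard descriptions feed the hazard impact assessment.
    lines.push(`User-reported hazard to assess: ${ctx.hazardNote}`);
  }
  return lines.join("\n");
}
```

Keeping the prompt assembly in one pure function like this makes the localization behavior easy to test without calling the model.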
Challenges we ran into
Challenge: LLMs can generate incorrect or "hallucinated" information, which is highly risky in an emergency context.
Approach: We implemented strict persona guardrails, instructed the AI to rely only on (simulated) reliable sources, and carefully defined the bot's scope to avoid unverified claims.
Challenge: Calls to advanced AI models can introduce delays, yet in emergencies users need information rapidly.
Approach: We used efficient LLM models (e.g., Gemini Flash), optimized interaction flows, and provided UI feedback (e.g., a "typing..." indicator) to manage user expectations during processing.
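A persona guardrail can be split into two parts: a fixed system instruction sent with every request, and a lightweight client-side scope check that refuses requests the bot must never handle. The sketch below is a simplified assumption of how this could look, not AlertHub's actual implementation.

```typescript
// Illustrative guardrail sketch: constrain the persona via a system
// instruction, and screen obviously out-of-scope requests before they
// ever reach the model.
const SYSTEM_INSTRUCTION = [
  "You are an emergency-preparedness assistant.",
  "Use only information from reliable public sources; never invent alerts.",
  "Do not give medical diagnoses or medication dosing advice.",
  "If unsure, direct the user to contact local emergency services.",
].join(" ");

// Hypothetical patterns for requests outside the bot's defined scope.
const OUT_OF_SCOPE = [/dosage/i, /diagnos/i, /prescri/i];

function isOutOfScope(userMessage: string): boolean {
  return OUT_OF_SCOPE.some((re) => re.test(userMessage));
}
```

A pattern check like this is crude on its own, but combined with the system instruction it gives two independent layers against the model straying into medical advice.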
Accomplishments that we're proud of
- A Functional End-to-End Prototype: We successfully built a working AI assistant that demonstrates our core vision from initial contact to delivering complex guidance.
- Deep, Multi-Faceted AI Integration: We went beyond basic Q&A, using the Gemini API for diverse, high-impact tasks like situation reports, safety checklists, and message drafting.
- An Empathetic, Action-Oriented UI: The design anticipates user stress, offering practical tools like one-click safety checklists and "I'm Safe" message templates.
- A Foundation Built for Global Scale: The multilingual framework was successfully integrated from day one, proving the model's potential for global accessibility.
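A one-click "I'm Safe" template like the one mentioned above can be generated without any model call at all, which keeps it instant under stress. The function name and fields here are illustrative assumptions:

```typescript
// Hypothetical sketch of a one-click "I'm Safe" message template.
// No LLM round-trip: the template must work instantly and offline.
function imSafeMessage(name: string, location: string, when: Date): string {
  // Format as "YYYY-MM-DD HH:MM" in UTC.
  const time = when.toISOString().slice(0, 16).replace("T", " ");
  return (
    `${name} checked in safe near ${location} at ${time} UTC. ` +
    "This is an automated AlertHub safety message."
  );
}
```

Pre-defined templates like this follow the same principle as the fixed greetings and language-selection flows: deterministic logic where speed and reliability matter more than flexibility.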
What we learned
- Prompting is Paramount: The AI's accuracy is directly tied to the precision of our prompts. Explicitly requesting details like public health alerts proved essential.
- Simulating Real-Time is a Skill: We learned to effectively guide the LLM to simulate access to current information from trusted sources like ReliefWeb.
- Crisis UX Demands Simplicity: In an emergency, users need immediate, actionable tools. One-click buttons are more effective than open-ended questions.
- Safety Requires Strong Guardrails: Defining what the AI cannot do (e.g., give medical advice) is just as critical as defining what it can.
What's next for AlertHub Global
- Live Data: Integrate directly with official APIs (e.g., National Weather Services, GDACS, WHO, local emergency feeds) for live, verified disaster, weather, and health alert data.
- Expansion: Implement opt-in push notifications for critical alerts in users' registered locations, even when the app is not actively in use.
- New Channels: Offer AlertHub services on social platforms like WhatsApp and Instagram, making it more convenient for users.
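The planned opt-in notifications would need a matching step between incoming feed alerts and each user's registered locations. A minimal sketch, with entirely hypothetical types and names:

```typescript
// Hypothetical sketch: match live feed alerts against a user's registered
// regions before sending a push. Alert shape is an assumption, not any
// official feed's schema.
interface Alert {
  region: string;
  severity: "info" | "warning" | "critical";
  title: string;
}

function alertsForUser(alerts: Alert[], registeredRegions: string[]): Alert[] {
  const regions = new Set(registeredRegions.map((r) => r.toLowerCase()));
  // Only warn-level and critical alerts should trigger a push.
  return alerts.filter(
    (a) => regions.has(a.region.toLowerCase()) && a.severity !== "info",
  );
}
```

Filtering out "info"-level items keeps push notifications reserved for alerts that actually warrant interrupting the user.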

