Inspiration
During recent disasters like the Turkey-Syria earthquake and the war in Ukraine, we saw how language barriers cost precious time when every second matters. Emergency responders struggle to understand victims' handwritten notes or signs in unfamiliar languages. People die not because help isn't available, but because responders can't understand what's needed. That hit us hard, and we felt that today's technology should be able to solve it.
What it does
Crisis Response Translator takes a photo of any text (handwritten notes, medical prescriptions, injury descriptions, emergency signs), detects and extracts the text using AI vision, translates it into the responder's language, flags the urgency level (low/medium/high/critical) based on medical keywords, and suggests culturally appropriate responses.
How we built it
We started with React for a fast, responsive interface, since responders need speed. The core challenge was integrating Gemini's API: we spent hours iterating on prompts to reliably extract text from poor-quality photos. The urgency detection was also tricky. We couldn't just search for keywords, because context matters: "Need insulin" is critical, "take insulin daily" isn't. We engineered the AI prompt to understand medical context and cultural nuances. For example, pain descriptions vary across cultures, so the system considers both explicit keywords and implicit signals. We deployed on Vercel with proper error handling, because in emergencies errors aren't acceptable.
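To illustrate why plain keyword search fails, here is a simplified urgency heuristic. In the actual app this judgment is delegated to the Gemini prompt; the function name and keyword lists below are illustrative assumptions, not the production logic:

```javascript
// Sketch of context-aware urgency flagging (illustrative only).
// The real system asks the AI model to judge this; these keyword
// lists are assumptions for the example.
const MEDICAL_TERMS = ["insulin", "bleeding", "epipen", "oxygen"];
const LACK_SIGNALS = ["need", "no ", "out of", "ran out", "missing"];

function detectUrgency(text) {
  const t = text.toLowerCase();
  const mentionsMedical = MEDICAL_TERMS.some((w) => t.includes(w));
  if (!mentionsMedical) return "low";
  // A medical term plus a "lack" signal suggests an unmet critical need,
  // which is how "need insulin" and "take insulin daily" diverge.
  const lacks = LACK_SIGNALS.some((w) => t.includes(w));
  return lacks ? "critical" : "medium";
}
```

Even this toy version separates the two insulin sentences; the prompt-based approach generalizes the same idea to phrasing the lists could never cover.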
Challenges we ran into
- Handling diverse image formats was tough: some photos were HEIC from iPhones, others were low-resolution screenshots. We had to normalize everything to base64 JPEG and handle conversion errors gracefully.
- Getting consistent JSON responses from the AI was frustrating. Sometimes it added markdown formatting, sometimes it wrote explanations before the JSON.
- Testing was also hard. We don't have access to real disaster scenarios, so we created test images from Google Translate screenshots and handwritten notes in different languages. Not ideal, but better than nothing.
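One way to make parsing robust against fenced or prose-wrapped model output is to extract the first JSON object before calling `JSON.parse`. This is a sketch under the assumption that the response contains a single top-level object; the function name is ours, not from the project:

```javascript
// Pull a JSON object out of a model response that may be wrapped in
// markdown fences or surrounded by explanatory prose.
// Assumes the response contains one top-level {...} object.
function extractJson(raw) {
  // Prefer the contents of a ```json ... ``` fence if one is present.
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = fenced ? fenced[1] : raw;
  // Fall back to the outermost {...} span to drop leading/trailing prose.
  const start = candidate.indexOf("{");
  const end = candidate.lastIndexOf("}");
  if (start === -1 || end < start) {
    throw new Error("No JSON object found in model response");
  }
  return JSON.parse(candidate.slice(start, end + 1));
}
```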
Accomplishments that we're proud of
Getting the urgency detection to work feels huge. When we tested it with a handwritten note saying "diabetic, no insulin, 2 days" in Italian, it correctly flagged it as CRITICAL and extracted the medical context. That's when we knew this could actually save someone's life. The app also feels snappy: once an image is uploaded, the analysis is fast because we optimized the API calls. We're also proud that it handles 10+ languages properly: not just translation, but cultural context. The suggested responses adapt to the target culture.
What we learned
We learned so much about prompt engineering. It's basically programming, but with words instead of code: small changes to how we phrase the prompt dramatically affect output quality. We learned to be very specific and to give the AI examples. API integration is harder than tutorials make it seem. Real-world APIs have rate limits, inconsistent responses, and documentation that's sometimes wrong, so you need robust error handling for everything. We also learned about CORS policies and why you can't just call any API from the browser; understanding the actual HTTP requests helped us debug issues much faster.
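As a concrete illustration of "be specific and give the AI examples", a prompt builder can pin down the exact output format with one worked example for the model to imitate. Everything below (the function name, the field names, the Italian example) is a hypothetical sketch, not the prompt we actually ship:

```javascript
// Hypothetical few-shot prompt builder: states the required output
// format explicitly and shows one worked example.
function buildFewShotPrompt(extractedText) {
  return [
    "You are a crisis translation assistant for emergency responders.",
    "Return ONLY a JSON object with keys: translation, urgency, suggestedResponse.",
    'Example input: "insulina finita da 2 giorni"',
    'Example output: {"translation":"out of insulin for 2 days","urgency":"critical","suggestedResponse":"Locate insulin supplies immediately."}',
    `Input: "${extractedText}"`,
    "Output:",
  ].join("\n");
}
```

The "ONLY a JSON object" constraint plus a worked example is what cut down the markdown-wrapped and prose-padded responses for us.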
What's next for Crisis Translator
We're planning to add offline capability using service workers and a cached model for basic translation when the internet is down; disaster zones often lose connectivity. We also want voice input: responders could speak their language, have it translated, and show the text to victims. This is technically possible with browser APIs. Image preprocessing to enhance quality before sending to the AI would improve accuracy. In the long term, we're thinking about a mobile app with an offline-first architecture and integration with existing emergency response systems. The dream is to make this open source so other developers can adapt it for specific regions or disasters. Every disaster is different (earthquake vs. flood vs. conflict), and the system should adapt.
This project started as a hackathon idea but honestly, we want to keep building it. Language shouldn't be a barrier when lives are at stake.
Built With
- geminiapi
- javascript
- react
- restapi
- tailwindcss
- vercel