SafetyBuddy – Hackathon Story
Inspiration
We started with a simple but serious question: what happens when a student is in distress but can’t reach for help?
Panic apps exist, but they depend on someone pressing a button or dialing a number. In a real emergency, that isn’t always possible. That’s what inspired us to build SafetyBuddy — an AI-powered system that can listen, understand, and act automatically, turning a cry for help into immediate support.
What We Learned
- Keyword detection alone is not enough: raising an SOS every time someone said “help” would have drowned supervisors in false alarms.
- By combining speech recognition, stress analysis, and emotion/context detection, we were able to filter out noise and focus only on real emergencies.
- LLMs became more than just chatbots. We used them to:
  - Summarize incidents for supervisors into clear daily/weekly reports.
  - Surface patterns across multiple SOS events.
  - Highlight the most urgent cases so supervisors could focus where it mattered.
How We Built It
Audio Pipeline
- Standardized every recording with ffmpeg.
- Extracted stress features (pitch, RMS, tempo) using Librosa.

### AI Models
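The models below hand work to each other in sequence: transcription first, then the two text-level classifiers. A minimal sketch of that hand-off, with plain callables standing in for the real checkpoints (the stand-in functions and their scores are illustrative, not the project's actual models):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AudioSignals:
    """Signals extracted from one recording."""
    transcript: str
    emotion_conf: float   # 0..1, from the emotion model
    context_score: float  # 0..1, from the context model

def analyze(audio_path: str,
            transcribe: Callable[[str], str],
            emotion: Callable[[str], float],
            context: Callable[[str], float]) -> AudioSignals:
    # Each stage consumes the previous stage's output:
    # a Whisper-style transcription feeds both text models.
    text = transcribe(audio_path)
    return AudioSignals(text, emotion(text), context(text))

# Placeholder stand-ins for the real checkpoints (illustrative only):
signals = analyze(
    "recording.wav",
    transcribe=lambda _: "please help me",
    emotion=lambda t: 0.9 if "help" in t else 0.1,
    context=lambda t: 0.8,
)
```

Keeping each model behind a plain callable made it easy to swap checkpoints without touching the pipeline.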
- Whisper → transcribes speech.
- DistilRoBERTa → detects emotions like fear or sadness.
- DistilBERT → analyzes context (casual phrase vs genuine distress).

### Risk Scoring

We created a weighted scoring system that blends all signals:

Risk = 30K + 35E + 25C + S

Where:
- K = keyword match
- E = emotion confidence
- C = context score
- S = stress weight

### Alert Flow
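The weighted blend above is what drives the decision to raise an alert. A small sketch of that step (the 60-point threshold is an assumption, not the project's actual cutoff):

```python
def risk_score(k: float, e: float, c: float, s: float) -> float:
    """Blend the four signals: Risk = 30K + 35E + 25C + S."""
    return 30 * k + 35 * e + 25 * c + s

def should_alert(risk: float, threshold: float = 60.0) -> bool:
    # Threshold is illustrative; only scores above it escalate.
    return risk >= threshold

# A confident keyword match with strong emotion and context:
high = risk_score(k=1.0, e=0.9, c=0.8, s=5.0)  # 86.5 → alert
# A casual "help" with weak emotion and context:
low = risk_score(k=1.0, e=0.1, c=0.1, s=0.0)   # 36.0 → no alert
```

Weighting emotion and context above the raw keyword match is what keeps casual “help” mentions below the alert line.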
- Log the event in Supabase (transcript, location, risk level).
- Send alerts from the student’s verified email to their emergency contacts.
- Escalate the case to the supervisor’s Gmail for oversight.
- Follow up with the student via an automated email linking to a secure form.

### LLM Analytics
- Summarize raw incident logs into human-readable reports.
- Provide trend insights (recurring emotions, hotspots, frequency).
- Ensure supervisors act on verified incidents, not noise.
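Before the LLM can summarize anything, the raw incident logs have to be flattened into a prompt. A sketch of that step (the field names mirror what gets logged to Supabase per the alert flow above; the prompt wording itself is illustrative):

```python
def build_report_prompt(incidents: list[dict]) -> str:
    """Flatten incident logs into a summarization prompt.

    Each incident carries the transcript, location, and risk
    level that the alert flow stores in Supabase.
    """
    lines = [
        f"- [{i['risk']}] at {i['location']}: {i['transcript']}"
        for i in incidents
    ]
    return (
        "Summarize these SOS incidents for a supervisor. "
        "Highlight recurring emotions, hotspots, and the most "
        "urgent cases.\n" + "\n".join(lines)
    )

prompt = build_report_prompt([
    {"risk": "HIGH", "location": "Dorm B", "transcript": "please help"},
    {"risk": "LOW", "location": "Cafeteria", "transcript": "help me pick"},
])
```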
Challenges
- False Positives → Early versions triggered on casual “help” mentions; solved by combining multiple signals.
- Email Roles → Had to carefully separate student vs supervisor email flows for alerts, escalations, and follow-ups.
- Deployment → Installing and running ffmpeg with AI models on Render required Docker/static binaries.
- Performance → Running three AI models + stress detection slowed responses; fixed with chunking and model optimization.
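The chunking fix amounts to splitting long recordings into fixed windows so each model call stays small. A sketch of the window math (the 30-second window is illustrative, not the project's actual setting):

```python
def chunk_spans(total_seconds: float,
                window: float = 30.0) -> list[tuple[float, float]]:
    """Return (start, end) spans covering the whole recording.

    The final span is clipped to the recording length, so no
    audio is dropped and no span runs past the end.
    """
    spans, start = [], 0.0
    while start < total_seconds:
        spans.append((start, min(start + window, total_seconds)))
        start += window
    return spans

# A 75-second recording becomes three chunks:
# (0–30 s, 30–60 s, and a final 15 s remainder).
```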
Takeaway
SafetyBuddy became more than just a hackathon project — it proved that AI can be both empathetic and practical.
- It listens when students can’t speak for themselves.
- It responds in real time with alerts and escalations.
- It helps supervisors with clear, actionable insights instead of overwhelming noise.

Our biggest takeaway? Technology can be a true safety net, turning scattered distress signals into real intervention and giving students the reassurance that someone is always listening.
Built With
- distilbert
- distilroberta
- docker
- dotenv
- fastapi
- ffmpeg
- github
- huggingface
- javascript
- librosa
- llms
- numpy
- openaiwhisper
- python
- railway
- sendgrid
- subprocess
- supabase
- twilio
- typescript