Inspiration

Last year, I became the victim of a terrifying road rage incident that changed my perspective on driving safety forever. After an accidental lane change, an enraged driver followed me, pulled up beside my car, and began screaming threats. When I stopped at a red light, he got out of his vehicle and started pounding on my car, yelling aggressively while I sat frozen in fear. I had a dashcam recording everything, but it didn't help me in that moment: I didn't know whether to drive away, call the police, or try to de-escalate. That experience haunted me. I realized that while dashcams record incidents, they don't protect us when we need help most. What if AI could act as a calm, intelligent co-pilot that detects dangerous situations and guides drivers through them safely? This personal trauma inspired me to create the Road Rage Assistant, so no one else has to face that terror alone.

What it does

Road Rage Assistant is an AI-powered safety system that transforms dashcam footage into actionable protection. It analyzes video in real time to detect aggressive driving behaviors such as tailgating, brake checking, and threatening gestures. When an incident is detected, it immediately provides calming audio guidance to help the driver stay safe and make smart decisions. After the journey, it automatically generates a comprehensive incident report with timestamps, threat assessments, and police-ready documentation, turning raw footage into evidence that can protect drivers legally and financially.

How I built it

I built a three-stage AI pipeline around Google's Gemini vision model, wrapped in a Flask web application that streams live updates to the browser over Server-Sent Events. The perception agent processes sampled dashcam frames to identify incidents with precise timestamps and threat levels. The de-escalation agent uses natural language generation and text-to-speech to produce personalized safety guidance. Finally, the post-incident agent synthesizes all the data into structured reports. I containerized the application with Docker and implemented multiple deployment options (Render, Railway, VPS) to keep it accessible for various use cases.
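The core of the design is that each agent hands structured data to the next. Here is a minimal sketch of that hand-off; the class names, the `Incident` fields, and the stub detection logic are illustrative placeholders, not the actual Gemini-backed implementation:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Incident:
    timestamp_s: float
    behavior: str       # e.g. "tailgating", "brake checking"
    threat_level: str   # "low" | "medium" | "high"

class PerceptionAgent:
    """Stage 1: turn sampled frames into structured incidents.
    A real version would call a vision model per frame; this stub
    just flags frames whose label mentions aggression."""
    def analyze(self, frames):
        return [Incident(t, label, "high")
                for t, label in frames if "aggressive" in label]

class DeEscalationAgent:
    """Stage 2: turn an incident into calming guidance text (which
    the real system feeds to text-to-speech)."""
    def guide(self, incident):
        return (f"At {incident.timestamp_s:.0f}s, {incident.behavior} "
                "was detected. Stay calm, keep your doors locked, "
                "and maintain distance.")

class PostIncidentAgent:
    """Stage 3: synthesize incidents and guidance into one report."""
    def report(self, incidents, guidance):
        return json.dumps({
            "incident_count": len(incidents),
            "incidents": [asdict(i) for i in incidents],
            "guidance_log": guidance,
        }, indent=2)

def run_pipeline(frames):
    """Chain the three agents over (timestamp, label) frame pairs."""
    perception = PerceptionAgent()
    deescalation = DeEscalationAgent()
    post = PostIncidentAgent()
    incidents = perception.analyze(frames)
    guidance = [deescalation.guide(i) for i in incidents]
    return post.report(incidents, guidance)
```

Keeping the interfaces this narrow is what lets each agent be swapped or improved independently of the others.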

Challenges I ran into

Processing video with AI models proved challenging—balancing frame rates for accuracy versus API costs required careful optimization. Generating natural-sounding, context-aware safety guidance that felt helpful rather than robotic took numerous prompt iterations. Managing real-time streaming updates to the frontend while running heavy background processing without blocking the UI required threading and careful state management. I also wrestled with making the system fast enough for real-time use while maintaining high accuracy in threat detection across diverse driving scenarios and lighting conditions.
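The frame-rate-versus-cost trade-off ultimately comes down to capping how many frames ever reach the vision API. One way to sketch that, with the function name and budget parameter being my own illustration rather than the project's actual code:

```python
def sample_frames(total_frames: int, fps: float,
                  max_api_calls: int) -> list[tuple[int, float]]:
    """Pick evenly spaced (frame_index, timestamp_s) pairs so the
    number of vision-API calls never exceeds max_api_calls, no
    matter how long the clip is."""
    if total_frames <= 0 or max_api_calls <= 0:
        return []
    # Short clips: analyze every frame; long clips: spread the
    # budget evenly across the whole recording.
    n = min(total_frames, max_api_calls)
    step = total_frames / n
    indices = [int(i * step) for i in range(n)]
    return [(idx, idx / fps) for idx in indices]
```

A fixed budget like this makes API cost predictable per upload, at the price of possibly skipping a very brief event between samples.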

Accomplishments that I'm proud of

I'm proud of creating a complete end-to-end system that actually works—from video upload to polished reports in minutes. The real-time audio guidance feels genuinely helpful and calming, something that could have made a real difference during my own terrifying experience. My modular architecture means each agent can be improved independently, and the web interface provides a smooth, professional experience that makes complex AI processing feel simple. Most importantly, I validated that AI can be a practical safety tool for everyday drivers, transforming my traumatic experience into something that could help protect others.

What I learned

I discovered that prompt engineering for safety-critical applications requires extreme care—the tone and timing of guidance can significantly impact driver behavior. I learned to optimize video processing by strategically sampling frames rather than analyzing every single one. Working with multimodal AI models taught me the importance of structured output formats for downstream processing. I also gained deep insights into real-time web architectures, particularly around streaming updates and background job management in Python. Perhaps most importantly, I learned that building AI systems for human safety demands thoughtful design beyond just technical capability—it needs to address the real fear and uncertainty people feel in dangerous situations.
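One concrete form of the structured-output lesson is validating the model's JSON before it reaches downstream agents, since a multimodal model occasionally returns prose or malformed records. A sketch under the assumption that the model is prompted to emit a JSON list of incident objects (the field names here are illustrative, not the project's actual schema):

```python
import json

REQUIRED_FIELDS = {"timestamp_s", "behavior", "threat_level"}
THREAT_LEVELS = {"low", "medium", "high"}

def parse_model_output(raw: str) -> list[dict]:
    """Defensively parse the model's reply: drop malformed entries
    instead of letting them crash the report-generation stage."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    incidents = []
    for item in data if isinstance(data, list) else []:
        if (isinstance(item, dict)
                and REQUIRED_FIELDS <= item.keys()
                and item["threat_level"] in THREAT_LEVELS):
            incidents.append(item)
    return incidents
```

Rejecting bad records at this boundary means every later agent can trust its input shape, which is what makes the pipeline composable.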

What's next for Road Rage Assistant

My vision is to integrate directly with dashcam hardware for true real-time detection and warnings during drives, not just post-analysis. I plan to add driver behavior tracking to identify patterns and provide personalized safety coaching. Expanding the system to detect other hazards like distracted driving, drowsiness, or mechanical failures would make it a comprehensive safety assistant. I'm exploring partnerships with insurance companies who could offer discounts for users with verified safe driving records. Long-term, I envision a community safety network where anonymized incident data helps identify dangerous roads and recurring problem areas, making driving safer for everyone—so what happened to me never has to happen to anyone else.
