Inspiration
We were inspired to create Rage Gauge because aggressive driving is a factor in more than half of all fatal car crashes in the US, yet most road rage goes completely untracked: we realized there's simply no way to measure it.
Fiona recently experienced a near crash, largely caused by her Uber driver's road rage. This made us think about how emotionally charged driving is, and how little awareness most drivers have of their own stress levels behind the wheel. Ronald also mentioned how many people talk to themselves and cycle through emotions while they drive, which led us to imagine an app that understands your emotional state and helps you become a calmer, safer driver over time.
What it does
Rage Gauge is a passive commute wellness app that runs quietly in the background while you drive. Using your phone's microphone, it detects stress signals in real time: screams, swear words, honks, and smacks. Gemini processes the audio stream to produce a rage analysis at the end of your trip, which builds your emotional fingerprint over time.
After you drive, you get a full recap: a map of where your stress spiked, your rage count, and how your emotional state behind the wheel compares to your friends', letting you check in on the friends showing the most road rage.
As an additional feature, we included a social layer where friends can record encouraging voice clips that play automatically when your rage hits certain thresholds, to help you calm down. The goal is for you to download the app because you have road rage, and to keep using it until you don't need it anymore.
How we built it
We built Rage Gauge as a React Native app using Expo, so it runs on both iOS and Android from a single codebase and supports a landscape driving mode that mimics a CarPlay-style interface when your phone is mounted horizontally. Firebase powers the backend and stores per-trip rage events tagged with GPS coordinates, which feed the live map. Firebase Auth handles sign-in and friend connections.
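As an illustration of how a per-trip rage event might be tagged with GPS coordinates before being written to Firebase, here is a minimal TypeScript sketch. The field names and score range are our assumptions, not the actual Rage Gauge schema; in the app, the resulting object would be persisted with Firestore so the live map can query it.

```typescript
// Hypothetical shape of a per-trip rage event (field names are assumptions).
interface RageEvent {
  tripId: string;
  kind: "scream" | "swear" | "honk" | "smack";
  rageScore: number;   // assumed 0–100 scale from the analysis step
  lat: number;
  lng: number;
  timestamp: number;   // Unix ms
}

// Build an event from a detection plus the latest GPS fix; the app would
// then persist it to Firestore, where the live map reads it back.
function makeRageEvent(
  tripId: string,
  kind: RageEvent["kind"],
  rageScore: number,
  coords: { lat: number; lng: number },
): RageEvent {
  return {
    tripId,
    kind,
    // Clamp to the assumed 0–100 range so bad inputs can't skew the map.
    rageScore: Math.min(100, Math.max(0, rageScore)),
    lat: coords.lat,
    lng: coords.lng,
    timestamp: Date.now(),
  };
}
```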
Audio is captured continuously in the background using Expo's audio APIs and streamed to Gemini, which handles real-time sentiment analysis and audio classification, detecting emotional tone, swear words, and decibel spikes. We iterated heavily on our prompts to get consistent, calibrated rage scores out of the model. ElevenLabs generates and serves the default encouraging voice clips, with different emotional tones.
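One piece of the calibration work above can be sketched in TypeScript: turning the model's text response into a consistent rage score. The JSON shape here (`{ rage, events }`) is our assumption about what the prompt asks Gemini to emit, not a Gemini API contract, and the clamping mirrors the kind of normalization that keeps scores comparable across trips.

```typescript
// Parse a model response into a calibrated rage score. The expected JSON
// shape is an assumption about the prompt's requested output format.
function parseRageAnalysis(modelText: string): { rage: number; events: string[] } {
  try {
    const parsed = JSON.parse(modelText);
    const raw = typeof parsed.rage === "number" ? parsed.rage : 0;
    return {
      // Round and clamp so every trip lands on the same 0–100 scale.
      rage: Math.min(100, Math.max(0, Math.round(raw))),
      events: Array.isArray(parsed.events) ? parsed.events : [],
    };
  } catch {
    // Malformed model output: fail closed with a neutral score.
    return { rage: 0, events: [] };
  }
}
```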
We used Figma and Figma Make throughout the design process to prototype screens, test the driving UI layout, and iterate quickly before writing a single line of UI code. It saved us hours of back-and-forth on the driving mode layout in particular.
Challenges we ran into
Live audio processing was our biggest technical challenge. Getting continuous microphone input to stream reliably to Gemini without dropping frames or burning through API credits took lots of trial and error, since processing live audio requires a specific Gemini model and isn't covered by the Gemma model family. We hit our Google Cloud quota several times and had to get creative with batching to stay within our limits.
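The batching approach can be sketched as a small accumulator that only fires one analysis call per full batch of audio chunks. The threshold and flush callback here are illustrative, not our exact values: the point is that N chunks cost one API call instead of N.

```typescript
// Minimal quota-friendly batching sketch: accumulate audio chunks and flush
// to the (injected) analysis call only when a batch fills up.
class ChunkBatcher {
  private batch: Uint8Array[] = [];

  constructor(
    private maxChunks: number,                        // illustrative threshold
    private flush: (chunks: Uint8Array[]) => void,    // e.g. one Gemini call
  ) {}

  add(chunk: Uint8Array): void {
    this.batch.push(chunk);
    if (this.batch.length >= this.maxChunks) {
      this.flush(this.batch);
      this.batch = []; // start fresh: one API call per maxChunks chunks
    }
  }
}
```

A partial batch at trip end would still need a final manual flush; we omit that here for brevity.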
The hardest non-technical challenge was a design question: how can we build an app about road rage without accidentally encouraging it through gamification? We spent a lot of time rethinking the incentive structure to promote calm, safe driving and making sure every feature pushed users toward better behavior rather than dramatic outbursts.
Gemini's audio models also required careful prompt engineering. Getting the model to reliably distinguish between angry venting and normal conversation, or between a car horn and a door slam, meant writing detailed, example-heavy prompts and testing them against a lot of edge cases.
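To show the style of prompt we mean, here is a short example-heavy classification prompt as a TypeScript constant. It is illustrative of the approach, not our production prompt; the labels and few-shot examples are made up for this sketch.

```typescript
// Illustrative few-shot classification prompt (not the production prompt).
// Concrete labeled examples help the model separate similar-sounding events:
// horn vs. door slam, angry venting vs. ordinary conversation.
const RAGE_CLASSIFIER_PROMPT = `
You are classifying short in-car audio clips. Respond with JSON only:
{"label": "<one of: angry_venting, conversation, horn, door_slam, other>", "rage": <0-100>}

Examples:
- Sustained shouting with profanity aimed at another driver -> {"label": "angry_venting", "rage": 85}
- Calm phone call about dinner plans -> {"label": "conversation", "rage": 5}
- Short loud tone coming from outside the cabin -> {"label": "horn", "rage": 40}
- Single sharp thud followed by silence -> {"label": "door_slam", "rage": 10}
`.trim();
```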
Accomplishments that we're proud of
We're really proud that we built a functional app in under 48 hours and got hands-on with new AI tooling: Gemini's audio sentiment analysis producing real rage scores from real microphone input, and ElevenLabs, which we used for the first time.
We're also proud of how seriously we took the ethical side of the project. It would have been easy to just build the fun parts and ship, but instead we kept interrogating our own design choices, which allowed us to create something we think will be useful to many people not only in our own lives, but also beyond.
What we learned
We came in with different skill levels and left having all learned something new. ElevenLabs was new to most of us, and Gemini's audio capabilities made us weigh the pros and cons of various models. The API is more capable than we expected, but it also requires more careful prompting than text models.
Figma Make changed how we approached design iteration. Being able to prototype the driving UI layout and test it in landscape mode before writing any code meant we caught layout problems early and didn't waste build time on screens that didn't work.
We also learned how to keep our users front and center by thinking through user stories and pain points, prototyping, and constantly asking whether each feature served the person driving the car.
What's next for Rage Gauge
As a next step, we would love to integrate our app with CarPlay. Right now the landscape mode on a mounted phone gets you most of the way there, but an actual CarPlay entitlement would let us build an integrated experience with no phone handling required.
We also want to expand the safety audio layer. Siren detection for deaf and hard-of-hearing drivers is a feature we believe in deeply and want to build out properly, with customizable alerts and vibration patterns.
On the social side, there's a lot more to explore: group road trips with shared rage scores, route recommendations based on your historical calm-drive data, and monthly Wrapped-style recaps of your emotional patterns behind the wheel.
Long term, the most interesting version of this app is one where it puts itself out of a job, where your rage scores trend down week over week until you don't need the nudges anymore.