Inspiration
Camera-based monitoring can be scary when you don't have control over what's being collected, so we sought to solve the problem by being fully transparent.
What it does
Real-time fatigue detection for truckers. Computer vision tracks eye openness and yawns; Nemotron reasons over the data and fires proportional alerts, on-device and privately.
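Eye-openness tracking of this kind is commonly done with an Eye Aspect Ratio (EAR) over facial landmarks. The sketch below is illustrative only: the landmark ordering, threshold, and values are assumptions for demonstration, not SafeShift's actual pipeline.

```python
import math

def ear(eye):
    """Eye Aspect Ratio from six (x, y) landmarks around one eye, ordered p1..p6.

    p1/p4 are the horizontal corners; p2/p6 and p3/p5 are vertical pairs.
    The ratio drops toward zero as the eyelid closes.
    """
    p1, p2, p3, p4, p5, p6 = eye

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Average vertical opening divided by horizontal eye width.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Illustrative threshold: sustained frames below this suggest eyes closing.
EAR_CLOSED = 0.2

# Toy landmark set for a wide-open eye (coordinates are made up).
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
print(ear(open_eye) > EAR_CLOSED)
```

In a real setup the six points would come from MediaPipe Face Mesh landmark indices for each eye, smoothed over a few frames before comparing against the driver's baseline.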
How we built it
MediaPipe vision feeds a Nemotron-Super ReAct agent with tools for baseline lookup and intervention dispatch. A companion agent adds human-sounding alerts. Everything runs locally.
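A ReAct agent with tools boils down to a dispatch loop: the model emits a structured tool call, the runtime executes the matching local function, and the result is fed back. A minimal sketch, with the tool names mirroring the ones described above; the registry shape, JSON contract, and stub data are all assumptions, not the real Nemotron interface:

```python
import json

# Illustrative persistent store of per-driver baselines (made-up values).
BASELINES = {"driver_42": {"ear_mean": 0.31, "yawn_rate": 0.8}}

def lookup_baseline(driver_id):
    """Tool: fetch a driver's stored baseline, with a generic default."""
    return BASELINES.get(driver_id, {"ear_mean": 0.30, "yawn_rate": 1.0})

def dispatch_intervention(level):
    """Tool: fire a proportional alert (stubbed as a string here)."""
    return f"alert fired at level {level}"

# Tool registry the dispatch loop consults by name.
TOOLS = {
    "lookup_baseline": lookup_baseline,
    "dispatch_intervention": dispatch_intervention,
}

def run_tool(call_json):
    """Execute one model-emitted tool call of the form {"tool": ..., "args": {...}}."""
    call = json.loads(call_json)
    return TOOLS[call["tool"]](**call["args"])

print(run_tool('{"tool": "lookup_baseline", "args": {"driver_id": "driver_42"}}'))
print(run_tool('{"tool": "dispatch_intervention", "args": {"level": 2}}'))
```

Keeping the tools as plain local functions is what lets the whole loop run on-device with no biometric data leaving the vehicle.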
Challenges we ran into
Null model responses broke our parser. Memory imports failed mid-build after a refactor. Git diverged twice during live merges. Yawn detection produced false positives.
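The null-response failure mode is typically fixed by guarding the parser so an empty or malformed model reply degrades to a safe no-op instead of crashing the loop. A hedged sketch of that pattern; the `noop` fallback and call shape are illustrative assumptions:

```python
import json

# Safe default returned whenever the model reply is unusable (assumed shape).
FALLBACK = {"tool": "noop", "args": {}}

def parse_agent_reply(raw):
    """Parse a model reply into a tool-call dict, falling back on None/garbage."""
    if not raw:  # None or empty string from the model
        return dict(FALLBACK)
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:  # model emitted non-JSON text
        return dict(FALLBACK)
    if not isinstance(call, dict) or "tool" not in call:
        return dict(FALLBACK)  # JSON, but not a tool call
    return call

print(parse_agent_reply(None))
print(parse_agent_reply('{"tool": "dispatch_intervention", "args": {"level": 1}}'))
```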
Accomplishments that we're proud of
Full autonomous multi-agent pipeline in 24 hours. Reasoning chain visible in the dashboard. Driver baselines persist across shifts. No biometric data ever leaves the vehicle.
What we learned
Visible reasoning builds trust in safety-critical agents. Multi-agent contracts must be defined early. Persistent memory fundamentally changes how an agent makes decisions.
What's next for SafeShift
We need to connect our system to brev.dev (it's already compatible; we just need to set it up) and explore potential improvements to the model.