The AI Helmet project was inspired by the need to improve situational awareness and rider safety for two-wheeler users, who often face unpredictable traffic conditions and limited reaction time. We wanted to explore whether a tiny, low-power edge device could deliver real-time road sign detection directly from a helmet-mounted perspective, without relying on a phone, the cloud, or external infrastructure. This led us to build an end-to-end, fully on-device system using the Arduino Nicla Vision and Edge Impulse, creating a practical prototype that can alert riders instantly when critical signs appear in their field of view.
To build the system, we collected and labeled data from a helmet-like viewpoint, trained a compact FOMO MobileNetV2 (0.35) model with the Edge Impulse workflow, and deployed it through OpenMV/MicroPython for real-time inference. Along the way, we learned how to optimize TinyML models for resource-constrained hardware, design efficient data pipelines, and integrate multi-modal alerts such as on-screen overlays, RGB LED indicators, and buzzer feedback. The biggest challenge was achieving fast, accurate detection within the Nicla Vision's limited memory and compute budget; through quantization and architectural tuning, we arrived at a model that fits in ~111 KB of flash, runs at ~19.6 fps, and performs reliably in real-world scenarios. The result is a functional prototype of an intelligent, context-aware helmet that enhances rider awareness directly at the edge.
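To give a feel for how the multi-modal alerts hang together, here is a minimal, hardware-agnostic sketch of the alert-routing step: mapping FOMO detections to LED and buzzer feedback with a confidence threshold and a per-class cooldown so the rider is not buzzed on every frame. All names here (`CLASS_ALERTS`, `AlertRouter`, the callback signatures) are illustrative rather than the project's actual API; on the Nicla Vision the two callbacks would wrap the RGB LED and buzzer drivers inside the OpenMV capture loop.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# Hypothetical mapping from detected sign class to (LED colour, buzz?).
CLASS_ALERTS: Dict[str, Tuple[str, bool]] = {
    "stop": ("red", True),
    "speed_limit": ("yellow", False),
    "pedestrian_crossing": ("yellow", True),
}

@dataclass
class AlertRouter:
    set_led: Callable[[str], None]   # e.g. wraps the RGB LED driver
    buzz: Callable[[], None]         # e.g. pulses the buzzer
    min_confidence: float = 0.6
    cooldown_frames: int = 10        # frames to wait before re-alerting a class
    _last_alert: Dict[str, int] = field(default_factory=dict)
    _frame: int = 0

    def on_frame(self, detections) -> List[str]:
        """detections: iterable of (label, confidence) pairs from the model.

        Returns the list of sign classes that triggered an alert this frame.
        """
        self._frame += 1
        fired = []
        for label, conf in detections:
            if conf < self.min_confidence or label not in CLASS_ALERTS:
                continue
            last = self._last_alert.get(label, -self.cooldown_frames)
            if self._frame - last < self.cooldown_frames:
                continue  # still in cooldown for this sign class
            colour, should_buzz = CLASS_ALERTS[label]
            self.set_led(colour)
            if should_buzz:
                self.buzz()
            self._last_alert[label] = self._frame
            fired.append(label)
        return fired
```

On-device, `on_frame` would be called once per inference pass with the model's detections; keeping the decision logic separate from the I/O callbacks makes it easy to tune thresholds off-hardware.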