Inspiration
Automated hazard detection is useful in many scenarios, particularly on construction sites. We wanted to build a project around this idea that helps improve workplace safety by automatically identifying hazards and recording important information about them.
What it does
HAZEL (Hazard Assessment & Zone Enforcement Limb) turns a robotic arm into an autonomous hazard-aware inspection system for real-world environments like construction sites and indoor safety-critical spaces. The arm continuously scans with an onboard camera, detects unsafe conditions (for example missing PPE, proximity to machinery, or other visual anomalies), and automatically repositions to keep important targets centered in view. At each event, the system captures evidence (image + overlay + timestamped metadata), incorporates live Arduino sensor readings (light, IR, sound, gas, distance, risk), and stores it as a point of interest (POI) in a sparse 3D representation of the scene.
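The "automatically repositions to keep important targets centered" behavior above can be sketched as a simple visual-servoing loop: measure how far the detection's bounding-box center is from the frame center, then nudge the arm's pan/tilt by a proportional step. This is a minimal illustration, not HAZEL's actual control code; the gain and deadband values are assumptions.

```python
def centering_offset(bbox, frame_w, frame_h):
    """Return (dx, dy): the detection center's offset from the frame
    center, normalized to [-1, 1] on each axis."""
    x1, y1, x2, y2 = bbox
    cx = (x1 + x2) / 2.0
    cy = (y1 + y2) / 2.0
    dx = (cx - frame_w / 2.0) / (frame_w / 2.0)
    dy = (cy - frame_h / 2.0) / (frame_h / 2.0)
    return dx, dy

def pan_tilt_step(dx, dy, gain=5.0, deadband=0.05):
    """Map the normalized offset to small pan/tilt corrections (degrees).
    Inside the deadband the arm stays still to avoid jitter; gain and
    deadband are illustrative values, not tuned constants."""
    pan = -gain * dx if abs(dx) > deadband else 0.0
    tilt = gain * dy if abs(dy) > deadband else 0.0
    return pan, tilt
```

In practice a loop like this runs once per frame: detect, compute the offset, command a small correction, repeat until the target sits near the center.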
How we built it
For hardware, we used AMD's LeRobot arm, several sensors (light, IR, sound, gas, and distance), and a camera. For software, we used Python and incorporated a pre-trained YOLO model for construction-site hazard detection. Whenever the model detects an object, the camera captures a screenshot and we record metadata for the image, including all of the sensor readings and an estimate of the detection's 3D position relative to the robot arm. At the end, each image and its metadata is fed into a vision-language model (VLM) via the Gemini API to generate a natural-language summary of the images.
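The per-detection record described above (screenshot path, sensor readings, estimated 3D position, timestamp) might look like the following sketch. The field names and schema are assumptions for illustration, not HAZEL's actual data model.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class PointOfInterest:
    """One captured hazard event; field names are illustrative."""
    label: str            # YOLO class name, e.g. "no_helmet" (hypothetical)
    confidence: float     # detector confidence score
    image_path: str       # saved screenshot with detection overlay
    position_xyz: tuple   # estimated 3D position relative to the arm base (m)
    sensors: dict         # latest Arduino readings at capture time
    timestamp: float = field(default_factory=time.time)

    def to_json(self) -> str:
        """Serialize the record for storage and later VLM summarization."""
        return json.dumps(asdict(self))

# Example record with made-up values:
poi = PointOfInterest(
    label="no_helmet",
    confidence=0.87,
    image_path="captures/poi_0001.jpg",
    position_xyz=(0.42, -0.10, 0.35),
    sensors={"light": 512, "ir": 0, "sound": 300, "gas": 120, "distance_cm": 88},
)
```

Storing each event as a small JSON record like this makes it easy to feed the image plus its metadata to the VLM as a single prompt later.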
Challenges we ran into
Our main challenges were setting up the hardware, 3D printing a case for all of the sensors and attaching it to the robot arm, and bridging the gap between hardware and software, as none of us had substantial hardware experience.
Accomplishments that we're proud of
We are proud of getting all the sensors working, 3D printing a case for them, and integrating everything in software to build a prototype of something that could be genuinely useful in a construction zone.
What we learned
We learned how to read in sensor data and control the robot arm from software, as well as how to use a YOLO model and the Gemini API.
What's next for HAZEL
Next steps include better 3D point estimation, a web interface to visualize the points (along with a 3D recreation of the scene), detecting more types of events (e.g., fire/smoke), avoiding repeat captures of the same scenario, and actively exploring the environment for areas that haven't been inspected yet. It could also be cool to add more sophisticated sensors that better characterize the environment (e.g., 3D sound detection) and to let the robot arm move to a point of interest and take a photo.