VIGILANT: Vigilant Worker Protection and Efficiency Measure System

Real-Time Safety, Powered by Arm AI at the Edge.

About the Project

  1. The Inspiration: Why Edge AI is a Life-Critical Necessity

The drive to create VIGILANT stems from a singular, critical truth: the rising rate of coal mine accidents and the persistent threat to miner health demand a fundamental change in safety monitoring. Traditional systems are often reactive, relying on environmental sensors located far from the worker.

We recognized that true safety requires a paradigm shift: deploying personalized, intelligent monitoring directly on the miner, with processing executed by powerful, optimized Arm-based edge hardware. Our goal was to create a proactive, predictive, and life-critical system that could detect health anomalies (like extreme fatigue or heatstroke risk) and environmental hazards ($\text{CH}_4$, $\text{CO}_2$) in milliseconds, not minutes. This is not just data logging; this is life-saving, sub-second decision-making.

  2. How We Built It: Life-Critical Edge Computing

VIGILANT is built as a three-tiered, fully integrated system, strategically leveraging the Arm architecture for maximum performance and power efficiency.

Tier 1: Sensor Fusion and Data Acquisition

Wearable Wristband: Continuously monitors critical biometric data (Heart Rate, $\text{SpO}_2$, Body Temperature).

Embedded Environmental Sensors: Gathers localized data (Methane $\text{CH}_4$ and $\text{CO}_2$ concentration).

Data Flow: Sensor data is streamed via Bluetooth Low Energy (BLE) to the central processing unit.
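The wristband-to-edge link can be illustrated with a small packet decoder. The 10-byte frame layout below (field order, widths, and scaling factors) is an assumption made for illustration, not the actual VIGILANT frame format:

```python
import struct

# Hypothetical 10-byte wristband packet layout (an assumption, not the
# actual VIGILANT frame format):
#   uint32  timestamp_ms     (little-endian)
#   uint16  heart_rate_bpm
#   uint16  spo2_pct_x10     (SpO2 * 10)
#   int16   body_temp_c_x10  (degrees C * 10)
PACKET_FMT = "<IHHh"
PACKET_SIZE = struct.calcsize(PACKET_FMT)  # 10 bytes

def parse_packet(raw: bytes) -> dict:
    """Decode one BLE notification payload into engineering units."""
    ts, hr, spo2, temp = struct.unpack(PACKET_FMT, raw)
    return {
        "timestamp_ms": ts,
        "heart_rate_bpm": hr,
        "spo2_pct": spo2 / 10.0,
        "body_temp_c": temp / 10.0,
    }

# Example: a packet encoding t=1000 ms, HR=72 bpm, SpO2=97.5 %, 36.8 degC
raw = struct.pack(PACKET_FMT, 1000, 72, 975, 368)
print(parse_packet(raw))
```

Fixed-point scaling (storing SpO2 and temperature as integers times ten) keeps the over-the-air payload small, which matters on a low-power BLE link.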

Tier 2: The Edge Processing Unit (Arm AI Core)

The core of our innovation lies here, running on the STM32MP135-FDK (Arm Cortex-A7 for Linux/OS and Cortex-M4 for Real-Time tasks).

Dual-Core Strategy: We divided tasks between the high-level application processor (A7) and the deterministic real-time processor (M4), optimizing for speed and reliability.

Machine Learning Model: We implemented a two-stage anomaly detector built on a recurrent neural network composed of Gated Recurrent Units (GRUs). The model is trained on normal miner activity to predict the next expected sensor reading ($\hat{y}_t$).

Anomaly Detection: The deviation $E_t = |y_t - \hat{y}_t|$ is calculated. A sustained deviation triggers a Critical Alert (e.g., potential heat exhaustion).
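The sustained-deviation logic can be sketched in a few lines. The threshold and window values below are illustrative assumptions, and the GRU prediction $\hat{y}_t$ is supplied externally:

```python
from collections import deque

class SustainedDeviationDetector:
    """Raises a Critical Alert when the deviation E_t = |y_t - y_hat_t|
    stays above `threshold` for `window` consecutive samples.
    Threshold and window values here are illustrative assumptions."""

    def __init__(self, threshold: float, window: int):
        self.threshold = threshold
        self.recent = deque(maxlen=window)

    def update(self, y_t: float, y_hat_t: float) -> bool:
        e_t = abs(y_t - y_hat_t)                # deviation E_t
        self.recent.append(e_t > self.threshold)
        # Alert only once every sample in a full window exceeds the threshold
        return len(self.recent) == self.recent.maxlen and all(self.recent)

det = SustainedDeviationDetector(threshold=5.0, window=3)
readings = [(72, 71), (90, 72), (95, 73), (99, 74)]  # (y_t, y_hat_t) pairs
alerts = [det.update(y, yh) for y, yh in readings]
print(alerts)  # [False, False, False, True]
```

Requiring a full window of violations is what suppresses one-off sensor glitches: a single transient spike never alerts, while a sustained run (e.g. a climbing heart rate) does.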

Arm Optimization: To guarantee ultra-low latency ($< 100 \text{ ms}$), the trained model was converted and optimized using TensorFlow Lite for Microcontrollers (TFLite-Micro). The model was highly compressed using Post-Training Quantization (INT8), reducing the memory footprint $M$ and boosting inference speed on the Arm cores:

$$M_{\text{INT8}} \approx \frac{1}{4} M_{\text{FP32}}$$
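To make the reduction above concrete, here is the arithmetic for a hypothetical 50,000-parameter model (an illustrative figure, not the actual VIGILANT model size):

```python
# Illustrative footprint arithmetic: float32 weights take 4 bytes each,
# int8 weights take 1 byte each, giving the roughly 4x reduction above.
params = 50_000          # hypothetical parameter count
m_fp32 = params * 4      # ~195 KiB of weights at float32
m_int8 = params * 1      # ~48 KiB of weights at int8
print(m_fp32 // 1024, "KiB ->", m_int8 // 1024, "KiB")
assert m_int8 * 4 == m_fp32
```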

The dedicated Cortex-M4 core handles real-time sensor processing, demonstrating peak optimization for the Arm architecture.

Tier 3: Communication and Supervisory Oversight

Networking: The edge unit connects to a resilient underground LoRa/Wi-Fi Mesh network.

Dashboard: Critical alerts and aggregated worker status are transmitted to the supervisor's mobile device (also Arm-based, ensuring end-to-end compatibility).

Communication: The system supports inter-team emergency broadcasts and two-way voice communication.

  3. Learnings and Technical Deep Dive

Our journey provided deep insights into deploying AI at the extreme edge:

Mastering Heterogeneous Arm Architectures: We gained essential proficiency in utilizing the dual-core structure of the STM32MP135, allocating critical, time-sensitive tasks to the Cortex-M4 and general processing to the Cortex-A7. This skill is paramount for any successful Arm-based embedded solution.

Extreme Model Optimization: We mastered the intricacies of quantization and pruning, realizing that a smaller, faster model deployed locally is far superior to a larger model relying on cloud processing in a safety-critical context. The performance gain from INT8 quantization on the Arm core was significant.

Sensor Fusion for Reliability: We learned to apply Kalman Filtering in C++ to align the time-stamped, asynchronous data from multiple sensors. This reduced noise and significantly increased the reliability of our ML predictions, making the system viable for real-world deployment.
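The core of the technique can be shown with a one-dimensional sketch (our implementation is in C++ and fuses multiple channels; the process-noise and measurement-noise parameters `q` and `r` below are illustrative assumptions):

```python
class Kalman1D:
    """Scalar Kalman filter: smooths one noisy sensor channel.
    q (process noise) and r (measurement noise) are illustrative values."""

    def __init__(self, x0: float, p0: float = 1.0, q: float = 0.01, r: float = 0.5):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z: float) -> float:
        # Predict: state unchanged, uncertainty grows by the process noise
        self.p += self.q
        # Update: blend prediction and measurement via the Kalman gain
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = Kalman1D(x0=72.0)
noisy = [72.4, 71.1, 73.2, 90.0, 72.6]  # heart-rate samples, one outlier
smoothed = [round(kf.update(z), 1) for z in noisy]
print(smoothed)
```

Note how the outlier at 90.0 is heavily damped rather than passed straight through: this is exactly what keeps spurious spikes out of the GRU's input sequence.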

  4. Challenges Faced and Overcome

Latency Constraint: The hardest requirement was achieving sub-$100 \text{ ms}$ detection-to-alert latency. We initially struggled with network transmission delays. The solution was to perform all ML inference directly on the STM32MP135-FDK edge device (Tier 2). This move eliminated transmission latency for primary detection, cementing our project as a true edge-AI solution and delivering the required speed.

Wireless Synchronization: Synchronizing the time-stamped data from the wristband (BLE) and the environmental sensors (direct connect) was complex. The custom Kalman filter implementation was key, ensuring that the input sequence to the GRU model was accurate and cohesive, preventing false alarms.

Built With: Technology Stack

VIGILANT is a full-stack, edge-optimized solution demonstrating proficiency across multiple platforms.

Our system is built around the STM32MP135-FDK edge processor, which runs a Custom Embedded Linux distribution on its Arm Cortex-A7 core. The real-time sensor polling and firmware for the Arm Cortex-M4 core were developed in C, utilizing the STM32 HAL/LL libraries for bare-metal control. The main application logic and data processing pipeline are implemented in high-performance C++.

The machine learning pipeline began with model training in Python using TensorFlow/Keras to develop the GRU Neural Network anomaly detector. The resulting model was then optimized for edge deployment using Post-Training Quantization (INT8) and deployed via TensorFlow Lite for Microcontrollers (TFLite-Micro) or Arm NN for accelerated inference.

For data integrity, we implemented Kalman Filtering (in C++) for robust sensor fusion and noise reduction. Communication relies on low-power, resilient protocols: Bluetooth Low Energy (BLE) for short-range sensor data, and a combination of LoRa/Wi-Fi Mesh running the MQTT protocol for reliable underground data transmission. Data persistence on the edge device is managed using a lightweight SQLite database.
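Edge-side persistence can be sketched with Python's built-in sqlite3 module. The single-table schema below is a simplified assumption for illustration, not the actual on-device schema:

```python
import sqlite3

# Minimal sketch of edge-side persistence. The schema is an assumption;
# on the device the database would live at a file path, not in memory.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        ts_ms     INTEGER NOT NULL,
        worker_id TEXT    NOT NULL,
        channel   TEXT    NOT NULL,  -- e.g. 'hr', 'spo2', 'ch4'
        value     REAL    NOT NULL
    )
""")
rows = [
    (1000, "W-01", "hr",  72.0),
    (1000, "W-01", "ch4", 0.8),
    (2000, "W-01", "hr",  95.0),
]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?, ?)", rows)
conn.commit()

# A supervisor dashboard typically needs the latest value per channel:
latest_hr = conn.execute(
    "SELECT value FROM readings WHERE channel='hr' ORDER BY ts_ms DESC LIMIT 1"
).fetchone()[0]
print(latest_hr)  # 95.0
```

A narrow append-only table like this also buffers readings cleanly whenever the mesh uplink drops, so nothing is lost between MQTT publishes.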

PPT: https://docs.google.com/presentation/d/1ngrNm8pFrWkohm86gqdeL-KAPwSdxn9B/edit?usp=sharing&ouid=109250164866477233856&rtpof=true&sd=true

Documentary: https://drive.google.com/file/d/1Ywi8UQjEs4ol_R3i7qv39Oy6BN0_XMJW/view?usp=drive_link



Updates


Exciting News from the Vigilant Worker System Team! My journey began with the core mission to harmonize worker safety and operational efficiency using smart technology. I recently deployed v1.1, which introduced a critical Geo-Fence Violation Detection feature, leveraging a custom quad-tree implementation to give supervisors instant alerts when workers enter high-risk zones without authorization, significantly enhancing protective measures.

Building on this foundation, I've just celebrated the official launch of Vigilant 2.0 on both the iOS and Android app stores! This major release brings a complete UI overhaul, offering a simplified Dashboard for quick status checks and an Efficiency Map that provides privacy-preserving, aggregated insights into team workflow. Check out the clean new interface on the app stores now, where you can easily use features like One-Tap Clock-In/Out and monitor real-time safety status.

Furthermore, this system, also known as the SafeMineAI System, integrates wearable sensors for continuous health and environmental monitoring, with data processed by an STM32MP135-FDK unit. I'm proud that Vigilant 2.0 fully incorporates the CNN model for object detection and the time-series algorithm for predictive health analysis, fulfilling our initial objective of using IoT and Computer Vision to enhance proactive hazard detection and worker safety in challenging environments like mining.
