Inspiration

One of our team members has a relative with POTS, a condition that causes dizziness and fainting, especially after standing for long periods. Waiting in line at a concert shouldn't be a medical risk, but for people with chronic or episodic conditions, it often is. That personal experience drove us to build Vigil, and while POTS was our starting point, the real vision is broader: a scalable safety system that lets high-risk individuals participate in large events with confidence.

What it does

Vigil is a safety system designed for people with conditions like POTS who are attending large events. It combines a wearable medical assistance watch with computer vision monitoring to detect signs of potential medical distress.

The wearable collects data related to the user's condition, while the computer vision system monitors posture and movement patterns that may indicate dizziness or fainting. If Vigil detects a potential medical emergency, it can alert medical personnel so help can arrive faster.

Vigil helps reduce the risk of delayed response times and allows users to participate in crowded events with greater confidence.

How we built it

We built Vigil using a combination of wearable hardware and computer vision. We chose the Samsung Galaxy Watch 4 for its robust sensor suite and Wear OS support: the watch app monitors the wearer's physiological metrics, like heart rate, and can notify them directly in an emergency. But the real value is how fast Vigil gets information to real medical professionals.

The companion Android app runs on staff members' phones and handles BLE advertising, allowing nearby Raspberry Pi units to localize the user via RSSI and UUID without storing any personal identifiers.

On the vision side, a camera streams live video to a central server, where OpenCV feeds frames through a three-model pipeline: YOLOv8 Nano for person detection, a custom PyTorch LSTM for movement sequencing, and a classifier for anomaly categorization, all served via FastAPI. When the wearable and vision signals align on an emergency, an alert is pushed instantly to the staff-facing mobile app, where Gemini generates a real-time incident summary so medical personnel can respond without losing a second.
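To make the localization step concrete, here is a minimal sketch of what the Pi-side scanning could look like, assuming the Python `bleak` library. The service UUID, server endpoint, and Pi identifier are hypothetical placeholders, not values from our build.

```python
# Sketch of Pi-side BLE scanning: listen for Vigil advertisements and
# forward signal strength (RSSI) plus the anonymous UUID to the server.
# The UUID, endpoint, and Pi ID below are illustrative placeholders.
import asyncio

import httpx
from bleak import BleakScanner
from bleak.backends.device import BLEDevice
from bleak.backends.scanner import AdvertisementData

VIGIL_SERVICE_UUID = "0000feed-0000-1000-8000-00805f9b34fb"  # hypothetical
SERVER_URL = "http://vigil-server.local:8000/rssi"           # hypothetical
PI_ID = "pi-entrance-01"                                     # hypothetical

client = httpx.AsyncClient()

async def on_advertisement(device: BLEDevice, adv: AdvertisementData) -> None:
    # React only to advertisements carrying Vigil's service UUID; nothing
    # personally identifying is read or stored, just UUID + signal strength.
    if VIGIL_SERVICE_UUID not in adv.service_uuids:
        return
    await client.post(SERVER_URL, json={
        "pi_id": PI_ID,
        "uuid": VIGIL_SERVICE_UUID,
        "rssi": adv.rssi,  # closer to 0 dBm means the wearer is nearer
    })

async def main() -> None:
    scanner = BleakScanner(on_advertisement)
    await scanner.start()
    await asyncio.sleep(3600)  # keep scanning; loop indefinitely in practice
    await scanner.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

With several Pis posting readings like this, the server can estimate which unit the wearer is closest to by comparing RSSI values.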
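The vision side follows the three-stage structure described above. The sketch below is illustrative rather than our exact training setup: the LSTM's feature set, hidden size, and class labels are assumptions made for the example.

```python
# Sketch of the vision pipeline: YOLOv8 Nano finds people, an LSTM scores
# short sequences of their bounding-box motion, and a linear head labels
# the anomaly. Feature choices and labels here are illustrative.
from collections import deque

import cv2
import torch
import torch.nn as nn
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")  # YOLOv8 Nano; class 0 is "person" in COCO

class MovementLSTM(nn.Module):
    """Scores a sequence of per-frame box features for distress patterns."""
    def __init__(self, n_features: int = 4, hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # e.g. normal / dizzy / fallen

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)         # (batch, seq, hidden)
        return self.head(out[:, -1])  # classify from the final timestep

model = MovementLSTM().eval()
window = deque(maxlen=30)  # roughly one second of features at 30 fps

cap = cv2.VideoCapture(0)  # stand-in for the live event camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = detector(frame, classes=[0], verbose=False)[0]
    if len(result.boxes):
        # Use the most confident person box; a real deployment would
        # track identities across frames instead.
        x, y, w, h = result.boxes.xywhn[0].tolist()
        window.append([x, y, w, h])
    if len(window) == window.maxlen:
        seq = torch.tensor([list(window)], dtype=torch.float32)
        with torch.no_grad():
            label = model(seq).argmax(dim=1).item()  # feeds the alert logic
```

A sudden drop in box height and vertical position over a short window is the kind of pattern the sequence model learns to flag as a potential fall.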
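And a rough sketch of the alert path itself. The endpoint shape, payload fields, and staff-app dispatch are assumptions for illustration; the Gemini call uses the public `google-generativeai` client.

```python
# Sketch of the alert endpoint: when the wearable and vision signals agree,
# the server asks Gemini for a short incident summary and hands it to the
# staff-facing app. Fields and the dispatch step are illustrative.
import os

import google.generativeai as genai
from fastapi import FastAPI
from pydantic import BaseModel

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
llm = genai.GenerativeModel("gemini-1.5-flash")

app = FastAPI()

class Incident(BaseModel):
    uuid: str          # anonymous wearable identifier, never a name
    heart_rate: int    # latest reading relayed from the watch
    vision_label: str  # e.g. "fallen", from the vision pipeline
    nearest_pi: str    # strongest-RSSI Pi, i.e. approximate location

@app.post("/alert")
def alert(incident: Incident) -> dict:
    prompt = (
        "Write a two-sentence incident summary for event medical staff. "
        f"Wearer {incident.uuid} near {incident.nearest_pi}: heart rate "
        f"{incident.heart_rate} bpm, camera shows '{incident.vision_label}'."
    )
    summary = llm.generate_content(prompt).text
    # push_to_staff_app(summary)  # hypothetical dispatch to the mobile app
    return {"summary": summary}
```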

Challenges we ran into

Running real-time computer vision on a Raspberry Pi required significant optimization, and landing on YOLOv8 Nano was a deliberate tradeoff between speed and accuracy. Training the LSTM on our own recorded data took multiple iterations to generalize reliably. Fusing BLE and video signals to reduce false positives — rather than compound them — was a real systems challenge, and debugging across the watch, Pi, server, and mobile app simultaneously kept us busy throughout.
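To show what that fusion looks like in miniature, here is a toy version of one way to gate alerts on agreement between the two channels rather than on either alone; the thresholds and window are invented for the example and are not our tuned values.

```python
# Toy fusion rule: alert only when the wearable and vision channels both
# fire within a short window, so noise in one channel alone cannot trigger
# a false alarm. All thresholds below are made up for illustration.
import time

FUSION_WINDOW_S = 10.0   # both signals must land within this window
HR_SPIKE_BPM = 120       # illustrative wearable threshold
DISTRESS_LABELS = {"dizzy", "fallen"}

class AlertFuser:
    def __init__(self) -> None:
        self.last_hr_event: float | None = None
        self.last_vision_event: float | None = None

    def on_heart_rate(self, bpm: int) -> bool:
        if bpm >= HR_SPIKE_BPM:
            self.last_hr_event = time.monotonic()
        return self._both_recent()

    def on_vision(self, label: str) -> bool:
        if label in DISTRESS_LABELS:
            self.last_vision_event = time.monotonic()
        return self._both_recent()

    def _both_recent(self) -> bool:
        # Require agreement, not just either channel firing on its own.
        if self.last_hr_event is None or self.last_vision_event is None:
            return False
        return abs(self.last_hr_event - self.last_vision_event) <= FUSION_WINDOW_S
```

Requiring agreement trades a little sensitivity for far fewer false alarms; tuning that tradeoff without driving up missed detections was the hard part.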

Accomplishments that we're proud of

We built a functional end-to-end prototype spanning wearable hardware, edge computer vision, BLE localization, and a mobile staff interface, all integrated in real time. We're especially proud of training our own LSTM on original data and designing the system with real-world constraints in mind: affordable hardware, HIPAA compliance, and genuine scalability from the ground up.

What we learned

Building Vigil taught us how to integrate hardware, networking, machine learning, and mobile development into a single real-time system under serious time pressure. We also learned that designing for medical use cases shifts your priorities — when someone's safety is on the line, minimizing false negatives matters more than almost any other metric.

What's next for Vigil

Next, we want to improve the accuracy of our detection system and reduce false alerts. We also want to refine the wearable device and make the system easier to deploy at large events. Long term, we hope to expand Vigil to support more medical conditions and integrate directly with event safety systems and emergency response teams.

Built With

android, bluetooth-low-energy, fastapi, gemini, opencv, python, pytorch, raspberry-pi, wear-os, yolov8