Inspiration
As part of our community volunteering project, we regularly visit elderly residents living alone. It was through this work that the problem first became personal to us.
One of the aunties we grew close to fell at home one day and broke her hip. Unable to get up, she called out for help from the floor and waited a long time before anyone came; her door had to be broken down to reach her. She described feeling terrified and helpless, alone on the ground. The fall left her hospitalised for months, and even after discharge it took a long time before she felt confident enough to go outside on her own again.
That experience stayed with us. It raised an important question: how many other elderly individuals living alone have faced, or will face, the same thing? More critically, what could we do about it?
We realised the problem was not just the fall itself. It was the wait: the time spent feeling helpless, and the broken response chain between a fall happening and help actually arriving. That became the problem we set out to solve.
What it does
ARGUS addresses the precise clinical moment where existing solutions fail: when a person has fallen but cannot call for help. Edge agentic AI detection bridges this critical gap with zero user burden.

ARGUS monitors an elderly person living alone through a single camera, running silently in the background 24/7 without requiring any wearable or interaction from the patient. When a fall happens, it sends an immediate Telegram alert to the family: a personalised message that explains the risk based on the patient's specific medical history, what likely caused the fall, and exactly what the caregiver should do next. It then watches the person for 20 seconds. If they get back up, the family is reassured. If they don't move, ARGUS escalates, calling emergency services.

Beyond falls, ARGUS learns the patient's daily routine over the first few days. If the patient is unusually inactive during a time they're normally active, the family receives a welfare check, catching potential problems before they become emergencies.

All of this happens on-device. No footage is uploaded, no data leaves the home, and no cloud subscription is required, making it accessible and privacy-safe for real home deployment in Singapore.
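As a rough sketch of this alert-then-escalate flow, the snippet below uses Telegram's standard Bot API over HTTP. The token and chat IDs are placeholders, the 20-second loop is simplified, and in the real system the message text comes from the on-device LLM rather than a fixed string; this illustrates the protocol, not our production code.

```python
# Minimal sketch of the alert -> 20 s watch -> escalate flow.
# BOT_TOKEN, FAMILY_CHAT_ID and EMERGENCY_CHAT_ID are placeholders.
import time
import requests

BOT_TOKEN = "..."           # Telegram bot token (placeholder)
FAMILY_CHAT_ID = "..."      # next-of-kin chat (placeholder)
EMERGENCY_CHAT_ID = "..."   # emergency contact / clinician (placeholder)

def send_telegram(chat_id: str, text: str) -> None:
    """Send a message through Telegram's Bot API."""
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": chat_id, "text": text},
        timeout=10,
    )

def handle_fall(contextual_alert: str, person_recovered) -> None:
    """Alert the family immediately, watch for 20 s, escalate if needed."""
    send_telegram(FAMILY_CHAT_ID, contextual_alert)
    deadline = time.time() + 20
    while time.time() < deadline:
        if person_recovered():          # fed by the recovery predictor
            send_telegram(FAMILY_CHAT_ID,
                          "Update: they are moving and getting back up.")
            return
        time.sleep(0.5)
    send_telegram(EMERGENCY_CHAT_ID,
                  "No movement 20 s after a detected fall. Please respond.")
```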
How we built it
We took inspiration from the codebases of existing fall detection systems. From there, we modified the code and enhanced the model to address key elements we felt were lacking in other systems: privacy, accuracy, and responsiveness.
With these elements in mind, we designed the ARGUS system to run through five layers.

In the first layer, the computer vision layer, every frame from the infrared (IR) camera is analysed by MediaPipe, which draws a skeleton of 33 body landmarks. From that skeleton, 8 features are extracted: hip_y, ankle_y, head_y, body_height, spine_angle, hip_velocity, head_velocity, and landmarks_in_frame. From these, 4 event flags are derived: sudden_collapse, rapid_descent, abnormal_posture, and prolonged_immobility.

In the risk engine layer, the extracted features are fed into a Random Forest classifier trained on the UP-Fall detection dataset (Universidad Panamericana, Mexico), which covers falls in different directions as well as non-fall activities. The classifier outputs a fall probability score that is combined with a rule-based score; we achieved 83% accuracy on our held-out tests. Because UP-Fall was recorded with Inertial Measurement Unit (IMU) sensors while ARGUS uses an IR camera, we built a feature bridge that maps pose features into the IMU feature space, so the Random Forest operates on video-derived inputs.

The next layer is the Markov Decision Process (MDP). Instead of relying on simple thresholds, the MDP observes signals over time across a sliding window of frames and computes 4 scores: floor, how horizontal the person is (0 = standing, 1 = flat); duration, how long they have been in this state; conf, the CV layer confidence; and spine, the body tilt. These are weighted together into a combined score which determines the risk state. The patient's medical history adjusts the sensitivity; for example, a patient with Parkinson's and osteoporosis escalates faster than a healthy person.

In the interpretation layer, when a fall is detected, a local Llama 3.2 model runs and reads the fall detection data as well as the patient's medical history. Instead of simply alerting the caregiver to the fall, it generates a contextualised alert explaining the risk, the likely cause, and what the caregiver should do.

The last layer is the recovery predictor. After the initial fall alert is sent, the system watches the person for 20 seconds, scanning for rolling or shifting motions (hip movement > 0.10) and attempts by the patient to recover to a safer position, i.e. pushing up with the arms (body height increasing > 0.12) and the spine recovering toward vertical (angle change > 20°). If recovery is detected, the Telegram bot notifies the next-of-kin that the person is recovering; if no recovery is observed after 20 seconds, the Telegram bot escalates to the preset emergency services or clinician.
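To make the scoring and recovery logic above concrete, here is a minimal sketch. Only the recovery thresholds (hip movement > 0.10, body height increase > 0.12, spine angle change > 20°) come from our description; the weights, normalisations, and all names are illustrative assumptions rather than the exact production values.

```python
# Minimal sketch of the sliding-window risk score and recovery predictor.
# Weights and normalisations are illustrative assumptions; only the
# 0.10 / 0.12 / 20-degree recovery thresholds come from the write-up.
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    hip_y: float          # normalised hip height (assume 0 = top of frame, 1 = bottom)
    body_height: float    # normalised head-to-ankle extent
    spine_angle: float    # degrees of tilt from vertical
    confidence: float     # CV-layer pose confidence, 0..1

def combined_risk(window: list, seconds_in_state: float,
                  sensitivity: float = 1.0) -> float:
    """Weight the four MDP signals (floor, duration, conf, spine) into one
    score in [0, 1]. sensitivity > 1.0 models a high-risk medical history,
    e.g. Parkinson's plus osteoporosis escalating faster."""
    latest = window[-1]
    floor = min(1.0, latest.hip_y)                # hips near the floor plane
    duration = min(1.0, seconds_in_state / 30.0)  # saturates after 30 s (assumed)
    conf = latest.confidence
    spine = min(1.0, latest.spine_angle / 90.0)   # 0 = upright, 1 = flat
    score = 0.35 * floor + 0.25 * duration + 0.20 * conf + 0.20 * spine
    return min(1.0, score * sensitivity)

def recovered(before: FrameFeatures, after: FrameFeatures) -> bool:
    """Recovery predictor: rolling/shifting, pushing up, or straightening."""
    rolling = abs(after.hip_y - before.hip_y) > 0.10
    pushing_up = (after.body_height - before.body_height) > 0.12
    straightening = (before.spine_angle - after.spine_angle) > 20.0
    return rolling or pushing_up or straightening
```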
We believe that patients, their families, and their physicians should have access to such data, in the hope that it will aid future fall prediction and prevention. We coded our model to feed information into a mobile application that presents the data. The homepage of our app displays real-time risk levels along with graphs plotting historical risk, so caregivers or family members can open the app and instantly understand the patient's current status without any technical knowledge. The graphs also surface any high-risk moments, even when no alert was sent. Another page, live metrics, displays the live skeleton overlay drawn on the person using MediaPipe's 33 body landmarks, connected in real time. This gives caregivers visual confirmation that the model is actively tracking the patient, lets clinicians assess the patient's posture, and, because only infrared skeleton data is shown rather than raw video, maintains privacy. A third page, weekly data, includes summary statistics showing the total falls detected and recoveries for the week, as well as patterns that reveal the patient's natural routine. This allows families to spot gradual changes in routine and gives clinicians and caregivers a more comprehensive, longitudinal view.
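To illustrate the kind of records the pipeline could feed these app pages, here is a sketch of a per-sample risk record and a weekly summary. The field names and shapes are our own assumptions for illustration, not the app's actual schema.

```python
# Sketch of the records the detection pipeline might feed the app.
# Field names and structure are illustrative assumptions.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class RiskSample:
    timestamp: float      # unix time of the sample
    risk: float           # combined MDP risk score, 0..1
    state: str            # e.g. "normal", "warning", "fall"
    alert_sent: bool      # False for high-risk moments with no alert

@dataclass
class WeeklySummary:
    falls_detected: int
    recoveries: int
    active_hours_by_day: dict  # learned daily routine, e.g. {"Mon": 6.5}

sample = RiskSample(timestamp=time.time(), risk=0.12,
                    state="normal", alert_sent=False)
print(json.dumps(asdict(sample)))  # what the homepage risk graph consumes

week = WeeklySummary(falls_detected=1, recoveries=1,
                     active_hours_by_day={"Mon": 6.5, "Tue": 7.0})
print(json.dumps(asdict(week)))    # what the weekly data page consumes
```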
Challenges we ran into
As high school students with limited prior experience in coding and AI, the learning curve was steep. Understanding the technical literature, from machine learning architectures to LLM integration, required significant independent research on top of an already demanding academic schedule. In the early stages of development, much of our code was non-functional, and troubleshooting without a strong foundational background made progress slow and at times discouraging. Balancing the demands of this project alongside our studies required deliberate time management and a lot of late nights.
While it was relatively straightforward to build a fall detection model, ensuring its reliability in varied, real-life environments, such as different lighting conditions, occlusions, and non-fall movements, proved significantly more complex, as false positives could cause unnecessary panic while false negatives could be dangerous.
Accomplishments that we're proud of
The moment our code finally ran was a turning point for the team. What had seemed insurmountable became proof that we could figure it out, and that optimism carried us through the rest of the build. Today, we have a working prototype that we are proud of and continuing to develop.
Beyond the technical milestone, we are particularly proud of the contextualisation of our alert system. Rather than sending a generic notification, our system connects to an app that delivers context-rich, LLM-generated alerts, synthesising the patient’s medical history, lifestyle habits, and sensor data into a meaningful, actionable message for caregivers. We believe this is a genuinely novel approach that addresses a gap no current solution has meaningfully solved.
What we learned
Designing this system required a careful sensitivity to the needs of vulnerable users, particularly elderly individuals who may experience physical frailty, discomfort with surveillance, or difficulty navigating complex technology, as well as caregivers who often operate under significant stress and rely on timely reassurance. Empathy, in this context, meant making deliberate design choices to minimise friction, such as using simple alert systems and familiar communication platforms like Telegram or SMS, while also avoiding the presentation of excessive or confusing data. At the same time, it was important to ensure that the system felt supportive rather than intrusive, preserving the user’s sense of dignity and autonomy while still providing meaningful assistance.
What's next for ARGUS
Phase I: Clinical Legitimisation. Run an IRB-approved hospital study to generate real fall-event data, and integrate with the National Electronic Health Record.

Phase II: Government Recognition. Secure government co-funding through eldercare subsidy infrastructure to make the device affordable at scale.

Phase III: B2C Pilot in Singapore. Establish an online store presence and set up a referral-to-purchase loop.

Phase IV: Product Maturation. Refine the model improvement flywheel and introduce supplementary features.

Phase V: Asia Pacific (APAC) Expansion. Conduct local clinical validation and scale the product horizontally across Southeast Asia (SEA) and the rest of APAC.