Test out our front end here*: https://crustaly.github.io/nomi/
Deployment Note: Our team successfully initialized the NIM endpoint on SageMaker. However, a mid-contest change in IAM access permissions forced a temporary pivot to an API-only local proxy for final submission. Nevertheless, the architecture is designed for—and is fully compatible with—deployment on AWS SageMaker. If you are interested in testing out the project with SageMaker, refer to our detailed README for setup instructions: https://github.com/Crustaly/nomi
*Because collecting data from our sensor network takes over 24 hours, the front end uses sample data. For more information on running the NVIDIA NIM Endpoint and Llama 3.1 model, refer to our README for setup instructions!
Inspiration
Last year, my grandmother in India slipped on the stairs while home alone. That moment exposed how fragile independence becomes with age, and how much we still rely on chance instead of technology for elder care. Her story isn’t unique. By 2050, more than 1 in 6 people worldwide will be over 65, totaling 1.6 billion seniors (UN, 2024). Yet, the global caregiver shortage is projected to reach 13.5 million unfilled roles by 2040 (WHO). In the U.S. alone, falls are the leading cause of injury-related death among seniors, costing over $50 billion annually (CDC, 2023). NOMI was created to change that — combining sensor networks and Agentic AI to detect risks, summarize health insights, and alert caregivers in real time. Because no one’s grandmother should ever be left waiting for help.
What it does
Nomi is an intelligent home health assistant designed to support elderly individuals and their caregivers through continuous, privacy-preserving monitoring. It tackles one of the biggest social challenges of our time — aging in place safely — by turning low-cost sensors into real-time insights powered by AI reasoning.
We built Nomi because millions of seniors live alone or rely on family members who can’t be present 24/7. Missed medications, unnoticed falls, or silent health declines can have life-changing consequences. Nomi bridges that gap — quietly, respectfully, and intelligently.
Here’s how each feature addresses a real-world problem:
🕑 Medication Detector
The problem: Many seniors forget or double-dose medication, especially when routines change. Our solution: A thin pressure film sensor under a pillbox detects when it’s opened. Combined with time tracking in DynamoDB, Nomi infers whether medication was taken — or missed — and includes this in its AI reasoning summary.
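The inference step described above can be sketched as a small pure function. This is a hypothetical illustration, not our exact implementation: the function name, schedule format, and the 60-minute grace window are all assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: infer missed doses by comparing pillbox-open
# events (from the pressure film) against a dose schedule. Timestamps
# are ISO-8601 strings, as they might be stored in DynamoDB.

def missed_doses(open_events, schedule, window_minutes=60):
    """Return scheduled dose times with no pillbox opening within the window."""
    opens = [datetime.fromisoformat(t) for t in open_events]
    window = timedelta(minutes=window_minutes)
    missed = []
    for dose in schedule:
        due = datetime.fromisoformat(dose)
        if not any(abs(o - due) <= window for o in opens):
            missed.append(dose)
    return missed

events = ["2025-01-10T08:05:00", "2025-01-10T20:10:00"]
plan = ["2025-01-10T08:00:00", "2025-01-10T14:00:00", "2025-01-10T20:00:00"]
print(missed_doses(events, plan))  # the 14:00 dose has no nearby opening
```

The missed-dose list then feeds into the AI reasoning summary alongside the other sensor streams.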
🍽️ Eating Activity Detector
The problem: Poor nutrition and irregular meals often go unnoticed until weight loss or fatigue appear. Our solution: Pressure sensors under utensils or trays detect eating events. Nomi tracks patterns across days, identifying skipped meals and alerting caregivers to subtle changes before they become health risks.
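The pattern-tracking idea can be sketched like this. A minimal illustration under assumed names and thresholds: the real system may count meals differently, and days with zero detected events would also need handling.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sketch: count eating events per day from utensil-sensor
# timestamps and flag days that fall below an expected meal count.

def skipped_meal_days(event_times, expected_meals=3):
    """Return {date: meals_detected} for days with fewer events than expected."""
    per_day = defaultdict(int)
    for t in event_times:
        per_day[datetime.fromisoformat(t).date().isoformat()] += 1
    return {day: n for day, n in per_day.items() if n < expected_meals}

events = [
    "2025-01-10T08:00:00", "2025-01-10T12:30:00", "2025-01-10T18:45:00",
    "2025-01-11T09:00:00",  # only one eating event detected on the 11th
]
print(skipped_meal_days(events))
```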
🤕 Fall & Posture Detector
The problem: Falls are the leading cause of injury-related hospital visits among older adults, and many go unreported. Our solution: OpenCV + MediaPipe analyze posture locally on-device — no video leaves the room. When a “fallen” or “inactive” posture is detected, Nomi triggers an immediate email alert to the caregiver.
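MediaPipe supplies the pose landmarks; the classification on top of them can be pure geometry. Here is a hedged sketch of that step — the threshold and the two-landmark simplification are illustrative assumptions, not our tuned values.

```python
import math

# Hypothetical sketch of the classification step that would run on
# MediaPipe pose landmarks (normalized x, y in [0, 1], y grows downward).
# The 35-degree threshold is illustrative, not a tuned value.

def classify_posture(shoulder, hip, horizontal_thresh_deg=35.0):
    """Classify 'upright' vs 'fallen' from the torso angle.

    shoulder/hip are (x, y) midpoints of the shoulder/hip landmark pairs.
    A torso close to horizontal suggests the person is on the ground.
    """
    dx = hip[0] - shoulder[0]
    dy = hip[1] - shoulder[1]
    # Angle of the torso from vertical: 0 deg = standing, 90 deg = lying flat.
    angle_from_vertical = math.degrees(math.atan2(abs(dx), abs(dy) or 1e-9))
    return "fallen" if angle_from_vertical > (90 - horizontal_thresh_deg) else "upright"

print(classify_posture(shoulder=(0.5, 0.3), hip=(0.5, 0.6)))   # upright
print(classify_posture(shoulder=(0.2, 0.8), hip=(0.7, 0.82)))  # fallen
```

Because only this label ("fallen"/"upright") leaves the device, the raw video never needs to reach the cloud.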
❤️ Vital Signs Monitor
The problem: Caregivers often have no continuous view of heart rate, oxygen, or room conditions. Our solution: A Pulse Sensor Amped tracks heart rate and oxygen proxies, while temperature-humidity sensors monitor environment comfort. Data streams through AWS Lambda → DynamoDB → FastAPI → React, updating live charts in the dashboard.
🧠 AI Health Insights
The problem: Even with data, caregivers struggle to interpret trends. Our solution: NVIDIA NIM (Llama-3.1-Nemotron-8B) interprets sensor data like a virtual health coach — summarizing patterns (“Heart rate spiked after standing”), spotting risks, and offering plain-language recommendations.
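The prompt-shaping step before the NIM call can be sketched as a pure function. This is an assumed illustration: the real request goes to the NIM's chat endpoint, and the wording and field names below are ours, not the production prompt.

```python
import json

# Hypothetical sketch of the prompt-shaping step before calling the NIM.
# Only the structured context the model receives is built here; the
# actual HTTP call to the NIM endpoint is elided.

def build_insight_prompt(resident_id, readings):
    """Pack recent sensor readings into a structured prompt for the LLM."""
    context = json.dumps(readings, indent=2)
    return (
        "You are a cautious home-health assistant. Using ONLY the sensor data "
        f"below for resident {resident_id}, summarize notable trends, flag any "
        "risks, and give one plain-language recommendation.\n\n"
        f"Sensor data (JSON):\n{context}"
    )

readings = [
    {"ts": "2025-01-10T08:00:00", "heart_rate": 72, "spo2": 97},
    {"ts": "2025-01-10T08:05:00", "heart_rate": 118, "spo2": 95},
]
prompt = build_insight_prompt("resident-001", readings)
print(prompt.splitlines()[0])
```

Constraining the model to the provided JSON is what keeps summaries like "Heart rate spiked after standing" grounded in actual readings.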
📩 Caregiver Alerts
The problem: Emergencies often go unnoticed for hours. Our solution: When abnormal vitals or a fall are detected, AWS SNS or Gmail SMTP instantly emails caregivers with event time, vitals, and recommendations — turning sensor data into timely action.
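The alert body itself is simple to assemble before handing it to SNS or SMTP. A minimal sketch with illustrative field names — the delivery call is omitted so the formatting logic stands alone.

```python
# Hypothetical sketch of the alert-formatting step. The real system sends
# this body via AWS SNS or Gmail SMTP; the field names are illustrative.

def format_alert(event):
    """Turn an abnormal-event record into a caregiver-ready email body."""
    lines = [
        f"NOMI ALERT: {event['type']} detected at {event['time']}",
        f"Heart rate: {event.get('heart_rate', 'n/a')} bpm",
        f"Recommendation: {event.get('recommendation', 'Check on the resident.')}",
    ]
    return "\n".join(lines)

body = format_alert({
    "type": "fall",
    "time": "2025-01-10T14:32:00",
    "heart_rate": 104,
    "recommendation": "Call the resident; dispatch help if no answer.",
})
print(body)
```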
🔒 Privacy by Design
The problem: Constant monitoring can feel invasive. Our solution: All video-based detection happens locally using OpenCV and MediaPipe; only anonymized event labels reach the cloud. Nomi balances safety with dignity.
How we built it
TL;DR (too long; didn’t read): We turned low-cost sensors on an ESP32 into caregiver-ready insights by streaming data to DynamoDB, fusing it in a FastAPI service, and asking NVIDIA’s Nemotron-based NIM to produce clear, structured summaries that render live in a React dashboard, with optional on-device OpenCV/MediaPipe fall detection for privacy.
In full detail: 1️⃣ Hardware → Cloud We started with an ESP32 DevBoard, a pulse sensor, a thin pressure film, and a temperature-humidity sensor—basically a mini hospital taped to our table. After days of debugging serial output at 3 AM (“why is my heartbeat 300 bpm?? oh right, short circuit”), the ESP32 finally sent clean data via HTTP to AWS API Gateway. From there, a Lambda function validated readings and stored them in DynamoDB, creating our real-time data pipeline.
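The validation layer in that Lambda can be sketched as follows. This is a hedged stand-in: the field names and range limits are assumptions, and the boto3 write to DynamoDB is elided so the validation logic stands alone.

```python
import json

# Hypothetical sketch of the Lambda validation layer. The real handler
# also writes the item to DynamoDB with boto3; that call is elided here.

REQUIRED = {"resident_id", "sensor", "value", "ts"}
LIMITS = {"heart_rate": (30, 220), "temp_c": (-10, 60), "pressure": (0, 4095)}

def validate_reading(raw_body):
    """Parse and range-check one ESP32 reading; raise ValueError if bad."""
    item = json.loads(raw_body)
    missing = REQUIRED - item.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    lo, hi = LIMITS.get(item["sensor"], (float("-inf"), float("inf")))
    if not lo <= float(item["value"]) <= hi:
        raise ValueError(f"{item['sensor']} out of range: {item['value']}")
    return item

ok = validate_reading(
    '{"resident_id": "r1", "sensor": "heart_rate", "value": 72, "ts": "2025-01-10T08:00:00"}'
)
print(ok["sensor"])
```

A range check like this is exactly what catches a short-circuited "300 bpm heartbeat" before it pollutes the table.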
2️⃣ Backend & AI Reasoning Our FastAPI backend became the translator between sensors and sense-making. It pulls data from DynamoDB and asks NVIDIA NIM to summarize what’s going on—heart rate trends, posture changes, fall risks, and daily insights. The NIM runs the Llama-3.1-Nemotron-8B model, which adds real-world context to the raw numbers. We basically taught an AI to play doctor… responsibly. Each record (heart rate, oxygen, posture, eating/meds, environment) is validated by Pydantic and stored with a resident/time key pattern in DynamoDB. We used Boto3 for all read/write ops.
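The resident/time key pattern mentioned above can be sketched like this. The exact key names and sort-key layout are assumptions for illustration.

```python
# Hypothetical sketch of the resident/time key pattern for DynamoDB.
# The partition key groups all items for one resident; a time-first sort
# key keeps readings chronological, so one Query returns a time window.

def make_key(resident_id, sensor, ts):
    return {
        "pk": f"RESIDENT#{resident_id}",
        "sk": f"{ts}#{sensor}",  # ISO-8601 timestamps sort lexicographically
    }

key = make_key("r1", "heart_rate", "2025-01-10T08:00:00")
print(key["pk"], key["sk"])
```

With this shape, a single boto3 `Query` with a `KeyConditionExpression` on `pk` plus a `BETWEEN` on `sk` fetches, say, one day of readings in order — no table scan needed.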
3️⃣ Frontend Dashboard The React + TailwindCSS dashboard came next. It shows live vitals, posture, and environment data in Recharts, with clean cards and color cues. The “Insights” card uses NIM’s output to explain what’s happening in plain English, no doctor’s degree required. It’s designed for clarity. We use large fonts, color cues, and an “alerts” bar for quick caregiver attention.
4️⃣ Alerts & Privacy Falls and abnormal vitals trigger AWS SNS or Gmail SMTP alerts that hit your inbox in seconds. For privacy, OpenCV + MediaPipe handle posture and fall detection locally, so your grandma’s living room doesn’t end up in the cloud. Only event labels reach the cloud.
5️⃣ Development & Testing We tested everything in Jupyter Notebooks, built virtual environments that somehow always broke on someone’s laptop, and celebrated the first time a fall alert email actually arrived. That moment felt like magic! The entire workflow runs locally with FastAPI + React, then scales seamlessly to AWS for deployment.
Challenges we ran into
AWS SageMaker deployment permissions unexpectedly blocked our NIM endpoint. The challenge mandated deployment on AWS SageMaker or EKS, and we successfully designed our pipeline for it. We initially deployed and ran the Llama-3.1-Nemotron-Nano-8B-v1 NIM on a SageMaker endpoint. However, mid-development, the provided IAM role permissions were modified by the environment administrator, leading to a cascade of fatal errors: we hit repeated AccessDeniedException and InvalidClientTokenId errors on crucial API calls such as sagemaker:CreateEndpointConfig and when requesting GPU instances such as ml.g5.xlarge. Our architectural pipeline was intact, but the infrastructure lock-out forced us to pivot from a managed SageMaker deployment to an API-only local proxy. We want to emphasize that our architecture is fully compatible with, and was originally deployed on, the required AWS services.
Connecting multiple Arduino-based sensors to a centralized AWS DynamoDB was way harder than it sounded. We had heart rate, pressure, and temperature readings all coming in at different frequencies — sometimes spiking, sometimes silent — and Dynamo really doesn’t like inconsistent payloads. We spent hours debugging JSON formats, timestamp mismatches, and authentication from the ESP32 before finally creating a Lambda middle layer to normalize everything. The moment the first clean DynamoDB table populated felt like watching the data universe align.
Testing OpenCV + MediaPipe posture detection on live humans was surprisingly tricky. Lighting, camera angle, and motion blur all confused the fall-detection logic. We learned the hard way that “pretending to fall” isn’t easy or safe — and that a laptop webcam has no mercy for bad posture.
Obviously, no one wants to keep a 24-hour camera and sensor feed running during judging. So we rethought our testing strategy and built a DynamoDB sample-data file that lets us demonstrate the entire reasoning and alert system without needing a live person constantly standing, eating, or “falling.”
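A generator along these lines produces that sample data. This is a hedged sketch — the field names mirror the illustrative schema above, and the seeded RNG is an assumption we'd make so demo runs are reproducible.

```python
import random
from datetime import datetime, timedelta

# Hypothetical sketch of a sample-data generator for judging: it
# synthesizes a day of plausible readings so the reasoning and alert
# pipeline can run without a live sensor feed.

def generate_sample_day(resident_id, start="2025-01-10T00:00:00", seed=42):
    rng = random.Random(seed)  # seeded so demo runs are reproducible
    t0 = datetime.fromisoformat(start)
    items = []
    for i in range(24):  # one heart-rate reading per hour
        items.append({
            "resident_id": resident_id,
            "sensor": "heart_rate",
            "value": rng.randint(60, 95),  # resting-range values
            "ts": (t0 + timedelta(hours=i)).isoformat(),
        })
    return items

day = generate_sample_day("r1")
print(len(day), day[0]["ts"])
```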
Accomplishments that we're proud of
We designed and 3D-printed the Nomi camera stand ourselves using CAD, then connected all the sensors to a centralized AWS DynamoDB system—a tough but rewarding challenge. This was our first time building an Agentic AI project that combined both physical hardware and intelligent reasoning, giving Nomi real-world impact rather than being just another software demo. The system is intentionally low-cost and accessible.
A major breakthrough was learning how to directly query and process data from DynamoDB, parsing live sensor inputs seamlessly into meaningful insights. Using Pydantic models, we structured Nomi’s data pipeline and reasoning flow so that database retrieval and AI output remain tightly integrated.
What we learned
1️⃣ Structuring reasoning between LLMs and real data We learned how to connect the dots between raw DynamoDB records and NVIDIA NIM’s language reasoning. It’s not just “throw the data at the model” — you have to decide what context matters, normalize units, and prompt the LLM to reason in a structured way. That design step turned out to be as important as the model itself.
2️⃣ Hardware debugging builds character We learned that “simple” sensors can produce wildly unpredictable signals. Force sensors drift, humidity sensors lie, and pulse sensors sometimes detect your Wi-Fi instead of your heartbeat. Patience and calibration became part of our skill set.
3️⃣ Schema discipline saves lives (and data) Working across microcontrollers → Lambda → DynamoDB → FastAPI → React taught us one golden rule: consistent data schema or nothing works. Pydantic became our peacekeeper, enforcing structure when our sensors wanted chaos.
4️⃣ Creative problem-solving beats permissions In the hackathon sandbox, SageMaker endpoints were blocked, so we built a local NIM proxy that mimicked AWS inference behavior. It taught us that limits often force the most inventive solutions — and sometimes the fake endpoint works just as well as the real one. Our project still runs on SageMaker; it just requires setup on the user’s end.
5️⃣ Frontend matters more than we expected You can have the smartest backend in the world, but if the dashboard isn’t clear, the caregiver won’t trust it. Designing simple, readable insights turned out to be a real test of empathy and UI clarity.
What's next for Nomi
Real-time streaming with MQTT: Right now, our ESP32 sends data in batches. Next, we’ll switch to an MQTT-based pipeline for true real-time updates. This will let caregivers see vitals instantly, not just in short refresh intervals, and enable smoother integration with AWS IoT Core for better scalability.
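To make the planned pipeline concrete, here is a sketch of the message shape the ESP32 might publish. The topic convention and payload fields are assumptions; the device itself would publish over TLS with an MQTT client rather than the plain Python shown here.

```python
import json

# Hypothetical sketch of the planned MQTT layer. Topic naming follows a
# common device/sensor hierarchy convention for AWS IoT Core; the actual
# publish would happen on-device with an MQTT client over TLS.

def mqtt_message(resident_id, sensor, value, ts):
    """Build the (topic, payload) pair one reading would be published under."""
    topic = f"nomi/{resident_id}/{sensor}"
    payload = json.dumps({"value": value, "ts": ts})
    return topic, payload

topic, payload = mqtt_message("r1", "heart_rate", 72, "2025-01-10T08:00:00")
print(topic)  # nomi/r1/heart_rate
```

A hierarchical topic like this lets an IoT Core rule subscribe to `nomi/+/heart_rate` and route every resident's readings into the existing DynamoDB pipeline.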
Expanded sensor ecosystem: Beyond pulse, pressure, and posture, we’re prototyping new modules for blood oxygen (SpO₂) and activity tracking. Our goal is modular plug-and-play — any sensor, any data source, one unified dashboard.