Sentinel

Inspiration

Several of us have seen firsthand how devastating falls can be—especially for the elderly. One teammate's grandparents suffered serious hip injuries from falls. Another's mother, a physical and occupational therapist, has spent years treating patients whose lives were upended by a single misstep.

But the story that stuck with us most was about a neighbor. She fell at the bottom of a staircase and lay there for nearly an hour before anyone found her. Eventually, she had to leave her home and move into a nursing home—not because she couldn't function, but because there was no one watching when it mattered.

That raised a question we couldn't let go of: What if AI could watch over the physical world and step in when no one else is around?

Sentinel is our answer.


What It Does

Sentinel is a low-cost, AI-powered vision monitoring platform that observes and understands real-world environments in real time.

At its core, the system uses a camera paired with AI vision models to detect people, recognize faces, and track individuals as they move through a space. When it detects activity, Sentinel can analyze the situation, communicate what it sees, and send alerts to caregivers or family members.

What sets it apart from a traditional security camera: Sentinel physically follows people. A motorized two-axis pan-tilt mount driven by servo motors lets the camera track a person across a room—not just record a static frame.
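
Under the hood, the follow behavior reduces to a simple loop: find the face, measure how far its center sits from the center of the frame, and turn the servos to shrink that offset. Here is a minimal sketch of the measurement step, using OpenCV's bundled Haar cascade as a stand-in for our actual detection models:

    import cv2

    # Detect the largest face and compute its normalized offset from the
    # frame center; those offsets later drive the pan-tilt servos.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # track the largest face
        fh, fw = frame.shape[:2]
        dx = (x + w / 2 - fw / 2) / (fw / 2)  # -1..1; positive = face right of center
        dy = (y + h / 2 - fh / 2) / (fh / 2)  # -1..1; positive = face below center
        # dx and dy feed the servo controller (see the de-jitter sketch below)
        print(f"offset dx={dx:+.2f} dy={dy:+.2f}")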

Key capabilities:

  • Face detection and facial recognition with identity tracking
  • Real-time motion tracking via a servo-powered pan-tilt camera mount
  • Spatial awareness—understanding where people are positioned in a room
  • Voice interaction through an onboard speaker and microphone
  • Alert notifications via Telegram
  • Time-stamped event logging for later review

The entire system can be built from scratch for under $150 in less than 24 hours.


How We Built It

We started by mapping the system architecture: what senses, what reasons, and what communicates.

Hardware:

  • Raspberry Pi — runs the AI processing pipeline
  • Arduino — controls sensors and servo motors (see the serial sketch below)
  • Camera module — provides visual input
  • Two-axis pan-tilt servo mount — physically tracks detected motion
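
The Pi and Arduino talk over USB serial: the vision pipeline computes target angles, and the Arduino converts them into servo pulses. A rough sketch of the Pi side, assuming pyserial and a simple one-line text protocol ("P<pan> T<tilt>") that we use here purely for illustration:

    import serial  # pyserial

    # /dev/ttyACM0 is the usual device for an Arduino on a Pi, but it can vary
    arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

    def send_angles(pan_deg: float, tilt_deg: float) -> None:
        """Clamp to the 0-180 degree servo range and send one command line."""
        pan = max(0, min(180, int(pan_deg)))
        tilt = max(0, min(180, int(tilt_deg)))
        arduino.write(f"P{pan} T{tilt}\n".encode("ascii"))

    send_angles(90, 90)  # center the mount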

Software:

We built a vision detection pipeline that handles person detection, facial recognition, and spatial positioning. When the system identifies activity, it can trigger one or more responses: sending a Telegram alert, speaking through voice output, or logging the event for later analysis.
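
In shape, the response fan-out looks like the sketch below. The Telegram call uses the Bot API's standard sendMessage method; the token and chat ID placeholders, the espeak line, and the log path are illustrative rather than our exact production values:

    import time
    import requests

    BOT_TOKEN = "..."  # issued by Telegram's BotFather (placeholder)
    CHAT_ID = "..."    # caregiver's or family member's chat (placeholder)

    def handle_event(event: dict) -> None:
        """Fan one detection event out to alert, voice, and log."""
        text = f"Sentinel: {event['kind']} detected ({event['who']})"
        # 1. Telegram alert via the Bot API's sendMessage method
        requests.post(
            f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
            json={"chat_id": CHAT_ID, "text": text},
            timeout=5,
        )
        # 2. Voice output, e.g. a text-to-speech command on the Pi
        # subprocess.run(["espeak", text])
        # 3. Time-stamped log line for later review
        with open("events.log", "a") as log:
            log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {text}\n")

    handle_event({"kind": "person", "who": "unknown"})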

Our approach was iterative—get a basic pipeline running first, then debug and improve continuously until the prototype worked end to end. The servo-controlled camera mount required its own round of tuning to reliably follow motion without jitter.
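
Two changes did most of the de-jittering: a deadband that ignores tiny offsets, and a small proportional gain so the mount nudges toward the target instead of lunging at it. In sketch form, with illustrative constants rather than our final tuning:

    DEADBAND = 0.05  # ignore offsets under 5% of the half-frame
    GAIN = 8.0       # degrees of servo travel per unit of offset (keep small)

    pan, tilt = 90.0, 90.0  # current servo angles in degrees

    def step(dx: float, dy: float) -> tuple[float, float]:
        """Nudge the servo angles from one frame's normalized face offset."""
        global pan, tilt
        if abs(dx) > DEADBAND:
            pan -= GAIN * dx   # pan toward the face; sign depends on mounting
        if abs(dy) > DEADBAND:
            tilt += GAIN * dy
        pan = max(0.0, min(180.0, pan))
        tilt = max(0.0, min(180.0, tilt))
        return pan, tilt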


Challenges We Ran Into

Bridging AI software and physical hardware is harder than either one alone.

The biggest friction came from hardware compatibility. Configuring the Raspberry Pi and Arduino—ports, drivers, serial communication protocols—consumed more time than we expected. Flashing operating systems, managing device communication, and keeping the real-time vision pipeline stable all required careful debugging.

On the software side, getting the detection pipeline to reliably track people took several iterations of tuning thresholds and handling edge cases.
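
The fix that held up best was debouncing: require several consecutive confident frames before declaring a person present, and several consecutive misses before declaring them gone, so a single flickering frame can't flip the state. A sketch with illustrative constants:

    CONF_THRESHOLD = 0.6    # minimum detection confidence to count a frame
    FRAMES_TO_CONFIRM = 5   # consecutive hits before "person present"
    FRAMES_TO_RELEASE = 15  # consecutive misses before "person gone"

    hits, misses, present = 0, 0, False

    def update(confidence: float) -> bool:
        """Feed one frame's confidence; return whether a person is present."""
        global hits, misses, present
        if confidence >= CONF_THRESHOLD:
            hits, misses = hits + 1, 0
            if hits >= FRAMES_TO_CONFIRM:
                present = True
        else:
            misses, hits = misses + 1, 0
            if misses >= FRAMES_TO_RELEASE:
                present = False
        return present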

From a project standpoint, scoping was its own challenge. With limited time, we had to make hard calls about which use case to prioritize and how to tell a clear story around it.


Accomplishments That We're Proud Of

Our team of two traveled from California and from Notre Dame to build this. Despite the small team and tight timeline, we delivered a fully functional physical AI prototype:

  • A working AI vision detection system with facial recognition and identity tracking
  • A servo-based camera mount that autonomously follows people through a room
  • Integrated voice output and Telegram alert notifications
  • A complete prototype built in just a few hours of focused development

We also benefited from the support of mentors, organizers, and the broader hackathon community—and we're grateful for that.

Sentinel isn't just a finished prototype. It's the starting point for something much bigger.


What We Learned

This project was a crash course in building fast and shipping something real.

We learned to play to our strengths. Matus focused on system architecture, AI implementation, tracking logic, camera boundary detection, object detection, agentic reasoning loops, and software-hardware integration. Sarah focused on hardware assembly, mounting, and building the presentation around real-world use cases.

The biggest lesson: simple beats clever. Sometimes an analog fix or a straightforward tool outperforms an over-engineered solution. We learned to stay locked on the end goal while staying flexible about how we got there.

Most importantly, we learned how powerful AI becomes when it steps out of software and into the physical world.


What's Next for Sentinel

Sentinel is just the beginning.

Our long-term vision: make AI-powered physical intelligence accessible to everyone—not just developers, but everyday people. We want users to simply ask questions or give instructions, and have the system observe, reason, and respond on its own.

Future directions:

  • Fall detection and healthcare monitoring
  • Activity and behavior recognition
  • Smart home integration
  • Real-time emergency response alerts
  • Expanded spatial reasoning

We're also interested in collaborating with research labs working on humanoid robotics and embodied AI, where platforms like Sentinel could serve as the perceptual layer—helping robots see and understand their surroundings.

The future of AI isn't just text and software. It's AI that can see, understand, and act in the real world.
