Inspiration

About 1 in 36 children is currently diagnosed with ASD. Families raising children with ASD often go months (sometimes years) without a clear picture of what triggers a behavioral episode. Clinical tools exist, but they're expensive, hard to access, and not designed for daily home use. We kept coming back to one question: what if a smartwatch could do this? That's where the idea started. We wanted to turn everyday wearable hardware into a real-time behavioral monitor that any caregiver can actually use.

What It Does

ASD Behavior Tracker is designed as a smartwatch plugin that streams accelerometer, gyroscope, heart rate, and microphone data in real time, detecting behavioral episodes like meltdowns, repetitive hand movements, body rocking, and vocal outbursts as they happen. For this hackathon, we used a phone as the sensor input to keep the build feasible within the time constraints, but the architecture is designed to plug directly into smartwatch hardware.

When an episode is detected, a Gemini-powered agent looks up the child's history, identifies relevant patterns, and generates a personalized intervention suggestion on the spot. Caregivers get live sensor waveforms, a 30-day episode calendar, pattern insights (e.g. "Amy has meltdowns 3x more often on Monday afternoons"), and one-tap clinical reports ready to bring to their next appointment.
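
Pattern insights like the Monday example boil down to a frequency ratio: how often episodes land in one weekday bucket versus the baseline of the other days. A minimal sketch (names hypothetical, not our actual implementation):

```typescript
// Hypothetical sketch: flag weekdays where a child's episodes occur
// far more often than the average of the remaining days.
interface Episode {
  timestamp: Date;
  type: string; // e.g. "meltdown"
}

const DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
              "Thursday", "Friday", "Saturday"];

function weekdayInsights(episodes: Episode[], minRatio = 2): string[] {
  const counts = new Array(7).fill(0);
  for (const e of episodes) counts[e.timestamp.getDay()]++;
  const insights: string[] = [];
  for (let d = 0; d < 7; d++) {
    // Baseline: average episode count across the other six days.
    const othersAvg = (episodes.length - counts[d]) / 6;
    if (othersAvg > 0 && counts[d] / othersAvg >= minRatio) {
      const ratio = (counts[d] / othersAvg).toFixed(1);
      insights.push(`Episodes occur ${ratio}x more often on ${DAYS[d]}s`);
    }
  }
  return insights;
}
```

The real version also buckets by time of day, but the ratio-against-baseline idea is the same.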

How We Built It

The stack is Next.js 14 with SQLite and Prisma on the backend. The phone browser streams sensor data via HTTP polling to the desktop monitor, simulating what a smartwatch data relay would look like in production.
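
The relay itself is just a bounded in-memory buffer sitting behind two routes: the phone POSTs sample batches in, and the desktop monitor polls and drains them. A rough sketch of that buffer (route names and sample shape are assumptions for illustration):

```typescript
// Hypothetical in-memory relay buffer behind a /api/sensor route:
// the phone POSTs sample batches; the desktop monitor's polling GET
// drains whatever arrived since its last request.
interface Sample {
  t: number;                          // client timestamp (ms)
  ax: number; ay: number; az: number; // accelerometer axes
  hr?: number;                        // heart rate, when available
}

class RelayBuffer {
  private samples: Sample[] = [];
  constructor(private capacity = 6000) {} // generous headroom for a 30 s window

  // Called by the POST handler with each uploaded batch.
  push(batch: Sample[]): void {
    this.samples.push(...batch);
    if (this.samples.length > this.capacity) {
      this.samples = this.samples.slice(-this.capacity); // drop oldest first
    }
  }

  // Called by the polling GET handler: return and clear pending samples.
  drain(): Sample[] {
    const out = this.samples;
    this.samples = [];
    return out;
  }
}
```

Keeping the buffer bounded means a stalled monitor can never cause unbounded memory growth on the server; it simply loses the oldest samples.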

On the signal processing side, we built a custom pipeline that extracts:

  • Time-domain features — RMS energy, zero-crossing rate
  • Frequency-domain features — dominant frequency and band energy distribution via FFT
  • Cross-sensor correlations — computed over 30-second sliding windows
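
Minimal versions of those feature families look roughly like this (simplified relative to the real pipeline; a brute-force DFT stands in for the FFT, which is fine at these window sizes):

```typescript
// RMS energy of one window of samples.
function rms(x: number[]): number {
  return Math.sqrt(x.reduce((s, v) => s + v * v, 0) / x.length);
}

// Zero-crossing rate: fraction of adjacent sample pairs whose sign flips.
function zeroCrossingRate(x: number[]): number {
  let crossings = 0;
  for (let i = 1; i < x.length; i++) {
    if ((x[i - 1] >= 0) !== (x[i] >= 0)) crossings++;
  }
  return crossings / (x.length - 1);
}

// Dominant frequency in Hz via brute-force DFT over one window
// (an FFT library would replace this in production).
function dominantFrequency(x: number[], sampleRateHz: number): number {
  const n = x.length;
  let bestBin = 1, bestMag = -1;
  for (let k = 1; k <= Math.floor(n / 2); k++) {
    let re = 0, im = 0;
    for (let i = 0; i < n; i++) {
      const phase = (-2 * Math.PI * k * i) / n;
      re += x[i] * Math.cos(phase);
      im += x[i] * Math.sin(phase);
    }
    const mag = re * re + im * im;
    if (mag > bestMag) { bestMag = mag; bestBin = k; }
  }
  return (bestBin * sampleRateHz) / n;
}

// Pearson correlation between two equal-length sensor streams,
// e.g. accelerometer magnitude vs. heart rate over a 30 s window.
function pearson(a: number[], b: number[]): number {
  const n = a.length;
  const ma = a.reduce((s, v) => s + v, 0) / n;
  const mb = b.reduce((s, v) => s + v, 0) / n;
  let cov = 0, va = 0, vb = 0;
  for (let i = 0; i < n; i++) {
    cov += (a[i] - ma) * (b[i] - mb);
    va += (a[i] - ma) ** 2;
    vb += (b[i] - mb) ** 2;
  }
  return cov / Math.sqrt(va * vb);
}
```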

These feed into a rule-based classifier tuned separately for each behavior type. On top of that, an adaptive sensitivity layer tracks each child's episode history and automatically lowers detection thresholds during historically high-risk time windows, effectively learning from that history over time.
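
The adaptive rule can be sketched as a per-bucket threshold scaling: bucket past episodes by hour of week, and shrink the base threshold when the current bucket is historically above average (constants and names here are illustrative, not our exact tuning):

```typescript
// Hypothetical adaptive-sensitivity rule: scale a behavior's base
// detection threshold down when the current hour-of-week bucket has
// historically seen an above-average share of episodes.
const BUCKETS = 7 * 24; // one bucket per hour of the week

function bucketOf(d: Date): number {
  return d.getDay() * 24 + d.getHours();
}

function adaptiveThreshold(
  baseThreshold: number,
  episodeTimes: Date[],
  now: Date,
  maxReduction = 0.3 // never lower the threshold by more than 30%
): number {
  if (episodeTimes.length === 0) return baseThreshold;
  const counts = new Array(BUCKETS).fill(0);
  for (const t of episodeTimes) counts[bucketOf(t)]++;
  const mean = episodeTimes.length / BUCKETS;
  const current = counts[bucketOf(now)];
  if (current <= mean) return baseThreshold; // not a high-risk window
  // Scale the reduction by how far above average this window is.
  const excess = Math.min(1, (current - mean) / episodeTimes.length);
  return baseThreshold * (1 - maxReduction * excess);
}
```

Capping the reduction keeps a handful of clustered episodes from making the detector trigger-happy in one window.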

The AI layer is a 4-tool Gemini agentic loop that retrieves episode history, analyzes patterns, generates a personalized intervention, and saves the result — all without manual prompting.
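
Structurally, the loop is a planner (the model) repeatedly picking one of the four tools until it decides it is done, bounded by a hard step cap. A schematic with the model call stubbed out (the tool names and planner interface are assumptions, not the actual Gemini integration):

```typescript
// Schematic 4-tool agentic loop with a hard step cap.
type ToolName = "get_history" | "analyze_patterns"
              | "generate_intervention" | "save_result";

interface ToolCall { tool: ToolName; args: Record<string, unknown>; }

// The "model": given the transcript so far, pick the next tool or stop.
type Planner = (transcript: string[]) => ToolCall | null;

const MAX_STEPS = 8; // hard cap so the agent cannot loop forever

function runAgent(
  planner: Planner,
  tools: Record<ToolName, (args: Record<string, unknown>) => string>
): string[] {
  const transcript: string[] = [];
  for (let step = 0; step < MAX_STEPS; step++) {
    const call = planner(transcript);
    if (call === null) break; // planner decided it is done
    transcript.push(`${call.tool}: ${tools[call.tool](call.args)}`);
  }
  return transcript;
}
```

In the real app the planner is the Gemini API with tool declarations and each tool hits the Prisma-backed database; the step cap is what keeps a confused model from spinning indefinitely.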

Challenges We Ran Into

Classifier calibration was genuinely hard. Each behavior has a different signal signature (repetitive hand movements look completely different in frequency space from a meltdown), and getting thresholds tight enough to avoid false positives while staying sensitive enough to catch real episodes took a lot of iteration.

Getting the agentic loop to behave reliably was trickier than expected. The tool-calling sequence — history retrieval, pattern analysis, intervention generation, save — had to be carefully constrained to prevent the model from skipping steps or hallucinating trigger patterns when episode history was sparse. We added hard step caps and explicit system prompt guidance around the deterministic trigger mapping to keep the agent grounded.

Real-time latency was another constraint. Streaming sensor data at 200ms intervals, running feature extraction, and triggering the Gemini agent pipeline all had to complete fast enough that caregivers saw feedback within a few seconds of an episode starting — not after it ended.

Accomplishments That We're Proud Of

We got a full sensor-to-insight pipeline working end-to-end within the hackathon window — from raw motion data in, to a clinician-ready markdown report out. The part we're most proud of is the adaptive sensitivity layer: it actually learns from a child's individual behavioral history and personalizes detection thresholds over time, which is something even most clinical tools don't do.

What We Learned

Sensor data from real-world devices is noisier and more context-dependent than we expected. We came away with a much deeper appreciation for signal processing fundamentals, the practical limits of rule-based classification, and how to design agentic tool-calling loops that stay reliable — including knowing when to fall back to deterministic logic instead of letting the model improvise.

We also learned how much the quality of an AI agent's output depends on the quality of structured context fed into it. Early versions of our Gemini agent produced generic interventions because it had no access to the child's specific history; adding the history-retrieval tool step made the difference between advice that could apply to anyone and advice that was actually personalized. Getting that context pipeline right was as important as the model itself.

What's Next for ASD Behavior Tracker

The next step is building the actual smartwatch plugin and training a lightweight on-device neural classifier on real ASD behavioral datasets to replace the rule-based system. Beyond that, we're thinking about multi-child caregiver dashboards and direct EHR export so the reports we generate can actually flow into clinical workflows without any manual copy-pasting.

Built With

  • Gemini
  • Next.js 14
  • Prisma
  • SQLite