Inspiration

Alzheimer’s Disease (AD) can begin up to 20 years before the first symptoms appear, yet diagnosis often relies on late-stage observation or expensive, inaccessible PET scans. We were inspired by the "Diagnostic Gap": the massive window where "Silent Risk" exists but remains invisible to standard checkups. We wanted to build a tool that doesn't just guess but triangulates risk the way a top-tier clinician would: by cross-referencing behavioral nuances, genetic markers, and structural imaging.

What it does

TriAD (Tri-modal Alzheimer's Detection) is a clinical-grade screening interface that deploys three specialized "AI Agents" to detect early-stage AD:

  • Agent 1 (The Cognitive Sensor): Captures "invisible" biomarkers in real time. It uses the microphone to analyze speech for disfluency (hesitation rate and vocabulary richness) and measures millisecond-level reaction times via a digital Stroop test.
  • Agent 2 (The Geneticist): An Explainable AI (XAI) module that screens user data against a known knowledge base (advp.hg38.tsv), identifying specific high-risk SNPs (like APOE ε4) and explaining why they matter.
  • Agent 3 (The Structural Analyst): An imaging pipeline designed to ingest MRI scans and validate findings using Deep Learning models trained on clinical cohorts.
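To make Agent 1 concrete, here is a minimal sketch of the speech-side feature extraction. The function names and filler-word list are illustrative assumptions, not TriAD's actual implementation; given a transcript (e.g. from the Web Speech API), it reduces speech to a filler-word rate and a type-token ratio (a simple measure of vocabulary richness).

```typescript
// Hypothetical sketch: filler list and names are ours, not the project's.
const FILLERS = new Set(["um", "uh", "er", "hmm"]);

interface DisfluencyFeatures {
  fillerRate: number;     // filler words per spoken word
  typeTokenRatio: number; // unique words / total words (vocabulary richness)
}

function extractDisfluency(transcript: string): DisfluencyFeatures {
  // Tokenize: lowercase, keep letters and apostrophes, drop everything else.
  const words = transcript
    .toLowerCase()
    .split(/[^a-z']+/)
    .filter((w) => w.length > 0);
  if (words.length === 0) return { fillerRate: 0, typeTokenRatio: 0 };

  const fillers = words.filter((w) => FILLERS.has(w)).length;
  const unique = new Set(words).size;
  return {
    fillerRate: fillers / words.length,
    typeTokenRatio: unique / words.length,
  };
}
```

In a live session the transcript would stream in from speech recognition; the same pure function then applies to each interim result.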

How we built it

We adopted a "Sensor-First" Architecture:

  1. Frontend (The Sensor Suite): Built with React, TypeScript, and Vite for speed. We leveraged the Web Speech API to build a custom voice analyzer that counts "filler words" (um/uh) on the fly, and the high-precision performance.now() API to measure cognitive inhibition latency.
  2. Data Integration: We moved beyond "dummy data" by architecting the system around real clinical datasets (Bio-Hermes and ADVP).
  3. UI/UX: We used Tailwind CSS and Framer Motion to create a calming, non-clinical environment, crucial for reducing anxiety in older users during testing.
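The Stroop timing described in step 1 can be sketched as below. This is an illustrative reconstruction, not TriAD's code: timestamps live in plain records rather than React state, so a re-render cannot delay or skew a measurement. In the browser the timestamps would come from performance.now(); here they are plain numbers so the scoring logic stays testable.

```typescript
// Hypothetical trial record; field names are ours.
interface StroopTrial {
  congruent: boolean; // does the word match the ink colour?
  stimulusAt: number; // performance.now() when the stimulus was painted
  responseAt: number; // performance.now() when the key was pressed
}

// Mean reaction time (ms) over the congruent or incongruent subset.
function meanLatency(trials: StroopTrial[], congruent: boolean): number {
  const subset = trials.filter((t) => t.congruent === congruent);
  const total = subset.reduce((s, t) => s + (t.responseAt - t.stimulusAt), 0);
  return subset.length ? total / subset.length : 0;
}

// Classic Stroop interference: the extra milliseconds needed to inhibit
// the automatic reading response on incongruent trials.
function interference(trials: StroopTrial[]): number {
  return meanLatency(trials, false) - meanLatency(trials, true);
}
```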

Challenges we ran into

  • The "Simulation Trap": We initially built the frontend with "mock" buttons. Realizing that we couldn't train an AI on fake data, we had to pivot hard to build real telemetry sensors (actual voice recording and reaction timing) to solve the "Cold Start" data problem.
  • Multimodal Fusion: Figuring out how to weigh a "voice signal" against a "genetic signal" was tough. We learned that these shouldn't be one giant model, but a "Feature Fusion" architecture where specific biomarkers are extracted first and then combined.
  • Browser Limitations: Getting accurate millisecond precision for the Stroop test in a browser environment required careful state management, so that "React render lag" couldn't contaminate the timing data.
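The "Feature Fusion" idea above can be sketched as a late-fusion step: each agent reduces its modality to a normalized risk score in [0, 1], and a weighted combination produces the overall score. The weights and field names below are placeholder assumptions, not tuned clinical values.

```typescript
// One normalized score per agent; names are illustrative.
interface ModalityScores {
  cognitive: number; // Agent 1: speech + Stroop features
  genetic: number;   // Agent 2: SNP screen
  imaging: number;   // Agent 3: MRI pipeline
}

// Placeholder weights, NOT clinically validated values.
const WEIGHTS: ModalityScores = { cognitive: 0.3, genetic: 0.3, imaging: 0.4 };

function fuseRisk(s: ModalityScores, w: ModalityScores = WEIGHTS): number {
  const total = w.cognitive + w.genetic + w.imaging;
  return (
    (s.cognitive * w.cognitive + s.genetic * w.genetic + s.imaging * w.imaging) /
    total
  );
}
```

Keeping fusion as a separate, final step means each agent can be improved or retrained independently without touching the others.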

Accomplishments that we're proud of

  • Real Telemetry: We successfully turned a static web form into a live data collector that hears and times the user.
  • Explainable Genetics: Instead of a "Black Box" that just says "High Risk," our system is designed to point to specific genes (e.g., "rs429358 detected"), making the AI transparent and trustworthy.
  • Scientific Grounding: We moved from a "hackathon demo" to a platform grounded in actual clinical literature (Disfluency & Inhibition Latency).
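The explainable-genetics step can be sketched as a lookup against the knowledge base. Note the real advp.hg38.tsv has its own column layout; the two-column format and the example annotations below are simplified stand-ins to show the idea: report which SNP fired and why, not just a risk label.

```typescript
// Simplified stand-in for the ADVP knowledge base (real file: advp.hg38.tsv,
// whose actual columns differ). Annotations here are well-known APOE facts.
const KNOWLEDGE_BASE_TSV = [
  "rsid\tannotation",
  "rs429358\tAPOE ε4-defining variant; major common genetic risk factor for late-onset AD",
  "rs7412\tAPOE ε2-defining variant; associated with reduced AD risk",
].join("\n");

// Match user variants against the knowledge base and explain each hit.
function explainVariants(userRsids: string[], tsv: string): string[] {
  const kb = new Map<string, string>();
  for (const line of tsv.split("\n").slice(1)) { // skip header row
    const [rsid, annotation] = line.split("\t");
    kb.set(rsid, annotation);
  }
  return userRsids
    .filter((id) => kb.has(id))
    .map((id) => `${id} detected: ${kb.get(id)}`);
}
```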

What we learned

  • "Um" matters: We learned that simple hesitation (disfluency) is one of the earliest signs of Temporal Lobe dysfunction, turning a simple speech input into a powerful medical sensor.
  • Data Contracts are Key: You cannot build the UI until you know exactly what the AI needs. We had to rewrite our "schema" multiple times to match the columns in our training datasets.
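A "data contract" in the spirit described above might look like the following. The field names are hypothetical; the point is that the payload the UI emits mirrors the feature columns the model trains on, and a validator at the boundary catches schema drift early.

```typescript
// Hypothetical contract between the sensor frontend and the model pipeline.
interface TelemetryPayload {
  sessionId: string;
  fillerRate: number;           // from the voice analyzer
  typeTokenRatio: number;       // vocabulary richness
  stroopInterferenceMs: number; // from the reaction-time test
}

// Runtime guard: rejects any payload that drifted from the schema.
function isValidPayload(p: unknown): p is TelemetryPayload {
  if (typeof p !== "object" || p === null) return false;
  const o = p as Record<string, unknown>;
  return (
    typeof o.sessionId === "string" &&
    typeof o.fillerRate === "number" &&
    typeof o.typeTokenRatio === "number" &&
    typeof o.stroopInterferenceMs === "number"
  );
}
```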

What's next for TriAD

  • The Backend Brain: We are currently spinning up the Python (FastAPI) backend to receive the telemetry from Agent 1.
  • Model Training: We will train our Random Forest classifier on the 5,000+ patient genetic cohort (preprocessed_alz_data.npz) we secured.
  • Clinical Validation: Moving from "self-collected" data to validating our risk scores against established benchmarks.

Built With

react · typescript · vite · tailwind-css · framer-motion · web-speech-api · python · fastapi