Inspiration
Alzheimer’s begins long before diagnosis, in quiet moments of forgetting that often go unnoticed. By the time patients reach a clinic, meaningful cognitive signals are already lost. Research suggests early signs often appear not in memory itself, but in subtle language changes like longer pauses, reduced vocabulary, word-finding difficulty, and less coherent storytelling that emerge before obvious memory decline.
Backtrack is built for these moments. It turns natural conversations around personal photos into continuous cognitive signals, transforming how patients describe their memories into meaningful data. By grounding assessment in lived experience rather than isolated tests, it helps clinicians detect change earlier and understand a patient's cognition over time.
Alzheimer’s steals time. Backtrack helps give some of it back!
What it does
Backtrack is a longitudinal cognitive assessment tool that turns everyday memory recall into clinical insight. Patients revisit personal photos from their past and describe what they remember in their own words. The system analyzes their speech for early signs of cognitive decline, including pauses, vocabulary richness, word-finding difficulty, and narrative coherence. Instead of relying on occasional clinic visits, Backtrack continuously converts natural speech into cognitive biomarkers, helping clinicians track how cognition changes over time in a more real-world setting.
How we built it
Backtrack is a full-stack web app. The frontend uses React 18 + Vite with Tailwind CSS and Framer Motion for a cinematic, museum-like experience. The backend is a Node.js + Express API handling audio, AI processing, and data storage.
We use the Web Audio API for real-time visualization and the Web Speech API for fast transcription. From speech, we extract signals like pauses, fluency, speech rate, and lexical diversity via a server-side pipeline.
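The extraction step above can be sketched as a small pure function. This is an illustrative simplification, not our actual pipeline: the function name, the word-timestamp shape, and the 500 ms pause threshold are all assumptions for the example.

```javascript
// Illustrative sketch: derive simple speech metrics from a transcript with
// per-word timestamps, e.g. [{ text: "the", start: 1.6, end: 1.8 }, ...]
// (times in seconds). Not the actual Backtrack pipeline.
function speechMetrics(words) {
  let pauseTime = 0;
  let pauseCount = 0;
  for (let i = 1; i < words.length; i++) {
    const gap = words[i].start - words[i - 1].end;
    if (gap > 0.5) { // assumption: gaps over 500 ms count as pauses
      pauseTime += gap;
      pauseCount++;
    }
  }
  const duration = words[words.length - 1].end - words[0].start;
  const tokens = words.map((w) => w.text.toLowerCase());
  const types = new Set(tokens);
  return {
    speechRate: words.length / (duration / 60), // words per minute
    pauseCount,
    pauseRatio: pauseTime / duration,           // fraction of session silent
    lexicalDiversity: types.size / tokens.length, // type-token ratio
  };
}
```

A real deployment would smooth these metrics across sessions before surfacing them, since any single recording is noisy.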
Google Gemini 1.5 Flash analyzes images and transcripts to generate clinical insights, and ElevenLabs provides voice prompts. Data is stored in Firebase Firestore with Google Authentication, and Recharts visualizes trends in the clinician dashboard.
The app is deployed on Firebase Hosting at backtrack.health, a domain registered via GoDaddy. The name reflects the idea: patients "backtrack" through photos to revisit past memories, turning recall into an analyzable clinical signal.
Challenges we ran into
- API integration: connecting the React frontend with the Node.js/Express backend
- Firebase setup: designing a Firestore database for real-time synchronization between patient sessions and the clinician dashboard
- API configuration: coordinating multiple external APIs so that patient interactions, voice responses, and generated outputs stay in sync across the system
Accomplishments that we're proud of
- Full-stack AI pipeline: React, Express, Firebase, Gemini Vision, ElevenLabs, and the Web Audio API working together end to end
- Clinical approach: within-patient baseline comparison instead of population averages
- Dual-audience design: cinematic dark mode for patients, clean light mode for clinicians, all from a single codebase
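The within-patient baseline idea can be sketched roughly like this (a hypothetical helper, not our production code): each new session's metric is compared against the mean and standard deviation of that same patient's earlier sessions, rather than against a population norm.

```javascript
// Illustrative sketch of within-patient baseline comparison: score a new
// metric value relative to this patient's own history, not a population.
function baselineZScore(history, current) {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const sd = Math.sqrt(variance);
  return sd === 0 ? 0 : (current - mean) / sd;
}

// Hypothetical example: a patient whose speech rate drifts below their own
// norm gets flagged even if 105 wpm would be unremarkable population-wide.
const pastSpeechRates = [120, 118, 122, 119]; // words/min, made-up values
const z = baselineZScore(pastSpeechRates, 105); // strongly negative
```

The design choice here is that a strongly negative z-score against the patient's own baseline is meaningful even when the absolute value would look normal against population averages.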
What we learned
- Domain knowledge: we didn't know clinical cognitive tools use within-patient baselines until we researched it; that one insight changed our entire data architecture
- Voice is harder than text: we learned to treat voice data as noisy by default and build tolerance into the metrics
What's next for BackTrack
- Photo uploads: let each patient revisit their own real photographs
- Language: develop multilingual models
- Platform: develop a mobile app version
Built With
- elevenlabs
- express
- firebase
- framer-motion
- gemini
- node.js
- react
- recharts
- tailwind-css
- vite
- web-audio-api
- web-speech-api