Inspiration
Parkinson's disease destroys brain cells for 10–15 years before a single symptom is visible. By the time a neurologist confirms the diagnosis, 50–80% of dopamine-producing neurons are already gone. Treatments exist, but only work if the disease is caught early. We kept asking the same question: why is there no screening tool for the 10 million Americans who already know they're at risk? No mammogram equivalent. No annual check. Nothing. That gap is what drove us to build cerebrAl.
What it does
cerebrAl is a 2-minute smartphone screening test for early Parkinson's risk. It runs three clinically validated motor tests:
- Voice: you say "Ahhh" for 10 seconds. The app analyzes vocal tremor, pitch variation, and the harmonics-to-noise ratio
- Tapping: you alternate taps on the screen for 15 seconds. The app measures timing consistency, speed decay, and left/right hand asymmetry
- Spiral drawing: you trace a spiral. The app scores tremor energy, motor freezing, and drawing smoothness
The results combine into a composite risk score and tell you whether your motor signals look normal or whether you should seek neurological evaluation. No clinic. No hardware. No specialist. Just your phone.
How we built it
- Sourced three independently peer-reviewed clinical datasets: the UCI Parkinson's Voice Dataset, the Tappy Keystroke Dataset, and the HandPD Spiral Dataset
- Built a scikit-learn ML pipeline for the spiral model with a Random Forest classifier, StandardScaler, and SMOTE for class imbalance, achieving 81.3% accuracy and AUC 0.739
- Voice feature extraction uses parselmouth (Praat bindings) and librosa to extract jitter, shimmer, HNR, RPDE, DFA, and PPE
- Backend built in FastAPI serving three model endpoints
- Frontend built as a fully interactive mobile prototype with real-time Web Audio API microphone processing, Canvas-based spiral drawing with live tremor scoring, and a tap-timing engine measuring inter-tap intervals frame by frame
- Designed an original Baymax-inspired medical character called Max with physics-based tap reactions, a threshold-gated mouth animation that only opens when it actually hears the user speaking, and a draw-then-walk spiral where Max walks your exact drawn path after you finish tracing
Challenges we ran into
- Class imbalance: the spiral dataset had 296 healthy samples and only 72 Parkinson's samples. Getting the model not to just predict "healthy" every time required careful use of class weighting and SMOTE oversampling
- Mouth animation realism: making Max's mouth open only when the user is actually speaking (not during background noise) required implementing a volume threshold gate with per-frame lerp smoothing rather than raw amplitude mapping
- Spiral walk timing: having Max walk the user's exact drawn path after they finish required storing every pointer position, computing deviation scores, and replaying the path at adaptive speed without the animation feeling robotic
- Multimodal fusion: combining three models with different feature spaces, different datasets, and different class distributions into one coherent risk score required a late-fusion probability ensemble rather than a single joint model
- Clinical credibility vs. accessibility: balancing a medically serious tool with an approachable, gamified UX that doesn't feel like a toy took significant iteration
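The late-fusion idea in the last two bullets can be sketched as follows: each modality's model emits its own probability, and those are combined with a weighted average instead of training one joint model over mismatched feature spaces. The weights here are placeholders, not the team's actual values:

```python
import numpy as np

def late_fusion_risk(p_voice: float, p_tap: float, p_spiral: float,
                     weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted average of the per-modality Parkinson's probabilities.

    Each input is one model's predicted probability of Parkinson's;
    the weights would be tuned on held-out validation data.
    """
    probs = np.array([p_voice, p_tap, p_spiral])
    w = np.array(weights)
    return float(probs @ w / w.sum())

# Example: mildly elevated voice signal, normal tap and spiral
risk = late_fusion_risk(0.62, 0.18, 0.25)
print(round(risk, 3))  # 0.4*0.62 + 0.3*0.18 + 0.3*0.25 = 0.377
```

Because each model keeps its own dataset and feature space, late fusion sidesteps the problem that no single subject appears in all three clinical datasets; the trade-off is that the ensemble cannot learn cross-modal interactions.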
Accomplishments that we're proud of
- Spiral model hitting 81.3% accuracy and AUC 0.739 on a real clinical dataset in a weekend
- A fully interactive mobile prototype with three complete test flows (voice, tap, and spiral), all working end to end
- Max reacting with five distinct full-body physics animations on every tap, cycling through squish, hop, lean, and jiggle; it genuinely feels like Talking Tom
- A voice mouth that only opens when it actually hears sound above an amplitude threshold; not a gimmick, it's the same principle as clinical voice activity detection
- Spiral scoring that happens in real time as you draw, color-coding your path blue when you're on track and red when you drift
- Grounding every design decision in peer-reviewed published science: the features we extract are the exact features validated in work from Oxford, Nature Scientific Reports, and npj Parkinson's Disease
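The real-time spiral color-coding can be sketched as a nearest-point deviation check against an ideal Archimedean spiral template. Everything here (the spiral parameters, the pixel threshold, the function names) is illustrative, not the app's actual implementation, which runs in JavaScript on a Canvas:

```python
import math

def ideal_spiral(n_points=500, b=5.0, turns=3.0):
    """Sample an Archimedean spiral r = b * theta, centered at the origin."""
    pts = []
    for i in range(n_points):
        theta = turns * 2 * math.pi * i / (n_points - 1)
        r = b * theta
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def point_color(drawn_point, spiral_points, threshold=8.0):
    """Blue while the stroke stays near the template, red once it drifts."""
    x, y = drawn_point
    dist = min(math.hypot(x - sx, y - sy) for sx, sy in spiral_points)
    return "blue" if dist <= threshold else "red"

template = ideal_spiral()
print(point_color((0.0, 0.0), template))      # on the spiral's start -> "blue"
print(point_color((500.0, 500.0), template))  # far off the path -> "red"
```

A brute-force nearest-point search over a few hundred template points is cheap enough to run per pointer event, which is what makes per-sample live feedback feasible without any spatial indexing.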
What we learned
- Digital biomarkers for neurological disease are far more mature than the public realizes; the science exists, the consumer layer just hasn't been built yet
- Class imbalance is one of the most practically impactful problems in medical ML, and it's easy to underestimate when you first look at accuracy numbers
- Gamification and clinical seriousness are not opposites: an engaging UI that people actually want to use is what gathers the longitudinal data that makes the model better over time
- The hardest part of a health app is not the model; it's the trust layer. Every design decision, from the disclaimer text to the way results are worded, shapes whether someone acts on the output
- Pre-symptomatic disease detection is a fundamentally different problem from diagnostic support: you're looking for deviations in people who feel completely healthy, which means false-positive rate and user communication matter as much as AUC
What's next for Cerebrum
- Complete the voice and tap ML pipelines: extract features with parselmouth and librosa, train the RBF-SVM voice model and Random Forest tap model
- Train the multimodal ensemble: late-fusion probability combination of all three models, targeting AUC 0.86+
- Build the FastAPI backend and connect real model outputs to the frontend, replacing demo scores with live inference
- Convert to React Native / Expo for actual App Store and Google Play deployment
- Launch a 90-day clinical pilot with a movement disorder clinic: 50–100 high-risk patients, comparing ParkSense outputs against standard neurological assessment
- Pursue FDA Breakthrough Device designation, the regulatory pathway that unlocks insurance reimbursement
- Build the clinical trial matching feature: opt-in consent for users to be matched with active Parkinson's trials, creating the B2B pharmaceutical revenue line
- Longitudinal tracking: monthly re-testing so users can watch their own motor-signal trends over time, which is where the real early-detection value lives
Built With
- css
- flask
- html
- javascript
- python
- randomforest
- vanilla