Inspiration
Parkinson’s Early Motion Screener was inspired by a family friend who lived with Parkinson’s disease. Over time, it became difficult for him to keep track of how his symptoms were changing from day to day. The changes were subtle, gradual, and easy to miss, especially when he did not always have someone nearby to regularly observe him. He already had security cameras at home, and that made me think: what if the same cameras used for safety could also help passively monitor visible movement changes over time? That idea stuck with me because early warning signs are often not dramatic enough for patients or families to notice right away, but they may still matter. I wanted to explore whether computer vision could help turn ordinary video into useful screening insights.
I was also motivated by my own background in AI imaging and computer vision. I have experience working with tools such as PyTorch, YOLOv5, OpenCV, NVIDIA DeepStream SDK, scikit-learn, NumPy, and perception-based classifiers in real-time vision systems, which made this challenge feel like a meaningful way to apply those skills to a more human-centered problem.
What it does
Parkinson’s Early Motion Screener is a proof-of-concept video analysis tool that screens for visible motor patterns associated with Parkinsonian movement changes. The project is designed as an early screening and symptom-tracking aid.
The system analyzes video for signals such as:
- tremor-like motion
- movement asymmetry between the left and right sides of the body
- reduced movement speed
- overall steadiness and smoothness of motion
The goal is to help surface changes that may otherwise go unnoticed and make it easier for users, caregivers, or clinicians to monitor patterns over time. In a real-world setting, this could support at-home monitoring, telehealth check-ins, or earlier referral for follow-up care.
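As a rough illustration of how signals like these could be quantified, the sketch below estimates tremor-like motion as the fraction of spectral power in a 4–6 Hz band of a tracked keypoint trajectory, and left/right asymmetry as a relative amplitude difference. The frequency band, the synthetic trajectories, and the feature definitions are illustrative assumptions for this writeup, not the project's actual implementation.

```python
import numpy as np

def tremor_band_power(trace, fps, band=(4.0, 6.0)):
    """Fraction of spectral power in an assumed tremor band (4-6 Hz here).

    trace: 1-D array of one tracked keypoint coordinate over time.
    fps:   video frame rate in frames per second.
    """
    trace = trace - trace.mean()                 # remove DC offset
    spectrum = np.abs(np.fft.rfft(trace)) ** 2   # power spectrum
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return spectrum[in_band].sum() / total if total > 0 else 0.0

def asymmetry(left_trace, right_trace):
    """Relative difference in movement amplitude between body sides (0..1)."""
    l, r = np.std(left_trace), np.std(right_trace)
    return abs(l - r) / max(l + r, 1e-9)

# Synthetic example: a 5 Hz oscillation on the left wrist, little motion on the right.
fps, seconds = 30, 4
t = np.arange(fps * seconds) / fps
left = 0.5 * np.sin(2 * np.pi * 5.0 * t) + 0.02 * np.random.default_rng(0).standard_normal(t.size)
right = 0.02 * np.random.default_rng(1).standard_normal(t.size)

print(round(tremor_band_power(left, fps), 2))   # close to 1.0: most power near 5 Hz
print(round(asymmetry(left, right), 2))         # close to 1.0: strongly one-sided
```

In a real pipeline the traces would come from pose keypoints extracted per frame (for example, from MediaPipe, which the project lists under Built With) rather than from synthetic sinusoids.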
How we built it
We approached the project as a computer vision pipeline built around accessible video input. The core idea was to use a camera feed or recorded footage, detect the person in frame, extract motion-related features over time, and convert those features into an interpretable screening-oriented output.
The build concept includes:
- Video ingestion and preprocessing using OpenCV
- Person or body-region detection using vision models such as YOLOv5
- Feature extraction for motion signals like tremor intensity, movement speed, and asymmetry
- Modeling and scoring using Python tools such as NumPy, scikit-learn, and PyTorch
- Output visualization to show whether motion appears stable or whether there may be changes worth monitoring further
Conceptually, the system follows:
$$ f(x) \rightarrow y $$
where $x$ represents motion-related features extracted from video and $y$ represents a screening score or trend estimate, not a clinical diagnosis.
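One possible realization of this mapping, assuming the scikit-learn random forest listed under Built With, is sketched below: a feature vector of per-video motion statistics is mapped to a probability-style screening score. The feature names, value ranges, and labels are entirely synthetic placeholders, not clinical data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Placeholder feature vectors: [tremor_band_power, asymmetry, mean_speed].
# Synthetic "stable" samples: low tremor, low asymmetry, normal speed.
stable = np.column_stack([
    rng.uniform(0.0, 0.2, 200),   # tremor
    rng.uniform(0.0, 0.2, 200),   # asymmetry
    rng.uniform(0.8, 1.2, 200),   # speed
])
# Synthetic "worth monitoring" samples: elevated tremor/asymmetry, reduced speed.
flagged = np.column_stack([
    rng.uniform(0.4, 1.0, 200),
    rng.uniform(0.3, 1.0, 200),
    rng.uniform(0.3, 0.7, 200),
])

X = np.vstack([stable, flagged])
y = np.array([0] * 200 + [1] * 200)   # 0 = stable, 1 = monitor further

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The "screening score" is a predicted probability, not a diagnosis.
sample = np.array([[0.6, 0.5, 0.5]])  # hypothetical features from a new video
score = model.predict_proba(sample)[0, 1]
print(f"screening score: {score:.2f}")
```

Framing the output as a probability rather than a hard label fits the project's goal of surfacing trends worth monitoring instead of issuing diagnoses.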
My own technical background helped shape the design of this system, especially in thinking about real-time inference pipelines, motion-based signals, and practical computer vision deployment.
Challenges we ran into
One of the biggest challenges was scope. Brain health is a very broad area, and it would have been easy to build something too ambitious to feel credible. Narrowing the project down to early Parkinsonian motor screening from video made the idea much more realistic for a proof of concept.
There were also practical technical challenges, including:
- noisy real-world video conditions
- inconsistent camera angles and lighting
- partial occlusion of the body
- difficulty separating clinically meaningful motion from normal variation
- the lack of clinically validated data within hackathon scope
These challenges made it clear that building a real medical-grade system would require much more data, validation, and clinical collaboration.
Accomplishments that we're proud of
We are proud that we turned a broad idea about brain health into a focused, believable, and responsible proof of concept. We built around a specific problem: how to use everyday video to help monitor visible motor changes over time.
We are also proud of:
- identifying a deployment concept that is accessible, especially for home monitoring
- grounding the project in a practical computer vision pipeline rather than just a vague AI idea
- applying prior experience in real-time vision systems to a healthcare-related use case
- framing the project in a way that emphasizes support, screening, and early awareness rather than replacement of clinical judgment
What we learned
This project taught us that in digital health, clarity and restraint are just as important as technical ambition. A strong proof of concept does not need to solve the entire problem. It needs to solve one part of it well and explain its limitations honestly.
We also learned that:
- narrowing scope makes a project much stronger
- healthcare AI must be interpretable and responsibly framed
- computer vision can be useful for behavioral and motor screening, but not everything visible should be treated as diagnostic
- everyday devices like home cameras may have untapped potential for longitudinal health monitoring when used ethically and with consent
On the technical side, the project reinforced how transferable computer vision skills can be across domains when the underlying problem is framed carefully. My experience building and deploying AI imaging systems gave me a strong foundation for thinking through this prototype.
What's next for Parkinson’s Early Motion Screener
The next step is to move from a concept-level prototype toward a more structured monitoring tool. That would include improving motion feature extraction, testing on more varied video examples, and designing a better interface for showing symptom trends over time.
Future directions could include:
- guided movement tasks for more consistent video capture
- longitudinal tracking dashboards for patients and caregivers
- clinician-facing summaries for telehealth or follow-up visits
- privacy-preserving processing options
- eventual validation with real users and medically supervised datasets
Long term, Parkinson’s Early Motion Screener could evolve into an accessible digital health tool that helps people notice subtle movement changes earlier and monitor them more consistently, especially in home settings where those changes might otherwise go untracked.
Built With
- mediapipe
- numpy
- opencv
- randomforest
- scikit-learn