Gallery above:

1) Video: product demo from 0:40 to 1:40
2) Images after the video: the pitch deck shown in the video

See the GitHub link under "Try It Out" below for the repo.

Inspiration

In the U.S., someone suffers a stroke every 40 seconds, leading to 440 deaths each day. 1 in 4 people will experience a stroke in their lifetime. If not detected and treated within the first hour, the likelihood of lifelong disability is extremely high. Yet most people cannot recognize the signs of a stroke in others—let alone in themselves. The result: lives lost, futures altered, and an economic burden exceeding $100 billion annually in the U.S. alone. This must change.

The most critical window is the “golden hour”—the first 60 minutes after a stroke begins. In this short timeframe, treatments that restore blood flow can save threatened brain tissue and prevent lasting disability or death.

Older adults—those 65 and above—are at the highest risk. On average, they spend 10 hours a day sitting in front of screens, and nearly every adult in the U.S. owns a smartphone. That’s where we come in: StrokeDx provides passive, continuous stroke detection through smartphones, using the camera and microphone to immediately detect the F.A.S.T. signs of stroke.

StrokeDx also plays a critical secondary role: monitoring after an initial stroke. Due to the high cost of care—where a single 24-hour ICU stay can exceed $50,000 to $100,000—most Americans are discharged within hours of treatment. Yet according to the American Academy of Neurology, nearly 50% of major strokes occur within 24 hours of the first episode. StrokeDx bridges this dangerous gap, offering ongoing monitoring when it is needed most.

What it does

StrokeDx is designed to provide passive, continuous stroke detection through smartphones, leveraging the camera and microphone to identify the F.A.S.T. signs of stroke in real time.

For the hackathon, we focused on developing the core detection engine. The model is trained on a dataset of labeled images of patients exhibiting stroke signs (e.g., facial droop, asymmetry) alongside matched control headshots, achieving >90% classification accuracy on a held-out validation set.

This proof-of-concept establishes the foundation for real-time stroke detection directly on consumer devices by demonstrating how AI can accurately detect and monitor for stroke.

How we built it

We built and trained a convolutional neural network using TensorFlow/Keras, trained on open-source datasets of annotated facial images containing stroke-positive cases (e.g., facial droop, paralysis) and stroke-negative controls. Preprocessing was handled with OpenCV and PIL/Pillow for image handling, with normalization and augmentation to improve robustness.
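A minimal sketch of the kind of Keras CNN described above, with normalization and augmentation folded into the model as layers. The function name `build_stroke_cnn`, the 128x128 input size, and the layer sizes are illustrative assumptions, not the exact architecture we shipped.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 128  # assumed input resolution for face crops


def build_stroke_cnn():
    """Binary classifier: stroke-positive vs. control headshots."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        # Normalization and augmentation inside the model, so inference
        # sees the same preprocessing as training (augments are
        # active only during training).
        layers.Rescaling(1.0 / 255),
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.05),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # regularization against overfitting
        layers.Dense(1, activation="sigmoid"),  # P(stroke)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Note that a horizontal flip is label-preserving here: a mirrored facial droop is still a droop, so the augmentation adds variety without corrupting labels.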

Our initial CNN achieved >75% classification accuracy on held-out validation data, which later iterations improved to >90%. As a baseline, we also implemented an XGBoost model alongside scikit-learn utilities to compare performance with classical machine learning approaches. Training and evaluation outputs were visualized with matplotlib, while experiment reports were auto-generated into PDFs using fpdf2.

This proof-of-concept demonstrates the feasibility of AI-driven stroke detection running on widely available consumer devices. Next steps include expanding to multimodal inputs (e.g., video-based facial asymmetry, speech slurring detection via microphone), optimizing inference with lightweight architectures such as MobileNet and EfficientNet for on-device deployment, and integrating into a smartphone-native application for passive, continuous monitoring.
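One plausible shape for the on-device direction: a frozen MobileNetV2 backbone with a small classification head, which can then be converted to TensorFlow Lite for smartphone inference. This is a sketch, not our deployed model; `weights=None` keeps it offline, whereas a real transfer-learning run would use `weights="imagenet"`.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_mobile_model(img_size=128):
    """Lightweight stroke/control classifier for on-device inference."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(img_size, img_size, 3),
        include_top=False,
        weights=None,  # weights="imagenet" in a real transfer-learning run
    )
    base.trainable = False  # freeze backbone; train only the head
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),
    ])
    return model


# After training, the model can be converted for mobile deployment:
# converter = tf.lite.TFLiteConverter.from_keras_model(model)
# tflite_bytes = converter.convert()
```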

Challenges we ran into

1) Scaling down scope — Building a fully functional mobile app with cloud integration to run live AI analysis of both video and audio for stroke detection was far beyond what could be achieved in a single weekend. Our first challenge was deciding what the most essential milestone should be: should we prioritize deploying AI to the cloud, building a mobile interface for video capture, or focusing on training an AI model to reliably detect stroke symptoms? Ultimately, we chose to start with the foundation: training and validating an AI model for stroke detection.

2) Building the AI and finding datasets — Once we scoped down, the next challenge was developing a robust AI model and sourcing appropriate datasets. Stroke detection requires labeled images showing subtle facial asymmetries and other stroke indicators, which are not widely available. We identified and curated open-source datasets of stroke-positive and stroke-negative headshots, then preprocessed the images using OpenCV and PIL/Pillow for normalization, alignment, and augmentation. Training the CNN to achieve high accuracy while avoiding overfitting required careful experimentation with model architectures and hyperparameters.

Accomplishments that we're proud of

Achieving >90% accuracy in classifying stroke vs. non-stroke faces. This proof-of-concept alone demonstrates life-saving potential, showing that even a standalone AI model could meaningfully contribute to early stroke detection and improved patient outcomes.

What we learned

Data quality and diversity are more important than model tweaks: with a limited dataset, the model can’t generalize well.

Overfitting is still a major challenge: dropout helps, but it can’t fully solve poor generalization without better data.

A usable solution requires more than just the model: the backend and frontend need to be built for real-world use and integrated in an easy-to-access way, such as a smartphone app.

What's next for StrokeDx - Saving Minds, Saving Futures

The next phase of StrokeDx will expand to multimodal inputs, including facial video and speech analysis via microphone, and integrate into a smartphone-native application for continuous, passive stroke monitoring. This will enable real-time detection and early intervention, maximizing the potential to save lives and prevent lifelong disability.

Additionally, while ICD-10/CPT/HCPCS insurance billing codes already exist for patient monitoring devices, StrokeDx will need to be clinically tested and validated before insurers can offer it to their members. Leading this effort is Bo Miao, who has drafted and supported competitive grant proposals, securing $20K+ in non-dilutive funding and $75K in paid pilot contracts for UC San Diego researchers, while leading IRB (Institutional Review Board, for human trials) submissions and documentation through approval.

Built With

python, tensorflow, keras, opencv, pillow, scikit-learn, xgboost, matplotlib, fpdf2
