Inspiration
Alzheimer's disease is a "silent crisis." By the time memory loss becomes obvious to families, significant and irreversible brain damage has often already occurred. We learned that roughly 1 in 4 patients is misdiagnosed, because the early physical changes in the brain (atrophy) are microscopic and incredibly difficult even for expert radiologists to distinguish from healthy aging.
We asked ourselves: What if we could give doctors a "second pair of eyes" that never gets tired and can spot these invisible patterns years in advance? That question birthed NeuroSight AI.
What it does
NeuroSight AI is an intelligent screening assistant that analyzes MRI brain scans in real-time.
- Instant Triage: A user uploads a standard MRI scan, and within 5 seconds, the system processes the image.
- Precision Classification: It classifies the patient into one of 4 stages: Non-Demented, Very Mild Demented, Mild Demented, or Moderate Demented.
- Safety First: It is specifically tuned to catch "Very Mild" cases—ensuring we don't miss patients when intervention is still possible.
How we built it
We approached this as a full-stack engineering challenge, not just a data science experiment.
- The AI Core: We used TensorFlow/Keras to build a Deep Learning model. Instead of training from scratch (which yielded poor results), we leveraged Transfer Learning with ResNet50. We froze the early layers to extract features and fine-tuned the top 30 layers on our Alzheimer's dataset.
- The Frontend: We built a responsive, modern web interface using React 18, Vite, and Tailwind CSS to ensure the tool feels like a professional medical product, not a script.
- DevOps: We managed our large model weights (200MB+) using Git LFS to keep our deployment pipeline clean.
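The transfer-learning setup described above can be sketched roughly as follows. This is a minimal illustration, not our exact training script: the input size, dropout rate, and learning rate are illustrative, and `weights=None` stands in for the `weights="imagenet"` download we actually used, just so the sketch runs offline.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# ResNet50 as a feature extractor. In the real project this was
# weights="imagenet"; weights=None here only avoids the download.
base = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(176, 176, 3)
)
base.trainable = True
for layer in base.layers[:-30]:   # freeze everything except the top 30 layers
    layer.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),  # the four dementia stages
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the early layers keeps the generic edge/texture features from pretraining intact, while fine-tuning the top 30 layers lets the network adapt its high-level features to MRI anatomy.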
Challenges we ran into
- The "Microscopic" Problem: The visual difference between a "Healthy" brain and a "Very Mild Demented" brain is almost non-existent to the naked eye. Our early models kept confusing them. We solved this by implementing aggressive Data Augmentation (zoom/shear) to teach the model to focus on structural shapes rather than pixel noise.
- The Git LFS Hurdle: Our model file was 205MB, which exceeded GitHub's 100MB limit. This broke our push commands repeatedly. We had to learn and implement Git Large File Storage (LFS) mid-hackathon to successfully version control our weights without corrupting the history.
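The zoom/shear augmentation from the first point can be reproduced with Keras' `ImageDataGenerator`. A sketch with illustrative ranges (the exact values we used differed), demonstrated on a dummy batch rather than real MRI data:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Aggressive geometric augmentation: zoom and shear perturb local pixel
# patterns while preserving gross anatomical shape, pushing the model
# toward structural features rather than pixel noise.
augmenter = ImageDataGenerator(
    zoom_range=0.2,      # random zoom in/out by up to 20%
    shear_range=15,      # random shear up to 15 degrees
    rescale=1.0 / 255,   # normalize pixel intensities to [0, 1]
)

# Demo on a dummy batch of 8 grayscale "scans".
x = np.random.rand(8, 176, 176, 1).astype("float32")
y = np.zeros(8)
batch_x, batch_y = next(augmenter.flow(x, y, batch_size=8))
```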
Accomplishments that we're proud of
- 95% Test Accuracy: We achieved medical-grade accuracy on our test set.
- High Sensitivity: Most importantly, we achieved 94% Recall on "Very Mild" cases. In medicine, missing a sick patient is the worst outcome, so we are proud we built a "safe" model.
- A Working Product: We didn't just leave this in a Jupyter Notebook. We successfully connected the complex Deep Learning backend to a user-friendly React frontend.
What we learned
- Recall > Accuracy: We learned that in healthcare, a "99% accurate" model is useless if it misses the 1% of sick people. We learned how to tune our loss functions to prioritize sensitivity.
- Deployment is Hard: We learned that training a model is only 20% of the work; the other 80% is figuring out how to ship that 200MB file to a user without crashing the browser.
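The "Recall > Accuracy" lesson is easy to demonstrate with a toy example: a model that never flags a rare positive class can still score high accuracy. A minimal sketch in plain Python, with hypothetical numbers:

```python
def recall(y_true, y_pred, positive):
    """Fraction of true `positive` cases the model actually caught."""
    actual = [t == positive for t in y_true]
    caught = [t == positive and p == positive for t, p in zip(y_true, y_pred)]
    return sum(caught) / sum(actual)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# 100 scans, 5 of them "very_mild"; a lazy model predicts "healthy" always.
y_true = ["very_mild"] * 5 + ["healthy"] * 95
y_pred = ["healthy"] * 100

print(accuracy(y_true, y_pred))             # 0.95 -- looks great on paper
print(recall(y_true, y_pred, "very_mild"))  # 0.0  -- misses every sick patient
```

In Keras, one common way to push recall up is passing `class_weight` to `model.fit` so the loss penalizes missed minority-class cases more heavily.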
What's next for NeuroSight AI
- Explainability (Grad-CAM): We plan to add "Heatmaps" that overlay the MRI, showing doctors exactly which part of the brain the AI is looking at to build trust.
- Patient Timeline: Building a feature to track a patient's scans over months to calculate the rate of degeneration.
- Global API: Releasing our model as an open API to help clinics in developing nations access top-tier diagnostic tools.
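The planned Grad-CAM heatmaps could look something like the sketch below: weight the last convolutional feature maps by the gradient of the target class score, then collapse them into a coarse spatial heatmap. This is a generic Grad-CAM outline demonstrated on a tiny stand-in CNN, not our production model; the layer name `last_conv` and all shapes are hypothetical.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Grad-CAM: weight conv feature maps by the gradient of the class
    score, sum over channels, ReLU, and normalize to [0, 1]."""
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # per-channel weight
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Tiny stand-in model just to exercise the function.
model = tf.keras.Sequential([
    tf.keras.layers.Input((64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu", name="last_conv"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
heatmap = grad_cam(model, np.random.rand(64, 64, 1).astype("float32"),
                   "last_conv", class_index=0)
```

Upscaled and overlaid on the original MRI, the heatmap would show a clinician which regions drove the prediction.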
Built With
- css3
- git-lfs
- github
- google-colab
- html5
- javascript
- keras
- python
- react
- tailwindcss
- tensorflow
- typescript
- vite