Inspiration

Doctors may seem infallible, but enormous harm can come from delaying a brain MRI review. A delay risks overlooking even the most severe brain tumors, leaving patients to suffer through terrible consequences including headaches, vomiting, and memory loss. My uncle was unfortunately one of these individuals: he wasn't aware of his tumor until it started impacting his ability to perform simple daily tasks, such as walking up the stairs, reading Moby Dick, and even cooking his family his famous Fettuccine Alfredo. Although his brain MRI itself went smoothly, its review was delayed far too long, which only caused him more pain. The distress this caused his family was immeasurable, and no one deserves to experience this horrific reality. Our team's goal is to ensure that another family doesn't have to go through this, and we are working toward it by developing NeuroScan AI to help streamline and accelerate these processes while providing guidance for healthcare administrators, inexperienced practitioners, and more!

What it does

Our project is an AI-assisted brain MRI workflow and learning tool developed to support scan reviews, not to make medical diagnoses. It analyzes uploaded MRI images to check image quality, highlight visual patterns, and suggest which scans may need faster clinical review. The platform also provides medical explanations to help new clinicians learn about what they are viewing. By organizing this information in a structured way, the tool improves efficiency, consistency, and prioritization in MRI workflows while keeping all final decisions with medical professionals.

How we built it

We built NeuroScan AI as a web application meant to show the world one of the ways AI can support healthcare. The frontend was built with React, using component-based state management to handle image uploads, previews, and analysis requests. MRI images are uploaded through a drag-and-drop or file-selection interface and validated as image inputs before analysis.

Once an image is selected, it is converted into an encoded format using the browser's FileReader API, allowing secure transmission to the AI backend without persistent file storage. We also integrated a vision-capable AI API, along with a carefully crafted prompt that constrains the model's behavior: the AI analyzes only brain MRI images, gives insights based only on visible features, and returns output in a predefined JSON schema.
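The encoding step described above can be sketched roughly as follows. This is an illustrative version, not the project's exact code; the helper names (`stripDataUrlPrefix`, `fileToBase64`) are assumptions.

```javascript
// Pure helper: extract the raw base64 payload from a data URL
// like "data:image/png;base64,AAAA...".
function stripDataUrlPrefix(dataUrl) {
  const comma = dataUrl.indexOf(",");
  return comma === -1 ? dataUrl : dataUrl.slice(comma + 1);
}

// Browser-only: wrap FileReader in a Promise (runs in the React frontend).
// The resulting base64 string is sent to the AI backend in memory only,
// with no persistent file storage.
function fileToBase64(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(stripDataUrlPrefix(reader.result));
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file); // produces a "data:<mime>;base64,..." string
  });
}
```

The Promise wrapper lets the upload handler simply `await fileToBase64(file)` inside an async event handler.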

The AI response is then validated on the client side and passed to UI components that present an image quality assessment, structural observations, workflow priority guidance, educational insights, and limitations. We added loading states, error handling, and response validation to make sure the system remains stable even when AI outputs are inconsistent. Throughout development, we focused on making the system function as a decision-support and workflow-automation tool rather than a diagnostic system.


Challenges we ran into

One of the biggest challenges was limiting the responses the AI model could give. Because large models tend to identify or label medical conditions, we had to refine our prompts and add constraints to prevent diagnoses, treatment advice, or speculative conclusions. Making the feedback reliable required numerous rounds of testing, cross-referencing against public health records.

Another major challenge was output consistency. Even when requesting structured JSON-only responses, the AI would occasionally return partially structured output. We fixed this by implementing defensive parsing logic, fallback error handling, and UI safeguards so the application would fail safely without misleading users.
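The defensive parsing described above can be sketched as below. This is an illustrative version under assumed conventions (the model sometimes wraps its JSON in prose or markdown fences), not the exact production logic.

```javascript
// Defensively parse a model response that should contain a JSON report.
// Strategy: strip markdown fences, locate the outermost {...} region,
// and fall back to a safe error object instead of crashing the UI.
function safeParseAnalysis(raw) {
  const fallback = { ok: false, error: "Analysis unavailable. Please retry." };
  if (typeof raw !== "string") return fallback;
  // Remove ``` or ```json fence markers if the model added them.
  const cleaned = raw.replace(/```(?:json)?/g, "");
  const start = cleaned.indexOf("{");
  const end = cleaned.lastIndexOf("}");
  if (start === -1 || end <= start) return fallback;
  try {
    return { ok: true, data: JSON.parse(cleaned.slice(start, end + 1)) };
  } catch {
    return fallback;
  }
}
```

The `ok: false` branch is what lets the UI fail safely: components render the fallback message rather than a partially parsed report.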

We also faced some technical challenges around image handling and performance, including managing large image files, converting them efficiently, and maintaining responsive UI feedback during the API calls. Last but most definitely not least, balancing realism with ethics required many careful design choices. We had to make sure that the platform felt deployable and useful while clearly communicating its limitations.

Accomplishments that we're proud of

Full-Stack MRI Analysis API Pipeline: Implemented a full-stack pipeline using JavaScript (React + backend APIs), CSS, and HTML in which brain MRI images are uploaded, validated, processed, and analyzed, with a structured JSON report returned directly in the UI.

Input Validation Layer: Built backend validation to enforce valid MRI inputs, including file-type checks, size constraints, and metadata inspection, with automatic rejection or warning paths for unsupported or low-usability inputs.

Schema-Driven Enforcement: Defined an analysis schema (Image Quality, Structural Patterns, Workflow Priority, Educational Insights, Confidence Level) and implemented response parsing to ensure consistency across screens.
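A minimal sketch of what enforcing that schema on the client might look like. The camelCase field names are assumptions mapping the five schema sections above; the real parsing logic is more involved.

```javascript
// The five sections every analysis report must carry before it is
// handed to the UI components (names are illustrative).
const REQUIRED_FIELDS = [
  "imageQuality",
  "structuralPatterns",
  "workflowPriority",
  "educationalInsights",
  "confidenceLevel",
];

// Return which required sections are missing so the UI can decide
// whether to render the report or show a safe fallback.
function validateReport(report) {
  if (report === null || typeof report !== "object") {
    return { valid: false, missing: [...REQUIRED_FIELDS] };
  }
  const missing = REQUIRED_FIELDS.filter((field) => !(field in report));
  return { valid: missing.length === 0, missing };
}
```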

Visual Feature Assessment: Implemented AI-driven assessment focused strictly on image properties, including asymmetries, contrast, intensity, and shape irregularities.

Workflow Priority Classification Engine: Built a logic layer that converts observed visual features into Low, Medium, and High workflow priority, with an explainable rationale tied directly to image characteristics. It supports triage without clinical decision-making.
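A simplified sketch of that feature-to-priority mapping. The thresholds and feature names here are illustrative assumptions; the actual rules are more nuanced, but the shape is the same: observed features go in, a bucket plus an explainable rationale comes out.

```javascript
// Map observed visual features (booleans) to a workflow priority bucket,
// keeping the triggering features as the explainable rationale.
// Example input: { asymmetry: true, contrastIrregularity: false, shapeIrregularity: false }
function classifyPriority(features) {
  const flags = Object.entries(features)
    .filter(([, present]) => present)
    .map(([name]) => name);
  const level =
    flags.length >= 2 ? "High" : flags.length === 1 ? "Medium" : "Low";
  return { level, rationale: flags };
}
```

Because the rationale is just the list of triggering features, the UI can show *why* a scan was bumped up the queue, which is what keeps the triage explainable rather than a black box.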

Educational Explanation Generator for Junior Clinicians: Implemented a contextual explanation layer that programmatically translates image observations into educational guidance, suggesting what a supervising clinician might review next. This can also serve as a valuable study and review tool for medical students.

Confidence Module: Added a confidence scoring system that surfaces uncertainty, image quality constraints, and analysis boundaries in every report, reducing bias.

Side-by-Side Visualization & Report: Built a responsive UI that synchronizes MRI image viewing and AI reports, enabling rapid scan review for healthcare administrators and clinical operations teams.

What we learned

- Building secure file-upload pipelines and validating medical image formats such as PNG.

- Integrating machine learning models into a web backend using an API.

- Optimizing performance so AI inference does not freeze the website.

- Designing an interface that is simple enough for non-technical users.

- Presenting AI results in a clear, non-alarming, and interpretable way.

What's next for NeuroScan AI

We plan to improve the model's quality by training and validating it on a more diverse set of MRI scans so it better handles different image qualities, scanners, and patient types. We also aim to expand workflow features, such as clearer prioritization explanations, confidence calibration, and integration with existing clinical systems such as PACS, for more reliable adoption. In the longer term, we want to strengthen the educational component by building guided learning modules and case-based examples for new clinicians. Any future development will continue to emphasize non-diagnostic use and collaboration with medical professionals to ensure the tool remains an ethical and practical workflow-support system.

Comprehensive Report on Model:

Description: Healthcare systems face significant delays in reviewing brain MRI scans due to imaging backlogs, staffing shortages, and inconsistent first-pass review processes. These delays disproportionately affect patients with limited access to care and resources, and they increase the pressure on early-career clinicians. NeuroScan AI is an AI-powered workflow intelligence and educational support platform designed to assist with the initial review and prioritization of brain MRI images. Rather than diagnosing conditions or recommending treatments, NeuroScan AI analyzes features of MRI scans to gauge image usability and structural patterns, provide workflow-priority guidance, and offer educational explanations. The system functions strictly as a decision-support tool, enhancing efficiency while keeping clinicians fully in control.

Target Group: This tool is designed for healthcare administrators, radiology and clinical teams, and medical students and early-career clinicians.

Features & Functions:

- Brain MRI image upload (drag-and-drop)
- Image quality and usability assessment
- Visual pattern observation (asymmetry, intensity, contrast, spatial features)
- Workflow priority assessments and guidance on next decisions
- Educational explanations for early-career clinicians
- Transparent confidence and limitation reporting
- Saved analysis history for workflow tracking, with the ability to delete personal data

Value Proposition & USP: It reduces MRI review bottlenecks, standardizes first-pass review, supports early-career clinicians with clear and adaptive guidance, and improves operational efficiency without direct risk, since it does not seek to replace diagnosis, setting it apart from existing tools. Our one-of-a-kind product improves efficiency in many ways: it is strictly non-diagnostic and ethically bounded (it does not replace diagnosis), designed for workflow optimization, offers educational guidance to build confidence in new doctors, and maintains a clear separation between AI guidance and human decision-making.

User Feedback: We gathered feedback and validation from a few members of our target group. Junior Medical Student (Arin): “The clear, concise explanations help me understand what to focus on and practice further to fill in my gaps without feeling like AI is diagnosing and doing everything for me”

Early-Career Clinician (Dakshin): “This is such a lifesaver for helping me learn how workflow decisions are made in the real field, where the time crunch makes the difference between saving a life and losing it!”

Professional Radiologist: “I really like the prioritization logic as it could help manage the huge load of imaging queues we get”

Business Model: The primary model would involve a reasonable subscription for hospitals, medical institutions, and imaging centers. We would offer plans with varying levels of benefits, such as a workflow-only plan vs. a workflow + educational analysis plan.

Implementation & Feasibility: We would have a web-based deployment, an AI pipeline ready for future upgrades, a scalable backend, and readiness to integrate with hospital systems (PACS).

Next Steps:

- Improve the quality and scope of the model by including broader MRI datasets
- Elaborate on the educational explanations with additional resources or links our AI model can refer to
- Add different levels of access depending on whether the user is a professional administrator or a trainee/upcoming clinician
- Conduct real testing with teaching hospitals

Data Requirements: Our product uses only brain MRI images, with no patient-identifiable data required and no patient tracking over time. There will be a required user consent form and disclaimer that all users must read and agree to. Users can delete their data, image uploads, and accounts. Images are handled securely, and no direct diagnosis is made. There is no long-term storage unless requested by an institution, and the analysis history reports only metadata, not the raw image itself. No third-party analytics or tracking tools are attached to the images. All uploads are encrypted (TLS/HTTPS). Finally, no cross-image tracking is possible. NeuroScan AI ensures DSGVO/GDPR compliance through strict data minimization, in-session image processing, encrypted transmission, and clear user consent.

In-depth explanation of what we did after creating the model:

Brain tumor segmentation from magnetic resonance imaging (MRI) is needed for clinical diagnosis, treatment planning, and monitoring disease progression. However, current automated systems must demonstrate both statistical capability and workflow feasibility to be clinically useful. We studied whether data augmentation improves segmentation while maintaining negligible processing time for workflow support. Our hypothesis was that an augmentation-enhanced segmentation model would significantly outperform a non-augmented baseline in Dice similarity coefficient across tumor regions while maintaining accurate performance. We ran 100 controlled synthetic MRI trials with a known ground-truth tumor mask. We followed BraTS labeling conventions and evaluated the Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET) regions. Segmentation performance was measured using the Dice coefficient, Intersection over Union, sensitivity, specificity, Hausdorff distance (HD95), and volumetric agreement. The API's processing performance was then evaluated across 100 automated trials. The augmented model achieved a mean Dice score of 0.7877 for WT, 0.5524 for TC, and 0.5611 for ET, a statistically significant improvement over the non-augmented model (p < 0.05 for all regions). Mean API processing time was 37.92 ± 8.02 seconds with a 97% successful completion rate. These findings show that augmentation significantly improves segmentation while preserving workflow feasibility. NeuroScan AI demonstrates potential as an educational and workflow-support tool, although clinical validation using real BraTS data or patient scans is a necessary future step.
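For readers unfamiliar with the overlap metrics reported above, here is an illustrative computation of the Dice coefficient and Intersection over Union on flattened binary masks (Dice = 2|A∩B| / (|A|+|B|), IoU = |A∩B| / |A∪B|). This is a teaching sketch, not the evaluation harness used for the trials.

```javascript
// Compute Dice and IoU between a predicted and a ground-truth binary
// mask, both given as flat arrays of 0/1 voxel labels.
function diceAndIoU(pred, truth) {
  let inter = 0, predSum = 0, truthSum = 0;
  for (let i = 0; i < pred.length; i++) {
    inter += pred[i] & truth[i]; // voxel counted only if both masks are 1
    predSum += pred[i];
    truthSum += truth[i];
  }
  const union = predSum + truthSum - inter;
  return {
    // Both metrics are defined as 1 when both masks are empty.
    dice: predSum + truthSum === 0 ? 1 : (2 * inter) / (predSum + truthSum),
    iou: union === 0 ? 1 : inter / union,
  };
}
```

Per BraTS convention, each region (WT, TC, ET) would be binarized from the label map first and scored separately with a function like this.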
