Inspiration
Our inspiration comes from personal experiences with loved ones affected by Alzheimer's. We understand the impact of late diagnoses and are driven to provide more accessible MRI technology to enable earlier detection. This will give people more time for treatment and planning.
What it does
It takes an MRI scan from the user and returns an Alzheimer's severity score to help guide treatment options. Under the hood is a convolutional neural network (CNN): convolutional kernels slide over the image to produce feature maps, which are condensed layer by layer until they reach a fully connected output layer. The pipeline uses convolutions, max-pooling, flattening, normalization, dropout, and non-linear activation functions.
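As a rough illustration of that pipeline, here is a minimal PyTorch sketch (assuming PyTorch; the layer counts, channel sizes, and input resolution are illustrative assumptions, not our exact trained model):

```python
import torch
import torch.nn as nn

class SeverityCNN(nn.Module):
    """Illustrative CNN: conv -> norm -> activation -> max-pool blocks,
    then flatten -> dropout -> fully connected output."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # kernels produce feature maps
            nn.BatchNorm2d(16),                          # normalization
            nn.ReLU(),                                   # non-linear activation
            nn.MaxPool2d(2),                             # max-pool condenses the maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                # flattening
            nn.Dropout(0.5),                             # dropout for regularization
            nn.Linear(32 * 32 * 32, num_classes),        # fully connected layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SeverityCNN().eval()
scores = model(torch.zeros(1, 1, 128, 128))  # one grayscale 128x128 MRI slice
print(scores.shape)  # one logit per severity class: torch.Size([1, 4])
```

Each max-pool halves the spatial resolution (128 → 64 → 32), which is why the flattened feature vector feeding the fully connected layer has 32 × 32 × 32 entries.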
How we built it
The first thing we did was define four severity classes, ranging from non-demented to moderately demented; each MRI image is assigned one of these classes to describe the state of dementia visible in the person's brain scan. We then used convolutional blocks that extract and store information from an image such as edges, curves, and shadows. Simple feed-forward networks primarily use an input layer, some dense layers, and an output layer; because our model works on images, however, we needed convolutional layers to extract that information, as it is not directly provided to us.
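The four-class labelling can be sketched as a simple mapping from the CNN's predicted class index to a human-readable label (the two intermediate class names are our assumptions based on common Alzheimer's MRI datasets; only the endpoints are stated above):

```python
# Four severity classes, from least to most severe.
# The intermediate names are assumed; the write-up only fixes the endpoints.
SEVERITY_CLASSES = {
    0: "non-demented",
    1: "very mildly demented",
    2: "mildly demented",
    3: "moderately demented",
}

def severity_label(class_index: int) -> str:
    """Map a predicted class index (e.g. argmax of the CNN logits) to a label."""
    return SEVERITY_CLASSES[class_index]

print(severity_label(3))  # moderately demented
```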
Challenges we ran into
We ran into hosting issues when trying to deploy on Heroku and Firebase, largely because of the sheer number of files from React imports and the dataset used to train the AI model. Getting comfortable with layout.tsx and with the TypeScript language in general was also a great learning process for most of us.
Accomplishments that we're proud of
We are proud of training a model on a large dataset that became the backbone of our project. We also built a frontend we were really proud of, with many pages and use of shadcn; for the first time, most of us got to use React, which we had prepared for in the week preceding the hackathon.
What we learned
We learned how to train a convolutional neural network: choosing an activation function, reading learning curves, and applying transforms (including Gaussian blurring). On the frontend, we learned much more about React and Next.js, along with coding in TypeScript, directory structure, and UI design with shadcn. On the whole, we also learned how to use Django to develop a web application.
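One of the transforms mentioned above, Gaussian blurring, can be sketched in plain NumPy with a separable 1-D kernel (a minimal sketch; in practice a library transform such as torchvision's would likely be used instead):

```python
import numpy as np

def gaussian_kernel_1d(sigma: float, radius: int) -> np.ndarray:
    """Discrete 1-D Gaussian, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Blur a 2-D image by convolving rows, then columns (separable filter)."""
    k = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

img = np.zeros((9, 9))
img[4, 4] = 1.0                      # a single bright pixel
out = gaussian_blur(img, sigma=1.0)  # spread into a smooth Gaussian bump
```

Because the kernel is normalized and separable, the total intensity is preserved (away from the borders) while high-frequency detail is smoothed out, which is the point of using it as a training-time augmentation.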
What's next for Cogniscan
Next, we plan to create a profile page where users can view their past MRI scans and other information, along with stronger encryption and privacy. After that, a natural step would be collaborating with real doctors, as Alzheimer's is an issue that will only become more important as more nations progress into the fourth stage of the Epidemiological Transition Model.