Inspiration

If there's one thing our team has learned in our biology classes, it's that cancer sucks. Dealing with cancer is arduous, from diagnosis all the way through treatment. Many groups and companies have built machine learning models to check whether a given sample contains a malignant tumor. Given the recent hype around quantum computing, we wanted to know whether quantum computing can beat classical computing at cancer detection!

What it does

Our project is a simple web app that lets any user upload images. Our classification algorithm, already trained on a binary-classification breast cancer dataset, analyzes each image and reports whether it shows a malignant tumor. Additionally, the user can ask the model to highlight the areas of the picture that played the biggest part in the classification.

How we built it

We used CBIS-DDSM from TCIA, which contains labelled breast cancer mammography images, and converted it into a format our quantum machine learning pipeline could read. We used transfer learning with pretrained ResNet18 weights to speed up training, and evaluated the model's performance metrics along the way. LIME let us mark the regions of each image that the algorithm weighted most heavily in its decision, which is how we implemented AI explainability. We hosted a web app through Anvil and connected it to our Jupyter notebook so users can upload photos and have them classified; the notebooks themselves ran on the UMIACS cluster.

Challenges we ran into

Setting up the server was a bit of a challenge, as we needed a chain of host locations and virtual environments to get the whole thing running. Pre-processing the data was also tricky because it was distributed in DICOM format, which the software we used did not support. The dataset had errors and inconsistencies in its filenames that we had to work around. And although we had a temporary API key to run our model on IonQ's quantum hardware, we unfortunately got stuck in the training phase (probably due to the job queue).

What we learned

We learned how to use TCIA and pydicom to download and access medical data. We learned how to integrate PyTorch and PennyLane. We also learned how to make web apps with Anvil and how to connect them to external programs.

What's next for Explainable AI 4 Cancer w/ Quantum Transfer Learning? Yes!

We could try a variety of different transfer learning models, both classical and quantum, and compare them. Expanding the number of qubits our quantum model uses could yield new results. Expanding our training dataset to include more types of cancer could broaden the app's usefulness even further. We also hope to tune the hyperparameters to improve model performance.
