Inspiration

According to American Heart Association research, every third person in the world dies of cardiovascular disease. For the last decade, coronary artery disease has been the leading cause of death worldwide, killing more than 18 million people per year.

The main screening test doctors use to identify heart disease is a CT scan. European Telemedicine Clinic research reports that the number of CT scans per capita grows by 6% to 16% annually, while the number of doctors per capita remains the same. Today, you have to wait days or even weeks for results, and in the coming years the waiting time will only increase. But for those in a critical condition, a quick diagnosis is vitally important.

What it does

CardioVision is an AI-guided application that allows radiologists to identify heart disease from computed tomography scans five times faster. It provides a second-opinion tool for doctors, producing a diagnosis of the level of stenosis in the coronary arteries. Moreover, it can visualize and track the regions that influenced the algorithm's decision, and a doctor can also see the confidence of each prediction.

How we built it

We collected data from a network of Australian medical centers, starting with 80 patients. During development we continuously expanded the dataset as new data arrived, eventually gathering CT scans of 800 patients in total. Using the collected scans, we implemented and trained several neural network architectures for image classification. We tested a few efficient models: ShuffleNet V2, SqueezeNet, and ResNet18, each on its own and each in combination with an LSTM block. Each incoming image is fed into the algorithm, which outputs the probability of non-significant, significant, or no stenosis. Our method then combines the per-image predictions into a final class for the patient. The pipeline was developed with the PyTorch library, following research papers on deep learning.
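For illustration, here is a minimal sketch of the slice-classifier-plus-LSTM idea described above: a ResNet18 backbone extracts features for each CT slice, an LSTM aggregates the slice sequence, and a linear head outputs the patient-level class. The shapes, hyperparameters, and weights below are assumptions for the example, not our exact production configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class SliceLSTMClassifier(nn.Module):
    """ResNet18 features per CT slice + LSTM over the slice sequence.

    Outputs patient-level logits for three classes: no stenosis,
    non-significant stenosis, significant stenosis. Illustrative only.
    """

    def __init__(self, num_classes=3, hidden_size=128):
        super().__init__()
        backbone = models.resnet18(weights=None)  # load trained weights in practice
        backbone.fc = nn.Identity()               # keep the 512-d feature vector
        self.backbone = backbone
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, num_slices, 3, H, W)
        b, s = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1))    # (b * s, 512)
        feats = feats.view(b, s, -1)              # (b, s, 512)
        _, (h_n, _) = self.lstm(feats)            # h_n: (1, b, hidden_size)
        return self.head(h_n[-1])                 # patient-level logits

model = SliceLSTMClassifier()
scan = torch.randn(2, 16, 3, 224, 224)            # 2 patients, 16 slices each
probs = torch.softmax(model(scan), dim=1)         # per-patient class probabilities
```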

Challenges we ran into

We learned that working with medical data requires domain-specific expertise, even if you are only developing a machine learning model. Gaining such expertise is time-consuming and calls for an expert mentor on the team. We spent a lot of time trying to understand the data we were working with, and in the end we realized that all we needed to do was listen to an expert from the very beginning.

Another challenge we ran into was explaining our model's predictions. Doctors want to see why the network made a particular decision, so it is important to explain the model's behavior. We addressed this by visualizing the predictions and producing explainable results that highlight the features which influenced the network's decisions.
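As one illustration of the general idea (the exact technique may differ from what we used), a Grad-CAM-style heatmap estimates which image regions contributed most to the predicted class from the activations and gradients of the last convolutional block:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Grad-CAM-style saliency sketch: one common explainability technique,
# shown here only as an example of highlighting influential regions.
model = models.resnet18(weights=None).eval()       # load trained weights in practice
target_layer = model.layer4[-1]

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0]))

image = torch.randn(1, 3, 224, 224)                # placeholder for a CT slice
logits = model(image)
logits[0, logits.argmax()].backward()              # gradient of the top-scoring class

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # channel importance
cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],   # upsample to image size
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
```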

Gathering a dataset from the clinic for our specific task was also among the biggest challenges.

Accomplishments that we're proud of

  1. Publication at a scientific workshop, CVWW (proceedings: http://data.vicos.si/cvww20/CVWW20-proceedings.pdf).
  2. Poster presentation at the WiML workshop at NeurIPS 2019.
  3. Dataset formation. In cooperation with Australian clinics, we collected the largest dataset we know of for this task, covering over 800 patients.
  4. Gathered medical expertise.
  5. Accurate prediction: the algorithm achieves more than 80% accuracy.
  6. A tool that provides explainable results.

What we learned

The hackathon was a rich period for learning. We have grouped our insights into a few categories.

  1. Video shooting. We wasted a lot of time trying to make our video pitch dynamic and exciting. As it turns out, if you don't have a professional shooting team, it is better to follow the KISS principle: Keep It Stupid Simple.
  2. Clinical data access. It was no surprise that data is the critical component of an AI project. However, collecting a dataset from a hospital on a short timeline is a near-impossible mission.
  3. Customer communication. Collaboration with every person you involve in the project should be discussed in detail; otherwise, mismatched expectations can lead to conflict.
  4. Software development. Before the hackathon, we worked only on the AI algorithm. Now we know how to properly wrap that algorithm into a PoC version of the product.
  5. AI in medicine. It is crucially important to understand the medical side of the project you are working on (in our case, the cardiovascular system of the human body). This speeds up algorithm development and increases the effectiveness of the future product.
  6. Roles in a team. Our team consists of three best friends, and we still haven't killed each other. To keep our relationships intact, it helped to clearly divide the roles in the project.

What's next for CardioVision

We have tons of ideas for the further development of our project.

Clinic Integration

We will soon integrate the web service into doctors' workflows at several clinics.

Algorithm improvement

We have gained the medical expertise to understand which additional types of data we can feed into our neural networks to improve classification accuracy. We also see a couple of ways to extend the classification task to localization, i.e., pixel-wise segmentation. There are several challenges here; the biggest is that obtaining an annotated dataset for this task is expensive and time-consuming. We will try to work around this by first applying semi-supervised approaches (multiple instance learning, weakly labeled segmentation, etc.), as sketched below. We would also like to add plaque type characterization, although that will take a lot of time, since this task has not yet been solved even in the scientific literature.
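To illustrate the multiple instance learning direction mentioned above, a minimal attention-based MIL head (in the spirit of Ilse et al., 2018) pools a bag of slice or patch features into a single patient-level prediction using only scan-level labels; all names and dimensions here are illustrative assumptions, not a committed design.

```python
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Attention-based multiple instance learning pooling.

    A scan is treated as a bag of patch/slice features; only a bag-level
    (patient-level) label is required, and the learned attention weights
    hint at which regions drove the prediction. Dimensions are illustrative.
    """

    def __init__(self, feat_dim=512, attn_dim=128, num_classes=3):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, bag):                       # bag: (num_instances, feat_dim)
        weights = torch.softmax(self.attention(bag), dim=0)   # (num_instances, 1)
        pooled = (weights * bag).sum(dim=0)       # weighted bag representation
        return self.classifier(pooled), weights   # logits + per-instance weights

head = AttentionMILHead()
bag = torch.randn(40, 512)                        # e.g. 40 patch features per scan
logits, attn = head(bag)
```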

Report autogeneration

We also want to implement fully automated report generation that covers all detected pathologies and the morphology of the detected plaques. Doctors will be able to easily edit the report to reflect their own diagnosis.

Changing communication

We also see one innovative way to transform the concept of the project: changing how doctors communicate with our algorithm. In addition to the web interface, we want to create a chat-bot assistant that helps doctors inspect the algorithm's predictions even faster and more conveniently.
