Unsped-up demo video: https://www.youtube.com/watch?v=o8Ey9no-3BQ&feature=youtu.be

How to use it

Our model is deployed as a web application: https://covid-chest-xrays.herokuapp.com/

Simply open the URL and upload chest x-ray images for diagnosis. If you just want to test it, download COVID-19, viral pneumonia, or normal chest x-rays from Google and upload those instead. It's that easy.

Inspiration

COVID-19 nasal swab tests have been in widespread use for months. However, they are invasive and can be harmful for people with sensitive sinuses.

Studies have proposed chest imaging, but found that “No single feature of covid-19 pneumonia on a chest radiograph is specific or diagnostic, but a combination of multifocal peripheral lung changes of ground glass opacity and/or consolidation …” (Cleverley, 2020).

Extracting and combining many features is exactly what convolutional neural networks are good at, so we decided to apply them to this task.

What it does

Given a chest x-ray in DICOM, JPEG, or PNG form, our application will diagnose it as COVID-19, viral pneumonia, or normal. Our solution is more robust and accurate than existing solutions, as well as being extraordinarily fast (see our video).
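
Since DICOM files can't be fed to an image model directly, here is a rough sketch of how a DICOM chest x-ray could be converted into a standard 3-channel image for the classifier (using pydicom; this is an illustration, not necessarily our exact preprocessing, and the file name is a placeholder):

```python
# Sketch: read a DICOM chest x-ray and convert it to an RGB image for the CNN.
import numpy as np
import pydicom
from PIL import Image

def dicom_to_rgb(path, size=(224, 224)):
    dcm = pydicom.dcmread(path)
    pixels = dcm.pixel_array.astype(np.float32)
    # Rescale raw detector values to 0-255 so they match JPEG/PNG inputs.
    pixels -= pixels.min()
    pixels /= max(float(pixels.max()), 1e-6)
    pixels = (pixels * 255).astype(np.uint8)
    # X-rays are single-channel; replicate to 3 channels for the model.
    return Image.fromarray(pixels).convert("RGB").resize(size)

image = dicom_to_rgb("example_xray.dcm")  # placeholder file name
```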

Performance is as follows:

EfficientNetB0 (runs on 500 MB RAM/disk, 1 CPU core):

Test AUC: 0.90
Test Accuracy: 95.44%

EfficientNetB4 (requires a GPU for fast inference):

Test AUC: 0.92
Test Accuracy: 96.92%

How we built it

We used TensorFlow to train the model. Since we needed to iterate quickly to improve the model in only 24 hours, we used very powerful cloud GPUs (NVIDIA Tesla P100) on Kaggle. For more robustness, we applied augmentations to the images during training and used a train/test split for validation (k-fold would have been marginally better). Our notebook for training is here. We ended up training two models. One was a small model optimized for efficiency, designed to run on 500 MB of RAM/disk and a single CPU core. The other was an extremely large model optimized purely for performance, which achieves slightly better scores but must be run with a dedicated GPU (4+ GB of VRAM).
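
To illustrate, a minimal sketch of this kind of training setup is below, assuming the x-rays are organized into one folder per class; the paths, augmentation choices, and hyperparameters are placeholders rather than our exact notebook.

```python
# Sketch: EfficientNetB0 with on-the-fly augmentation in TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = 224
NUM_CLASSES = 3  # COVID-19, viral pneumonia, normal

# Train/validation datasets loaded from class-labelled folders (one-hot labels).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xrays/train", label_mode="categorical",
    image_size=(IMG_SIZE, IMG_SIZE), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xrays/val", label_mode="categorical",
    image_size=(IMG_SIZE, IMG_SIZE), batch_size=32)

# Random augmentations, active only during training, for robustness.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

# EfficientNetB0 backbone; Keras EfficientNet expects raw 0-255 pixel values.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(IMG_SIZE, IMG_SIZE, 3))

inputs = tf.keras.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
x = augment(inputs)
x = base(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Swapping EfficientNetB0 for EfficientNetB4 (with a larger input size) gives the heavier, GPU-bound model described above.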

Our frontend for Heroku uses Streamlit, a Python frontend library. Our GitHub repository also has a much more functional frontend that couldn't be deployed to Heroku; instructions for running it are in the repository. For most users, though, we believe the Heroku frontend is adequate.
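
As an illustration, a Streamlit front end of this kind boils down to little more than a file uploader and a prediction call; the model file name and class ordering below are placeholders, not the exact deployed code.

```python
# Sketch of a Streamlit app: upload an x-ray, run the model, show the diagnosis.
import numpy as np
import streamlit as st
import tensorflow as tf
from PIL import Image

CLASS_NAMES = ["COVID-19", "Normal", "Viral Pneumonia"]  # placeholder ordering

@st.cache_resource
def load_model():
    # Load the saved Keras model once and reuse it across reruns.
    return tf.keras.models.load_model("efficientnetb0_covid.h5")

st.title("COVID-19 Chest X-Ray Diagnosis")
uploaded = st.file_uploader("Upload a chest x-ray", type=["png", "jpg", "jpeg"])

if uploaded is not None:
    image = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(image, caption="Uploaded chest x-ray")
    batch = np.expand_dims(np.array(image), axis=0)  # shape (1, 224, 224, 3)
    probs = load_model().predict(batch)[0]
    st.write({name: round(float(p), 3) for name, p in zip(CLASS_NAMES, probs)})
    st.success(f"Prediction: {CLASS_NAMES[int(np.argmax(probs))]}")
```

Running an app like this locally is just `streamlit run app.py`.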

What's next for us

Ideally, we would have deployed our better model to a web hosting service. However, it needs to run on a GPU or inference is painfully slow, so we opted for a smaller model with slightly worse performance that can run extremely quickly on very limited resources. We would also like to use React/Flask for our frontend, so we can offer demo images and other functionality.

Challenges we faced

The modelling part went surprisingly smoothly, as the dataset was easy to use and I have plenty of previous experience in computer vision. Training took a while, but a powerful GPU allowed us to finish in time. Our frontend hit some road bumps, however. We initially used Flask and React for the frontend, with the end goal of deploying to Heroku. We couldn't figure out how to render the JavaScript with Flask, so we needed to run two commands simultaneously to route Flask through React.js, which meant it could not be deployed to Heroku. We had to create a new frontend quickly, this time in Streamlit, to deploy to Heroku. The new frontend is less polished and much less functional than originally planned, but it was a necessary compromise.

References

[1] Cleverley, J., Piper, J., & Jones, M. M. (2020). The role of chest radiography in confirming COVID-19 pneumonia. BMJ, m2426. https://doi.org/10.1136/bmj.m2426
