The COVID-19 pandemic is the defining global health crisis of our time and the greatest challenge we have faced since World War Two. Since its emergence last year, it has caused nearly 1.5 million deaths worldwide, and in countries like India, attempts to contain it have been largely ineffective.

Although the pandemic has wreaked havoc on every continent except Antarctica, developing countries like India have been hit especially hard. Healthcare and public health systems in these countries are compromised by a lack of the equipment required to care for COVID-19 patients.

As Indians, it pained us to see the state of the healthcare system under pandemic-induced strain. People are dying due to a lack of hospital beds and a dearth of medical expertise. Doctors (primarily pulmonologists) have been overstretched, leading to misdiagnoses that can, in the extreme, result in long-term disability or death. Pneumonia is the second most misdiagnosed condition leading to readmission after a previous hospitalization, second only to congestive heart failure. Diagnostic skills are fundamental to the practice of medicine, yet misdiagnosis causes death or disability in over 150,000 patients annually.

We all need to come together in this fight against COVID. Our team strives to use its love of technology to build a cheaper, more scalable, and faster approach to diagnostics.

What it does

Our project, Medicus, integrates AI and healthcare to analyze chest X-rays and identify pneumonic lungs. It brings together powerful deep learning algorithms and an appealing UI to deliver a highly accurate diagnosis, instantly and completely free of cost. We've used cutting-edge deep learning techniques to build a diagnostic tool with a recall of around 97%, which not only matches but surpasses the corresponding metrics of traditional human diagnosis. All the user needs to do is upload a JPEG image of their chest X-ray to get an instant diagnosis.

The benefits extend far beyond the accuracy metrics, though. An AI-powered diagnostic tool not only has the obvious advantage of being immune to stress, fatigue, and illness, but it is also superior when it comes to updates, cost-effectiveness, and scalability.

If the World Health Organisation identifies a new disease or technique, it is almost impossible to update every human doctor on the development. In contrast, even with a billion patients using an AI-powered diagnostic tool, you can update every instance within a split second. The model can also grow smarter and more accurate as the number of patients increases, since a deep learning model improves asymptotically with more data.

Artificial Intelligence is a technique our team is extremely enthusiastic about. Given the immense advances it can offer in healthcare, it was an obvious path for us to explore.

How we built it

We developed a model that detects and classifies pneumonia from frontal-view chest X-ray images with high validation accuracy. The algorithm begins by resizing the chest X-ray images to dimensions smaller than the original. A convolutional neural network (CNN) then extracts features from the images and classifies them. Thanks to the effectiveness of the trained CNN, our model's validation accuracy was significantly higher than that of other approaches. To confirm the model's performance and get robust results, we built several training sets using cross-validation on the training data, tuning hyperparameters and introducing different layers into the model architecture. We chose the most stable configuration by comparing the results of the different models, and finally retrained it on the full training dataset to get the best results.
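The resizing step above can be sketched as follows; the 150×150 target size and the `load_xray` helper name are illustrative assumptions, not our exact configuration.

```python
# Sketch of the preprocessing step: each chest X-ray is shrunk to a fixed
# input size and scaled to [0, 1] before it reaches the CNN.
# The 150x150 target size is an assumption for illustration.
from PIL import Image
import numpy as np

def load_xray(path, size=(150, 150)):
    """Load a chest X-ray, resize it, and normalize pixel values."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype="float32") / 255.0
```

Working on a fixed, smaller input size keeps the network's parameter count manageable and lets every uploaded image, whatever its original resolution, pass through the same pipeline.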

We built the model using five convolution blocks, each consisting of SeparableConv2D, BatchNormalization, and MaxPool2D layers.

BatchNormalization stabilizes the learning process of the convolution blocks, and MaxPool2D downsamples the input by taking the maximum value over each window along the feature axis. A fully connected (FC) layer aggregates the features extracted by the convolution layers and classifies the image into a label. We also used dropout in several layers to reduce the chance of overfitting.
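A minimal Keras sketch of this architecture follows. The filter counts, input size, dense width, and dropout rate are illustrative assumptions, not our exact configuration:

```python
# Sketch of the five-block CNN: SeparableConv2D + BatchNormalization +
# MaxPool2D per block, followed by a fully connected head with dropout.
# All sizes and filter counts are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(150, 150, 3)):
    model = models.Sequential([layers.Input(shape=input_shape)])
    for filters in (16, 32, 64, 128, 256):        # five convolution blocks
        model.add(layers.SeparableConv2D(filters, 3, padding="same",
                                         activation="relu"))
        model.add(layers.BatchNormalization())     # stabilizes learning
        model.add(layers.MaxPool2D())              # downsamples by max value
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))   # fully connected layer
    model.add(layers.Dropout(0.5))                    # reduces overfitting
    model.add(layers.Dense(1, activation="sigmoid"))  # pneumonia probability
    return model
```

SeparableConv2D splits a standard convolution into depthwise and pointwise steps, which cuts the parameter count substantially, one reason the trained model stays small enough to deploy on a constrained host.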

This will go a long way toward improving the health of at-risk children in resource-poor environments. With increased access to data and training of the model on radiological data from patients and non-patients in different parts of the world, significant improvements can be made.

Once the ML model was ready, we set up a Flask server to create an API endpoint for anyone to consume. The frontend of our app is written in React.js.
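The shape of that endpoint can be sketched as below. The `/predict` route, the `xray` form field, and the input size are assumptions for illustration, and the model call is stubbed so the sketch runs without a trained model file:

```python
# Minimal sketch of a Flask API endpoint serving the classifier.
# Route name, field name, and input size are assumed; the prediction
# is stubbed where the real code would call model.predict(batch).
import io
from flask import Flask, request, jsonify
from PIL import Image
import numpy as np

app = Flask(__name__)

def preprocess(file_bytes, size=(150, 150)):
    """Resize the upload server-side so users never have to."""
    img = Image.open(io.BytesIO(file_bytes)).convert("RGB").resize(size)
    return np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)

@app.route("/predict", methods=["POST"])
def predict():
    if "xray" not in request.files:
        return jsonify(error="no file uploaded"), 400
    batch = preprocess(request.files["xray"].read())
    prob = 0.5  # stub; the real endpoint uses float(model.predict(batch)[0][0])
    return jsonify(diagnosis="pneumonia" if prob > 0.5 else "normal",
                   probability=prob)
```

Because the endpoint is plain JSON over HTTP, the React.js frontend (or anyone else) can consume it with a single multipart POST.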

Challenges we ran into

Bringing two powerful tools such as deep learning and web development together, while incredibly appealing on paper, came with its own set of challenges.

To deploy the backend, we went down the traditional route of using Heroku. Our 'requirements.txt' file hints at the multitude of things our website deals with: it lists the heavy libraries the site owes its smooth functionality to. Each dependency improves the user experience, so users never have to worry about resizing images on their side or struggle to learn the interface. This, however, pushed us to come up with clever hacks to include everything without exceeding Heroku's 500 MB hard limit.

To achieve this, we trimmed our dependencies, which added a new optimization challenge on top of our existing goals of building the frontend, testing the model thoroughly, and making ends meet, all within a span of two days. The streamlining process involved going library by library to find the heaviest ones (for example, TensorFlow) and replacing them with lighter alternatives that still fit our hack's solution statement. In TensorFlow's case, we read that the GPU support the standard package ships with would be pointless on Heroku, which doesn't offer GPUs anyway, so there was no need to install that build. A little more research led us to the perfect replacement: tensorflow-cpu.
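The swap amounts to a one-line change in 'requirements.txt' (the version pin shown is illustrative, not our exact pin):

```text
# before: the standard package bundles GPU support we can't use on Heroku
# tensorflow==2.4.1      # hundreds of MB installed; blows the 500 MB slug limit

# after: CPU-only build with the same Python API
tensorflow-cpu==2.4.1    # substantially smaller; Heroku has no GPUs anyway
```

Since both packages expose the same `tensorflow` import, no application code had to change.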

Accomplishments that we're proud of

The deep learning classification model we developed performed a LOT better than any of us had imagined. We tore through research paper after research paper to put together an architecture that suited our needs, and we finally succeeded in doing so :)

At the time of Medicus' inception (i.e., two days ago), we had no idea how to make a Python-based deep learning model work with an asynchronous web framework. Our backend team consisted of Node.js developers.

We thought of using TensorFlow.js for model training and pipeline construction, but we hit a roadblock: even though we could train the model and perform other TensorFlow-dependent tasks with TensorFlow.js, many of the other libraries we needed weren't supported in the JavaScript environment. So we turned to the other natural option: coding the backend in a Python-based environment and the frontend in a JavaScript-based environment. None of us had ever developed a backend with a Python framework. We chose Flask for our needs and learned to write Flask code in a day. After a sleepless night, we finally managed to set up the backend API to use our ML model.

What we learned

Sure, we learned to work with a lot of new technologies like Flask and TensorFlow.js, and we learned about different convolutional network architectures and how they impact model performance. But we strongly believe that the most important thing we learned was that sometimes just giving something a shot, no matter how complex it seems at first, can be worth the effort. You can end up creating something novel. Something good for the community!

What's next for Medicus

Currently, due to the 500 MB hard limit of Heroku's free tier, we can't afford to save the uploaded X-rays. In the future, we plan to allocate funds toward a cloud storage bucket and store every upload our user base feeds in. This data can then be used to train and improve our model. We also plan to extend our models into diagnostic tools for melanoma detection, prostate cancer detection, and more.

As of now, we haven't even scratched the surface of what an AI-powered doctor is capable of. The benefits of integrating AI in healthcare are likely to be immense. AI doctors could provide far better and cheaper healthcare for billions of people, particularly for those who currently receive no healthcare at all. Thanks to learning algorithms and biometric sensors, a poor villager in an underdeveloped country might come to enjoy far better healthcare via her smartphone than the richest person in the world gets today from the most advanced urban hospital.

An argument can be raised that, in the long run, this path could lead to job displacement in the healthcare industry, but given the immense improvements AI can offer in the field, it would be madness to block innovation in this area.

After all, what we ultimately ought to protect is humans, not jobs.

Built With

flask, python, react.js, tensorflow
