Inspiration

Four strangers who became teammates on the day of the hackathon struck up a conversation to get to know each other. Gradually the conversation turned personal: Katie shared her experience of having pneumonia as a child. Soon, we discovered that another teammate's (Khushi's) uncle had been diagnosed with pneumonia too late. He lived in a rural town where healthcare wasn't accessible enough, so he postponed doing anything about his symptoms. We started wondering whether anything could be done to make the diagnosis process easier, and an ML-based diagnostic tool became our answer.

What it does

LungPlus is a portal for radiologists to receive a quick pneumonia diagnosis on any chest X-ray they scan into the algorithm. The application employs a computer-aided diagnosis system that further classifies pneumonia-positive scans by severity. The second feature of our application is the ability to automatically generate prescriptions for the particular patient (if the radiologist chooses to enter the patient’s biodata). An important aspect of the app is a ‘confidence rating’: a score, provided after every run, of how confident the algorithm is in its diagnosis. This lets the user judge whether the X-ray needs to be re-diagnosed, further increasing the reliability of the system.
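The write-up doesn't specify how the confidence rating is computed; a common choice, sketched below, is to take the maximum softmax probability of the classifier's output as the confidence. The class labels and threshold here are illustrative assumptions, not the actual LungPlus implementation.

```python
import numpy as np

def confidence_rating(logits):
    """Turn raw classifier logits into a diagnosis plus a confidence rating.

    Assumption: confidence = the maximum softmax probability, i.e. the
    model's estimated probability for the class it chose.
    """
    logits = np.asarray(logits, dtype=float)
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    probs = exp / exp.sum()
    label = int(np.argmax(probs))
    return label, float(probs[label])

# Example with hypothetical logits; classes: 0 = normal, 1 = pneumonia
label, conf = confidence_rating([0.2, 3.1])
```

A low confidence value would prompt the radiologist to re-run or manually review the scan.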

How we built it

Once we came up with the idea to use AI for disease detection, we set about figuring out which model to choose. We set aside a long block of time for research and thoroughly evaluated prior work in the field. After making a pros-and-cons list for each model, we decided transfer learning was our best bet and found datasets to work with. We then trained four different models using transfer learning, chose the best two (ResNet-18 and VGG-16), and combined them using a weighted-average method. The ensemble was evaluated using precision and F1 score.
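The weighted-average step can be sketched as below. The write-up doesn't give the actual weights or scores, so the F1 values here are placeholders; the idea, under that assumption, is to weight each model's predicted class probabilities by its validation F1 score and average.

```python
import numpy as np

# Hypothetical validation F1 scores used to derive the ensemble weights
# (the actual numbers from our evaluation are not reproduced here).
f1_resnet18, f1_vgg16 = 0.92, 0.88
w = np.array([f1_resnet18, f1_vgg16])
w = w / w.sum()   # normalise so the weights sum to 1

def weighted_average_ensemble(p_resnet, p_vgg):
    """Combine two models' per-class probabilities by weighted average."""
    p_resnet, p_vgg = np.asarray(p_resnet), np.asarray(p_vgg)
    return w[0] * p_resnet + w[1] * p_vgg

# Example: probabilities over (normal, pneumonia) from each model
combined = weighted_average_ensemble([0.30, 0.70], [0.45, 0.55])
pred = int(np.argmax(combined))   # final diagnosis: 1 = pneumonia
```

Because the weights sum to 1, the combined output is still a valid probability distribution, so it can feed directly into the confidence rating.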

Challenges we ran into

One challenge we ran into was collaboration. At noon on Saturday, when we first discussed our different contributions to the code, we realised we all had very different styles and approaches to the transfer learning models, since we'd learnt them in different ways. When we started combining them, we kept running into errors and spent almost two hours trying to debug them. That's when we made the collective decision to scrap the prior code and work together on a shared framework we would all follow going forward. We also decided to meet more often: working in the same room and checking in every 45 minutes to make sure we didn't lose that much time again.

Accomplishments that we're proud of

The last-minute achievement of the auto-generated prescription! We came up with that idea at 12am and got very excited about implementing it. Teamwork hit its stride as we combined everything we'd learned about each other, quickly assigned tasks, and fully implemented the idea within an hour.

What we learned

Being absolute beginners, most of us wrote ML code for the first time! We picked up a variety of technical concepts, ranging from transfer learning to pushing code to GitHub. The most important skills we learnt were recovering from roadblocks and optimisation. When we realised our individual code wasn't working, we nearly gave up, but we decided to push forward and give it our all, and we kept constantly optimising our code along the way.

What's next for Lung+

A key feature we didn’t have time to incorporate is the use of NLP to auto-generate diagnostic reports for X-rays. This would almost entirely remove the radiologist from the routine workflow, giving them more time to focus on highly specialised cases and training. We’d also like to apply similar models to other diseases: chest X-ray scans can also indicate lung cancer and lung conditions such as emphysema or cystic fibrosis.
