Inspiration

The inspiration behind the project was the realization that a great deal of money and time is wasted in the current process for analyzing medical imagery. In a traditional diagnosis, a concern is verified through imaging performed by an imaging technician. Technicians are not allowed to make any interpretations, so they immediately forward the images to a physician. The physician then spends considerable time interpreting the scans and, if something concerning is found, follows up with the patient, sometimes forwarding the case to a specialist, which requires additional appointment coordination. If nothing is found, that time is wasted, money is lost, and people who could have received more effective treatment with those freed-up resources do not benefit.

What it does

The project uses machine learning to predict whether medical images (in this instance, lung X-rays) exhibit signs of abnormalities. It then presents this prediction to the user, such as the technician, as a preliminary diagnosis. To incentivize use of the platform, it offers a client/physician information-forwarding feature to further speed up and streamline the follow-up process when a concerning diagnosis is predicted. Specifically, the user's information is forwarded to the appropriate specialist, with their IP address as an ID, and written to a database so that the responding physician can quickly respond to legitimate cases in bulk.
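As a rough illustration of that forwarding step, here is a minimal sketch that records a predicted diagnosis and the client's IP (used as a simple case ID) in a SQLite database for a physician to query later. The table name, fields, and database file are hypothetical, not the project's actual schema.

```python
# Hypothetical sketch of the case-forwarding step: store the prediction and the
# client's IP so a responding physician can review pending cases in bulk.
import sqlite3
from datetime import datetime, timezone

def forward_case(db_path: str, client_ip: str, diagnosis: str, confidence: float) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS cases (
               client_ip TEXT,
               diagnosis TEXT,
               confidence REAL,
               submitted_at TEXT
           )"""
    )
    conn.execute(
        "INSERT INTO cases VALUES (?, ?, ?, ?)",
        (client_ip, diagnosis, confidence, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()
    conn.close()

# Example: forward_case("cases.db", "203.0.113.7", "abnormal", 0.97)
```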

How I built it

I wanted to get my feet wet in ML this hackathon. After learning that training an entire neural network from scratch would take hundreds of hours, and would most likely be unnecessary (it would mean relearning the fundamentals of computer vision, such as edge detection, color spaces, etc.), I decided to retrain the top layers of the Inception v3 TensorFlow model to classify abnormalities in chest imaging. I used a categorized (but otherwise unannotated) chest X-ray dataset of around 5-6k images.
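A minimal transfer-learning sketch in tf.keras, roughly analogous to retraining only the top layers of Inception v3; the directory layout, layer sizes, and hyperparameters below are placeholders, not the values used in the project.

```python
import tensorflow as tf

IMG_SIZE = (299, 299)  # Inception v3's expected input resolution

# Load the pretrained convolutional base and freeze it, so only the new top layers train.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. abnormal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Assumed layout: chest_xrays/{normal,abnormal}/*.jpeg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xrays", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="training", seed=42, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xrays", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="validation", seed=42, label_mode="binary")

# Inception v3 expects inputs scaled to [-1, 1].
prep = lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y)
model.fit(train_ds.map(prep), validation_data=val_ds.map(prep), epochs=5)
```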

In terms of actual software, I designed the GUI in Pygame and Tkinter, used the socket module for network communication, and used TensorFlow for training.
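For the network side, an illustrative sketch of socket-based forwarding, assuming a plain TCP connection to a physician-side server; the host, port, and message format are placeholders rather than the project's actual protocol.

```python
import json
import socket

def send_result(host: str, port: int, client_ip: str, diagnosis: str) -> None:
    # Serialize the case as JSON and send it over a short-lived TCP connection.
    payload = json.dumps({"client_ip": client_ip, "diagnosis": diagnosis}).encode("utf-8")
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

# Example: send_result("physician-server.example", 5000, "203.0.113.7", "abnormal")
```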

Challenges I ran into

Specific machine learning approaches and underfitting/overfitting were confusing at first. Understanding why loss functions behaved the way they did, and how to avoid flawed models, was originally a challenge. As a result, I had to retrain the model several times.
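One common guard against overfitting is to stop training once validation loss stops improving; a small sketch with tf.keras callbacks (the project may simply have retrained with adjusted hyperparameters instead).

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch validation loss, not training loss
    patience=3,                  # tolerate 3 stagnant epochs before stopping
    restore_best_weights=True,   # roll back to the best-performing epoch
)

# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
```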

Accomplishments that I'm proud of

Validation accuracy of >95%! Go TensorFlow!!!!

What I learned

Learned the fundamentals of TensorFlow. Overall the docs were pretty confusing, but after 24 hours of trying to get this to work, I'm beginning to grasp their approach and how the high-level APIs fit together.

Everything about properly training on data. I researched several alternatives, including supervised and unsupervised training approaches.

What's next for PneuMo.tech

Definitely identifying new trends in medical images and training deeper networks on those variables (or rather, classifications; we don't know exactly which features TensorFlow trains itself on). Specifically, in this case the dataset can be further broken down into bacterial and viral abnormalities, which is particularly useful because the early treatments for the two are very different.
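If the labels were split into normal, bacterial, and viral, the binary head sketched above would become a three-way softmax; a sketch, assuming the same frozen Inception v3 base and hypothetical layer sizes.

```python
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # reuse the pretrained features, train only the new head

multiclass_model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # normal, bacterial, viral
])
multiclass_model.compile(optimizer="adam",
                         loss="sparse_categorical_crossentropy",
                         metrics=["accuracy"])
```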

Built With

Python, TensorFlow, Tkinter, Pygame