Final Deliverables

Project Check-Ins

  • Checkin-1 (Proposal): link
  • Checkin-2 (Reflection): link
  • The initial and final check-ins were conducted over Zoom (sign-up only).

Note: Brown accounts should have access to all links. If you require access and don't have it, please email: mrod@brown.edu, adam_berkley@brown.edu, christopher_cataldo@brown.edu, aaron_j_wang@brown.edu and we'll be happy to take care of it.

Thank you for your time and interest in our project!

Reflection (11/23): Deep Codi (Coronavirus Diagnostic)

Introduction:

The COVID-19 pandemic is severely impacting the health and wellbeing of countless people worldwide. Early detection of infected patients is a crucial first step in controlling the disease. Prior literature shows that COVID-19 causes chest abnormalities that are noticeable in chest x-rays, so early detection can be achieved through radiography.

Deep Codi learns these abnormalities and is able to accurately predict whether a patient is infected with coronavirus based on the patient’s chest x-ray. Codi is an effective diagnosis tool that has immediate downstream effects in clinical settings and in the field of radiology.

Our dataset consists of 5,000 chest x-rays of healthy and infected individuals. To combat class imbalance, we will use data augmentation techniques to increase the number of samples from infected patients. Dataset: https://github.com/shervinmin/DeepCovid/tree/master/data
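To illustrate the augmentation-based oversampling described above, here is a minimal sketch assuming grayscale x-rays stored as NumPy arrays scaled to [0, 1]. The function names and the specific transforms (horizontal flip, brightness jitter) are illustrative choices on our part, not the team's confirmed pipeline.

```python
import numpy as np

def augment(img, rng):
    """Return one randomly augmented copy of a grayscale image in [0, 1]."""
    out = img.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)  # random horizontal flip
    # small brightness jitter, clipped back into [0, 1]
    out = np.clip(out * rng.uniform(0.9, 1.1), 0.0, 1.0)
    return out

def oversample_minority(minority_images, target_count, seed=0):
    """Grow the minority class to target_count samples via random augmentation."""
    rng = np.random.default_rng(seed)
    augmented = list(minority_images)
    while len(augmented) < target_count:
        src = minority_images[rng.integers(len(minority_images))]
        augmented.append(augment(src, rng))
    return augmented
```

In practice the augmented copies would be generated on the fly during training rather than materialized up front, but the balancing logic is the same.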

Our Progress:

We have:

  • Implemented simple preprocessing
  • Implemented our model architecture
  • Implemented a Dice scoring function
  • Met to discuss this week’s goals and potential issues discovered
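The Dice scoring function mentioned above can be sketched as a generic implementation of the standard Dice coefficient over binary masks or labels (the team's actual function may differ in its inputs and smoothing):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient: 2*|A ∩ B| / (|A| + |B|), with eps for empty inputs."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

The epsilon keeps the score defined (and equal to 1) when both masks are empty, a common convention in segmentation and classification scoring.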

Challenges:

We have discovered that our dataset contains messy data: a significant number of x-rays contain marker arrows (presumably drawn by radiologists and doctors), circles, and even post-it notes. We have discussed this as a team and decided to extend the preprocessing period to tackle these issues over the coming week, before moving on to additional model architectures and model tuning.

Insights:

We do not yet have concrete results to share. However, we feel we are on track to meet the requirements on time.

Plans:

We are on track; however, we have chosen to extend the time originally allotted to preprocessing in order to address the challenges mentioned above. These include, but are not limited to, removing spurious data (potentially by training a small model to identify and automatically remove it) and then applying appropriate data augmentation techniques to the remaining clean data, so that we have enough samples for our model to train on. We plan to finish this by the Monday after Thanksgiving, and to continue working on the model in parallel while prioritizing the data pipeline.
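The "small model to identify spurious data" idea could be prototyped, for example, as a simple logistic-regression classifier over flattened image features. This is a hypothetical stand-in for whatever model the team ultimately trains, using only NumPy:

```python
import numpy as np

def train_artifact_detector(X, y, lr=0.1, epochs=200):
    """Logistic regression via gradient descent.
    X: (n, d) float features (e.g. flattened images), y: (n,) labels,
    where 1 = image contains an artifact (arrow, circle, post-it)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted artifact probability
        grad_w = X.T @ (p - y) / len(y)
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def flag_spurious(X, w, b, threshold=0.5):
    """Return a boolean mask of images predicted to contain artifacts."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return p >= threshold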
