We decided to explore problem areas within diagnostic imaging. The daily error rate in CT scan interpretation is currently 3-5%, and there are many communication gaps between radiology technicians and doctors: it can take days, even weeks, for surgeons to reach radiologists. We want to expedite this process and improve the accuracy of patient diagnosis.
What it does
ScanShare assembles the cross-sectional slices from a CT scan into a single 3D model. Doctors can add notes directly onto the model, and a reply function on each comment lets them collaborate on a single model. The model is also colorized to improve readability.
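One way the note-and-reply feature can be modeled is as an annotation pinned to a coordinate on the mesh, carrying a threaded comment. This is an illustrative sketch, not our actual schema; the names `Annotation` and `Comment` are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Comment:
    """A note from a doctor, with nested replies for collaboration."""
    author: str
    text: str
    replies: List["Comment"] = field(default_factory=list)

@dataclass
class Annotation:
    """A comment thread anchored to an (x, y, z) point on the 3D model."""
    position: Tuple[float, float, float]
    thread: Comment

# A radiologist pins a note to the model; a surgeon replies on the same thread.
ann = Annotation(position=(12.5, 40.0, 7.2),
                 thread=Comment("Dr. A", "Possible hairline fracture here"))
ann.thread.replies.append(Comment("Dr. B", "Agreed, compare with prior scan"))
```

Storing annotations keyed by mesh coordinates keeps the discussion attached to the exact anatomy it concerns rather than to the scan as a whole.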
How we built it
We created a web app to upload a scan in DICOM format (2D slices), automatically convert it to an STL mesh (3D rendering), and push the mesh model to the cloud, hosted on MongoDB. We then import the model into our mobile app, built on the Torch AR engine, where the user can view the model on a mobile device, create annotations on parts of the structure, and share insights with fellow medical professionals.
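The heart of the DICOM-to-STL step is stacking the 2D cross-sections into a 3D volume and segmenting the tissue of interest. The sketch below shows that core idea with a synthetic volume standing in for real slice data; in practice the pixel arrays would come from a DICOM reader such as pydicom, and a surface extractor (e.g., marching cubes) would turn the binary mask into the STL mesh. The threshold value is an assumption based on typical Hounsfield ranges for bone.

```python
import numpy as np

# Synthetic stand-in for a stack of DICOM slices; with pydicom, each 2D
# array would be one slice's pixel data converted to Hounsfield units.
rng = np.random.default_rng(0)
slices = [rng.integers(-1000, 2000, size=(64, 64)) for _ in range(40)]

# Stack the 2D cross-sections into a single 3D volume -- this is the
# essence of the 2D -> 3D conversion.
volume = np.stack(slices, axis=0)  # shape: (num_slices, height, width)

# Segment bone by thresholding Hounsfield units (~300 HU and above).
# A mesh extractor like marching cubes would then convert this binary
# mask into the triangle surface that gets exported as STL.
BONE_THRESHOLD_HU = 300
bone_mask = volume > BONE_THRESHOLD_HU
```

Once the mask is extracted, the resulting mesh file is what we push to MongoDB and later pull into the AR viewer.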
Challenges we ran into
Finding suitable DICOM files on the internet was particularly difficult due to privacy concerns in healthcare, and many of the files we did find were incomplete or of too low quality to render well.
Accomplishments that we’re proud of
We are particularly proud of pulling together a broad body of research to prove that radiologists face this problem. It is easy to say, "this is a problem I will solve", but it is difficult to quantify and prove that it actually exists. We have very clearly shown use cases for the product. In addition, taking DICOM files, which are 2D, and turning them into a 3D model is far harder than it sounds.
What we learned
By doing a deep dive into the state of the radiology service line and exploring where and how this product fits in, we learned a great deal about the different parts of the industry.
What’s next for ScanShare
We want to further expand the program with knowledge trees to help doctors. Natural language processing could derive diagnostic recommendations from the doctor's notes, and algorithms could predict the diagnostic group of a scanned fracture or anomaly by comparing the scan against a healthy reference model. We would also like to expand the collaborative aspects of the platform to include systems where experts can assist radiologists directly.
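The healthy-model comparison we envision could work as a voxel-wise deviation check: register the patient volume to a healthy reference on the same grid, then flag regions that diverge beyond a tolerance for downstream classification. This is a minimal sketch under those assumptions, with synthetic volumes in place of real registered scans.

```python
import numpy as np

# Hypothetical voxel volumes on a shared grid: a healthy reference
# model and the scan under analysis, assumed already registered.
healthy = np.zeros((32, 32, 32))
patient = np.zeros((32, 32, 32))
patient[10:12, 10:12, 10:12] = 1.0  # simulated anomaly (e.g., fracture gap)

# Voxel-wise deviation from the healthy reference; voxels above the
# tolerance are candidates to feed the diagnostic-group predictor.
TOLERANCE = 0.5
deviation = np.abs(patient - healthy)
flagged = np.argwhere(deviation > TOLERANCE)  # coordinates of deviating voxels
```

In a real pipeline the registration step and the choice of tolerance would dominate the difficulty; the comparison itself is the easy part.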