RadLens2: A Google Lens for Radiology Powered by TensorFlow 2.0

Inspiration

I'm a doctor/radiologist by profession. I've always wanted to learn more about artificial intelligence and machine learning (AI/ML) and how they can be applied to my field of practice. However, my lack of a dedicated computer science or data science background always made me feel that this was beyond my grasp. Fortunately, I found TensorFlow. TensorFlow has allowed me to explore and experiment with deep learning in ways I never thought possible. The higher-level API (Keras) has made it easier to use and design neural-net architectures, while TensorFlow.js has allowed me to easily deploy models to the cloud for real-world tests and usage.

By building this web app, I hope to empower my fellow radiologists to participate more actively in the development of AI/ML projects and applications for radiology. In my country (the Philippines), tools to locally develop AI/ML models in radiology are limited at best, unlike in more advanced countries where numerous platforms are already in place to develop, deploy, and scale machine learning models. I hope this project can serve as a springboard for regular radiologists like myself (without access to more expensive and robust proprietary infrastructure) to learn about and explore the brave new world of AI/ML with TensorFlow. Let's all help #democratizeAI.

I hope to prove to my fellow rads that with amazing libraries like TensorFlow, working with, developing, and deploying AI/ML models is possible, even for people like us with no formal computer science or data science background. Hopefully, the GitHub repo for this project will serve as a starting point for more radiologists who are new to machine learning to work with, design, and deploy even more machine learning models for the benefit of our fellow doctors and, more importantly, our patients.

Many of today's AI/ML medical products are tied to expensive hardware attached to huge server-centric installations, with AI/ML models walled off from clinical testing by neutral parties. This means that private data/images must either be uploaded to the cloud (raising privacy issues) or the server must be installed wherever the raw data is (adding to cost).

By using TensorFlow.js, I hope to have a platform to help train and distribute AI/ML radiology-related models that are

  1. end-user machine agnostic, with no need to upload sensitive private patient image data, and
  2. fully deployable to radiologists/doctors without the need to purchase expensive proprietary equipment, since all they need is a browser.

What it does

RadLens2 is a version of RadLens that has been fully migrated and updated to TensorFlow 2.0 (Python) and TensorFlow.js 1.0.0. It is a web app that tries to classify images as either a Monteggia or a Galeazzi fracture (fractures of the forearm). The AI/ML model runs right in the browser on the local machine (phones running Android 8.0, iOS devices running Safari 11, and the latest Chrome, Firefox, and Opera on laptops and desktops). The app uses the device's camera to scan images. When it makes a prediction, it writes the result as a hyperlink at the bottom of the main app screen. Clicking the link takes the user to a Google Image Search for either Monteggia or Galeazzi fractures. The idea is that radiologists can supervise the model and gather additional information to judge for themselves whether the machine is right or wrong (an example of supervised use of AI/ML, with the AI providing decision support for the radiologist).
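
The prediction-to-hyperlink step described above can be sketched in plain JavaScript. This is an illustrative sketch only: the class list and function name are hypothetical, not taken from the RadLens2 repo (`tbm=isch` is the Google query parameter for image search).

```javascript
// Hypothetical helper: turn the model's top class index into a Google Image
// Search hyperlink so the radiologist can review reference images and judge
// whether the prediction looks right.
const CLASSES = ['Monteggia fracture', 'Galeazzi fracture'];

function predictionToLink(classIndex) {
  const label = CLASSES[classIndex];
  const query = encodeURIComponent(label);           // e.g. "Monteggia%20fracture"
  return `<a href="https://www.google.com/search?tbm=isch&q=${query}">${label}</a>`;
}
```

The returned string can then be injected into the element at the bottom of the main app screen whenever a new prediction is made.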

How I built it

I used

  • the nightly build of TensorFlow 2.0,
  • tensorflowjs_converter, and
  • TensorFlow.js 1.0.0.

The model is a MobileNetV2 architecture pre-trained on ImageNet at 224x224 resolution, with transfer learning applied on images of Monteggia and Galeazzi fractures.
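
The in-browser inference path can be sketched as follows. This is a sketch under assumptions, not code from the RadLens2 repo: `tf` is assumed to be the TensorFlow.js 1.0.0 global from the script tag, and the helper names are hypothetical. The [-1, 1] input scaling is the standard MobileNetV2 preprocessing.

```javascript
const INPUT_SIZE = 224; // MobileNetV2 input resolution used for training

// MobileNetV2 expects 8-bit pixel values (0..255) rescaled to [-1, 1].
function scalePixel(value) {
  return value / 127.5 - 1;
}

// Grab a camera frame, preprocess it, and run the classifier.
async function classifyFrame(videoElement, model) {
  const scores = tf.tidy(() => {
    const frame = tf.browser.fromPixels(videoElement) // renamed from tf.fromPixels in 1.0.0
      .resizeBilinear([INPUT_SIZE, INPUT_SIZE])
      .toFloat()
      .div(127.5).sub(1)                              // same rescaling as scalePixel
      .expandDims(0);                                 // add batch dimension
    return model.predict(frame);
  });
  return scores.data();                               // Promise resolving to class probabilities
}
```

In the app, the model would be fetched once with `tf.loadLayersModel('model/model.json')` (the TensorFlow.js 1.0.0 name for the older `tf.loadModel`), then `classifyFrame` called on a timer against the live video element.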

Challenges I ran into

Different hardware configurations of the end user

  • Although I reached acceptable accuracy values above 0.7 when training the model in Python (I deliberately avoided chasing an extremely high value to reduce the risk of overfitting) and further calibrated the model in JavaScript, I realized that the AI/ML model is quite sensitive to differences in hardware. Since I'm deploying to the cloud, end-user hardware can affect parameters like the camera's image resolution and the phone's processor speed (which in turn affect the raw image frames the app samples for inference). I calibrated the app on my phone, which uses a 13 MP camera sensor and a Qualcomm SDM636 processor. Other hardware configurations may affect the sensitivity and specificity of the app. Possible future solutions are noted below (see the What's next section).

Using the new tf.keras API in TensorFlow 2.0

  • tf.keras.initializers in TF 1.13 is now tf.initializers in TF 2.0 (some Keras functions were moved under the main tf namespace, so I had to search for them outside the tf.keras API itself).

Using the new API and namespace changes in TensorFlow.js 1.0.0

  • tf.fromPixels is now tf.browser.fromPixels (a namespace change), so existing calls like `tf.fromPixels(video)` must be updated to `tf.browser.fromPixels(video)`.

Limited nightly build support for Windows

  • I had difficulty installing the nightly version on a Windows development machine, since Windows has a path length limit of 260 characters. I believe you can raise the limit on Windows 10, but I opted to switch to a Linux distro (Ubuntu 18.04 LTS), which has no such limit.

Accomplishments that I'm proud of

  • The project was initially based on the Emoji Scavenger Hunt, which was written in TypeScript and used yarn. I initially had problems with the TypeScript and yarn setup, as well as with packaging the app with radiology-image-based models and training. The current web app is modified and rewritten in raw/vanilla JavaScript (thank you to the developers of the Emoji Scavenger Hunt for all their patience in answering my questions).

What I learned

  • AI/ML is a wonderful technology and holds many opportunities for the future of radiology. However, it also has many limitations, and misconceptions about it should not be used to sell products aimed merely at cutting costs.
  • TensorFlow is a great library for AI/ML experimentation that helps one take an app from the R&D stage all the way to deployment, especially with TensorFlow.js.
  • TensorFlow 2.0 is truly faster than the original. It converted my h5 model into only 3 shards, where the old converter would produce over 10 or even 20. The smaller model footprint in the browser means much faster initial loading times: the older app took 1-2 minutes to load, while the new app needs around 10-30 seconds (sometimes under 10 s on fast connections). Subsequent loads are very fast since the model is cached.
  • I'm still hoping for more radiology-specific augmentation parameters for the Keras data generators. Radiologic data is usually limited, so means of image augmentation beyond the usual rotation, zoom, and shear would be truly helpful for radiology-related image handling and pre-processing.

What's next for RadLens2

  • Possible solutions for end-user hardware differences affecting the model's performance and inference:
    1. Set up the JavaScript to detect whether the end-user hardware is a mobile phone, tablet, or laptop (including the camera resolution). This, however, will require a lot of preset calibration settings per hardware type.
    2. The current model only does image classification. Integrating object detection into the model may significantly improve the sensitivity and specificity of the app. I was thinking of doing just that with TensorFlow 1.0 but realized it became computationally expensive to use generative adversarial networks (GANs) in the browser, particularly on mobile phones, with frame rates dropping as low as 1 fps in some articles I read. TensorFlow 2.0 and TensorFlow.js 1.0, however, are significantly faster, so I will look into in-browser object detection for radiologic images again.
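
Option 1 above could start with a coarse device-class check that selects a calibration preset per hardware type. This is an illustrative assumption, not code from the RadLens2 repo: the regexes are a rough user-agent heuristic and the preset values are placeholders.

```javascript
// Hypothetical: classify the end-user device from its user-agent string.
function deviceClass(userAgent) {
  if (/iPhone|iPod/.test(userAgent)) return 'phone';
  if (/Android/.test(userAgent)) {
    // Android tablets usually omit the "Mobile" token.
    return /Mobile/.test(userAgent) ? 'phone' : 'tablet';
  }
  if (/iPad/.test(userAgent)) return 'tablet';
  return 'desktop';
}

// Hypothetical per-class calibration presets (values are placeholders):
// input resolution fed to the model and a throttle between inference frames
// so slower processors aren't overwhelmed.
const PRESETS = {
  phone:   { inputSize: 224, msBetweenFrames: 250 },
  tablet:  { inputSize: 224, msBetweenFrames: 150 },
  desktop: { inputSize: 224, msBetweenFrames: 100 },
};
```

In a real deployment the camera's actual capture resolution could additionally be read from the media stream's track settings to refine the preset choice.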

SOCIAL IMPACT

  • Hopefully, I can help jumpstart the development of more AI/ML radiology-related research in my native country, or anywhere AI/ML is still a growing field and resources are limited. More advanced countries already have a robust selection of AI/ML start-ups and research facilities; I hope we get to start our own related to radiology.
  • Hopefully, this can also bridge the gap so that doctors/radiologists in my country can work together with local AI/ML developers to create a platform for AI/ML app development and research in radiology/medical imaging.
  • Hopefully, others will use the project to train their own AI/ML models and deploy them to the cloud.
