What if you could understand and visualize the journey of an image through a deep learning model? Having worked in and advocated for machine learning for quite a long time, I decided to build a tool to help me visualize the journey of an image through a complex neural network architecture. Visualize.AI makes this possible.

The idea started when I was working on a project and built an ML model that seemed to perform quite well. When I tested it on some of my own images, it failed miserably, and I could not understand why. This pushed me to take the initiative and build a tool that makes understanding such failures a lot easier.

What it does🔎

My project, Visualize.AI, allows people to understand and visualize the journey of an image through a convolutional neural network. I built an API that runs the model on a given image and implements the Grad-CAM paper to produce the visualizations. On top of this API, I use the Postman Visualizer to show the original image with the features the model has seen overlapped on it. Here is an example of the visualization for a sample image, all made possible with the Postman Visualizer:

How we built it🔨

I like to split this project into two parts: building the API and building the visualizations.

Building the API

This section covers the API I built specifically for this project: it runs an ML model on an image and implements Grad-CAM to help visualize the journey of the image through the model. It turned out there was no existing API that did this, which motivated me to build one from scratch. I wrote the API in Python, hosted it on Google Cloud Functions, and made its source code public here.

The API works as follows:

  • Identify the query parameters passed with the request
  • Fetch the image from the query parameters, resize it to (224, 224), and convert it to an array
  • Load the MobileNet model with weights pre-trained on ImageNet
  • Implement Grad-CAM in TensorFlow to visualize the convolution layers
  • Place the generated Grad-CAM output on top of the original image using OpenCV
  • Save this updated image to a Google Cloud bucket under the file name passed in the query params
  • Return the output image in JSON format
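The Grad-CAM step above can be sketched roughly as follows. This is a minimal sketch in TensorFlow, not the project's actual source: the function name and structure are my own illustration.

```python
import tensorflow as tf

def grad_cam(model, img_array, layer_name):
    """Grad-CAM heatmap for the model's top predicted class.

    img_array: a batch of shape (1, H, W, 3), already preprocessed.
    layer_name: name of the convolutional layer to visualize.
    """
    # Model mapping the input to (conv feature maps, predictions)
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_array)
        top_class = tf.argmax(preds[0])
        class_score = preds[:, top_class]
    # Gradient of the top-class score w.r.t. the conv feature maps
    grads = tape.gradient(class_score, conv_out)
    # Global-average-pool the gradients to get one weight per channel
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps, then ReLU and normalize to [0, 1]
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

With MobileNet, a call might look like `grad_cam(tf.keras.applications.MobileNet(weights="imagenet"), batch, "conv_pw_13_relu")`, where `batch` is the image resized to (224, 224), converted to an array, run through `preprocess_input`, and expanded to a batch of one. The layer name is an assumption based on the Keras MobileNet implementation.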


To make sense of the API's output and create visual explanations of the ML model, I then use the Postman Visualizer. With it, I show the original image alongside a second image with the features learned by the model overlapped on it.

How to test this out🧪

You can easily test out my API + Postman Visualizer code, especially after an automation update (many thanks to Claire Froelich for pointing this out). The API has two parameters: image and destination. The image parameter is the URL of the image you want to run through the model, and destination serves as a unique ID, which is now automated by the API.
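As a sketch, a request with the image parameter can be assembled like this; the Cloud Functions endpoint below is a placeholder, not the real URL:

```python
from urllib.parse import urlencode

# Placeholder endpoint -- substitute the function's real trigger URL
BASE_URL = "https://REGION-PROJECT.cloudfunctions.net/visualize"

def build_request_url(image_url):
    """Build the API call; destination is now filled in by the API itself."""
    return f"{BASE_URL}?{urlencode({'image': image_url})}"
```

Calling `build_request_url("https://example.com/dog.jpg")` produces the GET URL that Postman sends when you fill in the Image URL variable.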

To test this on your own image, all you need to do is update the Image URL collection-level variable in Postman with a direct link to your image, and voilà! Note that the Visualizer code also uses Postman variables, so be sure to edit the variable rather than the request directly.

Challenges we ran into⚠️

I originally planned to find an existing API that visualizes the inner workings of an ML model and then use the Postman Visualizer to make sense of its output. However, it turned out that no API did anything even similar to this. I did find a demo of Grad-CAM here, but not only is it quite time-consuming and heavy (even though it runs on a GPU), it also supports just 3 tasks. I solved this by building a new API from scratch and using that instead.

Accomplishments that I'm proud of🥇

I am quite proud of optimizing the ML code to run on very low compute and of building the API to be scalable, hosted on a serverless function. I also explored and used the Postman Visualizer for the first time in this hackathon, and I am quite happy with the result.

What I learned🧠

  • Managing Serverless Functions
  • Effectively designing REST APIs
  • Using the Postman Visualizer, which I learned for the first time at this hackathon

What's next for Visualize.AI💭

Here are some ideas I have for what comes next for this project:

  • I plan to animate the visualizations created with the Postman Visualizer, showing the data flowing through every neuron and what happens at each step
  • At the moment the API only supports the MobileNet architecture, one of the most popular models, but I hope to support other models as well. I am currently working on out-of-the-box support for two other quite popular architectures: VGG16 and EfficientNet

Project update

Passing the destination parameter when calling the API is no longer needed; it is now automated (many thanks to Claire Froelich for pointing this out). Follow the instructions in the "How to test this out" section of the description.

Previously, with no automation in place for the destination parameter, and since visualization metadata is stored on Google Cloud Storage, two requests using the same destination (which also serves as a unique ID) would get cached results, making it hard for people to use their own images.
