Chemotherapy treatments aim to kill cancer cells in the human body but sometimes also kill healthy cells such as neutrophils, a type of white blood cell (WBC). The result is neutropenia, in which the reduced neutrophil count suppresses the individual's immune system, making them highly vulnerable to infections. Regular monitoring of the neutrophil count through hospital tests is expensive, and many patients may not be able to afford it. There is therefore a need for a solution that enables easy, at-home monitoring of the blood neutrophil count to avoid any delay in detecting neutropenia, while also allowing physicians to monitor patients remotely. This project aims at building a low-cost, simple, portable, and smart device that patients can use to keep a regular check on their blood neutrophil count at home.


The proposed device works as follows (Figure 1):

1) The user uses a commercially available lancet to draw a few drops of blood and adds them to the disposable cartridge. The cartridge is then inserted into our device.

2) Once the cartridge is properly slotted and aligned inside the device, a programmable syringe pump draws the blood further into the chip, towards a region where antibodies are present. A fixed amount of dye is then injected into the region of interest.

3) The end of the fluid pull triggers an LED that illuminates the cartridge from underneath, making the cells fluoresce.

4) A camera then captures an image of the region of interest, and image processing determines the intensity of the fluorescent cells.

5) The resulting value is uploaded to Google Cloud using the MQTT protocol for remote monitoring. The physician can then analyze variations in the patient's daily neutrophil count by looking at the trend graph.


1. Processing the captured images (Baseline 1):

The chip is illuminated by an LED of wavelength 455 nm. The illumination excites the dye, making the neutrophil cells fluoresce. The Pi Camera attached to the Raspberry Pi then captures an image of the illuminated sample for further processing. Processing starts by converting the image to the Hue-Saturation-Value (HSV) color space and filtering the pixels that lie within a given HSV range. The filtered set of pixels is used as a mask applied to the original image to isolate the information we actually need (Figure 2). The masked pixels are then converted into the HSL color space, and the luminance values are averaged across the image. This average value is later mapped to a neutrophil count and saved in a text file along with a date and time stamp.
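The mask-and-average step can be sketched as follows. This is a minimal, standard-library illustration of the idea (the actual device uses OpenCV on the Pi); the HSV window below is a hypothetical placeholder, not the calibrated range used in the project.

```python
# Sketch of HSV masking followed by HSL luminance averaging.
# Pixels are (r, g, b) tuples with components in [0, 1].
import colorsys

# Hypothetical HSV window for fluorescent pixels (h, s, v each in [0, 1]).
H_RANGE, S_MIN, V_MIN = (0.50, 0.70), 0.4, 0.3

def mean_luminance(pixels):
    """Average HSL luminance over the pixels that pass the HSV mask.

    Returns 0.0 if no pixel falls inside the window.
    """
    total, count = 0.0, 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if H_RANGE[0] <= h <= H_RANGE[1] and s >= S_MIN and v >= V_MIN:
            # Luminance of the same pixel in the HLS color space
            _, l, _ = colorsys.rgb_to_hls(r, g, b)
            total += l
            count += 1
    return total / count if count else 0.0
```

In the real pipeline the same filtering is done on whole frames at once (e.g. with `cv2.inRange` and `cv2.bitwise_and`), and the resulting average is what gets mapped to a neutrophil count.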

2. Implementing the Syringe Pump (Baseline 2):

Meeting the team:

To get a clear idea of how the pump should work and be implemented, we met with the Sanguis team and Professor Dan Huh from the Penn Bioengineering department. Although the initial plan was to use a peristaltic pump to push/pull the liquid, we realized that we would need a programmable syringe pump to generate the necessary pressure.

Designing the pump:

Implementing the syringe pump mainly involved some basic mechanical design to convert the rotational motion of a servo motor into linear motion of the syringe plunger. We used the standard SG5010 180-degree servo motor and controlled its angle by sending PWM signals over the control wire (Figure 3).
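The PWM control described above can be sketched as follows. Hobby servos like the SG5010 typically expect a 50 Hz signal whose pulse width (roughly 1-2 ms) sets the angle; the pin number and timing values here are illustrative assumptions, not the project's actual wiring.

```python
# Sketch of driving the syringe-pump servo with PWM (hypothetical pin/timing).

def angle_to_duty(angle_deg, min_pulse_ms=1.0, max_pulse_ms=2.0, period_ms=20.0):
    """Map a servo angle (0-180 degrees) to a PWM duty-cycle percentage.

    At 50 Hz (20 ms period), a 1 ms pulse is 5% duty, a 2 ms pulse is 10%.
    """
    pulse = min_pulse_ms + (max_pulse_ms - min_pulse_ms) * angle_deg / 180.0
    return 100.0 * pulse / period_ms

def move_plunger(angle_deg, pin=18):
    """Move the servo horn (and hence the plunger) to the given angle.

    Runs only on a Raspberry Pi, where the RPi.GPIO library is available.
    """
    import time
    import RPi.GPIO as GPIO  # hardware-only import
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    pwm = GPIO.PWM(pin, 50)              # 50 Hz control signal
    pwm.start(angle_to_duty(angle_deg))
    time.sleep(0.5)                      # allow the horn to reach position
    pwm.stop()
    GPIO.cleanup()
```

Sweeping the angle slowly in small steps (rather than jumping to the target) gives a smoother, more controlled pull/push of the liquid.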

3. Integrating the system (Baseline 3):

Meeting the Sanguis team:

We met with the Sanguis team to discuss the chip design and how the channels for liquid flow would be implemented. The team then manufactured a chip and handed it over to us for testing.

It was now time to integrate the implementations from Stage 1 and Stage 2 with an actual chip. We created a basic enclosure for our device from a cardboard box and placed all the components and the chip inside it. We tested the system to ensure that no external light interfered with the illumination of the sample and that all functions ran smoothly.

4. Displaying the data over cloud (Reach 1):

We implemented this using an MQTT broker and a JavaScript file, which we host on a Google Cloud server. Once all the processing is complete, the Raspberry Pi uses its built-in Wi-Fi to connect to the internet and publish the necessary data to the MQTT broker "". The JavaScript file acts as a client subscribed to the relevant topic on the MQTT broker; it fetches live data, which is parsed onto the HTML page. The HTML page shows a line chart of the last ten readings taken by the patient (Figure 4). The doctor can access this page remotely, making it much easier to monitor patients. Website:
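The Pi-side publishing step can be sketched as below. The topic name and payload fields are hypothetical (the project's actual broker address and topic are not reproduced here); the `client` argument is any paho-mqtt-style object with a `publish(topic, payload)` method.

```python
# Sketch of packaging a reading and publishing it over MQTT.
import json
import time

TOPIC = "sanguis/neutrophil"   # hypothetical topic name

def make_payload(luminance, count):
    """Bundle one reading with a timestamp as a JSON string."""
    return json.dumps({
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "luminance": luminance,
        "neutrophil_count": count,
    })

def publish_reading(client, luminance, count):
    """Publish one reading; `client` is a connected MQTT client."""
    client.publish(TOPIC, make_payload(luminance, count))
```

With the paho-mqtt library this would be used as `client = mqtt.Client(); client.connect(broker_host); publish_reading(client, lum, count)`, and the JavaScript page subscribes to the same topic to update its line chart.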

5. Making the data available on user mobile phone through android app (Reach 2):

We built an Android app in Android Studio so that users can view their readings on their phone. The application also displays the trend of the data over a period of time (Figure 5).

With everything in place, we then tested the complete end-to-end system, from the power button to the data being published to the cloud. The overall functionality can be viewed here:

What's next for Sanguis:

1) Having established the basic end-to-end system, we now plan to build a custom PCB using standard microcontrollers and peripherals that are well suited to the given set of tasks and are also low power.

2) We need to fine-tune the image processing by testing on a larger number of samples and generate an accurate mapping from luminance value to neutrophil count.

3) Implement the pumping of the liquid in a more compact manner using new approaches and custom-designed components.

4) Fine-tune the cloud application and the Android application to make them appealing and user friendly.

System and performance evaluation:

1) The main component of our system is image processing, which we tested with different samples to obtain different luminance values. Figure 2 shows one of our test samples.

2) The second component is pushing the dye to the region of interest on the chip and then pushing the dye back to the dye chamber once image processing is done. We evaluated this, and the video can be seen in Figure 3.

3) The third part we tested is whether the same data is available on both the Android app and the website on Google Cloud.

What we learned :

1) This project gave us an opportunity to work with a wide range of tools and technologies, ranging from image processing with OpenCV to cloud hosting to Android.

2) Working with an external team, understanding their requirements and having regular interactions about our implementations to get valuable feedback, was a new experience!

Hardware Components Used:

Raspberry Pi 3 B+, Pi Camera, Commercial Syringes, SG5010 Servo Motor, Battery Pack, MP1584EN DC-DC converter.

Github Link for Codes:
