While talking with my sister about common problems she faces in the operating room, she mentioned she has trouble approximating how much blood a patient has lost during surgery. Blood can be suctioned using a suction tool, in which case it is easy to measure volume by observing the liquid in the collecting container. However, when blood is absorbed by surgical gauze sponges, it is much harder to estimate the volume of blood lost. This information is important for doctors, nurses, and anesthesiologists: it helps them prepare post-operative preventive measures and indicates whether an intra-operative blood transfusion is required to prevent complications. Currently, the standard practice is a crude estimate from manual weight/volume calculations, which is time-consuming during a life-or-death operation.
What it does
Our code receives input in the form of pictures of surgical sponges, and outputs to the screen the number of sponges processed and an estimation of the total blood volume lost.
How I built it
We used Java to create three components: a GUI, a "Pic" class, and a "Sponge" class. The Pic class analyzes an image file pixel by pixel and computes the proportion of red pixels relative to the total of red and white pixels, which indicates how soaked the sponge is. It also takes the saturation level of the red colour into account. The proportion of gauze saturated is then passed to a Sponge object, whose method multiplies the maximum saturation capacity of a fixed sponge size by this proportion to determine the volume absorbed. The capacity values were obtained from a study in a journal article (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5003499/).
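The per-pixel approach above can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual code: the class names, the red-pixel thresholds, and the maximum capacity constant are all assumptions (the real project takes its capacity figure from the cited study and also weights by colour saturation).

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

// Hypothetical sketch of the Pic/Sponge pipeline described above.
public class SpongeAnalyzer {
    // Max volume (mL) a fully saturated sponge of a fixed size can hold.
    // Placeholder value; the project uses a figure from a published study.
    static final double MAX_CAPACITY_ML = 100.0;

    // Classify a pixel as blood-red using simple channel thresholds
    // (illustrative values only).
    static boolean isRed(int rgb) {
        Color c = new Color(rgb);
        return c.getRed() > 120
                && c.getRed() > 1.5 * c.getGreen()
                && c.getRed() > 1.5 * c.getBlue();
    }

    // Fraction of pixels that read as blood-red.
    static double redProportion(BufferedImage img) {
        int red = 0, total = img.getWidth() * img.getHeight();
        for (int y = 0; y < img.getHeight(); y++)
            for (int x = 0; x < img.getWidth(); x++)
                if (isRed(img.getRGB(x, y))) red++;
        return (double) red / total;
    }

    // Estimated absorbed volume = saturated proportion * max capacity.
    static double estimateVolumeMl(BufferedImage img) {
        return redProportion(img) * MAX_CAPACITY_ML;
    }

    public static void main(String[] args) {
        // Synthetic "sponge": left half red, right half white.
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 10; y++)
            for (int x = 0; x < 10; x++)
                img.setRGB(x, y, x < 5 ? 0xFF0000 : 0xFFFFFF);
        System.out.println(estimateVolumeMl(img)); // half-saturated sponge
    }
}
```

Summing this estimate across all processed sponge images gives the running total that the GUI displays.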
Challenges I ran into
Initially, we wanted to receive input as a continuous video stream and use object recognition/AI. However, this proved to be difficult, especially because we did not have all the hardware we needed to build the kind of device we imagined. We decided instead to simplify our concept and use photos as input for now to demonstrate our idea.
Accomplishments that I'm proud of
We're proud of how we first attacked the problem with our "big ideas," and then, when we ran into obstacles, adjusted our plan to prove the concept in a simpler form at this early stage.
What I learned
We learned how to analyze pictures pixel by pixel and extract useful information from them for use elsewhere in a program. We also learned how to use external resources to supplement our prior knowledge when working on new challenges.
What's next for The Blood Bot
We would love to have real-time video footage as the input for our program, making use of AI and object recognition. It would also be beneficial to have settings for adjusting hospital standards of sponge sizes and their corresponding volume values. Finally, we want our device to incorporate other sources of blood volume as well, such as the readings from the suction collection container, so that the number outputted is a comprehensive total of blood volume lost.