Inspiration

At this hackathon, our goal was to build a computer vision workflow that addresses sustainability. After generating a range of ideas, from classifying waste in different settings to managing household electricity use by analyzing the country's power grid, we settled on hospital operating rooms, where we saw the most promising impact. In hospitals, equipment ranging from personal protective equipment (PPE) to surgical tools is not only single-use but often improperly prepared and ultimately wasted. This contributes to the healthcare industry's growing impact on climate change: the industry accounts for over 8% of all carbon emissions in the US (Yale School of Public Health, 2020).

What it does

Once we mapped out our initial scope, we focused on building a prototype to see if we could track improper use of PPE, specifically gloves in the operating room. Doctor Snap uses computer vision to track people's hands and classify glove usage in real time. It gives doctors insight into how often they remove their gloves and how that affects the environment. By providing these insights, we help doctors be more mindful of their waste and promote sustainability.

How we built it

When the user provides a live video feed, we capture and parse it with OpenCV. Each frame is passed to MediaPipe, which identifies any hands present in the scene. From the detected hand landmarks we compute a bounding box and crop the hands out of the frame. Using fast.ai, we built, trained, and tested a computer vision model on datasets of images of hands with and without gloves. The cropped hands from the previous step are piped into this fast.ai model, which determines whether each hand is gloved. The prediction is then handed back to OpenCV, which draws it onto the real-time feed, so the two stages work together to classify a set of hands and predict whether gloves are on (a sketch of this pipeline follows below). The front end is built with Next.js, providing a simple dashboard for the user. Although not fully implemented, this will interface with a Flask backend that provides constant data updates to the user.
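
A minimal sketch of the pipeline, assuming a webcam as the video source and a fast.ai model exported to a file. The file name glove_classifier.pkl, the crop padding, and the on-screen label drawing are illustrative choices, not necessarily what our final code does:

```python
import cv2
import mediapipe as mp
from fastai.vision.all import load_learner, PILImage

learner = load_learner("glove_classifier.pkl")  # hypothetical export name
hands = mp.solutions.hands.Hands(max_num_hands=2)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        h, w, _ = frame.shape
        for landmarks in results.multi_hand_landmarks:
            # Derive a bounding box from the 21 normalized hand landmarks.
            xs = [lm.x * w for lm in landmarks.landmark]
            ys = [lm.y * h for lm in landmarks.landmark]
            pad = 20  # small margin so fingertips aren't clipped
            x0, x1 = max(int(min(xs)) - pad, 0), min(int(max(xs)) + pad, w)
            y0, y1 = max(int(min(ys)) - pad, 0), min(int(max(ys)) + pad, h)

            # Crop the hand and ask the classifier: gloved or bare?
            crop = frame[y0:y1, x0:x1]
            if crop.size:
                label, _, probs = learner.predict(
                    PILImage.create(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)))
                cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 255, 0), 2)
                cv2.putText(frame, str(label), (x0, y0 - 8),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

    cv2.imshow("Doctor Snap", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Running the classifier on every frame is slow but keeps the sketch simple; a production version would batch crops or classify every Nth frame.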

Challenges we ran into

The first challenge we ran into was finding a dataset of gloved hands. There were existing datasets of bare hands we could use, but no gloved equivalent. To handle this, we got some gloves and started taking pictures of gloved hands in different lighting and in different positions. In the end, we were able to build a dataset and train the model.

Another challenge we ran into was gaining a fundamental understanding of machine learning with images. Converting images into data, and data into a model, was an area where we lacked experience. It took us longer than we wanted to figure out how to train the model, especially since we did not start with a ready-made labeled dataset. After research and experimentation, we were able to train the model using fast.ai, as sketched below.
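
A minimal training sketch, assuming the photos are organized into one folder per class (e.g. data/gloved/ and data/bare/); the folder names, backbone, and hyperparameters here are illustrative, not the exact ones we used:

```python
from fastai.vision.all import (
    ImageDataLoaders, Resize, vision_learner, resnet18, error_rate)

dls = ImageDataLoaders.from_folder(
    "data",                 # parent folder containing one subfolder per class
    valid_pct=0.2,          # hold out 20% of images for validation
    seed=42,
    item_tfms=Resize(224),  # resize every image to the model's input size
)

# Fine-tune a pretrained ResNet-18 on the gloved/bare dataset.
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)

# Export the trained model so the live pipeline can load it.
learn.export("glove_classifier.pkl")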

Finally, we faced many constraints due to the scoping of our project. We started out wanting to build an extremely complex system and had to rethink our solution multiple times over the course of the hackathon to find a balance between features and feasibility. This led to a miscommunication, and we had to scrap a good amount of code (though much of it made it into our final product!). Although this was a rocky point for us, we worked through it and still learned a ton of new things along the way.

Accomplishments that we're proud of

Being able to display a live feed of the working classifier model was a big accomplishment for us, and it made for a cool visual element. Testing the model against gloved hands, bare hands, and other objects was a grind, but it was exhilarating to finally have our model work after hours of tweaking.

What we learned

As this was the first computer vision project for every member of our team, we ran into many issues that we plan to address in future challenges. For machine learning, it is important to have a good dataset to train the model. Because we did not yet understand the different aspects and attributes images can introduce, we ran into problems with a biased dataset, resulting in overfitting.
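
One mitigation we would reach for next time is fast.ai's built-in data augmentation, which randomizes rotation, zoom, and lighting at training time so a small, narrowly shot dataset generalizes better. A hedged sketch, assuming the same folder layout as the training example above; the parameter values are illustrative:

```python
from fastai.vision.all import ImageDataLoaders, Resize, aug_transforms

dls = ImageDataLoaders.from_folder(
    "data",
    valid_pct=0.2,
    seed=42,
    item_tfms=Resize(224),
    # Randomly rotate and vary lighting each batch to fight overfitting.
    batch_tfms=aug_transforms(max_rotate=15, max_lighting=0.4),
)
```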

What's next for Doctor Snap

With a fully functional model, Doctor Snap could be sold as SaaS to hospital chains. Future versions would add recognition of other healthcare tools. By integrating computer vision into the operating room, new insights could arise that improve affordability and increase access in our healthcare system.

Built With

OpenCV, MediaPipe, fast.ai, Next.js, Flask