I was inspired by seeing how voting sites maintained cleanliness by collecting anything you touched (pens, stickers, etc.). But I realized that not every public area follows such stringent standards to prevent the transmission of COVID-19 and other diseases through indirect contact. So I designed this tool to help managerial and janitorial staff keep a video-monitored area clean.
What it does
The project tracks inanimate objects in a video feed and counts how many times each one has been touched by a person. If a threshold is crossed, the tool alerts the assigned staff to either sanitize the area or remove the used/touched items.
It can also be repurposed to track how long an object has been in the space.
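The counting, alerting, and dwell-time ideas above can be sketched with a simple per-object tracker. This is a minimal illustration, not the project's actual code; the threshold value and class names are placeholders:

```python
import time

TOUCH_THRESHOLD = 5  # hypothetical limit before staff are alerted


class TrackedObject:
    """Tracks touch count and dwell time for one detected object."""

    def __init__(self, label, first_seen=None):
        self.label = label
        self.touch_count = 0
        self.first_seen = first_seen if first_seen is not None else time.time()

    def record_touch(self):
        """Increment the counter; return True once the sanitize threshold is crossed."""
        self.touch_count += 1
        return self.touch_count >= TOUCH_THRESHOLD

    def dwell_seconds(self, now=None):
        """How long the object has been in the monitored space."""
        now = now if now is not None else time.time()
        return now - self.first_seen


pen = TrackedObject("pen", first_seen=0.0)
for _ in range(4):
    needs_cleaning = pen.record_touch()
print(pen.touch_count, needs_cleaning)  # 4 False
print(pen.record_touch())               # True on the fifth touch
```

In a real deployment the alert would fire a notification (e.g. SNS or email) instead of returning a boolean.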
How I built it
I used Kinesis Video Streams as the video streaming platform, with the GStreamer plugin sending video from my webcam directly into the pipeline. I then trained an object detection model on the COCO dataset in SageMaker and deployed it as an inference endpoint. Using a serverless function on Lambda, I wrote clips and object detection text files to an S3 bucket.
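The glue between the SageMaker endpoint and S3 might look like the sketch below. The real Lambda would use boto3 to invoke the endpoint and `put_object` the results; here I only show the pure formatting step so it runs standalone. The response shape (`{"prediction": [[class_id, score, xmin, ymin, xmax, ymax], ...]}`), the label mapping, and the output line format are all assumptions for illustration:

```python
import json


def format_detections(response_json, min_score=0.5, frame_ts=0.0):
    """Convert a (hypothetical) SageMaker detection response into the
    text lines written alongside each clip in S3."""
    coco_labels = {0: "person", 39: "bottle", 41: "cup"}  # tiny subset for illustration
    lines = []
    for det in json.loads(response_json)["prediction"]:
        class_id, score, xmin, ymin, xmax, ymax = det
        if score < min_score:
            continue  # drop low-confidence detections
        label = coco_labels.get(int(class_id), f"class_{int(class_id)}")
        lines.append(f"{frame_ts:.2f}\t{label}\t{score:.2f}\t{xmin}\t{ymin}\t{xmax}\t{ymax}")
    return lines


resp = json.dumps({"prediction": [[39, 0.92, 0.1, 0.2, 0.3, 0.4],
                                  [0, 0.30, 0.5, 0.5, 0.6, 0.6]]})
print(format_detections(resp))  # one line for the bottle; the 0.30 hit is filtered out
```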
Challenges I ran into
I planned to use Rekognition at first, but then realized it didn't support streaming live video, so I had to quickly pivot to my own object detection model. I also had trouble installing and configuring the GStreamer plugin so that an external webcam would be recognized.
Accomplishments that I'm proud of
I worked through many, many bugs and issues to spin up an MVP within 24 hours :) !
What I learned
How to use Kinesis + GStreamer + SageMaker.
What's next for Vector Tracker
The COCO dataset does not include hands in its training data, so if I'm going to track human contact, hand detection is probably the most important element to work on next. I also need to detect when the bounding boxes for hands and inanimate objects collide, and keep a per-object counter for those events.