Every day when I come home from work and try to find parking at UofL, I spend at least 20 minutes checking every lot in the hopes of getting a good spot. If only there were some way to find an open parking spot without having to drive around every parking lot on campus...
What it does
Spot Spotter uses an open-source image recognition library called darknet (https://pjreddie.com/darknet) to analyze a live feed of a parking lot and identify which spots are taken.
How we built it
We used insecam.org to find open security-camera feeds of parking lots. A simple Python script downloads the live image every ~12 seconds and runs it through a pre-trained darknet model, which uses the YOLO (You Only Look Once) algorithm to detect multiple object classes (including cars) in a single pass. Darknet outputs a bounding box, in pixel coordinates, for each detected object, which we use to locate the cars in the lot. I built a Golang API that handles all of the backend logic. It imports the bounding-box coordinates and builds a grid of lines connecting the boxes, scoring each line by how many other boxes it intersects: the more intersections, the higher our confidence that the line follows a row of cars. From the highest-scoring lines we identify the most likely parking rows, then look for gaps between the bounding boxes along each row; those gaps are the empty spots.
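The download step can be sketched as a small polling loop. This is a minimal illustration, not the project's actual script: the camera URL is a placeholder, and the fetch and handler functions are injected so the loop can run against any feed.

```python
import time
import urllib.request


def fetch_frame(url):
    """Download the current still image from an open camera feed."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()


def poll(fetch, handle, interval=12.0, max_frames=None):
    """Repeatedly fetch a frame and pass it to a handler.

    `fetch` and `handle` are parameters so the loop can be exercised
    without a live network connection.
    """
    frames = 0
    while max_frames is None or frames < max_frames:
        handle(fetch())
        frames += 1
        time.sleep(interval)
    return frames


if __name__ == "__main__":
    # Placeholder URL, not a real feed from the project.
    CAMERA_URL = "http://example.com/lot.jpg"
    poll(lambda: fetch_frame(CAMERA_URL),
         lambda img: open("latest.jpg", "wb").write(img))
```

Each downloaded frame would then be handed to darknet for detection.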
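The row-finding logic lives in the Go API; here is a rough Python sketch of the idea under our reading of it (function names and the pixel tolerance are ours, not the project's): form a line through each pair of box centers, count how many other box centers lie close to that line, and take the highest-scoring line as a parking row.

```python
import math
from itertools import combinations


def center(box):
    """Center point of a bounding box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)


def line_score(p, q, centers, tol=10.0):
    """Count how many box centers lie within `tol` pixels of line pq."""
    (x1, y1), (x2, y2) = p, q
    length = math.hypot(x2 - x1, y2 - y1)
    if length == 0:
        return 0
    hits = 0
    for cx, cy in centers:
        # Perpendicular distance from (cx, cy) to the infinite line pq.
        dist = abs((x2 - x1) * (y1 - cy) - (x1 - cx) * (y2 - y1)) / length
        if dist <= tol:
            hits += 1
    return hits


def best_row(boxes, tol=10.0):
    """Return the pair of centers whose line passes near the most boxes."""
    centers = [center(b) for b in boxes]
    return max(combinations(centers, 2),
               key=lambda pq: line_score(pq[0], pq[1], centers, tol))
```

The score doubles as the per-line confidence mentioned above: a line grazing one stray car scores low, while a line running down a full row scores near the row's car count.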
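Once boxes are grouped into a row, the gap search reduces to a one-dimensional scan. A minimal sketch, assuming (as the project does not spell out) that an empty spot is at least as wide as a typical parked car:

```python
def find_empty_spots(row_boxes, min_gap=None):
    """Given car bounding boxes (x1, y1, x2, y2) along one parking row,
    return (gap_start, gap_end) x-ranges wide enough to hold a car.

    `min_gap` defaults to the median box width, on the assumption that
    an empty spot is roughly one car wide.
    """
    boxes = sorted(row_boxes, key=lambda b: b[0])
    widths = sorted(b[2] - b[0] for b in boxes)
    if min_gap is None:
        min_gap = widths[len(widths) // 2]  # median car width
    gaps = []
    for left, right in zip(boxes, boxes[1:]):
        gap = right[0] - left[2]  # space between adjacent cars
        if gap >= min_gap:
            gaps.append((left[2], right[0]))
    return gaps
```

A wide gap may of course hold more than one empty spot; dividing the gap width by the median car width would give a rough spot count.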
Challenges we ran into
It was difficult to find an algorithm that could reliably identify the parking rows of arbitrary lots without training a new machine learning model.
Accomplishments that we're proud of
Successfully using machine learning to recognize multiple objects in an image, pulling images from a live feed, building a website to display the results, and applying mathematics to identify empty spots.
What we learned
I learned a lot about machine learning and am better able to link mathematics and programming.
What's next for Spot Spotter
We'd like to train our own image recognition model that recognizes parking spots themselves rather than vehicles. That would greatly simplify the pipeline and improve accuracy.