Inspiration

Ever wondered which parking lot on campus to park in? Or what hour of the day to commute to school? To help students and faculty find parking on campus efficiently, we developed a parking tracker called SlugPark. Using real-time object detection, it gives live updates on the different parking lots on campus, as well as a predicted number of available spots in a given lot at a particular time. Our vision is to help solve environmental problems, and we believe SlugPark has the potential to reduce the number of cars on campus. Not only would it reduce traffic and accidents by directing students and faculty to empty parking spots efficiently, but it could also encourage carpooling and commuting by other means of transportation.

What it does

Simply visit Slugpark.com before you head to campus to find out where to park. The final version of our website will display the number of occupied parking spots in all the different parking lots on campus, as well as predict the occupancy of these parking lots during different hours of the day.

Every time Slugpark.com is refreshed, webcam images from the different parking lots are retrieved and processed with the real-time object recognition program YOLO (YOLO on GitHub). This program provides our website with the number of cars detected in each webcam image. The “Current Car Count” acquired from YOLO is fed into a database table that already stores the “Lot ID” and “Maximum Capacity” of each parking lot. This information is then read out and passed to a library called justGage to display the current occupancy of each parking lot. To give the user a predicted occupancy count, SlugPark averages the number of parking spots available at a particular time and day and presents the result on an easily understandable graph.
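The counting step above can be sketched roughly as follows. This is an illustrative Python sketch, not our actual implementation: the lot names, capacities, detection format, and confidence threshold are all assumptions made for the example.

```python
# Hypothetical sketch of the SlugPark counting step. Lot names,
# capacities, and the detection format are placeholder assumptions.

# Lot table: "Lot ID" -> "Maximum Capacity", as described above.
LOT_CAPACITY = {"Lot 1": 200, "Lot 2": 150, "Lot 3": 300}

def count_cars(detections, min_confidence=0.5):
    """Count the 'car' detections YOLO returned for one webcam image.

    `detections` is assumed to be a list of (label, confidence) pairs,
    one per object YOLO found in the image.
    """
    return sum(1 for label, conf in detections
               if label == "car" and conf >= min_confidence)

def occupancy_percent(lot_id, car_count):
    """Convert the "Current Car Count" into the percentage a gauge shows."""
    capacity = LOT_CAPACITY[lot_id]
    return round(100 * min(car_count, capacity) / capacity, 1)

# Example: three detections, one below the confidence threshold.
detections = [("car", 0.91), ("car", 0.87), ("truck", 0.66), ("car", 0.42)]
print(occupancy_percent("Lot 1", count_cars(detections)))  # → 1.0
```

The occupancy percentage is exactly the kind of single number a gauge widget such as justGage can display per lot.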

How we built it

Our working prototype is the website Slugpark.com, which displays random numbers for three parking lots in the form of gauges, as well as predictions for the whole day based on randomly generated data stored in arrays. The real-time object recognition program YOLO has been tested on parking lot webcam images, and the total number of cars has been extracted from the analysis. The website UI prototype was built using JavaScript, HTML, and CSS.
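The prototype's placeholder data can be sketched as below (in Python for brevity; the prototype itself generates it in JavaScript). The lot names, hour range, and capacity are illustrative assumptions.

```python
import random

# Placeholder lots and hours; the real site would use actual campus lots.
LOTS = ["Lot 1", "Lot 2", "Lot 3"]
HOURS = list(range(7, 20))  # 7 AM through 7 PM

def fake_current_counts(capacity=200):
    """Random 'Current Car Count' per lot, standing in for YOLO output."""
    return {lot: random.randint(0, capacity) for lot in LOTS}

def fake_daily_prediction(capacity=200, seed=None):
    """Random occupancy curve over the day, stored in a simple array,
    standing in for the averaged historical data described above."""
    rng = random.Random(seed)
    return [rng.randint(0, capacity) for _ in HOURS]
```

Swapping these stubs for real YOLO counts and real historical averages is the step described under "What's next for SlugPark."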

Challenges we ran into

We tried several different approaches to build an accurate and reliable parking tracker. Our initial idea was to base the parking lot occupancy information on user input: the website/app would let users report when they entered a lot, when they left it, or when they left without finding a spot. However, it seemed very unlikely that users would remember to do this, and it would require everyone to download the application. Our next approach was therefore to come up with a statistical framework (a Rayleigh distribution) to predict and fill in the missing data. In a later pipeline, we considered utilizing geofencing data by having the app/website run in the background of the user’s smartphone. Based on the speed at which the user was moving, we could infer their parking behavior (e.g. the user is probably parking if they drive into a lot and then walk out of it). The main problems with this approach were our lack of experience in app development and the limited ability of a web browser to extract geofencing data and make accurate predictions about the user’s behavior. Finally, we arrived at our current approach: using image recognition software.

Accomplishments that we're proud of

We’re proud of our product idea and that we stuck with it for the duration of the entire weekend. We all worked outside our comfort zone, learned a lot, and successfully worked as a team without any internal conflicts. We’re most proud of attempting to create a useful parking tracker with multiple different approaches. We thoroughly discussed and developed all of them in-depth but had no problem moving on to a new approach once we saw the limitations.

What we learned

Databases, APIs, machine learning / real-time object recognition, web development, Google Cloud Platform, Flask, Git workflow, data visualization, website UI in JavaScript, HTML, and CSS, and, last but not least, teamwork.

What's next for SlugPark

The next step will be to establish a data structure in which the information received from the real-time object detector YOLO can be stored and read out by our website. Next will be the establishment of the necessary infrastructure (installation of webcams at the parking lots), so Slugpark.com can display actual real-time data. Furthermore, we’re planning to train and develop YOLO further so it can identify cars in live video footage and become more reliable at identifying cars in full parking lots. Additionally, we would like to extend SlugPark to display parking lot information in a more detailed way, such as per-level information for a parking garage and whether individual spots are occupied or open (the object recognition software is already able to do this). After that, we want to turn SlugPark into a mobile application so that users no longer need to visit the website if they are already on their way to campus or on campus. For reliable predictions, we will download, organize, and store class schedules and class sizes, as well as store the data that YOLO generates from the webcam images. For that purpose, we will need to develop a plan for secure data storage. SlugPark can also be applied to other college campuses as well as to city-wide parking systems.
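One minimal sketch of such a data structure, with predictions computed by averaging past counts at the same hour of day, is shown below. The schema, table and column names, and sample rows are all hypothetical; SlugPark's actual storage plan is still to be designed.

```python
import sqlite3

# Hypothetical schema: table/column names and sample data are
# illustrative, not SlugPark's actual design.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lots (lot_id TEXT PRIMARY KEY, max_capacity INTEGER)")
conn.execute("""CREATE TABLE counts (
    lot_id TEXT, observed_at TEXT, car_count INTEGER)""")

conn.execute("INSERT INTO lots VALUES ('Lot 1', 200)")
# Two 9 AM observations from different days, as YOLO would produce them.
conn.executemany("INSERT INTO counts VALUES (?, ?, ?)",
                 [("Lot 1", "2019-01-21 09:00", 120),
                  ("Lot 1", "2019-01-28 09:00", 140)])

# Prediction = average car count at a given hour across past days.
row = conn.execute("""
    SELECT AVG(car_count) FROM counts
    WHERE lot_id = ? AND strftime('%H', observed_at) = ?""",
    ("Lot 1", "09")).fetchone()
print(row[0])  # → 130.0
```

A real deployment would add indexes, retention policies, and access controls as part of the secure-storage plan mentioned above.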
