Inspiration
With the population constantly increasing, parking spaces have become harder to find. Busy plazas in particular can be a struggle, which creates huge inconveniences. Searching for a spot wastes gas, which costs money and hurts the environment, and it wastes valuable time. In some cases, drivers must wait for a parked car to leave and open up a spot, or in extreme cases, park in a completely different location. On top of that, circling a parking lot is stressful, which can impair driving and in the worst cases lead to collisions.
Our team noticed these issues and felt we needed to take the initiative to resolve them. Parking should not be a burden; it should be a simple step that lets you spend more time at the place you actually want to be instead of wasting time parking your car.
Currently, drivers have little technology to help them find parking spots, so they must rely on their own observations. The problem is that a driver can only see what is in their field of vision, and in parking lots, SUVs and trucks easily block the view of open spots. Our team has developed a more effective way to identify these spots. Using cameras in parking lots, our program analyzes the live feed to detect open parking spots and send notifications to users. Cameras have a far wider field of view than a driver sitting in a car, and combining that with notifications about when spots open up reduces the stress of aimlessly driving around the lot and cuts the total time spent parking, creating an effective solution to parking inconveniences.
What it does
Our web application detects open parking spots from pictures and videos of parking lots using ML. The core focus of the project is to use Python deep learning models integrated with OpenCV to help people in a variety of ways.
When used in conjunction with cameras in parking lots, we can train our model and display a live feed of the lot. Open spots are highlighted in green, while occupied spots remain unchanged. The original photo or video is shown on the left, with the segmented image to its right.
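As a rough illustration of that display, the snippet below sketches how the green overlay and side-by-side view could be produced with OpenCV. The spot coordinates and `is_open` flags are hypothetical placeholders standing in for our model's output, not our exact code.

```python
import cv2

def highlight_open_spots(image, spots):
    """Draw a translucent green rectangle over every spot flagged as open.

    `spots` is a hypothetical list of (x, y, w, h, is_open) tuples produced
    by the detection model; only the open ones get highlighted.
    """
    overlay = image.copy()
    for x, y, w, h, is_open in spots:
        if is_open:
            cv2.rectangle(overlay, (x, y), (x + w, y + h), (0, 255, 0), -1)
    # Blend the overlay so occupied spots and the background stay visible.
    return cv2.addWeighted(overlay, 0.4, image, 0.6, 0)

# Example: original frame on the left, segmented result on the right.
frame = cv2.imread("parking_lot.jpg")
spots = [(50, 80, 40, 90, True), (100, 80, 40, 90, False)]  # placeholder data
side_by_side = cv2.hconcat([frame, highlight_open_spots(frame, spots)])
cv2.imwrite("result.jpg", side_by_side)
```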
Finding usable data was difficult, so we trained the model on just a handful of images and videos. Even so, that was enough to build an accurate model that could isolate individual parking spots and determine whether or not they were taken.
On top of detecting open parking spots, we can also notify users when a spot opens up in the parking lot they specify. This saves drivers time, hassle, stress, gas, and money. It also helps the community by cutting unnecessary traffic and carbon emissions, since users know where the closest open spot is and don't have to drive around endlessly to find one.
Using Google Cloud, we store a database containing the names of many free parking lots in the Bay Area, and we also host our training data there so the whole team has easy access to it. Using Radar.io, we find the user's location and suggest nearby parking lots from our Google Cloud database. From there, users can go to our smart parking page and enter their email so they are quickly notified when a spot opens in the lot they selected. Combining Radar.io with our database lets us present a few options in a drop-down menu while users select their parking lot; we currently limit the number of lots shown per area to reduce the strain on our servers. Our project saves users money because we direct them straight to open spots, so they avoid paying for parking meters and wasting gas by driving around aimlessly.
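To give a sense of the "nearby lots" step, here is a minimal sketch of how lots stored in our database could be ranked by distance from the user's coordinates (which Radar.io provides on the frontend). The lot records, names, and distance threshold are illustrative, not our production data or code.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical subset of the parking-lot records stored in Google Cloud.
PARKING_LOTS = [
    {"name": "Downtown Plaza Lot", "lat": 37.3382, "lng": -121.8863},
    {"name": "Library Lot", "lat": 37.3354, "lng": -121.8852},
]

def haversine_miles(lat1, lng1, lat2, lng2):
    """Great-circle distance between two coordinates, in miles."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))

def nearby_lots(user_lat, user_lng, limit=5):
    """Return the closest lots, capped at `limit` to keep the dropdown small."""
    ranked = sorted(
        PARKING_LOTS,
        key=lambda lot: haversine_miles(user_lat, user_lng, lot["lat"], lot["lng"]),
    )
    return ranked[:limit]

# The user's coordinates would come from Radar.io's geolocation on the frontend.
print(nearby_lots(37.3366, -121.8850))
```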
How we built it
When we first came up with the idea, we didn't realize how difficult the machine learning would be. The first step was to distinguish every parking spot in a given lot. From there, we had to figure out which spots were filled and which weren't, which required training a model on cropped images of individual spots, both open and taken.
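For illustration, a minimal sketch of that kind of binary "open vs. taken" classifier is shown below, written with Keras for concreteness. The framework choice, layer sizes, image size, and directory layout are assumptions for the example, not our exact configuration.

```python
# Minimal sketch of a binary "open vs. taken" classifier for cropped spot images.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (64, 64)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "spots/",            # hypothetical folder with "open/" and "taken/" subfolders
    image_size=IMG_SIZE,
    batch_size=32,
    label_mode="binary",
)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability that the spot is open
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```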
We built our web app with a variety of languages and APIs. The frontend is written in HTML, CSS, and JavaScript, while the backend uses JavaScript, PHP, Python (OpenCV), Google Cloud, and Radar.io. JavaScript helps handle our forms, with PHP as the main support behind them; PHP also powers our notification system. Python and OpenCV drive our deep learning models. We used Google Colab notebooks to test the models so they ran faster, and we integrated Google Cloud by storing our parking lot data there and hosting the website on Firebase. Lastly, we use Radar.io to determine the user's location and narrow down the list of parking lots.
Challenges we ran into
Our main issues arose in training our model. At first, differentiating the individual parking spots was quite hard; it took a long time, but we eventually solved the problem with clustering logic. Telling open spots apart from occupied ones was also a challenge, but after extensive training we ended up with a highly accurate model. Building the model with deep learning and OpenCV was tough because this was our first time doing computer vision with AI, but after plenty of research and countless hours we were able to finish it.
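As a hedged sketch of the clustering idea (the exact approach we used may differ), one simple way to group detected spots is to cluster their x-coordinates into columns by splitting wherever the gap between sorted values exceeds a threshold. The coordinates and threshold below are made-up examples.

```python
def cluster_coordinates(xs, gap_threshold=25):
    """Group sorted x-coordinates into clusters (one cluster per column of spots)."""
    clusters = []
    for x in sorted(xs):
        if clusters and x - clusters[-1][-1] <= gap_threshold:
            clusters[-1].append(x)
        else:
            clusters.append([x])
    return clusters

# Example: x-positions of line segments detected in a parking-lot image.
detected_xs = [102, 105, 99, 310, 305, 512, 508, 515]
print(cluster_coordinates(detected_xs))
# -> [[99, 102, 105], [305, 310], [508, 512, 515]]
```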
Accomplishments that we're proud of
We are proud of building a fully functioning vision model. This was the first time any of us had worked with OpenCV, and it was great to see the model working. It was also the first hackathon for two of our members, and creating a project with this much intricate code, while helping people in so many ways, was amazing. We are also proud of using Google Cloud Platform to store our parking data and testing data and to host our website, and of incorporating Radar.io to narrow down our list of parking lots.
What we learned
We learned how to implement computer vision with Python and deep learning. We also learned a great deal about both frontend and backend development, building a clean UI and using Radar.io and Google Cloud for the first time.
What's next for Parking-Prediction
We hope to get functioning live camera footage in the future so that we can run our model on it. This would let us display a live feed on our website, with the model editing the images so open spots are visible. Alternatively, other companies could embed our model on their own websites to show their parking lots. With access to live footage, we would not only display the feed but also deploy our Radar.io and Google Cloud functionality to its full potential. Currently, our notification system is a proof of concept, with the code ready to be deployed as soon as we get live footage.
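Purely as a sketch of that direction, the loop below shows how live frames could be pulled from a camera stream and passed through the model. The stream URL is a placeholder and `detect_open_spots` is a hypothetical stand-in for our model's inference step, not a working integration.

```python
import cv2

def detect_open_spots(frame):
    # Placeholder: the real implementation would run our trained model here
    # and draw green boxes on the open spots, as on the website today.
    return frame

cap = cv2.VideoCapture("rtsp://example.com/parking-lot-feed")  # placeholder URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    annotated = detect_open_spots(frame)
    cv2.imshow("Parking-Prediction live view", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```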
Built With
- ai
- css
- firebase
- google-cloud
- html
- ipython
- javascript
- jupyter-notebook
- ml
- php
- python
- radar.io