Inspiration

Our inspiration for this project came from our past experiences trying to find parking spots in Montreal. After driving around for more than a dozen minutes looking for a convenient spot, we began wondering if there might be a better way to address the issue! Essentially, we wanted to build a framework that could help reduce traffic within the city, cut down on road rage and ultimately lower the CO2 emissions caused by driving around "aimlessly".

What it does

Our application uses a live stream from an IP camera surveilling parking lots. The stream is sent to our Google Cloud server, where it is analysed in real time. Video processing allows us to automatically detect free parking spaces and to direct our users to the space nearest to them.

How we built it

We built our framework with the help of many technologies, and it can be broken down into four essential components. The first is our IP camera. For the demonstration, we used an Android streaming application (CamON Live Streaming) as a means of surveilling our model of the parking lot. This live stream is sent over Wi-Fi to our Python server, which processes the incoming video with tools such as OpenCV and SciPy. By applying various manipulations, such as Hough transforms, Otsu binarisation and morphological transformations, we extract key data from the stream and identify available and occupied parking spaces. This information is then transmitted to a second server (in Node.js) on Google Cloud, where it can be shared with users. The final component of our project is the client interface, which can be accessed at any time through our domain and points users to the closest available parking space.
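
To give an idea of how the Python server ties these steps together, here is a minimal sketch of the processing loop. The stream URL, the spot rectangles, the occupancy threshold and the /spots endpoint are placeholders, and the real pipeline is more elaborate than this.

```python
import cv2
import numpy as np
import requests

STREAM_URL = "http://192.168.0.42:8080/video"   # placeholder address for the phone camera
SERVER_URL = "https://example.com/spots"        # hypothetical Node.js endpoint
# Illustrative hand-picked (x, y, w, h) regions for two spots; in the full system
# the detected line segments are used to locate the spot boundaries automatically.
SPOTS = {"A1": (50, 80, 120, 200), "A2": (180, 80, 120, 200)}

cap = cv2.VideoCapture(STREAM_URL)
last_status = {}
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Otsu binarisation separates bright markings and cars from the asphalt.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening removes small specks of noise from the low-quality feed.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Probabilistic Hough transform recovers the straight lines bounding the spots
    # (used for lot geometry in the full pipeline; unused in this simplified sketch).
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50, minLineLength=40, maxLineGap=10)

    # Naive occupancy test: a spot whose region is mostly "filled" is considered taken.
    status = {}
    for name, (x, y, w, h) in SPOTS.items():
        roi = binary[y:y + h, x:x + w]
        status[name] = "occupied" if roi.mean() > 60 else "free"

    # Only notify the Node.js server when something actually changes.
    if status != last_status:
        requests.post(SERVER_URL, json=status, timeout=2)
        last_status = status
```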

Challenges we ran into

We ran into quite a few challenges along the way. One of the greatest was building a robust parking-spot detection algorithm. The particularity of the parking spots we chose to work with is that they have an open shape, which means the pre-processing needed to extract information about each spot was not an easy task. We also decided to lower the quality of our stream to mimic real-life situations where the footage would be blurrier and harder to process. Aside from the processing challenges, we had difficulties with various technicalities of Node.js and Angular, technologies we weren't very familiar with.
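
As a rough idea of the kind of degradation we tested against, a frame can be downscaled and blurred before being fed to the detection code. This is only a sketch; the scale and blur parameters below are illustrative, not the actual settings we used.

```python
import cv2

def degrade(frame, scale=0.5, blur_ksize=7):
    """Roughly mimic a low-quality feed: downscale, blur, then upscale back."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    blurred = cv2.GaussianBlur(small, (blur_ksize, blur_ksize), 0)
    return cv2.resize(blurred, (w, h), interpolation=cv2.INTER_LINEAR)
```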

Accomplishments that we're proud of

We are very proud to have been able to work as a team. Each member had an essential role in the development of the project, and we were able to help one another whenever needed, even though the technologies involved were not necessarily in our respective areas of expertise. We are also proud to have built our demonstration model from repurposed parts.

What we learned

We learned a great deal about image processing during this Hackatown. We also rediscovered some technologies we hadn't touched in quite a while and had the chance to dive head first into web hosting and computer vision without any noteworthy prior experience.

What's next for Close EnHough Parking

The next steps for our project would be to add true geolocation to our application. We would then like to expand the preprocessing algorithm we developed to handle aerial views of the city in real time. This could be achieved by using the surveillance infrastructure already in place around the city, or even by deploying a fleet of drones to map the surrounding streets. Combining these technologies would let us give real-time recommendations to our users based on their current location, and therefore help manage traffic around the city.
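
As a first step toward those recommendations, matching a user to the nearest free spot could be as simple as a great-circle distance comparison. The function and data layout below are a hypothetical sketch, not part of our current code base.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def nearest_free_spot(user_lat, user_lon, spots):
    """spots: list of dicts such as {"id": "A1", "lat": 45.50, "lon": -73.61, "free": True}."""
    free = [s for s in spots if s["free"]]
    if not free:
        return None
    return min(free, key=lambda s: haversine_m(user_lat, user_lon, s["lat"], s["lon"]))
```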
