Consider this: you have been late to class for the sixth time this month, not because you live 15 minutes off campus, but because finding parking is a dice roll even when you show up 15 minutes early. There are currently RIT PawPrints petitions for more parking spaces, but we live in a swamp, so paving new lots is not a reasonable fix. We propose Blind Spot, an alternative solution that reduces frustration and helps students get to class on time. Blind Spot is a tool that utilizes existing security cameras to determine available parking spots in real time.

What it does

We have developed a system that turns existing cameras into sensors for determining parking availability. This system could improve the lives of anyone who drives a car, and improve the customer experience in any market where parking is a pain point.

How we built it

We utilized overhead cameras, already in place, as sensors for the cars in a lot. This data is streamed over the network to a central server, where it is processed to determine the location and number of cars, along with metadata about the camera stream.

The central server runs the YOLOv3 framework to recognize cars and map them to locations in the image. The server also runs a web server to distribute this information to our app, the admin tool, and our RGB lighting rig, which reflects the state of the parking lot in an easily digestible medium: people who are currently driving should not be checking their phones to find available parking spots.
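The lighting rig's logic boils down to an occupancy-to-color mapping. A minimal sketch (the thresholds and colors here are illustrative assumptions, not our exact values):

```python
def lot_color(occupied: int, capacity: int) -> tuple:
    """Map parking-lot occupancy to an RGB color for the lighting rig.

    Thresholds are illustrative: under half full shows green, nearly
    full shows orange, effectively full shows red.
    """
    if capacity <= 0:
        raise ValueError("capacity must be positive")
    fill = occupied / capacity
    if fill < 0.5:
        return (0, 255, 0)    # plenty of space: green
    if fill < 0.9:
        return (255, 165, 0)  # filling up: orange
    return (255, 0, 0)        # effectively full: red
```

A driver only needs one glance at the rig, which is the point: the state of the lot is readable without touching a phone.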

Challenges we ran into

Our challenges were divided into two sections: general networking and detection methods.


General networking

The BrickHack DHCP server decided that other people were more important than us and kept handing our IP addresses away. In addition, we would constantly end up on different networks whenever we requested a new lease. 5G would have been great.

Determining optimal detection method

Color recognition

Our original idea was to determine the color of the ground and check for changes from that baseline. This method turned out to be unviable because snow would change the color of the ground and ruin our data.

Car recognition using preexisting classifiers

Using the open-source YOLOv3 framework with COCO weights, we could identify cars and get their positions in the image, even from black-and-white cameras!
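Filtering YOLOv3's raw output down to cars is mostly array bookkeeping. A sketch of that parsing step, assuming the standard YOLOv3 row layout of [cx, cy, w, h, objectness, 80 COCO class scores] with normalized coordinates (our actual pipeline differed in detail):

```python
import numpy as np

CAR_CLASS = 2  # index of "car" in the COCO class list used by YOLOv3


def extract_cars(detections, conf_threshold=0.5):
    """Pull car bounding boxes out of raw YOLOv3 output rows.

    Returns (cx, cy, w, h, confidence) tuples for rows whose best
    class is "car" and whose combined confidence clears the threshold.
    """
    cars = []
    for row in detections:
        scores = row[5:]
        class_id = int(np.argmax(scores))
        confidence = row[4] * scores[class_id]
        if class_id == CAR_CLASS and confidence >= conf_threshold:
            cx, cy, w, h = row[:4]
            cars.append((cx, cy, w, h, float(confidence)))
    return cars
```

Because the COCO scores are per-class rather than per-color-channel, this works just as well on grayscale frames, which is what made the black-and-white cameras usable.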


Accomplishments that we're proud of

By making it easier to find parking, we have removed a friction point when buying Constellation products. By utilizing 5G, IP cameras could be placed in areas that do not have Wi-Fi. Why would Wi-Fi be in a parking lot?

What's next for Blind Spot

We plan to expand to parking lots in heavily utilized areas to relieve frustration and, consequently, reduce accident rates.

Timeline of BrickHack progress

10:30 - Determined project goal

We discussed multiple projects to work on, and came up with the idea to track open parking spaces using cheap cameras.

11:34 - Incorporated the 5G competition requirements

After discussion with the sponsor's representative, we incorporated off-device computing using a centralized server. Wi-Fi serves as our stand-in for the 5G element: streaming video to the server takes significant bandwidth, and in production the cameras would be in places that may not have Wi-Fi access.

14:36 - First draft system engineer overview diagram


The data is no longer streamed directly to the user.

JSON Format

15:30 - Attended 5G talk

Attended a talk covering the specs and perks of 5G itself.

16:46 - First image of functional car recognition

Using YOLOv3 we were able to identify every car and map each to a region of the image using basic coordinates (daytime recognition).

17:29 - Prototyped manual parking space definition interface

Available parking spaces are defined through a drag-and-drop interface: the corners of a polygon are moved into position, and the number of parking spots inside is specified. Cross-referencing these regions with our earlier car recognition, we can tell whether a space contains a car.
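The cross-referencing step reduces to a point-in-polygon test: is a detected car's centroid inside a dragged-out region? A self-contained sketch using the classic ray-casting test (an illustrative helper, not our exact code):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon?

    `polygon` is a list of (x, y) corner coordinates, e.g. the dragged
    corners of a parking region. Casts a ray to the right and counts
    edge crossings; an odd count means the point is inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

For example, with a square region `[(0, 0), (4, 0), (4, 4), (0, 4)]`, a centroid at `(2, 2)` tests inside while `(5, 2)` does not.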

18:13 - Setup web server

Using a Node web server, we host the file required by the different interfaces.

18:39 - Calculated security camera bandwidth

Minimum bandwidth estimate for 250 security cameras: 21.76 Gbps

Maximum bandwidth estimate for 250 security cameras: 168.04 Gbps

Estimates were made using the Commercial Video Security Products calculator.
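Working backwards from those fleet totals, the implied per-camera bandwidth is simple arithmetic:

```python
CAMERAS = 250
min_total_gbps = 21.76
max_total_gbps = 168.04

# Per-camera bandwidth implied by the fleet totals, in Mbps.
min_per_camera_mbps = min_total_gbps * 1000 / CAMERAS
max_per_camera_mbps = max_total_gbps * 1000 / CAMERAS
```

That works out to roughly 87 to 672 Mbps per camera, far beyond what a shared campus Wi-Fi deployment could sustain at scale, which is the argument for 5G backhaul.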

19:12 - Integrated and finalized web server and json file sharing

The web server has been initialized. It hosts a JSON file at :8080/live.
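The real server is Node, but the idea is small enough to sketch with Python's standard library: serve one JSON document at `/live`. The payload fields below are hypothetical; our actual schema is not reproduced here.

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical payload shape -- illustrative fields only.
LOT_STATE = {"lot": "S", "capacity": 120, "occupied": 87}


class LiveHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/live":
            body = json.dumps(LOT_STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


def serve(port=8080):
    """Block forever, serving lot state at /live (mirrors our :8080/live)."""
    ThreadingHTTPServer(("", port), LiveHandler).serve_forever()
```

A single read-only JSON endpoint keeps the app, the admin tool, and the lighting rig on the same source of truth.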

19:46 - Verified that nighttime tracking is functional

19:56 - Front end stream view

Below is how the stream will look from a user's perspective. In addition, users are shown the capacity of the lot at a glance.

22:25 - Created the image streaming service on an IoT device

Set up a Raspberry Pi as a makeshift IP camera. It runs the "motion" webcam streaming framework, modified to always stream video regardless of whether motion is found in the image.
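In motion's configuration this amounts to roughly the following (option names from memory of motion's config format; treat them as an assumption and check your motion version's documentation):

```
# motion.conf excerpt (illustrative)
# Pretend motion is always detected, so video streams continuously
emulate_motion on
# Port for the live MJPEG stream
stream_port 8081
```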

22:29 - HUD example

This is the first iteration of the bounding box defining interface.

22:54 - GUI manipulation

Created a dynamic user interface to define parking spaces in 3D space.

0:29 - Accessed the API successfully

1:11 - First iPad app milestone

React code interacting with the server and lighting controller.

3:02 - Functional iPad app

5:45 - Added more detail to app

Created a map with the ability to pan and zoom, increasing the clarity of the parking lot being defined.

6:00 - Server RAM observation

This level of processing power is not available onboard a typical IP camera. Without 5G, this project would not be possible.*

9:29 - Freaked out about the approaching deadline for BrickHack project submissions


9:37 - Group picture
