Businesses have been hit hard by COVID-19 as they try to survive consecutive lockdowns, restrictions, and logistical uncertainty. Challenges include enforcing building occupancy limits when some customers disobey them, coping with irregular foot traffic, and meeting operational expenses on lower profits.

What it does

With 2020 Vision, businesses can add a live video stream from the entrance of the building; the app analyzes the stream to track the number and flow of people and triggers a lock when maximum occupancy is reached. The collected foot-traffic data is visualized in a dashboard and used to calculate metrics such as average entrances per hour, average exits per hour, average people per week, average capacity, customer density, and people in and out. These metrics help business owners decide when, and for how long, to operate during the day or week.
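As a rough illustration of how such metrics could be derived, here is a minimal sketch that computes a few of them from a timestamped foot-traffic log. The event format and values are hypothetical, not the project's actual data schema:

```python
from datetime import datetime

# Hypothetical foot-traffic log: (timestamp, event) pairs emitted by the tracker.
events = [
    (datetime(2020, 11, 7, 9, 5), "enter"),
    (datetime(2020, 11, 7, 9, 40), "enter"),
    (datetime(2020, 11, 7, 10, 15), "exit"),
    (datetime(2020, 11, 7, 11, 50), "enter"),
    (datetime(2020, 11, 7, 12, 30), "exit"),
]

entries = [t for t, e in events if e == "enter"]
exits = [t for t, e in events if e == "exit"]

# Hours covered by the log (at least 1, to avoid division by zero).
span_hours = max((events[-1][0] - events[0][0]).total_seconds() / 3600, 1)

avg_entries_per_hour = len(entries) / span_hours
avg_exits_per_hour = len(exits) / span_hours
people_inside = len(entries) - len(exits)

print(avg_entries_per_hour, avg_exits_per_hour, people_inside)
```

The same event stream can be bucketed by day or week to produce the average-people and density figures shown on the dashboard.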

How we built it

A React web app displays a dashboard of data metrics visualized with Chart.JS, alongside a live video feed processed by OpenCV to detect the movement of people frame by frame. The app keeps track of the number of people inside the building and raises a lock signal when maximum occupancy is exceeded. The React web app and the OpenCV API are both socket clients that transmit data to, and receive data from, a NodeJS socket server.

On the computer-vision side, object detection uses a Single Shot Detector (SSD), which identifies objects in each video frame. It is paired with a MobileNet architecture, a deep neural network designed to run on devices with limited memory and computing power, such as smartphones and IP cameras. A centroid tracker handles object tracking by computing the center of each object's bounding box: a detected object (a customer) is assigned a unique ID, followed across subsequent frames, and used to count the people inside and the people entering and leaving. Each tracked object carries a directional vector that indicates whether it is entering or exiting, and the algorithm defines a physical threshold line representing the entry/exit point.
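The tracking and counting step can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the SSD/MobileNet detector has already produced per-frame bounding boxes, and the occupancy limit, line position, and matching threshold are made-up values:

```python
import math

MAX_OCCUPANCY = 10   # building limit (illustrative value)
LINE_Y = 200         # y-coordinate of the virtual entry/exit threshold line

class CentroidTracker:
    """Matches detections across frames by nearest centroid distance."""
    def __init__(self, max_dist=50):
        self.next_id = 0
        self.objects = {}   # id -> current (cx, cy)
        self.history = {}   # id -> previous cy, used for the direction vector
        self.max_dist = max_dist

    def update(self, boxes):
        centroids = [((x1 + x2) // 2, (y1 + y2) // 2) for x1, y1, x2, y2 in boxes]
        matched = {}
        for c in centroids:
            # Greedy nearest-neighbour match against existing tracks.
            best_id, best_d = None, self.max_dist
            for oid, prev in self.objects.items():
                if oid in matched:
                    continue
                d = math.dist(c, prev)
                if d < best_d:
                    best_id, best_d = oid, d
            if best_id is None:            # no close track: register a new person
                best_id = self.next_id
                self.next_id += 1
            self.history[best_id] = self.objects.get(best_id, c)[1]
            matched[best_id] = c
        self.objects = matched
        return matched

tracker = CentroidTracker()
inside = 0

def process_frame(boxes):
    """Count line crossings: moving down = entering, moving up = exiting."""
    global inside
    for oid, (cx, cy) in tracker.update(boxes).items():
        prev_y = tracker.history[oid]
        if prev_y < LINE_Y <= cy:
            inside += 1        # crossed the line downward -> entered
        elif prev_y >= LINE_Y > cy:
            inside -= 1        # crossed the line upward -> exited
    return inside >= MAX_OCCUPANCY   # True -> raise the lock signal

# Two synthetic frames: one person walks downward across the threshold line.
process_frame([(100, 150, 140, 190)])            # centroid at y=170, above the line
locked = process_frame([(102, 190, 142, 230)])   # centroid at y=210, below the line
print(inside, locked)
```

A production tracker also handles disappearing objects and detection gaps; the greedy nearest-centroid match above is only the core idea of centroid tracking.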

Challenges we ran into

Fixing broken modules and package dependencies in order to establish the socket client/server connection. Coordinating everyone's individual schedule in a remote, online environment. Figuring out how to send real-time data over socket connections. Collecting enough data to create useful logistical metrics, and feeding video input to a computer vision model light enough for our web architecture. Deploying the computer vision model on AWS Cloud9 within an EC2 instance.
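A sketch of how the OpenCV side could push real-time occupancy updates to the Node socket server. The payload schema and event names here are illustrative, and the (commented) transport assumes the python-socketio client, which may differ from the project's actual setup:

```python
import json
import time

def occupancy_event(inside, entered, exited):
    """Build a JSON payload for the dashboard (field names are illustrative)."""
    return json.dumps({
        "type": "occupancy",
        "inside": inside,
        "entered": entered,
        "exited": exited,
        "ts": time.time(),
    })

# With python-socketio (an assumption; any Socket.IO client would work):
# import socketio
# sio = socketio.Client()
# sio.connect("http://localhost:3000")              # the NodeJS socket server
# sio.emit("occupancy", occupancy_event(4, 1, 0))   # push an update per frame batch

msg = json.loads(occupancy_event(4, 1, 0))
print(msg["inside"])
```

Serializing each update as a small JSON message keeps the React client and the computer vision client decoupled: both only need to agree on the event name and field names.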

Accomplishments that we're proud of

A clean, easy-to-use UI; effectively setting up the web architecture with socket connections for real-time data handling; creating an end-to-end solution; and deploying a computer vision model seamlessly integrated into the web architecture.

What we learned

OpenCV, SSD, MobileNet, and centroid tracking for implementing a computer vision API; Chart.JS for data visualization; Socket.IO for real-time data handling between clients and server.

What's next for 2020 Vision

Integrating IoT smart-locking systems to further modularize the software for different businesses.
