Traffic congestion is already one of the most painful parts of city life, and it's only going to worsen as cities grow. We need clever solutions, and those solutions need data. But that data is sorely lacking: even today, traffic counting is often done by hand, making it prohibitively expensive to deploy on a large scale.

Thankfully, machine vision has improved by leaps and bounds in the last couple of years, meaning that the problem can now be solved in 24 hours by four college students in a basement.


GoWithTheFlow is an open source app that enables any group with a simple camera to quickly and easily analyze the flow of objects in the camera frame. The user inputs a video, and the software counts the number of objects of interest moving between different exit and entry points within the frame. The user can also manually select intersections for additional information.
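The core counting idea can be sketched in a few lines. This is a hypothetical illustration, not our actual implementation: assume each tracked object yields a list of (x, y) positions, and each entry/exit point is a zone center; a flow is then a tally of which zone a track starts near and which it ends near.

```python
# Hypothetical sketch: count flows between entry/exit zones from object tracks.
# A "track" is a list of (x, y) positions for one object; a "zone" is a point.
import math

def nearest_zone(point, zones):
    """Return the index of the zone center closest to the point."""
    return min(range(len(zones)), key=lambda i: math.dist(point, zones[i]))

def count_flows(tracks, zones):
    """Tally how many tracks start near one zone and end near another."""
    flows = {}
    for track in tracks:
        src = nearest_zone(track[0], zones)   # where the object entered
        dst = nearest_zone(track[-1], zones)  # where the object exited
        flows[(src, dst)] = flows.get((src, dst), 0) + 1
    return flows

# Toy example: two zones (left and right edges of the frame), two tracks.
zones = [(0, 50), (100, 50)]
tracks = [[(5, 48), (40, 50), (95, 52)],   # moves left to right
          [(98, 51), (60, 49), (3, 50)]]   # moves right to left
print(count_flows(tracks, zones))  # {(0, 1): 1, (1, 0): 1}
```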

While this project was initially created to monitor traffic flow and eliminate the need for manual traffic counting, the versatility of the underlying machine learning models let us generalize the detection scheme to any kind of object moving through the camera's view. This means that this simple piece of software can be used to monitor the movement of people, cars, animals, basically anything that might stand out as "of interest" in an environment.
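In practice, that generality comes down to filtering detections by class label. The sketch below is an assumption about the detector's output shape, not the real interface: suppose each detection is a (label, confidence, bounding box) tuple, so retargeting the app from cars to people is just a change of label set.

```python
# Hypothetical sketch: a detector emits (label, confidence, bbox) tuples;
# swapping what the app tracks is just a matter of filtering on the label.
def filter_detections(detections, classes_of_interest, min_conf=0.5):
    """Keep only confident detections whose class we care about."""
    return [d for d in detections
            if d[0] in classes_of_interest and d[1] >= min_conf]

detections = [("car", 0.92, (10, 20, 50, 40)),
              ("person", 0.81, (60, 30, 15, 40)),
              ("dog", 0.35, (80, 70, 20, 15))]   # below the threshold

# The same pipeline counts cars, people, or animals; only this set changes:
print(filter_detections(detections, {"person"}))
# [('person', 0.81, (60, 30, 15, 40))]
```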


While flow and movement analysis is hugely important in countless areas of society, one of the most immediate applications of this software is event hosting. Hosts may want to know which areas of the venue get the most attention, how the crowd develops, or simply how best to guide people as they move about when planning an event, and this app can be a great asset in doing just that.

It could also be a great asset in monitoring the movement of herds or other groups of animals for ecological reasons. While it may be very difficult to manually comb through countless hours of footage to make sense of the large-scale movements of animals, our system provides a simple, accessible way to do much of this analysis automatically.

While we could go on, suffice it to say that we have created a generalizable way to monitor and analyze the movement patterns of almost any macroscopic system using just a camera.

How we built it

The user interface was written using PyQt5. The input video is split into frames, and each frame is fed into the YOLO-9000 image classifier; the large number of classes this model recognizes is what ultimately allows our project to work as a generalizable flow-monitoring system. The classifier returns the positions of objects of interest in each frame. These detections are then linked across frames and analyzed using an unsupervised machine learning algorithm (k-means clustering) to determine the probable locations of entrance/exit points, after which the flow rates in and out of each entrance and exit are computed. The user can also manually adjust the entrance/exit locations and see what the probable flow rates and patterns would be for their setup.

We used an AWS 2.2xlarge instance to run the majority of our video analysis for two main reasons: one, the GPU support enabled us to process large videos very quickly for testing purposes, and two, it was a community instance that already had the majority of the software required for our project installed. The image overlays are drawn using OpenCV.
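The entrance/exit discovery step can be illustrated with a minimal k-means (Lloyd's algorithm) run on the first and last positions of every track: clusters of endpoints become candidate entry/exit points. This is a toy sketch under assumed data, not our production code; a real run would use many tracks and a heuristic for choosing k.

```python
# Minimal sketch: cluster track endpoints with Lloyd's k-means so that
# dense clusters of start/end positions become candidate entry/exit points.
import math

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm with naive initialization (first k points)."""
    centers = points[:k]
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        # Move each center to the mean of its group (keep it if group is empty).
        centers = [
            tuple(sum(axis) / len(g) for axis in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

# Toy tracks crossing the frame; endpoints pile up near the left/right edges.
tracks = [[(2, 50), (50, 50), (98, 49)],
          [(1, 52), (40, 51), (97, 51)],
          [(99, 48), (55, 50), (3, 49)]]
endpoints = [t[0] for t in tracks] + [t[-1] for t in tracks]
print(kmeans(endpoints, k=2))  # two centers, near the left and right edges
```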

Built With

  • industrial-iot
  • k-means-clustering
  • machine-vision
  • opencv
  • pyqt
  • python
  • unsupervised-learning
  • yolo9000