I wanted to work on a project that was challenging for me, using tools I had never touched before. Image processing with OpenCV seemed like the perfect task, and doing it in Python, a language I had never used, seemed like the perfect fit. After talking with the BookHolders and MetaMind teams, I figured I could use BookHolders' live stream of the downtown intersection and MetaMind's AI API to analyze the traffic situation!
How it works
TrafficTracker currently takes in feeds from MDDOT's website. The feed is filtered and analyzed, watching for moving cars and tallying each one that crosses a certain point; this is how I obtain my quantitative data. However, numbers only tell part of the story, and to many people "cars/minute" doesn't translate to anything useful, so I use MetaMind's AI API to investigate the current traffic and qualify it on a 3-point rating system (Light, Moderate, Heavy). The rating is returned by a custom classifier trained on many example images of what different traffic situations look like.
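The tallying step above can be sketched in a few lines of Python. This is my own illustration, not TrafficTracker's actual code: each detected blob is reduced to a centroid, and a car is counted once when its centroid crosses a hypothetical counting line between consecutive frames.

```python
COUNT_LINE_Y = 240  # hypothetical y-coordinate of the counting line

def update_count(prev_centroids, curr_centroids, count):
    """Tally blobs whose centroid crossed the line since the last frame.

    prev_centroids / curr_centroids: dicts mapping a blob id to (x, y).
    """
    for blob_id, (x, y) in curr_centroids.items():
        if blob_id in prev_centroids:
            _, prev_y = prev_centroids[blob_id]
            # Count only a downward crossing so a car isn't tallied twice.
            if prev_y < COUNT_LINE_Y <= y:
                count += 1
    return count
```

In the real pipeline the centroids would come from OpenCV contours (e.g. `cv2.findContours` on a background-subtracted frame); keeping blob ids stable across frames is the tracking problem this sketch glosses over.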
Finally, the stream is sent via a simple Python MJPG server to the website, http://traffictracker.yeomans.io
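For context, an MJPG stream is just a `multipart/x-mixed-replace` HTTP response where each part is one JPEG frame. Here is a hedged sketch of how one frame is framed on the wire; the boundary name and helper are my own illustration, not the project's server code.

```python
BOUNDARY = b"frame"  # must match the boundary declared in the response's
                     # Content-Type: multipart/x-mixed-replace; boundary=frame

def mjpeg_chunk(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG-encoded frame as a multipart chunk for the stream."""
    return (
        b"--" + BOUNDARY + b"\r\n"
        b"Content-Type: image/jpeg\r\n"
        b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
        + jpeg_bytes + b"\r\n"
    )
```

The server's loop would encode each processed frame (e.g. with `cv2.imencode('.jpg', frame)`) and write one such chunk per frame to every connected client; browsers render the latest part in place, which is what makes the feed look live.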
Challenges I ran into
I ran into many challenges while making this project. The first stream I looked into, provided by BookHolders, was a WebSocket stream. It was pretty difficult getting the socket stream to play nicely with OpenCV; my final solution was to use a Node.js script to read the socket and pipe the raw video data into a FIFO on my local system, then have OpenCV read from the pipe.
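The Python side of that handoff can be sketched as follows. The frame size and byte layout are assumptions for illustration: the Node.js script writes raw BGR pixel bytes into the FIFO, and Python reads exactly one frame's worth of bytes at a time and reshapes it into the array OpenCV expects.

```python
import numpy as np

WIDTH, HEIGHT = 640, 480           # assumed frame size; 3 bytes per BGR pixel
FRAME_BYTES = WIDTH * HEIGHT * 3

def read_frames(stream):
    """Yield (HEIGHT, WIDTH, 3) uint8 arrays from a raw-byte stream.

    `stream` can be the FIFO opened with open('/tmp/video_fifo', 'rb')
    (a hypothetical path); any file-like object with the same byte
    layout works identically.
    """
    while True:
        buf = stream.read(FRAME_BYTES)
        if len(buf) < FRAME_BYTES:  # writer closed the pipe
            return
        yield np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3)
```

Each yielded array can be passed straight to OpenCV calls, since OpenCV treats images as NumPy arrays in Python.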
Unfortunately, my second challenge was the low volume of traffic on the BookHolders feed. The camera was placed close to the road, at an intersection that cars did not traverse often (or at least not often at 3:30 AM, when I made the decision to switch).
Accomplishments that I'm proud of
I'm very proud that I learned both Python and OpenCV to accomplish this task, having used neither before. Seeing those green boxes drawn on the feed, along with the counts going up, is awesome!
Training the MetaMind AI API produced a great feeling: after an hour of work, you upload your first image to be scanned, and it correctly picks the right class!
What I learned
I learned Python! A very powerful and versatile language that I had not used before. I also learned how to deploy an Amazon Web Services instance to run my site on. I had always used physical machines before, so a virtual instance had a bit of a learning curve with all the different settings.
What's next for TrafficTracker
Next up is improving my blob-coalescing algorithm, which is currently a double for loop (O(n^2)) and eats a lot of CPU power. I would also like to investigate my Python MJPG server further, as after a certain amount of time it cuts the stream.
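The coalescing step I want to improve looks roughly like this (a pure-Python sketch under my own naming, not the actual TrafficTracker code): scan all pairs of bounding boxes and merge any two that overlap, restarting until nothing merges. The pairwise scan is where the O(n^2) cost comes from.

```python
def overlaps(a, b):
    """True if axis-aligned boxes (x1, y1, x2, y2) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge(a, b):
    """Smallest box containing both a and b."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def coalesce(boxes):
    """Merge overlapping boxes; the nested pairwise scan is the hot spot."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if overlaps(boxes[i], boxes[j]):
                    boxes[i] = merge(boxes[i], boxes[j])
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```

One likely improvement is sorting boxes by their left edge and sweeping, or bucketing them into a coarse spatial grid, so each box is only compared against nearby candidates instead of every other box.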