Inspiration

We noticed that one of the tracks involved creating a better environment for cities through the use of technology, also known as making our cities 'smarter.' In places like Boston and Cambridge, many intersections are unsafe for pedestrians and drivers alike. According to the Federal Highway Administration, roughly 50% of all crashes occur at intersections, and the danger is compounded by careless drivers, missing stop signs, confusing layouts, and more.

What it does

This project uses a Raspberry Pi to predict potentially dangerous driving situations. If we deduce that a collision may occur, our prototype emits a beeping sound loud enough to gain the attention of those around the scene. Ideally, our prototype would be mounted on traffic poles, similar to most traffic cameras.

How we built it

We used OpenCV, a popular computer vision library, to process camera footage in Python. A demo of our prototype is shown in the GitHub repository, with a beeping sound playing when the program detects a potential collision.

Our demonstration is built on a Raspberry Pi with a Logitech camera. Using artificial intelligence, we capture the current positions of cars and calculate their direction and velocity. From this information, we predict potential close calls and accidents; when one is detected, we play a beeping sound, simulating an alarm, to notify drivers and surrounding pedestrians.
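The prediction step above can be sketched as extrapolating each tracked car's position forward using its estimated velocity and flagging any pair that would pass too close. This is a minimal illustration; the function name, time horizon, and pixel threshold are our own illustrative choices, not the exact values from the prototype.

```python
import math

def predict_close_call(pos_a, vel_a, pos_b, vel_b,
                       horizon=2.0, step=0.1, threshold=50.0):
    """Project two tracked vehicles forward in time and flag a
    potential close call if they come within `threshold` pixels.
    Positions are (x, y) centroids; velocities are (vx, vy) in
    pixels per second. All parameter values are illustrative."""
    t = 0.0
    while t <= horizon:
        ax, ay = pos_a[0] + vel_a[0] * t, pos_a[1] + vel_a[1] * t
        bx, by = pos_b[0] + vel_b[0] * t, pos_b[1] + vel_b[1] * t
        if math.hypot(ax - bx, ay - by) < threshold:
            return True  # predicted to pass dangerously close
        t += step
    return False

# Two cars heading toward each other along the same line:
print(predict_close_call((0, 0), (100, 0), (200, 0), (-100, 0)))  # True
```

In the real prototype the beep would be triggered whenever this check returns True for any pair of tracked vehicles.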

Challenges we ran into

One challenge we ran into was detecting the car positions based on the frames in a reliable fashion.

A second challenge was calculating the speed and direction of vehicles based on the present frame & the previous frames.
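Given a car's centroid in the previous and current frames, speed and heading follow from simple displacement over time. This helper is a sketch under the assumption of uncalibrated pixel units; the function name and frame interval are our own.

```python
import math

def estimate_velocity(prev_centroid, curr_centroid, dt):
    """Speed and direction from two consecutive centroids.
    dt is the time between frames (e.g. 1/30 s for 30 fps video);
    results are in pixels per second unless the camera is calibrated."""
    dx = curr_centroid[0] - prev_centroid[0]
    dy = curr_centroid[1] - prev_centroid[1]
    speed = math.hypot(dx, dy) / dt               # pixels per second
    direction = math.degrees(math.atan2(dy, dx))  # angle in image coords
    return speed, direction

speed, direction = estimate_velocity((100, 100), (104, 103), 1 / 30)
print(round(speed), round(direction))  # 150 37
```

Averaging over several frame pairs, rather than just the most recent one, helps smooth out jitter from the detector.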

A third challenge was determining whether two line segments cross, given their respective start and end coordinates. Solving this proved vital to alerting those in the vicinity quickly and correctly.
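The standard way to test this is with orientation cross products: the segments cross when each segment's endpoints lie on opposite sides of the other segment. A minimal sketch (collinear-overlap edge cases are deliberately ignored here):

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o): sign gives b's side of line o-a
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    # Opposite signs on both tests means the segments straddle each other
    return d1 * d2 < 0 and d3 * d4 < 0

print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # True
print(segments_intersect((0, 0), (1, 1), (2, 2), (3, 3)))  # False
```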

Accomplishments that we're proud of

We are proud that we were able to make this project work across multiple scenarios. Even pointing the camera at a screen playing a real collision video from YouTube resulted in the prototype alerting us of a potential crash before the accident occurred. We're also proud that we were able to abstract away the hardware and make the layout of the final prototype aesthetically pleasing.

What we learned

We learned about the potential of smart intersections and the safety benefits they can provide to an ever-advancing society. We believe an implementation like ours could help reduce the roughly 50% of collisions that occur at intersections by making those in the area more aware of impending dangers. We also learned a lot about working with OpenCV and computer vision. This was definitely a unique experience, and we even walked around the surrounding Harvard campus trying to get good footage to test our model on.

What's next for Traffic Eye

We think we could build a better prediction model, as well as a weather-resilient one that accounts for varying conditions throughout the year. A prototype like this could be scaled up and placed on actual roads with enough R&D. It could help our cities advance alongside the rising capabilities of artificial intelligence and computer vision!
