Inspiration

Passion for automated vehicles and ZF's problem statement.

What it does

The system watches for road signs and pedestrians and adjusts the vehicle's behavior based on what it sees.

For example, if the vehicle is going 45 mph and sees a 25 mph speed limit sign, it will slow down to 25 mph. If it sees a STOP sign, it will gradually come to a stop at the sign.

If it sees a pedestrian about to cross in front of the vehicle, it will automatically apply the emergency brakes.
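The behavior described above can be sketched as a small decision function that maps a detection to a target speed. This is our own illustrative sketch, not the project's actual code; the `Detection` class and `decide` function are hypothetical names.

```python
# Hypothetical sketch of the decision logic described above: map what the
# cameras report to a target speed in mph. Names are ours, not the project's.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    kind: str                    # "speed_limit", "stop_sign", or "pedestrian"
    value: Optional[int] = None  # posted limit in mph, for speed_limit signs

def decide(current_speed: int, detection: Detection) -> int:
    """Return the new target speed in mph for a single detection."""
    if detection.kind == "pedestrian":
        return 0  # emergency brake: stop immediately
    if detection.kind == "stop_sign":
        return 0  # gradually come to a stop at the sign
    if detection.kind == "speed_limit" and detection.value is not None:
        return min(current_speed, detection.value)  # never exceed the posted limit
    return current_speed  # nothing actionable: keep the current speed

# Example from above: doing 45 mph when a 25 mph sign is detected
print(decide(45, Detection("speed_limit", 25)))  # -> 25
```

In a real vehicle the stop-sign and emergency-brake cases would differ in deceleration profile; here both simply target zero speed to keep the sketch short.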

In case of an accident, it takes three snapshots from each camera and saves them in its database along with the time, date, and location where the accident occurred.
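A minimal version of that accident record could look like the following, using Python's built-in sqlite3. The schema and the `log_accident` helper are our assumptions; the project's actual database layout may differ.

```python
# Sketch of an accident-record store using only the standard library.
# Schema and helper names are illustrative assumptions.
import sqlite3
from datetime import datetime, timezone

def open_db(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS accidents (
               id        INTEGER PRIMARY KEY,
               occurred  TEXT NOT NULL,   -- ISO date and time
               location  TEXT NOT NULL,   -- e.g. GPS "lat,lon"
               snapshots TEXT NOT NULL    -- comma-separated image paths
           )"""
    )
    return conn

def log_accident(conn: sqlite3.Connection, location: str, snapshot_paths: list) -> int:
    """Store the time, location, and saved snapshot paths; return the row id."""
    occurred = datetime.now(timezone.utc).isoformat()
    cur = conn.execute(
        "INSERT INTO accidents (occurred, location, snapshots) VALUES (?, ?, ?)",
        (occurred, location, ",".join(snapshot_paths)),
    )
    conn.commit()
    return cur.lastrowid

conn = open_db()
# Three snapshots per camera; two cameras gives six files in this example
paths = [f"cam{c}_shot{i}.jpg" for c in (1, 2) for i in (1, 2, 3)]
row = log_accident(conn, "37.7749,-122.4194", paths)
```

The actual snapshot capture would happen via OpenCV (e.g. reading frames from each camera) before the paths are logged.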

All this is done to ensure safety while bringing autonomous driving to the daily driver, making it accessible to everyone and not just Tesla drivers, who currently have an advantage when it comes to autonomous driving.

How we built it

We built this by incorporating pytesseract to extract the crucial data the cameras are supposed to look out for (pedestrians, speed limits, stop signs, vehicles, etc.), and we programmed how the vehicle is supposed to behave in specific situations using OpenCV and Python.
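The OCR step can be sketched like this: pytesseract's `image_to_string` returns raw text from a camera frame, and a small parser pulls out the posted limit. `parse_speed_limit` is a hypothetical helper, not the project's code, and the regex assumes US-style "SPEED LIMIT 25" signage.

```python
# Hypothetical parser for the text pytesseract would return from a
# speed-limit sign. The regex and function name are our assumptions.
import re
from typing import Optional

def parse_speed_limit(ocr_text: str) -> Optional[int]:
    """Extract the number from OCR text that looks like a speed-limit sign."""
    match = re.search(r"SPEED\s*LIMIT\s*(\d{1,3})", ocr_text.upper())
    return int(match.group(1)) if match else None

# In the full pipeline the text would come from something like:
#   frame = camera.read()                        # frame captured via OpenCV
#   text = pytesseract.image_to_string(frame)    # OCR on the frame
print(parse_speed_limit("Speed\nLimit\n25"))  # -> 25
```

Keeping the parsing separate from the OCR call makes the sign logic easy to test without a camera or the pytesseract dependency.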

Challenges we ran into

The first challenge we ran into was getting OpenCV working so we could use our laptop cameras as substitutes for the real cameras that would be incorporated into vehicles.

Other challenges included using new libraries in a new programming language, as well as technical hurdles such as getting the cameras to detect signs reliably and taking snapshots in case of an accident.

Accomplishments that we're proud of

Getting the code to understand the data it needed to react in certain ways, and getting the cameras to read signs properly, were two of the major setbacks we had while implementing our code. Getting past these hurdles is something we're proud of.

What we learned

As a team we learned how to use Python, OpenCV, pytesseract, and NumPy. We faced various challenges along the way: none of us knew how to detect images in real time, use OpenCV, or read text from an image in real time. Through trial and error, we learned how to read images from a live webcam, read text from camera input, and use machine learning to add image recognition to our application. We also learned how to collaborate in a team where multiple people make changes simultaneously while keeping our build running seamlessly.

What's next for Autonomous Perception

Incorporating online data from resources such as Google Maps to know what sign is coming up on a road or highway.

Built With

python, opencv, pytesseract, numpy
