Inspiration

Approximately 107.4 million Americans choose walking as a regular mode of travel for both social and work purposes. In 2015, about 70,000 pedestrians were injured in motor vehicle accidents and over 5,300 were killed. Catastrophic accidents like these are usually caused by driver negligence or inattentiveness. Using computer vision and machine learning, we built a tool that helps the driver stay attentive and aware of his/her surroundings and any nearby pedestrians. Our goal is to create a product that provides social good and potentially saves lives.

What it does

We created SurroundWatch, which detects nearby pedestrians and notifies the driver. The driver can attach his/her phone to the dashboard and tap Start in the simple web application; SurroundWatch then processes the live video feed and sends audio or visual alerts whenever he/she is in danger of hitting a pedestrian. Since we designed it as an API, it can be incorporated into various ridesharing and navigation applications such as Uber and Google Maps.
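As a rough illustration of the API shape, here is a minimal Python sketch of how a third-party app might call it. The URL, the /detect route, and the JSON fields are hypothetical stand-ins for illustration, not our actual deployed endpoint.

```python
import base64

import requests

# Hypothetical endpoint; a real integration would use the deployed API URL.
API_URL = "https://surroundwatch.example.com/detect"

# Encode one camera frame as base64 and POST it as JSON.
with open("frame.jpg", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode("ascii")}

resp = requests.post(API_URL, json=payload)
if resp.json().get("alert"):
    print("Pedestrian nearby: warn the driver")
```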

How we built it

Object detection and image processing were done using OpenCV and YOLO-9000. The web app, which runs on both Android and iOS, was built using React, JavaScript, and Expo.io. The backend uses Flask and is hosted on Heroku, with Node.js as the realtime environment.
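As a sketch of the detection step, the following shows how a Darknet-style YOLO model can be run through OpenCV's DNN module. The file names are placeholders, and we assume a standard YOLO output layout where class 0 is "person"; YOLO-9000's hierarchical 9,000-class output would need its own label mapping, so treat this as an outline rather than our exact pipeline.

```python
import cv2
import numpy as np

# Placeholder file names for the Darknet config and weights.
net = cv2.dnn.readNetFromDarknet("yolo9000.cfg", "yolo9000.weights")

def detect_pedestrians(frame, conf_threshold=0.5):
    """Run one forward pass and return bounding boxes for detected people."""
    h, w = frame.shape[:2]
    # YOLO expects a square, normalized input blob; 416x416 is a common size.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes = []
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = scores[class_id]
            # Assumes class 0 is "person" (COCO-style label ordering).
            if class_id == 0 and confidence > conf_threshold:
                cx, cy, bw, bh = det[0:4] * np.array([w, h, w, h])
                boxes.append((int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)))
    return boxes
```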

Challenges we ran into

We struggled with getting the backend and frontend to transmit information to one another, along with converting the images to base64 to send as a POST request. We encountered a few hiccups with Node.js, Ubuntu, and React crashes, but we were able to resolve them. Streaming a live video feed was difficult given the limited bandwidth, so we resorted to sending a single frame every 1000 ms.
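For reference, here is a minimal sketch of the server side of that exchange: a Flask route that decodes a base64-encoded frame from a JSON POST and runs detection on it. The route name and JSON fields are illustrative assumptions, not our exact implementation, and detect_pedestrians refers to the detection sketch above.

```python
import base64

import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical route; the client sends one frame every 1000 ms.
@app.route("/detect", methods=["POST"])
def detect():
    data = request.get_json()
    # Decode the base64 string back into image bytes, then into a frame.
    img_bytes = base64.b64decode(data["image"])
    frame = cv2.imdecode(np.frombuffer(img_bytes, np.uint8), cv2.IMREAD_COLOR)
    boxes = detect_pedestrians(frame)  # detection sketch from above
    # Return the boxes so the client can warn the driver if any are present.
    return jsonify({"pedestrians": boxes, "alert": len(boxes) > 0})
```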

Accomplishments that we're proud of

We were able to detect pedestrians in images using YOLO-9000 and OpenCV, send image data from the React app, and communicate between the frontend and the Heroku/Flask backend components of our project. However, we are most excited to have built and shipped meaningful code that is meant to provide social good and potentially save lives.

What we learned

We learned the basics of creating dynamic web apps using React and Expo, along with passing information to a server where processing can take place. Our teamwork and hacking skills definitely improved, making us more adept at building software products.

What's next for SurroundWatch

The next step for SurroundWatch is to offload processing to AWS or Google Cloud Platform, improving our backend so that live video streams can be processed in real time rather than one frame at a time. We'd also like to create a demo site to let users see the power of SurroundWatch.
