Inspiration

We built Dash because we believed technology could create a safer experience for drivers. While we love the service and community that Waze has built, manually entering information on a phone while driving is dangerous. We wanted to deliver the same value while automating the updates, making safety information instantaneous and open to everyone.

What it does

Dash uses the phone's rear camera to record and analyze the road. Dash can currently detect car crashes that occur ahead of the driver using a neural network we trained from scratch, running locally on the iPhone. When Dash detects a crash, it automatically collects your location and notifies all nearby drivers of the accident. Incidents are visualized on a live map shared with every user in the area. Dash aims to connect the drivers on the road, keeping everyone informed and safe. We plan to add more functionality to provide updates on traffic, road closures, bad infrastructure, and more.

How we built it

Dash was built using the Polarr SDK, the Smartcar API, fastai, PyTorch, Apple's CoreML, MapKit, and Firebase.

To detect car crashes, we built and trained a convolutional neural network using fastai, then transferred the trained model to the native iOS app using the Polarr Vision Engine SDK and Apple's new CoreML framework. This lets us run the neural net locally on the iPhone twice every second, easily 5-8 times as often as we could if we were round-tripping frames to an external server.
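For illustration, here is a minimal sketch of the training step using fastai's current high-level API. The folder layout, class names, backbone, and epoch count are stand-ins, not our exact setup:

```python
# Minimal fastai training sketch (illustrative, not our exact pipeline).
from fastai.vision.all import *

# Hypothetical layout: data/crash/*.jpg and data/no_crash/*.jpg
dls = ImageDataLoaders.from_folder(
    "data",
    valid_pct=0.2,         # hold out 20% of frames for validation
    item_tfms=Resize(224), # CNNs expect a fixed input size
)

# pretrained=False because we trained the network from scratch
learn = vision_learner(dls, resnet34, pretrained=False, metrics=accuracy)
learn.fit_one_cycle(10)

# Save the underlying PyTorch model for the CoreML conversion described below
torch.save(learn.model, "crashnet.pt")
```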

To notify people of traffic issues, we used the Smartcar API to get real-time car data and sent the crash location to Firebase, which then pushed it to every user's map. We then used MapKit to display the accidents around you.
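The app performs these writes from Swift, but the data flow is easy to illustrate with Firebase's Python Admin SDK; the "incidents" path and field names below are stand-ins rather than our exact schema:

```python
# Back-end sketch of the crash-reporting flow (illustrative schema).
import time

import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")     # placeholder key file
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://example-dash.firebaseio.com",  # placeholder URL
})

def report_crash(latitude: float, longitude: float) -> None:
    """Push a crash record; clients listening on /incidents see it in real time."""
    db.reference("incidents").push({
        "type": "crash",
        "lat": latitude,
        "lng": longitude,
        "ts": int(time.time()),
    })

report_crash(37.7749, -122.4194)
```

Because Realtime Database clients receive child-added events as they happen, every phone subscribed to the incidents feed can drop a new pin on its MapKit view without polling.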

Challenges we ran into

We ran into various challenges during development. Because the technologies we used were so new, there were few good bridges between them, and we had to work out how to make them fit together ourselves.

Specifically, we needed to convert our PyTorch-trained model to Apple's CoreML .mlmodel format. Since Apple doesn't provide a tool for this conversion, we worked around it by writing a script that converts our PyTorch model to the open-source ONNX format, and then from ONNX to CoreML.
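The two-hop conversion looked roughly like the sketch below, using the onnx-coreml package from that era (newer coremltools releases can convert directly); the file names and input size are illustrative:

```python
# Two-hop conversion sketch: PyTorch -> ONNX -> CoreML (illustrative names).
import torch
from onnx_coreml import convert

model = torch.load("crashnet.pt")
model.eval()  # disable dropout/batch-norm updates before tracing

# Trace the network with a dummy input matching our 224x224 RGB frames
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "crashnet.onnx")

# Convert the ONNX graph into an .mlmodel that Xcode can bundle
mlmodel = convert(model="crashnet.onnx")
mlmodel.save("CrashNet.mlmodel")
```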

Also, going into the hackathon we were all more comfortable in React Native and planned to use it, but once we realized the Polarr SDK would not work efficiently with React Native, our team had to learn Swift to build the final implementation.

What we learned

We learned how to work with several new SDKs, how to use Figma for interaction design, and how to build and train a neural network.

What's next for Dash

We'll be training the model to detect more kinds of incidents on the road, which would let us provide valuable data to both cities and insurers. We will also incorporate the Twilio API to automatically call emergency services when a user gets into an accident; a fast response can be the difference between life and death. Detecting and reporting general traffic data like slowdowns and speed traps will make our users' commutes safer and help get more efficient use out of the road network.
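As a rough sketch of that planned Twilio integration (credentials and phone numbers are placeholders, and dialing real emergency services carries regulatory requirements we would still need to address):

```python
# Sketch of the planned automated emergency call (placeholders throughout).
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def place_emergency_call(to_number: str, lat: float, lng: float) -> None:
    """Place an automated voice call describing the crash location."""
    client.calls.create(
        to=to_number,
        from_="+15551234567",  # a Twilio-provisioned number (placeholder)
        twiml=(
            "<Response><Say>"
            f"A Dash user may have crashed near latitude {lat}, longitude {lng}."
            "</Say></Response>"
        ),
    )
```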

Built With

Swift, fastai, PyTorch, CoreML, MapKit, Firebase, Polarr SDK, Smartcar API, Figma