We joined Miami Hackweek and took on the Lula prompt: use unique data sources to identify safer routes for car insurance pricing.
What it does
We built a risk model that uses dash-cam footage to assess environmental factors that contribute to car accidents, namely road quality, and feeds it into a dynamic insurance plan.
How we built it
Dashcam footage was preprocessed and fed into a 3D convolutional neural network that classifies road quality as good or bad on a per-frame basis. This feature is combined with 3-axis accelerometer and gyroscope data via sensor fusion to produce a risk score for each car trip, and the result is displayed in a dashboard!
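A minimal sketch of the trip-scoring idea: combine the per-frame road-quality predictions with a simple vibration signal from the accelerometer. The weights, threshold, and function name here are illustrative assumptions, not the actual model we built.

```python
def trip_risk_score(bad_road_probs, accel_mags, w_road=0.6, w_vibe=0.4):
    """Return a 0-100 risk score for one trip (illustrative weights).

    bad_road_probs: per-frame P(road is bad) from the classifier, each in [0, 1]
    accel_mags: accelerometer magnitudes (g) with gravity removed
    """
    # Road risk: average predicted "badness" across frames.
    road_risk = sum(bad_road_probs) / len(bad_road_probs)
    # Vibration risk: fraction of samples exceeding a jolt threshold (assumed 0.5 g).
    jolt_threshold = 0.5
    vibe_risk = sum(m > jolt_threshold for m in accel_mags) / len(accel_mags)
    return 100.0 * (w_road * road_risk + w_vibe * vibe_risk)

smooth_trip = trip_risk_score([0.1, 0.05, 0.2], [0.10, 0.20, 0.15])
rough_trip = trip_risk_score([0.8, 0.90, 0.7], [0.60, 0.90, 0.40])
```

A weighted sum keeps the score easy to explain on the dashboard; a real pricing model would calibrate these weights against claims data.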
Challenges we ran into
Video processing requires large computational loads to reach high accuracy. Instead of applying transfer learning from popular pretrained video models, we trained a basic 3D convolutional neural network in a local Python environment. The model isn't great, owing to limited data and training compute, but it proved the concept.
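For context, a basic 3D CNN for binary road-quality classification can be sketched in a few lines of PyTorch. The layer sizes, clip shape, and class names below are assumptions for illustration, not our exact architecture.

```python
import torch
import torch.nn as nn

class RoadQuality3DCNN(nn.Module):
    """Tiny spatiotemporal classifier over short dashcam clips.

    Input shape: (batch, channels, frames, height, width).
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, padding=1),   # spatiotemporal filters
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # global average pooling
        )
        self.classifier = nn.Linear(16, 2)                # good vs. bad road

    def forward(self, clip):
        x = self.features(clip).flatten(1)
        return self.classifier(x)

model = RoadQuality3DCNN()
logits = model(torch.randn(1, 3, 8, 64, 64))  # one 8-frame RGB clip
```

Even a model this small is expensive to train on raw video, which is why compute and data were the bottleneck.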
Accomplishments that we're proud of
Most insurance risk models use features such as time on the road, weather, or driver characteristics to price a car trip. We wanted to learn new technologies and approach the problem from a different angle, leveraging real-time sensors such as cameras and IMUs. We're proud that we tried a different approach.
What we learned
Video classification with CNNs/RNNs (convolutional/recurrent neural networks) takes a lot of resources; for this solution to be worthwhile, the end result has to justify the compute and dev time and outperform simpler approaches. We also learned that real-time sensors carry unique features that insurance businesses sometimes overlook.
What's next for Eyes on the Road
Train a better model that achieves higher classification accuracy across a more general set of routes.