Team 64442 Discord: Ayaan#0434, Buns#2028, Ishaan#5777, Vijaarj#8645, Sajiv#4051
Over the last few months, our team members received our permits and began driving for the first time, experiencing firsthand how dangerous roads and infrastructure can be. With our eyes opened to such an understated yet risky problem, we set out to help solve it using technology.
Car crashes remain the leading cause of death for people under 30, making it critical to understand and attack this problem. We were shocked to learn that during COVID-19, several states reported an increase in fatal car crashes. Most technological solutions to this problem focus on driver education and safety; while that work is important, we directed our efforts toward a less-addressed approach.
Thus, we took a different approach from the common hackathon project. Instead of creating an application meant for general use, we developed one specifically for state and city governments. We envision our software as part of a nationwide government effort to promote smarter road-infrastructure design. Since governments often rely on outside developers to build applications, we believe our website fills a normally unoccupied niche, and we encourage more projects like this in the hackathon community. That said, we also built features that make the website useful to everyday drivers.
To that end, we developed PreDent, which runs road data through a machine learning model to identify high-risk crash sites.
What it does
PreDent is a progressive web application that identifies the accident-prone areas of a city through machine learning. The core of our project is an ML model that takes static features (speed limits, road signs, road curvature, traffic volume), weather (precipitation, temperature), human factors, and many other attributes as input, and outputs a map of city roads with hotspots where collisions are likely. Our demo shows the full process, but because the model is large and computationally expensive, deploying it would require access to high-powered servers beyond our current budget. The model works on any city's dataset, provided that data is collected by the city or supplied to us.
First, government officials upload a CSV file of their collected traffic data, which many already hold in private storage. The file is uploaded to Google Cloud, and we feed it into our model. Once processing finishes, we notify them via email. The model outputs: 1) coordinates of likely crash sites, 2) the specific issues at each site, and 3) a heat-map overview of the city. Using the model-generated coordinates, we also create an interactive map with the Google Maps API. With this information, city designers can make informed improvements: deciding where to repair roads, add signage, adjust speed limits, and more. This information is essential for promoting safer roads and infrastructure.
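The heat-map step described above can be sketched as a simple grid binning of the model's predicted crash coordinates. This is a minimal NumPy illustration, not our production code: the coordinates, city bounds, and `crash_heatmap` helper here are made up for the example, and in our real pipeline the resulting intensities feed the Google Maps API layer.

```python
import numpy as np

def crash_heatmap(coords, bounds, bins=10):
    """Bin predicted crash coordinates (lat, lon) into a 2-D grid.

    coords: sequence of (lat, lon) pairs predicted by the model
    bounds: ((lat_min, lat_max), (lon_min, lon_max)) covering the city
    Returns a bins x bins array of crash counts per grid cell.
    """
    coords = np.asarray(coords)
    (lat_min, lat_max), (lon_min, lon_max) = bounds
    grid, _, _ = np.histogram2d(
        coords[:, 0], coords[:, 1],
        bins=bins,
        range=[[lat_min, lat_max], [lon_min, lon_max]],
    )
    return grid

# Hypothetical crash sites, clustered near one intersection
sites = [(40.76, -111.89), (40.76, -111.89), (40.77, -111.90), (40.20, -111.60)]
grid = crash_heatmap(sites, bounds=((40.0, 41.0), (-112.0, -111.0)), bins=10)
# The cell with the highest count is the strongest hotspot
hotspot_cell = np.unravel_index(np.argmax(grid), grid.shape)
```

Cells with higher counts render as "hotter" regions on the map overlay; the grid resolution (`bins`) trades off detail against noise.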
We also have a page for everyday drivers. Residents of partner cities can view a map of hotspots where crashes are likely. These heatmaps update hourly and shift with the time of year to account for rush hours and temperature/weather. Pedestrians and drivers can also help improve the accuracy of our model by reporting crashes in their neighborhoods, interactively placing pins on the map, which we aggregate with already-collected data using Firebase.
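The pin-aggregation logic can be sketched in plain Python (the Firebase reads and writes are omitted; `aggregate_pins` and the coordinate values are illustrative, not our actual schema). The idea is to round coordinates so that nearby user reports land in the same cell as the existing crash counts:

```python
from collections import Counter

def aggregate_pins(existing_counts, user_pins, precision=3):
    """Merge user-submitted crash pins into per-location counts.

    existing_counts: dict mapping rounded (lat, lon) -> crash count
    user_pins: iterable of raw (lat, lon) pins dropped on the map
    Rounding to 3 decimal places groups reports within roughly 100 m.
    """
    merged = Counter(existing_counts)
    for lat, lon in user_pins:
        merged[(round(lat, precision), round(lon, precision))] += 1
    return dict(merged)

# Hypothetical stored counts plus two nearby user pins and one new site
existing = {(40.761, -111.891): 5}
pins = [(40.7612, -111.8909), (40.7611, -111.8911), (40.750, -111.900)]
merged = aggregate_pins(existing, pins)
```

The merged counts are then written back so the next heatmap refresh reflects resident reports.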
Lastly, we have a Data Visualization page that shows our process of analyzing the data and determining which factors matter. We walk through our exploratory data analysis and visualize key attributes, rendering the images with GeoPandas and Fiona. Instead of just uploading plots and graphs, we rendered our data into true geospatial visualizations and maps.
How we built it
After numerous hours of wireframing, conceptualizing key features, and outlining tasks, we divided the challenge among ourselves: Ishaan developed the UI/UX, Adithya connected the Firebase backend, Ayaan managed, trained, and developed the ML model and created the heatmaps, and Viraaj built our map system and integrated the heatmaps.
After reading documentation, we developed our model, tested it on open-source data from Utah roads (found via Medium), and produced the heatmaps. We also wrote a web scraper to collect data from state databases for our training sets, scraping weather and road-infrastructure databases to enlarge the available data. We pinpointed thousands of crash sites as positive samples and randomly sampled negatives from locations where crashes never occurred. We trained two models, a gradient boosting model and a neural network, and found that the gradient boosting model performed better. We documented all of our progress in our Jupyter Notebook, which we recommend reading.
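The training setup above can be illustrated with scikit-learn. This is a hedged sketch, not our notebook: the feature columns and distributions are synthetic stand-ins for the scraped road and weather attributes, with positives representing crash sites and negatives sampled from crash-free locations.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600  # samples per class

# Synthetic stand-ins for scraped features:
# [speed_limit, curvature, traffic_volume, precipitation]
pos = np.column_stack([  # crash sites: faster, curvier, busier, wetter roads
    rng.normal(60, 8, n), rng.normal(0.6, 0.15, n),
    rng.normal(9000, 1500, n), rng.normal(0.5, 0.2, n),
])
neg = np.column_stack([  # negatives sampled where no crash ever occurred
    rng.normal(35, 8, n), rng.normal(0.2, 0.15, n),
    rng.normal(4000, 1500, n), rng.normal(0.1, 0.2, n),
])
X = np.vstack([pos, neg])
y = np.array([1] * n + [0] * n)  # 1 = crash site, 0 = safe location

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

In practice, predicted crash probabilities over a grid of candidate locations become the intensities for the heatmaps.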
Challenges we ran into
The primary challenge was training and deploying our model. Data was hard to find; we were only able to locate one publicly available dataset, from Utah. In addition, since we had never created a geospatial ML model, building the model and generating hotspot maps was our central difficulty, and we read extensive documentation to learn how frameworks like ArcGIS work. While we could not deploy the model for lack of an affordable, high-compute web server, we made it work regardless of the dataset: as long as cities give us data, we can create heatmaps for them.
Accomplishments we are proud of
We are incredibly proud that our team found a distinctive yet viable solution to revolutionize road development and driving. We are proud of developing one of our most advanced models to date, made possible largely through UiPath training, and of building a working model for a solution that, to our knowledge, has not previously been implemented in this setting.
What we learned
Our team found it incredibly fulfilling to use our Machine Learning knowledge in a way that could effectively assist governments in assessing roads and finding ways to make them safer, especially when there aren’t quick and effective ways to do so currently. Seeing how we could use our software engineering skills to impact people’s daily lives and safety was the highlight of our weekend.
From a software perspective, developing geospatial models was our main focus this weekend. We learned how to build models effectively and generate descriptive heatmaps, and how to use ML frameworks such as AI Fabric from UiPath. We also grew our web development skills and polished our database skills.
What is next for PreDent
We believe that our application would be best implemented at the local and state government level. These governments are in charge of designing efficient and safe roads, and with the information they acquire through our models, they can take steps to improve roads and reduce the risk of crashes.
In terms of the application itself, we would love to deploy the model on the web for automatic integration. Our current budget prevents us from buying a web server capable of running the model, so we look forward to acquiring one that can handle high-level computation, which would fully automate our service.
PreDent has a few different meanings, which we’ve listed out below:
“Pre” means prior to an accident
“Dent” refers to denting a car during an accident
“Dent” is also short for a car accident, which we try to avoid
“PreDent” is very similar to “prevent”, which is the primary goal of our system