It all started with a story. Dylan, our backend developer, told us about the problems his dad faces on a weekly basis as a volunteer firefighter: how do you best respond to a natural disaster? When a flood or wildfire occurs, first responders have to survey the entire area to determine who is affected and which locations need the most support. This process is time-consuming and costly, leaving people stranded at a crucial time. That's where SNDRR comes in.
What it does
Smart Natural Disaster Recognition and Response (SNDRR) uses a custom AI vision system on current high-resolution satellite imagery to automatically determine which areas have been affected. This insight is combined with population data to identify the hotspots where first responders should act first.
How we built it
We started with a dataset of 6,000+ satellite images taken before and after natural disasters. We used this data in Google Cloud Platform's Vision service to train a custom AI model that can recognize whether the area in a satellite image has been affected by a disaster. With this model trained, we set up a location search that uses US census data to determine the population density for a given area. These two ingredients allow us to generate insights for the user in our Response Dashboard: the user enters a location and an image and receives information on how critical the area is for disaster response teams.
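The dashboard's combination step can be sketched as a simple scoring function: take the vision model's damage confidence and the census-derived population density, and blend them into a single priority number. This is a minimal illustration, not our production code; the function name, equal weighting, and density cap are all our own assumptions here.

```python
def priority_score(damage_confidence: float, population_density: float,
                   max_density: float = 10000.0) -> float:
    """Blend the vision model's damage confidence (0..1) with local
    population density (people per sq. mile) into a 0..100 priority score.

    The equal 50/50 weighting and the 10,000/sq-mi density cap are
    illustrative choices, not values from the real SNDRR dashboard.
    """
    density_factor = min(population_density / max_density, 1.0)
    return round(100 * (0.5 * damage_confidence + 0.5 * density_factor), 1)

# A densely populated area that the model flags as damaged scores high;
# a sparsely populated, mostly undamaged one scores low.
print(priority_score(0.92, 8500))  # high priority
print(priority_score(0.10, 300))   # low priority
```

With something like this in place, the dashboard only needs to sort candidate locations by score to surface the hotspots first.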
Challenges we ran into
FINDING DATA! It took us hours to find the data we needed for this project, and it easily took up the largest portion of our time at HackPSU.
Using GCP Vision's AutoML API. It was unclear to us how to implement this API (it is in beta, after all), so we ended up having to forgo that feature until a later date.
Connecting all the bits. Once we had the model and the census API in place, we were tasked with putting it all together. Already pressed for time, we did what we could to make the best product possible.
Accomplishments that we're proud of
Starting with no knowledge of GCP, we managed to create a working and reasonably accurate custom model that determines whether an image shows an area affected by a natural disaster.
We created a dashboard for first responders to gain insight into highly affected areas.
Starting without even knowing what an API is, two of our group members managed to test and access the US census API to retrieve data for our location functionality!
What we learned
Our group brought wildly different skill sets. Some of us spent this hackathon learning how APIs work and how to leverage them, while others pushed their existing skills to new levels by building a fully integrated system. Additionally, none of us had experience with GCP Vision before, so we went in knowing nothing and came out with a working model!
What's next for Smart Natural Disaster Recognition and Response
Our next step for this product is to create an AI-Generated Area of Effect. While our dashboard lets people enter a single image, that is mainly for demo purposes. Our planned integration is to have this system run on the backend: it would receive satellite images as they come in, stitch them together, and show first responders the areas that need the most attention as fast as possible.
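One way the area-of-effect step could work, once per-tile classifications exist, is to merge adjacent damaged tiles into contiguous regions so responders see one affected zone rather than scattered hits. Below is a minimal sketch under our own assumptions (a boolean grid of tile results, 4-connectivity, breadth-first flood fill); none of these names come from the current SNDRR codebase.

```python
from collections import deque

def affected_regions(grid):
    """Group adjacent damaged tiles (True cells) into contiguous regions
    via breadth-first flood fill with 4-connectivity.

    Returns a list of regions, each a list of (row, col) tile coordinates.
    """
    rows, cols = len(grid), len(grid[0])
    seen = set()
    regions = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                region, queue = [], deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    # Check the four orthogonal neighbors.
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                regions.append(region)
    return regions

# Two separate damaged zones in a 3x4 grid of classified tiles:
tiles = [[True, True, False, False],
         [False, False, False, True],
         [False, False, True, True]]
print(len(affected_regions(tiles)))  # 2 distinct areas of effect
```

Each resulting region could then be scored against population data and drawn on the dashboard map as a single highlighted zone.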