Our project is intended to help first responders during flooding events by identifying flooded areas in aerial imagery. Hurricane Florence recently devastated parts of the East Coast, so we decided to create a program that could assist in future disasters.

We used Keras and TensorFlow to implement a neural network consisting of two max-pooled convolution layers followed by two fully connected layers. To train this network, we used Google's Static Maps API to obtain patches of satellite imagery along with corresponding ground-truth data about where bodies of water are. We trained the network to classify whether a specific pixel is water based on a surrounding patch of pixels.
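The architecture described above can be sketched in Keras roughly as follows. The patch size, filter counts, and layer widths here are assumptions for illustration; the writeup does not state the actual hyperparameters.

```python
import numpy as np
import tensorflow as tf

PATCH = 33  # hypothetical patch size; the original size isn't stated

# Two max-pooled convolution layers followed by two fully connected layers,
# ending in a single sigmoid unit: P(center pixel is water).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(PATCH, PATCH, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Each training example is a patch; the label is whether its center pixel is water.
probs = model.predict(np.random.rand(4, PATCH, PATCH, 3).astype("float32"), verbose=0)
```

Training would then call `model.fit` on patches cut from the Static Maps imagery, paired with the water/land ground truth.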

This network was then applied to aerial imagery of hurricane-impacted areas released by the National Oceanic and Atmospheric Administration. We see here that it is able to correctly identify patches that are flooded with water.
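Applying a per-pixel classifier to a full image amounts to sliding a patch window over it and classifying each center pixel. A minimal sketch, with a toy stand-in for the trained network's prediction (the real system would call the model instead):

```python
import numpy as np

def water_mask(image, patch=5, is_water=None):
    """Slide a patch window over the image and classify each center pixel.

    `is_water` stands in for the trained network's per-patch prediction;
    the toy default calls a patch "water" if its blue channel dominates.
    """
    if is_water is None:
        is_water = lambda p: p[..., 2].mean() > p[..., 0].mean()
    r = patch // 2
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for y in range(r, h - r):          # border pixels lack a full patch
        for x in range(r, w - r):
            mask[y, x] = is_water(image[y - r:y + r + 1, x - r:x + r + 1])
    return mask

# toy image: right half is blue ("water"), left half is black ("land")
img = np.zeros((20, 20, 3), dtype="float32")
img[:, 10:, 2] = 1.0
mask = water_mask(img, patch=5)
```

In practice the per-patch loop would be batched through the network for speed, but the sliding-window structure is the same.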

Using this output, we analyzed the image and computed a set of coordinates within the flooded area. With this set of coordinates and the Google Maps and Roads APIs, we determined which roads run through the flooded area and which locations along each road are affected.
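That pipeline step can be sketched as: convert flooded-pixel indices to (lat, lng) pairs, then hand them to the Roads API's `snapToRoads` endpoint, which returns the road segments those points lie on. The linear interpolation over the image bounds below is a simplification (a real georeference would account for the map projection), and the bounds and key are placeholders.

```python
import numpy as np

def mask_to_latlng(mask, bounds):
    """Map flooded-pixel indices to (lat, lng) via linear interpolation
    between the image's corner coordinates (projection ignored for brevity)."""
    (lat_n, lng_w), (lat_s, lng_e) = bounds  # top-left, bottom-right corners
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    lats = lat_n + (lat_s - lat_n) * ys / (h - 1)
    lngs = lng_w + (lng_e - lng_w) * xs / (w - 1)
    return list(zip(lats, lngs))

def snap_to_roads_url(points, api_key):
    """Build a request URL for the Roads API snapToRoads endpoint,
    which accepts up to 100 path points per request."""
    path = "|".join(f"{lat:.5f},{lng:.5f}" for lat, lng in points[:100])
    return (f"https://roads.googleapis.com/v1/snapToRoads"
            f"?path={path}&interpolate=true&key={api_key}")

# toy 3x3 mask with two flooded pixels, over a made-up bounding box
mask = np.zeros((3, 3), dtype=bool)
mask[0, 0] = mask[2, 2] = True
pts = mask_to_latlng(mask, bounds=((30.0, -90.0), (29.0, -89.0)))
url = snap_to_roads_url(pts, api_key="YOUR_KEY")
```

The JSON response contains `snappedPoints` with per-point `placeId` values that identify the affected road segments.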

We've attached five photos showing our program's output on a patch of land near New Orleans and how well our model detects the areas flooded by 2005's Hurricane Katrina.
