Flood disasters require substantial humanitarian action, and improving the efficiency of that action can reduce the loss of human life and property.
What it does
It increases the efficiency of first responders in flood situations by reducing the amount of imagery that requires manual scrutiny. It does this by processing images with a machine learning model that draws boundaries around flooded areas (image segmentation). The segmentation output can then be used to decide whether an image contains any flooded area at all (object detection). A user of our app can upload an image to our website and see the results within a few seconds.
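The detection step on top of segmentation can be sketched as follows. This is a minimal illustration, not our production code: the function names, the 5% threshold, and the toy mask are all assumptions chosen for clarity.

```python
def flood_fraction(mask):
    """Fraction of pixels labeled flooded in a binary mask (list of rows of 0/1)."""
    total = sum(len(row) for row in mask)
    flooded = sum(sum(row) for row in mask)
    return flooded / total

def contains_flood(mask, threshold=0.05):
    """Derive a flood/no-flood decision from the segmentation output.

    The 5% threshold is a hypothetical choice for this sketch.
    """
    return flood_fraction(mask) >= threshold

# Toy 3x4 segmentation mask: 1 = flooded pixel, 0 = dry pixel.
mask = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
]
print(flood_fraction(mask))  # 5/12 ≈ 0.4167
print(contains_flood(mask))  # True
```

In our app the mask comes from the segmentation model rather than being hand-written, but the reduction from a per-pixel mask to a single yes/no flag works the same way.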
How we built it
We used Azure Machine Learning Studio and Python to build, train, and test the model. We then built the website on Hugging Face and deployed the model there.
Challenges we ran into
We had to figure out how to install fastai inside the Anaconda-centric Python virtual environments provided by Azure Machine Learning Studio.
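The workaround we converged on can be summarized roughly as below. This is a setup sketch, not a verified recipe: the environment name `azureml_py38` is an assumption (Azure ML Studio kernel environments are versioned and may be named differently on your compute instance).

```shell
# Activate the conda environment backing the notebook kernel
# (name is an assumption; list environments with `conda env list`).
conda activate azureml_py38

# fastai's own docs recommend pip for installing into an existing conda env.
pip install fastai

# Sanity check that the kernel environment now sees the library.
python -c "import fastai; print(fastai.__version__)"
```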
Accomplishments that we're proud of
Our model achieves promising accuracy in segmenting flooded areas.
What we learned
Azure offers a vast array of machine learning resources. Our team leveraged its experience in AI to pick the resources we needed to build the app.
What's next for Flood Resilience with Artificial Intelligence
Our app is just one critical component; several more will be needed to form a solid initial architecture for a flood-response solution. For example, detecting people within the segmented flooded areas could be another component. We plan to collaborate with others interested in flood resilience to create a comprehensive architecture.