The Problem

The frequency and severity of natural disasters are increasing at an alarming rate. Roughly 6,800 natural disasters take place every year, affecting 218 million people worldwide. Wildfires, floods, earthquakes, droughts, tornadoes, hurricanes, and other natural disasters pose a serious threat to humanity, claiming upwards of 68,000 lives per year. For those affected, finding a route to safety can be difficult without real-time analysis of the situation and the damaged area, and for those not affected, there is no clear way to help others in need.

The Solution

In response to this challenge, we created Disaster Atlas, an engaging disaster-preparedness, response, and recovery app that informs users about natural disasters occurring both locally and globally, helps them locate safety zones away from blocked areas using state-of-the-art ML models, and sparks action by encouraging and facilitating donations to help those affected by these disasters.

Design Challenge

How might we leverage Machine Learning and Cloud Architecture on AWS to accelerate disaster awareness and to identify and guide users toward optimal safety routes, so that we can improve response and recovery efforts while reducing adverse outcomes in the event of a natural disaster?

Inspiration

In alignment with the AWS belief that technology has the power to solve the world’s most pressing issues, Disaster Atlas was created to explore how Machine Learning can be applied to challenges in natural disaster preparedness and response while also delivering social impact.

At the start of the challenge, we compiled research from datasets and academic resources to understand where ML could do the greatest good. Together, our team used Miro to brainstorm solutions, focusing on areas where our collective interests and skill sets aligned and on what we could explore to create an engaging and impactful solution.

What It Does

Disaster Atlas is a mobile iOS app designed to:

  • Inform users about natural disasters occurring both locally and globally
  • Offer an optimal route towards safety by applying ML models to detect blocked routes and areas
  • Spark action by encouraging users to donate to causes in need.

The app utilizes satellite and aerial drone imagery, gathered and synthesized to determine optimal safety routes and shelter locations in the event of a natural disaster. Safety zones are presented to users to guide them safely towards government-designated shelter locations.

Taking UX cues from the popular app Citizen, combined with the Google Maps interface, Disaster Atlas is designed so that users and organizations with limited technology and Machine Learning knowledge can use the tool with few barriers and little learning curve. The app focuses on delivering engaging content to reach and appeal to audiences worldwide while building and fostering a community-driven environment. It also promotes disaster awareness, uses the reach of a mass audience to immediately notify users of impending danger in the event of a natural disaster, and presents users with the most trusted routes to shelter if necessary. Disaster Atlas also considers how additional Amazon services can help people and communities impacted by natural disasters by facilitating donations through Amazon Pay.

How We Built It

Disaster Atlas is a simple platform built around the model we trained in SageMaker Studio Lab, using open-source software, open datasets, APIs, and various services from AWS’s portfolio, including Amplify, S3, EC2, CloudFront, and DynamoDB, all of which are used to deploy our application and offer optimal routes to safety. Further, Disaster Atlas leverages Amazon Pay to facilitate payments from both users and organizations to support people in need during a natural disaster.

Disclaimer: the components shown with dotted lines in the diagram below were not built because we ran out of time; they are planned for future iterations of the project.

[Architecture diagram]

Value Proposition

Beyond the value of the ML component alone, Disaster Atlas is a scalable product that has been created to build audiences around a social media model, as well as integrate Amazon Pay to send and receive monetary donations. Currently, the app demonstrates the use of Amazon Pay alone; however, later iterations would see the app offer users the option to create Amazon Wish Lists if emergency supplies such as diapers, clothing, or food are required, or possibly the option to utilize Amazon Smile, extending the opportunities available to make a deeper impact.

In addition to the business value Disaster Atlas has the potential to offer, our team studied existing state-of-the-art Machine Learning techniques to improve our model’s accuracy at isolating damaged civil infrastructure, and then integrated that model into a usable product: an end-to-end workflow with both a backend and a frontend on a mobile app. We combined the power of machine learning with the cloud services AWS offers to build a useful product for anyone who might be in need.

The system works in the following way:

We used SageMaker Studio Lab to:

  1. Experiment with our dataset, applying various degrees of post-processing satellite-image corrections, enhancements, and augmentations
  2. Train our model using the powerful TensorFlow library
  3. Ship our model to S3 right from the notebook (a minimal sketch of this step follows the list)
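
As a rough illustration of step 3, the snippet below sketches how a trained model can be pushed from a Studio Lab notebook to S3 with boto3. The tiny stand-in model, the bucket name, and the object key are hypothetical; the real architecture comes from the Sat2Graph project.

```python
import boto3
import tensorflow as tf

# Stand-in for the road-extraction model trained in the cells above
# (the real architecture comes from the Sat2Graph project).
model = tf.keras.Sequential([tf.keras.layers.Conv2D(8, 3, input_shape=(352, 352, 3))])
model.save("road_model.h5")  # serialize the trained weights to a single file

# Push the serialized model to S3 so the EC2 inference API can pull it later.
# Bucket and key are hypothetical; credentials come from the notebook environment.
s3 = boto3.client("s3")
s3.upload_file("road_model.h5", "disaster-atlas-models", "models/road_model.h5")
```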

We used EC2 to host our inference API and Flask backend API:

  1. Our inference API is based on an open-source project called Sat2Graph (based on work by Songtao He in his paper “Sat2Graph: Road Graph Extraction through Graph-Tensor Encoding”).
  2. We built our own container based on this project (viewable here)
  3. During inference, we pull down the model hosted on S3 and serve it as a Flask API, which accepts our users’ latitude/longitude location and area information from the app and analyzes satellite images of those areas for blocked roads in order to provide safer alternate routes. For this, we download two satellite images -- one pre-disaster image and one real-time, post-disaster image -- and run them through our trained model. The model outputs a graph as well as a road segmentation mask, which we then send back to our iOS app (a minimal sketch of this endpoint appears after the list).
  4. The mobile app renders the segmentation mask on top of the Google Maps view to mark areas that could possibly be blocked (and must be avoided during a natural disaster).
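
The sketch below is a minimal, hypothetical version of the inference endpoint described in step 3; `fetch_tile`, `load_model`, and `run_inference` are stand-ins for the image-download and Sat2Graph code in our actual container, so treat it as an outline rather than the real implementation.

```python
import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)

# Pull the trained model down from S3 at startup (bucket/key are hypothetical).
s3 = boto3.client("s3")
s3.download_file("disaster-atlas-models", "models/road_model.h5", "road_model.h5")
model = load_model("road_model.h5")  # hypothetical wrapper around the Sat2Graph weights

@app.route("/infer", methods=["POST"])
def infer():
    body = request.get_json()
    lat, lng, radius_m = body["lat"], body["lng"], body["radius_m"]

    # Download one pre-disaster and one recent post-disaster image of the area
    # (fetch_tile is a hypothetical helper around the satellite-imagery source).
    pre_img = fetch_tile(lat, lng, radius_m, when="pre_disaster")
    post_img = fetch_tile(lat, lng, radius_m, when="post_disaster")

    # Run both images through the model to get a road graph and a segmentation
    # mask of potentially blocked areas (run_inference is a hypothetical helper).
    graph, mask = run_inference(model, pre_img, post_img)
    return jsonify({"road_graph": graph, "segmentation_mask": mask})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```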

Challenges We Ran Into

  • Our solution was developed around the limits of SageMaker Studio Lab as a free service. Because of this, we were limited in the size of the dataset we could use to train our model. As a result, our model’s ability to generalize over different satellite images is low, but we have tried to compensate for this using well-researched techniques like data augmentation, mean subtraction, and color correction (a small sketch of these preprocessing steps follows).
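
    As a small illustration of the preprocessing mentioned above, the snippet below sketches mean subtraction, a crude color normalization, and a few flip/brightness augmentations on a single tile; the exact constants and transforms in our notebook differ.

```python
import numpy as np
import tensorflow as tf

def preprocess(tile: np.ndarray) -> np.ndarray:
    """Per-tile mean subtraction plus a simple per-channel rescale
    (illustrative; our notebook uses slightly different constants)."""
    tile = tile.astype(np.float32)
    tile -= tile.mean(axis=(0, 1), keepdims=True)         # mean subtraction
    tile /= tile.std(axis=(0, 1), keepdims=True) + 1e-6   # crude color correction
    return tile

def augment(tile: tf.Tensor) -> tf.Tensor:
    """Random flips and brightness jitter to stretch a small dataset."""
    tile = tf.image.random_flip_left_right(tile)
    tile = tf.image.random_flip_up_down(tile)
    tile = tf.image.random_brightness(tile, max_delta=0.1)
    return tile
```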

  • Finding an accurate, labeled dataset was a challenge considering that most (high-quality) data of the sort is primarily proprietary and not easily accessible by the general public.

  • On the mobile app, there were challenges around Amplify's user-access policy -- sharing Amplify data across users from different organizations is not feasible, and was one of the very first challenges we ran into when creating the app.

  • While working on the map powered by the Google Maps API, one of the more pressing issues with the react-native-maps library was drawing damaged routes precisely. Having made an early decision to utilize the Google Maps API in the app and the Mapbox API on the backend server, we quickly noticed how differently Google Maps and Mapbox handle their point systems internally -- with the same latitude/longitude coordinates, the two mapping libraries resolve to different (x, y) points on the map. Converting directly from one point system to the other was not feasible, but we attempted to use the Web Mercator projection to compensate (a sketch of this projection follows).
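
    For reference, this is the standard Web Mercator projection we fell back on to reconcile the two point systems; the 256-pixel world size is the convention both Google Maps and Mapbox build their tiling on, and the example coordinates are illustrative.

```python
import math

TILE_SIZE = 256  # world size (in pixels at zoom 0) used by Web Mercator tiling

def latlng_to_world(lat: float, lng: float) -> tuple:
    """Project latitude/longitude to Web Mercator 'world' coordinates in
    [0, TILE_SIZE); multiply by 2 ** zoom to get pixel coordinates."""
    siny = math.sin(math.radians(lat))
    siny = min(max(siny, -0.9999), 0.9999)  # clamp to avoid infinity at the poles
    x = TILE_SIZE * (0.5 + lng / 360.0)
    y = TILE_SIZE * (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi))
    return x, y

# Illustrative usage: pixel position of a point in San Francisco at zoom 12.
x, y = latlng_to_world(37.7749, -122.4194)
scale = 2 ** 12
print(round(x * scale), round(y * scale))
```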

  • Toward the end, we were unable to host our backend API on AWS since the Docker container we used consumed about 2.5 GB of memory. Using a t2.medium instance would have cost us extra, and our teammate wasn’t comfortable using his credit card. We eventually settled on using ngrok to test the API by exposing a port on our local machine.

Accomplishments

Kori: As a UX designer who is new to learning about ML technology, this has been a great learning opportunity to understand the capabilities of the technology and how to design with ML capabilities in mind.

Matt: Together with Shri, I set up the native app to be built with React Native and Expo, leveraging AWS Amplify. I also built the initial authentication flow using AWS Amplify and the UI for the social elements, including a TikTok-like video browser.

Shri: Together with Matt, I worked on the boilerplate code for the repo and built the login, sign-up, forgot-password using Amplify and video upload to S3 in the alpha version of the app. In the beta version, I built the Disaster Atlas map that shows disasters both locally and globally and shows congregation areas and safe routes around damaged areas. I also built several features like disaster cards, search-and-navigate-to-disaster on the globe, drawing damaged routes using GeoJSON, geofencing around disaster areas, safe waypoints, different terrain views, and 3D modes on the map.

Rohit: Learned a lot about different AWS technologies and how different services come together to create APIs on AWS. Learned about different tiling systems used by map services like Mapbox and Google Maps.

Romeo: Learned a tremendous amount about the impacts of natural disasters and the different phases of disaster management and response. Designed our solution’s architecture and data flow. Read a ton of research papers about the use of ML for disaster response, and experimented with the road-extraction model and with the best way to build an inference API.

Jason: I’m a seasoned AI researcher, but this type of problem was something I had never really given much thought to. Today, my worldview around using AI to aid us during natural disasters has changed completely -- despite being in the AI field for quite some time, I was unsure of the real impact AI could have on disaster preparedness and response. I also met some wonderful people and made amazing friends.

Learnings & Takeaways

Key learnings from ML research/training/inference:

  • Experimenting with and building new Machine Learning models is great (and fun!) but figuring out a way to apply this in practice and to create value in an application is rather difficult. However, we believe that with careful thought, planning, and sufficient time, everything’s possible.

  • When training an accurate ML model, high-quality data is gold. For some problems, such as ours, most datasets are proprietary and not easily available. To compensate for this, and to periodically retrain our models over time, we explored other data sources: allowing users of our app to upload additional content such as drone and remote-sensing imagery, and employing humans in the loop to verify the uploaded data in order to create crucial ground truth.

Key learnings from the mobile app development:

  • AWS Amplify is an amazing platform to integrate with other AWS services!
  • React Native Maps has some pretty decent map features but setting it up and reconfiguring pods can be messy.
  • When working with global formats like GeoJSON, it's important to be mindful of what kind of structures we are trying to form. It's easy to mistake one format for another and end up complicating the code (see the example below).
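
As a small example of the point above, the two Python dicts below show the GeoJSON shapes we pass around: coordinates are [lng, lat] rather than [lat, lng], and a Polygon nests its coordinates one level deeper than a LineString, which is an easy mix-up to make. The specific coordinates are illustrative.

```python
# A damaged route as a GeoJSON LineString: a flat list of [lng, lat] positions.
damaged_route = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [[-122.42, 37.77], [-122.41, 37.78]],
    },
    "properties": {"status": "blocked"},
}

# A geofence as a GeoJSON Polygon: a list of linear rings, each a closed list
# of positions (first and last point identical) -- one level deeper than above.
geofence = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[
            [-122.43, 37.76], [-122.40, 37.76],
            [-122.40, 37.79], [-122.43, 37.79],
            [-122.43, 37.76],
        ]],
    },
    "properties": {"status": "avoid"},
}
```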

What's Next?

We believe there is a lot of room for improvement in our app. Further iterations would include several new features:

  • Damaged-building detection and identification via satellite imagery. This could further strengthen our shelter and safe-route recommendations.
  • The use of community-contributed drone footage to augment our safe route recommendation capabilities and provide additional information about areas that must be avoided at all costs.
  • Possibly deploy our models to SageMaker Serverless Inference to scale our API further.
  • Integrate a content upload feature giving users experiencing a natural disaster the opportunity to record and post footage documenting their experience.
  • Introduce a feature giving users the ability to share recorded content and comment on it.
  • Set up SMS notifications to notify users about local disasters with shelter recommendations, and global disasters to encourage donations.
  • Expand donation options to allow users to create Amazon Wish Lists with requests for emergency supplies as well as explore opportunities to integrate Amazon Smile.
  • Use https://github.com/microsoft/CoCosNet-v2 to create synthesized post-disaster satellite imagery for data augmentation purposes and to achieve more accurate machine learning models.
