Inspiration

Witnessing the recent devastating wildfires in Los Angeles, I was alarmed and deeply concerned by their widespread impact. These disasters not only threatened the safety of those in the vicinity but also left lasting damage on homes, schools, and entire communities, disrupting lives long after the flames were extinguished. Researching the impact of climate events over the past year, I found that “in 2024, there were 27 individual weather and climate disasters with at least $1 billion in damages in the U.S.... These disasters caused at least 568 direct or indirect fatalities... the eighth-highest for these billion-dollar disasters over the last 45 years (1980-2024).” [1]

Coming from an aerospace engineering background, I wondered why we were not using satellite imagery to quickly identify these events and alert first responders. These situations demand not only accuracy but also timeliness. Inspired by these challenges, I set out to leverage satellite imagery and AI to provide first responders with real-time, actionable insights, hopefully helping to save lives and mitigate damage.

What it does

CHRONOS is a satellite-powered disaster response application designed to help first responders react quickly to natural disasters. By analyzing high-resolution satellite imagery, CHRONOS uses classical ML change detection models (PCA & K-means) to rapidly identify land areas affected by wildfires, flooding, earthquakes, and other climate events.
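
The change-detection stage can be sketched roughly as follows (a minimal illustration with function names, block size, and component count of my own choosing, not the actual CHRONOS code): the difference image between two co-registered passes is split into small blocks, PCA projects each block into a compact feature space, and K-means with two clusters separates changed from unchanged regions.

```python
# Minimal PCA + K-means change-detection sketch (hypothetical helper
# names; assumes two co-registered grayscale images as numpy arrays).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def change_mask(before: np.ndarray, after: np.ndarray,
                block: int = 4, n_components: int = 3) -> np.ndarray:
    """Return a boolean change mask at block resolution."""
    # 1. The absolute difference image highlights candidate change regions.
    diff = np.abs(after.astype(float) - before.astype(float))
    h = (diff.shape[0] // block) * block
    w = (diff.shape[1] // block) * block
    diff = diff[:h, :w]
    # 2. Flatten non-overlapping blocks into feature vectors.
    blocks = diff.reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block * block)
    # 3. PCA projects each block into a compact eigenvector space.
    feats = PCA(n_components=n_components).fit_transform(blocks)
    # 4. K-means with k=2 splits blocks into two clusters; the cluster
    #    with the larger mean difference is labelled "changed".
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    means = [blocks[labels == k].mean() for k in (0, 1)]
    changed = labels == int(np.argmax(means))
    return changed.reshape(h // block, w // block)
```

Tuning the block size trades spatial resolution against noise robustness; the threshold on how many blocks must change before an event is escalated sits on top of this mask.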

When a significant change is detected (enough pixels are found to have changed in the image), a Vision Language Model (VLM) is used to perform inference and generate a caption describing the event. This caption is then fed into Perplexity's Sonar API to provide first responders with further information about the area and affected regions, such as the type of disaster, its location, and potential impact. For instance, during the recent Los Angeles wildfires, CHRONOS could have quickly identified the affected areas and provided first responders with critical information about the fire's progression. Finally, first responders can receive instant notifications for critical events affecting emergency operations during natural disasters via the CHRONOS mobile app.
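
As an illustration of the caption-to-Sonar handoff (the helper function and prompt wording are hypothetical; the endpoint and payload shape follow Perplexity's public chat-completions API):

```python
# Hypothetical sketch: wrap a VLM-generated caption in a Sonar
# chat-completions request asking for responder-relevant context.
SONAR_URL = "https://api.perplexity.ai/chat/completions"

def build_sonar_payload(caption: str, lat: float, lon: float) -> dict:
    """Build the request body sent to Perplexity's Sonar API."""
    prompt = (
        f"A satellite image near ({lat:.4f}, {lon:.4f}) was captioned: "
        f"'{caption}'. Identify the likely disaster type, its location, "
        "the affected area, and information useful to first responders."
    )
    return {
        "model": "sonar",
        "messages": [
            {"role": "system",
             "content": "You summarize disaster context for first responders."},
            {"role": "user", "content": prompt},
        ],
    }

# The actual call would be an authenticated POST, e.g. with requests:
# requests.post(SONAR_URL,
#               headers={"Authorization": f"Bearer {API_KEY}"},
#               json=build_sonar_payload(caption, lat, lon))
```

The response text can then be pushed to the mobile app as the notification body.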

How I built it

CHRONOS combines three stages:

  1. Satellite Imagery Analysis: High-resolution satellite imagery is continuously analyzed using change detection models (PCA & K-means) to identify affected land areas.
  2. Real-time Damage Assessment: VLMs identify climate events in high-priority images, and the generated image caption is fed into Perplexity's Sonar API to give first responders further information about the area and affected regions. The model used is Salesforce's BLIP Vision Language Model (VLM).
  3. Mobile App for First Responders: A mobile app built with FlutterFlow delivers instant notifications for critical changes affecting emergency operations during natural disasters.

Challenges we ran into

  • Processing High-Resolution Imagery: Getting high-resolution satellite imagery directly from the Planet Labs API was difficult (especially on an education account), but this could be addressed later by purchasing images directly from the provider. Going forward, I expect to need to think carefully about how to handle and process large volumes of high-resolution satellite imagery, even with the less computationally intensive PCA and K-means methods.
  • Optimizing Change Detection Models: The PCA and K-means change detection models only capture pixel differences between images, so they can flag false positives that have nothing to do with climate disasters (e.g., cloud cover vs. no cloud cover). Future implementations will need extensive experimentation to minimize these false positives, which is why I added the VLM as a second inference step.
  • VLM Inference: Finding open-source VLMs was also difficult, which is why I used Salesforce's BLIP VLM for this project. BLIP is not the most descriptive or accurate model, and it would be nice to find additional VLMs that support multimodal inputs (the image, coordinates, and weather data).
  • Integrating with the Sonar API: I was initially hoping I could use the Sonar API as a VLM, but I eventually switched it to take inputs from the VLM inferences so that we can effectively pull online information from Perplexity's huge databases and turn it into meaningful, actionable information for first responders. Figuring out which information would actually be helpful to responders was also somewhat difficult.

Accomplishments that I'm proud of

  • Real-time Disaster Insights: I'm proud that I successfully created this three-pronged system, using so many cool new tools, that can hopefully provide real-time insights into disaster impacts. The main hope is to enable faster and more informed responses.
  • Automated Image Analysis: I didn't realize the process of automating satellite imagery analysis could be this simple, and I think I'll keep exploring how we can use VLMs to keep analyzing more satellite imagery. Specifically, I'm hoping it can provide more granular information with more multimodal inputs to help first responders.
  • Mobile App Integration: It was really fun to make an app through FlutterFlow that integrates many of the components I think first responders would need for immediate notifications and critical information. Hopefully it looks relatively slick too!
  • Saving Computation Costs: I'm glad I was able to build a multi-layer process for parsing images, so that only the most important events are surfaced by the initial classical ML approaches (PCA and K-means). Invoking the VLM only when an important event is found will be crucial to keeping this system efficient.
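
A minimal sketch of that gating step (the function name and threshold are illustrative, not taken from the project): the VLM is invoked only when the fraction of changed pixels from the classical stage crosses a threshold.

```python
# Gate the expensive VLM call on the cheap PCA/K-means result
# (illustrative threshold; in practice it would be tuned per region).
import numpy as np

def should_run_vlm(mask: np.ndarray, min_changed_fraction: float = 0.02) -> bool:
    """mask: boolean change mask from the PCA/K-means stage."""
    return float(mask.mean()) >= min_changed_fraction
```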

What we learned

  • I'm really glad I got to play around with leveraging satellite imagery and AI for disaster response.
  • I learned how to effectively use change detection models (PCA & K-means) to identify affected land areas.
  • I deepened my understanding of Vision Language Models (VLMs) and their application in image analysis.
  • I honed my skills in integrating different APIs and platforms, such as Perplexity's Sonar API and FlutterFlow.

What's next for CHRONOS - Satellite Powered Disaster Response

  • Enhance Change Detection Models: There is definitely a need to keep improving the accuracy and efficiency of the change detection models by incorporating additional data sources and machine learning techniques.
  • Expand Disaster Coverage: I want to extend the system to cover a wider range of natural disasters, including hurricanes, tornadoes, and tsunamis. Maybe I can even help predict these events weeks beforehand, or combine forecasts from weather experts to start analyzing satellite imagery of at-risk areas early on.
  • Improve Mobile App Features: I also want to add more features to the mobile app, such as offline access to critical information, communication tools for first responders, and integration with other emergency response systems.
  • Increase VLM Efficiency: I also want to increase the efficiency of the VLM, and hopefully integrate it directly with Perplexity as well.
  • Partnerships: If any folks are interested, I'd love to pursue partnerships with emergency response organizations and government agencies to deploy CHRONOS in real-world scenarios.

References for Quotes: [1] https://www.climate.gov/news-features/blogs/beyond-data/2024-active-year-us-billion-dollar-weather-and-climate-disasters
