Inspiration
I am from Nepal, where wildfires are a recurring and devastating issue. Every year, we lose forests, wildlife, property, and even human lives to wildfires that are beyond our control. This isn't just a local problem: Australia, the Amazon Rainforest, California, and many other parts of the world face similar catastrophes due to wildfire outbreaks.
This inspired me to build a system that not only visualizes wildfire data, but also empowers users to take action, contribute to early detection, and potentially save lives.
What it does
CrisisVision AI is a real-time wildfire detection and alert platform that combines satellite data, machine learning, and user participation to tackle global fire disasters.
Key features include:
- Live Map showing the 200 most recent fire events using the NASA FIRMS VIIRS API, filterable by region, timeframe, and crisis type.
- Data Charts (line, bar, pie) to analyze trends, compare regions, and understand the distribution of wildfires and other crises.
- AI Image Upload: Users can upload fire images. If the model detects fire with more than 90% confidence, it sends location-based alerts to others nearby.
- Downloadable Reports: Users can export insights as PDF or CSV for analysis or distribution.
- All data is stored in MongoDB Atlas, and services are powered by Google Cloud, Node.js, Flask, and React + Firebase.
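The alert flow behind the AI Image Upload feature can be sketched as follows. This is a minimal illustration, not the project's actual code: the 90% threshold comes from the description above, while the alert radius, the haversine distance check, and the user-record shape are assumptions.

```python
import math

CONFIDENCE_THRESHOLD = 0.90  # alerts fire only above 90% model confidence (per the feature description)
ALERT_RADIUS_KM = 10.0       # hypothetical alert radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def users_to_alert(confidence, fire_lat, fire_lon, users):
    """Return users within ALERT_RADIUS_KM of a high-confidence detection.

    `users` is a list of dicts like {"id": ..., "lat": ..., "lon": ...}.
    """
    if confidence <= CONFIDENCE_THRESHOLD:
        return []
    return [
        u for u in users
        if haversine_km(fire_lat, fire_lon, u["lat"], u["lon"]) <= ALERT_RADIUS_KM
    ]
```

Only detections that clear the confidence bar trigger any lookup at all, which keeps false-positive uploads from spamming nearby users.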
How we built it
- Built the frontend with React, hosted on Firebase.
- Used the NASA FIRMS VIIRS API for real-time fire detection data.
- Stored fire metadata and user inputs in MongoDB Atlas, enabling fast geo-queries.
- Developed a Node.js backend to fetch and process fire data.
- Created a custom fire detection model trained using both Infrared (IR) and RGB images.
- Implemented Learning Without Forgetting (LwF) to retain IR knowledge while fine-tuning on RGB data, so the model performs well in both day and night conditions.
- Deployed the AI model via Flask on Google Cloud Run.
- Used Google Cloud Storage to manage uploaded images and processed results.
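The fetch-and-process step for VIIRS data might look like the sketch below. FIRMS serves fire detections as CSV; the exact column names here are an assumption (check the FIRMS API docs for the real schema), and the sample rows are made up for illustration.

```python
import csv
import io

# Sample rows in the shape of NASA FIRMS VIIRS CSV output (column names
# are an assumption; consult the FIRMS API docs for the exact schema).
SAMPLE_CSV = """latitude,longitude,bright_ti4,acq_date,acq_time,confidence,frp
27.7003,85.3001,330.5,2024-04-01,0512,h,12.4
-33.8688,151.2093,295.1,2024-04-01,0512,l,3.2
"""

def parse_fire_events(raw_csv, limit=200):
    """Parse FIRMS-style CSV text into dicts for the live map.

    Keeps only the fields the map needs and caps the result at `limit`
    events (the live map shows the 200 most recent).
    """
    reader = csv.DictReader(io.StringIO(raw_csv))
    events = []
    for row in reader:
        events.append({
            "lat": float(row["latitude"]),
            "lon": float(row["longitude"]),
            "brightness": float(row["bright_ti4"]),
            "date": row["acq_date"],
            "confidence": row["confidence"],
        })
        if len(events) >= limit:
            break
    return events
```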
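The fast geo-queries mentioned above would typically use MongoDB's geospatial operators. A sketch of the filter document a backend could build is shown here; the `location` field name is an assumption, and it presumes fire documents store GeoJSON points covered by a 2dsphere index.

```python
def nearby_fires_query(lat, lon, max_distance_m=50_000):
    """Build a MongoDB $near filter for fires within max_distance_m metres.

    Assumes fire documents store a GeoJSON "location" field (an assumed
    name) with a 2dsphere index; pass the returned dict to collection.find().
    """
    return {
        "location": {
            "$near": {
                "$geometry": {
                    "type": "Point",
                    # GeoJSON uses [longitude, latitude] order
                    "coordinates": [lon, lat],
                },
                "$maxDistance": max_distance_m,
            }
        }
    }
```

With the index in place, `$near` also returns results sorted by distance, which is convenient for "closest fires first" map views.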
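The Learning Without Forgetting step can be illustrated with its per-example loss: cross-entropy on the new (RGB) labels plus a distillation term that keeps the model's softened outputs close to those of the frozen IR-trained model. This is a generic sketch of the LwF objective in plain Python, not the project's training code; the temperature and alpha values are assumed hyperparameters.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def lwf_loss(new_logits, old_logits, true_label, temperature=2.0, alpha=0.5):
    """Learning-Without-Forgetting loss for one example.

    Combines cross-entropy on the new (RGB) label with a distillation
    term that keeps the fine-tuned model's softened outputs close to
    the old (IR-trained) model's. `temperature` and `alpha` are assumed
    hyperparameters, not values from the project.
    """
    # Cross-entropy against the ground-truth label on the new task.
    probs = softmax(new_logits)
    ce = -math.log(probs[true_label])
    # Distillation: cross-entropy between softened old and new outputs.
    old_soft = softmax(old_logits, temperature)
    new_soft = softmax(new_logits, temperature)
    kd = -sum(p * math.log(q) for p, q in zip(old_soft, new_soft))
    return alpha * ce + (1 - alpha) * kd
```

The distillation term grows as the fine-tuned model drifts from the IR model's predictions, which is what preserves night-time (IR) performance while the network learns from daytime RGB data.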
Challenges we ran into
- Integrating thermal (IR) and RGB data into a single robust model required thoughtful training and architecture.
- Learning and implementing the Learning Without Forgetting algorithm for cross-modal learning was both complex and rewarding.
- Handling large geospatial datasets in MongoDB and optimizing performance for real-time querying.
- Building a full-stack, cloud-native, AI-powered platform on a tight timeline while ensuring a clean and responsive UI.
Accomplishments that we're proud of
- Successfully implemented a custom dual-trained AI model capable of detecting fire in both RGB and IR images.
- Enabled real-time community alerts based on AI predictions and user location.
- Created a platform that doesn't just present data but also lets users interact, respond, and contribute.
- Built a full cloud-hosted stack integrating MongoDB, Google Cloud, Flask, Node.js, and React.
What we learned
- Advanced model training techniques like Learning Without Forgetting (LwF) for multi-phase training.
- How to manage geospatial fire datasets and optimize data flow from backend to frontend using MongoDB Atlas and Google Cloud.
- The importance of intuitive UI/UX when presenting disaster-related data to users.
- How to make AI practically useful—by connecting it to real-world actions like alerting people in crisis zones.
What's next for CrisisVision AI
- Add support for other disasters such as floods, earthquakes, and pollution using similar real-time data feeds.
- Integrate SMS/email-based alert systems for broader community outreach.
- Implement vector search in MongoDB Atlas to improve fire pattern detection.
- Collaborate with disaster relief agencies to deploy the app on a national or international scale.
- Launch a mobile app version for on-the-go fire reporting and alerts.
Built With
- chart.js
- firebase
- flask
- google-cloud
- google-cloud-run
- learning-without-forgetting-(lwf)
- mongodb-atlas
- nasa-firms-viirs-api
- node.js
- react.js
- user-inputs
- yolov11