Inspiration
It is genuinely heartbreaking to watch our lakes and reservoirs vanish. The proof is right above us in satellite archives, but that data has been locked away in formats that everyday people and politicians simply cannot read. They are making critical water policies in the present without truly grasping the devastation of the past or the terrifying reality of our future.
We built Chrono-Climate for the Data Storytelling track because we wanted to change this narrative completely. We realized that raw data isn't enough. You need to make people feel it. We set out to build a tool that translates decades of complex, raw imagery into an undeniable, deeply original presentation that any legislator can understand and act on in under two minutes.
What it does
Chrono-Climate is an intensely visual and highly useful policy tool that bridges three eras of time:
Unlocking the Past: Our application ingests 38 years of satellite imagery (1984 through 2022) of a shrinking water body; for our demo, the Aral Sea. We run custom computer vision on every single image to map and measure the exact water surface area in pixels.
Predicting the Future: We then train a machine learning model on that historical pixel-count data to project the year the lake will dry up completely.
Visualizing the Reality: The platform renders a visceral, side-by-side historical reveal. Users watch the lake shrink decade by decade, aided by an AI segmentation overlay that highlights the remaining water. We then push into the future, visualizing three brutal depletion stages (2032, 2042, 2052) to maximize the emotional and visual impact.
Empowering the Present: We do not just leave the user with a sad chart. The app automatically generates a policy brief: a concise, data-grounded paragraph written directly to legislators with immediate, actionable intervention steps.
How we built it
We knew the design had to be flawless and the pipeline robust enough to handle decades of messy planetary data.
The Brains: Python and FastAPI
The core engine relies on two Python modules built from scratch:
The Vision Model: Processing satellite JPEGs is incredibly tough. We used OpenCV to convert images from RGB to HSV color space. We then built three layered cv2.inRange masks to isolate water pixels (targeting deep saturated blue, light cyan, and desaturated grey-blue for cloud cover). A fourth custom exclusion mask strips out warm, arid land. The surviving non-zero pixel count gives us our precise water surface area.
The Prediction Model: We feed those pixel counts into a scikit-learn LinearRegression model mapped against the years. We extract the learned slope and intercept, and then solve the math to find the exact year the trend line hits zero. We deliberately chose this explainable, linear approach over a black-box neural network because our target audience requires transparent, auditable math.
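The zero-crossing math reduces to one line of algebra. A minimal sketch, assuming a declining trend (negative slope) and a hypothetical helper name:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def project_dry_year(years: list[int], pixel_counts: list[int]) -> float:
    """Fit pixel counts against years and solve for the year the area hits zero."""
    X = np.array(years, dtype=float).reshape(-1, 1)
    y = np.array(pixel_counts, dtype=float)

    model = LinearRegression().fit(X, y)
    slope = model.coef_[0]
    intercept = model.intercept_

    # area(year) = slope * year + intercept = 0  =>  year = -intercept / slope
    return -intercept / slope
```

Because the model is a plain line, both the slope and the projected year can be shown to a legislator and audited by hand.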
These modules live inside a FastAPI server with REST endpoints that serve raw images, processed segmentation masks, future projections, and our automated OpenAI GPT-4o-mini policy reports.
The Canvas: Next.js, TypeScript, and Tailwind
For the presentation layer, we designed a beautiful 5-step narrative flow using Next.js 16. We styled it with Tailwind CSS v4 and Shadcn/ui components to keep the interface looking clean and urgent.
The segmentation mask toggle fires live API calls to swap between raw and OpenCV-processed images.
The pixel decay linear regression model is charted interactively using Recharts as a future depletion slider to really drive the point home.
Challenges we ran into
The absolute toughest technical hurdle was the water pixel detection. Real satellite water does not look like clean blue. It is murky turquoise, obscured by clouds, and bleached by the sun. A standard single HSV range completely failed us. We spent hours iterating and calibrating against real historical images to build our additive three-mask system. Figuring out how to subtract the arid land without losing the shallow water was a massive victory for our team.
Accomplishments that we're proud of
Originality in Application: We successfully turned highly technical, legacy satellite pixels into politically actionable, human-readable text automatically.
End-to-End Execution: We built a from-scratch computer vision pipeline and seamlessly wired it up to a modern TypeScript frontend.
Design with Purpose: We did not just build a dashboard. We built a structured data story that forces the user to confront the past, understand the present, and attempt to change the future.
What we learned
We learned that the HSV color space is an absolute lifesaver for natural imagery segmentation. We also learned a valuable lesson in data storytelling: sometimes a simple, explainable linear regression is far more powerful and convincing to a non-technical audience than the most advanced neural network in the world.
What's next for Chrono-Climate
We are so passionate about keeping this going. Next up, we want to scale this beyond the Aral Sea to map shrinking reservoirs globally. We plan to add time-series animations to show the water loss fluidly year by year. Finally, we want to integrate directly with live NASA Earthdata APIs for real-time monitoring and build a one-click PDF export so activists can print these policy briefs and hand them directly to their local representatives.
Built With
- fastapi
- matplotlib
- next.js
- numpy
- openai
- opencv
- pandas
- pillow
- python
- react
- scikit-learn
- shadcn
- tailwind
- typescript
- uvicorn