Inspiration

GAIA PULSE transforms environmental data into first-person narratives, making the planet's condition feel personal and urgent. I realized AI agents aren't just for automation - they can be storytellers that bridge data and emotion.

Climate data feels abstract - numbers and charts that don't connect emotionally. I wanted to give Earth a voice. What if the Amazon Rainforest could describe the heat it feels, or the Great Barrier Reef could express its stress from warming waters?

What it does

GAIA PULSE gives Earth a voice by transforming environmental data into AI-generated first-person narratives.

Core Features:

Monitors 22 Global Regions - Forests, oceans, deserts, mountains, cities, ice caps, reefs with live weather data

Detects Environmental Events - Automatically flags heat stress (≥1.5°C above baseline) and air quality spikes (PM2.5 >55 µg/m³)

Generates AI Narratives - Claude 3 Haiku creates data-grounded stories from Earth's perspective. Example: "I am the Amazon. I feel stressed because my temperature is 32.1°C and PM2.5 shows 45 µg/m³ from nearby fires."

Interactive 3D Globe - Click any region on a rotating Earth to view its latest narrative with color-coded ecosystem markers

Real-Time Updates - Frontend refreshes every 60 seconds with live data; backend runs daily for all regions

Self-Healing - Nightly scans ensure 100% data completeness by auto-regenerating missing narratives

User Flow: Visit demo → Click region on globe → Read Earth's story → See live metrics → Understand planet's condition
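The heat-stress and air-quality thresholds above can be sketched as a small check. This is a minimal illustration, not the project's actual code; the function and field names are my own:

```python
HEAT_STRESS_DELTA_C = 1.5   # flag when temperature is >= 1.5 °C above baseline
PM25_SPIKE_UGM3 = 55.0      # flag when PM2.5 exceeds 55 µg/m³

def detect_events(temperature_c, baseline_c, pm25_ugm3):
    """Return the environmental events implied by one reading."""
    events = []
    if temperature_c - baseline_c >= HEAT_STRESS_DELTA_C:
        events.append("heat_stress")
    if pm25_ugm3 is not None and pm25_ugm3 > PM25_SPIKE_UGM3:
        events.append("air_quality_spike")
    return events
```

For the Amazon example above, `detect_events(32.1, 30.0, 60.0)` would flag both heat stress (2.1 °C over baseline) and an air-quality spike.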

How I built it

Backend (Python + AWS Serverless):

gaia-ingest-lambda - Fetches weather data, detects environmental events (heat stress, air quality spikes), writes to S3

gaia-narrative-lambda - Reads data, calls Bedrock Claude 3 Haiku, generates narratives, calculates confidence

gaia-read-latest - Public API via API Gateway for frontend access

gaia-repair - Nightly self-healing to ensure data completeness
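The repair step above boils down to a completeness scan: list what exists, find which regions lack a narrative for the day, and regenerate only those. A minimal sketch, assuming narratives are keyed by region and day; the function name and key shapes are illustrative:

```python
def find_missing_narratives(region_ids, existing_keys, date_str):
    """Return region ids that have no narrative object for the given day.

    existing_keys is the set of S3 keys already present under diary/;
    a region is considered complete when any key for that region and
    day exists, so only genuinely missing narratives get regenerated.
    """
    missing = []
    for region_id in region_ids:
        prefix = f"diary/{region_id}/{date_str}"
        if not any(key.startswith(prefix) for key in existing_keys):
            missing.append(region_id)
    return missing
```

The nightly Lambda would then re-invoke narrative generation only for the returned ids, which keeps the self-healing pass cheap when the pipeline already succeeded.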

Orchestration:

AWS Step Functions coordinates the ingest → narrative workflow

EventBridge triggers daily runs for all 22 regions

Frontend (Next.js 15 + React 19):

Interactive 3D globe with react-globe.gl

Real-time data display with 60-second auto-refresh

Glassmorphic UI with Tailwind CSS 4

Deployed on Vercel

Storage:

S3 structured as diary/{region_id}/{timestamp}.json

Separate files for raw data and AI narratives
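The key layout above can be sketched as a small helper, assuming UTC ISO 8601 timestamps in the object name; the function name is mine, not the project's:

```python
from datetime import datetime, timezone

def narrative_key(region_id, ts=None):
    """Build the diary/{region_id}/{timestamp}.json object key (ISO 8601, UTC)."""
    ts = ts or datetime.now(timezone.utc)
    stamp = ts.strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"diary/{region_id}/{stamp}.json"
```

Keeping the timestamp in the key makes listing a region's history a simple prefix scan, and sorting keys lexicographically yields chronological order.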

Challenges I ran into

Bedrock Throttling - Implemented exponential backoff with idempotence checks

Missing Data - Built isEmptyValue() checks; display "—" instead of crashing

AI Quality - Required "because" clauses to ground narratives in specific metrics

Globe Performance - Disabled SSR, optimized markers, reduced rotation speed

State Passing - Used Step Functions ResultPath to pass S3 keys between Lambdas

CORS - Enabled proper headers on API Gateway

Self-Healing - Scan S3 for missing narratives, re-invoke Lambda only when needed

Timestamps - Standardized on ISO 8601 with a backward-compatibility helper
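The backoff-plus-idempotence pattern for the throttling challenge can be sketched as below. This is a hedged illustration, not the project's code: the exception type stands in for Bedrock's ThrottlingException, and `is_done` represents whatever check confirms the output already exists (e.g. the narrative file is in S3):

```python
import random
import time

def with_backoff(call, is_done, max_attempts=5, base_delay=1.0):
    """Retry a throttled call with exponential backoff and jitter.

    is_done() is the idempotence check: if an earlier attempt already
    produced the output, skip the call instead of regenerating it.
    """
    for attempt in range(max_attempts):
        if is_done():
            return None  # work already completed by an earlier attempt
        try:
            return call()
        except RuntimeError:  # stand-in for a throttling error
            if attempt == max_attempts - 1:
                raise
            # delay doubles each attempt; jitter avoids synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

The idempotence check matters because a retry after a timeout can otherwise duplicate work (and Bedrock spend) that actually succeeded.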

Accomplishments that I'm proud of

Built a Fully Autonomous AI Agent: System runs daily without human intervention - collects data from 22 regions, generates narratives, and self-heals errors automatically

Scaled AI to 22 Regions: Integrated Amazon Bedrock Claude 3 Haiku with retry logic and idempotence, achieving 100% narrative generation success rate

Created an Interactive 3D Earth Visualization: Learned Three.js and react-globe.gl from scratch to build a production-quality globe with smooth animations and mobile optimization

Architected Production-Grade Serverless Pipeline: 4 Lambda functions + Step Functions orchestration with zero idle costs and on-demand scaling

Designed Stunning UI Without Templates: Custom Earth-themed glassmorphic interface with Tailwind CSS 4: gradient animations, backdrop blur, futuristic aesthetic

Live Public Demo: Not just code - a working product anyone can use at https://gaia-pulse-3.vercel.app

Learned 5+ Technologies in One Week: Mastered Bedrock, Step Functions, Three.js, Next.js 15, and Tailwind CSS 4 while building a full-stack app solo

What I learned

Technical:

Amazon Bedrock: Crafting prompts for Claude 3 Haiku that balance poetic language with factual accuracy

Step Functions: Visual workflow design with retry logic and state management

3D Visualization: react-globe.gl + Three.js for interactive Earth rendering with 22 markers

Real-time APIs: Integrating Open-Meteo and handling missing/incomplete data gracefully
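One prompt-quality lesson, the required "because" clause, can be enforced mechanically after generation. A minimal sketch of such a validator; the function name and metric format are my own assumptions:

```python
import re

def is_grounded(narrative, metrics):
    """Accept a narrative only if it explains itself with a 'because'
    clause and cites at least one of the numeric metrics it was given."""
    if "because" not in narrative.lower():
        return False
    # pull every number mentioned in the narrative text
    numbers = re.findall(r"\d+(?:\.\d+)?", narrative)
    # require at least one supplied metric value to appear verbatim
    return any(f"{value:g}" in numbers for value in metrics.values())
```

Rejecting and regenerating ungrounded outputs is one simple way to keep poetic narratives tied to the actual readings.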

Conceptual:

AI transparency through confidence scores and source attribution

Self-healing systems with autonomous error recovery

Data provenance for scientific credibility

What's next for GAIA-pulse

Short-Term (3 Months):

Integrate real NASA POWER, NOAA, and EPA APIs for official data

Expand from 22 to 100+ regions (endangered ecosystems, vulnerable coastal cities)

Add historical trends and predictive analytics

Medium-Term (6-12 Months):

Multi-modal narratives: audio (AWS Polly), video, visualizations

Public API, user subscriptions, embeddable widgets

Cross-region AI analysis and yearly "State of Earth" reports

Native mobile app with push notifications

Long-Term (1-2 Years):

Partner with NASA/NOAA for scientific validation

K-12 educational curriculum and policy dashboards

Scale to 1000+ regions with multi-language support

VR experience and "Future Earth" predictive mode

Vision: Make GAIA PULSE the world's most accessible climate platform where anyone can hear Earth's voice in seconds.

Built With
