Overview:
GeoLens is an AI-powered environmental intelligence platform that translates fragmented, highly technical climate data into actionable insights for everyday users. By querying 7 live authoritative APIs and running the data through a deterministic mathematical engine, GeoLens generates strict, zero-hallucination climate risk profiles and multi-city comparisons. Know the physical risks—heat, floods, earthquakes, water stress—before you make a life-changing move.
🔗 Live Platform: https://geolens-mocha.vercel.app/
💻 GitHub Repository: https://github.com/JawadGigyani/GeoLens
📺 Demo Video: Watch on YouTube
Inspiration
Millions of people relocate every year for work, education, or retirement. However, when making life-changing decisions about where to move, they usually evaluate taxes, housing, and schools—completely flying blind on environmental and climate risks. Crucial questions like "How reliable is the freshwater supply?" or "What is the 10-year seismic trend?" go unanswered because this data is siloed across highly technical scientific databases (USGS, NOAA, WRI).
We were inspired to build a bridge between complex climate science and everyday decision-making. Our goal was to translate fragmented data into actionable, human-readable environmental intelligence, ensuring users can clearly identify physical risks before they sign a lease or buy a home.
What it does
GeoLens is an AI-powered environmental intelligence platform focused entirely on usability and data transparency.
You simply type in any city in the world, and GeoLens instantly queries 7 different live environmental APIs on the backend. It assesses 7 specific risk dimensions: Flood, Seismic, Heat, Air Quality, UV, Storm, and Water Stress.
Instead of showing users raw scientific spreadsheets, our platform generates a comprehensive City Risk Profile with 1.0–10.0 scores, historical climate trend visualizations (using an intuitive UI), and personalized AI narratives. Through our Multi-City Compare feature, users can stack up to 6 cities side-by-side. Our innovative approach allows users to apply custom mathematical weights based on their unique use-case (e.g., Student vs. Retirement vs. Agriculture), generating a dynamic, use-case-specific AI verdict on the most resilient option.
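The custom-weighting idea behind Multi-City Compare can be sketched as a weighted mean over the seven risk dimensions. Everything below is illustrative: the preset weights, city scores, and function names are assumptions, not GeoLens's actual calibration.

```python
# Hypothetical sketch of use-case-weighted multi-city comparison.
# Scores use the 1.0-10.0 scale from the write-up; lower = more resilient.
RISK_DIMENSIONS = ["flood", "seismic", "heat", "air_quality", "uv", "storm", "water_stress"]

PRESETS = {
    # A retiree might weight heat and air quality more heavily, for example.
    "retirement": {"flood": 1.0, "seismic": 1.0, "heat": 2.0, "air_quality": 2.0,
                   "uv": 1.5, "storm": 1.0, "water_stress": 1.5},
    "student": {d: 1.0 for d in RISK_DIMENSIONS},
}

def weighted_risk(scores: dict, weights: dict) -> float:
    """Weighted mean of per-dimension risk scores for one city."""
    total_weight = sum(weights[d] for d in RISK_DIMENSIONS)
    return sum(scores[d] * weights[d] for d in RISK_DIMENSIONS) / total_weight

cities = {
    "City A": {"flood": 3.0, "seismic": 2.0, "heat": 7.0, "air_quality": 6.0,
               "uv": 5.0, "storm": 2.0, "water_stress": 4.0},
    "City B": {"flood": 6.0, "seismic": 4.0, "heat": 3.0, "air_quality": 3.0,
               "uv": 4.0, "storm": 5.0, "water_stress": 2.0},
}

# Rank cities from most to least resilient under the chosen preset.
ranked = sorted(cities, key=lambda c: weighted_risk(cities[c], PRESETS["retirement"]))
```

Because the ranking is plain arithmetic, the AI verdict only narrates an ordering the math has already fixed.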
How we built it (Technical Implementation)
We separated the architecture into a strict computational engine and an AI orchestration layer to ensure absolute data fidelity.
1. Robust Architecture
- Frontend: Built with Next.js 14 App Router. We designed a flat, high-contrast custom CSS design system devoid of sluggish component libraries to ensure maximum accessibility and speed. We utilized Server Components to enable instant page navigation and Recharts for interactive data visualization.
- Backend: Fueled by FastAPI (Python 3.11), providing a highly concurrent REST API backbone necessary for handling multiple simultaneous requests.
- Deterministic Math Engine: To prevent AI hallucinations, we programmed a purely deterministic Python scoring engine. The LLM never calculates risk.
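The core property of such an engine is that it is a pure function from observed data to a bounded score. A minimal sketch of one factor, heat, might look like the following; the 35°C threshold and the linear mapping are illustrative assumptions, not GeoLens's real algorithm.

```python
def heat_risk_score(daily_max_temps_c: list) -> float:
    """Map years of daily max temperatures to a 1.0-10.0 heat-risk score.

    Purely deterministic: the same input always yields the same score, so
    the LLM layer can narrate the number but never influence it.
    Threshold and mapping here are illustrative, not GeoLens's calibration.
    """
    if not daily_max_temps_c:
        raise ValueError("no temperature data")
    # Fraction of days exceeding an extreme-heat threshold of 35 C.
    hot_fraction = sum(t >= 35.0 for t in daily_max_temps_c) / len(daily_max_temps_c)
    # Linearly map 0%..20% hot days onto the 1.0..10.0 scale, clamped.
    score = 1.0 + (hot_fraction / 0.20) * 9.0
    return round(min(max(score, 1.0), 10.0), 1)
```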
2. The 7 Authoritative Data Sources
We fetch raw physical data in real-time. There is no scraping and no LLM guesswork:
- Open-Meteo Archive: Pulls 5 years of daily historical data (temperature, precipitation, wind, UV, humidity) to assess Flood, Heat, and UV risks.
- USGS FDSNWS: Queries 50 years of Magnitude 5+ earthquake data within a 100km radius for Seismic risk scoring.
- NOAA IBTrACS: Cross-references 30 years of global cyclone tracks within a 200km radius for Storm risk.
- WAQI (World Air Quality Index): Pulls real-time air quality (PM2.5, PM10, ozone) from the nearest regional sensor station.
- WRI Aqueduct: Uses global baseline water stress indices to determine long-term freshwater reliability.
- NASA FIRMS: Supplements analysis with active fire/wildfire detections around the region.
- Open-Meteo Geocoding: Translates any user-inputted city name into precise global coordinates.
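As one example of how these sources are queried, the USGS FDSNWS event API accepts the radius, magnitude, and time-window constraints described above directly as query parameters. This is a stand-alone sketch using only the standard library, not GeoLens's actual client code.

```python
import datetime as dt
import json
import urllib.parse
import urllib.request

USGS_EVENT_URL = "https://earthquake.usgs.gov/fdsnws/event/1/query"

def usgs_query_params(lat: float, lon: float, radius_km: float = 100.0,
                      years: int = 50, min_mag: float = 5.0) -> dict:
    """Build the FDSNWS query for M5+ quakes within radius_km of a point."""
    start = dt.date.today() - dt.timedelta(days=years * 365)
    return {
        "format": "geojson",
        "latitude": lat,
        "longitude": lon,
        "maxradiuskm": radius_km,
        "minmagnitude": min_mag,
        "starttime": start.isoformat(),
    }

def fetch_m5_quakes(lat: float, lon: float) -> list:
    """Fetch the raw earthquake catalog; one GeoJSON feature per event."""
    url = USGS_EVENT_URL + "?" + urllib.parse.urlencode(usgs_query_params(lat, lon))
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)["features"]
```

The returned feature count and magnitudes then feed the deterministic seismic scorer rather than being interpreted by the LLM.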
3. The 6-Agent LangGraph AI Pipeline
Once the real-world data is collected and scored by the math engine, it flows into our innovative AI orchestration pipeline consisting of 6 specialized agents:
- Coordinator Agent: Parses the user input, resolves the city to precise coordinates, and kicks off the pipeline.
- Data Collector Agent: Asynchronously dispatches requests to all 7 external APIs and aggregates the massive raw JSON responses.
- Analyst Agent: Executes the deterministic mathematical scoring engine on the raw data (generating 1.0–10.0 scores) and detects compounding hazards (like high heat occurring alongside low water storage).
- Supervisor Agent: Validates the math, ensuring all data arrays exist within valid bounds and handles any missing third-party API responses gracefully.
- Hallucination Checker Agent: Acts as an LLM firewall. It strictly monitors the Generation layer to guarantee the AI narrative doesn't invent risks or contradict the mathematics.
- Responder Agent: Powered by Qwen3 (via Featherless.ai), this agent writes the final, compelling markdown narrative and personalized user recommendations.
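The hand-off between the six agents can be sketched as sequential transformations of a shared state dict; in LangGraph each function below would be registered as a node on a `StateGraph` with edges in this order. All function bodies are illustrative placeholders, not GeoLens's implementation.

```python
# Plain-Python sketch of the 6-agent pipeline's state hand-off.
def coordinator(state: dict) -> dict:
    # Resolve the user's city string to coordinates (placeholder values).
    return {**state, "coords": (40.7, -74.0)}

def data_collector(state: dict) -> dict:
    # Concurrently fetch the 7 APIs; placeholder payload here.
    return {**state, "raw": {"hot_days": 12}}

def analyst(state: dict) -> dict:
    # Run the deterministic scoring engine on the raw data.
    return {**state, "scores": {"heat": 5.5}}

def supervisor(state: dict) -> dict:
    # Validate that every score is within the 1.0-10.0 bounds.
    ok = all(1.0 <= s <= 10.0 for s in state["scores"].values())
    return {**state, "validated": ok}

def hallucination_checker(state: dict) -> dict:
    # Firewall step: the narrative may only restate the computed scores.
    return state

def responder(state: dict) -> dict:
    # LLM writes the final markdown narrative from the validated scores.
    return {**state, "narrative": f"Heat risk: {state['scores']['heat']}/10"}

PIPELINE = [coordinator, data_collector, analyst, supervisor,
            hallucination_checker, responder]

def run(city: str) -> dict:
    state = {"city": city}
    for agent in PIPELINE:
        state = agent(state)
    return state
```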
Challenges we ran into
Building a unified API layer across 7 distinct scientific databases was immensely difficult. For example, the NOAA IBTrACS cyclone database is a massive 26 MB CSV dataset. Eagerly downloading this during our initial DigitalOcean deployments kept timing out our cloud health checks. We had to pivot our cloud architecture, shifting the data processing into a multi-stage Docker build so the data was pre-baked into the image, dropping server start time from 2 minutes to less than 2 seconds.
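A multi-stage build along these lines pre-bakes the dataset at image-build time so the runtime container boots without downloading anything. The base images, script path (`scripts/preprocess_ibtracs.py`), and output paths below are assumptions for illustration, not the project's actual Dockerfile.

```dockerfile
# Stage 1: download and preprocess the ~26 MB IBTrACS CSV at build time,
# so the runtime container never fetches it during a health-checked boot.
FROM python:3.11-slim AS data
RUN pip install --no-cache-dir pandas requests
COPY scripts/preprocess_ibtracs.py .
RUN python preprocess_ibtracs.py --out /data/ibtracs.parquet

# Stage 2: the lean runtime image with the dataset already baked in.
FROM python:3.11-slim
WORKDIR /app
COPY --from=data /data/ibtracs.parquet ./data/
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Because the heavy download happens in the build stage, a failed fetch breaks the build (where it is visible) rather than the deployment's health checks.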
Additionally, keeping the LLM strictly bound to the mathematical truth was a massive challenge. Initially, the LLM would occasionally try to "guess" weather patterns based on its training data. We solved this by implementing our LangGraph Hallucination Checker agent, ensuring the text outputs perfectly and strictly mirror our mathematical scoring engine.
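The core check is cheap to express deterministically: every numeric risk score the narrative mentions must match one the math engine actually produced. GeoLens's checker is an LLM agent; this regex-based sketch just illustrates the invariant it enforces, and the score format is assumed.

```python
import re

def narrative_matches_scores(narrative: str, scores: dict) -> bool:
    """Return True iff every 'X.Y/10' score quoted in the narrative
    appears among the math engine's computed scores (illustrative check)."""
    quoted = {float(m) for m in re.findall(r"\b(\d+\.\d)\s*/\s*10\b", narrative)}
    computed = {round(s, 1) for s in scores.values()}
    return quoted.issubset(computed)
```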
Accomplishments that we're proud of
- Zero AI Hallucinations: We successfully bridged determinism and generative AI by strictly separating the math engine from the narrative engine.
- Speed & Scalability: We reduced our frontend navigation latency from 8+ seconds down to instant transitions using Next.js streaming states.
- Robust Data Ingestion: Orchestrating 7 live, third-party API calls concurrently while staying within strict timeouts.
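The concurrent-ingestion pattern from the last point can be sketched with `asyncio`: fan out all source calls at once, give each a hard timeout, and degrade gracefully when one is slow. The source names and the fake HTTP call are stand-ins for the real API clients.

```python
import asyncio

async def fake_http_call(delay: float) -> dict:
    await asyncio.sleep(delay)  # stands in for a real HTTP request
    return {"ok": True}

async def fetch_source(name: str, delay: float):
    """Fetch one source under a strict per-source time budget."""
    try:
        payload = await asyncio.wait_for(fake_http_call(delay), timeout=5.0)
        return name, payload
    except asyncio.TimeoutError:
        return name, None  # degrade gracefully instead of failing the run

async def collect_all() -> dict:
    sources = {"open-meteo": 0.1, "usgs": 0.2, "ibtracs": 0.1, "waqi": 0.1,
               "aqueduct": 0.1, "firms": 0.1, "geocoding": 0.05}
    # All 7 calls run concurrently; total latency is the slowest source,
    # not the sum of all of them.
    results = await asyncio.gather(*(fetch_source(n, d) for n, d in sources.items()))
    return dict(results)
```

Downstream scorers then treat a `None` payload as a missing dimension rather than an error.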
What we learned
- Agentic Workflows: We learned the incredible power of LangGraph in keeping multi-step AI tasks organized, testable, and highly reliable.
- Cloud Infrastructure: We learned firsthand why separating build-time dependencies from runtime execution is critical when working with containerized app platforms like DigitalOcean's.
- UX the Hard Way: We learned that "sleek" isn't always best. We migrated from an initial glassmorphism-heavy UI to a flat, high-contrast, accessible design because presenting critical environmental data requires immense clarity.
What's next for GeoLens (Scalability & Future Feasibility)
Our immediate roadmap focuses on scaling up with deeply localized resilience data. We plan to integrate live satellite imagery APIs so users can run an "Interactive Map Scan," generating localized risk scores for specific neighborhoods rather than whole cities. We also aim to incorporate civic infrastructure data (e.g., drainage capacity or green roofing metrics) to score a city's actual preparedness for climate events, not just its physical exposure. The backend is stateless and fully Dockerized, so we are ready to scale our API logic horizontally on DigitalOcean to support this.
Technologies Used
- Frontend: Next.js 14 App Router, Vanilla CSS, Recharts, React Leaflet, Vercel (Hosting)
- Backend: FastAPI, Python 3.11, Docker, DigitalOcean App Platform (Hosting)
- AI / Orchestration: LangGraph, LangChain, Qwen3-30B (via Featherless.ai)
- Data APIs: Open-Meteo Archive, USGS FDSNWS, NOAA IBTrACS, WAQI (Air Quality), WRI Aqueduct, NASA FIRMS
Team Details
- Muhammad Jawad (@JawadGigyani) — AI & Frontend: LangGraph multi-agent pipeline architecture, Qwen3 prompt engineering, hallucination detection system, Next.js frontend architecture, UI/UX design.
- Ali Ahmad (@aliahmad-aa) — AI & Backend: Deterministic scoring engine design (all 7 factor algorithms), FastAPI REST API design, 7 live data service integrations.
- Hamad Khan (@HamadKhan345) — DevOps & Data: Cloud architecture, Docker containerization, DigitalOcean deployment pipeline, large dataset preprocessing (IBTrACS), Next.js build optimizations.
Built With
- css
- digitalocean
- docker
- fastapi
- featherless.ai
- langgraph
- nextjs
- python
- qwen3-30b