Inspiration

Our inspiration came from the theme "Reduced Inequalities." Urban heat islands don't affect everyone equally; they disproportionately impact low-income and vulnerable communities. We were inspired to turn passive satellite data into an active tool for climate justice, empowering citizens to understand their local heat risks and, more importantly, take direct action to fix them.

What it does

HeatQuest is an AI-powered, mobile-first web app that transforms any neighbourhood into an actionable "climate mission."

Detects: A user can point anywhere on our live map. Our FastAPI backend performs a real-time analysis, fetching the latest Sentinel-2 (for vegetation) and Landsat (for temperature) satellite data for that specific area.

Calculates: The backend's Python pipeline calculates two key numbers for every 30m grid cell: NDVI (a measure of "greenness", i.e. vegetation) and Temperature (a measure of "hotness", in Celsius). It then combines them into our final heat_score = temp - (0.3 * ndvi).
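
The per-cell scoring can be sketched in a few lines. This is a minimal illustration of the formula, not our production pipeline; the reflectance and temperature values are made up:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalised Difference Vegetation Index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

def heat_score(temp_c: float, ndvi_value: float) -> float:
    """Combine surface temperature (Celsius) and NDVI into a single risk score."""
    return temp_c - (0.3 * ndvi_value)

# A bare, paved cell (hot, unvegetated) scores higher than a leafy park cell.
paved = heat_score(38.0, ndvi(0.25, 0.20))   # low NDVI
park = heat_score(31.0, ndvi(0.60, 0.15))    # high NDVI
```

Because NDVI sits in roughly [-1, 1], the 0.3 weight nudges the score down for green cells without swamping the temperature signal.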

Analyses (This is the AI part): This heat_score and ndvi data is then sent to our Vertex AI (Gemini) model. The AI interprets these numbers to generate an ai_summary explaining why it's a hotspot (e.g., "This is a paved surface with no vegetation").

Acts: The AI uses this analysis to generate a gamified, location-specific "climate mission" (like "Asphalt Avenger: Document this heat trap!"), which is saved to our Supabase database for the user to complete.

How we built it

We built HeatQuest as a complete, full-stack application, managed within a single monorepo.

The user's journey begins on our mobile-first frontend, a sleek app built with React, Vite, and TypeScript. We used React Router for navigation and Framer Motion for smooth animations, with a beautiful component library from ShadCN/UI. The central experience is the mission screen, where users can view and complete gamified tasks.

When a user selects an area on the map, the app calls our "central brain": a high-performance FastAPI backend written in Python. This server orchestrates all our logic, and we use Pydantic models to define strict, reliable data contracts, ensuring the frontend and backend always speak the same language perfectly.

This API call triggers our "Heat Engine," the data science pipeline that works in real-time. First, it uses Shapely and Pyproj to create a precise geographic buffer around the user's coordinates. It then generates a high-resolution grid for the analysis. The engine orchestrates the download of raw satellite data from Landsat (for Temperature) and Sentinel-2 (for NDVI), running our core scientific formula, heat_score = temp - (0.3 * ndvi), to calculate the risk for every single cell.
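
In production the buffering and gridding are done with Shapely and Pyproj in a projected metric CRS; as a rough, self-contained illustration, tiling a 500m neighbourhood into 30m cells looks like this (flat-earth approximation, illustrative only):

```python
import math

def grid_cells(lat: float, lon: float, radius_m: float = 500.0, cell_m: float = 30.0):
    """Tile a square around (lat, lon) into cell_m-sized cells.

    Flat-earth approximation for illustration; the real engine projects with
    Pyproj and buffers with Shapely for geodetic precision.
    """
    deg_per_m_lat = 1.0 / 111_320.0                              # degrees per metre of latitude
    deg_per_m_lon = deg_per_m_lat / math.cos(math.radians(lat))  # shrinks with latitude
    steps = int(2 * radius_m // cell_m)
    cells = []
    for i in range(steps):
        for j in range(steps):
            cell_lat = lat - radius_m * deg_per_m_lat + i * cell_m * deg_per_m_lat
            cell_lon = lon - radius_m * deg_per_m_lon + j * cell_m * deg_per_m_lon
            cells.append((round(cell_lat, 6), round(cell_lon, 6)))
    return cells

cells = grid_cells(51.5074, -0.1278)  # central London: 33 x 33 = 1089 cells
```

Each cell centre then gets its own NDVI, temperature, and heat_score.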

This live analysis is powerful but slow. To solve this, we architected a "Smart Cache" using Supabase (PostgreSQL). Our API uses a parent/child cell schema: the first time an area is scanned, the results are saved. Every subsequent user who scans that same area gets an instant response in under 0.5 seconds.
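
The cache-then-compute pattern is simple at heart. Here is a sketch with an in-memory dict standing in for the Supabase parent/child tables; the function names and the snap precision are illustrative, not our actual schema:

```python
_cache: dict[tuple[float, float], dict] = {}

def run_heat_engine(lat: float, lon: float) -> dict:
    """Placeholder for the expensive 30-40s satellite analysis."""
    return {"heat_score": 36.4, "ndvi": 0.12}

def scan_area(lat: float, lon: float) -> dict:
    key = (round(lat, 4), round(lon, 4))   # snap coordinates to a parent-cell key
    if key in _cache:                      # cache hit: instant response
        return _cache[key]
    result = run_heat_engine(lat, lon)     # cache miss: first user pays the cost
    _cache[key] = result
    return result
```

The first scan of a key is slow; every later scan of the same key returns the stored result immediately.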

Once a hotspot is identified, the backend calls our "Mission Architect," Vertex AI (Gemini). The AI doesn't just see the data; it analyses the numbers (e.g., "high temp, low ndvi") and returns a structured JSON response with a human-readable summary and a list of suggested actions.
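
The contract between our backend and the model is "numbers in, structured JSON out". The sketch below mocks the Gemini call itself and only shows the prompt/parse shape; the field names mirror our API but are illustrative:

```python
import json

def build_prompt(heat_score: float, ndvi: float) -> str:
    return (
        f"A 30m grid cell has heat_score={heat_score:.1f} and ndvi={ndvi:.2f}. "
        "Explain why it is a heat hotspot and suggest actions. "
        'Respond as JSON: {"ai_summary": str, "suggested_actions": [str, ...]}'
    )

def parse_mission(raw: str) -> dict:
    mission = json.loads(raw)
    assert {"ai_summary", "suggested_actions"} <= mission.keys()
    return mission

# A mocked model response, standing in for the real Vertex AI (Gemini) call:
mocked = ('{"ai_summary": "Paved surface with no vegetation.", '
          '"suggested_actions": ["Document this heat trap"]}')
mission = parse_mission(mocked)
```

Validating the JSON at the boundary means a malformed model reply fails loudly instead of leaking into the missions table.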

This AI response is then saved to our Supabase database, which also doubles as our "Game Engine." A complete service layer manages user profiles (tracking points and levels), missions (pending and completed), discoveries, and a global leaderboard, creating a fully interactive and engaging experience.

Challenges we ran into

Our main challenge was a fundamental conflict: we wanted live data, but we also needed a fast app. Our core idea was that any user could tap anywhere on the map and get a real-time heat analysis. But doing this "live" for every single person is incredibly difficult. When a user queried our API, our FastAPI backend had to:

1. Fetch multiple large satellite files (Landsat and Sentinel-2) from the cloud.
2. Create a precise, high-resolution grid for that user's specific 500m area.
3. Run our data science math (calculating NDVI, Celsius temperature, and the final heat_score) for every single cell in that grid.

This entire on-demand pipeline took 30-40 seconds. For a mobile app, that’s an eternity. A user would just assume the app was broken and close it. We solved this by architecting a "Smart Community Cache" directly into our API and Supabase database. We realised we only had to pay that 30-second cost once.

Now, the first user to scan an area triggers the full analysis, and our API saves those results to our parent/child cell tables. Every subsequent user who explores that same area gets an instant response (<0.5s) loaded directly from the cache. This complex solution was our biggest breakthrough, turning a slow, individual calculation into a lightning-fast, shared community asset.

Accomplishments that we're proud of

The "Smart Community Cache" API: This is our biggest technical win. We successfully engineered a high-performance FastAPI backend that solves our core 30-second performance problem. Our parent/child schema in Supabase acts as a powerful cache. The first user's scan populates the database, allowing every subsequent user to get an instant (<0.5s) response.

The Full End-to-End AI Pipeline: We didn't just build a map; we built a complete "brain." We're incredibly proud of the full-loop integration: a React frontend triggers our FastAPI backend, which runs a GeoPandas analysis, which then calls Vertex AI (Gemini) to get a structured JSON analysis, which is then saved to Supabase as a gamified mission for the user to see.

A Complete Gamification Engine: We didn't just plan the "quest" part; we built it. Our supabase-client.py is a complete service layer that manages user profiles, add_points, create_mission, and complete_mission. Our React frontend (MissionDetail.js) is a fully functional, animated UI using Framer Motion and ShadCN/UI that allows users to check off actions and complete those missions.
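
The service-layer logic can be sketched with an in-memory stand-in for the Supabase tables. Method names mirror supabase-client.py; the point values and level rule here are illustrative assumptions:

```python
class GameEngine:
    """In-memory sketch of the profiles/missions service layer."""

    def __init__(self):
        self.profiles: dict[str, dict] = {}
        self.missions: dict[int, dict] = {}
        self._next_id = 1

    def add_points(self, user: str, points: int) -> int:
        profile = self.profiles.setdefault(user, {"points": 0, "level": 1})
        profile["points"] += points
        profile["level"] = 1 + profile["points"] // 100   # illustrative level curve
        return profile["points"]

    def create_mission(self, user: str, title: str, reward: int = 100) -> int:
        mission_id = self._next_id
        self._next_id += 1
        self.missions[mission_id] = {"user": user, "title": title,
                                     "reward": reward, "status": "pending"}
        return mission_id

    def complete_mission(self, mission_id: int) -> None:
        mission = self.missions[mission_id]
        mission["status"] = "completed"
        self.add_points(mission["user"], mission["reward"])

engine = GameEngine()
mid = engine.create_mission("ada", "Asphalt Avenger: Document this heat trap!")
engine.complete_mission(mid)
```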

Real-World Data Science in Production: We successfully translated a complex data-science model from a notebook into a robust API. We aren't using fake numbers. Our backend uses Rasterio and Shapely to process raw Landsat data, correctly convert it to Celsius, and combine it with Sentinel-2 NDVI to create a scientifically valid heat_score.
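
The raw-band conversions are short. The scale and offset below are the published rescaling factors for Landsat Collection 2 Level-2 surface temperature (ST_B10); the digital number is an example value, not real data:

```python
def landsat_c2_to_celsius(dn: int) -> float:
    """Convert a Landsat Collection 2 Level-2 ST_B10 digital number to Celsius."""
    kelvin = dn * 0.00341802 + 149.0   # published C2 L2 rescaling factors
    return kelvin - 273.15

def sentinel2_ndvi(nir: float, red: float) -> float:
    """NDVI from Sentinel-2 B8 (NIR) and B4 (red) reflectance."""
    return (nir - red) / (nir + red)

temp_c = landsat_c2_to_celsius(46000)   # an example DN, roughly 33 degrees C
```

In the real pipeline Rasterio applies these per pixel across the whole grid rather than one value at a time.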

What we learned

Local vs. Production: We learned the massive difference between a local notebook analysis and a production service. We had to refactor all our GeoPandas logic into a high-performance FastAPI endpoint that could handle live, asynchronous requests.

Architecture is the Solution: Our 30-second API lag wasn't a problem we could fix with faster code; it was an architecture problem. We learned that our Supabase parent/child "Smart Cache" schema was the real solution. The design of the system was more important than the speed of the code.

Cloud Orchestration is Key: We learned how to make multiple, complex cloud services "talk" to each other. We used FastAPI as the central orchestrator to query our Supabase database, call the Vertex AI (Gemini) API, and process data from the AWS (Landsat) bucket.

Data Contracts are Essential: Using Pydantic models in our FastAPI backend was a lifesaver. By defining strict schemas (like GridHeatScoreResponse), we eliminated a whole class of bugs between our Python backend and our TypeScript/React frontend, allowing us to build faster with almost no integration errors.
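
GridHeatScoreResponse is the model named in our backend; the exact fields in this sketch are illustrative, but it shows the data-contract idea (this assumes the pydantic package is installed):

```python
from pydantic import BaseModel

class CellScore(BaseModel):
    lat: float
    lon: float
    ndvi: float
    heat_score: float

class GridHeatScoreResponse(BaseModel):
    cell_size_m: int = 30
    cells: list[CellScore]

# FastAPI validates and serialises response models automatically; a payload
# with a wrong type is rejected before it ever reaches the React frontend.
resp = GridHeatScoreResponse(
    cells=[{"lat": 51.5, "lon": -0.13, "ndvi": 0.12, "heat_score": 36.4}]
)
```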

What's next for HeatQuest

Our technology is the foundation for a global, scalable platform. Our next steps move from a "game" to a real-world "green economy engine," connecting citizens, governments, and corporations.

  1. Launch the Green Economy Engine (B2B/B2C) Our profiles and missions tables aren't just for game points. We will connect them to real-world value. We will partner with local organisations and city governments to sponsor missions. When a user completes a "tree planting" mission, they won't just get 100 points; they'll get a real-world reward (like a $5 voucher or a direct payment) funded by that sponsor. This turns our app into an economic engine that pays citizens to actively cool their own neighbourhoods.

  2. Scale as an "Impact-as-a-Service" Platform (B2G) Our "Smart Cache" and on-demand analysis API are powerful. We will productize this backend and sell it directly to city governments and urban planners, especially in low-income countries. We can provide them with a live, high-resolution dashboard showing precisely where heat is a critical risk. This allows them to allocate public funds for green infrastructure (like new parks or cool roofs) with maximum impact, saving money and lives.

  3. Deploy the Fully Automated AI Engine (Global Scale) To "upload this to all low-income countries," we can't manually create missions. This is where our AI becomes the key. By fully activating our Vertex AI (Gemini) vision models, the AI won't just infer a cause from numbers; it will see the "paved parking lot" or "bare school rooftop" from Mapbox satellite imagery. This allows us to auto-generate thousands of hyper-local missions in any new city, making our integration and scaling instant.

  4. Prove the ROI (Data-as-a-Service) This is our ultimate business model. Our "Feedback Loop" (re-scanning a cell 6 months after a mission) is our key data product. We can provide our government and corporate partners with verifiable reports that prove their investment worked. We can show them a chart: "Your sponsored missions in this district actually lowered the average heat_score by 3.4°C." This data provides the accountability and "Return on Investment" (ROI) they need to fund the platform long-term.

Built With

  • amazon-web-services
  • cloud
  • fastapi
  • folium
  • framer
  • geopandas
  • google
  • icons
  • lucide
  • mapbox
  • motion
  • numpy
  • postgresql
  • pydantic
  • pyproj
  • python
  • rasterio
  • rasterstats
  • react
  • react-leaflet
  • router
  • s3
  • shadcn/ui
  • shapely
  • supabase
  • typescript
  • uvicorn
  • vertexai
  • vite