Inspiration
The inspiration for Terra Zone AI struck during a university career fair. I was speaking with representatives from an environmental consulting firm who were hiring interns to help evaluate whether plots of raw land were physically and legally viable for development. They described an arduous, manual process: analysts spend weeks pulling geological surveys, cross-referencing local zoning codes, and running foundation estimates just to give a client a preliminary "yes" or "no."
I realized that this specific, high-stakes industry was completely underserved by modern technology. While generative AI is transforming software, the civil engineering feasibility sector is still relying on manual research. I wanted to step into this untouched space with AI and give developers an instant, scalable answer to a very expensive problem.
What it does
Terra Zone AI is like "Zillow meets ChatGPT" for real estate developers. Instead of spending $20k and waiting six weeks for consultants, a user simply draws a polygon on our interactive map. In under 60 seconds, our AI decision engine pulls together geological data, municipal zoning codes, and construction APIs to generate an investment-grade feasibility report.
It calculates buildable area, models construction risk, and ultimately generates a definitive GO, NO-GO, or CONDITIONAL verdict complete with a risk-adjusted financial breakdown and actionable alternative building strategies.
How we built it
The platform is a decoupled, serverless application built with React 19, TypeScript, Vite, and Tailwind CSS.
- Interactive Mapping: We used `@mapbox/mapbox-gl-draw` on top of MapLibre so users can map out custom land parcels. `@turf/turf` calculates the geospatial boundaries.
- Serverless Orchestration: The backend uses Supabase Edge Functions (Deno runtime). We wrote a `decision-engine` that simultaneously queries the USGS Bedrock API, USDA Web Soil Survey, and Open-Elevation APIs to fetch raw physical data.
- Mathematical Engine: To model feasibility, we use dynamic financial logic. For example, risk-adjusted ROI is calculated as: $$ROI_{adjusted} = \left( \frac{\text{Projected Revenue} - \text{Total Cost}}{\text{Total Cost}} \right) \times \left(1 - \Sigma\, \text{Risk Penalties}\right) \times 100$$
- AI Synthesis: The raw geospatial and financial data is fed into Google Gemini 2.0 Flash. We engineered strict, structured JSON prompts so Gemini acts as a deterministic evaluator, writing an executive summary without hallucinating metrics.
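The risk-adjusted ROI formula above translates directly into code. A minimal sketch (the dollar figures and penalty values here are hypothetical, purely for illustration):

```typescript
// Risk-adjusted ROI, mirroring the formula above:
// ROI = ((revenue - cost) / cost) * (1 - sum(risk penalties)) * 100
function riskAdjustedRoi(
  projectedRevenue: number,
  totalCost: number,
  riskPenalties: number[], // each penalty is a fraction, e.g. 0.05 for 5%
): number {
  const rawRoi = (projectedRevenue - totalCost) / totalCost;
  const totalPenalty = riskPenalties.reduce((sum, p) => sum + p, 0);
  return rawRoi * (1 - totalPenalty) * 100;
}

// Hypothetical parcel: $1.3M projected revenue, $1.0M total cost,
// with soil (5%) and slope (10%) risk penalties applied.
const roi = riskAdjustedRoi(1_300_000, 1_000_000, [0.05, 0.10]);
// 30% raw ROI scaled by the 0.85 risk factor
```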
Challenges we ran into
Building an application that bridges raw geospatial data and conversational AI came with significant hurdles:
- Taming the AI Output: Early on, the AI would occasionally generate malformed text or empty bullet points for its "Path to GO" recommendations. We had to build a 7-layer bulletproof parsing mechanism and rigorously tune the prompt constraints to ensure the AI always output validated JSON with zero UI-breaking empty elements.
- Geospatial Calculation Syncing: Aligning the frontend polygon area calculations with the backend's geographic coordinates was tricky; small discrepancies caused anomalies in the pricing model. We ultimately resolved the discrepancy by treating the frontend's area calculation as the single source of truth.
- Performance and Asynchrony: Condensing a 6-week process into a 60-second window required heavy optimization. Coordinating slow external APIs (like USGS and USDA) with the `gemini-orchestrator` meant carefully managing race conditions inside Deno to prevent edge-function timeouts.
- Persistent Render Errors: We encountered underlying Three.js and React-Three-Fiber context errors that persistently bloated the browser console. We fixed this by building a specific logic interceptor to cleanly suppress the 3D rendering errors.
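The fan-out pattern described above — querying several slow external sources without letting one stall the whole edge function — can be sketched with `Promise.allSettled` plus a per-call deadline. The data-source functions below are stand-ins, not Terra Zone's real API clients:

```typescript
// Wrap a slow external call with a deadline so one stalled API
// cannot push the whole edge function past its timeout.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("timeout")), ms);
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// Hypothetical stand-ins for the USGS / USDA / elevation calls.
const fetchBedrock = async () => ({ depthM: 4.2 });
const fetchSoil = async () => ({ drainage: "moderate" });
const fetchElevation = async () => ({ meters: 45 });

async function gatherSiteData() {
  // allSettled: a single failed or timed-out source degrades the
  // report instead of rejecting the entire analysis.
  const [bedrock, soil, elevation] = await Promise.allSettled([
    withTimeout(fetchBedrock(), 10_000),
    withTimeout(fetchSoil(), 10_000),
    withTimeout(fetchElevation(), 10_000),
  ]);
  return {
    bedrock: bedrock.status === "fulfilled" ? bedrock.value : null,
    soil: soil.status === "fulfilled" ? soil.value : null,
    elevation: elevation.status === "fulfilled" ? elevation.value : null,
  };
}
```

Using `Promise.allSettled` instead of `Promise.all` is the key design choice here: a rejected call yields a `null` field rather than aborting the other in-flight requests.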
Accomplishments that we're proud of
We successfully transformed a messy, unstructured, and expensive real-world process into a single, seamless digital action: drawing a shape on a map. We are incredibly proud of the 7-layer fallback system we engineered; even if an external API fails or the LLM hallucinates, our platform gracefully falls back to deterministic mathematical calculations, guaranteeing the user never sees a broken interface.
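The layered-fallback idea can be sketched as an ordered chain of parsers, each stricter than the one after it, ending in a deterministic default. This is a simplified two-layer illustration, not the actual seven-layer system:

```typescript
type Parser = (raw: string) => string[] | null;

// Layer 1: strict JSON array of non-empty strings.
const strictJson: Parser = (raw) => {
  try {
    const data = JSON.parse(raw);
    if (Array.isArray(data) && data.every((x) => typeof x === "string" && x.trim() !== "")) {
      return data;
    }
  } catch { /* malformed JSON: fall through to the next layer */ }
  return null;
};

// Layer 2: salvage bullet-like lines from free text.
const bulletLines: Parser = (raw) =>
  raw.split("\n")
    .map((line) => line.replace(/^[-*•]\s*/, "").trim())
    .filter((line) => line !== "");

// Try each layer in order; the final fallback is deterministic,
// so the UI never renders a broken or empty list.
function parseRecommendations(raw: string): string[] {
  for (const parse of [strictJson, bulletLines]) {
    const result = parse(raw);
    if (result && result.length > 0) return result;
  }
  return ["Insufficient data: defaulting to deterministic analysis."];
}
```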
What we learned
This project was a masterclass in orchestrating decoupled architectures. We learned how to build robust, serverless API chains on Supabase where Edge Functions communicate seamlessly in parallel.
More importantly, it completely shifted how we view Large Language Models. We learned how to treat LLMs not just as chatbots, but as deterministic data synthesizers. By enforcing JSON schemas and strict prompt instructions, we discovered how to extract professional, investment-grade logic from Gemini safely.
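One piece of that "LLM as data synthesizer" pattern can be sketched as a validator that rejects any model output violating the expected contract before it reaches the UI. The `Verdict` shape below is hypothetical, not Terra Zone's actual schema:

```typescript
// Hypothetical shape of the verdict object required from the model.
interface Verdict {
  decision: "GO" | "NO_GO" | "CONDITIONAL";
  summary: string;
  riskScore: number; // fraction between 0 and 1
}

// Validate untrusted model output; return null rather than let bad
// data through, so callers can fall back to deterministic math.
function parseVerdict(raw: string): Verdict | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null;
  }
  if (typeof data !== "object" || data === null) return null;
  const v = data as Record<string, unknown>;
  const validDecision =
    v.decision === "GO" || v.decision === "NO_GO" || v.decision === "CONDITIONAL";
  const validSummary = typeof v.summary === "string" && v.summary.trim() !== "";
  const validScore =
    typeof v.riskScore === "number" && v.riskScore >= 0 && v.riskScore <= 1;
  return validDecision && validSummary && validScore ? (v as unknown as Verdict) : null;
}
```

Pairing a schema-constrained prompt with a hard validator like this means a hallucinated or malformed response is caught at the boundary instead of propagating into the report.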
What's next for Terra Zone
Right now, our zoning and real estate data is focused on our pilot market in Princeton, New Jersey. The immediate next step is expanding our spatial databases to a statewide rollout in NJ, and eventually scaling nationwide.
Technically, we are working on implementing user authentication (Supabase Auth) so developers can save historical analyses. We are also integrating more predictive layers, including FEMA flood zone overlays and historical municipality permitting success rates, to make the AI verdict even more precise!
Built With
- deno
- framer-motion
- google-gemini-2.0-flash-api
- maplibre-gl
- open-elevation
- postgis
- postgresql
- radix-ui
- react-19
- react-three-fiber
- supabase
- tailwind-css
- three.js
- turf.js
- typescript
- usda-web-soil-survey-api
- usgs-bedrock-api
- vite