Inspiration

The real estate market is notorious for requiring buyers to have incredible imagination. When looking at a "fixer-upper" or an outdated listing, it's hard to visualize its true potential. Furthermore, assessing a property isn't just about the four walls; it requires hours of cross-referencing maps, checking transit times, and calculating neighborhood livability.

We were inspired by the newly released Amazon Nova foundation models. We realized that by combining Nova's deep reasoning capabilities with its lightning-fast multimodal vision, we could build a fully autonomous AI team that does the work of a real estate agent, a location analyst, and an interior designer—all in a matter of seconds.

What it does

Property Nova is an autonomous multi-agent system that takes the guesswork out of property investment. When a user inputs a target city, our system:

  1. Scouts live real estate listings from the web and extracts the raw data.
  2. Analyzes the location using the Google Places API to calculate a livability grade based on proximity to essential amenities.
  3. Reimagines the property using Amazon Nova Vision to analyze the listing photos, understand the room's geometry, and instantly generate stunning, photorealistic interior design renovations.
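The livability grade in step 2 can be sketched as a weighted proximity score. The amenity weights, distance thresholds, and grade cutoffs below are illustrative assumptions, not the project's actual rubric:

```python
# Hypothetical livability scoring sketch; the weights, 0.5-3 km decay window,
# and letter-grade cutoffs are illustrative assumptions, not the real rubric.
AMENITY_WEIGHTS = {"grocery": 0.3, "transit": 0.3, "school": 0.2, "park": 0.2}

def livability_grade(nearest_km: dict) -> str:
    """Map nearest-amenity distances (in km) to a letter grade."""
    score = 0.0
    for amenity, weight in AMENITY_WEIGHTS.items():
        dist = nearest_km.get(amenity)
        if dist is None:
            continue  # amenity not found nearby; contributes nothing
        # Full credit within 0.5 km, linearly decaying to zero at 3 km.
        score += weight * max(0.0, min(1.0, (3.0 - dist) / 2.5))
    for grade, cutoff in [("A", 0.8), ("B", 0.6), ("C", 0.4), ("D", 0.2)]:
        if score >= cutoff:
            return grade
    return "F"

print(livability_grade({"grocery": 0.4, "transit": 0.3, "school": 1.0, "park": 0.5}))  # → A
```

In the actual system, the distances would come from Google Places results and the grading itself is delegated to the Location Analyst agent's reasoning rather than a fixed formula.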

How we built it

We built Property Nova using a modern, scalable architecture heavily reliant on Amazon Bedrock.

  • The Brains: We utilized us.amazon.nova-pro-v1:0 for complex, data-heavy reasoning tasks (like the Location Analyst agent) and us.amazon.nova-lite-v1:0 for lightning-fast multimodal vision tasks (like the Interior Decorator agent).
  • The Orchestration: We used LangChain and LangGraph in Python to create the multi-agent swarm, managed by a Supervisor agent that routes tasks appropriately.
  • The Backend: A robust FastAPI server handles the API endpoints and streams the agentic steps to the front end.
  • The Frontend: A responsive, animated UI built with React, Vite, and TailwindCSS, carefully designed to mimic a high-end luxury real estate platform.
  • Deployment: The entire stack is containerized using Docker for seamless local execution.
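The Supervisor's routing logic can be sketched, LangGraph machinery aside, as a dispatcher that pairs each incoming task with an agent and its Bedrock model tier. The model IDs come from the stack above; the task-to-agent mapping is an assumption for illustration:

```python
# Sketch of the Supervisor routing pattern (plain Python; LangGraph omitted).
# Model IDs are from the real stack; the routing rules are assumptions.
NOVA_PRO = "us.amazon.nova-pro-v1:0"    # deep reasoning tier
NOVA_LITE = "us.amazon.nova-lite-v1:0"  # fast multimodal vision tier

AGENT_MODELS = {
    "scout": NOVA_PRO,
    "location_analyst": NOVA_PRO,
    "interior_decorator": NOVA_LITE,
}

def route(task: dict) -> tuple:
    """Pick the agent (and its Bedrock model) for an incoming task."""
    if task.get("images"):            # multimodal payloads go to the vision agent
        agent = "interior_decorator"
    elif task.get("places_data"):     # raw Google Places JSON goes to the analyst
        agent = "location_analyst"
    else:                             # everything else starts with scouting
        agent = "scout"
    return agent, AGENT_MODELS[agent]

print(route({"images": ["room.jpg"]}))  # → ('interior_decorator', NOVA_LITE id)
```

In the real system this decision lives in a LangGraph Supervisor node, which also carries conversation state between hops; the sketch only captures the routing shape.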

Challenges we ran into

One of our biggest hurdles was multimodal context filtering. Our Interior Decorator agent was initially redesigning everything it saw—including the exterior facades and front lawns of the houses! We had to engineer a strict two-step prompt pipeline where Amazon Nova Lite first classifies the image (e.g., "living room", "kitchen", "exterior") and aggressively filters out non-interior shots before attempting to generate a redesign.
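The two-step classify-then-filter pipeline described above can be sketched as follows. Here `classify_image` is a keyword-matching stand-in for the actual Nova Lite vision call, and the label set is an assumption:

```python
# Step 1: classify each listing photo; Step 2: only interiors reach the
# redesign stage. classify_image is a stand-in for the Nova Lite call.
INTERIOR_LABELS = {"living room", "kitchen", "bedroom", "bathroom", "dining room"}

def classify_image(image_id: str) -> str:
    """Stand-in classifier: keyword match on the filename for demo purposes."""
    for label in INTERIOR_LABELS:
        if label.replace(" ", "_") in image_id:
            return label
    return "exterior"  # anything unrecognized is treated as non-interior

def redesign_candidates(image_ids: list) -> list:
    """Aggressively filter out non-interior shots before redesign."""
    return [img for img in image_ids if classify_image(img) in INTERIOR_LABELS]

print(redesign_candidates(["living_room_1.jpg", "exterior_front.jpg", "kitchen_2.jpg"]))
```

The key design choice is failing closed: any image the classifier can't confidently place in an interior category is dropped, so the facade-and-lawn failure mode can't recur.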

Additionally, managing state across multiple autonomous agents required careful orchestration. Passing large chunks of scraped HTML and image byte-arrays between agents quickly inflated our context size.
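One common fix for this kind of context inflation (sketched here under assumptions; the source doesn't specify the exact mechanism) is to park large blobs in a side store and pass only short keys through the agent state:

```python
import hashlib

# Side store for large payloads (scraped HTML, image bytes). Agents exchange
# short content-addressed keys instead of the blobs themselves. A sketch;
# the real system's state handling may differ.
BLOB_STORE = {}

def put_blob(data: bytes) -> str:
    """Store a blob and return a short key derived from its hash."""
    key = hashlib.sha256(data).hexdigest()[:12]
    BLOB_STORE[key] = data
    return key

def get_blob(key: str) -> bytes:
    """Fetch a blob back when an agent actually needs the full payload."""
    return BLOB_STORE[key]

page_key = put_blob(b"<html>...huge scraped listing...</html>")
print(len(page_key))  # agents pass this 12-character key, not the raw HTML
```

This keeps the LLM context limited to references and summaries, while the full HTML or image bytes are only materialized inside the one agent that consumes them.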

What we learned

Building this project was a masterclass in Agentic AI. We learned:

  1. The power of a 300k context window: Amazon Nova Pro let us dump massive, unrefined JSON arrays of Google Places data into the prompt without hitting limits, so the model could organically extract and reason about the neighborhood metrics.
  2. Vision models are game-changers: Using Nova Lite to physically "see" a room's geometry and existing furniture completely revolutionized how our agents interact with standard web data.

What's next for Property Nova

In the future, we plan to integrate financial forecasting agents that can pull live mortgage rates and estimate the ROI of the interior renovations our vision model suggests. We also want to implement Amazon Nova Act to allow our agents to autonomously book viewings on behalf of the user directly on third-party real estate platforms!

Built With

  • Amazon Bedrock (Nova Pro, Nova Lite)
  • LangChain / LangGraph
  • Python / FastAPI
  • React / Vite / TailwindCSS
  • Docker