Inspiration

Enterprise design workflows often involve fragmented tools, slow feedback cycles, and difficulty connecting early-stage sketches to structured outputs. We were inspired to solve this by building a platform that could bridge that gap—making the messy, creative front-end of design work better aligned with structured decision-making. With the emergence of agent-based AI systems and the powerful capabilities of Perplexity’s Sonar API, we saw an opportunity to radically streamline and enrich the design process. We also noticed growing interest in incorporating non-traditional design considerations—like Vastu principles—into architectural and spatial planning, and wanted to explore how AI could support those too.

What it does

ArchiSonar is an AI-powered platform that transforms design sketches into structured concepts and offers instant, contextual feedback through domain-specific AI agents. Users can upload a design sketch or describe their idea, and ArchiSonar intelligently analyzes it using Perplexity’s Sonar Pro to generate relevant, actionable insights. The system supports a variety of domains—from user experience and accessibility to system architecture and even Vastu Shastra—providing nuanced, context-aware suggestions. Through Sonar Deep Search, users can query real-time, research-grade information on their design choices, best practices, or compliance guidelines, helping them iterate smarter and faster.

How we built it

We built ArchiSonar using TypeScript and Next.js for a scalable and performant web application architecture. The core intelligence layer is powered by Perplexity’s Sonar Pro and Sonar Deep Search APIs. Sonar Pro helps us extract meaning and generate structured concepts from sketch inputs or descriptions, while Deep Search is used to pull in live, contextual research—ranging from accessibility standards to Vastu alignment suggestions. We also created modular AI agents for different design domains, with prompt engineering tailored to ensure relevant and non-generic feedback. The platform integrates dynamic input handling, semantic search, and real-time content generation to support designers from ideation through refinement.
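The modular domain agents described above can be sketched roughly as follows. This is an illustrative reconstruction, not our exact code: the names (`DomainAgent`, `buildSonarRequest`, `vastuAgent`) are hypothetical, and it assumes Perplexity's Sonar API accepts an OpenAI-style chat-completions payload with a model such as `sonar-pro`.

```typescript
// Hypothetical sketch: each domain agent pairs a persona prompt with a
// Sonar Pro chat-completions payload. Identifiers here are illustrative.

interface DomainAgent {
  domain: string;  // e.g. "accessibility", "system architecture", "vastu"
  persona: string; // system prompt steering feedback away from generic advice
}

interface SonarRequest {
  model: "sonar-pro";
  messages: { role: "system" | "user"; content: string }[];
}

// Build the request body for one agent's review of a sketch description.
function buildSonarRequest(agent: DomainAgent, sketchSummary: string): SonarRequest {
  return {
    model: "sonar-pro",
    messages: [
      { role: "system", content: agent.persona },
      {
        role: "user",
        content: `Review this design concept for ${agent.domain} concerns:\n${sketchSummary}`,
      },
    ],
  };
}

// Example agent for the Vastu domain.
const vastuAgent: DomainAgent = {
  domain: "vastu",
  persona:
    "You are a Vastu Shastra consultant. Give concrete, spatially grounded " +
    "suggestions tied to the described layout; avoid generic advice.",
};

const request = buildSonarRequest(
  vastuAgent,
  "South-facing entrance, kitchen in the northeast corner."
);
console.log(request.messages.length); // 2
```

Keeping agents as plain data (persona + domain) like this makes it cheap to add new domains without touching the request-handling code.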

Challenges we ran into

One of the biggest challenges was extracting structured meaning from unstructured sketches and vague inputs. Although we don't yet have a feedback loop implemented, designing an intuitive interface that elicits clear input and returns meaningful output was still complex. Fine-tuning the prompts for Sonar Pro and Deep Search to yield domain-specific insights—especially for nuanced areas like Vastu—required a lot of trial and error. Balancing the performance demands of real-time search against the depth of information retrieval was another major technical hurdle. Additionally, defining agent personas that provided helpful, context-sensitive advice without overgeneralizing was surprisingly tricky.
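The kind of trial-and-error prompt tightening described above looked roughly like this. The wording is a reconstruction for illustration, not our exact prompts; in our testing, constraining the output format and forbidding boilerplate made the feedback noticeably more specific.

```typescript
// Illustrative only: a before/after of the prompt tightening that reduced
// generic feedback. These strings are reconstructions, not our actual prompts.

const genericPrompt = "Give Vastu feedback on this floor plan.";

// Adding an explicit persona, grounding rule, per-issue structure, and a
// ban on boilerplate background pushed answers toward concrete suggestions.
const tightenedPrompt = [
  "You are a Vastu Shastra reviewer.",
  "Reference only elements present in the user's floor plan description.",
  "For each issue, name the room and direction, the Vastu principle involved,",
  "and one concrete, feasible remedy.",
  "Do not give general Vastu background or disclaimers.",
].join("\n");

console.log(tightenedPrompt.split("\n").length); // 5
```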

Accomplishments that we're proud of

We’re proud of delivering a functional MVP that demonstrates how generative AI can enhance and accelerate enterprise design workflows. The integration of Sonar Deep Search allowed us to expand the platform’s capabilities beyond conventional design feedback, enabling support for more specialized areas like Vastu. We successfully created a system that makes it easier to move from an idea or sketch to something that feels actionable, structured, and grounded in real research. We also managed to create a clean, user-friendly UI that showcases the potential of intelligent design collaboration.

What we learned

We learned that great AI output relies heavily on clear, purpose-driven prompt engineering—especially when dealing with domain-specific queries. The versatility of Perplexity’s Sonar APIs enabled us to support a wide range of user needs, but tuning that power into precise and relevant feedback required iteration and focus. We also saw the importance of user experience design in AI tools—users need clarity, not just capability. Another key insight was that real-time research can play a powerful role in design decisions, especially when it taps into non-obvious domains like cultural or spatial practices (e.g., Vastu).

What's next for ArchiSonar

Next, we plan to build a feedback loop into the platform, allowing users to rate, refine, and guide the suggestions they receive, which will help improve relevance over time. We’re also working on integrating a sketch parser using computer vision models, so users can upload rough hand-drawn layouts that are auto-tagged and analyzed structurally. A Figma plugin is in the works to enable real-time AI feedback directly inside design tools. We’re also exploring enterprise integrations, allowing companies to train their own domain-specific agents. Ultimately, we want ArchiSonar to be the co-pilot that helps design teams go from messy ideas to validated, implementable solutions—with clarity, speed, and intelligence.
