Inspiration
Real estate platforms are powerful but notoriously hard to onboard onto. New agents spend hours hunting through help docs just to find where a single setting lives. We wanted to eliminate that friction entirely — what if you could just ask the platform where to go and it would show you?
What it does
Cactus is an AI-powered assistant built into a real estate SaaS platform. It operates in two modes:
- Guide Mode — recognizes user intent (e.g. "how do I set up notifications?"), matches it to a known workflow, and walks the user step by step through the actual UI, highlighting nav links and buttons in real time.
- RAG Mode — for broader questions, retrieves the most relevant answers from a pre-built knowledge corpus (FAQs, glossary, tutorials) and generates a grounded, cited response using Claude.
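A guide-catalog entry can be sketched roughly as below. The field names, selectors, and the `matchGuide` helper are illustrative assumptions, not the project's actual schema; the idea is simply a declarative mapping from intent patterns to ordered DOM targets.

```typescript
// Hypothetical shape of one guide-catalog entry: keyword patterns that
// identify the intent, plus ordered steps pointing at real DOM elements.
interface GuideStep {
  selector: string;    // CSS selector of the element to highlight
  instruction: string; // text shown beside the highlight
}

interface GuideEntry {
  intent: string;
  patterns: RegExp[]; // phrases that trigger this guide
  steps: GuideStep[];
}

const notificationsGuide: GuideEntry = {
  intent: "setup-notifications",
  patterns: [/set up notifications?/i, /notification settings?/i],
  steps: [
    { selector: "nav a[href='/settings']", instruction: "Open Settings" },
    { selector: "#notifications-tab", instruction: "Choose the Notifications tab" },
    { selector: "button.save", instruction: "Save your preferences" },
  ],
};

// Matching a user utterance to a guide is then a simple pattern scan.
function matchGuide(utterance: string, catalog: GuideEntry[]): GuideEntry | undefined {
  return catalog.find((g) => g.patterns.some((p) => p.test(utterance)));
}
```

At runtime, the assistant would iterate over the matched entry's `steps`, highlighting each selector in turn.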
How we built it
- Next.js 15 + React 19 for the app shell and UI
- Tailwind CSS for styling; FullCalendar for calendar views
- Claude (Anthropic API) for grounded answer generation via tool use
- Gemini Embeddings API (text-embedding-004) for semantic vector search
- Custom RAG pipeline — a build-time script embeds 59 help records into a JSON artifact; at query time, cosine similarity plus lexical scoring retrieves the top hits, which are passed to Claude as context
- Guide catalog — a declarative system mapping user intents to DOM targets and step-by-step UI flows
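The retrieval step above can be sketched as a blended scorer. The weights, record shape, and helper names are assumptions for illustration; the key property is the lexical fallback when embeddings are unavailable.

```typescript
// Sketch of blended retrieval: cosine similarity over precomputed
// embeddings plus a lexical-overlap term, with graceful degradation.
interface HelpRecord {
  id: string;
  text: string;
  embedding?: number[]; // may be absent if the embed step failed
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Fraction of query tokens that appear in the record text.
function lexicalScore(query: string, text: string): number {
  const tokens = query.toLowerCase().split(/\W+/).filter(Boolean);
  if (tokens.length === 0) return 0;
  const hay = text.toLowerCase();
  return tokens.filter((t) => hay.includes(t)).length / tokens.length;
}

// Blend both signals; fall back to lexical-only when either side
// lacks an embedding, so retrieval never hard-fails.
function score(record: HelpRecord, query: string, queryEmbedding?: number[]): number {
  const lex = lexicalScore(query, record.text);
  if (!queryEmbedding || !record.embedding) return lex;
  return 0.7 * cosine(queryEmbedding, record.embedding) + 0.3 * lex; // illustrative weights
}

function topK(records: HelpRecord[], query: string, qEmb: number[] | undefined, k: number): HelpRecord[] {
  return [...records]
    .sort((x, y) => score(y, query, qEmb) - score(x, query, qEmb))
    .slice(0, k);
}
```

The top-k records returned here would be serialized into Claude's context along with their IDs.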
Challenges we ran into
- Hallucination prevention — we constrained Claude with tool use (submit_answer) and only allowed it to cite source IDs from the retrieved context, making it impossible to fabricate answers
- Retrieval without a vector DB — running fully bundled with no external database meant building our own blended scorer that degrades gracefully when embeddings aren't available
- Windows compatibility — the build script used the Unix-only rm -rf, which broke on Windows; we replaced it with a cross-platform Node.js solution mid-hackathon
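The Windows fix can be sketched with Node's built-in recursive removal, which behaves the same on Windows, macOS, and Linux. The directory name and helper are illustrative, not necessarily the project's actual script.

```typescript
// Replace the Unix-only `rm -rf` build step with Node's fs.rmSync.
import { rmSync, mkdirSync, existsSync, readdirSync } from "node:fs";

function cleanDir(path: string): void {
  // force: true makes removal a no-op when the directory is absent,
  // matching the `-f` in `rm -rf`.
  rmSync(path, { recursive: true, force: true });
  mkdirSync(path, { recursive: true }); // recreate an empty output dir
}

// Usage in a build script (path is illustrative):
// cleanDir("dist");
```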
Accomplishments that we're proud of
- A fully working hybrid RAG pipeline built from scratch — no LangChain, no managed vector store
- A blended relevance scorer that outperforms pure vector search on small, well-structured corpora
- A live UI guide system that highlights real DOM elements and walks users through workflows step by step
- Grounded answers only — every response Claude gives must cite retrieved sources, so it cannot fabricate content
What we learned
Building RAG from scratch forced us to deeply understand every tradeoff: embedding dimensionality, relevance thresholds, and how to keep an LLM grounded. The biggest insight: a simple blended ranker often beats pure vector search when your corpus is small and well-structured. We also learned how powerful Claude's tool use is for constraining output format and preventing hallucination.
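The tool-use constraint described above can be sketched as a forced tool call against the Anthropic Messages API. The request shape (`tools`, `tool_choice`, `input_schema`) follows the real API; the source IDs, model string, and schema fields are illustrative assumptions.

```typescript
// Illustrative: force Claude to answer only via a `submit_answer` tool
// whose schema restricts citations to IDs from the retrieved context.
const retrievedIds = ["faq-12", "glossary-3", "tutorial-7"]; // hypothetical IDs

const submitAnswerTool = {
  name: "submit_answer",
  description: "Return a grounded answer citing only retrieved sources.",
  input_schema: {
    type: "object",
    properties: {
      answer: { type: "string" },
      citations: {
        type: "array",
        // JSON Schema enum: only IDs that were actually retrieved are valid.
        items: { type: "string", enum: retrievedIds },
      },
    },
    required: ["answer", "citations"],
  },
};

// Messages API request body; tool_choice forces the tool call, so the
// model cannot reply with free-form, uncited text.
const requestBody = {
  model: "claude-sonnet-4-20250514", // model name illustrative
  max_tokens: 1024,
  tools: [submitAnswerTool],
  tool_choice: { type: "tool", name: "submit_answer" },
  messages: [{ role: "user", content: "How do I set up notifications?" }],
};
```

Because the reply must arrive as structured tool input, the application can validate citations against the retrieved set before rendering anything to the user.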
What's next for Cactus
- Swap the static JSON corpus for a live vector database to support real-time content updates
- Personalize guide flows based on user role (agent vs. team lead vs. admin)
- Expand the guide catalog to cover the full platform surface
- Add voice input so agents can ask questions hands-free while on the go
Built With
- Claude (Anthropic API)
- Gemini Embeddings API
- Next.js 15
- RAG (retrieval-augmented generation)
- React 19
- Tailwind CSS
- TypeScript
