Project Story

🔗 Slides Demo: mareengine-demo.vercel.app 🔗 Video Demo: mareengine-demo.vercel.app

💡 Inspiration

The luxury salon industry thrives on high-touch, personalized experiences. However, scaling a B2B network in this space usually requires tedious, manual outreach. We created MaRe (named after our Growth Leads, Marianna & Rebecca) to bridge the gap between premium scalp wellness and scalable B2B growth. We wanted to build a system that could automate the heavy lifting of salon prospecting without losing the authentic, guarded luxury tone required to close high-end partnerships.

🛠️ How we built it

MaRe is a comprehensive ecosystem powered by a deterministic, multi-agent AI pipeline. We split the build into tiers:

  • The Brain (AI & Agentic Flow): We built a linear multi-agent pipeline using LangGraph and Python. The backend uses Gemini's Native JSON Schema Output (with_structured_output) via Vertex AI to ensure our agents never hallucinate data formats.
  • The Math (Analyst Agent): Our Analyst Agent evaluates prospective salon profiles and calls a deterministic Python tool to project the lift in ancillary revenue. We use a standardized baseline formula for this projection: $$ \text{Projected ROI} = \left( \frac{\text{Est. Ancillary Revenue} - \text{MaRe Onboarding Cost}}{\text{MaRe Onboarding Cost}} \right) \times 100 $$
  • The Voice (Copywriter Agent): Takes the Analyst's output and drafts highly personalized, luxury-toned outreach messages.
  • The Human Gate (Backend & State): The entire LangGraph pipeline is wrapped in a FastAPI server. To achieve our Human-in-the-Loop (HITL) architecture, the graph serializes its state to a local PostgreSQL database, pausing the execution thread.
  • The Interface (Frontend): We built a unified shell application in Flutter (Dart), targeting iOS, Android, and Web simultaneously, allowing our team to review, edit, and push feedback back into the AI revision loop before anything is sent.
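The Analyst's baseline projection above reduces to a small pure function. A minimal sketch (the function name and signature are our shorthand here, not the production tool's API):

```python
def project_roi(est_ancillary_revenue: float, onboarding_cost: float) -> float:
    """Baseline ROI projection used by the Analyst Agent.

    Projected ROI = ((Est. Ancillary Revenue - Onboarding Cost)
                     / Onboarding Cost) * 100
    """
    if onboarding_cost <= 0:
        raise ValueError("onboarding cost must be positive")
    return (est_ancillary_revenue - onboarding_cost) / onboarding_cost * 100

# Example: $12,000 projected ancillary revenue against a $4,000
# onboarding cost yields a 200% projected ROI.
print(project_roi(12_000, 4_000))  # 200.0
```

Keeping this as a plain tool call (rather than letting the LLM do arithmetic) is what makes the projection deterministic and auditable.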

🧗 Challenges we ran into

Building a multi-agent system that actually works in production is much harder than a simple chat completion. Our biggest hurdle was implementing the Human-in-the-Loop Revision Loop.

We had to completely pause a LangGraph Python execution thread, serialize the complex nested state of the Analyst and Copywriter agents to a PostgreSQL database, and return the draft via API to a Flutter UI. Then, when a human lead submitted feedback, we had to successfully re-hydrate that state so execution could route back to the Copywriter. Handling database socket connections reliably across Docker and Google Cloud Run was another major hurdle.
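Stripped to its essentials, the pause/resume cycle looks like the sketch below. This is a simplified illustration using plain `json` and a dict in place of LangGraph's Postgres checkpointer; all names are ours, not LangGraph's API:

```python
import json

def pause_pipeline(state: dict, store: dict, thread_id: str) -> None:
    """Serialize the nested agent state so the graph can halt at the Human Gate."""
    store[thread_id] = json.dumps(state)

def resume_pipeline(store: dict, thread_id: str, feedback: str) -> dict:
    """Re-hydrate the saved state, inject human feedback, and route back
    to the Copywriter for a revision pass."""
    state = json.loads(store[thread_id])
    state["human_feedback"] = feedback
    state["next_agent"] = "copywriter"
    return state

db = {}  # stand-in for the PostgreSQL checkpoint table
pause_pipeline({"analyst": {"roi": 200.0}, "draft": "Dear salon..."}, db, "t-1")
resumed = resume_pipeline(db, "t-1", "Soften the opening line.")
print(resumed["next_agent"])  # copywriter
```

The hard part in production was that the real state is deeply nested and the store lives behind Cloud SQL sockets, but the contract is the same: everything the graph needs to continue must survive a round trip through the database.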

🧠 What we learned

We learned that AI agents are incredibly powerful, but they require strict guardrails. Giving an agent a deterministic "traffic-cop" router and forcing strict Native JSON schema adherence transformed our project from a fun demo into a highly reliable enterprise tool. On the frontend, we leveled up our skills in cross-platform Dart development, specifically mastering Provider state management and Dio interceptors for handling complex AI backend states.
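The "traffic-cop" router boils down to a deterministic function over explicit state flags rather than an LLM decision. A minimal sketch, with node names that are illustrative rather than our actual graph's:

```python
def route(state: dict) -> str:
    """Deterministic router: the next node depends only on explicit flags,
    so the pipeline's control flow is fully predictable."""
    if not state.get("analysis_done"):
        return "analyst"
    if state.get("human_feedback"):   # a lead requested revisions
        return "copywriter"
    if not state.get("approved"):
        return "human_review"         # pause at the Human Gate
    return "send"

print(route({"analysis_done": True, "approved": False}))  # human_review
```

Because the router never consults the model, a malformed agent response can at worst fail schema validation; it can never send the graph somewhere unexpected.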
