Inspiration
Before writing a single line of code, I asked ~20 women: "When you bought something you ended up not wearing, what would have actually helped?" Every single one said the same three things:
- Seeing what they could pair it with from their existing wardrobe
- Seeing what new pieces they'd need to buy to make it work
- Knowing what occasions they'd realistically wear it to
Phia solves price and discovery. But it can't tell you if something actually works with what you own or show you what a full outfit looks like. Complete the Look fills that gap.
What it does
Upload a piece you never wear. Set the occasion, vibe, location, date, and budget. Complete the Look builds a full outfit around it, pulling from your existing wardrobe first, then surfacing real shoppable products for what's missing. Then it generates a Pinterest-style photo of the complete look on a person, so you can see the outfit before you commit to buying anything.
How we built it
The core insight was that this required a multi-model pipeline, not a single API call. Each model does something the others can't.
Weather grounding. Open-Meteo pulls real weather before Claude sees anything. For dates up to 7 days out we use the live forecast; beyond that, forecasts become unreliable, so we fall back to historical monthly averages. If no date is provided, weather is excluded entirely rather than hallucinated.
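The date-based source selection can be sketched as a small pure function. Names here (`weatherSource`, `WeatherSource`) are illustrative, not the actual implementation:

```typescript
// Sketch of the date-based weather source selection described above.
type WeatherSource = "forecast" | "historical" | "none";

function weatherSource(eventDate: Date | undefined, now: Date = new Date()): WeatherSource {
  if (!eventDate) return "none"; // no date: exclude weather rather than hallucinate it
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysOut = (eventDate.getTime() - now.getTime()) / msPerDay;
  // Live Open-Meteo forecast covers roughly the next 7 days;
  // beyond that we fall back to historical monthly averages.
  return daysOut <= 7 ? "forecast" : "historical";
}
```

Keeping this decision out of the prompt means Claude only ever sees weather data we trust.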
Vision + reasoning. Claude Haiku receives the image as base64 alongside the occasion, vibe, weather, budget, and free-text notes. It returns a structured JSON outfit plan with what's missing, why, and what to search for. Free text was added directly from user feedback to capture details like "polished but not too corporate, I run cold" and help with refinement if the user wanted to regenerate the outfit.
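The structured plan looks roughly like the types below; the real field names may differ, and the guard is a minimal sketch of how a malformed model response can fail fast instead of propagating through the pipeline:

```typescript
// Illustrative shape of the structured outfit plan returned by Claude.
interface MissingPiece {
  category: string;    // e.g. "shoes"
  reason: string;      // why the outfit needs it
  searchQuery: string; // what to search Google Shopping for
}

interface OutfitPlan {
  wardrobePieces: string[]; // items pulled from the user's existing wardrobe
  missing: MissingPiece[];  // gaps to fill with shoppable products
}

// Minimal runtime guard for the model's JSON output.
function isOutfitPlan(v: unknown): v is OutfitPlan {
  const p = v as OutfitPlan;
  return (
    !!p &&
    Array.isArray(p.wardrobePieces) &&
    Array.isArray(p.missing) &&
    p.missing.every(
      (m) => typeof m.category === "string" && typeof m.searchQuery === "string"
    )
  );
}
```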
Parallel product search. SerpApi fires one Google Shopping search per missing piece simultaneously using Promise.all, so the stage's wall-clock time is a single round trip instead of n sequential requests.
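A sketch of the fan-out, with the SerpApi call injected as `fetchOne` (a hypothetical name) so it can be swapped out or faked:

```typescript
// One search per missing piece, all in flight at once.
interface Product {
  title: string;
  price: number;
  link: string;
}

async function searchAllParallel(
  queries: string[],
  fetchOne: (query: string) => Promise<Product[]>
): Promise<Product[][]> {
  // Promise.all starts every request before awaiting any of them,
  // so total latency is the slowest single search, not the sum.
  return Promise.all(queries.map((q) => fetchOne(q)));
}
```

Injecting the fetcher is also what makes the planned swap to the Phia API a one-line change at the call site.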
Semantic ranking. Rather than keyword matching or an extra LLM call, we use OpenAI's text-embedding-3-small to embed each product title alongside the occasion and vibe context, then rank by cosine similarity. Results are deduplicated and grouped by category before the top 3 per category reach the client.
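Once embeddings exist, the ranking step reduces to cosine similarity. A sketch with the embedding API call omitted (vectors are assumed to come from text-embedding-3-small):

```typescript
// Rank product embeddings by cosine similarity to a context embedding
// built from the occasion and vibe.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function rankBySimilarity<T>(
  items: { item: T; embedding: number[] }[],
  context: number[],
  topK = 3
): T[] {
  return items
    .map(({ item, embedding }) => ({ item, score: cosine(embedding, context) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK)
    .map(({ item }) => item);
}
```

This keeps ranking deterministic and avoids a second LLM round trip per product.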
Visualization. The selected pieces go to Google Imagen 3 Fast via Vertex AI, returning a Pinterest-style editorial image of the complete outfit on a person.
Architecture. React + TypeScript + Vite + Tailwind + Framer Motion on the frontend. Stateless Vercel serverless functions on the backend.
Challenges we ran into
Wardrobe conflict detection. Claude would occasionally recommend another blazer when the user uploaded a blazer. Solved with a category detection layer that filters conflicting suggestions before they reach the client, reinforced in the system prompt.
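The filtering layer can be sketched as a category comparison before suggestions reach the client (category names and function name are illustrative):

```typescript
// Drop suggestions whose category matches the uploaded piece, so a blazer
// upload never yields another blazer, regardless of what the model returns.
function filterConflicts<T extends { category: string }>(
  suggestions: T[],
  uploadedCategory: string
): T[] {
  const uploaded = uploadedCategory.trim().toLowerCase();
  return suggestions.filter((s) => s.category.trim().toLowerCase() !== uploaded);
}
```

Belt-and-suspenders with the system prompt: the prompt discourages conflicts, and this filter guarantees none slip through.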
Weather fallback logic. Forecasts beyond 7 days are unreliable, so we split the logic: real forecast for near-term dates, historical monthly averages for anything further out, excluded entirely if no date is provided.
Latency. The pipeline is inherently sequential. Parallelizing all SerpApi searches and embedding calls with Promise.all, combined with a tight Claude prompt, keeps end-to-end latency under 10 seconds for a full outfit.
Accomplishments that we're proud of
A five-model pipeline shipped and deployed in under 24 hours with real products, real weather, real semantic ranking, and a generated outfit photo.
We're also proud of the process: we grounded everything in real user research before building, then stayed in a tight feedback loop with those same users, iterating continuously through follow-up questions and refinements. Every product decision was driven by what we heard directly from users, not assumptions, which is how we made sure we were delivering something with real, tangible value.
What we learned
The hardest part of agentic systems isn't the AI, it's the orchestration. Getting five models to talk to each other reliably, with the right data passed between each step, under real latency constraints, is a genuine systems problem. And user research done in an afternoon is more valuable than assumptions made in a week.
What's next for Complete the Look
Phia API integration. Replace SerpApi with Phia's existing product database. Pull preferred brands from the user's Phia profile. Populate the wardrobe from purchases Phia already tracks.
Persistent wardrobe + saved looks. Add a database layer so outfits persist across sessions. Every saved look becomes training signal for future recommendations.
Richer personalization. Color palette matching, body type awareness, and cultural occasion intelligence — surfacing what's appropriate for events users aren't familiar with.
Upload multiple orphan pieces. "I have this blazer and these trousers — what's missing?" Start from a partial outfit rather than a single piece.
Built With
- ai
- gcp
- haiku
- imagen
- openai
- react-+-typescript
- serpapi
- tailwind-css
- typescript
- vercel
- vertex
- vite
- weather-open-meteo