## About the Project
(For a more technical breakdown, please refer to our README.md file.)
### Inspiration
To win at online resale fashion, you have to scroll — constantly,
repeatedly, for the slim chance of finding that diamond in the rough.
Resale moves at the speed of social media, but the tools haven't caught
up. Saved searches notify you too late, offer no way to incorporate user
feedback, and completely fall apart when you don't know exactly what
you're looking for yet. Sometimes you don't want a specific item — you
want a feeling. A Pinterest board full of Mediterranean summer fits. A
Mamma Mia vibe for a Greece trip. No saved search can handle that. We
built Sniper to close both gaps.
### What We Built
Sniper is an AI agent layer built on top of Phia that shops resale
continuously on behalf of users. The core insight is that shopping intent
exists on a spectrum — from "I know exactly what I want" to "I have a
vibe and need help getting there" — and no existing tool serves the full
range.
We built two agent types to anchor each end of that spectrum:
**High-Intent:** The user specifies exact criteria — brand,
item, size, condition, price ceiling — and the agent loops continuously
until it finds a match. Not a saved search. A saved search waits. This hunts.
**Low-Intent:** The user describes an aesthetic in natural
language or pastes a Pinterest board URL. The agent interprets the vibe
and surfaces resale items that match it, ranked semantically rather than
by exact criteria.
Users can run multiple agents simultaneously, each with its own toggle,
configuration, and live matches feed.
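The per-agent setup described above can be sketched as a small data model. This is an illustrative sketch only, assuming a Python backend; the type names, fields, and example values are our own and not Sniper's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class AgentType(Enum):
    HIGH_INTENT = "high_intent"   # exact criteria, continuous hunt
    LOW_INTENT = "low_intent"     # vibe description or Pinterest board

@dataclass
class AgentConfig:
    """Hypothetical per-agent configuration; all names are illustrative."""
    agent_type: AgentType
    active: bool = False                         # the on/off toggle
    brand: Optional[str] = None                  # high-intent criteria
    size: Optional[str] = None
    max_price: Optional[float] = None
    vibe: Optional[str] = None                   # low-intent free-text aesthetic
    pinterest_url: Optional[str] = None
    matches: list = field(default_factory=list)  # live matches feed

# A user can run several agents at once, each independently toggled:
grail_hunt = AgentConfig(AgentType.HIGH_INTENT, active=True,
                         brand="Margiela", size="L", max_price=200.0)
vacation = AgentConfig(AgentType.LOW_INTENT, active=True,
                       vibe="Mediterranean summer, Mamma Mia energy")
```

Keeping each agent as an independent record is what lets both intent types live side by side in one dashboard.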
### How We Built It
We started with the matching problem. For the high-intent agent, the
challenge is speed and precision — polling listings frequently enough
to matter in a market where the window on a coveted item can be measured
in minutes. For the vibe agent, the challenge is semantic: translating
something as fuzzy as "Mamma Mia energy" into a ranked list of real
resale listings requires genuine aesthetic reasoning from the model, not
just keyword matching.
We used an LLM to handle the vibe translation layer — converting natural
language descriptions and Pinterest board content into search parameters
and ranking criteria. The agent loop runs on a polling architecture,
continuously querying listings and running new results through the
matching logic while the toggle is active. The multi-agent dashboard lets
users manage their full portfolio of running agents in one place.
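The agent loop described above can be sketched roughly as follows. This is a minimal illustration of the polling architecture, not Sniper's implementation; `fetch_listings`, `matcher`, and the agent object's fields are assumed names:

```python
import time

def run_agent(agent, fetch_listings, matcher, poll_seconds=60):
    """Illustrative polling loop: while the agent's toggle is on,
    fetch fresh listings and push anything the matcher accepts into
    the agent's live matches feed. All names are hypothetical."""
    seen = set()
    while agent.active:
        for listing in fetch_listings():
            if listing["id"] in seen:
                continue                 # only run NEW results through matching
            seen.add(listing["id"])
            if matcher(agent, listing):
                agent.matches.append(listing)
        time.sleep(poll_seconds)         # throttle between polls
```

The `seen` set keeps the loop idempotent across polls, so a listing is only scored once no matter how often it reappears in query results.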
### Challenges
The hardest problem was the pivot. We originally built a single
low-intent vibe agent. After presenting to Sophia and Phoebe, the
feedback pushed us toward something more high-intent — an elevated saved
search that runs continuously and delivers real-time notifications when
an item is sniped. Going back to the drawing board, we realized the
stronger product wasn't one or the other — it was both. Users should be
able to run multiple agents at once, each directed at a different point
on the intent spectrum. Rebuilding the dashboard architecture to support
this mid-hackathon required rethinking core assumptions fast.
The second hard problem was vibe matching. Exact matching is
deterministic — a size large either matches or it doesn't. Vibe matching
is probabilistic, and getting the model to rank results in a way that
feels genuinely aligned with a user's aesthetic — rather than just
topically related — required careful prompt engineering and iteration on
how we represented intent to the model. This also surfaced the need for
user feedback loops: on Felix's suggestion, we added a feedback field so
users can refine what the agent knows about their taste over time.
We also had to think carefully about the agent lifecycle — what it means
for an agent to be "running," how frequently it should poll without
hammering APIs, and how to surface new matches in a way that feels live
rather than static. For the purposes of this demo, the agent runs to
completion upon toggling on. In a production environment, this would be
backed by cluster-managed deployment jobs running persistently in the
background.
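One way to poll "frequently enough to matter" without hammering APIs is to back off after empty polls. A small sketch of that idea, with made-up tuning values rather than Sniper's actual parameters:

```python
import random

def next_poll_delay(consecutive_empty, base=30.0, cap=600.0):
    """Illustrative backoff: poll fast while matches are flowing, ease
    off after repeated empty polls, and add jitter so many agents don't
    all hit the listings API in lockstep. Numbers are assumptions."""
    delay = min(cap, base * (2 ** consecutive_empty))
    return delay * random.uniform(0.8, 1.2)
```

Resetting `consecutive_empty` to zero on any hit keeps a hot search fast while idle agents cost almost nothing.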
### What We Learned
Intent is a powerful frame for thinking about shopping tooling, and most
existing products do not give it enough attention (we almost didn't
either!). A shopper hunting a specific grail and a shopper building a
vacation wardrobe need fundamentally different tools, and the gap between
what they need and what exists is wide enough to build a product in.
We also learned that great AI products aren't just about the model —
they're about the partnership between probabilistic and deterministic
tooling. The LLM handles the fuzzy, aesthetic, judgment-heavy parts, such
as turning a Pinterest board into an effective prompt. Deterministic
systems handle the precise, reliable, structured parts, such as scoring
matches numerically.
Getting that balance right is what makes the experience feel both
intelligent and trustworthy.
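That division of labor can be sketched in a few lines: deterministic gates decide pass/fail, and only then does the fuzzy layer rank what survives. This is a hedged illustration, with `semantic_score_fn` standing in for the model call; none of the names come from Sniper's codebase:

```python
def score_listing(listing, criteria, semantic_score_fn):
    """Sketch of the probabilistic/deterministic split. Hard criteria
    are binary: a size large either matches or it doesn't. Aesthetic
    fit is a fuzzy score in [0, 1] from a model (here, any callable)."""
    # Deterministic gates: fail fast on exact criteria.
    if criteria.get("size") and listing["size"] != criteria["size"]:
        return 0.0
    if criteria.get("max_price") is not None and listing["price"] > criteria["max_price"]:
        return 0.0
    # Probabilistic layer: the model judges how well the item fits the vibe.
    return semantic_score_fn(listing, criteria.get("vibe", ""))
```

Because the gates run first, the model is only ever asked the question it is good at, which is what keeps results feeling both intelligent and trustworthy.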