Inspiration
Everything is expensive right now. If you live in the Bay Area, you already know this, but it's not just here. Rent, groceries, gas, basically the cost of existing keeps going up. And one of the smartest ways to stretch your money is buying used. Thrifting, secondhand marketplaces, whatever you want to call it. The deals are out there.
The problem is finding them.
If you want to buy something new, it takes about 30 seconds. You search Amazon, sort by price, pick one, done. But if you want to buy that same thing used, suddenly you're juggling five different apps. You're scrolling through Mercari, checking Craigslist, opening Facebook Marketplace, refreshing OfferUp, browsing Goodwill's site. Each one has different filters, different interfaces, different ways of listing the same item. You end up spending more time hunting for the deal than the deal is actually worth. And most people just give up and buy new because it's easier, even though they know they're overpaying.
We kept coming back to that frustration. Why is it so hard to do the thing that saves you the most money? The listings exist. The deals are real. But no one has time to manually cross-reference five platforms every time they need a pair of headphones or a desk chair. So we built something that does it for you.
What it does
ReFind is an AI-powered secondhand shopping agent. You tell it what you're looking for in plain language, like "I need a desk chair under $80" or "find me some used Jordan 4s in size 10," and it goes to work. Multiple agents fan out across secondhand marketplaces simultaneously, searching Mercari, Craigslist, Goodwill, OfferUp, and Facebook Marketplace at the same time. They pull listings, compare prices, check condition descriptions, and rank the best deals for you.
The whole thing runs through a chat interface. You talk to it like you'd text a friend who's really good at finding deals. It shows you what it's doing as it works, which listings it's looking at, what it's comparing, and how it's scoring each option. When it finds something good, it gives you the link so you can go grab it. And if a deal looks too good to pass up, it can help you act on it fast before someone else does.
The key thing is that it doesn't need you babysitting it. You give it a goal, it searches, it comes back with results. You're not refreshing five tabs and comparing prices in a spreadsheet. The agent does the tedious part so you can just make the final call.
How we built it
Under the hood, ReFind is a multi-agent pipeline: a planner breaks your request into searches, scout agents use browser automation to hit each marketplace in parallel, an extractor normalizes the listings they bring back (prices, conditions, duplicates), and a ranker merges everything into one scored list. The chat front end, built with assistant-ui, streams each agent's progress as it works.
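The parallel fan-out across marketplaces is the core of the design. A minimal sketch of the idea, using `asyncio` (names like `search_marketplace` are hypothetical stand-ins, not our actual implementation, and real scouts drive browser sessions rather than returning canned data):

```python
import asyncio

# Hypothetical per-site scout; in practice each one drives a real
# browser session against the marketplace's search page.
async def search_marketplace(site: str, query: str) -> list[dict]:
    await asyncio.sleep(0)  # stand-in for network / browser work
    return [{"site": site, "title": f"{query} listing", "price": 50.0}]

async def refind(query: str) -> list[dict]:
    sites = ["mercari", "craigslist", "goodwill", "offerup", "facebook"]
    # All five scouts run concurrently instead of one site at a time.
    batches = await asyncio.gather(
        *(search_marketplace(s, query) for s in sites)
    )
    listings = [item for batch in batches for item in batch]
    # Rank cheapest-first; the real ranker also weighs condition.
    return sorted(listings, key=lambda x: x["price"])

listings = asyncio.run(refind("desk chair"))
```

Running the scouts with `asyncio.gather` means total search time is roughly the slowest site, not the sum of all five.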
Challenges we ran into
The browser automation was the hardest part by far. Getting agents to reliably navigate different marketplace websites, each with their own layout, load times, and quirks, was way more painful than we expected going in. Some sites load listings dynamically, some have anti-bot protections, and the HTML structure isn't consistent even within the same platform. We spent a lot of time just getting the agents to reliably extract the right information from pages without breaking.
Getting the different agents to work together smoothly was also tricky. When you have multiple agents searching in parallel and then need to merge their results into one coherent ranking, there are a lot of edge cases. Duplicate listings across platforms, inconsistent pricing formats, and missing condition descriptions.
The extractor agent had to handle all of that normalization, and getting it right took a lot of iteration. Time pressure made everything harder, too. We scoped the project knowing we only had a few hours, but there were moments where something that should've taken 20 minutes turned into an hour of debugging because the agent was parsing a page slightly wrong or a marketplace changed how it rendered search results. That's just how it goes when you're scraping real websites in real time.
Accomplishments that we're proud of
The thing we're most proud of is that it actually works end to end. You can type in what you want, and agents go out, search real marketplaces, come back with real listings, and rank them. That full loop working in a single day of building feels good.
We're also proud of the parallel search. It's not searching one site at a time. Multiple agents hit different platforms at once, which is the whole reason this is faster than doing it yourself. Watching the tool UIs light up as different agents report back what they're finding, that was the moment it clicked that this was actually useful and not just a demo.
The fact that it runs mostly autonomously is a big deal to us too. You don't have to hold its hand. You give it a query, walk away for a minute, and come back to a ranked list of deals with links. That's what we wanted from the start, something where the AI is doing real work, not just summarizing text.
What we learned
The biggest lesson was about how hard it is to make agents interact with the real web. It's easy to build an agent that calls a clean API. It's a completely different game when your agent has to navigate websites built for humans, deal with popups, dynamic loading, and layouts that change depending on your screen size. That gap between "agent that works in a controlled environment" and "agent that works on real websites" is massive, and we underestimated it.
We also learned a lot about orchestrating multiple agents. The idea of "just run a bunch of agents in parallel" sounds simple, but the coordination layer, making sure they're not duplicating work, handling failures gracefully when one agent can't load a page, merging results that come back in different formats, that's where the real engineering is. The planner/scout/extractor/ranker pattern we landed on worked well, but we went through a few bad approaches before getting there.
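One concrete piece of that coordination layer is failure isolation: a single scout timing out on an anti-bot page shouldn't sink the whole search. A minimal sketch of the pattern (the `flaky_scout` function and its failure mode are hypothetical, for illustration only):

```python
import asyncio

async def flaky_scout(site: str) -> list[dict]:
    # Stand-in for a scout agent; real ones can fail on anti-bot
    # walls, dynamic loading, or plain timeouts.
    if site == "facebook":
        raise TimeoutError("page never loaded")
    return [{"site": site, "price": 42.0}]

async def gather_listings(sites: list[str]) -> list[dict]:
    # return_exceptions=True keeps one failed agent from killing
    # the whole gather; we drop its results and move on.
    outcomes = await asyncio.gather(
        *(flaky_scout(s) for s in sites), return_exceptions=True
    )
    return [item
            for out in outcomes if not isinstance(out, BaseException)
            for item in out]

listings = asyncio.run(
    gather_listings(["mercari", "craigslist", "facebook"])
)
```

The trade-off is silent degradation: you get partial results instead of an error, so the UI should say which platforms actually reported back.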
On the product side, we realized that showing the agent's work matters almost as much as the results. When the UI just says "searching..." and then dumps results, it doesn't feel trustworthy. But when you can see the agent opening Mercari, scanning three listings, moving to Craigslist, comparing prices, you trust the output more because you watched it happen. Transparency turned out to be a feature, not just a nice-to-have.
What's next for ReFind
The immediate next step is expanding the number of platforms we search. Right now, we cover Mercari, Craigslist, Goodwill, OfferUp, and Facebook Marketplace, but there are a lot of other secondhand sources out there: Poshmark, Depop, local thrift store websites, and even estate sale listings. Each new source makes the tool more useful because you're casting a wider net.
We also want to add image-based search. Right now, you describe what you want in text, but ideally, you should be able to take a photo of something and say, "Find me this, but cheaper and used." That's where multimodal input becomes really powerful. We want to use Google DeepMind's capabilities to let the agent understand product photos, compare conditions from images, and give smarter rankings based on visual quality, not just text descriptions.
Price tracking and alerts are on the roadmap, too. Instead of searching once and getting results, you should be able to say "watch for Nike Dunks under $60 in my size" and get notified when a deal appears. That turns ReFind from a tool you use actively into something that works for you in the background. Long term, we think there's something here for anyone who wants to buy smarter. Students, families, and anyone trying to stretch their budget. The used market is huge and underserved by good tooling. We want to be the reason someone doesn't overpay for something they could've found secondhand in two minutes.
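The watch-and-alert idea boils down to matching a stream of fresh listings against saved criteria. A toy sketch under assumed data shapes (nothing here is built yet; `check_watches` and its fields are hypothetical):

```python
def check_watches(watches: list[dict], new_listings: list[dict]) -> list[tuple]:
    """Return (watch, listing) pairs for new listings that satisfy
    a saved watch: keyword in title and price at or under the cap."""
    hits = []
    for watch in watches:
        for item in new_listings:
            if (watch["keyword"].lower() in item["title"].lower()
                    and item["price"] <= watch["max_price"]):
                hits.append((watch, item))
    return hits
```

In a real version this would run on a schedule against each marketplace and push a notification per hit.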
Built With
- assistant-ui
- augment-code
- digitalocean
- nexla
- railtracks
- unkey
- workos