Justsell - The Google of Marketplaces
Inspiration
Google is the most visited website on the planet and it's just a search bar. That simplicity stuck with me. Traditional marketplaces feel stuck in 2010. Cluttered filters, rigid category trees, keyword-only search that breaks the moment you misspell something. Buying and selling secondhand shouldn't require a manual.
I kept coming back to this question: what if a marketplace worked the way Google does? You just describe what you want, or better yet, show it a photo, and it figures out the rest. When Gemini 3 dropped with its multimodal capabilities and this hackathon landed at the same time, the pieces clicked. I didn't want to slap a chatbot onto a CRUD app. I wanted to see if Gemini could be the foundation of an entire product, not just a feature bolted on at the end.
What it does
Justsell is a marketplace with multiple Gemini 3 integrations woven into every part of the experience:
Vision Search - Snap a photo of anything. Gemini Vision identifies the product and semantic search surfaces matching listings. No typing needed.
Semantic Search - Describe what you want in plain English. Gemini embeddings paired with pgvector understand what you mean, not just what you typed.
AI Shopping Assistant - Every listing has an advisor powered by Gemini 3 with Google Search grounding. It pulls live market data, compares prices across the platform, and tells you whether you're getting a good deal, with cited sources.
Smart Listing Creation - Upload product photos and Gemini Vision auto-extracts the title, description, category, condition, specs, and a price suggestion. What used to take ten minutes now takes thirty seconds.
AI Backdrop Enhancement - One tap transforms a messy product photo into a professional studio shot using Gemini's image generation.
Content Moderation - Gemini Vision scans both text and images in real time, blocking policy violations before listings go live.
Similar Listings - Embedding similarity surfaces related items across the marketplace.
Deal Alerts - Background semantic matching compares new listings against saved searches and notifies users when something matches.
How I built it
The backend is Go with a service-oriented architecture (handlers, services, repositories). PostgreSQL with pgvector stores 768-dimensional Gemini embeddings alongside relational data. HNSW indexing keeps vector similarity searches fast. Real-time messaging runs over WebSocket with a hub-based connection manager. Deal alerts use PostgreSQL's LISTEN/NOTIFY. When a new listing is inserted, a trigger fires, the backend runs semantic matching against saved searches, and pushes notifications over WebSocket. No external message queue needed.
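The matching step that runs after a NOTIFY fires boils down to comparing the new listing's embedding against each saved search's precomputed embedding. A minimal sketch in Go, assuming cosine similarity with a fixed alert threshold (the type and function names here are illustrative, not the actual codebase):

```go
package main

import (
	"fmt"
	"math"
)

// SavedSearch pairs a user's saved query with its precomputed embedding.
type SavedSearch struct {
	UserID string
	Query  string
	Vector []float64
}

// cosineSimilarity returns the cosine of the angle between two vectors
// of equal length.
func cosineSimilarity(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// matchSavedSearches returns the searches whose similarity to the new
// listing's embedding clears the alert threshold.
func matchSavedSearches(listing []float64, searches []SavedSearch, threshold float64) []SavedSearch {
	var hits []SavedSearch
	for _, s := range searches {
		if cosineSimilarity(listing, s.Vector) >= threshold {
			hits = append(hits, s)
		}
	}
	return hits
}

func main() {
	listing := []float64{0.9, 0.1, 0.0}
	searches := []SavedSearch{
		{UserID: "u1", Query: "road bike", Vector: []float64{0.8, 0.2, 0.1}},
		{UserID: "u2", Query: "coffee table", Vector: []float64{0.0, 0.1, 0.9}},
	}
	for _, s := range matchSavedSearches(listing, searches, 0.8) {
		fmt.Println(s.UserID, s.Query) // only u1's search matches
	}
}
```

Each matched search then becomes a notification pushed to that user's WebSocket connection.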
The frontend is Next.js 16 with React 19, Zustand for state, Tailwind, and Framer Motion. The sell flow runs Gemini Vision analysis immediately after image upload, so form fields start populating before the user even begins typing.
Every AI feature calls the Gemini API server-side. The vision service handles multimodal analysis with structured JSON output and thinking enabled. The embeddings service generates vectors using gemini-embedding-001 with task-type differentiation (RETRIEVAL_DOCUMENT vs RETRIEVAL_QUERY). The assistant uses Gemini 3 Flash with Google Search grounding and maintains conversation history with thought signatures.
Infrastructure is Docker Compose for local dev, Nginx with SSL for production, S3 for images, deployed on EC2.
Challenges I ran into
Hybrid search ranking. Pure vector search returns semantically similar results but sometimes misses exact keyword matches. Pure keyword search is brittle. Getting Reciprocal Rank Fusion right, blending vector similarity with full-text scores into a single ranking, took real iteration. The weighting still isn't perfect, but it's better than either approach on its own.
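The fusion step itself is small once each retriever returns an ordered list of IDs. A minimal sketch, assuming the common RRF convention of k = 60 (the actual weighting in Justsell may differ):

```go
package main

import (
	"fmt"
	"sort"
)

// rrfFuse merges several ranked ID lists with Reciprocal Rank Fusion:
// each item scores sum(1 / (k + rank)) over the lists it appears in,
// so items ranked highly by multiple retrievers rise to the top.
func rrfFuse(rankings [][]string, k float64) []string {
	scores := make(map[string]float64)
	for _, ranking := range rankings {
		for rank, id := range ranking {
			scores[id] += 1.0 / (k + float64(rank+1))
		}
	}
	ids := make([]string, 0, len(scores))
	for id := range scores {
		ids = append(ids, id)
	}
	// Highest fused score first; break ties lexicographically so the
	// output is deterministic.
	sort.Slice(ids, func(i, j int) bool {
		if scores[ids[i]] != scores[ids[j]] {
			return scores[ids[i]] > scores[ids[j]]
		}
		return ids[i] < ids[j]
	})
	return ids
}

func main() {
	vector := []string{"a", "b", "c"}  // vector-similarity order
	keyword := []string{"c", "a", "d"} // full-text order
	// "a" wins: ranked high by both retrievers.
	fmt.Println(rrfFuse([][]string{vector, keyword}, 60))
}
```

A nice property of RRF is that it only consumes ranks, not raw scores, so it sidesteps the problem of normalizing cosine distances against full-text relevance scores.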
Vision analysis reliability. Gemini Vision is surprisingly good at identifying products, but confidence swings a lot. A clear photo of an iPhone gets near-perfect extraction. A blurry photo of a couch from across the room, not so much. I built the form to treat AI suggestions as editable defaults rather than final answers. The user always has the last word.
Embedding consistency across categories. "Good condition" means something very different for a phone vs a car. Enriching the embedding input by concatenating title, description, category, make, model, and aliases before generating the vector was the key insight for cross-category search quality.
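The enrichment step amounts to concatenating the structured fields into a single text before generating the vector. A sketch of that builder (field names are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// Listing holds the fields that feed the embedding input.
type Listing struct {
	Title, Description, Category string
	Make, Model                  string
	Aliases                      []string
}

// embeddingInput concatenates the listing's fields so category context
// (phone vs car) is baked into the vector, skipping empty parts.
func embeddingInput(l Listing) string {
	parts := []string{l.Title, l.Description, l.Category, l.Make, l.Model}
	parts = append(parts, l.Aliases...)
	var kept []string
	for _, p := range parts {
		if s := strings.TrimSpace(p); s != "" {
			kept = append(kept, s)
		}
	}
	return strings.Join(kept, ". ")
}

func main() {
	l := Listing{
		Title:    "2015 Corolla, good condition",
		Category: "Cars",
		Make:     "Toyota",
		Model:    "Corolla",
		Aliases:  []string{"sedan"},
	}
	// This enriched text, not just the raw title, is what gets embedded.
	fmt.Println(embeddingInput(l))
}
```

The enriched text is what gets sent to gemini-embedding-001 with task type RETRIEVAL_DOCUMENT; user queries are embedded separately with RETRIEVAL_QUERY.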
Real-time architecture without overengineering. WebSocket connection management, heartbeats, graceful disconnection, auth validation via JWT in query params. Building reliable real-time messaging from scratch in Go without reaching for a heavy framework was a deliberate choice. It paid off in control but cost time.
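The hub pattern follows the standard Go idiom: one goroutine owns the connection map and serializes register, unregister, and broadcast events over channels, so no mutex is needed. A stripped-down sketch (the real service adds JWT validation, heartbeats, and per-user routing):

```go
package main

import "fmt"

type client struct {
	id   string
	send chan string
}

// Hub owns the set of active clients; all map mutation happens inside
// Run's goroutine, so no locking is required.
type Hub struct {
	clients    map[string]chan string
	register   chan client
	unregister chan string
	broadcast  chan string
}

func NewHub() *Hub {
	return &Hub{
		clients:    make(map[string]chan string),
		register:   make(chan client),
		unregister: make(chan string),
		broadcast:  make(chan string),
	}
}

// Run processes hub events until done is closed.
func (h *Hub) Run(done <-chan struct{}) {
	for {
		select {
		case c := <-h.register:
			h.clients[c.id] = c.send
		case id := <-h.unregister:
			if send, ok := h.clients[id]; ok {
				close(send)
				delete(h.clients, id)
			}
		case msg := <-h.broadcast:
			for _, send := range h.clients {
				send <- msg
			}
		case <-done:
			return
		}
	}
}

func main() {
	h := NewHub()
	done := make(chan struct{})
	go h.Run(done)

	alice := client{id: "alice", send: make(chan string, 1)}
	h.register <- alice
	h.broadcast <- "new offer received"
	fmt.Println(<-alice.send)
	close(done)
}
```

In the real service each client's send channel is drained by a write pump that forwards messages to the WebSocket connection and pings on an interval.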
Subject isolation for background replacement. For studio-style background generation, it's sometimes hard to isolate the subject and keep it intact while the backdrop changes around it. I'm still fine-tuning this.
Accomplishments that I'm proud of
The vision search pipeline. You point your camera at a product, and within seconds Gemini identifies it, generates an embedding, and returns matching listings. And every piece of that pipeline (vision, embeddings, search) was already built for other features. They just needed to be wired together. That's what happens when you build with AI from the start instead of bolting it on after.
The listing creation flow. Watching Gemini populate an entire form from a set of photos, including category-specific fields like make, model, year, and mileage for vehicles, genuinely feels like how marketplaces should work.
Multiple integrations that aren't demos. They're production features in a full-stack app with auth, messaging, offers, reviews, notifications, and moderation. This isn't a proof of concept.
What I learned
Gemini 3 is underutilized. Most applications treat it as a chatbot API. But embeddings, vision, image generation, search grounding, and thinking are all distinct capabilities that can be composed into new user experiences. The marketplace is just one example. This pattern of deep, multi-surface AI integration applies to any domain.
I also learned that pgvector is ready for real workloads. No need for a separate vector database. PostgreSQL with HNSW indexing handles hybrid search well, and keeping vectors alongside relational data in one database makes the architecture dramatically simpler.
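For reference, the pgvector setup is a couple of statements (table and column names here are illustrative, not the actual schema; 768 matches the embedding dimension used above):

```sql
-- HNSW index over cosine distance for the 768-dim Gemini embeddings.
CREATE INDEX listings_embedding_idx
ON listings USING hnsw (embedding vector_cosine_ops);

-- <=> is pgvector's cosine distance operator, so ascending order
-- returns the nearest neighbors first.
SELECT id, title
FROM listings
ORDER BY embedding <=> $1
LIMIT 20;
```

Because the vectors live in the same database as the relational data, the hybrid query can join similarity results directly against listings, categories, and full-text indexes with no cross-system plumbing.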
What's next for Justsell - The Google of Marketplaces
- Native mobile app with a camera-first UX built around vision search
- Gemini Live integration for real-time conversational listing creation using voice and camera at the same time
- Price prediction using historical data and Gemini analysis to predict optimal listing prices and time-to-sell
- Expansion beyond New Zealand: the architecture is location-agnostic by design; I'm currently proving the concept in NZ's $4B secondhand market
Built With
- antigravity
- claudecode
- gemini3pro
- go
- next.js
- opus4.6
- pgvector
- postgresql
- react
- tailwind