FindSecure
Lost something important? Let AI and real humans help you get it back.
Inspiration
Every year, thousands of people lose valuable items — phones, wallets, keys, sentimental jewelry, important documents — in public spaces, campuses, transit systems, events, and workplaces. Most lost & found systems are either:
- completely passive (a physical box or a basic Google Form),
- fragmented across different organizations,
- or rely purely on keyword search, which fails when descriptions are vague or the finder uses different words.
At the same time, modern AI (especially multimodal LLMs) has become remarkably good at understanding natural language descriptions, comparing images, and reasoning about similarity — yet very few real-world lost & found systems actually use it.
We wanted to build something that combines:
- AI-powered fuzzy matching (text + image understanding)
- human-in-the-loop verification (real assistants reviewing top candidates)
- privacy-first design (users only see matches when approved)
- real-time updates via Supabase
…to create the fastest, most accurate, and most respectful way to reunite people with their lost belongings.
What it does
FindSecure is a lost & found platform with these core flows:
1. Anyone can report a lost item
→ rich form (title, detailed description, category, color, brand, distinguishing features, photos, location/date lost)
2. AI immediately generates potential matches against the private inventory of found items
→ using Gemini to rank relevance (text + optional image reasoning)
3. Trained assistants (staff) review the top candidates
→ see inquiry details + ranked found items side-by-side
→ approve the best match or mark "no match"
4. When approved, the real owner sees the proposed found item
→ with photos and description
→ can confirm "Yes, that's mine!" → status → resolved
→ or "No, not mine" → status → rejected/not found
5. Realtime updates everywhere via Supabase Realtime
→ owner sees the status change instantly
→ assistant sees new inquiries/matches
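Below is a rough TypeScript sketch of the data model this flow implies. The field names, status values, and table shapes are illustrative assumptions based on the description above, not the exact schema.

```typescript
// Illustrative types only; the real Supabase schema may differ.

// Inquiry lifecycle implied by the flow above (plus the enum values
// mentioned later in "Challenges we ran into").
type InquiryStatus =
  | "pending"    // reported, waiting for AI candidates + assistant review
  | "matched"    // assistant approved a candidate, waiting for the owner
  | "resolved"   // owner confirmed "Yes, that's mine!"
  | "rejected"   // owner said "No, not mine"
  | "not_found"; // assistant marked "no match"

interface Inquiry {
  id: string;
  user_id: string;            // the person who reported the lost item
  title: string;
  description: string;
  category: string;
  color: string;
  brand?: string;
  distinguishing_features?: string;
  photo_urls: string[];
  lost_location?: string;
  lost_date?: string;
  status: InquiryStatus;
}

// AI proposals reviewed by assistants.
interface PotentialMatch {
  id: string;
  inquiry_id: string;
  found_item_id: string;
  rank: number;               // model-assigned ordering
  reasoning?: string;         // shown to the reviewing assistant
}

// A single candidate confirmed by a human assistant.
interface Match {
  id: string;
  inquiry_id: string;
  found_item_id: string;
}
```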
How we built it
Tech stack
Frontend
- React + Vite + TypeScript
- shadcn/ui + Tailwind CSS
- React Router for navigation
- Sonner for toasts
Backend / Database / Auth / Storage / Realtime
- Supabase (PostgreSQL + Auth + Storage + Realtime + Edge Functions)
- Row Level Security (RLS) on every table
- Custom `is_staff()` security definer function
AI Matching
- OpenAI GPT-4o-mini (via Supabase Edge Function)
- Prompt engineering to rank found items by relevance
- Stores ranked candidates in the `potential_matches` table
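As a rough illustration of how this step could be wired up, here is a minimal Edge Function sketch. The table and column names (`found_items`, `potential_matches` with `inquiry_id` / `found_item_id` / `rank` / `reasoning`), the prompt, and the `OPENAI_API_KEY` secret are assumptions, not the exact production code.

```typescript
// supabase/functions/match-inquiry/index.ts
// Hypothetical sketch: rank found items for one inquiry and store candidates.
import { createClient } from "npm:@supabase/supabase-js@2";

Deno.serve(async (req) => {
  const { inquiry } = await req.json(); // { id, title, description, category, color, ... }

  // Service-role client: the found-item inventory is private and only
  // readable server-side.
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!,
  );

  const { data: foundItems, error } = await supabase
    .from("found_items")
    .select("id, title, description, category, color");
  if (error) throw error;

  // Ask the model for a ranked list as structured JSON.
  const completion = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${Deno.env.get("OPENAI_API_KEY")}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      response_format: { type: "json_object" },
      messages: [
        {
          role: "system",
          content:
            "You match lost-item reports to found items. Weigh category and color heavily. " +
            'Respond with JSON: {"candidates":[{"found_item_id":"...","rank":1,"reasoning":"..."}]}',
        },
        { role: "user", content: JSON.stringify({ inquiry, foundItems }) },
      ],
    }),
  });
  const body = await completion.json();
  const { candidates } = JSON.parse(body.choices[0].message.content);

  // Persist ranked candidates for assistants to review.
  await supabase.from("potential_matches").insert(
    candidates.map((c: { found_item_id: string; rank: number; reasoning: string }) => ({
      inquiry_id: inquiry.id,
      ...c,
    })),
  );

  return new Response(JSON.stringify({ count: candidates.length }), {
    headers: { "Content-Type": "application/json" },
  });
});
```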
Admin dashboard
- Sidebar navigation
- Matches-focused view (only inquiries that have AI candidates)
- Side-by-side review interface with approve / no-match buttons
User dashboard
- Track Inquiries page with realtime status updates
- Matched item preview + Confirm / Reject buttons
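As a small sketch, the Confirm / Reject buttons can boil down to a single status update. The `inquiries` table and column names here are assumptions; the status values mirror the `inquiry_status` enum mentioned in the challenges below.

```typescript
// Hypothetical handler behind the Confirm / Reject buttons.
// Assumes an "inquiries" table whose status column uses the inquiry_status enum.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY,
);

async function respondToMatch(inquiryId: string, confirmed: boolean) {
  const { error } = await supabase
    .from("inquiries")
    .update({ status: confirmed ? "resolved" : "rejected" })
    .eq("id", inquiryId);
  if (error) throw error;
}
```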
Deployment
- Vercel (frontend)
- Supabase (everything else)
Challenges we ran into
- Supabase `.single()` behavior — it errors with PGRST116 when 0 rows come back instead of returning null → had to switch to `.maybeSingle()` and handle null explicitly (see the first sketch after this list).
- Enum pain — forgot to add `'not_found'` and `'rejected'` to the `inquiry_status` enum → caused 400 Bad Request on PATCH until we ran `ALTER TYPE`.
- RLS debugging — initially users couldn't see matches → had to carefully craft a policy allowing `auth.uid() = user_id` on the `matches` table.
- Matching quality — early GPT prompts were returning mediocre rankings → iterated many times, adding few-shot examples, structured JSON output, and category/color weighting.
- Image handling — Supabase Storage public URLs + multiple images per item → had to manage array rendering carefully and handle missing images gracefully.
- Realtime gotchas — making sure the subscription only fires for the current user's inquiries and properly merges updates without duplicating cards (see the second sketch after this list).
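A minimal sketch of the null handling from the first point above, assuming a `matches` table keyed by `inquiry_id`:

```typescript
// Hypothetical sketch of the .maybeSingle() pattern; table/column names assumed.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY,
);

async function fetchConfirmedMatch(inquiryId: string) {
  // .single() errors with PGRST116 when zero rows come back;
  // .maybeSingle() returns data === null instead, which we handle explicitly.
  const { data, error } = await supabase
    .from("matches")
    .select("*")
    .eq("inquiry_id", inquiryId)
    .maybeSingle();

  if (error) throw error;
  if (data === null) return null; // no confirmed match yet: show "still searching"
  return data;                    // show the matched found-item preview
}
```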
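And a sketch of the realtime subscription from the last point, scoped to the current user and merging updates by id. The channel name, table, and column names are assumptions.

```typescript
// Hypothetical hook for the Track Inquiries page; schema details assumed.
import { useEffect, useState } from "react";
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY,
);

type Inquiry = { id: string; status: string; [key: string]: unknown };

function useInquiryUpdates(userId: string, initial: Inquiry[]) {
  const [inquiries, setInquiries] = useState<Inquiry[]>(initial);

  useEffect(() => {
    const channel = supabase
      .channel("inquiry-updates")
      .on(
        "postgres_changes",
        {
          event: "UPDATE",
          schema: "public",
          table: "inquiries",
          // Only fire for the current user's inquiries.
          filter: `user_id=eq.${userId}`,
        },
        (payload) => {
          const updated = payload.new as Inquiry;
          // Merge by id so a status change replaces the existing card
          // instead of adding a duplicate.
          setInquiries((prev) =>
            prev.map((i) => (i.id === updated.id ? { ...i, ...updated } : i)),
          );
        },
      )
      .subscribe();

    return () => {
      supabase.removeChannel(channel);
    };
  }, [userId]);

  return inquiries;
}
```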
Accomplishments that we're proud of
- Fully private found-item inventory — only assistants see it, never exposed to public
- True human-in-the-loop — AI proposes, humans decide → much higher trust & accuracy
- Beautiful, responsive UI with glassmorphism + dark mode out of the box
- End-to-end realtime experience (status changes appear instantly on user side)
- Solid RLS setup — users can only see their own data, assistants see everything needed
- Clean separation: `potential_matches` (AI proposals) vs `matches` (confirmed by a human)
What we learned
- How surprisingly good small/cheap models (gpt-4o-mini) are at semantic matching when the prompt is well-structured
- The importance of explicit null handling with Supabase queries (`.maybeSingle()`, check `data === null`)
- How powerful Supabase Edge Functions + Realtime can be for building full-stack apps with almost no backend code
- That lost & found is as much a trust & privacy problem as it is a matching problem
- Writing good few-shot prompts takes time but dramatically improves consistency
What's next for FindSecure
- Multimodal image similarity — use CLIP or GPT-4o vision to compare lost & found photos directly
- Location-based filtering — add geolocation awareness (with user consent) to prioritize nearby matches
- Public found-item reporting — allow anyone to submit found items (with moderation)
- Notifications — email/push when status changes or new follow-up question appears
- Analytics for assistants — dashboard showing match success rate, average time to resolution
- Mobile app — React Native version so users can report lost items on the go
- Community features — campus/company-wide instances, reward system for finders
We believe FindSecure can become the modern, trustworthy lost & found platform that actually gets items back to people faster — and we're excited to keep building it.
Thanks for checking out the project!
Feedback / ideas / bug reports very welcome → @marzito212
Built With
- gemini
- lovable
- supabase