Inspiration
Communities are full of people who want to help - someone with extra groceries, a neighbor who can tutor, a local business willing to donate - but there's no easy way to connect them. Existing platforms are either too transactional (Craigslist, Facebook Marketplace) or too siloed (food banks, volunteer org websites). We wanted to build a single open marketplace where generosity flows freely across volunteering, food, items, and crowdfunding/donations - and where contributing feels genuinely rewarding.
What it does
GoodPlace is a community resource exchange where users post Requests (calls for help) and Offers (contributions) across four categories: Volunteering Services, Food, Crowdfunding/Donations, and Items.
Users can browse a smart-sorted discovery feed filtered by location, urgency, and category. When someone wants to help, they respond to a post; the author accepts or rejects responders, and matched users enter a real-time chat to coordinate. Once a contribution is completed, both parties rate each other - and the contributor earns Community Points scored by an LLM engine that weighs the task difficulty, performance feedback, and reviewer credibility.
Points unlock ranks (No Rank → Bronze → Silver → Gold → Platinum), power a public leaderboard, and build a transparent contribution history on every user's profile.
How we built it
GoodPlace is a mono-repo of independent FastAPI microservices, each owned by one team member to eliminate merge conflicts:
- Posts service - post CRUD, respond/accept matching, status lifecycle, and LLM-based CP range estimation on every new post
- Discovery service - smart feed sorting combining urgency (40%), proximity (30%), recency (20%), and CP reward potential (10%) using a Euclidean geolocation approximation
- Social service - ratings, a deterministic CP scoring engine (LLM reasons about contribution value and outputs a math expression; a safe AST-based calculator evaluates it), leaderboard, and user profiles
- LLM gateway - a stateless proxy to google/gemma-4-31b-it via OpenRouter, with text, vision, and SSE streaming endpoints
- Frontend - React 19 with Tailwind CSS, a custom earthy design system (Fraunces + DM Sans, terracotta/sage/linen palette), and a mobile-first layout with bottom tabs and a sidebar. Chat uses Supabase Realtime for real user-to-user messaging and LLM streaming for simulated partners.
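The discovery service's weighted ranking can be sketched roughly as follows. This is a minimal illustration using the weights stated above (urgency 40%, proximity 30%, recency 20%, CP reward 10%); the `Post` fields, normalization bounds, and function names are hypothetical, not the actual service code.

```python
# Hypothetical sketch of the discovery feed's weighted sort key.
# Weights come from the write-up; everything else is illustrative.
from dataclasses import dataclass

WEIGHTS = {"urgency": 0.40, "proximity": 0.30, "recency": 0.20, "cp": 0.10}

@dataclass
class Post:
    urgency: float      # assumed already normalized to [0, 1]
    distance_km: float  # distance from the requesting user
    age_hours: float    # time since posting
    cp_reward: float    # estimated Community Points reward

def score(post: Post, radius_km: float, max_age_hours: float, max_cp: float) -> float:
    """Combine four normalized factors into one sort key in [0, 1]."""
    proximity = max(0.0, 1.0 - post.distance_km / radius_km)   # closer is better
    recency = max(0.0, 1.0 - post.age_hours / max_age_hours)   # newer is better
    cp = post.cp_reward / max_cp if max_cp else 0.0            # relative to result set
    return (WEIGHTS["urgency"] * post.urgency
            + WEIGHTS["proximity"] * proximity
            + WEIGHTS["recency"] * recency
            + WEIGHTS["cp"] * cp)
```

Note that the CP factor is normalized against the maximum reward in the current result set, so the same post can rank differently depending on what else is in the feed.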
All services share a single Supabase (Postgres) database and deploy together via Docker Compose.
Challenges we ran into
LLM arithmetic is unreliable. We needed consistent, explainable CP scores - but asking an LLM to produce a final number directly led to values that didn't match its own reasoning. Our solution: the LLM outputs a structured math expression (base * performance * credibility_weight) and a safe AST-based Python calculator evaluates it deterministically. Same reasoning always produces the same number.
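The reasoner/calculator split described above can be sketched in a few lines. This is an assumed minimal version: the whitelisted node set and the `safe_eval` name are illustrative, not the exact implementation.

```python
# Hedged sketch of a safe AST-based calculator: the LLM emits a math
# expression, and only whitelisted arithmetic nodes are evaluated.
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Deterministically evaluate an arithmetic expression; reject anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("disallowed expression node")
    return walk(ast.parse(expr, mode="eval"))
```

Because the model only outputs an expression like `40 * 0.9 * 1.1` (base * performance * credibility_weight) and never the final number, the same reasoning trace always yields the same score, and anything that is not plain arithmetic is rejected outright.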
Geolocation and smart sorting together. The discovery feed needs to rank posts by four weighted factors simultaneously while excluding posts beyond the user's radius. Getting the normalization right - especially for CP reward potential, which is relative to the current result set - required careful query design.
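For context, a Euclidean geolocation approximation of the kind mentioned above might look like this sketch. The constant, function names, and post shape are assumptions; the approach (scale longitude by the cosine of latitude, then use planar distance) is standard for neighborhood-scale radius filtering.

```python
# Assumed sketch of an equirectangular/Euclidean distance approximation.
# Good enough at neighborhood scale; not suitable for long distances.
import math

KM_PER_DEG_LAT = 111.32  # kilometers per degree of latitude, roughly constant

def approx_distance_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Planar approximation: shrink longitude by cos(latitude), then Pythagoras."""
    dlat = (lat2 - lat1) * KM_PER_DEG_LAT
    dlon = (lon2 - lon1) * KM_PER_DEG_LAT * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dlat, dlon)

def within_radius(posts, user_lat, user_lon, radius_km):
    """Keep only posts inside the user's search radius."""
    return [p for p in posts
            if approx_distance_km(user_lat, user_lon, p["lat"], p["lon"]) <= radius_km]
```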
Real-time chat with two modes. Seed-to-seed chats use Supabase Realtime subscriptions. All other pairings use LLM streaming via SSE. Detecting which mode to use, handling the streaming token-by-token UI, and persisting LLM messages back to the database without race conditions took more coordination than expected.
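The "persist after streaming" pattern for the LLM mode can be sketched as follows. This is a simplified framework-free illustration (the `stream_and_persist` name and `save` callback are hypothetical): tokens are framed as Server-Sent Events `data:` lines, accumulated locally, and written to the database exactly once after the stream ends, which avoids racing partial writes against the live stream.

```python
# Hypothetical sketch: frame streamed LLM tokens as SSE events and persist
# the full message once, after the stream completes.
from typing import Callable, Iterable, Iterator

def stream_and_persist(tokens: Iterable[str], save: Callable[[str], None]) -> Iterator[str]:
    """Yield each token as an SSE 'data:' frame; save the joined message at the end."""
    parts = []
    for tok in tokens:
        parts.append(tok)
        yield f"data: {tok}\n\n"
    save("".join(parts))        # single write after streaming, so no partial rows
    yield "data: [DONE]\n\n"    # conventional end-of-stream marker
```

A FastAPI endpoint would wrap a generator like this in a streaming response with the `text/event-stream` content type; the frontend consumes the frames token by token.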
Keeping five services in sync. Cascade deletes, denormalized stats (rating averages, community points totals), and cross-service reads required tight contracts between services. We documented every table ownership boundary explicitly and enforced them strictly.
Accomplishments that we're proud of
- The CP scoring engine - the LLM-as-reasoner + deterministic-calculator pattern produces scores that are both nuanced and fully auditable. Every Community Points award comes with a structured markdown explanation stored on the user's profile.
- Zero merge conflicts - strict service ownership meant four people worked in parallel across a full-stack monorepo for an entire hackathon without a single conflict.
- A design system that feels cohesive - the earthy palette, custom typography, and consistent component library make GoodPlace feel like a real product, not a hackathon prototype.
What we learned
- LLMs are reasoning engines, not calculators. Separating those concerns - letting the model think and a deterministic system compute - unlocks more reliable and trustworthy AI behavior.
- Microservice ownership boundaries are worth over-engineering upfront. The investment in CLAUDE.md, per-service contracts, and strict table ownership paid back immediately in parallel throughput.
- Real-time features surface infrastructure assumptions fast. Supabase Realtime is powerful, but threading it correctly through React state with deduplication guards and cleanup on unmount took meaningful care.
What's next for GoodPlace
- Real authentication - bcrypt + session tokens replacing the stub login, with email and phone verification
- ID verification - LLM vision check on uploaded government IDs, unlocking blue checkmarks and rank certificates
- PDF certificates - downloadable Action Certificates per contribution, Rank Certificates on milestone achievements, and full contribution Transcripts
- Map view - a geographic visualization of nearby posts alongside the list feed
- Smart matching - proactive suggestions connecting posts to nearby users whose contribution history matches the need