About the project

Inspiration

We've all watched friends say "I should volunteer more this semester" and then never follow through — not because they don't care, but because finding the right thing takes 20 minutes of scrolling Craigslist-style listings across five sites. The apps we keep opening (TikTok, Spotify, Instagram) all share one trait: a feed that knows what you want before you do. Skaler started with a simple question — what if a volunteering app felt like that? Not "swipe TikTok-style" (we tried, didn't work), but a personalized feed that explains why it picked each thing for you.

How we built it

  • Data first. Three Mongoose models — User, Opportunity, Save — with a unique compound (userId, opportunityId) index so re-saving is idempotent, plus a follow graph between users so social proof carries real signal. A seed script generates 8 realistic student profiles, 35 opportunities across 8 categories, and prior saves so the demo feels alive on day one.
  • Two matchers, one signature. We built a deterministic skill-overlap matcher first in pure TypeScript. That locked the contract — rankOpportunities(...) → { score, reason, socialProof }[] — so when we layered the LLM matcher on top, the API route didn't change a line. The stub turned out to be a lifesaver as a fallback.
  • One batched LLM call per user. geminiMatcher.ts sends the user profile and all 35 opportunities in a single call with responseSchema forcing structured JSON output. ~2-3s cold, cached per-user for 5 minutes so swiping doesn't re-burn quota.
  • Two surfaces, one backend. The Vite + React SPA in frontend/ calls relative /api/* URLs that proxy to the Next.js API routes at the project root. Two artifacts, one mental model.
  • Same login, two roles. A Volunteer / Organizer toggle on the login page routes to two different dashboards — same backend, two markets.
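The "two matchers, one signature" contract above can be sketched in a few lines. Only the rankOpportunities name and the { score, reason, socialProof } return shape come from our code; the User/Opportunity field names and the scoring formula here are illustrative stand-ins for the real deterministic matcher.

```typescript
// Sketch of the shared matcher contract. Field names and the overlap
// heuristic are illustrative; the return shape is the real contract.
type User = { name: string; skills: string[] };
type Opportunity = { id: string; title: string; skills: string[]; saverNames: string[] };
type Match = { score: number; reason: string; socialProof: string };

function rankOpportunities(user: User, opps: Opportunity[]): Match[] {
  return opps
    .map((opp) => {
      // Deterministic skill overlap: fraction of the opportunity's
      // required skills the user already has.
      const overlap = opp.skills.filter((s) => user.skills.includes(s));
      const score = opp.skills.length ? overlap.length / opp.skills.length : 0;
      const reason = overlap.length
        ? `Matches your skills: ${overlap.join(", ")}`
        : "A chance to try something outside your usual skills";
      const socialProof = opp.saverNames.length
        ? `Saved by ${opp.saverNames.slice(0, 2).join(" and ")}`
        : "";
      return { score, reason, socialProof };
    })
    .sort((a, b) => b.score - a.score);
}
```

Because the LLM matcher returns the same Match[] shape, the API route can swap between the two without any caller noticing — which is exactly what made the stub a viable fallback.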
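The per-user 5-minute cache in front of the batched Gemini call boils down to a small wrapper. This is a minimal sketch assuming an in-process Map store; the function and type names are ours for illustration, not the actual identifiers in geminiMatcher.ts.

```typescript
// Minimal per-user TTL cache so swiping doesn't re-burn LLM quota.
// The Map-based store and all names here are illustrative.
type Cached<T> = { value: T; expiresAt: number };

const TTL_MS = 5 * 60 * 1000; // 5 minutes, per the writeup
const cache = new Map<string, Cached<unknown>>();

async function cachedMatches<T>(
  userId: string,
  compute: () => Promise<T>, // e.g. the single batched Gemini call
): Promise<T> {
  const hit = cache.get(userId);
  if (hit && hit.expiresAt > Date.now()) return hit.value as T;
  const value = await compute(); // one call covers all 35 opportunities
  cache.set(userId, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```

Within the TTL every interaction hits the cache, so a user's entire feed session costs at most one LLM call.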

Challenges we faced

  • Atlas auto-paused our free M0 cluster mid-hackathon. The error pointed at the IP allowlist and sent us debugging network rules for a while. Real cause: TCP was being accepted by Atlas's load balancer but the underlying mongod was off, so TLS failed with internal_error. We caught it with a direct tls.connect probe and one click on Resume.
  • Gemini's free-tier daily quota is shared across model variants. Switching from gemini-2.5-flash-lite to gemini-2.0-flash didn't help — same bucket. gemini-flash-latest was the only model we found with a separate quota pool. Both the matcher and coach got templated fallbacks for when the LLM is unreachable.
  • Gemini chat history must start with a user turn, but our UI shows the AI greeting first. The first user message would 500 the API. We now strip leading model turns server-side before calling startChat.
  • Mongoose's connection cache is a footgun. A rejected mongoose.connect() promise sat in cache forever, replaying the original failure on every subsequent request even after Atlas came back. Wrapped the cache to null the promise on rejection.
  • Pivoted from swipe-cards to a feed mid-build. We initially shipped a TikTok-style swipe stack with framer-motion, then realized the Indeed/Facebook feed pattern made saved state more obvious and gave us room to surface the AI reason as a permanent callout. ~200 lines of gesture code disappeared in one commit and we didn't look back.
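The history-trim fix for the chat 500s is a one-liner once you see it: Gemini requires the first turn in history to have role "user", but our UI renders an AI greeting first. A sketch, with a Turn type that mirrors the SDK's Content shape (the helper name is ours):

```typescript
// Gemini's startChat rejects histories that begin with a model turn,
// so drop any leading model turns server-side before passing history in.
type Turn = { role: "user" | "model"; parts: { text: string }[] };

function stripLeadingModelTurns(history: Turn[]): Turn[] {
  const firstUser = history.findIndex((t) => t.role === "user");
  return firstUser === -1 ? [] : history.slice(firstUser);
}
```

The UI keeps its greeting; the API just never sees it.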
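The connection-cache footgun and its fix can be shown without mongoose itself. This sketch injects the connect function so the pattern is testable in isolation; in the real code `connect` is mongoose.connect and the cache lives on a module-level (or globalThis) variable.

```typescript
// The bug: a rejected connect() promise stays cached forever, replaying
// the original failure on every request even after the database is back.
// The fix: null out the cache when the promise rejects.
let cachedConn: Promise<unknown> | null = null;

function getConnection(connect: () => Promise<unknown>): Promise<unknown> {
  if (!cachedConn) {
    cachedConn = connect().catch((err) => {
      cachedConn = null; // forget the failure so the next request retries
      throw err;         // still surface the error to this caller
    });
  }
  return cachedConn;
}
```

With the `.catch` in place, the first request after Atlas resumes triggers a fresh connect instead of replaying the stale rejection.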

What we learned

  • Graceful degradation matters more than feature count. Every fallback we built (stub matcher, templated coach reply, connection retry, history-trim, etc.) was because something broke during the hackathon. The app feels solid not because nothing failed, but because every failure has a recovery path.
  • responseSchema is the difference between "demo-grade" and "production-grade" LLM output. Without it, JSON parsing fails ~5% of the time on long batches. With it, every response is parseable.
  • Building two parallel UIs (a Next.js scaffold and a polished Vite SPA) was a 2x speedup, not a 2x slowdown. The scaffold gave us a fast, ugly way to verify the API; the SPA gave us the visual polish judges remember. Both shared the same backend.
  • Honest copy beats clever copy. The AI reasons feel personal because they name the user's actual skills. The coach feels real because it admits when it's rate-limited and falls back to something useful. Generic platitudes ("great opportunity!") would have killed the magic.
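For context on the responseSchema point: Gemini's generationConfig accepts an OpenAPI-subset schema that constrains the model's JSON output. The field names below are illustrative, not the exact schema in geminiMatcher.ts, but the shape is the kind of thing that made every batch response parseable.

```typescript
// Illustrative responseSchema in the OpenAPI-subset format Gemini
// accepts (string type names, as in the REST API). The actual fields
// in our matcher may differ.
const matchSchema = {
  type: "ARRAY",
  items: {
    type: "OBJECT",
    properties: {
      opportunityId: { type: "STRING" },
      score: { type: "NUMBER" },
      reason: { type: "STRING" },
    },
    required: ["opportunityId", "score", "reason"],
  },
};
```

Paired with responseMimeType: "application/json", this turns "hope the model emits valid JSON" into a structural guarantee.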
