Inspiration

LearnBridge was inspired by the large class sizes and resource gaps in many government schools across South Asia — classrooms with 50–70 students where teachers can’t give targeted, one‑on‑one help. We wanted a low‑cost, scalable way to combine lightweight AI tutoring with local volunteer mentors and offline learning materials so every child can get personalized support, even with poor internet access.

What it does

- Performs a 5‑question diagnostic to identify learning gaps and instantly generates a one‑week personalized study plan.
- Provides an LLM-powered Chat Tutor that gives tiered hints and step‑by‑step guidance (it does not reveal full solutions unless requested).
- Offers a Mentor Dashboard (UI simulated in the demo) for volunteers to accept sessions, award points, and track hours.
- Produces downloadable offline lesson packs (text + audio script) delivered via Base64 so students can learn without reliable connectivity.
- Includes a demo rewards system: students earn points for progress and can redeem mock in‑kind rewards (vouchers, food/coupon codes); tutors earn recognition and can award points.
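The Base64 lesson-pack delivery can be sketched in a few lines of Node.js. This is a minimal illustration of the approach, not the actual LearnBridge code; the function names and pack schema here are assumptions:

```javascript
// Sketch of the offline lesson-pack encoding (illustrative names/schema).
// Packs stay small on purpose: plain text plus an audio narration script,
// so downloads remain practical on slow networks.
function buildLessonPack(lesson) {
  const pack = {
    title: lesson.title,
    text: lesson.text,
    audioScript: lesson.audioScript,
  };
  // Base64-encode the JSON so the client can download it as a single blob.
  return Buffer.from(JSON.stringify(pack), "utf8").toString("base64");
}

function decodeLessonPack(encoded) {
  return JSON.parse(Buffer.from(encoded, "base64").toString("utf8"));
}
```

On the client, the decoded pack can be rendered directly or printed, so the content keeps working with no connectivity at all.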

How we built it

We built LearnBridge using the Lovable design system for a friendly, accessible UI and Node.js + Express for the backend API and OpenAI proxy. Lovable helped us deliver clean, approachable screens for students and mentors, while Node/Express provided a minimal, reliable server for chat, lesson-pack generation (Base64), and demo rewards.
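The proxy pattern above can be sketched as follows. The Express wiring (routing, body parsing) is omitted for brevity; the system-prompt wording, model name, and function names are illustrative assumptions, not our exact implementation:

```javascript
// Sketch of the OpenAI proxy logic behind a server-side chat route.
// The frontend never holds the API key; it posts to our server, which
// forwards the request to OpenAI.

// Assumed hint-first tutor system prompt (illustrative wording).
const TUTOR_SYSTEM_PROMPT =
  "You are a patient tutor. Give tiered hints and step-by-step guidance; " +
  "do not reveal the full solution unless the student explicitly asks.";

function buildTutorMessages(studentQuestion) {
  return [
    { role: "system", content: TUTOR_SYSTEM_PROMPT },
    { role: "user", content: studentQuestion },
  ];
}

// fetchImpl is injectable so the handler can be exercised without the
// real OpenAI API (and so a demo can swap in canned replies).
async function proxyChat(studentQuestion, fetchImpl = fetch) {
  const response = await fetchImpl("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumed model name
      messages: buildTutorMessages(studentQuestion),
    }),
  });
  return response.json();
}
```

Keeping the prompt construction in one small function also made it easy to iterate on tutor behavior without touching the rest of the server.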

Challenges we ran into

- Latency & API reliability: LLM responses can be slow and unpredictable; we added canned replies and UI messaging to keep the demo smooth.
- Time constraints: building a full backend, frontend, rewards, and offline export in a short window required tradeoffs (mentor scheduling is simulated; persistence is in‑memory).
- Offline/size tradeoffs: packaging lessons for offline use (text + audio) required a simple, lightweight format to avoid large downloads on slow networks.
- Safety & privacy: designing reward flows for minors needs parental consent and fraud prevention; in demo mode we use mock vouchers only.
- UX clarity: balancing helpful stepwise tutoring with not giving away answers required prompt engineering and explicit tutor behavior rules.
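The canned-reply fallback can be sketched as a race between the real LLM call and a timeout. The timeout value and canned wording here are illustrative assumptions:

```javascript
// Sketch of the demo's fallback when the LLM is slow or down
// (timeout and reply text are illustrative, not our exact values).
const CANNED_REPLY = {
  reply:
    "Let's take it one step at a time. First, what do you already know " +
    "about this problem?",
  canned: true,
};

async function chatWithFallback(callLlm, timeoutMs = 8000) {
  // If the LLM does not answer within timeoutMs, serve the canned reply.
  const timeout = new Promise((resolve) =>
    setTimeout(() => resolve(CANNED_REPLY), timeoutMs)
  );
  try {
    // Whichever settles first wins: real reply or canned fallback.
    return await Promise.race([callLlm(), timeout]);
  } catch {
    // Upstream error: degrade gracefully rather than break the demo.
    return CANNED_REPLY;
  }
}
```

The UI can check the `canned` flag to show a gentle "tutor is thinking" style message instead of failing silently.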

Accomplishments that we're proud of

- Functional MVP completed in a short timeframe, demonstrating the full learner journey: diagnostic → plan → LLM tutor → mentor acceptance → offline download.
- Implemented a tutor persona and structured prompt that reliably returns tiered hints plus walkthroughs and a parseable JSON block for the frontend.
- Built an in‑demo rewards module (points, catalog, redemption flow with mock voucher codes) to test motivational mechanics.
- Offline capability: built a lesson-pack endpoint and client‑side download so content can be used offline or printed.
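Extracting the parseable JSON block on the frontend can be sketched like this. We assume the prompt asks the model to append a fenced json block (three backticks + "json") after its prose hints; the schema in the test is illustrative:

```javascript
// Sketch of pulling the structured block out of the tutor's reply.
// Returns null when the model ignored the format or emitted bad JSON,
// so the UI can fall back to showing plain text.
function extractTutorJson(llmText) {
  const fence = "`".repeat(3); // three backticks
  const pattern = new RegExp(fence + "json\\s*([\\s\\S]*?)" + fence);
  const match = llmText.match(pattern);
  if (!match) return null;
  try {
    return JSON.parse(match[1]);
  } catch {
    return null;
  }
}
```

Treating a missing or malformed block as `null` (rather than throwing) is what keeps the frontend robust when the model drifts from the format.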

What we learned

- Prompt engineering is central: small changes to the system prompt materially change the tutor’s pedagogy (hint-first vs. answer-first).
- Keep the demo deterministic: canned responses and parseable JSON outputs keep the frontend robust even when APIs fail.
- Offline-first UX matters: short, audio-friendly lessons and small text packs are more practical in low-bandwidth environments than large PDFs.
- Motivation mechanics need policy: rewards are powerful but require anti‑fraud measures, parental consent, and administrative oversight in production.
- Rapid prototyping strategy: use managed services (Replit, OpenAI, simple in‑memory stores) to move fast and demonstrate value.

What's next for LearnBridge

Short-term (0–3 months)

- Pilot with one community center or school: measure pre/post diagnostic gains, plan completion rates, and mentor engagement.
- Add persistence (Supabase/Firebase) for student profiles, point balances, and audit logs.
- Improve offline packs: generate lightweight PDFs and optional MP3 audio for lessons.

Medium-term (3–9 months)

- Curriculum RAG: add embeddings and retrieval so lessons and explanations are curriculum‑aligned and locally relevant.
- PWA/mobile app: make LearnBridge installable, offline-capable, and optimized for low-end devices.
- Rewards partnerships: pilot real in‑kind rewards with partner NGOs or local vendors; add parental consent and admin approval flows.

Long-term (9–18 months)

- Scale pilots into districts via government or NGO partnerships, localize content into regional languages, and run controlled evaluations to measure learning gains at scale.
- Build tutor accreditation and verified volunteering records to make volunteer hours portable for college/scholarship applications.
- Add fraud prevention, rate limiting, and stronger moderation workflows for safety and compliance.

Suggested KPIs to measure in pilots

- Diagnostic completion rate and weekly plan completion rate.
- Average improvement in topic mastery (pre/post diagnostics) after 8–12 weeks.
- Active mentors recruited and sessions delivered per month.
- Redemption rates for rewards (demo vs. real) and any correlation with engagement.

Built With

Lovable, Node.js, Express, OpenAI
