Inspiration
One thing I've noticed is that many students, myself included, use AI tools to study. Whether it's for class or personal topics, with ChatGPT, Gemini, and the like, it seems we're almost too reliant on them. The issue is that as good as these LLMs are, they aren't the professor. Even if they match the professor's knowledge (and some models seem to be nearing or even surpassing it), at the end of the day they're not the ones teaching the actual course the students are sitting in. For example, many students lean heavily on LLMs for homework: they can explain solutions and break things down, and they're the smartest resource available at the moment. But what I've found most helpful for homework is going to my professor's office hours and working on it in the room. That way, if I have a question, they can answer it and point me toward how they interpret the field and want to teach their course. The question I asked myself while brainstorming was: "What if we could replicate this?"
What it does
CourseForge.ai turns any class into a course-anchored AI tutor. You create a course, upload PDFs or images of notes/lectures, and then chat. Answers are grounded in your materials with inline numeric citations, page-deep links into the PDFs, lightweight confidence signals, and image callouts when the source is a figure. The UI keeps a chat-like history so you can ask follow-ups naturally.
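The citation and deep-link behavior described above can be sketched in a few lines. This is a minimal illustration, not the project's actual code; the chunk field names (`doc_url`, `page`, `kind`) and the link format are assumptions:

```python
# Illustrative sketch: turning retrieved chunks into numbered inline
# citations with page-deep PDF links and an "image" tag for figures.
# Field names here are assumptions, not the project's real schema.

def build_sources(chunks):
    """Assign each retrieved chunk a citation number and a deep link."""
    sources = []
    for i, chunk in enumerate(chunks, start=1):
        # Browser PDF viewers honor the #page= URL fragment, which is
        # what makes page-deep links into the source PDFs work.
        link = f"{chunk['doc_url']}#page={chunk['page']}"
        tag = "image" if chunk.get("kind") == "image" else "text"
        sources.append({"n": i, "link": link, "tag": tag})
    return sources

chunks = [
    {"doc_url": "/files/lecture3.pdf", "page": 12, "kind": "text"},
    {"doc_url": "/files/notes.pdf", "page": 4, "kind": "image"},
]
sources = build_sources(chunks)
```

The answer text would then reference sources as [1], [2], and the UI renders each entry as a clickable link that opens the PDF at the cited page.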
How I built it
Supabase (Postgres + Storage) as the vector DB, with RPCs to keep the service layer thin and fast. FastAPI for the service layer. Next.js for the frontend. A lightweight LLM for image descriptions. A RAG LLM for enhanced responses.
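The retrieval step that the Supabase RPC performs server-side amounts to a nearest-neighbor search over stored chunk embeddings. In the real setup pgvector does this in SQL inside a Postgres function, called as an RPC; here is a stdlib-only sketch of the idea, with all names illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_chunks(query_embedding, rows, top_k=3):
    """Rank stored chunks by similarity to the query embedding.

    Illustrative only: in the actual service this logic lives inside
    Postgres (pgvector), which is what keeps the FastAPI layer thin.
    """
    scored = [(cosine(query_embedding, r["embedding"]), r) for r in rows]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [r for _, r in scored[:top_k]]
```

Pushing the search into the database means the service layer only embeds the query, invokes one RPC, and formats the results, which is why it stays small and fast.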
Challenges we ran into
Env quirks: mixing server vs browser env vars; switching to NEXT_PUBLIC_* for the UI.
Module/name bugs: ModuleNotFoundError: pdf_parser (package path), and an image-ingestion regression referencing pdf_path.
UX polish: rendering LaTeX instead of raw \cosh/\gamma, avoiding broken citations inside \text{…}, and keeping multi-turn chat history instead of overwriting.
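The citation-sanitizing fix mentioned above (keeping citation markers out of `\text{…}`, where KaTeX would render them as literal bracket noise) can be done with a small regex pass. A sketch, under the assumption that inline citations look like `[1]`, `[2]`:

```python
import re

def sanitize_citations(latex: str) -> str:
    """Strip inline [n] citation markers that landed inside \\text{...}.

    Assumes non-nested \\text{} arguments and [digit] citation markers;
    citations outside \\text{...} are left alone for the UI to link.
    """
    def strip_inside(match):
        cleaned = re.sub(r"\s*\[\d+\]", "", match.group(1))
        return r"\text{" + cleaned + "}"
    return re.sub(r"\\text\{([^{}]*)\}", strip_inside, latex)
```

For example, `\text{grows like } \cosh(\gamma x) \text{ for large x [2]}` keeps its math intact while the stray marker inside the text span is removed.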
Accomplishments that we're proud of
End-to-end flow works: Create course → Upload notes → Ask → Cited, LaTeX-rendered answers with page-deep links.
A fully working RAG pipeline, from retrieval through cited generation.
Image ingestion + retrieval actually shows up as sources (with an “image” tag).
A small, clean service layer that’s easy to extend, with solid validations and simple RPC boundaries.
A lightweight but pleasant UI: sidebar courses, collapsible uploader, sticky composer, and per-message sources.
What we learned
The service layer pays off—keeping UI ↔ data concerns clean makes debugging fast.
Next.js env scoping matters: if the browser needs it, it must be NEXT_PUBLIC_*.
Retrieval details (hybrid fallback, dedupe, deep links) dramatically improve trust and usability.
Little content fixes (KaTeX, citation sanitizing) turn “works” into “feels great”.
Ownership checks are essential even at hackathon speed—failing safely beats silent wrong answers.
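Of the retrieval details listed above, dedupe is the simplest to show: collapsing repeat hits from the same document page keeps the sources list short and trustworthy. A sketch, assuming chunks carry `doc_id` and `page` fields and arrive ranked best-first:

```python
def dedupe_chunks(chunks):
    """Keep only the highest-ranked chunk per (document, page).

    Illustrative: field names are assumptions. Input is assumed to be
    sorted best-first, so the first hit for each page is the one kept.
    """
    seen = set()
    unique = []
    for chunk in chunks:
        key = (chunk["doc_id"], chunk["page"])
        if key not in seen:
            seen.add(key)
            unique.append(chunk)
    return unique
```

Small as it is, this kind of pass is what turns a raw top-k result into a citation list a reader is willing to click through.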
What's next for CourseForge.ai
Auth + RLS: Supabase Auth with row-level security; remove the demo header.
Persistent chat history: store Q/A in Postgres per course, with filtering and export.
Streaming answers & typing indicators for snappier UX.
Better ingestion: progress bars, chunk-windowing, table/figure OCR, smarter image embeddings.
Reranking & query rewriting for tougher questions, plus guardrails for academic integrity.
Instructor tools: upload syllabi, tag topics/week, generate practice sets, track coverage.
Deploy: Vercel (frontend) + Fly.io/Render (service) + Supabase managed.
