Inspiration
At Columbia, when students have a real question (“Where is the dining hall?”, “How do I appeal a waitlist?”, “Which professor should I take?”, “Why is my bill wrong?”), the usual options are messy:
Ask a friend / find an advisor (slow, not always available)
Search official websites (accurate but scattered across many pages)
Dig through Reddit / student forums (real experiences, but hard to filter and verify)
We built LionAgent to reduce that pain: one place where a student can ask a question and get a clear, structured answer grounded in both official sources and real student experiences.
What it does
LionAgent is a Columbia campus helper that answers student questions across:
Courses & planning (what to take, waitlist appeals, credit rules)
Billing (fees, refunds, disputes, common policy steps)
Housing (eligibility, lease questions, common issues)
Professor insights (what students say, what to expect, who might be a good fit)
Instead of giving a vague chatbot reply, LionAgent returns a structured response like:
Final answer
Checklist (what to do next)
Templates (email/appeal drafts when relevant)
Risks / edge cases
Citations (where the info came from)
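A minimal sketch of what that structured response might look like as a Python dataclass (the field names are illustrative, not necessarily the exact schema):

```python
from dataclasses import dataclass, field

@dataclass
class StructuredAnswer:
    """Illustrative shape of a LionAgent response."""
    final_answer: str
    checklist: list[str] = field(default_factory=list)   # what to do next
    templates: list[str] = field(default_factory=list)   # email/appeal drafts
    risks: list[str] = field(default_factory=list)       # edge cases to watch
    citations: list[str] = field(default_factory=list)   # where the info came from

# Example instance for a waitlist question
answer = StructuredAnswer(
    final_answer="Submit the waitlist appeal form before the add/drop deadline.",
    checklist=["Check your position on the waitlist", "Email the instructor"],
    citations=["registrar-policy-snapshot.html"],
)
```

Keeping the answer as typed fields (rather than one free-text blob) is what lets the UI render checklists and citations separately.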
How we built it
We combined three pieces:
Data collection & curation
We collected student discussions from Reddit and community sources
We added offline snapshots of official policy pages (so it still works even when links move or students can’t find the right page)
We chunked these documents into a local searchable knowledge base
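The chunking step can be sketched as a simple overlapping word-window splitter (the window and overlap sizes here are illustrative, not the actual parameters):

```python
def chunk_text(text: str, size: int = 300, overlap: int = 50) -> list[str]:
    """Split a document into overlapping word-window chunks for indexing.

    Overlap keeps a sentence that straddles a boundary retrievable
    from at least one chunk.
    """
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, max(len(words) - overlap, 1), step):
        chunks.append(" ".join(words[start:start + size]))
    return chunks
```

Each chunk then gets indexed in the local knowledge base, tagged with its source (official snapshot vs. community post).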
RAG + multi-agent pipeline
A Router decides the category and what to search
A Retriever/Evidence step pulls the most relevant passages
A Composer writes the structured response using those passages
A Critic checks that the answer is supported and flags missing evidence
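As a rough sketch of the control flow (plain functions stand in for the LLM-backed agents, and a hypothetical keyword table stands in for the Router's model call):

```python
# Hypothetical keyword → category table; the real Router is an LLM call.
CATEGORIES = {"bill": "billing", "refund": "billing",
              "waitlist": "courses", "housing": "housing",
              "professor": "professors"}

def route(question: str) -> str:
    """Router: pick a category to decide what to search."""
    for keyword, category in CATEGORIES.items():
        if keyword in question.lower():
            return category
    return "general"

def retrieve(question: str, corpus: dict[str, list[str]]) -> list[str]:
    """Retriever/Evidence: pull passages for the routed category."""
    return corpus.get(route(question), [])

def compose(question: str, passages: list[str]) -> str:
    """Composer: draft an answer from the retrieved passages."""
    return passages[0] if passages else "No evidence found."

def critique(answer: str, passages: list[str]) -> bool:
    """Critic: flag answers not backed by any retrieved passage."""
    return any(answer in p or p in answer for p in passages)

corpus = {"billing": ["Refunds are issued within 30 days of a dropped course."]}
passages = retrieve("How do I get a refund?", corpus)
draft = compose("How do I get a refund?", passages)
```

The point of the split is that each stage is individually checkable: a bad answer can be traced to routing, retrieval, composition, or a Critic miss.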
Product experience
A FastAPI backend exposes a simple /ask endpoint
A Streamlit frontend lets users ask questions and see results instantly
Challenges we ran into
Information quality mismatch: official pages are reliable but scattered; Reddit is rich but noisy. We had to design the system to balance both and avoid misleading answers.
Retrieval grounding: making sure the model doesn’t hallucinate and actually uses evidence.
Developer setup friction: local environment issues (ports, module imports, env vars, API auth) slowed iteration.
API differences: switching from a local LLM to a hosted reasoning model required careful request formatting and output handling.
Accomplishments that we're proud of
Built a working end-to-end flow:
Student question → retrieval → structured answer → UI
Unified official policy guidance + real student experiences into one search + response system
Designed outputs that are actually useful (checklists/templates), not just “chat”
What we learned
Students don’t need “more information”; they need the right next steps, with sources.
Multi-agent structure helps reliability: routing + retrieval + critique reduces random answers.
A good campus assistant must treat official sources and student voices differently:
Official sources = rules / procedures
Reddit/forums = lived experience / expectations / warnings
What's next for LionAgent
Improve professor/course matching:
more review sources (CULPA, RateMyProfessors, department evaluations)
filters by learning style (hard-but-learn-a-lot vs easy-A vs project-heavy)
Add stronger citations and confidence signals (official vs community tagging)
Expand to more campus workflows (financial aid, registration, GS policies, advising navigation)
Make data updating easier (scheduled imports + re-indexing pipeline)
Built With
- api
- pycharm
- python