Inspiration
We built CampusFundsAI for University of Maryland students - especially first-generation students and students whose families can't easily navigate the financial aid system - who are leaving real money on the table because the scholarship landscape is fragmented.
The problem isn't that scholarships don't exist. Between UMD institutional awards (Banneker/Key, President's, Dean's, Frederick Douglass), Maryland state programs run through MHEC (Guaranteed Access Grant, Educational Assistance Grant, Senatorial, Delegate, Teaching Fellows, foster care and homeless youth tuition waivers), and federal grants (Pell, FSEOG, TEACH), a single in-state undergraduate might qualify for more than a dozen stackable awards. The problem is that this information is scattered across a dozen different .edu and .gov pages, each with its own vocabulary, deadlines, and eligibility fine print. To find out what applies to them, a student has to read all of it. Most don't.
The result is that programs specifically designed for students who need help - foster youth, homeless youth, first-gen students, future Maryland teachers - routinely go underused because the students who qualify never learn they exist. We wanted to flip that.
What it does
CampusFundsAI reverses the scholarship search. Instead of a student sifting through dozens of pages, they enter their situation once - state, major, GPA, year, financial need, first-gen status, background - and Claude reads every scholarship's requirements against that profile and returns the top matches with a one-sentence explanation of why each one fits.
A Maryland CS sophomore with a 3.8 GPA and high financial need gets pointed to the Terrapin Commitment, the Banneker/Key Scholarship, and the Pell Grant. A former foster youth transferring from a community college gets a completely different list - the Maryland Tuition Waiver for Foster Care Recipients, the 2+2 Transfer Scholarship, the Frederick Douglass Scholarship. Same engine, same dataset, very different output, all driven by what each student actually needs.
How we built it
The whole project is a Python CLI in two files: engine.py (the matching logic) and scholarships.json (the data).
The engine. We use the Claude Sonnet 4.6 API via Anthropic's Python SDK. The engine sends Claude a system prompt defining its role as an expert financial aid advisor, the user's demographic profile, and the full JSON list of scholarships. Claude reads the free-text requirements of each scholarship against the user profile and returns the best matches as structured JSON - scholarship name, match percentage, and a one-sentence "why you match" explanation. The number of matches returned is dynamic: typically 3, but more whenever a student is a strong fit (80%+) for several scholarships at once, so highly eligible students aren't artificially capped.
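Concretely, the matching step is a single Messages API call. Here's a minimal sketch of what that looks like with the Anthropic Python SDK - the function name and prompt wording are illustrative, not the exact engine.py code:

```python
# Minimal sketch of the matching call. Prompt wording is paraphrased and
# find_matches is a hypothetical helper name, not the exact engine.py code.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an expert financial aid advisor. Compare the student's profile "
    "against each scholarship's requirements and return the best matches as "
    "a JSON array of {scholarship_name, match_percentage, why_you_match}."
)

def find_matches(profile: dict, scholarships: list[dict]) -> list[dict]:
    """One API call: profile + full scholarship list in, JSON matches out."""
    response = client.messages.create(
        model="claude-sonnet-4-6",  # substitute the current Sonnet model ID
        max_tokens=2048,
        system=SYSTEM_PROMPT,
        messages=[{
            "role": "user",
            "content": (
                "Student profile:\n" + json.dumps(profile, indent=2)
                + "\n\nScholarships:\n" + json.dumps(scholarships, indent=2)
            ),
        }],
    )
    # Defensive fence-stripping omitted here; see the parser sketch below.
    return json.loads(response.content[0].text)
```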
Using Claude for this instead of writing rule-based filters mattered because scholarship requirements are written in natural language, not code. "Son, daughter, or surviving spouse of a deceased or 100% disabled veteran" isn't easily expressible as a checkbox filter, but Claude reads it the way a human advisor would and applies it correctly. Same for criteria like "high-needs subject area" or "rural background" - fuzzy phrases that regex and if-statements get wrong.
The data. scholarships.json is not mocked. It contains 24 real scholarships pulled from official UMD Admissions pages, the Maryland Higher Education Commission, and studentaid.gov, totaling around $250,000 in representative annual funding. Every record includes a source_url field pointing back to the authoritative page so a student can always click through and verify.
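Each record follows roughly this shape - shown as a Python dict so the fields can be annotated. Only source_url is named above; the other field names and all the values here are placeholder guesses at the shape, not copied from the file:

```python
# Illustrative record shape only. All values are placeholders; the real
# entries are transcribed from the official pages. Only source_url is a
# confirmed field name.
example_record = {
    "name": "Example State Grant",
    "level": "state",                  # institutional | state | federal
    "amount": "varies by year",
    "requirements": "Maryland resident; demonstrated financial need; "
                    "enrolled at least half-time",
    "source_url": "https://mhec.maryland.gov/",  # the authoritative page
}
```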
Output discipline. We constrain Claude's output by giving it a strict JSON schema with an example in the system prompt, and we defensively strip markdown fences in case the model wraps its response. That makes the output parseable and directly displayable.
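The format block appended to the system prompt looks roughly like this. The wording is paraphrased and the example values are invented; of the key names, only match_percentage is confirmed above (the other two are guesses consistent with the description):

```python
# Paraphrased sketch of the output-format constraint in the system prompt.
FORMAT_INSTRUCTIONS = """
Respond with ONLY a JSON array. No markdown fences, no preamble.
Every element must use exactly these keys, as in this example:
[
  {
    "scholarship_name": "Pell Grant",
    "match_percentage": 85,
    "why_you_match": "Your high financial need meets the Pell Grant's core eligibility rule."
  }
]
"""
```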
Challenges we ran into
Hallucinated scholarships. An LLM asked to find scholarships could invent one that doesn't exist or cite requirements that aren't in the source. We solved this by keeping scholarships as structured data in a separate JSON file rather than asking Claude to generate them. Claude can only match against records we give it. Every record carries its official source_url, so any claim the tool makes is verifiable in one click.
Calibrating match percentages. Early versions of the prompt produced inflated scores - Claude would return "92% match" for scholarships the student barely qualified for. We fixed this by explicitly instructing Claude to base percentages on hard requirements actually met and on how specifically the scholarship targets the student's profile, not on raw enthusiasm. We also made the result count dynamic rather than padding or artificially capping: fewer than 3 results when fewer scholarships are a reasonable fit, more than 3 when a student strongly matches (80%+) several at once.
Forcing parseable output. LLMs love to wrap JSON in markdown code fences or add a friendly preamble. We addressed this by including a strict format example in the prompt and adding a defensive parser that strips fences before calling json.loads.
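The defensive parse is only a few lines. This is a sketch; the actual helper in engine.py may differ:

```python
# Sketch of a fence-stripping parser; the actual engine.py helper may differ.
import json

FENCE = "`" * 3  # a literal triple-backtick fence

def parse_matches(raw: str) -> list[dict]:
    """Strip markdown code fences Claude sometimes wraps around its JSON."""
    text = raw.strip()
    if text.startswith(FENCE):
        text = text.split("\n", 1)[1]    # drop the opening fence line
        text = text.rsplit(FENCE, 1)[0]  # drop the closing fence
    return json.loads(text.strip())
```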
Choosing real data over mock data in 90 minutes. The original guidance was to skip the scraper and use a synthetic dataset to save time. We tried that first, but the demo was a lot less compelling when none of the scholarship names were ones a UMD student would recognize. We pivoted and pulled real data from official UMD, MHEC, and federal pages, which made the matches actually meaningful - at the cost of more setup time.
Accomplishments that we're proud of
Real dataset, not mocked. Every scholarship in our database is a real program with an official source URL. A student who wants to verify can click through immediately.
End-to-end matching that works on day one. Run python engine.py with an ANTHROPIC_API_KEY set and a user profile, and you get back top matches with explanations. No setup beyond the SDK.
Coverage across all three funding tiers. UMD institutional aid, Maryland state programs, and federal grants are all represented in one matching call. A student doesn't need to know in advance which bucket a given program belongs to.
Auditability. The match percentage and the "why you match" sentence are always grounded in a specific requirement the user satisfies, so a student can see why a match was made, not just that it was made.
What we learned
Free-text requirements are natively LLM-shaped. Scholarship eligibility rules are full of phrases like "or equivalent," "preference given to," and "such as" that resist rule-based parsing. Claude reads them the way an advisor would. This isn't a generic "AI is the answer" claim - it's specifically that LLMs win when the input is human-language criteria, which is what scholarship pages are made of.
Schema-with-example beats schema-alone. Claude returned cleaner JSON when we gave it a worked example of the output format alongside the schema description. Without the example, key names drifted ("match" instead of "match_percentage").
Source URLs are a feature, not just a footnote. Including a source_url for every record turned the dataset into something a skeptic can audit. It also lets us refresh the data later without rewriting code.
Real data is worth the extra effort even in a sprint. A demo with familiar names (Banneker/Key, Pell Grant, Maryland Senatorial) lands very differently than one with invented ones.
What's next for CampusFundsAI
"Near-miss" feature. Alongside the top matches, surface a couple of "you'd qualify if…" scholarships with the one change (a GPA bump, a specific volunteer commitment, declaring a minor) that would unlock them. That turns the tool from a search engine into something closer to academic advising.
Deadlines and reminders. Match results should feed into a calendar with reminders, not just live as a one-time list a student forgets about by next week.
Reproducible match scores. A deterministic framework or formula for the match percentage instead of an LLM-assigned score, so the same profile always produces the same numbers - for instance, something like the sketch below.
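As a very rough illustration of what a deterministic score could look like - hypothetical code, nothing here is implemented, and the predicates, base score, and 60/40 weighting are all placeholder choices:

```python
# Hypothetical deterministic scorer. Nothing below exists in CampusFundsAI;
# the predicates, base score, and weights are placeholder choices.
from typing import Callable

Predicate = Callable[[dict], bool]

def rule_based_match(profile: dict,
                     hard_reqs: list[Predicate],
                     soft_prefs: list[Predicate]) -> int:
    """0 if any hard requirement fails; otherwise a base score plus
    proportional credit for each soft preference satisfied."""
    if not all(req(profile) for req in hard_reqs):
        return 0
    if not soft_prefs:
        return 100
    hit = sum(pref(profile) for pref in soft_prefs)
    return 60 + 40 * hit // len(soft_prefs)

# Example with two hypothetical hard requirements and one soft preference.
reqs = [lambda p: p["state"] == "MD", lambda p: p["gpa"] >= 3.0]
prefs = [lambda p: p.get("first_gen", False)]
print(rule_based_match({"state": "MD", "gpa": 3.8, "first_gen": True},
                       reqs, prefs))  # -> 100
```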
Built With
- python
- streamlit