Inspiration
Undergraduates trying to enter research face a structural problem that nothing on the market addresses. Core coursework consumes the first two years, leaving students unable to engage with field-specific work until they're nearly out of time to be useful to a lab. Self-teaching from research papers is unrealistic when the papers themselves take experienced researchers months to internalize. Existing tools don't close this gap: reading platforms like Semantic Scholar surface papers but don't tell you what you'd need to know to read them; course catalogs list classes but don't connect them to research; discovery tools like Connected Papers map citations but assume the user can already engage with the work. No existing platform takes an artifact (a paper, a lab, a topic) and produces a personalized, grounded path from where the student stands to where they want to be. That gap is what Lesearch fills.
What it does
Lesearch is a research navigation app for people trying to break into research without the access to mentors and professors that others have. Give Lesearch anything you want to understand (a paper, an abstract goal, or a research field) and it will run the input through a deconstructor that identifies the specific concepts you need, grounding them in a map of personalized learning modules, each with quizzes to confirm you actually understand the material. Alongside the maps, a persistent AI tutor loads your full learning history (concepts in progress, fields you're interested in, papers you've reviewed) into every conversation. It asks questions instead of handing out answers, so users think for themselves rather than offloading to the LLM. Lesearch also provides a platform for finding like-minded researchers, discussing and working on the same fields, and learning together so you don't have to do it alone. And it has real research context: UCLA faculty profiles scored for relevance to what you're learning, a social layer for finding peers in your field, and a feed of opportunities so you can move from researching alone to finding your community.
How we built it
Lesearch is a Next.js 16 app deployed on Vercel, with MongoDB Atlas for persistence and Clerk for authentication. The interesting parts are the AI infrastructure and the data grounding.
- Most LLM-powered learning tools let the model invent concepts freely, which means the same paper returns different prerequisite lists across runs and downstream systems can't track progress reliably. Lesearch hardcodes a closed catalog of ~15 subject areas with stable concept IDs (src/lib/concepts-catalog.ts) and forces Gemini 2.5 Flash to select prerequisites only from that catalog. Every concept ID flowing through the system — user progress, concept maps, deconstructor output, field readiness — references this catalog. This is what makes the maps consistent and progress meaningful.
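The closed-catalog idea can be sketched in a few lines. This is a minimal illustration, not the actual contents of src/lib/concepts-catalog.ts; the concept IDs and the validation step shown here are assumptions about how model output gets constrained to the catalog.

```typescript
// Illustrative closed catalog with stable concept IDs (entries are made up).
type Concept = { id: string; name: string; subject: string };

const CONCEPT_CATALOG: Concept[] = [
  { id: "lin-alg-eigen", name: "Eigenvalues & Eigenvectors", subject: "Linear Algebra" },
  { id: "prob-bayes", name: "Bayes' Theorem", subject: "Probability" },
  { id: "ml-backprop", name: "Backpropagation", subject: "Machine Learning" },
];

const VALID_IDS = new Set(CONCEPT_CATALOG.map((c) => c.id));

// Any ID the model invents is dropped; only catalog IDs flow downstream,
// which is what keeps progress tracking and concept maps consistent.
function filterToCatalog(modelOutput: string[]): string[] {
  return modelOutput.filter((id) => VALID_IDS.has(id));
}

filterToCatalog(["ml-backprop", "quantum-foo"]); // → ["ml-backprop"]
```

Because every downstream system (progress, maps, readiness) keys off the same finite ID set, the same paper deconstructs to the same prerequisites across runs.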
- Each concept's learn module is produced by an orchestrated three-agent flow: a Researcher agent gathers source material and produces a brief, an Architect agent (Gemini 2.5 Pro) structures the brief into sections and a check question, and an Evaluator agent grades free-response answers against a rubric with retry logic. The agents are registered on Agentverse and discoverable via ASI:One. When the agent VM is offline, the system gracefully falls back to a direct Gemini call, so the demo runs anywhere without infrastructure dependencies.
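The fallback behavior described above can be sketched as a simple try/catch around the pipeline. All function bodies here are illustrative stubs (the real Researcher/Architect/Evaluator agents run on Agentverse); only the shape of the pattern is the point.

```typescript
// A learn module as the Architect agent would emit it (shape assumed).
type Module = { sections: string[]; checkQuestion: string };

// Stub: simulates the Agentverse pipeline when the agent VM is offline.
async function runAgentPipeline(topic: string): Promise<Module> {
  throw new Error("agent VM unreachable");
}

// Stub: stands in for a single direct Gemini call used as the fallback.
async function directModelCall(topic: string): Promise<Module> {
  return { sections: [`Intro to ${topic}`], checkQuestion: `Explain ${topic}.` };
}

// Try the multi-agent flow first; degrade gracefully to one model call,
// so the demo runs without any infrastructure dependencies.
async function buildModule(topic: string): Promise<Module> {
  try {
    return await runAgentPipeline(topic);
  } catch {
    return directModelCall(topic);
  }
}
```

The caller never learns which path produced the module, which is what lets the same UI work whether or not the agents are reachable.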
- A personalized tutor with persistent memory: a separate ASI1-powered chat loads the user's full context — completed concepts, field interests, recently reviewed papers — into the system prompt on every message. Chat history is stored per-user in MongoDB so conversations persist across sessions. Any page in the app can open the tutor with a pre-filled prompt by dispatching a custom event, which is how features like "Analyze Professor" surface contextual guidance without coupling the tutor to specific pages.
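The event-based decoupling can be sketched as follows. In the app this would go through `window.dispatchEvent(new CustomEvent(...))`; here a minimal event bus stands in so the sketch runs anywhere, and the event payload shape and function names are assumptions.

```typescript
// Payload a page sends when it wants the tutor opened with a prompt.
type TutorPayload = { prompt: string };

const listeners: Array<(p: TutorPayload) => void> = [];

// The tutor component registers a listener once, on mount.
function onOpenTutor(fn: (p: TutorPayload) => void): void {
  listeners.push(fn);
}

// Any feature (e.g. "Analyze Professor") opens the tutor contextually
// without importing or otherwise depending on the tutor component.
function openTutor(prompt: string): void {
  for (const fn of listeners) fn({ prompt });
}

let lastPrompt = "";
onOpenTutor((p) => { lastPrompt = p.prompt; });
openTutor("Summarize this professor's recent work for me.");
// lastPrompt now holds the pre-filled tutor prompt
```

The key design property is one-way knowledge: pages know the event name and payload, but nothing about the tutor's implementation.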
- Real academic data, not seeded mock data. Papers are pulled live from the ArXiv API. UCLA faculty (12 professors, 153 papers) are seeded from Semantic Scholar. Field readiness percentages are computed in real time against the user's mastered concepts.
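The field-readiness computation is conceptually just the mastered share of a field's catalog concepts. A minimal sketch, assuming each field maps to a list of catalog concept IDs (the function name and IDs are illustrative):

```typescript
// Percentage of a field's required concepts the user has mastered,
// recomputed on demand rather than stored.
function fieldReadiness(fieldConceptIds: string[], mastered: Set<string>): number {
  if (fieldConceptIds.length === 0) return 0;
  const done = fieldConceptIds.filter((id) => mastered.has(id)).length;
  return Math.round((done / fieldConceptIds.length) * 100);
}

fieldReadiness(
  ["lin-alg", "prob", "ml-basics", "backprop"],
  new Set(["lin-alg", "prob"]),
); // → 50
```

Because the inputs are stable catalog IDs, the percentage stays meaningful as the user's progress record grows.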
Challenges we ran into
Because I was building this alone, I had to figure out the Fetch.ai uAgent setup by myself; my local network kept denying access for no apparent reason, and routing through Cloudflare eventually solved it, though it cost far too much time. I also kept leaking my API keys, which sent me into endless debugging loops until I realized the keys had been disabled. The hardest problem, though, was deciding what this platform should actually provide for students trying to break into research, given how many people offload their thinking to LLMs these days: designing a pipeline that pushes users to think deeply and critically about whatever they're learning. I was also overfetching, hammering external APIs without dedicated endpoints or caching on my routes.
Accomplishments that we're proud of / What we learned
- How to build a multi-agent system end to end: designing the agent contracts, registering on Agentverse, handling timeouts and fallbacks, and coordinating message flow between specialized agents.
- The depth of the gap in academic accessibility. The structural problems aren't just hard; they're nearly invisible to people already inside the system. Building a tool against a real, lived problem produced a product that felt different from the average EdTech demo.
- Forcing Gemini to select from a fixed catalog rather than generate freely was the single most impactful design decision in the project. Every downstream system became simpler because IDs were stable and finite.
What's next for Lesearch
I want to generalize Lesearch beyond UCLA, build out the social layer, refine module creation until it can independently take a student from knowing nothing to deep understanding, and validate the product with more undergrads and people in research.
Built With
- clerk
- cloudinary
- fetch.ai
- gemini
- google-cloud
- javascript
- mongodb
- next.js
- python
- react
- typescript
- uagents
- vercel