Inspiration
We both got into IIT and CMU. When people ask how, the honest answer is not just hard work — it is that we had mentors who saw us clearly before we saw ourselves. A teacher who asked the right question. A peer group that modeled a different kind of ambition. Most people never get that. They make their biggest life decisions based on fear, expectation, and the consensus of whoever is around them. Dario Amodei calls this "herding behavior masquerading as maturity." We wanted to build the equalizer — the mentor that used to require proximity and privilege, available to anyone with 20 minutes and a browser.
What it does
Faculty is a psychologically grounded AI mentorship system. It interviews you the way a great therapist would — not "what do you like?" but "what did you give up that you still think about?" and "what do you envy in others?" After 18 questions, a 5-step agentic Claude pipeline extracts your psychological entities, matches you to one of 5 archetypes (The Quiet Fire, The Reluctant Pioneer, The Keeper of Forgotten Things, The Bridge, The Seeker), writes a personalized portrait using your exact words, and generates an interactive force-directed life map of your unlived creative self. Every node on the map can be tapped to summon one of 6 specialist mentor voices — The Artist, The Strategist, The Socratic, The Mirror, The Connector, The Pragmatist — each powered by a distinct Claude prompt with its own tone and purpose. A RAG chatbot lets you keep exploring with your session as context, grounded in indexed psychology and career research. Human counselors and parents stay in the loop via an upload console where they set the guidelines the agent follows.
How we built it
Flask backend proxying to the CMU Anthropic gateway. Five sequential Claude Sonnet 4 API calls form the core pipeline: entity extraction, archetype matching, portrait personalization, life graph generation, and counselor brief. The life graph renders in D3.js with a force simulation. Each node tap fires a `/api/faculty` call that selects one of the 6 system-prompted mentor voices and caches the response in the session file. A Milvus vector store handles RAG over indexed psychology research PDFs — Csikszentmihalyi, Gilovich, McAdams, Cacioppo and others — chunked and tagged by domain. Sessions persist as JSON on disk and are restorable from the intro screen. The emotional heatmap behind the questions is generated in real time by scanning each answer for emotional valence. The whole thing runs from a single `python server.py`.
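The sequential pipeline can be sketched as a simple chain where each stage's parsed JSON feeds the next. This is a minimal sketch, not the shipped code: `call_claude` is a hypothetical stand-in for the gateway request, and the step names are illustrative.

```python
import json

def call_claude(prompt: str) -> str:
    """Hypothetical stand-in for the CMU Anthropic gateway call.

    The real system POSTs to the gateway; here we stub a JSON reply
    so the chaining logic is runnable on its own.
    """
    return json.dumps({"step": prompt.split(":", 1)[0], "ok": True})

def run_pipeline(answers: list[str]) -> dict:
    """Run the five stages in order; each consumes the accumulated state."""
    state = {"answers": answers}
    for step in ("extract_entities", "match_archetype", "write_portrait",
                 "build_life_graph", "counselor_brief"):
        raw = call_claude(f"{step}: {json.dumps(state)}")
        state[step] = json.loads(raw)  # fail fast on malformed JSON
    return state
```

Because each call's output is folded back into the state passed to the next call, one malformed response stops the chain immediately instead of corrupting later stages silently.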
Challenges we ran into
Getting Claude to return valid JSON reliably across all five pipeline steps without preamble or markdown fences required careful prompt engineering and fallback validation at every step. The D3 force graph needed significant tuning to render the node hierarchy legibly across screen sizes. Keeping the 6 faculty voices tonally distinct — so The Socratic never gives answers and The Pragmatist never philosophizes — required iterative prompt work under time pressure. Milvus cold-start latency during the demo window was a real concern, so we built a graceful fallback to direct Claude if RAG is unavailable.
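The fence-and-preamble problem reduces to salvaging a JSON object from whatever the model returns. A minimal sketch of the kind of fallback validation described above (function name and fallback shape are our own):

```python
import json
import re

def parse_model_json(raw: str, fallback: dict) -> dict:
    """Strip markdown fences and preamble, then parse; return fallback on failure."""
    # Prefer a fenced ```json block if present, else the outermost {...} span.
    m = re.search(r"```(?:json)?\s*(\{.*\})\s*```", raw, re.DOTALL)
    if m:
        candidate = m.group(1)
    else:
        start, end = raw.find("{"), raw.rfind("}")
        candidate = raw[start:end + 1] if start != -1 and end > start else ""
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return fallback
```

Running every pipeline step's output through a salvager like this, with a step-appropriate fallback, keeps one bad response from breaking everything downstream.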
Accomplishments that we're proud of
The portrait step. When Claude takes the archetype template and rewrites it using the person's exact words from their answers, people recognize themselves in a way that generic AI output never achieves. That required prompt design that explicitly instructs Claude to ground every claim in a specific quoted answer — and it works. We are also proud of the human-in-the-loop architecture: the counselor RAG layer means the agent amplifies chosen humans rather than imposing the model's defaults, which is the right answer to the hardest ethical question this kind of tool raises.
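The grounding instruction can be sketched as prompt construction. The wording below is illustrative, not the exact prompt we shipped:

```python
def portrait_prompt(archetype_template: str, answers: dict[str, str]) -> str:
    """Build a portrait prompt that forces every claim to cite a quoted answer.

    Hypothetical sketch: real prompt wording and answer keys differ.
    """
    quoted = "\n".join(f'A{i}: "{a}"' for i, a in enumerate(answers.values(), 1))
    return (
        "Rewrite the archetype portrait below so that every claim is grounded "
        "in one of the user's quoted answers. Use their exact words where "
        "possible; never assert a detail that no answer supports.\n\n"
        f"PORTRAIT TEMPLATE:\n{archetype_template}\n\n"
        f"QUOTED ANSWERS:\n{quoted}"
    )
```

The key design choice is that the user's verbatim answers travel inside the prompt as quoted material, so the model has their exact words available to reuse rather than paraphrasing from memory of earlier turns.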
What we learned
Structured output prompting at scale is harder than it looks — five sequential Claude calls, each dependent on the last, means one malformed JSON response breaks everything downstream. We learned to validate and sanitize at every step rather than trust the model to be consistent. We also learned that the quality of the interview questions matters more than the quality of the generation — garbage in, generic out. The questions that work are the ones that create distance from the inner critic: "what do people come to you for, even without asking?" outperforms "what are you good at?" every time.
What's next for The Latent Map
Three things. First, the Longitudinal Agent: when a user returns for a second session, a diff agent compares new answers against session history, generates a "what has changed in you" paragraph, and updates the life graph — moving nodes from "unlived" to "in progress" when the evidence supports it. This turns a one-time experience into a relationship. Second, the Observer Agent: instead of one big extraction call at the end, a lightweight agent updates a running psychological model after each answer, so the archetype match by question 18 is confirmation rather than discovery. Third, a proper counselor dashboard — session list, thematic summaries, flagging, and the ability to schedule a follow-up prompt to the student. The architecture is already built for all three. We just need more than 90 minutes.
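The core of the Longitudinal Agent is a diff of node statuses between two sessions. A minimal sketch, assuming graph nodes are keyed by id with a status string such as "unlived" or "in_progress" (names are our own):

```python
def diff_life_graph(old: dict[str, str], new: dict[str, str]) -> dict:
    """Compare two sessions' life graphs.

    Returns which nodes changed status (e.g. 'unlived' -> 'in_progress')
    and which nodes appeared for the first time.
    """
    changed = {node: (old[node], new[node])
               for node in old if node in new and old[node] != new[node]}
    added = {node: status for node, status in new.items() if node not in old}
    return {"changed": changed, "added": added}
```

A "what has changed in you" paragraph would then be generated from `changed` and `added`, with the status promotions applied to the rendered graph only when the new answers actually support them.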
Built With
- claude
- python