Inspiration
Taste emulation is one of the unsolved problems on the path to AGI. LLMs can reason, generate, and retrieve — but they can't yet reconstruct how a specific person actually thinks. Not their stated opinions. The pattern underneath: what they react to, where they contradict themselves, what they systematically misjudge, who shaped their worldview.
Real taste diverges from stated taste. Paul Graham has praised things that violate his own essays. Investors say they want bold ideas but fund safe ones. The gap between what people say they value and what they actually respond to is where judgment lives — and no system models that gap today.
I wanted to build one that does. And in true Arlan and Nozomio style, I did this hackathon solo.
What it does
TasteGraph indexes any public expert's taste across seven dimensions — stated vs actual values, taste drift over time, blind spots, influence graph, contextual variation, linguistic tells, and falsifiability — then judges anything you submit through their eyes.
The verdict comes back with cited evidence, contradiction callouts, blind spot warnings, context variance, and a confidence score computed from evidence, not LLM self-report.
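A confidence score "computed from evidence, not LLM self-report" can be sketched as a deterministic function of what retrieval actually found. The weights, saturation point, and inputs below are illustrative assumptions, not TasteGraph's actual formula:

```python
def confidence_score(citations: int, contradictions: int, coverage: float) -> float:
    """Deterministic confidence from retrieved evidence, not model self-report.

    citations: number of cited evidence snippets backing the verdict
    contradictions: evidence items that cut against the verdict
    coverage: fraction of the seven taste dimensions with any evidence (0..1)
    All weights here are illustrative assumptions, not the real formula.
    """
    base = min(citations / 10.0, 1.0)          # saturate at 10 citations
    penalty = min(contradictions * 0.1, 0.5)   # cap the contradiction penalty
    return round(max(base * coverage - penalty, 0.0), 2)
```

The point of a scheme like this is that thin evidence (few citations, low dimension coverage) mechanically drives the score down, so the system cannot talk itself into high confidence.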
How I built it
The core engine uses Nia's Oracle API for deep autonomous research — going far beyond surface-level web search to build a rich, structured profile. Each of the seven taste dimensions gets its own agent loop, where the agent thinks, searches the indexed profile, and iterates up to six turns before producing a grounded sub-verdict. All seven run in parallel. A synthesis layer merges them into a final verdict with a deterministic score breakdown.
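The per-dimension loop and parallel fan-out described above can be sketched roughly as follows. The function names and the stubbed retrieval call are hypothetical stand-ins (the real system queries Nia's indexed profile); only the shape — seven dimensions, a think-then-act loop capped at six turns, all running concurrently — comes from the description:

```python
import asyncio

DIMENSIONS = [
    "stated_vs_actual", "taste_drift", "blind_spots", "influence_graph",
    "contextual_variation", "linguistic_tells", "falsifiability",
]
MAX_TURNS = 6  # hard cap: the escape hatch that keeps agents from searching forever

async def search_profile(dimension: str, query: str) -> list[str]:
    # Hypothetical stand-in for a retrieval call against the indexed
    # profile. Returns evidence snippets for this dimension.
    await asyncio.sleep(0)
    return [f"evidence for {dimension}: {query}"]

async def run_dimension_agent(dimension: str, submission: str) -> dict:
    """Think-then-act loop: search, evaluate, decide whether to keep going."""
    evidence: list[str] = []
    for turn in range(MAX_TURNS):
        found = await search_profile(dimension, submission)
        if not found:       # nothing new: stop before hitting the turn cap
            break
        evidence.extend(found)
    return {"dimension": dimension, "evidence": evidence}

async def judge(submission: str) -> list[dict]:
    # All seven dimension agents run in parallel; a synthesis layer
    # would merge these sub-verdicts into a final scored verdict.
    return await asyncio.gather(
        *(run_dimension_agent(d, submission) for d in DIMENSIONS)
    )

verdicts = asyncio.run(judge("a pitch deck"))
```

Running the dimensions with `asyncio.gather` keeps total latency close to the slowest single agent rather than the sum of all seven.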
The frontend streams every event in real time — you watch the spider chart fill in, see what Nia is finding and indexing, and track each agent's reasoning as it works.
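A stream like this is typically newline-delimited JSON or server-sent events, with one tagged event per thing the UI renders. The event names and fields here are assumptions for illustration, not the actual protocol:

```python
import json
from typing import Iterator

def stream_events() -> Iterator[str]:
    """Yield newline-delimited JSON events for the frontend to render live.

    Event types and fields are illustrative assumptions: one event as Nia
    indexes a source, one per agent reasoning turn, one per dimension score
    (which fills in a spoke of the spider chart).
    """
    yield json.dumps({"type": "index", "source": "essay", "status": "indexed"})
    yield json.dumps({"type": "agent_turn", "dimension": "blind_spots", "turn": 1})
    yield json.dumps({"type": "score", "dimension": "blind_spots", "value": 0.7})
```

Tagging every event with a `type` and `dimension` lets the frontend route each one to the right widget — chart, indexing feed, or per-agent reasoning log — without waiting for the full verdict.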
What I learned
- Retrieval architecture matters more than prompt engineering. Getting the right evidence to the right agent at the right time is the hard problem.
- Agent loops need escape hatches. Agents that can search indefinitely will. The think-then-act cycle with a turn limit was the key design decision.
- The system is surprisingly good at self-awareness. When evidence is thin, it says so — and the computed confidence score reflects it honestly.
Challenges
The hardest part was the agent loop — making seven parallel agents produce grounded, evidence-backed results rather than hallucinating. Each agent needs to search, evaluate what it found, decide if it needs more, and know when to stop. Getting that cycle to reliably produce real results instead of going in circles or giving up too early was the core engineering challenge.
Built With
- ai