Inspiration
One of us was doomscrolling when he came across a reel about TRIBE v2, a framework that models how people emotionally and cognitively respond to content. Still scrolling, he started thinking: what pathways does this actually open up? What if you could catch a bad ad before it ever goes live? He already knew about MiroFish, a swarm-style humanoid simulation tool. And that's when it clicked: plug TRIBE v2's emotional model into MiroFish's simulation engine, and you can run a campaign past thousands of simulated humans before spending a rupee. No focus groups. No guesswork. Just data.
A trillion dollars gets spent on advertising every year. Only 0.2% of that, $1.8 billion, goes toward actually understanding whether those ads work at a neurological level. Not because marketers don't care, but because the tools are broken. A single fMRI neuromarketing study costs $50K to $200K, takes six months, scans maybe 30 people, and delivers results long after the campaign has already launched. Meanwhile, 90% of purchase decisions happen subconsciously, the exact thing focus groups and surveys can't measure. We saw a massive gap: the science exists to predict how brains respond to media, the simulation frameworks exist to model how populations behave, but nobody had connected them. We built Adneural to close that gap, making brain-level ad testing accessible to any brand, at any budget, in minutes instead of months.
What it does
Adneural is a three-engine platform that predicts how an advertisement will perform, neurologically and socially, before it ever goes live. You upload a video or audio ad, and the platform runs it through Meta's TRIBE v2 brain encoding model (trained on 450+ hours of real fMRI data) to predict second-by-second cortical activation across up to 148 brain regions. A video summarizer extracts scene-by-scene descriptions from the content, providing semantic context to every downstream agent.

That brain data then flows through two parallel tracks. The first is our Seeding Bridge, which translates cortical activation patterns into a 9-dimensional emotional state vector and modulates it against Big Five personality traits to create 200 individually distinct AI personas, each carrying a biologically grounded emotional response to the ad rather than a random or scripted one. The second is the Neuro-Translator, a diagnostic agent that queries a Neo4j knowledge graph built from 29 peer-reviewed neuroscience papers (2,684 findings, 9,921 relationships) to explain why the brain responded the way it did, using both the brain data and the video summary as context.

Those 200 emotionally seeded personas then enter a MiroFish/OASIS social simulation: a full synthetic social network where agents post, share, argue, and go quiet on a simulated Reddit, with every action timestamped. Finally, a three-agent LangGraph diagnostic pipeline (Social Analyst, Strategist, and Guardrail system) merges the neural and social findings into four audience-specific reports: Executive (go/no-go), Marketing (timestamped edit recommendations), Compliance (mental health impact scoring), and Full Research (complete citations).

We validated the platform against Jaguar's 2024 "Copy Nothing" rebrand. Our system predicted weak emotional tagging, no narrative processing, and a narrative vacuum vulnerable to hostile reframing, all of which matched the real-world outcome, without us ever telling the system what actually happened.
How we built it
The backend is Python with FastAPI, orchestrated by a LangGraph state machine that manages the full pipeline with conditional edges and retry loops. Brain encoding runs through Meta's TRIBE v2 model, which outputs cortical surface data (roughly 20,484 vertices mapped to the Destrieux atlas). A video summarizer pipeline extracts frames via ffmpeg, sends them to a vision LLM, and generates scene-by-scene descriptions that feed into the Neuro-Translator, Social Analyst, Strategist, and agent seeding prompts.

Our custom Bridge module aggregates cortical vertices into 8-10 functional brain networks (depending on analysis depth), computes a 9D emotional state vector (anxiety, trust, excitement, discomfort, memorability, etc.), and modulates each vector against Big Five personality distributions to generate 200 unique agent profiles across 6 archetypes (enthusiastic sharer, empathetic worrier, skeptical analyst, hostile critic, average viewer, emotional storyteller). Subcortical structures like the amygdala and hippocampus are inferred from cortical proxy patterns with explicit confidence tagging, and we document this honestly throughout.

The knowledge graph runs on Neo4j with built-in vector search (sentence-transformers, all-MiniLM-L6-v2) over 29 neuroscience papers for hybrid GraphRAG retrieval. Social simulation uses the MiroFish engine built on CAMEL-AI's OASIS framework.

The three diagnostic agents (Neuro-Translator, Social Analyst, Strategist) each return structured JSON, with a guardrail node that checks for blocked language, fear/anxiety escalation, and risk-decision consistency, retrying once if violations are found. All scoring (neural manipulation score, social contagion score, mental health impact score) is deterministic Python; LLMs are used only for narrative synthesis, never for numerical decisions.
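To make the Bridge concrete, here is a miniature sketch of the vertex-to-persona transformation. This is an illustrative simplification, not our production code: the network names, the emotion dimensions beyond the five named in the text, the projection weights, and the Big Five coefficients are all hypothetical stand-ins.

```python
import random

# 9D emotional state vector; dimensions after the first five are hypothetical.
EMOTION_DIMS = ["anxiety", "trust", "excitement", "discomfort", "memorability",
                "curiosity", "empathy", "confusion", "arousal"]

def aggregate_networks(vertex_activations, network_map):
    """Average per-vertex cortical activations into functional networks."""
    sums, counts = {}, {}
    for vertex, value in vertex_activations.items():
        net = network_map[vertex]
        sums[net] = sums.get(net, 0.0) + value
        counts[net] = counts.get(net, 0) + 1
    return {net: sums[net] / counts[net] for net in sums}

def emotional_state(networks, weights):
    """Project network activations onto the 9 emotion dimensions.
    `weights` maps (dimension, network) -> coefficient (hypothetical values)."""
    return {dim: sum(weights.get((dim, net), 0.0) * act
                     for net, act in networks.items())
            for dim in EMOTION_DIMS}

def seed_persona(base_state, big_five, rng):
    """Modulate the shared emotional state by one agent's Big Five traits.
    Illustrative rules: neuroticism amplifies anxiety/discomfort, etc.;
    small jitter keeps the 200 personas individually distinct."""
    persona = dict(base_state)
    persona["anxiety"] *= 1.0 + 0.5 * big_five["neuroticism"]
    persona["discomfort"] *= 1.0 + 0.3 * big_five["neuroticism"]
    persona["excitement"] *= 1.0 + 0.4 * big_five["extraversion"]
    persona["trust"] *= 1.0 + 0.3 * big_five["agreeableness"]
    return {k: max(0.0, min(1.0, v + rng.gauss(0, 0.02)))
            for k, v in persona.items()}
```

Sampling 200 Big Five profiles and calling `seed_persona` once per profile yields the archetype population described above, with every persona traceable back to the same underlying scan.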
The frontend is Next.js 15 with React 19, Three.js for 3D brain visualization (fsaverage5 GLB meshes), Motion for animations, and Tailwind CSS 4 with a custom design system (Geist display/body fonts, JetBrains Mono for data). The full pipeline runs in 45 to 120 seconds at roughly $0.05 per scan on free-tier LLMs. The LLM backend is model-agnostic through OpenRouter (currently running Qwen 3.6) and swappable via environment variables.
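The hybrid GraphRAG retrieval mentioned above can be sketched with in-memory stand-ins: a toy bag-of-words embedding in place of all-MiniLM-L6-v2 and a plain dict in place of Neo4j. The finding IDs and edge structure are hypothetical; the point is the two-stage pattern of vector search followed by one-hop graph expansion.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; stands in for all-MiniLM-L6-v2."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query, findings, edges, k=2):
    """Vector search for seed findings, then expand one hop along the
    graph's relationships (the 'hybrid' in hybrid GraphRAG)."""
    q = embed(query)
    ranked = sorted(findings, key=lambda fid: cosine(q, embed(findings[fid])),
                    reverse=True)
    seeds = ranked[:k]
    expanded = {nbr for fid in seeds for nbr in edges.get(fid, [])}
    return seeds, sorted(expanded - set(seeds))
```

In production the same shape holds, except similarity search runs against Neo4j's vector index and expansion is a Cypher traversal over the 9,921 extracted relationships.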
Challenges we ran into
The hardest problem was the bridge: figuring out how to translate cortical activation data into meaningful emotional states that could seed individual agent behavior. There's no existing literature that maps brain region activations directly to agent-based simulation parameters, so we had to build that translation layer from scratch, grounding every mapping in peer-reviewed neuroscience while making the output usable by a social simulation framework that was never designed to accept neurological input.

TRIBE v2 outputs cortical surface data only, with no subcortical structures, so we had to design a principled inference system for amygdala and hippocampus activity from cortical proxy patterns, being transparent about confidence levels rather than pretending we have direct measurements.

Getting the LangGraph agent pipeline to produce consistent, structured output across three agents with a guardrail retry loop required significant iteration on prompt engineering and state management. On the frontend, rendering real-time 3D brain visualizations with activation heatmaps overlaid on cortical meshes, while keeping the interface responsive, was a substantial Three.js engineering challenge. And building a Neo4j knowledge graph from 29 dense neuroscience papers, extracting 2,684 findings and 9,921 relationships into a queryable graph, was a research task unto itself.
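The shape of that subcortical inference can be sketched as follows. The proxy networks and weights here are hypothetical placeholders, not our calibrated mapping; what matters is that every inferred value carries an explicit confidence tag rather than masquerading as a direct measurement.

```python
# Hypothetical proxy mapping: cortical networks known to co-activate with a
# subcortical structure are combined into an inferred activation estimate.
SUBCORTICAL_PROXIES = {
    "amygdala": {"salience": 0.6, "ventral_attention": 0.4},
    "hippocampus": {"default_mode": 0.7, "limbic": 0.3},
}

def infer_subcortical(network_activations):
    """Weighted proxy estimate per structure, tagged with a confidence level."""
    inferred = {}
    for structure, proxies in SUBCORTICAL_PROXIES.items():
        covered = {n: w for n, w in proxies.items() if n in network_activations}
        if not covered:
            continue  # no proxy evidence at all: emit nothing, not a guess
        total = sum(covered.values())
        estimate = sum(network_activations[n] * w for n, w in covered.items()) / total
        # Confidence scales with how much of the proxy weight was observable.
        coverage = total / sum(proxies.values())
        confidence = "medium" if coverage == 1.0 else "low"
        inferred[structure] = {"activation": estimate,
                               "confidence": confidence,
                               "source": "cortical_proxy"}
    return inferred
```

Downstream agents receive the `confidence` and `source` fields alongside the estimate, so the reports can hedge exactly where the data does.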
Accomplishments that we're proud of
We're most proud of the bridge: the novel connection between computational neuroscience and multi-agent social simulation that, as far as we can find, nobody has built before. The fact that our platform predicted Jaguar's real-world outcome without any knowledge of what actually happened is the strongest validation we could ask for during a hackathon. We built a complete end-to-end pipeline, from raw video upload to four audience-specific diagnostic reports, that runs in under two minutes and costs five cents. The guardrail system is something we're genuinely proud of: every recommendation gets checked to ensure we're not suggesting manipulative tactics, which matters when you're building a tool that understands how brains respond to media. And the frontend (a full 3D brain viewer, a force-directed social network visualization, an interactive knowledge graph, and a multi-tab report dashboard) came together into something that looks and feels like a real product, not a hackathon prototype.
What we learned
Neuroscience papers are dense and often contradictory. Building a knowledge graph from them forced us to understand not just what the findings say, but how they relate to each other and where confidence levels diverge. We learned that the gap between "the AI generates a response" and "the AI generates a structured, consistent, verifiable response" is enormous. Deterministic scoring with LLMs only for narrative was a design decision that saved us from unreliable outputs. We gained deep familiarity with brain atlas systems (Destrieux parcellation), functional network organization, and the real limitations of fMRI prediction models. And we learned that the most compelling demo isn't showing the technology, it's showing the technology being right about something the audience already knows went wrong.
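That "deterministic checks, LLM only for narrative" split can be sketched as a guardrail-and-retry loop. The blocked phrases, field names, and rules below are illustrative, not our actual rule set; the design point is that pass/fail decisions are plain Python, with the LLM confined to generating the text being checked.

```python
BLOCKED_PHRASES = ["exploit fear", "target the vulnerable"]  # illustrative list
MAX_RETRIES = 1

def check_guardrails(report):
    """Deterministic checks; no LLM is involved in any pass/fail decision."""
    violations = []
    text = report.get("narrative", "").lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in text:
            violations.append(f"blocked language: {phrase!r}")
    # Flag recommendations that would escalate fear/anxiety beyond the scan.
    if report.get("recommended_anxiety", 0) > report.get("measured_anxiety", 0):
        violations.append("fear/anxiety escalation")
    # Risk level and go/no-go decision must be consistent.
    if report.get("risk") == "high" and report.get("decision") == "go":
        violations.append("risk-decision inconsistency")
    return violations

def run_with_guardrails(generate_report):
    """Call the (LLM-backed) generator; retry once if checks fail."""
    for attempt in range(MAX_RETRIES + 1):
        report = generate_report(attempt)
        violations = check_guardrails(report)
        if not violations:
            return report, []
    return report, violations  # surfaced to the user, never silently dropped
```

Because the checks are pure functions over structured JSON, they are unit-testable and identical on every run, which is what made the three-agent output verifiable at all.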
What's next for Adneural
The immediate next step is running TRIBE v2 on live video input at scale using cloud GPU compute. We want to expand the knowledge graph from 29 to 450+ papers and integrate real-time MiroFish simulation with larger agent populations. On the product side, we're building a SaaS platform with per-scan pricing, $49 per scan for agencies that currently pay $50K to $200K for a single neuromarketing study, giving us 90%+ gross margins on $2 to $5 of GPU compute per run. Beyond marketing agencies, the compliance reporting engine opens up pharmaceutical advertising (FDA scrutiny on direct-to-consumer drug ads) and social platform content auditing under emerging regulations like the EU AI Act. The long-term vision is to become the standard pre-launch check for any content that's designed to influence human behavior, not just ads, but political messaging, public health campaigns, and educational content, with the guardrail system ensuring the tool is used to make communication more effective, never more manipulative.
Why This Is the Next Big Startup
This is not just a hackathon project. It is a startup ready to launch. The neuromarketing market sits at $1.86 billion in 2026, projected to hit $3.82 billion by 2035 at 8.35% CAGR. But that number is small because the tools are broken: fMRI scanners, lab rentals at $800 an hour, six-month timelines, sample sizes of 30 people. That is not a market, it is a bottleneck. The real opportunity is the $1.17 trillion in global ad spend that goes out every year with zero neurological testing. We are not competing for existing neuromarketing budget. We are creating a new category.

A traditional neuromarketing study costs $50K to $200K. We deliver the same insights for $49 per scan. Our SaaS pricing: Starter at $49 per scan (basic report, 100 agents), Professional at $299/month for 10 scans with full scientific citations and a 200-agent simulation, and Enterprise at $2,499/month for unlimited scans with API access and white-labeling. Unit economics: marginal cost per scan is $2 to $5 in GPU compute, giving us 90%+ gross margins.

Our go-to-market is built into the product. We run retrospective analyses on famous ad failures, publish the results, and let the case studies sell. Jaguar is case study number one. Every future brand disaster becomes our top-of-funnel marketing.

What stops an incumbent like Nielsen from copying this? Nobody else combines brain prediction with social contagion modeling. Nielsen tells you how the brain reacts; we tell you what happens next, at population scale. And our pricing makes it impossible for them to follow: they cannot cannibalize their $50K lab business to compete at $49. We automated what used to require a neuroscience lab, six months, and six figures into a single upload that runs in under two minutes. That is not an incremental improvement. That is a category shift.