Inspiration
Neuromarketing and UX studies yield great insights, but they're expensive and slow, especially for early design decisions. Meta's FAIR team open-sourced TRIBE v2 on March 26, 2026, just a few weeks before this hackathon, and it seemed like a natural fit for the problem. The model, trained on over 500 hours of fMRI data from 700+ people, predicts high-resolution fMRI brain activity across roughly 70,000 voxels, so we figured we could use it to give teams a quick, cheap read on which designs actually grab attention before they invest in a full study.
What it does
Matter lets you upload images, run predictions powered by TRIBE v2, and compare design variants side by side. It scores each image on four engagement metrics: attentional capture, reward/value, processing demand, and encoding strength. It then maps the predicted activity onto a 3D brain visualization that shows which regions of the brain are lighting up for that frame.
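Under the hood, each engagement metric is scored by averaging the model's predicted voxel activity over a set of atlas regions associated with that metric. Here's a minimal sketch of that aggregation step; the region IDs and metric-to-region groupings below are hypothetical placeholders, not our actual HCP-MMP1 mapping:

```python
import numpy as np

# Hypothetical mapping: each metric is scored from a set of atlas region
# IDs (placeholders, not the real HCP-MMP1 assignments).
METRIC_REGIONS = {
    "attentional_capture": {1, 2},
    "reward_value":        {3},
    "processing_demand":   {4, 5},
    "encoding_strength":   {6},
}

def score_metrics(voxel_activity, voxel_region, metric_regions=METRIC_REGIONS):
    """Average predicted voxel activity over each metric's regions.

    voxel_activity: (n_voxels,) predicted fMRI response for one image
    voxel_region:   (n_voxels,) atlas region ID assigned to each voxel
    """
    scores = {}
    for metric, regions in metric_regions.items():
        mask = np.isin(voxel_region, list(regions))
        # Fall back to 0.0 if no voxels map to this metric's regions
        scores[metric] = float(voxel_activity[mask].mean()) if mask.any() else 0.0
    return scores
```

The same per-region averages also drive the 3D visualization, since each atlas region gets one color-coded intensity value.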
How we built it
The app is built with Next.js and uses Convex as the database. We deployed the TRIBE v2 model to a GPU on Modal, which gives us an API endpoint to hit for the cortical analysis. The Modal deployment is written in Python. Saliency was also built in Python and deployed as a separate Modal endpoint running on a CPU. The 3D brain visualization uses Three.js and an external dataset linked in the README that provides the atlas mappings for the visualization. Authentication is handled by Clerk.
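For a sense of how the pieces talk to each other: the web app sends the uploaded image to the Modal inference endpoint as a JSON request. The sketch below (in Python for consistency with our backend code) shows roughly what that request looks like; the endpoint URL and field names are placeholders defined by our own Modal function, not anything standard:

```python
import base64
import json

# Placeholder URL -- the real one is issued by Modal when the app is deployed.
TRIBE_ENDPOINT = "https://example--tribe-v2-predict.modal.run"

def build_request(image_bytes, image_id):
    """Package an uploaded image as the JSON request sent to the GPU endpoint.

    Field names ("image_id", "image_b64") are our own convention, chosen so
    the image survives JSON transport via base64.
    """
    return {
        "url": TRIBE_ENDPOINT,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "image_id": image_id,
            "image_b64": base64.b64encode(image_bytes).decode("ascii"),
        }),
    }
```

The endpoint responds with the predicted voxel activity, which the frontend then turns into metric scores and the 3D view.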
Challenges we ran into
TRIBE v2 had been public for less than three weeks when we started building, so there was virtually no community knowledge to lean on. Our biggest hurdle was deploying the model to a serverless GPU on Modal: getting the container image, model loading, and cold starts working reliably took real effort. Building a working saliency model with OpenCV also required a good amount of finagling to get useful visual attention overlays. On the neuroscience side, assigning the correct HCP-MMP1 atlas regions to the brain mapping and making sure our cortical regions of interest actually corresponded to the right engagement signals was a careful, iterative process.
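To illustrate the kind of saliency computation involved: our endpoint uses OpenCV, but the core idea can be shown self-contained in plain NumPy. This is a minimal sketch of the classic spectral-residual algorithm (Hou & Zhang, 2007), not our exact OpenCV pipeline:

```python
import numpy as np

def _box3(arr):
    """3x3 mean filter with edge padding."""
    p = np.pad(arr, 1, mode="edge")
    h, w = arr.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def spectral_residual_saliency(gray, eps=1e-8):
    """Saliency map for a 2-D grayscale image via spectral residual.

    The residual is the log-amplitude spectrum minus its local average;
    transforming it back with the original phase highlights 'unusual'
    (salient) image structure.
    """
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + eps)
    phase = np.angle(f)
    residual = log_amp - _box3(log_amp)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = _box3(sal)  # light smoothing
    return (sal - sal.min()) / (sal.max() - sal.min() + eps)  # normalize to [0, 1]
```

In the app, the normalized map is rendered as a heat overlay on the uploaded design so teams can see predicted visual attention at a glance.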
Accomplishments that we're proud of
We turned a research model into something people can actually use. Neuromarketing studies are expensive and slow, so being able to offer a similar type of insight through a web app feels like a step in the right direction. We also got the full pipeline working end-to-end in a hackathon timeframe, from image upload to GPU inference to interactive results.
What we learned
We spent a lot of time in Google Colab just figuring out how TRIBE v2's code worked, since the model had only been out for a few weeks and there was basically nothing online about it yet. For the brain-atlas mapping, we dug through repos and looked at how Meta handled the region mapping in their own work, then adapted that for our visualization and scoring. On the engineering side, we learned how to deploy a model to a serverless GPU on Modal and connect it to a web app.
What's next for Matter
The next step is collecting more verifiable data and running statistical validation against real neuromarketing study results. If the predictions prove statistically significant, we plan to distill the model and train it on our own dataset of fMRI scans tailored to a specific type of trial, moving from a general-purpose cortical model to one optimized for creative A/B testing. We'd also like to build out a collaboration layer so teams can share and annotate comparisons.