Inspiration

Neuroscience has historically been a "black box" accessible only to those with multi-million dollar fMRI machines and advanced degrees. With the release of Meta’s TRIBE v2 brain encoding model, we saw an opportunity to bridge the gap between cutting-edge research and practical application. We wanted to answer a fundamental human question: How does this media actually land in the human brain? Resonance was born from the desire to give creators, researchers, and curious minds a "Virtual fMRI" that turns complex neurological data into instant, actionable insights.

What it does

Resonance allows users to upload any form of media (video, audio, text, or images) and see a predicted 3D brain activation map in seconds.

  • 3D Visualization: Renders a high-fidelity 20,484-vertex brain mesh (fsaverage5) directly in the browser.
  • Claude-Powered Insights: Uses Claude 4.6 Sonnet to interpret raw voxel activations into plain-English explanations of which cognitive functions (like emotional processing or visual attention) are being triggered.
  • A/B Testing: A dedicated mode to compare two different stimuli (e.g., two ad copies or two podcast intros) to see which one creates a stronger "resonance" in specific brain regions.
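
The A/B comparison boils down to a per-region difference between two predicted activation maps. A minimal sketch of that idea, with hypothetical names (`compare_stimuli`, the region labels) since the app's actual implementation is not shown here:

```python
import numpy as np

def compare_stimuli(act_a: np.ndarray, act_b: np.ndarray,
                    region_labels: np.ndarray) -> dict:
    """Per-region mean activation difference (B minus A) across the
    surface vertices; positive values mean stimulus B 'resonates' more."""
    deltas = {}
    for region in np.unique(region_labels):
        mask = region_labels == region
        deltas[str(region)] = float(act_b[mask].mean() - act_a[mask].mean())
    return deltas

# Toy example: fake activations over 6 vertices in 3 labeled regions.
labels = np.array(["V1", "V1", "A1", "A1", "PFC", "PFC"])
a = np.array([0.1, 0.2, 0.5, 0.4, 0.0, 0.1])
b = np.array([0.3, 0.4, 0.5, 0.4, 0.2, 0.3])
deltas = compare_stimuli(a, b, labels)
```

In the real app the same comparison would run over the full 20,484-vertex maps, grouped by an anatomical parcellation.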

How we built it

We built a sophisticated pipeline that bridges heavy-duty machine learning with a sleek, responsive frontend:

  • The Brain: Meta’s TRIBE v2 (Llama 3.2-3B based) serves as our encoding engine, deployed as a GPU endpoint on Modal using L4 instances.
  • The Interpreter: Claude 4.6 Sonnet acts as our resident neuroscientist, taking top-activated region data and context to provide descriptions.
  • The UI: A "Scientific Instrument" aesthetic built with React 19, TypeScript, and Tailwind CSS. We used Three.js with custom shaders to map activation intensities to a heat map.
  • The Backend: A FastAPI (Python 3.11) server manages media preprocessing via FFmpeg and coordinates the flow between the inference engine and the LLM.
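
The backend flow described above (FFmpeg preprocessing, TRIBE v2 on Modal, Claude interpretation) can be sketched as a simple orchestration function. All helper names here are stand-ins for the external services, so the sketch runs on its own; the real server wraps this in a FastAPI route:

```python
import hashlib

N_VERTICES = 20484  # fsaverage5 surface

def preprocess(media: bytes) -> bytes:
    # Real code shells out to FFmpeg (extract frames, resample audio).
    return media

def run_tribe(features: bytes) -> list:
    # Real code POSTs to the Modal L4 endpoint serving TRIBE v2.
    # A deterministic fake keeps this sketch self-contained.
    seed = int.from_bytes(hashlib.sha256(features).digest()[:4], "big")
    return [((seed + i) % 1000) / 1000 for i in range(N_VERTICES)]

def interpret(activations: list) -> str:
    # Real code sends the top-activated regions plus context to Claude
    # for a plain-English explanation; a stub summary stands in here.
    top = max(range(len(activations)), key=activations.__getitem__)
    return f"strongest predicted response at vertex {top}"

def analyze(media: bytes) -> dict:
    activations = run_tribe(preprocess(media))
    return {"n_vertices": len(activations), "summary": interpret(activations)}
```

One activation value per fsaverage5 vertex comes back from the model, and the interpreter only ever sees data derived from that prediction.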

Challenges we ran into

One of the steepest hurdles was the roughly 2-minute cold start for the TRIBE v2 model on Modal, compounded by the cost of keeping a GPU warm. To speed things up while keeping costs down, we implemented a Redis cache keyed by a hash of the uploaded media, storing the predicted brain-vertex results so repeat analyses skip inference entirely. We also wrestled with mapping 20,484 data points onto 3D geometry in real time without sacrificing frame rates. Finally, we had to implement strict prompt-engineering patterns in our claude_interpreter.py to ensure the AI stayed grounded in the actual predicted data rather than hallucinating generic neurological facts.

Accomplishments that we're proud of

We are incredibly proud of our Design System. By ditching the standard "SaaS" look for a clinical, monospace-heavy interface (using JetBrains Mono and zero rounded corners), we created a tool that feels like a professional laboratory instrument. We also successfully integrated a 3-billion-parameter transformer model into a seamless web workflow that can process diverse media types, from a simple text sentence to a high-def movie trailer, and provide a unified brain response.

What we learned

This project was a huge lesson in Brain Encoding Models (BEMs). We learned how transformers can be trained to predict fMRI BOLD signals and how to translate those signals into the fsaverage5 surface space. On the engineering side, we gained deep experience in orchestrating serverless GPU deployments and managing complex 3D state in React. We also learned that a strict, opinionated design system can significantly enhance the "vibe" and perceived authority of a technical tool.
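
The "translate signals into the surface space" step ultimately means normalizing per-vertex activations and mapping them to colors. In the app that happens on the GPU in a custom shader; here is the same ramp sketched in Python, with the blue-to-red palette being an illustrative assumption rather than the app's exact colormap:

```python
def activation_to_rgb(values, lo=None, hi=None):
    """Normalize per-vertex activations to [0, 1] and map each to a
    simple cold-blue -> hot-red heat ramp as an (r, g, b) tuple."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    span = (hi - lo) or 1.0  # avoid division by zero on flat maps
    colors = []
    for v in values:
        t = max(0.0, min(1.0, (v - lo) / span))  # clamp to [0, 1]
        colors.append((t, 0.0, 1.0 - t))
    return colors

# The least-activated vertex renders blue, the most-activated red.
corners = activation_to_rgb([0.0, 0.5, 1.0])
```

Fixing `lo` and `hi` across frames (rather than re-deriving them per map) is what keeps colors comparable in a side-by-side A/B view.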

What's next for resonance-app

The roadmap for Resonance includes:

  • Real-time "Live Stream" Mode: Processing video frames in real-time to show a dynamic, updating brain map as you watch.
  • Customization: Easy ways to adjust how activation is presented, including the heatmap styling and the angle from which the brain is rendered.
  • Specific Tuning: Letting users filter down to individual parts of the brain, e.g., specific lobes.
  • Exportable Reports: One-click PDF generation of neuro-analysis findings, complete with high-res 3D captures and statistical summaries for marketing teams.
