Feynman: The Visual Engine for Intuition

The Spark

I’ve always believed that the most important part of science is building intuition. If you can’t visualize a concept, you’re just memorizing names and formulas without understanding how they move. It’s why Richard Feynman invented his diagrams: he needed to see particle interactions before he could calculate them. I built Feynman because I wanted to give that same power to every student, for any subject.

Nevada currently ranks 48th in education, and I think that’s largely because we’ve replaced the joy of discovery with the chore of rote memorization. This tool is my way of fixing that.

What it does

Feynman is a generative engine that creates a "3Blue1Brown"-style experience on demand. You type in any math or CS topic, and instead of a wall of text, the platform generates a 3-5 minute, high-fidelity animated video in under two minutes. It takes abstract ideas, like the way a matrix transforms space, and makes them cinematic and narrated.

How I built it

I had to figure out how to bridge the gap between a language model and a rigorous animation engine.

The Brain: Groq-accelerated Llama 3 acts as both scriptwriter and "Manim Director," generating Python animation code on the fly.

The Voice: ElevenLabs provides the narration, keeping the tone conversational and pedagogical.

The Assembly: A Node.js backend drives the Manim render, while FFmpeg stitches the audio and visuals together in a synchronized pipeline.
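The final stitch can be done with a single FFmpeg invocation. Here is a minimal sketch (not the project's actual backend code), assuming Manim has produced a silent `scene.mp4` and ElevenLabs a `narration.mp3` — the file names are placeholders:

```python
import subprocess

def mux_cmd(video: str, audio: str, out: str) -> list[str]:
    # -c:v copy leaves the rendered frames untouched (no re-encode),
    # -c:a aac re-encodes the narration into an MP4-friendly codec,
    # -shortest trims to the shorter stream so the tracks end together.
    return ["ffmpeg", "-y", "-i", video, "-i", audio,
            "-c:v", "copy", "-c:a", "aac", "-shortest", out]

def stitch(video: str, audio: str, out: str) -> None:
    # check=True surfaces FFmpeg failures instead of silently continuing.
    subprocess.run(mux_cmd(video, audio, out), check=True)
```

Copying the video stream instead of re-encoding it keeps this step nearly instantaneous, which matters when the whole pipeline has to finish in minutes.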

The Challenges

The primary hurdle was orchestration. Getting three disparate services (Groq, ElevenLabs, and a local Manim environment) to communicate without adding latency was a massive undertaking.
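One way to hide cross-service latency is to overlap independent stages: the narration audio doesn't depend on the rendered frames, so the ElevenLabs call and the Manim render can run concurrently. A sketch with Python's `ThreadPoolExecutor` — the two stage functions here are stand-ins, not the real API calls:

```python
from concurrent.futures import ThreadPoolExecutor

def synthesize_narration(script_text: str) -> str:
    # Stand-in for the ElevenLabs request; returns a path to the audio.
    return "narration.mp3"

def render_animation(manim_code: str) -> str:
    # Stand-in for the Manim render; returns a path to the silent video.
    return "scene.mp4"

def produce(script_text: str, manim_code: str) -> tuple[str, str]:
    # Launch both I/O-bound stages at once instead of serially, so the
    # wall time is max(stage) rather than the sum of the stages.
    with ThreadPoolExecutor(max_workers=2) as pool:
        audio = pool.submit(synthesize_narration, script_text)
        video = pool.submit(render_animation, manim_code)
        return audio.result(), video.result()
```

Threads are enough here because both stages spend their time waiting on I/O (a network response and a subprocess), not on Python bytecode.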

Code Reliability: Llama 3 frequently "hallucinated" Manim syntax or overlapped text with graphics. I implemented strict system prompting and visual-layout constraints to ensure every video renders cleanly and stays mathematically accurate.
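One cheap guardrail for this class of failure (a sketch, not necessarily the project's exact check) is to parse the model's output with Python's `ast` module before handing it to Manim, and to trigger a regeneration when the code fails to parse or never defines a `Scene` subclass. The base-class check below is a naive name match:

```python
import ast

def looks_like_valid_manim(code: str) -> bool:
    """Cheap pre-flight check on LLM-generated Manim code."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False  # hallucinated or truncated Python; regenerate
    # Require at least one class that inherits from something named Scene.
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            for base in node.bases:
                if getattr(base, "id", None) == "Scene":
                    return True
    return False
```

Rejecting bad code before the render starts turns a multi-minute failed render into a sub-second retry of the LLM call.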

Rendering Speed: A standard Manim render can take minutes or even hours. I optimized the backend and cut overhead until a "render-on-demand" video completes in under 60 seconds.
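The writeup doesn't detail the exact optimizations, but Manim's own CLI exposes the biggest levers: a lower render quality and skipping the partial-movie cache. A hedged sketch of a speed-oriented invocation (whether the project used these specific flags is an assumption):

```python
def fast_render_cmd(script: str, scene: str) -> list[str]:
    # -ql renders at 480p/15fps, far faster than the -qh 1080p default;
    # --disable_caching skips Manim's partial-movie-file bookkeeping,
    # which only pays off for scenes that get re-rendered repeatedly,
    # not for one-shot on-demand videos.
    return ["manim", "-ql", "--disable_caching", script, scene]
```

The command list would then be passed to `subprocess.run` by the backend.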

What I learned

I learned that the gap between a "static" AI and an "intuitive" AI is the future of education. Building this taught me how to deal with the nuances of programmatic animation and the difficulty of synchronizing synthesized speech with dynamic visual math.
