About the Project — Intueri
Inspiration
Creating clear STEM animations takes hours. We wanted a way to turn plain-English explanations (e.g., “derive $\sin(x)$”) into clean visuals in minutes. That became *Intueri*: prompt → animation, optimized for accuracy and speed.
What it does
Intueri converts natural-language prompts into Manim animations and returns a rendered MP4.
- User enters a prompt (e.g., “integration of $\sin(x)$”).
- An LLM generates Manim (Python) scene code.
- A queue triggers a renderer that produces a video with LaTeX, graphs, and geometry.
- The user downloads/plays the MP4.
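The steps above can be sketched as a minimal in-process mock (illustrative only — the real system uses an LLM for code generation, Celery + Redis for the queue, and a containerized Manim renderer; all names below are examples):

```python
# Minimal in-process mock of the Intueri pipeline (illustrative only —
# the real system uses an LLM, Celery + Redis, and a Manim renderer).
import queue

jobs = {}                     # job_id -> {"status": ..., "output": ...}
render_queue = queue.Queue()  # stand-in for the Redis-backed queue

def submit(job_id: str, prompt: str) -> None:
    """Step 1: accept a prompt and enqueue it for rendering."""
    jobs[job_id] = {"status": "queued", "prompt": prompt, "output": None}
    render_queue.put(job_id)

def generate_scene_code(prompt: str) -> str:
    """Step 2 stand-in: the real system asks an LLM for Manim code."""
    return f"# Manim scene for: {prompt}\n"

def worker() -> None:
    """Steps 3-4: drain the queue, 'render', and record the MP4 path."""
    while not render_queue.empty():
        job_id = render_queue.get()
        jobs[job_id]["status"] = "rendering"
        _code = generate_scene_code(jobs[job_id]["prompt"])
        jobs[job_id]["output"] = f"/shared/{job_id}.mp4"  # stub render
        jobs[job_id]["status"] = "done"

submit("job-1", "integration of sin(x)")
worker()
print(jobs["job-1"]["status"], jobs["job-1"]["output"])
# prints: done /shared/job-1.mp4
```

The frontend polls the job's status field until it flips to "done", then fetches the MP4 at the recorded path.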
How we built it
- Frontend: Next.js/React (status polling, job queue UI).
- Backend: FastAPI (Python) exposing /api for job submit/status.
- Queue: Redis + Celery for async job processing.
- Renderer: Manim in Docker; outputs 1080p@60fps MP4s.
- Database: MongoDB (prompts, scripts, metadata).
- Infra: Docker Compose; deployable on GCP.
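A Compose layout for this stack might look roughly like the following (service names, build paths, and image tags are illustrative, not the project's actual file):

```yaml
# Illustrative Docker Compose sketch — names and paths are examples.
services:
  api:
    build: ./backend          # FastAPI app serving /api
    ports: ["8000:8000"]
    depends_on: [redis, mongo]
    volumes: [shared:/shared]
  worker:
    build: ./worker           # Celery worker running Manim
    depends_on: [redis]
    volumes: [shared:/shared] # renderer writes MP4s here
  redis:
    image: redis:7
  mongo:
    image: mongo:7
volumes:
  shared:                     # MP4 handoff between worker and api
```

The named volume is what lets the API container serve files the renderer container produced.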
Challenges we faced
- Sharing a /shared volume between worker and renderer containers.
- Making LLM-generated Manim code both syntactically valid and mathematically correct.
- CORS/config in local dev; real-time polling for render status.
- Debugging Manim scene errors and stabilizing resource usage.
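For the syntactic-validity challenge, a cheap pre-render sanity check is to parse the LLM's output and confirm it defines a Scene subclass before handing it to the renderer. A sketch (function name is ours; mathematical correctness still needs separate review):

```python
# Sketch of a pre-render sanity check for LLM-generated Manim code:
# parse it and confirm it defines at least one Scene subclass.
# (Function name is illustrative; this catches syntax errors only.)
import ast

def looks_like_manim_scene(source: str) -> bool:
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            for base in node.bases:
                if isinstance(base, ast.Name) and base.id == "Scene":
                    return True
    return False

good = (
    "from manim import Scene\n"
    "class Integral(Scene):\n"
    "    def construct(self):\n"
    "        pass\n"
)
bad = "class Integral(Scene:\n    pass"  # broken syntax from the LLM
print(looks_like_manim_scene(good), looks_like_manim_scene(bad))
# prints: True False
```

Rejecting bad code before it reaches the Docker renderer saves a full container round-trip per failure.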
Accomplishments we’re proud of
- Successful renders for multiple math prompts (e.g., $\int \sin(x)\,dx$).
- Clean, modular UI with live status.
- Solid containerized architecture that’s easy to scale.
- AI text-to-speech voiceovers and subtitles.
What we learned
- Manim scene composition (axes, LaTeX, transforms).
- Productionizing LLM output safely into executable code paths.
- Container orchestration with Docker Compose and Redis queues.
- Fast iteration under hackathon constraints.
What’s next
- Multi-language prompts and captions.
- Public “Prompt-to-Lecture” library of shareable scenes.
- Render scaling via cloud workers.
Built With
- docker
- gcp
- mongodb
- next.js
- python
- redis
- manim
- fastapi
