🌟 Inspiration
The idea for this project came from a simple yet powerful question:
"What if anyone could create structured, high-quality educational content, just by typing a prompt?"
We saw how difficult and time-consuming it is for educators and learners to design curriculum, generate practice material, and create engaging visual content. Our goal: make AI a true co-pilot for education.
🛠️ What We Built
We created an AI-powered web platform that allows users to:
- Enter a learning prompt (e.g., "Teach me supervised learning").
- Automatically generate a main topic and subtopics, and for each subtopic:
  - Multiple-choice questions (MCQs)
  - (Upcoming) Structured reading content
- Generate animated explanatory videos using Manim for selected topics.
- Preview generated content via an interactive UI with:
  - Scrollable subtopic cards
  - Real-time selection feedback
  - Embedded video preview
The backend uses a multi-agent architecture to handle generation and rendering tasks modularly.
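For illustration, the structure generated for a prompt like "Teach me supervised learning" might be modeled along these lines. This is a hypothetical Python sketch of the data model only; the project's actual schema and backend language may differ:

```python
from dataclasses import dataclass, field

@dataclass
class MCQ:
    question: str
    options: list[str]
    answer_index: int  # index of the single correct option

@dataclass
class Subtopic:
    title: str
    mcqs: list[MCQ] = field(default_factory=list)

@dataclass
class Module:
    topic: str
    subtopics: list[Subtopic] = field(default_factory=list)

# What a generation agent might return for "Teach me supervised learning"
module = Module(
    topic="Supervised Learning",
    subtopics=[
        Subtopic(
            title="Classification vs. Regression",
            mcqs=[MCQ(
                question="Which task predicts a continuous value?",
                options=["Classification", "Regression", "Clustering"],
                answer_index=1,
            )],
        )
    ],
)
```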
💡 What We Learned
- How to break down unstructured user input into structured learning modules using LLMs.
- Building a seamless pipeline between frontend selections and backend content generation.
- Designing intuitive UX for educational tools (balancing structure with flexibility).
- Generating programmatic animation using Manim, integrating AI and visuals effectively.
- Using the Agent Development Kit (ADK) to build multi-agent systems and workflows.
- Integrating Gemini-based LLMs effectively.
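The first lesson above, breaking unstructured input into structured modules, can be sketched as a prompt-plus-validation step. The system prompt, JSON schema, and `parse_module` helper here are illustrative assumptions, and the LLM call is stubbed out with a canned response:

```python
import json

# Hypothetical system prompt asking the model for strictly structured JSON;
# the project's actual prompts are not shown here.
SYSTEM_PROMPT = (
    "Given a learning request, respond with JSON only: "
    '{"topic": str, "subtopics": [str, ...]}'
)

def parse_module(llm_response: str) -> dict:
    """Validate the model's JSON before it enters the content pipeline."""
    data = json.loads(llm_response)
    if not isinstance(data.get("topic"), str) or not data.get("subtopics"):
        raise ValueError("malformed module structure")
    return data

# Stubbed model output for "Teach me supervised learning"
response = '{"topic": "Supervised Learning", "subtopics": ["Linear Regression", "Decision Trees"]}'
module = parse_module(response)
```

Validating the JSON at the boundary keeps malformed model output from propagating into later generation stages.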
🧩 Challenges We Faced
- Balancing AI creativity with curriculum structure: Ensuring content was pedagogically sound, age-appropriate, and coherent.
- MCQ generation quality: Validating that each question had exactly one correct answer and meaningful explanations.
- Manim integration: Creating readable and engaging video animations without overloading the screen.
- Scalability: Designing a backend that could scale content generation and video rendering reliably.
- Learning and integrating the Agent Development Kit (ADK).
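The MCQ-quality challenge above comes down to a validation step: every question must have exactly one correct answer and an explanation per option. A minimal sketch, where the option schema is an assumption rather than the project's actual format:

```python
def validate_mcq(question: str, options: list[dict]) -> None:
    """Reject MCQs that don't have exactly one correct option.

    `options` is a list of {"text": str, "correct": bool, "explanation": str};
    this shape is illustrative, not the project's real schema.
    """
    correct = [o for o in options if o.get("correct")]
    if len(correct) != 1:
        raise ValueError(
            f"expected exactly 1 correct option, got {len(correct)}"
        )
    for o in options:
        if not o.get("explanation"):
            raise ValueError("every option needs an explanation")
```

A check like this can gate regeneration: if a generated question fails, the agent re-prompts the model instead of shipping a broken MCQ.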
✅ Accomplishments that we're proud of
- Architected and deployed a modular AI system with agent-based design
- Enabled secure video rendering using isolated subprocesses
- Built a complete content-to-video pipeline from scratch
- Created an intuitive, interactive UI for end users
- Delivered a working prototype that bridges education and generative AI
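The isolated-subprocess rendering mentioned above could look roughly like this. `build_render_command` and `render_scene` are illustrative names, the `manim` CLI flags are standard options, and real isolation would add sandboxing (containers, resource limits) beyond a plain subprocess:

```python
import subprocess
import tempfile
from pathlib import Path

def build_render_command(script: Path, scene_name: str,
                         quality: str = "-qm") -> list[str]:
    """Assemble the Manim CLI invocation (-qm = medium quality)."""
    return ["manim", quality, str(script), scene_name]

def render_scene(scene_code: str, scene_name: str,
                 timeout: int = 300) -> Path:
    """Write generated code to a scratch dir and render it in a subprocess,
    so a crash or hang in model-generated code can't take down the server."""
    workdir = Path(tempfile.mkdtemp())
    script = workdir / "scene.py"
    script.write_text(scene_code)
    subprocess.run(
        build_render_command(script, scene_name),
        cwd=workdir, timeout=timeout, check=True,
    )
    return workdir / "media"
```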
🧱 How We Built It
- Frontend: Built with Next.js, deployed on Firebase
- Authentication: Powered by Firebase Auth, with user IDs stored in our backend
- Backend: Developed in Spring Boot; uses ADK to route requests to agents
- Agents: Handle generation of topics, questions, and video content
- Video Rendering: ADK calls a Manim Running Agent, which sends generated code to a remote Manim server. The server runs the code in a subprocess, renders the video, and uploads it to Google Cloud Platform (GCP)
- Video Playback: The UI retrieves and plays the video using the returned job ID
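The job-ID playback step above can be sketched as a polling loop. `wait_for_video` and the status payload shape are assumptions (the project's actual API isn't shown); the status fetcher is injected so the sketch stays backend-agnostic:

```python
import time

def wait_for_video(job_id: str, fetch_status,
                   interval: float = 1.0, max_tries: int = 60) -> str:
    """Poll a status endpoint until the rendered video URL is ready.

    `fetch_status(job_id)` is an injected callable returning a dict like
    {"status": "pending" | "done", "url": str | None} -- an assumed shape.
    """
    for _ in range(max_tries):
        result = fetch_status(job_id)
        if result["status"] == "done":
            return result["url"]
        time.sleep(interval)
    raise TimeoutError(f"video for job {job_id} not ready")
```

Injecting the fetcher keeps the retry logic testable without a live rendering server.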
🚀 What's Next?
- Add user profiles and learning progress tracking
- Add support for different languages and accessibility features
- Improve video rendering speed and introduce voice-over support
- Generate longer-form videos by combining multiple clips, with support for video editing