NeuroGraph LIVE – Project Story
The Learning Experience
NeuroGraph LIVE is designed to feel less like using an app and more like learning with a teacher who understands what you are studying in real time.
Students can access the platform through mobile devices or VR (Google Cardboard) and explore a dynamic 3D knowledge graph where concepts appear as nodes connected to related ideas. Instead of reading static pages, learners can visually navigate through ideas and understand how topics relate to each other.
Students interact with the system using natural voice conversation. The AI tutor listens, understands the context of the question, and responds instantly with explanations. As the interaction continues, the system builds and expands a live knowledge graph, helping learners see how concepts connect rather than memorizing isolated definitions.
To support deeper understanding, NeuroGraph LIVE can generate:
- Concept videos explaining difficult topics visually
- Mind maps summarizing relationships between ideas
- Interactive knowledge graphs that grow during learning
- Real-time explanations from a live AI tutor
This creates a learning experience where students can ask questions, explore ideas, and visualize relationships instantly, much like learning with a personal teacher guiding them step by step.
The goal is simple: help students connect the dots between concepts so learning becomes intuitive instead of overwhelming.
The Problem We Are Solving
Modern learning is highly fragmented. Students trying to understand a topic often jump between textbooks, YouTube videos, blog posts, and lecture slides. Each resource explains only a small part of the idea, and the connections between concepts are rarely shown clearly.
For example, when learning about neural networks, students encounter terms like gradient descent, activation functions, optimization, and backpropagation. Each concept is explained separately across different sources, forcing learners to piece together the relationships themselves.
This leads to a major issue in education: knowledge fragmentation. Students collect information but struggle to build a mental model of how ideas connect.
NeuroGraph LIVE aims to solve this by transforming learning from isolated explanations into a connected knowledge experience.
Inspiration
The inspiration behind NeuroGraph LIVE comes from how the human brain organizes knowledge. Our brains do not store information as separate documents or chapters. Instead, ideas are linked together like a network of neurons.
When we learn something new, we naturally connect it with what we already know.
However, most educational tools today present knowledge linearly — page by page or slide by slide — hiding the relationships between ideas.
This led us to ask a simple question:
What if learning worked like a knowledge network instead of a textbook?
Instead of scrolling through content, students could explore a living graph of ideas, where connections are visible and interactive. An AI tutor could also expand this graph dynamically based on what the student is studying.
This concept became the foundation of NeuroGraph LIVE.
How We Built It
NeuroGraph LIVE is built as a real-time multimodal learning platform combining spatial visualization, AI tutoring, and immersive interaction.
At the center of the system is an interactive 3D knowledge graph, where nodes represent concepts and edges represent relationships between them. This allows students to explore subjects spatially instead of navigating static text.
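The graph can be pictured with a minimal plain-Python stand-in (the production system lists networkx among its dependencies; the class, method, and concept names below are illustrative):

```python
from collections import deque

class ConceptGraph:
    """Concepts as nodes, labeled relationships as undirected edges."""

    def __init__(self):
        self.edges = {}  # concept -> {neighbor: relationship label}

    def relate(self, a, b, relationship):
        self.edges.setdefault(a, {})[b] = relationship
        self.edges.setdefault(b, {})[a] = relationship

    def neighbors(self, concept):
        return sorted(self.edges.get(concept, {}))

    def learning_path(self, start, goal):
        """Breadth-first search: the shortest chain of related concepts."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], {}):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

g = ConceptGraph()
g.relate("neural networks", "backpropagation", "trained via")
g.relate("backpropagation", "gradient descent", "uses")
g.relate("gradient descent", "optimization", "instance of")
print(g.learning_path("neural networks", "optimization"))
# → ['neural networks', 'backpropagation', 'gradient descent', 'optimization']
```

The same structure answers "what connects to this?" (neighbors) and "how do I get from here to there?" (learning path), which is what spatial exploration exposes visually.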
The AI tutor is powered by Gemini 2.5 Flash through the Multimodal Live API, enabling the system to process voice input and visual context simultaneously. A student can speak to the tutor while showing a textbook page or diagram through their camera, and the system interprets both streams together.
The backend is built using FastAPI, acting as a multimodal gateway that connects the user interface with AI services and manages real-time streaming interactions.
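One way to picture the gateway's job is the message framing between client and backend. This is a hypothetical schema, not the actual wire protocol: binary payloads (audio chunks, camera frames) are base64-wrapped so everything travels as JSON text frames over the WebSocket. All field names here are assumptions.

```python
import base64
import json

def frame_message(kind, payload):
    """Wrap one client event (audio chunk, camera frame, or text) for the
    WebSocket stream. Bytes are base64-encoded so the frame stays JSON.
    The "type"/"data" field names are illustrative, not the real protocol."""
    if isinstance(payload, bytes):
        payload = base64.b64encode(payload).decode("ascii")
    return json.dumps({"type": kind, "data": payload})

def parse_message(raw):
    """Inverse of frame_message: decode binary payload kinds back to bytes."""
    msg = json.loads(raw)
    if msg["type"] in ("audio", "camera_frame"):
        msg["data"] = base64.b64decode(msg["data"])
    return msg

# A camera frame round-trips through the gateway framing unchanged:
wire = frame_message("camera_frame", b"\x89PNG...")
assert parse_message(wire)["data"] == b"\x89PNG..."
```

A gateway like this would multiplex both streams toward the model session and fan responses back to the client.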
The frontend uses React and Three.js to render the 3D knowledge graph and enable smooth spatial exploration of concepts.
For visual explanations, the system includes a dynamic video teaching engine. When a student asks for a concept explanation, Gemini 2.5 Pro generates a structured Manim animation script that visually demonstrates the concept. The animation is rendered, processed with FFmpeg, and delivered to the student as a generated explanation video.
The infrastructure runs on Google Cloud, using services such as Cloud Run, Firebase Hosting, Vertex AI, and Cloud Storage.
Dynamic Video Teaching System
One of the most powerful features of NeuroGraph LIVE is its on-demand educational video generation system.
When a student asks a question that requires visual explanation, the AI tutor triggers a tool to generate an animation. The backend sends the request to Gemini 2.5 Pro, which produces a structured Python script for Manim, a mathematical animation engine.
The script is executed in a secure environment where Manim renders the animation frame by frame into a video. The resulting video is stored and streamed to the learner, allowing the AI tutor to narrate and explain the concept alongside the animation.
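The render-and-deliver step can be sketched as two shell commands the worker would assemble. The Manim flags are illustrative assumptions; the FFmpeg remux copies streams without re-encoding and moves the index to the front of the file so playback can begin before the download finishes. Here we only build the commands rather than execute them:

```python
def manim_render_cmd(script_path, scene_name, quality="-qm"):
    """Command to render a generated Manim scene (flags illustrative)."""
    return ["manim", quality, script_path, scene_name]

def ffmpeg_web_ready_cmd(src, dst):
    """Remux for streaming: copy streams as-is and move the moov atom to
    the start of the file (+faststart) so the browser can play it early."""
    return ["ffmpeg", "-i", src, "-c", "copy", "-movflags", "+faststart", dst]

# In the real pipeline these would run via subprocess inside a sandboxed
# worker, with the result uploaded to Cloud Storage afterwards.
print(manim_render_cmd("generated_scene.py", "GradientDescentScene"))
print(ffmpeg_web_ready_cmd("scene.mp4", "lesson.mp4"))
```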
Unlike traditional learning videos that are pre-recorded, these explanations are generated dynamically based on the student's question, making them personalized and contextual.
This approach allows the system to function like a teacher who can draw animated diagrams instantly whenever a student asks a question.
Impact and Future Vision
NeuroGraph LIVE introduces a new way of interacting with knowledge.
Instead of navigating disconnected information sources, students explore a living knowledge network that grows as they learn. The combination of voice interaction, visual graphs, and AI-generated explanations transforms learning into an exploratory experience.
In the future, we envision deeper integration with AI research tools similar to Notebook-style learning environments, where students can upload notes, documents, or textbooks and automatically generate interactive knowledge graphs and mind maps.
Students will be able to interact directly with these graphs, asking questions like:
- How are these two concepts related?
- What prerequisite ideas connect to this topic?
- Show the learning path between these subjects.
This will allow learners to move beyond structured summaries and instead explore connected learning paths across topics.
Another important direction is AI-generated educational videos at scale. Instead of relying on large libraries of static videos, students will be able to generate custom visual explanations for any concept instantly. This could sharply reduce the cost and turnaround of video creation for large learning platforms, where millions of students ask similar conceptual questions.
NeuroGraph LIVE can also be integrated into schools, universities, and digital classrooms, where teachers can generate live visual explanations, graphs, and mind maps while teaching.
Our long-term vision is to build a global interactive knowledge network, where students learn by exploring connected ideas rather than memorizing isolated information.
What We Learned
Building NeuroGraph LIVE showed us how powerful multimodal AI interaction can be for education.
Low-latency responses make AI tutoring feel conversational. Visual context allows the tutor to understand exactly what a student is studying.
Most importantly, knowledge graphs make relationships visible, turning studying into exploration rather than memorization.
Challenges We Faced
Some of the major challenges included:
- Managing real-time multimodal streaming
- Rendering AI-generated animations reliably
- Designing semantic clustering for knowledge graphs
- Integrating multiple Google Cloud services efficiently
Solving these required combining AI reasoning, distributed systems, and real-time rendering pipelines.
What Makes NeuroGraph LIVE Different
NeuroGraph LIVE is not just a chatbot.
It is an interactive knowledge environment where students explore ideas visually, interact with an AI tutor, and dynamically generate explanations.
Instead of consuming disconnected content, learners navigate a living map of knowledge.
Our mission is to transform education from static content consumption into interactive exploration of ideas.
System Architecture
NeuroGraph LIVE follows a hub-and-spoke architecture where a real-time backend acts as the multimodal hub connecting the interface, AI models, and cloud services.
The frontend, built with React and Three.js, runs on mobile devices and VR. It renders the interactive 3D knowledge graph, captures voice input and camera frames, and streams them to the backend.
The backend, built with FastAPI, runs on Google Cloud Run and manages real-time communication with Gemini 2.5 Flash (Multimodal Live API) for AI tutoring.
For deeper reasoning and visual explanations, the system invokes Gemini 2.5 Pro, which generates animation scripts for Manim.
The video engine renders animations using Manim, processes them with FFmpeg, and stores them in Google Cloud Storage for delivery or caching.
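Because many students ask near-identical questions, cached videos can be addressed by a deterministic object name. This is one plausible scheme, not the system's actual naming convention; the normalization and key layout are assumptions.

```python
import hashlib

def video_cache_key(concept, question, model="gemini-2.5-pro"):
    """Deterministic Cloud Storage object name for a generated explanation
    video, so a repeated question is served from cache instead of being
    re-rendered. Normalization and naming scheme are illustrative."""
    normalized = " ".join(question.lower().split())
    digest = hashlib.sha256(
        f"{model}|{concept}|{normalized}".encode()
    ).hexdigest()
    return f"videos/{concept}/{digest[:16]}.mp4"

# Whitespace and casing differences map to the same cached video:
assert video_cache_key("gradient-descent", "What is   gradient descent?") == \
       video_cache_key("gradient-descent", "what is gradient descent?")
```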
Concept relationships are generated using Vertex AI embeddings, allowing the system to cluster and connect ideas within the knowledge graph.
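The linking step can be sketched as threshold-based similarity over embedding vectors. The three-dimensional vectors below are toy stand-ins for real Vertex AI embeddings, and the threshold value is an assumption:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def link_concepts(embeddings, threshold=0.8):
    """Connect concept pairs whose embedding similarity clears the
    threshold; these pairs become edges in the knowledge graph."""
    names = list(embeddings)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(embeddings[a], embeddings[b]) >= threshold:
                edges.append((a, b))
    return edges

emb = {
    "gradient descent": [0.9, 0.1, 0.1],  # toy vectors, not real embeddings
    "optimization":     [0.8, 0.2, 0.1],
    "poetry":           [0.0, 0.1, 0.9],
}
print(link_concepts(emb))
# → [('gradient descent', 'optimization')]
```

Related concepts cluster together while unrelated ones stay disconnected, which is exactly the structure the 3D graph visualizes.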
Together, these components enable low-latency multimodal learning, where students can speak, explore knowledge graphs, and generate visual explanations in real time.
Built With
- artifactregistry
- chromadb
- cloudbuild
- cloudrun
- cloudstorage
- d3.js
- docker
- fastapi
- gemini-2.5-flash
- gemini-2.5-pro
- google-genai-python-sdk
- networkx
- react.js
- three.js
- vertex-ai-vector-search
- web-audio-api
- websockets