Inspiration

The idea for Concept Graph AI came from a very real and common frustration: getting lost while learning. Whether it is a 100-page textbook, dense documentation, or a complex syllabus, students often struggle not because they are not trying, but because they cannot see what actually matters and how everything connects. You read more but understand less.

We noticed something important: learning becomes easier when you can see the structure. Mind maps and dependency graphs make concepts stick because they show not just what to learn, but how ideas depend on each other. The problem is that creating these maps manually is slow, tedious, and impractical.

So we asked a simple question: what if this entire structure could be generated instantly? That is how Concept Graph AI was born. It takes any learning material and turns it into an interactive visual learning path. To go beyond passive visualization, we added targeted quizzes so learners do not just explore concepts but validate their understanding node by node.

What it does

Concept Graph AI transforms any uploaded syllabus or learning content into an interactive knowledge graph.

  1. It extracts all concepts, organizes them into structured topics and subtopics, and visualizes them as a mind map.
  2. Users can click on any node to attempt dynamically generated quizzes.
  3. The system identifies weak areas and enables focused learning by guiding users to the exact concepts they need to improve.

How we built it

We developed a full-stack web application focused on performance, structure, and user experience.

Frontend: We built a smooth, intuitive interface using React with modern CSS. We used React Flow for graph visualization and built custom logic on top of it to efficiently handle deeply nested topic structures.
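The custom logic on top of React Flow can be sketched as a walk over the nested topic tree that emits the flat `nodes` and `edges` arrays React Flow renders. The input shape (`title`, `subtopics`) is an assumption about our extraction schema, and the positions here are placeholders for a real layout pass:

```javascript
// Sketch: flatten a nested topic/subtopic tree into the flat `nodes`
// and `edges` arrays that React Flow renders. The input shape
// ({ title, subtopics }) is an assumption about our extraction schema;
// positions are placeholders for a real layout algorithm.
function topicTreeToFlow(root) {
  const nodes = [];
  const edges = [];
  let nextId = 0;

  function visit(topic, parentId, depth) {
    const id = String(nextId++);
    nodes.push({
      id,
      data: { label: topic.title },
      position: { x: depth * 200, y: nodes.length * 60 }, // placeholder layout
    });
    if (parentId !== null) {
      edges.push({ id: `e${parentId}-${id}`, source: parentId, target: id });
    }
    for (const child of topic.subtopics ?? []) visit(child, id, depth + 1);
  }

  visit(root, null, 0);
  return { nodes, edges };
}
```

Keeping this conversion pure (tree in, `{ nodes, edges }` out) made deeply nested hierarchies manageable, since layout and rendering concerns stay out of the traversal.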

Backend: A Node.js and Express backend handles document parsing, request management, and communication with the AI layer.

AI Integration: We used the Google Gemini API to process uploaded content. The backend constructs structured prompts to extract a hierarchy of topics and subtopics in strict JSON format.
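A simplified sketch of the kind of structured prompt the backend builds; the exact wording and the schema's field names are illustrative, not the production prompt:

```javascript
// Sketch of a structured extraction prompt sent to the AI layer.
// The schema and wording are simplified/hypothetical stand-ins for
// the tuned production prompt.
function buildExtractionPrompt(documentText) {
  const schema = `{
  "topics": [
    { "title": "string", "subtopics": [ /* same shape, recursively */ ] }
  ]
}`;
  return [
    "Extract every concept from the material below.",
    "Organize the concepts into a hierarchy of topics and subtopics.",
    "Respond with ONLY valid JSON matching this schema, no prose, no markdown:",
    schema,
    "--- MATERIAL ---",
    documentText,
  ].join("\n\n");
}
```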

Dynamic Quizzing: Each node in the graph becomes interactive. The system prompts the AI to generate context-specific quiz questions for the selected concept, turning the platform into an active learning system.
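Because quiz payloads come back from the model, they get validated before a node's quiz panel renders them. A minimal sketch, where the field names (`question`, `options`, `answerIndex`) are assumptions about our quiz schema:

```javascript
// Sketch: validate a parsed quiz payload before rendering it.
// Field names (question, options, answerIndex) are assumptions about
// our quiz schema, not part of any library.
function isValidQuiz(payload) {
  if (!payload || !Array.isArray(payload.questions)) return false;
  return payload.questions.every(
    (q) =>
      typeof q.question === "string" &&
      Array.isArray(q.options) &&
      q.options.length >= 2 &&
      Number.isInteger(q.answerIndex) &&
      q.answerIndex >= 0 &&
      q.answerIndex < q.options.length
  );
}
```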

Challenges we ran into

LLM hallucinations and formatting issues: The AI often returned inconsistent output, such as Markdown fences or numbered lists, which broke JSON parsing. We solved this by enforcing a strict output format with responseMimeType set to "application/json".
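Before the strict output format was enforced, the parsing side needed a defensive fallback for fenced output. A sketch of that kind of tolerant parser:

```javascript
// Sketch: tolerant JSON extraction for model output that may arrive
// wrapped in a Markdown code fence -- the failure mode we hit before
// enforcing responseMimeType: "application/json".
function parseModelJson(raw) {
  // If the payload is wrapped in ```json ... ``` (or bare ``` ... ```),
  // extract the fenced body; otherwise use the raw string.
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
  const candidate = fenced ? fenced[1] : raw;
  try {
    return JSON.parse(candidate);
  } catch {
    return null; // caller can retry or surface an error
  }
}
```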

API rate limits and concurrency: Parallel quiz generation triggered "too many requests" errors from the API. We redesigned the pipeline around sequential processing, exponential backoff, and controlled request handling.
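The redesign can be sketched as a sequential loop with retries; `callModel` stands in for the real API call, and the delay values and retry cap are illustrative:

```javascript
// Sketch: sequential processing with exponential backoff, replacing
// the parallel fan-out that hit rate limits. Delays and the retry cap
// are illustrative, not tuned values.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withBackoff(fn, maxRetries = 4, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      await sleep(baseDelayMs * 2 ** attempt); // 500, 1000, 2000, ...
    }
  }
}

// Process nodes one at a time instead of firing all requests at once.
async function generateQuizzesSequentially(nodes, callModel) {
  const results = [];
  for (const node of nodes) {
    results.push(await withBackoff(() => callModel(node)));
  }
  return results;
}
```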

React lifecycle complexity: Handling asynchronous AI responses caused rendering issues and inconsistent hook behavior. We resolved this by restructuring useEffect dependencies and stabilizing state management.
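The core bug can be illustrated without React: slow, out-of-order AI responses overwriting newer state. The guard below shows the idea in plain JavaScript; inside a component, the same effect is achieved with a cancellation flag in a useEffect cleanup:

```javascript
// Sketch: the stale-response problem behind our useEffect issues,
// shown outside React. latestOnly() wraps an async fetcher so only
// the most recent call may "commit" its result -- the same idea as
// a cancellation flag set in a useEffect cleanup function.
function latestOnly(fetcher, commit) {
  let latestCall = 0;
  return async (...args) => {
    const callId = ++latestCall;
    const result = await fetcher(...args);
    if (callId === latestCall) commit(result); // ignore stale responses
  };
}
```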

Accomplishments that we're proud of

Successfully built a system that converts raw learning material into a structured, interactive knowledge graph.

Achieved reliable AI integration with structured outputs suitable for real-time applications.

Transformed passive content into an active learning experience through dynamic quizzes.

Built a scalable pipeline that separates extraction, structuring, and visualization.

What we learned

AI integration requires careful engineering. It is not just about calling a model but handling failures, inconsistencies, and limits.

System design is more important than prompt design. Separating extraction from structuring significantly improved reliability.

Visualizing relationships between concepts enhances understanding far more than linear content.

Building reactive systems with asynchronous data requires strong control over state and lifecycle behavior.

What's next for Concept Graph

Retrieval-Augmented Generation (RAG) for Scope Control: We plan to integrate RAG so that all extracted concepts and generated questions stay strictly within the scope of the uploaded syllabus. This will improve relevance, reduce hallucinations, and make the system more academically reliable.

Enhanced Quiz Experience: We will expand the quiz system to support both:

  1. Multiple-choice questions (MCQs)
  2. Typed, open-ended answers

This will improve usability while enabling deeper evaluation of understanding.

Improved LLM Responses: We aim to refine prompt design and system architecture to generate more accurate, more context-aware, and more consistent responses. This includes better formatting, reduced ambiguity, and stronger alignment with the source material.

Bloom's Taxonomy Levels: We plan to incorporate cognitive levels into the learning experience by aligning quizzes with Bloom's Taxonomy. This will allow the system to:

  1. Generate questions at different cognitive levels
  2. Evaluate depth of understanding, not just correctness
  3. Guide learners from basic recall to higher-order thinking

Built With

react, react-flow, css, node.js, express, google-gemini-api
