Inspiration
Modern supply chain systems involve large-scale, unstructured data that require efficient organization, analysis, and reasoning. Inspired by the limitations of traditional expert systems and the growing potential of LLMs, we set out to build a framework that combines the structure of knowledge graphs with the generative and reasoning power of large language models. Our goal was to enhance interpretability, flexibility, and reasoning depth in supply chain question answering.
What it does
Our framework constructs a domain-specific knowledge graph from unstructured supply chain data, indexes it, and integrates it into a Retrieval-Augmented Generation (RAG) pipeline. It supports multi-hop question answering, entity classification, and edge prediction by retrieving relevant subgraphs and prompting LLMs with structured context. The system shows significant improvements over baselines like expert systems and heuristic retrieval.
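The retrieve-then-prompt flow described above can be sketched as follows. This is a minimal illustration with an in-memory toy graph standing in for the real database; all entity names, relation names, and function names are made up for the example.

```python
# Minimal sketch of the graph-RAG flow: expand a subgraph around a seed
# entity, then serialize the triples as structured context for the LLM.
# The graph here is an in-memory stand-in; all names are illustrative.
from collections import deque

# Toy knowledge graph: node -> list of (relation, neighbor) edges.
GRAPH = {
    "SupplierA": [("supplies", "PartX")],
    "PartX": [("used_in", "ProductY")],
    "ProductY": [("sold_by", "RetailerZ")],
}

def retrieve_subgraph(seed, hops=2):
    """Breadth-first expansion: collect triples within `hops` of the seed."""
    triples, frontier, seen = [], deque([(seed, 0)]), {seed}
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for rel, nbr in GRAPH.get(node, []):
            triples.append((node, rel, nbr))
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return triples

def build_prompt(question, triples):
    """Serialize retrieved triples as structured context for the LLM."""
    context = "\n".join(f"({h}) -[{r}]-> ({t})" for h, r, t in triples)
    return f"Context triples:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("Which product uses PartX?", retrieve_subgraph("SupplierA"))
```

The hop limit is what makes multi-hop questions answerable: a two-hop expansion from a supplier already reaches the products its parts feed into.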
How we built it
We built the graph database using Neo4j and defined ontologies for entity types and relationships. We implemented the full data pipeline, from entity extraction and graph construction to retrieval (both heuristic and embedding-based), LLM prompting, and output evaluation. We also explored GNN-based enhancements and ran extensive ablation studies. The project resulted in two paper submissions.
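As an illustration of the graph-construction step, extracted triples can be loaded into Neo4j with idempotent Cypher `MERGE` statements. The sketch below only builds the query string (no live database is required to run it); the labels, relation name, and helper function are illustrative, not the project's actual schema.

```python
# Hedged sketch: turn one extracted (head, relation, tail) triple into a
# Cypher MERGE statement. MERGE is idempotent, so re-running the pipeline
# does not duplicate nodes or edges. Labels/names here are illustrative.
def triple_to_cypher(head_type, rel, tail_type):
    """Build a parameterized MERGE statement for one triple type."""
    return (
        f"MERGE (h:{head_type} {{name: $head}}) "
        f"MERGE (t:{tail_type} {{name: $tail}}) "
        f"MERGE (h)-[:{rel}]->(t)"
    )

stmt = triple_to_cypher("Supplier", "SUPPLIES", "Part")
# This string would then be executed with the official neo4j Python
# driver, e.g. session.run(stmt, head="SupplierA", tail="PartX").
```

Keeping node identity in a `name` parameter (rather than interpolating it into the query) follows the driver's recommended parameterized-query style.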
Challenges we ran into
One major challenge was aligning the output of unstructured LLM reasoning with the formal structure of the knowledge graph. Another was designing retrieval methods that balanced precision and coverage, especially in multi-hop queries. Integrating all components—graph DB, retrievers, LLMs—into an efficient and modular framework also posed engineering difficulties.
Accomplishments that we're proud of
- Built a full end-to-end framework from raw data to explainable LLM outputs
- Designed and implemented both heuristic and semantic retrieval systems
- Achieved strong performance improvements over multiple baselines
- Completed and submitted two research papers based on the work
- Demonstrated flexible and interpretable reasoning with structured graph support
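To illustrate the semantic (embedding-based) retriever mentioned above: candidate triples are ranked by similarity to the question. The sketch below uses a toy bag-of-words embedding and cosine similarity as a stand-in for a real sentence-embedding model; all names and data are illustrative.

```python
# Sketch of embedding-based retrieval: score each candidate triple by
# cosine similarity between its text and the question, keep the top-k.
# The bag-of-words embed() is a toy stand-in for a real embedding model.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_retrieve(question, triples, k=2):
    q = embed(question)
    ranked = sorted(triples, key=lambda t: cosine(q, embed(" ".join(t))),
                    reverse=True)
    return ranked[:k]

triples = [
    ("SupplierA", "supplies", "PartX"),
    ("PartX", "used in", "ProductY"),
    ("RetailerZ", "located in", "Region1"),
]
top = semantic_retrieve("which product is PartX used in", triples, k=1)
```

The heuristic retriever can pre-filter candidates (high coverage) before this ranking step trims them for precision, which is one way to balance the trade-off noted in the challenges above.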
What we learned
We learned how to combine symbolic and neural systems effectively by leveraging structured graph knowledge in prompt design. We also gained deep insights into the limitations of expert systems, the power of multi-hop reasoning, and how to make LLM outputs more traceable and accurate through graph alignment.
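The "graph alignment" idea for traceability can be sketched simply: any relation the LLM asserts in its answer is checked against the graph, and unsupported claims are flagged rather than silently accepted. The triple set and function below are illustrative, not the project's actual implementation.

```python
# Sketch of graph alignment: split triples asserted by the model into
# graph-supported and unsupported sets. Names and data are illustrative.
KNOWN_TRIPLES = {
    ("SupplierA", "supplies", "PartX"),
    ("PartX", "used_in", "ProductY"),
}

def align(predicted_triples):
    """Separate model-asserted triples by whether the graph supports them."""
    supported = [t for t in predicted_triples if t in KNOWN_TRIPLES]
    unsupported = [t for t in predicted_triples if t not in KNOWN_TRIPLES]
    return supported, unsupported

ok, flagged = align([
    ("SupplierA", "supplies", "PartX"),   # grounded in the graph
    ("SupplierA", "supplies", "ProductY"),  # hallucinated edge, flagged
])
```

Surfacing the flagged set alongside the answer is what makes the output traceable: every claim either points back to a graph edge or is explicitly marked as ungrounded.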
What's next for 一顿吃三碗
Moving forward, we're interested in extending this line of work by incorporating GNN-based reasoning into the retrieval and representation pipeline, applying the framework to other domains (e.g., biomedical or legal), and exploring ways to improve trust and factual consistency in LLM outputs through structure-aware decoding and training.