Inspiration
As a Master's student in Computational Materials Science, I noticed a persistent "characterization bottleneck" in the laboratory. Experimentalists often spend hours correlating surface-level morphology from Scanning Electron Microscopy (SEM) with atomic-scale data from X-ray Diffraction (XRD). I had always wanted to build something that could solve this, something beyond one-off scientific machine-learning scripts. As a beginner in application development, I wanted to challenge myself to build an interactive web application that puts the intuition of a senior scientist into the pocket of every researcher. MatNexus was born from that desire: to turn specialized training into seconds of AI-driven insight.
What it does
MatNexus is a multimodal Materials Informatics Hub that automates the transition from raw lab data to physics-grounded models. It features:
- The Lab Debugger: A vision-integrated engine that analyzes XRD and SEM images simultaneously to identify material phases, space groups, and crystal habit.
- Autonomous Discovery Loop: Compares experimental data against the Materials Project API to detect lattice discrepancies and provide actionable synthesis advice.
- 3D Crystal Visualizer: Instantly renders unit cells and simulated powder diffraction patterns for experimental verification.
- Literature Miner: Uses high-context indexing to extract structured technical data from dense research PDFs.
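As a toy illustration of what the diffraction simulation behind the visualizer computes (the real app relies on pymatgen; the function name and parameters here are hypothetical), peak positions for a simple cubic lattice follow directly from the interplanar spacing and Bragg's law:

```python
import math

def cubic_peak_two_thetas(a_angstrom, wavelength=1.5406, hkl_max=2):
    """Predict 2-theta peak positions (degrees) for a simple cubic cell.

    For cubic cells, d = a / sqrt(h^2 + k^2 + l^2); Bragg's law
    (lambda = 2 d sin theta) then gives the peak angle. This is a
    toy sketch only: a full simulator such as pymatgen's
    XRDCalculator also handles structure factors, intensities,
    and symmetry-based extinctions.
    """
    peaks = {}
    for h in range(hkl_max + 1):
        for k in range(hkl_max + 1):
            for l in range(hkl_max + 1):
                s = h * h + k * k + l * l
                if s == 0:
                    continue  # (0,0,0) is not a reflection
                d = a_angstrom / math.sqrt(s)
                ratio = wavelength / (2 * d)
                if ratio <= 1:  # reflection is geometrically allowed
                    peaks[s] = round(2 * math.degrees(math.asin(ratio)), 2)
    return sorted(peaks.values())

# Example: a cubic cell with a = 3.35 Angstrom under Cu K-alpha radiation
print(cubic_peak_two_thetas(3.35))
```

The first peak for this cell lands near 2θ ≈ 26.6°; comparing such predicted positions against measured patterns is the kind of cross-check the Autonomous Discovery Loop automates.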
How I built it
The project is built on a foundation of Python-based scientific libraries and cutting-edge AI:
- Gemini 3 Flash: Serves as the reasoning backbone, handling multimodal vision tasks and a 1M+ token context for literature mining.
- Pymatgen: The core physics engine used for crystallographic analysis and diffraction simulation.
- Materials Project API: Integrated as the scientific ground truth for validating AI-generated structural predictions.
- Streamlit: Powers the UI and enables seamless deployment.
Challenges I ran into
The most significant challenge was structural data integration. I had to learn how to parse raw text output from an LLM into Crystallographic Information Files (CIF) that a physics engine could read. Additionally, ensuring the multimodal vision accurately interpreted the physics within a noisy plot was difficult. I had to refine prompts so the AI understood the relationship in Bragg's law: $$n\lambda = 2d \sin \theta$$ where $d$ is the interplanar spacing, $\lambda$ the X-ray wavelength, and $\theta$ the diffraction angle. Getting the AI to "see" the physics across different scales required extensive iterative testing.
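A minimal sketch of that text-to-CIF step, under the assumption that the model's reply contains lattice parameters as lines like `a = 3.615` (the function names and the regex are hypothetical; the actual pipeline is more defensive and hands the result to pymatgen):

```python
import math
import re

def parse_lattice(llm_text):
    """Pull lattice parameters (in Angstrom) out of free-form model output."""
    params = {}
    for axis in ("a", "b", "c"):
        m = re.search(rf"\b{axis}\s*=\s*([0-9.]+)", llm_text)
        if m:
            params[axis] = float(m.group(1))
    return params

def minimal_cif(params, formula="Cu"):
    """Emit a bare-bones cubic CIF block a physics engine can ingest."""
    a = params["a"]
    return "\n".join([
        f"data_{formula}",
        f"_cell_length_a {a:.4f}",
        f"_cell_length_b {params.get('b', a):.4f}",  # cubic: default b, c to a
        f"_cell_length_c {params.get('c', a):.4f}",
        "_cell_angle_alpha 90",
        "_cell_angle_beta 90",
        "_cell_angle_gamma 90",
    ])

def bragg_d(wavelength, two_theta_deg, n=1):
    """Interplanar spacing d from Bragg's law: n*lambda = 2 d sin(theta)."""
    theta = math.radians(two_theta_deg / 2)
    return n * wavelength / (2 * math.sin(theta))

reply = "The cell is cubic with a = 3.615 Angstrom (Fm-3m)."
cif = minimal_cif(parse_lattice(reply))
```

Sanity checks like `bragg_d` were exactly how I caught cases where the vision model misread peak positions: a 2θ of 43.3° under Cu K-alpha should give d ≈ 2.09 Å, and any parsed structure inconsistent with that gets flagged.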
Accomplishments that I am proud of
I am proud of building a platform that feels like a professional workstation. Seeing the 3D unit cell render for the first time, knowing it was generated from a raw image of a diffraction pattern, felt amazing. Furthermore, as someone new to application development, I am proud of shipping a deployment-ready product that securely handles API secrets.
What I learned
This was my first hackathon, and it has been a masterclass in multimodal prompt engineering and UI/UX design for scientists. I learned that technical sophistication is nothing without human relevance; how you present data is just as important as the accuracy of the prediction itself. I also discovered the power of Gemini 3 Flash, which let me balance processing speed with deep scientific reasoning.
What's next for MatNexus
I think this is just the beginning. The roadmap includes:
- Integration of TEM/STEM Data: Expanding the vision engine to atomic-resolution microscopy images.
- Real-time Lab Connectivity: Connecting MatNexus directly to lab instruments for on-the-fly characterization during experiments.
- Advanced DFT Proxies: Moving beyond density predictions to full electronic density of states (DOS) estimations.