Inspiration
During my physics studies, I encountered several abstract and difficult-to-grasp concepts, such as the “virtual object” in geometrical optics. Even after multiple discussions with teachers and classmates, I only half understood it and often made mistakes when solving problems. It wasn’t until I watched an experiment video and drew the ray diagram that I truly understood and absorbed the concept.
This experience made me realize how important a tangible, visual experiment or demo is for internalizing abstract concepts into one’s own knowledge.
At that time, I was teaching myself Swift, so I used the double-slit interference experiment from wave optics as a practice project. I was pleased with the result, but there are far too many physics experiments to code them one by one, so I came up with the idea of using a large language model to generate these demos automatically.
What it does
This project is an interactive experiment generation agent powered by Perplexity MCP (AI search) and GPT-5.
When a user types a description of an experiment they want to generate into the chat box on the homepage, the agent will:
- Interpret the user’s request and determine the target experiment.
- Use Perplexity MCP to retrieve the experiment’s process and adjustable parameters.
- Have GPT-5 write a self-contained HTML page that renders the experiment as an interactive demo with a dark theme and low-saturation colors.
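The three steps above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: `searchExperiment` and `generateDemoHtml` are hypothetical stand-ins for the real Perplexity MCP retrieval and GPT-5 generation calls, and the spec fields are assumed.

```typescript
// Hypothetical shape of what the MCP search step returns (assumed, not from the source).
type ExperimentSpec = { name: string; steps: string[]; parameters: string[] };

// Stub: in the real agent this step queries Perplexity MCP for the
// experiment's process and adjustable parameters.
function searchExperiment(request: string): ExperimentSpec {
  return {
    name: request.trim(),
    steps: ["set up apparatus", "vary a parameter", "observe the result"],
    parameters: ["slit spacing", "wavelength"],
  };
}

// Stub: in the real agent this step prompts GPT-5 to produce the full
// interactive HTML demo; here we only assemble a dark-themed skeleton.
function generateDemoHtml(spec: ExperimentSpec): string {
  const sliders = spec.parameters
    .map((p) => `<label>${p} <input type="range"></label>`)
    .join("\n");
  return `<!DOCTYPE html>
<html><body style="background:#111;color:#9ab">
<h1>${spec.name}</h1>
${sliders}
</body></html>`;
}

// Steps 1-3: interpret the request, retrieve the spec, emit the demo.
function runAgent(userRequest: string): string {
  const spec = searchExperiment(userRequest);
  return generateDemoHtml(spec);
}
```

The real pipeline differs in the obvious ways (streaming model calls, error handling, a richer spec), but the data flow is the same: user text in, experiment spec in the middle, runnable HTML out.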
How we built it
- The overall framework was built by SOLO Builder after I described my idea.
- The core functionality was straightforward: call the GPT-5 API and instruct the model on how to use Perplexity MCP to retrieve experiment data.
- I refined the system prompt multiple times to fix the output format so the generated results could run directly.
- The visual design went through multiple iterations using SOLO Builder's select-element feature, which let me adjust front-end code and layout as intuitively as in a professional design tool. The final style is a dark, sci-fi aesthetic.
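One concrete piece of the "fix the output format so results run directly" work can be sketched as a post-processing step. This is an assumption about a typical cleanup, not the project's actual code: models often wrap HTML in a Markdown code fence, and `extractHtml` (a hypothetical helper) strips it so the page can load straight into the preview.

```typescript
// Build the fence marker programmatically to avoid writing a literal
// triple-backtick inside this snippet.
const FENCE = "`".repeat(3);

// Hypothetical cleanup: if the model wrapped its answer in a fenced code
// block (optionally tagged "html"), keep only the body; otherwise pass
// the output through unchanged.
function extractHtml(modelOutput: string): string {
  const pattern = new RegExp(FENCE + "(?:html)?\\s*([\\s\\S]*?)" + FENCE);
  const match = modelOutput.match(pattern);
  return (match ? match[1] : modelOutput).trim();
}
```

A stricter version could also verify the result starts with `<!DOCTYPE html>` and re-prompt the model when it does not, which is the kind of check system-prompt tuning aims to make unnecessary.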
Challenges we ran into
- Initially, GPT-5’s output quality was worse than GPT-5-mini’s, and it took careful prompt tuning to achieve stable results.
- The first version’s three-column layout (chat on the left, code in the middle, preview on the right) was cramped, poorly colored, and looked like a scam website.
- Achieving the right balance between aesthetics and usability required multiple UI and styling adjustments.
Accomplishments that we're proud of
- Built a complete pipeline that automatically generates experiment demos from user input.
- Delivered a final interface that matched my vision, including animated light beams with an ocean-like flowing effect.
- Produced a virtual object demo that adheres fully to physical principles.
- Kept the cost low (around 0.02 USD per generation) and provided public access through my own API.
What we learned
- A visual, interactive experiment is incredibly effective for understanding abstract physics concepts.
- System prompt engineering is crucial when using large models to generate both code and UI.
- Attention to detail in layout and color design greatly enhances user experience.
What's next for Experiment-Generate Agent
- Support experiment generation for more subjects beyond physics.
- Add richer visual effects and interactive features to make experiments more immersive.
- Improve generation speed and stability while further reducing costs.
- Introduce a community feature for sharing and improving generated demos.
Built With
- gpt5
- mcp
- node.js
- perplexity
- supabase
- typescript
- vercel