Reptile Systems
Tagline
Interactive systems design, visualized.
Inspiration
Systems design is one of the hardest areas of software engineering to learn because most of it is invisible. Concepts like load balancing, caching, queues, databases, CDNs, object storage, APIs, and scaling patterns are usually taught through static diagrams, long articles, or interview-style whiteboarding.
Those formats can explain what a system looks like, but they often fail to show how the pieces relate. Beginners end up memorizing boxes and arrows without developing intuition for why components are connected, what each component contributes, and how a small design evolves into a scalable architecture.
We built Reptile Systems to make systems design more visual, interactive, and intuitive. Instead of passively reading diagrams, learners can build them. They choose components, combine them, see what architecture pattern emerges, receive feedback when a design is unstable, and gradually discover how real distributed systems are assembled.
What it does
Reptile Systems is an interactive learning environment for systems design. Users begin with core infrastructure primitives such as CLIENT, DNS, CDN, load balancers, API gateways, app servers, databases, caches, queues, and object stores. By combining these primitives, they unlock higher-level architecture concepts:
- CLIENT + DNS becomes an edge entry path.
- API + APP becomes a backend service.
- APP + CACHE becomes a cached low-latency service.
- APP + QUEUE becomes an asynchronous worker flow.
- DB + CACHE becomes a read path.
- FAST + ASYNC becomes a scalable service.
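To make the mapping concrete, here is a minimal TypeScript sketch of such a composition map. The type names, pattern ids, and lesson strings are illustrative, not the exact data in the app:

```typescript
// Illustrative sketch of the composition map; ids and lesson text are
// placeholders, not Reptile Systems' actual data.
type ComponentId =
  | "CLIENT" | "DNS" | "CDN" | "API" | "APP"
  | "DB" | "CACHE" | "QUEUE" | "FAST" | "ASYNC";

interface Pattern {
  id: string;
  name: string;
  lesson: string; // one-line explanation surfaced to the learner
}

// Keys are alphabetically ordered "A+B" pairs so lookup is order-insensitive.
const COMPOSITIONS: Record<string, Pattern> = {
  "CLIENT+DNS": { id: "EDGE",  name: "Edge entry path",   lesson: "DNS resolves the client to an entry point." },
  "API+APP":    { id: "SVC",   name: "Backend service",   lesson: "A gateway routes requests to app servers." },
  "APP+CACHE":  { id: "FAST",  name: "Cached service",    lesson: "Hot reads skip the slow path." },
  "APP+QUEUE":  { id: "ASYNC", name: "Async worker flow", lesson: "Slow work moves off the request path." },
  "CACHE+DB":   { id: "READ",  name: "Read path",         lesson: "Reads hit the cache first, then the database." },
  "ASYNC+FAST": { id: "SCALE", name: "Scalable service",  lesson: "Fast reads plus async work scale together." },
};
```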
The goal is not just to make combinations work. The goal is to help learners understand why each combination matters in a real system. The platform includes:
- A visual design studio for exploring infrastructure components.
- Gesture-based interaction using the user's webcam.
- 3D component visuals and real-time feedback.
- A discovery dashboard that organizes concepts by learning level.
- Guided training modules like Build Web App and Scale Service.
- A Spotify architecture challenge where users assemble a simplified streaming platform.
- An AI learning guide that explains created components, answers system-design questions, and gives contextual hints based on the current design state.
Together, these features turn systems design from a static diagramming exercise into an interactive learning experience.
How we built it
We built Reptile Systems as a React and TypeScript application powered by real-time visualization, gesture input, and AI-assisted explanation.
The frontend is built with Vite, React, React Router, Three.js, and React Three Fiber. The 3D scene renders architecture components, animated visuals, and the interactive lab environment.
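As a rough illustration of this layer, here is a minimal React Three Fiber scene with two selectable component nodes. The geometry, colors, and handlers are placeholders, not our production components:

```tsx
// Minimal React Three Fiber sketch of component nodes in the lab scene.
import { Canvas } from "@react-three/fiber";

function ComponentNode({ position, label }: {
  position: [number, number, number];
  label: string;
}) {
  return (
    <mesh position={position} onClick={() => console.log(`selected ${label}`)}>
      <boxGeometry args={[1, 1, 1]} />
      <meshStandardMaterial color="#4caf50" />
    </mesh>
  );
}

export function LabScene() {
  return (
    <Canvas camera={{ position: [0, 2, 6] }}>
      <ambientLight intensity={0.6} />
      <directionalLight position={[3, 5, 2]} />
      <ComponentNode position={[-1.5, 0, 0]} label="APP" />
      <ComponentNode position={[1.5, 0, 0]} label="CACHE" />
    </Canvas>
  );
}
```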
For interaction, we use MediaPipe Tasks Vision to track hands through the webcam. The app detects pointing, pinching, hand presence, closed fists, reset gestures, and snap-ready proximity so learners can interact with the system without relying only on a mouse.
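A simplified sketch of the pinch detection, assuming the standard MediaPipe Tasks Vision hand landmarker API; the threshold constant is illustrative:

```typescript
// Sketch of pinch detection with MediaPipe Tasks Vision.
import { FilesetResolver, HandLandmarker } from "@mediapipe/tasks-vision";

const PINCH_THRESHOLD = 0.05; // assumed value, in normalized image coordinates

async function createTracker(): Promise<HandLandmarker> {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
  );
  return HandLandmarker.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath:
        "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task",
    },
    runningMode: "VIDEO",
    numHands: 2,
  });
}

// Landmark 4 is the thumb tip and 8 the index fingertip; a pinch is the
// two coming close together in the normalized image plane.
function isPinching(tracker: HandLandmarker, video: HTMLVideoElement): boolean {
  const result = tracker.detectForVideo(video, performance.now());
  const hand = result.landmarks[0];
  if (!hand) return false;
  return Math.hypot(hand[4].x - hand[8].x, hand[4].y - hand[8].y) < PINCH_THRESHOLD;
}
```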
The educational logic is driven by a local systems-design composition graph. Each valid pair of components maps to a meaningful architecture pattern, such as cached services, routed traffic, data platforms, worker systems, and content delivery paths. Invalid combinations trigger feedback and guidance instead of silently failing.
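Building on the COMPOSITIONS sketch shown earlier, the lookup can normalize pair order and return a hint for invalid pairs instead of failing silently. Again a sketch, not the exact implementation:

```typescript
// Order-insensitive lookup over the COMPOSITIONS map sketched above.
function combine(a: ComponentId, b: ComponentId): Pattern | { hint: string } {
  const key = [a, b].sort().join("+");
  const pattern = COMPOSITIONS[key];
  if (pattern) return pattern;
  // Invalid pairs get guidance rather than a silent no-op.
  return {
    hint: `${a} and ${b} don't form a pattern here. Try pairing ${a} with a component it talks to directly.`,
  };
}
```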
For AI support, Gemini generates short explanations of newly created architecture concepts. ASI:One powers the interactive lab guide, with function-calling tools that let it look up exact component combinations. We also integrated Agentverse/uAgents through a Python agent so the guide can exist as an agentic assistant outside the frontend.
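For the tool-calling piece, a lookup tool can be registered roughly as follows, assuming ASI:One exposes an OpenAI-compatible chat completions API. The endpoint, model name, and schema here are our assumptions, not confirmed values:

```typescript
// Hypothetical registration of a combination-lookup tool with ASI:One.
const lookupTool = {
  type: "function",
  function: {
    name: "lookup_combination",
    description: "Return the architecture pattern produced by two components, if any.",
    parameters: {
      type: "object",
      properties: {
        a: { type: "string", description: "First component id, e.g. APP" },
        b: { type: "string", description: "Second component id, e.g. CACHE" },
      },
      required: ["a", "b"],
    },
  },
};

async function askGuide(question: string) {
  // Endpoint and model id are assumed values for illustration.
  const res = await fetch("https://api.asi1.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.ASI_ONE_API_KEY}`,
    },
    body: JSON.stringify({
      model: "asi1-mini",
      messages: [{ role: "user", content: question }],
      tools: [lookupTool],
    }),
  });
  return res.json();
}
```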
To make the assistant more contextual, the app passes structured lab state and optional screenshot context. html2canvas captures the current interface, while OpenRouter vision models can summarize what is visible on screen. ElevenLabs provides speech output for the guide, with browser speech synthesis as a fallback.
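The screenshot flow looks roughly like this sketch; the element selector, environment variable, and model id are illustrative:

```typescript
// Sketch: capture the lab with html2canvas, then ask an OpenRouter
// vision model to summarize what is on screen.
import html2canvas from "html2canvas";

async function describeScreen(): Promise<string> {
  const lab = document.querySelector<HTMLElement>("#lab-root"); // hypothetical selector
  if (!lab) return "";
  const canvas = await html2canvas(lab);
  const dataUrl = canvas.toDataURL("image/png");

  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${import.meta.env.VITE_OPENROUTER_KEY}`, // assumed env var
    },
    body: JSON.stringify({
      model: "google/gemma-3-27b-it",
      messages: [{
        role: "user",
        content: [
          { type: "text", text: "Briefly describe the architecture diagram on screen." },
          { type: "image_url", image_url: { url: dataUrl } },
        ],
      }],
    }),
  });
  const json = await res.json();
  return json.choices?.[0]?.message?.content ?? "";
}
```

One practical caveat: capturing a WebGL scene this way generally requires the Three.js renderer to be created with preserveDrawingBuffer: true, or the captured canvas comes out blank.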
We used Convex for saving discovered elements and progress, plus localStorage for lightweight in-browser persistence during the hackathon demo.
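Persisting a discovered pattern can be a single Convex mutation, sketched below; the table and field names are assumptions:

```typescript
// convex/discoveries.ts - hypothetical mutation for recording progress.
import { mutation } from "./_generated/server";
import { v } from "convex/values";

export const saveDiscovery = mutation({
  args: { userId: v.string(), patternId: v.string() },
  handler: async (ctx, args) => {
    // Skip duplicates so re-discovering a pattern is idempotent.
    const existing = await ctx.db
      .query("discoveries")
      .filter((q) =>
        q.and(
          q.eq(q.field("userId"), args.userId),
          q.eq(q.field("patternId"), args.patternId)
        )
      )
      .first();
    if (!existing) {
      await ctx.db.insert("discoveries", { ...args, discoveredAt: Date.now() });
    }
  },
});
```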
Acknowledgment
Reptile Systems was built on top of Atomis, an open-source hand-tracked chemistry app. We used its component architecture, TypeScript type system, and MediaPipe gesture interaction model as our foundation.
We repurposed its element combination engine — originally designed for chemistry compounds — to teach distributed systems design, replacing chemical reactions with systems components like load balancers, caches, and message queues.
On top of this scaffold, we built an entirely new educational domain, puzzle system, and mascot-guided learning flow that makes systems design tangible and interactive for CS students.
Challenges we ran into
1. Making systems feel intuitive
We had to ensure every interaction mapped to a real architectural concept—not just visual gimmicks.
2. Gesture control complexity
Hands are noisy input signals. We handled:
- Landmark tracking
- Coordinate mapping
- Latency + jitter
- Gesture thresholds
3. Context-aware AI
Generic explanations weren’t enough. We built:
- Structured state passing
- Tool-based lookups
- Screenshot-aware reasoning
4. System integration
We combined:
- 3D rendering
- Computer vision
- AI agents
- Real-time state
All in a hackathon timeframe.
How we overcame these challenges
1. Making systems feel intuitive
We designed the interaction model around real systems-design relationships. Instead of letting users combine components randomly, we created a structured composition graph where each valid pair maps to an actual architecture pattern. For example, APP + CACHE becomes a cached service, while APP + QUEUE becomes an async worker flow. This made the experience playful without losing educational value.
2. Gesture control complexity
We used MediaPipe hand tracking to detect hand landmarks, then translated those coordinates into the app’s screen space. To make gestures usable, we added forgiving thresholds for pinching, pointing, and bringing hands together. We also added reset and error states so the app could recover gracefully when tracking was noisy or when users made accidental gestures.
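Two of those tricks, smoothing and hysteresis, look roughly like this (the constants are illustrative):

```typescript
// Sketch: exponential smoothing of the cursor plus hysteresis on the
// pinch threshold, so jittery readings can't flicker the gesture on/off.
const ALPHA = 0.35;     // smoothing factor: higher = more responsive, noisier
const PINCH_ON = 0.045; // start pinching below this normalized distance
const PINCH_OFF = 0.07; // stop pinching only above this larger one

let smoothed = { x: 0, y: 0 };
let pinching = false;

function update(raw: { x: number; y: number }, pinchDistance: number) {
  // Exponential moving average damps frame-to-frame jitter.
  smoothed.x = ALPHA * raw.x + (1 - ALPHA) * smoothed.x;
  smoothed.y = ALPHA * raw.y + (1 - ALPHA) * smoothed.y;

  // Hysteresis: separate on/off thresholds prevent rapid toggling.
  if (!pinching && pinchDistance < PINCH_ON) pinching = true;
  else if (pinching && pinchDistance > PINCH_OFF) pinching = false;

  return { cursor: { ...smoothed }, pinching };
}
```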
3. Context-aware AI
We avoided making the AI guide a generic chatbot. Instead, we gave it structured context about the current lab state: selected components, active course, fusion result, visible shelf items, and failure states. We also added tool-based lookup for exact component combinations, so the AI could give grounded hints instead of hallucinated advice. Screenshot context made the guide even more aware of what the learner was seeing.
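A hypothetical shape for that structured context, with illustrative field names:

```typescript
// Hypothetical lab-state payload handed to the AI guide.
interface LabState {
  selectedComponents: string[];    // e.g. ["APP", "CACHE"]
  activeCourse: string | null;     // e.g. "Scale Service"
  lastFusionResult: string | null; // pattern id, or null if the combo failed
  shelfItems: string[];            // components currently visible on the shelf
  lastError: string | null;        // most recent invalid-combination message
}

// Serialized into the system prompt so answers stay grounded in what
// the learner is actually doing.
function buildSystemPrompt(state: LabState): string {
  return [
    "You are the Reptile Systems lab guide.",
    `Selected: ${state.selectedComponents.join(", ") || "none"}.`,
    `Course: ${state.activeCourse ?? "free play"}.`,
    state.lastError ? `Last failure: ${state.lastError}` : "",
  ].filter(Boolean).join("\n");
}
```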
4. System integration
We separated the project into clear layers: hand tracking, 3D visualization, course logic, component composition, AI guidance, and persistence. This helped us connect many moving parts without everything becoming tangled. React handled UI state, Three.js powered the visual environment, MediaPipe handled input, and our AI/agent layer provided explanations and hints on top of the core learning experience.
Accomplishments
We built a working educational visualization tool that lets users explore systems design through interaction instead of passive study.
We are proud that the component combinations are connected to real architecture ideas. Learners do not just unlock arbitrary items — they discover concepts like cached services, backend services, read paths, async workflows, scalable services, and content platforms.
We also built multiple learning modes, including guided modules and a Spotify architecture challenge, which make the product feel closer to a teaching tool than a simple demo.
The AI guide is another major accomplishment. It can explain what the learner built, reason from the current lab state, use a lookup tool for exact combinations, and provide conversational help while staying focused on systems design.
What we learned
We learned that educational software needs more than information. It needs interaction, feedback, and progression. Systems design becomes much easier to understand when learners can manipulate components and immediately see the consequences of their choices.
We also learned that visual metaphors are powerful, but they have to be designed carefully. If the metaphor becomes too playful, it can distract from the learning goal. Our best moments came from connecting visual interaction directly to real engineering concepts.
On the technical side, we learned a lot about real-time browser-based computer vision, 3D rendering, AI tool calling, and building contextual AI assistants that are grounded in application state.
What’s next
- Add more architecture challenges based on real products like Netflix, Uber, Discord, and Google Docs.
- Generate final architecture diagrams that learners can export and review.
- Add richer explanations for why a design works or fails.
- Build an instructor dashboard for classrooms, workshops, and interview-prep cohorts.
- Expand the component graph to include replication, sharding, rate limiting, observability, authentication, pub/sub, and regional deployment.
- Improve gesture calibration and accessibility so the tool works well for more users and environments.
- Persist user progress across accounts with Convex.
Built With
React, TypeScript, Vite, Three.js, React Three Fiber, MediaPipe Tasks Vision, Convex, Fetch.ai uAgents, Agentverse, ASI:One, Gemini, OpenRouter, ElevenLabs, Liveblocks, html2canvas, React Router, localStorage
Short Pitch
Systems design is one of the most intimidating topics in CS — traditionally taught through dense documentation and diagrams that give beginners no clear place to start. Reptile Systems makes it accessible by turning architecture into something you can see, touch, and experiment with. Learners combine infrastructure components, discover real design patterns, receive feedback, and get AI-powered explanations as they build — transforming abstract concepts into an interactive experience that meets learners where they are.
PRIZE TRACK NOTES
Light the Way (curriculum framing — pending modules): Reptile Systems directly addresses the gap in CS education around systems design. The platform provides structured progression from primitives to full architecture patterns, with guided modules and AI-powered explanations — making a topic that is typically inaccessible to self-learners genuinely approachable.
Fetch.ai Track 1 (deliverables locked, tool execution working): The AI learning guide is deployed as an Agentverse uAgent powered by ASI:One, with function-calling tools that let it look up exact component combinations and reason about the learner's current lab state. This makes the assistant grounded in the actual design the user is building rather than giving generic systems-design explanations. The Python-based uAgent runs independently of the frontend, meaning it can operate as a standalone agentic assistant outside the browser entirely. https://asi1.ai/chat/91006046-cda9-48bb-b634-fff410d7a108
https://asi1.ai/ai/agent1qgtnsn9vjug5l22nl8qc2cu8cz5z77ezscqnhw5ldaqj2u3yhrw3g5t55nk
ElevenLabs (mascot voice): When the AI guide explains a newly created architecture pattern — why a load balancer sits in front of an app server, what a message queue unlocks — ElevenLabs delivers that explanation as natural speech through the mascot. This was a deliberate choice: learners new to systems design benefit more from hearing a concept walked through conversationally than reading it off a screen.
MLH Gemma (vision context via Gemma 3 27B — genuinely strong now): Reptile Systems uses Gemma 3 27B via OpenRouter as its vision model. When a learner is stuck, html2canvas captures a screenshot of the current 3D lab state and passes it to Gemma, which summarizes what it sees and feeds that visual context to the AI guide. This is what makes the assistant feel genuinely aware of the learner's situation rather than reasoning from state variables alone.
Arista Networks — Connect the Dots: Reptile Systems connects learners to the infrastructure concepts they need, routing each user through a personalized discovery path based on what they've built. Planned multiplayer co-build sessions will let two learners collaborate in real time on the same architecture canvas.
Demo Notes
- Launch the studio.
- Allow camera permissions.
- Select infrastructure components from the dashboard or lab shelf.
- Combine components to discover architecture patterns.
- Use the AI guide to ask what a component means, why a design failed, or what to try next.
- Try the guided modules: Build Web App, Scale Service, and Design Spotify.