Inspiration
three.ws was inspired by the idea that AI should not stay trapped inside chatboxes forever. Most AI products today are still text-based, but the internet has always moved toward more visual and interactive experiences: from text to images, to video, and now toward immersive digital environments. We believe AI will follow the same path.
What it does
three.ws gives AI agents fully rendered 3D bodies that can be embedded across the web. Instead of only interacting with a text interface, users can create visual agents that speak, react, and respond in real time.
For this hackathon, we focused on making these agents useful in crypto-native environments. The agents can connect to live onchain data, track Pump.fun activity, monitor migrations and graduations, detect whale buys and sells, deliver voice alerts, and support real-time agent actions.
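One way the whale detection described above could work is a simple threshold classifier over incoming trades. This is a minimal sketch under assumed names: the event shape, field names, and the 50 SOL cutoff are all illustrative assumptions, not the actual three.ws implementation.

```javascript
// Hedged sketch: threshold-based whale trade detection.
// The trade shape and the SOL cutoff are illustrative assumptions.
const WHALE_THRESHOLD_SOL = 50; // assumed cutoff for a "whale" trade

function classifyTrade(trade) {
  // trade: { side: "buy" | "sell", solAmount: number, mint: string }
  if (trade.solAmount < WHALE_THRESHOLD_SOL) return null; // below threshold: ignore
  return {
    kind: trade.side === "buy" ? "whale-buy" : "whale-sell",
    mint: trade.mint,
    solAmount: trade.solAmount,
  };
}
```

A `null` result lets the caller drop small trades cheaply before any rendering or voice work happens.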
How we built it
We built three.ws as a browser-based 3D AI agent framework using GLB/glTF avatars, web-based rendering, AI model integrations, real-time event systems, and Solana infrastructure.
We use Helius for Solana RPC and onchain data, along with Pump.fun tracking systems for live token activity. We also use Blender for 3D asset workflows, Claude Code for development support, and standard frontend, backend, and API tools to connect the full experience together.
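Tying live token activity to agent behavior can be pictured as a small event router. The sketch below assumes a simplified event shape; real Helius webhook and Pump.fun payloads are richer and differ in structure.

```javascript
// Hedged sketch of routing live token events to agent reactions.
// Event names and payload fields are simplified assumptions, not the
// actual Helius or Pump.fun data formats.
function routeTokenEvent(event) {
  switch (event.type) {
    case "migration":
      // Token is migrating off the bonding curve.
      return { action: "voice-alert", message: `${event.symbol} is migrating` };
    case "graduation":
      // Token has graduated to a full market.
      return { action: "voice-alert", message: `${event.symbol} graduated` };
    default:
      // Unknown events are logged rather than spoken.
      return { action: "log", message: `unhandled event: ${event.type}` };
  }
}
```

Keeping the routing pure like this makes it easy to test without any RPC connection.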
Challenges we ran into
The biggest challenge was combining multiple technical layers into one smooth product. 3D rendering, AI behavior, voice alerts, live onchain data, wallet-related functionality, and real-time actions all have different requirements.
Making these systems work together inside a browser-based experience required a lot of iteration across performance, latency, data handling, and user experience.
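One concrete latency problem in this mix is that fast onchain bursts can queue dozens of overlapping voice alerts. A common mitigation, sketched here under assumed names (this is not the actual three.ws code), is to coalesce a burst into a single summary utterance.

```javascript
// Hedged sketch: coalesce bursts of onchain events into one voice alert
// so the agent does not queue overlapping utterances. API shape is
// illustrative only.
function createAlertCoalescer(speak) {
  const pending = [];
  return {
    push(event) {
      pending.push(event);
    },
    // In a real system flush() would run on a short timer.
    flush() {
      if (pending.length === 0) return;
      const summary =
        pending.length === 1
          ? pending[0].message
          : `${pending.length} new events, latest: ${pending[pending.length - 1].message}`;
      pending.length = 0; // clear the buffer before speaking
      speak(summary);
    },
  };
}
```

The trade-off is a small, bounded delay in exchange for alerts the user can actually follow.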
Accomplishments that we're proud of
We are proud of building a working framework that moves AI agents beyond static chat interfaces. The project already supports visual 3D agent experiences, real-time alerts, Pump.fun tracking, whale monitoring, Solana data integrations, and browser-based embedding.
We are also proud of building a product that feels different from a normal dashboard. The goal is not just to show data, but to make AI agents feel more present, useful, and interactive.
What we learned
We learned that building useful AI agents is not just about the model. The interface matters. Timing matters. How information is delivered matters.
A visual agent that can speak, react, and guide a user feels very different from another chatbot or analytics page. We also learned how important real-time infrastructure is when working with fast-moving onchain environments like Solana.
What's next for three.ws
Next, we plan to keep improving the agent experience by expanding transaction capabilities, wallet-connected actions, onchain tracking, and real-time alert systems.
We also want to make the framework easier for developers, creators, and crypto communities to use through API access, embeddable agents, and more customization tools.
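As a rough picture of what "embeddable agents" could look like, here is an entirely hypothetical embed snippet. The script URL and attribute names are invented for illustration; the eventual three.ws embed API may look completely different.

```html
<!-- Hypothetical embed shape, for illustration only; not a real API. -->
<script src="https://three.ws/embed.js"
        data-agent-id="YOUR_AGENT_ID"
        data-voice="on"></script>
```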
The long-term goal is to make three.ws the visual layer for AI agents across the web and onchain ecosystems.
Built With
- ai-model-integrations
- blender
- claude-code
- glb/gltf
- helius-rpc
- javascript
- node.js
- pump.fun-apis/data
- react
- real-time-event-systems
- solana
- three.js
- typescript
- wallet-integrations
- webgl

