Inspiration

After loads of brainstorming, we kept coming back to computers, and to how all the idle machines around us could share the same compute space.

What it does

DistLM runs hundreds of AI agents across a network of everyday computers. You submit a prompt, choose your agent count, and the system distributes them across connected machines. Each machine runs our custom LLM locally. Agents process independently, then outputs get cross-validated across the network so hallucinated claims get caught and corrected automatically. Everything stays local so no data ever hits a central server. Contributors earn money for their idle compute while job submitters pay pennies.
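The cross-validation step could look something like this minimal sketch. It assumes simple majority voting over exact-match outputs; the function name and quorum parameter are hypothetical, and a real version would need semantic comparison of claims rather than string equality:

```python
from collections import Counter

def cross_validate(agent_outputs: list[str], quorum: float = 0.5) -> tuple[str, list[str]]:
    """Accept the answer most agents agree on; flag dissenting outputs as suspect.

    A toy stand-in for network-wide cross-validation: each agent's output is
    compared against its peers, and claims that lose the vote get flagged.
    """
    counts = Counter(agent_outputs)
    best, votes = counts.most_common(1)[0]
    if votes / len(agent_outputs) < quorum:
        raise ValueError("no answer reached quorum; re-run the job")
    rejected = [o for o in agent_outputs if o != best]
    return best, rejected
```

So `cross_validate(["Paris", "Paris", "Lyon"])` accepts `"Paris"` and flags `"Lyon"` as a likely hallucination to be corrected.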

How we built it

We built our own LLM from the ground up on AMD hardware with internal confidence signaling so the model flags its own uncertain outputs. The backend coordinator handles job routing, agent assignment, and payment distribution. Each node runs a lightweight Python client that executes agents locally. The frontend has a simulation interface, live network view, real-time agent feed, and results page. Nodes communicate over WebSockets.
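The coordinator's agent-assignment step might be sketched like this. Everything here is an assumption for illustration: a greedy least-loaded strategy, a `Node` record with a capacity field, and in-memory state instead of the real WebSocket-connected nodes:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    capacity: int                 # how many agents this machine can run at once
    assigned: list[int] = field(default_factory=list)

def assign_agents(num_agents: int, nodes: list[Node]) -> dict[str, list[int]]:
    """Spread agent IDs across nodes, always placing on the least-loaded machine."""
    if num_agents > sum(n.capacity for n in nodes):
        raise ValueError("not enough capacity on the network")
    pool = sorted(nodes, key=lambda n: n.capacity, reverse=True)
    for agent_id in range(num_agents):
        # pick the node with the most free slots (greedy least-loaded)
        node = max(pool, key=lambda n: n.capacity - len(n.assigned))
        node.assigned.append(agent_id)
    return {n.node_id: n.assigned for n in nodes}
```

A greedy least-loaded policy is one simple way to balance machines with different specs; smarter routing would also weigh per-node latency and GPU throughput.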

Challenges we ran into

Cross-validation without killing performance was tough. Balancing load across machines with different specs was a constant headache. Training our LLM to carry confidence signals without bloating the architecture took heavy trial and error. Building fair payment splitting based on actual compute contribution was tricky. Keeping latency low enough for a live demo while syncing agent outputs across rounds was a battle the whole way through.
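The payment-splitting problem above can be sketched as proportional division with largest-remainder rounding, so the cents always sum exactly to the job price. The function name and the "agent-seconds" contribution metric are hypothetical stand-ins for however compute contribution is actually measured:

```python
def split_payment(job_price_cents: int, contributions: dict[str, float]) -> dict[str, int]:
    """Split a job's payout in proportion to each node's measured compute.

    contributions maps node_id -> compute contributed (e.g. agent-seconds).
    Largest-remainder rounding guarantees the payouts sum to the exact price.
    """
    total = sum(contributions.values())
    raw = {n: job_price_cents * c / total for n, c in contributions.items()}
    payout = {n: int(v) for n, v in raw.items()}
    # hand out the leftover cents to the largest fractional remainders
    leftover = job_price_cents - sum(payout.values())
    for n in sorted(raw, key=lambda n: raw[n] - payout[n], reverse=True)[:leftover]:
        payout[n] += 1
    return payout
```

For example, a 100-cent job where node `a` contributed twice the compute of node `b` pays out 67 and 33 cents respectively, never losing a cent to rounding.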

Accomplishments that we're proud of

We built a working LLM from scratch on AMD hardware that actually flags low-confidence outputs before they spread through the network. We got a distributed system running across multiple machines with real-time coordination and a working payment layer. And we did all of it while keeping data completely private and local on each node.

What we learned

Building your own model is a completely different beast from fine-tuning someone else's. We learned a lot about confidence calibration, distributed architecture, and keeping agents in sync without creating bottlenecks. Privacy and performance turned out not to be tradeoffs since running locally actually simplified things. Payment distribution taught us a lot about incentive design in decentralized systems.

What's next for DistLM

Domain-specific agent templates for policy analysis, research synthesis, and education so people can launch simulations without custom prompts. A reputation system where reliable nodes get priority on higher-paying jobs. Scaling to thousands of nodes. Long term, making DistLM the go-to platform for large-scale AI that's trustworthy, private, and accessible to anyone with a laptop.

Built With

  • amd
  • huggingface
  • llama