Inspiration
Why does every AI query funnel through one company's servers? One endpoint means one point of failure, one perspective, one entity capturing all the value. Meanwhile, millions of GPUs capable of running local models sit idle overnight. We started with a wild idea: what if LLMs could debate each other? Ensemble methods dominate classical ML. Mixture-of-experts powers modern AI. Yet we still ask one model, one time, and hope for the best. Chorus is a marketplace for distributed AI inference where disagreement is a feature, every contributor gets paid fairly, and no single company controls the answer.
What It Does
- Submit a prompt + bounty
- Peer-hosted agents join with unique personas (skeptic, optimist, analyst, contrarian)
- Multi-round debate where agents see what others said and refine their reasoning
- Fair settlement: 75% split equally, 25% based on impact
- Cryptographic receipt so every payout is signed and verifiable
- Final synthesis combining the best contributions into one coherent answer
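The settlement rule above (75% split equally, 25% weighted by impact) can be sketched in a few lines. The function name and the impact-score units are illustrative, not the actual Chorus API:

```python
def settle(bounty: float, impact: dict[str, float]) -> dict[str, float]:
    """Split a bounty: 75% equally among agents, 25% weighted by impact score.

    `impact` maps agent id -> non-negative impact score (illustrative units).
    """
    n = len(impact)
    equal_share = 0.75 * bounty / n
    total_impact = sum(impact.values())
    payouts = {}
    for agent, score in impact.items():
        # If no agent had measurable impact, fall back to an equal split of the 25%.
        if total_impact:
            bonus = 0.25 * bounty * (score / total_impact)
        else:
            bonus = 0.25 * bounty / n
        payouts[agent] = equal_share + bonus
    return payouts

payouts = settle(100.0, {"skeptic": 2.0, "optimist": 1.0, "analyst": 1.0})
# 25.0 equal share each, plus an impact bonus of 12.5 / 6.25 / 6.25
```

Payouts always sum to the bounty, so the rule is budget-balanced by construction.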
Watch it happen live: real-time streams, a 3D consensus graph showing clusters form, and per-round analytics.
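The signed, verifiable receipt mentioned above could look like this sketch, assuming the `cryptography` package for Ed25519 and illustrative receipt fields; the real payload schema and key management are up to the orchestrator:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Orchestrator key pair (in practice long-lived, not generated per run).
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

def _canonical(receipt: dict) -> bytes:
    # Canonical JSON so signer and verifier hash identical bytes.
    return json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()

def sign_receipt(receipt: dict) -> bytes:
    return signing_key.sign(_canonical(receipt))

def verify_receipt(receipt: dict, signature: bytes) -> bool:
    try:
        verify_key.verify(signature, _canonical(receipt))
        return True
    except InvalidSignature:
        return False

receipt = {"agent": "skeptic", "round": 3, "payout": 37.5}
sig = sign_receipt(receipt)
assert verify_receipt(receipt, sig)          # untampered receipt verifies
assert not verify_receipt({**receipt, "payout": 99.0}, sig)  # tampering fails
```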
How We Built It
| Layer | Tech |
| --- | --- |
| Frontend | Next.js, React 19, Three.js, Framer Motion |
| Orchestrator | Python, FastAPI, sentence-transformers, Ed25519 signing |
| Agents | Ollama (any OpenAI-compatible endpoint works) |

We embed all responses into vector space, build a nearest-neighbor graph, and inject consensus and dissent voices into each agent's next round. Agents literally learn from each other.
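The nearest-neighbor graph step can be sketched in pure Python. The toy 3-d vectors below stand in for real sentence-transformers embeddings, and `knn_graph` is an illustrative name, not the project's actual function:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def knn_graph(embeddings: dict, k: int = 2) -> dict:
    """Connect each agent's response to its k most similar peers."""
    graph = {}
    for name, vec in embeddings.items():
        sims = sorted(
            ((cosine(vec, other), peer)
             for peer, other in embeddings.items() if peer != name),
            reverse=True,
        )
        graph[name] = [peer for _, peer in sims[:k]]
    return graph

# Toy embeddings standing in for sentence-transformers output.
emb = {
    "skeptic":    [1.0, 0.1, 0.0],
    "optimist":   [0.9, 0.2, 0.1],  # close to skeptic -> consensus cluster
    "contrarian": [0.0, 0.0, 1.0],  # far from everyone -> dissent voice
}
print(knn_graph(emb, k=1))
```

Tight clusters in this graph are the "consensus voices" and isolated nodes the "dissent voices" fed back into the next debate round.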
Challenges
- Small models get noisy, so we built aggressive filtering to catch echo responses
- Incentive design took five iterations to balance rewarding agreement AND novel perspectives
- Cold starts hurt, so we added pre-warming and lightweight backends for demos
What We're Proud Of
- The debate loop actually works: Round 3 answers are measurably better than Round 1.
- Cryptographic receipts lay the foundation for trustless AI marketplaces.
- Zero vendor lock-in: Ollama, vLLM, llama.cpp, even GPT-4 behind a proxy all work.
- The 3D graph is beautiful: watch consensus form in real time.
- Novel watchdog heuristic: we detect when models parrot the question back using residual cosine similarity.
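One way to read the parrot watchdog above: flag any response whose embedding stays too close to the prompt's. This is our sketch of that idea with toy vectors and an illustrative threshold, not the project's exact "residual cosine similarity" formula:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_parrot(prompt_vec, response_vec, threshold: float = 0.95) -> bool:
    """Flag responses whose embedding barely moves away from the prompt's."""
    return cosine(prompt_vec, response_vec) >= threshold

prompt = [1.0, 0.0, 0.0]
echo   = [0.99, 0.01, 0.0]  # near-duplicate of the question: filtered out
answer = [0.3, 0.8, 0.5]    # adds new directions in embedding space: kept
assert is_parrot(prompt, echo)
assert not is_parrot(prompt, answer)
```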
What We Learned
- Embedding-space ops are powerful for orchestration, with no fine-tuning needed
- Incentive design is harder than systems design
- Small models need guardrails; the quality floor matters more than the ceiling
- One-click deploy is essential, not a nice-to-have
What's Next
- On-chain settlement with smart-contract escrow for trustless payouts
- Reputation system where high performers get priority and better shares
- Adaptive rounds that stop debating when consensus converges
- Privacy-preserving aggregation to compute consensus without seeing raw completions
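One plausible convergence check for the adaptive-rounds idea: stop debating once the mean pairwise cosine similarity of the round's responses crosses a threshold. A minimal sketch with toy embeddings (the threshold is an assumption, not a tuned value):

```python
import math
from itertools import combinations

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def consensus_reached(embeddings: list, threshold: float = 0.9) -> bool:
    """Stop the debate once responses have clustered tightly enough."""
    pairs = list(combinations(embeddings, 2))
    if not pairs:
        return True
    mean_sim = sum(cosine(a, b) for a, b in pairs) / len(pairs)
    return mean_sim >= threshold

round1 = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]   # agents still disagree
round3 = [[1.0, 0.1], [0.9, 0.2], [1.0, 0.15]]  # answers have clustered
assert not consensus_reached(round1)
assert consensus_reached(round3)
```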
Built With
- aiosqlite
- docker
- fastapi
- next.js
- ollama
- pydantic
- python
- railway
- react
- restapi
- tailwind
- typescript
- uvicorn
- vercel
- websockets