Project Story: Arbitrage: Multi-Agent Marketplace Simulator
Inspiration
The project began with a simple question: what happens when negotiation is treated as a conversational process rather than a numerical optimization problem?
Market simulations typically rely on scoring functions, game-theoretic formulas, or engineered heuristics. Real human bargaining rarely works that way. Sellers hide motives, buyers infer intentions, and decisions emerge from dialogue, not equations.
This gap—between real negotiation and algorithmic negotiation—inspired the creation of Arbitrage, a system where autonomous agents engage in realistic, chat-based bargaining driven entirely by LLM reasoning.
What I Learned
I learned how differently LLMs behave when:
- No explicit scoring or numeric “decision engines” are provided
- Opponent models are opaque, forcing inference from conversation
- Each agent has asymmetric knowledge, requiring careful prompt design
- State must persist, meaning the database becomes the backbone of agent context
- Local vs cloud inference leads to meaningful architectural trade-offs
The project exposed how crucial well-designed visibility models, routing logic, and structured prompts are to simulating believable multi-agent interactions.
How I Built It
Architecture Overview
- A Session stores buyer configuration and seller profiles.
- Each NegotiationRun focuses on one item with one buyer and multiple sellers.
- An Orchestrator routes messages to the correct agents, enforcing visibility rules.
- A Database records all configuration, negotiation messages, and outcomes.
- Agents run in one of two inference modes:
  - On-device mode: LM Studio (completely local)
  - Cloud mode: OpenRouter (for more advanced reasoning models)
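The data model above can be sketched with plain dataclasses; the class and field names mirror the entities described here (Session, NegotiationRun, seller profiles) but are illustrative, not the project's actual SQLAlchemy schema.

```python
from dataclasses import dataclass, field

@dataclass
class SellerProfile:
    name: str
    # Private economics: never shown to the buyer directly, they only
    # shape the seller agent's own prompts.
    cost: float
    price_floor: float

@dataclass
class Session:
    # A session holds the buyer configuration and seller profiles,
    # and can spawn multiple negotiation runs.
    buyer_budget: float
    sellers: list[SellerProfile] = field(default_factory=list)

@dataclass
class NegotiationRun:
    # One item, one buyer, multiple sellers; messages accumulate here.
    session: Session
    item: str
    transcript: list[dict] = field(default_factory=list)
```

In the real system these records live in PostgreSQL so that agent context survives across runs; the sketch only shows the shape of the relationships.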
Key Design Principles
Opaque Opponent Models
Sellers’ costs, priorities, and price floors are hidden unless revealed in conversation.
LLM-Driven Decision Making
No heuristics. No f(price, behavior) scoring function.
The buyer simply reasons qualitatively: Decision = LLM(conversation, constraints).
Fine-grained Visibility
Sellers see only buyer messages directed to them (or broadcasts).
Buyer sees only explicit seller messages.
Persistent State
Sessions can spawn multiple negotiation experiments without re-entering configs.
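The visibility rules above can be expressed as one filter applied when building each agent's context. This is a minimal sketch under the stated rules; the message fields (`sender`, `recipient`) and agent names are assumptions, not the orchestrator's actual API.

```python
def visible_to(message: dict, agent: str) -> bool:
    """Decide whether `agent` may see `message` when its context is built."""
    sender, recipient = message["sender"], message["recipient"]
    if sender == agent:
        return True  # agents keep their own messages in context
    if agent == "buyer":
        return recipient == "buyer"  # buyer sees only explicit seller replies
    # A seller sees only buyer messages aimed at it, or broadcasts to all.
    return sender == "buyer" and recipient in (agent, "all")

history = [
    {"sender": "buyer",    "recipient": "all",      "text": "Looking for a used GPU."},
    {"sender": "seller_a", "recipient": "buyer",    "text": "I have one for $300."},
    {"sender": "buyer",    "recipient": "seller_b", "text": "Can you beat $300?"},
]

seller_a_view = [m for m in history if visible_to(m, "seller_a")]
```

Here `seller_a_view` contains the broadcast and seller_a's own reply, but not the private message to seller_b, which is the isolation the design requires.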
Challenges
1. Getting Opaque Behavior Right
Ensuring sellers don't reveal internal numbers unless prompted—while still staying consistent across messages—required careful prompt engineering.
2. Avoiding Hidden Numerical Optimization
LLMs instinctively “invent scoring” when asked to decide. Crafting instructions that enforce pure reasoning without pseudo-formulas was nontrivial.
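One way to push back against invented scoring is to forbid it explicitly in the system prompt. The fragment below is a hypothetical illustration of that style of instruction, not the project's actual prompt text.

```python
# Hypothetical system-prompt fragment for the buyer agent. The wording is
# illustrative; the real prompts in Arbitrage may differ.
BUYER_DECISION_RULES = """\
You are negotiating to buy one item. Decide by reasoning in plain language.
Rules:
- Do NOT assign numeric scores, weights, or utilities to offers.
- Do NOT construct formulas or pseudo-math to compare sellers.
- Compare offers qualitatively: price versus budget, tone, reliability signals.
- Justify your decision in one short paragraph before stating it.
"""
```

Negative instructions like these tend to work better when paired with a positive alternative ("compare qualitatively"), since models otherwise fall back on the scoring habit.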
3. Multi-Agent Orchestration
Routing messages, isolating visibility, and synchronizing response cycles proved complex, especially when simulating concurrent seller responses.
4. Balancing Local vs Cloud Inference
Local models offer privacy and control, while cloud models offer richer reasoning. Supporting both cleanly required abstracting the inference layer.
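Because both LM Studio's local server and OpenRouter expose OpenAI-compatible chat endpoints, the inference layer can reduce to an endpoint/model/key triple. A minimal sketch of that abstraction follows; the model names and the `local`/`cloud` mode labels are assumptions, though the base URLs are the documented defaults for each service.

```python
from dataclasses import dataclass

@dataclass
class InferenceConfig:
    base_url: str
    model: str
    api_key: str

def make_config(mode: str) -> InferenceConfig:
    """Pick a backend; both speak the OpenAI-compatible chat API, so
    switching modes only changes the endpoint, model, and credentials."""
    if mode == "local":
        # LM Studio's default local server; no real key is required.
        return InferenceConfig("http://localhost:1234/v1", "local-model", "lm-studio")
    if mode == "cloud":
        # OpenRouter endpoint; model name here is a placeholder.
        return InferenceConfig("https://openrouter.ai/api/v1", "openrouter/auto",
                               "<OPENROUTER_API_KEY>")
    raise ValueError(f"unknown inference mode: {mode}")
```

Agent code then talks to a single chat-completion client constructed from this config, and never branches on where the model actually runs.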
Conclusion
Arbitrage became more than a negotiation simulation: it is a sandbox for emergent agent behavior where decisions arise through language alone. Building it clarified how LLMs reason, negotiate, and adapt when numeric shortcuts are removed, and how architecture shapes agent believability.
The project demonstrates that markets—and the conversations driving them—can be simulated with natural language as the primary engine.
Built With
- docker
- fastapi
- git
- lm-studio
- openrouter
- postgresql
- python
- react
- sqlalchemy
- typescript
- websockets