What Inspired Us

The idea for this project came from a simple question: Can AI agents truly negotiate, not just compare prices? Traditional e-commerce platforms match buyers with sellers, but real commerce involves strategic thinking, understanding counterpart motivations, and making complex trade-offs.

We wanted to push the boundaries of agentic AI by creating agents that:

  • Reason strategically about market conditions and their own constraints
  • Model their counterpart's behavior and infer hidden motivations
  • Make multi-factor decisions balancing profit, risk, and opportunity cost
  • Negotiate dynamically through multiple rounds with evolving strategies

What We Learned

LLM Reasoning and Chain-of-Thought Prompting

One of our biggest learnings was how to structure prompts for reliable reasoning. We implemented a 5-step thinking process:

  1. Situation Analysis: Understanding market conditions and constraints
  2. Strategic Evaluation: Weighing different options (ACCEPT/COUNTER/REJECT)
  3. Multi-Factor Decision Making: Balancing financial, psychological, and strategic factors
  4. Counterpart Modeling: Inferring the other party's motivations and limits
  5. Reasoning Synthesis: Integrating all factors into a final decision

We discovered that explicit structure and repeated reminders about key constraints (like "CURRENT BUYER OFFER" vs "LISTED PRICE") dramatically reduced hallucinations.
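The structured prompt could look something like the sketch below. The exact wording and the `build_prompt` helper are illustrative assumptions, not the project's actual prompt, but they show how the five steps and the "CURRENT BUYER OFFER" vs "LISTED PRICE" reminders fit together:

```python
# Hypothetical sketch of the structured reasoning prompt; section names
# mirror the 5-step process, exact wording is ours for illustration.
REASONING_PROMPT = """You are the SELLER in a price negotiation.

CURRENT BUYER OFFER: ${current_offer:.2f}   <-- respond to THIS price
LISTED PRICE: ${listed_price:.2f}           <-- context only, NOT the offer

Think through these steps before deciding:
1. SITUATION ANALYSIS: market conditions and your constraints
2. STRATEGIC EVALUATION: weigh ACCEPT / COUNTER / REJECT
3. MULTI-FACTOR DECISION: balance profit, risk, and opportunity cost
4. COUNTERPART MODELING: infer the buyer's motivation and limits
5. REASONING SYNTHESIS: integrate the above into one decision

End with exactly one line: DECISION|PRICE|MESSAGE
"""

def build_prompt(current_offer: float, listed_price: float) -> str:
    return REASONING_PROMPT.format(current_offer=current_offer,
                                   listed_price=listed_price)
```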

The Challenge of LLM Output Parsing

Early versions failed because LLMs format outputs inconsistently. A simple split("|") approach broke when outputs looked like COUNTER|PRICE: **1980.00**|MESSAGE:... or contained multiple price mentions. We solved this by:

  • Using regex patterns to find the last DECISION|PRICE match (the final decision)
  • Extracting prices with regex that handles various formats (dollar signs, asterisks, etc.)
  • Validating extracted prices against constraints (e.g., buyer can't counter above seller's offer)
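A minimal sketch of the last-match parsing idea (function name and regex are our simplification; the real parser has more fallbacks):

```python
import re

def parse_decision(raw: str):
    """Find the LAST DECISION|PRICE-style fragment (the final decision)
    and extract a price that may be wrapped in $, **, or commas."""
    # Matches e.g. "COUNTER|1800.00" or "COUNTER|PRICE: **1980.00**"
    pattern = re.compile(
        r"(ACCEPT|COUNTER|REJECT)\s*\|[^|\n]*?([\d,]+\.?\d*)",
        re.IGNORECASE)
    matches = pattern.findall(raw)
    if not matches:
        return None, None          # caller falls back to other methods
    decision, price_text = matches[-1]   # last occurrence wins
    return decision.upper(), float(price_text.replace(",", ""))
```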

Price Synchronization and Validation

A critical bug emerged: agents would reason about one price but offer another. We implemented a two-layer defense:

  1. Prompt-level: Explicitly highlight the current offer price and remind agents to reference it
  2. Code-level: After constraint validation, we replace price mentions in reasoning text to match the actual offer price

This ensured agents' reasoning always matched their actions—crucial for transparency in an AI negotiation system.
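The code-level half of that defense might be sketched as below. This version is deliberately crude, rewriting every dollar amount in the reasoning to the validated offer price; the real logic presumably rewrites only mismatched mentions:

```python
import re

def sync_reasoning_prices(reasoning: str, offer_price: float) -> str:
    """Post-validation pass: rewrite dollar amounts in the reasoning text
    so the explanation matches the offer actually sent.
    Simplified sketch; selective replacement is left out."""
    return re.sub(r"\$[\d,]+(?:\.\d+)?", f"${offer_price:.2f}", reasoning)
```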

How We Built It

Architecture

Backend (FastAPI): RESTful API managing marketplace state, products, agents, and negotiation sessions. We used Pydantic models for strict type validation, ensuring data integrity throughout the negotiation flow.
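The Pydantic layer could look like this sketch; the field names are illustrative assumptions, not the project's actual schema:

```python
from pydantic import BaseModel, Field

class Product(BaseModel):
    # Validation constraints reject malformed marketplace state early.
    name: str
    listed_price: float = Field(gt=0)
    min_selling_price: float = Field(gt=0)

class Offer(BaseModel):
    round: int = Field(ge=1)
    price: float = Field(gt=0)
    decision: str  # ACCEPT | COUNTER | REJECT
```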

Agents (LangChain + Groq): Each agent extends a BaseAgent class with:

  • Groq API integration for LLM reasoning (using llama-3.3-70b-versatile)
  • Strategy-based decision making (aggressive, moderate, cooperative)
  • Context-aware evaluation that tracks negotiation history
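A rough sketch of that hierarchy, with class and method names as our assumptions (the real `evaluate` calls the Groq LLM through LangChain; here the decision logic is stubbed for illustration):

```python
class BaseAgent:
    def __init__(self, name: str, strategy: str = "moderate"):
        assert strategy in ("aggressive", "moderate", "cooperative")
        self.name = name
        self.strategy = strategy
        self.history: list[dict] = []   # context: prior rounds

    def evaluate(self, offer: float) -> dict:
        raise NotImplementedError

class SellerAgent(BaseAgent):
    def __init__(self, name: str, min_selling_price: float, **kw):
        super().__init__(name, **kw)
        self.min_selling_price = min_selling_price

    def evaluate(self, offer: float) -> dict:
        # Stubbed decision: accept at or above the floor, else counter at it.
        decision = "ACCEPT" if offer >= self.min_selling_price else "COUNTER"
        result = {"decision": decision,
                  "price": max(offer, self.min_selling_price)}
        self.history.append(result)     # tracked negotiation history
        return result
```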

Frontend (Streamlit): Interactive dashboard showing:

  • Real-time negotiation chat with distinct styling for buyer/seller messages
  • Expandable reasoning sections with formatted markdown
  • Statistics and history visualization

Negotiation Protocol: A round-based system where:

  • Buyer makes initial offer
  • Seller evaluates and counters/rejects/accepts
  • Process repeats until agreement, rejection, or max rounds
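The round loop above can be sketched as follows, assuming agents expose an `evaluate` method returning a decision dict (an assumption of ours, not the project's exact interface):

```python
def run_negotiation(buyer, seller, initial_offer: float, max_rounds: int = 6):
    """Round-based protocol sketch: buyer opens, seller responds, and the
    exchange repeats until agreement, rejection, or the round cap."""
    offer = initial_offer
    for round_no in range(1, max_rounds + 1):
        seller_move = seller.evaluate(offer)
        if seller_move["decision"] in ("ACCEPT", "REJECT"):
            return seller_move["decision"], seller_move.get("price", offer), round_no
        buyer_move = buyer.evaluate(seller_move["price"])
        if buyer_move["decision"] in ("ACCEPT", "REJECT"):
            return buyer_move["decision"], buyer_move.get("price"), round_no
        offer = buyer_move["price"]     # buyer's counter opens next round
    return "NO_DEAL", None, max_rounds
```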

Key Technical Decisions

  1. Immediate Text Cleaning: We clean LLM outputs right after generation to fix character-by-character spacing issues (e.g., "i s s i g n i f i c a n t l y" → "is significantly")

  2. Constraint Enforcement: Sellers never accept below min_selling_price, even if the LLM suggests it—critical for realistic economic behavior

  3. Reasoning Storage: We store the agent's full reasoning chain (before decision markers) for transparency, allowing users to understand why an agent made a decision
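The constraint-enforcement guardrail (decision 2 above) might reduce to a check like this sketch, with the function name as our invention:

```python
def enforce_seller_constraints(decision: str, price: float,
                               min_selling_price: float,
                               current_offer: float):
    """Code-level guardrail: never let the seller accept or counter below
    its floor, regardless of what the LLM suggested."""
    if decision == "ACCEPT" and current_offer < min_selling_price:
        # Override the LLM: accepting below the floor is economically invalid.
        decision, price = "COUNTER", min_selling_price
    if decision == "COUNTER" and price < min_selling_price:
        price = min_selling_price
    return decision, price
```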

Challenges We Faced

1. LLM Hallucinations and Price References

Problem: Agents would mention listed prices or minimum selling prices in their reasoning instead of the actual current offer price.

Solution:

  • Enhanced prompts to explicitly highlight "CURRENT BUYER OFFER" vs "LISTED PRICE"
  • Post-processing to synchronize price mentions in reasoning with actual offer prices
  • Multiple validation layers to catch mismatches

2. Parsing Unpredictable LLM Outputs

Problem: LLMs format outputs differently each time: COUNTER|1800.00, DECISION: COUNTER|PRICE: **1800.00**, or even nested formats.

Solution:

  • Regex-based pattern matching that finds the last occurrence (final decision)
  • Fallback parsing methods for edge cases
  • Comprehensive validation after extraction

3. Character-by-Character Spacing in LLM Outputs

Problem: Groq's LLM sometimes outputs text with spaces between characters: "t h e p r i c e" instead of "the price".

Solution:

  • Created a robust text_cleaner.py utility with multiple cleaning passes
  • Aggressive pattern matching to combine single characters
  • Special handling for numbers followed by spaced text
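One cleaning pass might look like the sketch below: join runs of three or more isolated single characters. This is a simplified stand-in for the real text_cleaner.py; genuine one-letter words ("a", "I") inside a spaced run need the extra handling the full utility provides:

```python
import re

def fix_spaced_text(text: str) -> str:
    """Collapse character-by-character spacing: a run of 3+ single
    word-characters separated by spaces is joined into one token."""
    pattern = re.compile(r"(?:\b\w\b ){2,}\b\w\b")
    return pattern.sub(lambda m: m.group(0).replace(" ", ""), text)
```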

4. Rate Limiting and Token Optimization

Problem: Verbose reasoning prompts hit Groq's rate limits (100k tokens/day).

Solution:

  • Condensed prompts while retaining reasoning quality
  • Reduced max negotiation rounds from 10 to 6
  • Optimized prompt structure using shorthand notation
  • Reduced token usage by 50-70% while maintaining reasoning depth

5. Realistic Negotiation Behavior

Problem: Early versions had agents accepting any offer or making unrealistic counters.

Solution:

  • Enforced minimum selling prices and maximum willingness to pay
  • Strategy-based counter-offer calculations
  • Round-awareness (agents become more flexible in later rounds)
  • Convergence detection (agents recognize when deadlock is likely)
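A strategy- and round-aware counter-offer could be computed along these lines; the concession weights and formula are illustrative assumptions, not the project's exact numbers:

```python
def counter_price(listed: float, floor: float, offer: float,
                  round_no: int, max_rounds: int = 6,
                  strategy: str = "moderate") -> float:
    """Concede a growing share of the gap between the listed price and
    the buyer's offer as rounds run out, but never cross the floor."""
    aggression = {"aggressive": 0.2, "moderate": 0.4, "cooperative": 0.6}[strategy]
    progress = round_no / max_rounds            # later rounds -> more flexible
    concession = aggression * progress
    target = listed - concession * (listed - offer)
    return max(target, floor)                   # enforce min_selling_price
```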

Impact and Future Directions

This project demonstrates that agentic AI can handle complex economic interactions beyond simple pattern matching. Our agents:

  • Reason about multi-dimensional trade-offs
  • Adapt strategies based on context
  • Model counterpart behavior
  • Make transparent, explainable decisions

Future Enhancements:

  • Multi-product negotiations (buyer negotiating for bundles)
  • Learning from negotiation history (agents improving over time)
  • Integration with real e-commerce platforms
  • Advanced game theory strategies
  • Multi-party negotiations (multiple buyers/sellers)

Conclusion

Building this marketplace simulation taught us that agentic AI isn't just about making decisions—it's about reasoning transparently, handling uncertainty, and interacting strategically. The challenges of parsing, validation, and prompt engineering are real, but solvable. Most importantly, we learned that AI agents can negotiate with sophistication when given proper structure, constraints, and reasoning frameworks.

This project pushes the boundaries of what's possible with agentic AI in economic contexts, and we're excited to see where autonomous negotiation takes us next.

Built With

  • docker
  • fastapi
  • groq-api-(llama-3.3-70b-versatile)
  • langchain
  • langgraph
  • pandas
  • pydantic
  • python
  • streamlit