Inspiration

This project was heavily inspired by one of my favorite YouTubers, PewDiePie! In one of his videos, he experimented with AI personas placed into a shared fictional world and allowed them to interact freely (video link).

That experiment sparked my curiosity about whether AI is capable of more humanoid communication - less robotic and less purely instructive. I challenged myself to recreate a similar setup, not as a one-off demo but as a system, to explore how modern AI models behave when given persistent memory, distinct personalities, and a shared environment. I wanted to see whether newer models, combined with proper orchestration and infrastructure, could produce richer dynamics, more consistent roles, and genuinely emergent interactions rather than isolated responses.

And at the end of the day, it's just fun watching AIs cause chaos while trying to run their little kingdom!


What it does

The AI Council is a real-time, multi-agent simulation of a medieval fantasy kingdom governed by 11 autonomous AI council members. Each agent represents a different domain of power - military, religion, espionage, diplomacy, infrastructure, and more - and holds its own worldview, priorities, and biases.
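As a rough sketch of how a persona like this might be represented, here is a minimal data model that turns a councilor's domain, worldview, and biases into a system prompt. The `CouncilPersona` class, its fields, and the example councilor are all illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class CouncilPersona:
    # Hypothetical schema; the real project may structure personas differently.
    name: str
    domain: str            # e.g. "military", "espionage", "diplomacy"
    worldview: str         # one-line stance the agent argues from
    biases: list = field(default_factory=list)

    def system_prompt(self) -> str:
        # Personality is injected as the agent's system prompt.
        bias_text = "; ".join(self.biases) or "none declared"
        return (
            f"You are {self.name}, councilor of {self.domain}. "
            f"Worldview: {self.worldview} Biases: {bias_text}. "
            "Argue from this perspective in every reply."
        )

spymaster = CouncilPersona(
    name="Lady Vex",
    domain="espionage",
    worldview="Information is the only durable currency.",
    biases=["distrusts the military", "favors covert action"],
)
print(spymaster.system_prompt())
```

Keeping the persona as structured data, rather than a hand-written prompt string, makes it easy to tweak one councilor's biases without touching the others.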

When a world event is introduced, the council members autonomously deliberate, debate, and challenge one another. Their discussions are persistent, streamed live, and ultimately synthesized into a final decision by a Sovereign agent (well, you could say he tried to dip out far more often than expected).

The system supports both:

  • Autonomous Mode — agents continue discussion on their own
  • Manual / Intervention Mode — a human can pause or steer the conversation
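The two modes can be sketched as a small control loop that either advances agents on a fixed rotation or accepts a human message first. The `Mode` enum and `Council` class below are illustrative stand-ins, not the project's real orchestrator:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    MANUAL = auto()

class Council:
    """Toy control loop: illustrative only, not the actual implementation."""
    def __init__(self, agents):
        self.agents = agents
        self.mode = Mode.AUTONOMOUS
        self.transcript = []
        self._turn = 0

    def step(self, human_message=None):
        if self.mode is Mode.MANUAL and human_message is not None:
            # Human intervention steers the debate before agents continue.
            self.transcript.append(("Human", human_message))
            return
        speaker = self.agents[self._turn % len(self.agents)]
        self._turn += 1
        # In the real system this would call the agent's LLM; stubbed here.
        self.transcript.append((speaker, f"{speaker} weighs in on the event."))

council = Council(["General", "Spymaster", "High Priest"])
council.step()                       # autonomous turn
council.mode = Mode.MANUAL
council.step("Focus on the famine")  # human steers the discussion
```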

How I Built It

The project is built as an event-driven, multi-agent system:

  • Solace Agent Mesh for agent orchestration and inter-agent communication
  • AWS DynamoDB for persistent conversation storage
  • AWS Lambda + API Gateway (WebSocket) for real-time updates
  • React + TypeScript frontend to monitor live council discussions
  • Large Language Models powering agent reasoning and personality

At a conceptual level, the system follows this pipeline:

{World Event} -> {Agent Deliberation} -> {Memory Persistence} -> {Real-Time Broadcast}

Each agent reads shared historical context, generates responses according to its personality and role, and writes its messages back to persistent storage. DynamoDB streams then propagate updates to connected WebSocket clients, allowing observers to watch the council debate unfold in real time.
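The Memory Persistence → Real-Time Broadcast hop can be sketched as a Lambda handler attached to the DynamoDB stream. The record shape below follows DynamoDB's standard stream event format, but the table attributes (`agent`, `text`) and the handler's return shape are assumptions for illustration, not the project's actual code:

```python
def record_to_payload(record: dict):
    """Convert one DynamoDB stream record into a JSON-serializable
    message for WebSocket clients. Only INSERTs are broadcast."""
    if record.get("eventName") != "INSERT":
        return None
    image = record["dynamodb"]["NewImage"]
    # DynamoDB streams use typed attributes: {"S": "..."} for strings.
    return {
        "agent": image["agent"]["S"],
        "text": image["text"]["S"],
    }

def handler(event, context):
    # Lambda entry point (sketch). In AWS, each payload would be pushed to
    # every open connection via the API Gateway Management API, e.g.
    #   boto3.client("apigatewaymanagementapi", endpoint_url=...)
    #        .post_to_connection(ConnectionId=cid, Data=json.dumps(p))
    payloads = [p for r in event["Records"] if (p := record_to_payload(r))]
    return {"broadcast": payloads}
```

Filtering to `INSERT` events keeps the broadcast path idempotent: replays of `MODIFY` records from the stream never produce duplicate council messages on screen.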


Challenges I ran into

Like any program with probabilistic output, one of the biggest challenges was ensuring reliable agent behavior under autonomy. Agents would sometimes fail to write messages consistently, respond out of order, or behave differently after subtle prompt changes. This forced me to prioritize orchestration guarantees, tool reliability, and persistence before turning to agent creativity via prompting techniques.
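One pattern that helps frame the ordering problem is a fixed-rotation round with per-agent retries: the next agent never speaks until the current one has succeeded or been skipped. This is a simplified sketch with hypothetical names and retry limits, not the actual orchestration code:

```python
import time

def run_round(agents, speak, max_retries=2, backoff=0.0):
    """Run one deliberation round in a fixed order, retrying flaky agents.

    `speak(agent)` returns the agent's message or raises on failure.
    Ordering is preserved: agent i+1 never speaks before agent i has
    succeeded or been skipped. Simplified sketch, not the real system.
    """
    messages = []
    for agent in agents:
        for attempt in range(max_retries + 1):
            try:
                messages.append((agent, speak(agent)))
                break
            except Exception:
                if attempt == max_retries:
                    # Skip the agent after exhausting retries so the
                    # round still completes deterministically.
                    messages.append((agent, None))
                else:
                    time.sleep(backoff)
    return messages
```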

Another challenge was state coordination across systems. Debugging interactions between Solace Agent Mesh, AWS infrastructure, WebSocket clients, and local development environments introduced significant complexity - especially under hackathon time constraints.

Finally, tuning agent personalities, and especially enforcing them consistently in responses, remains a major bottleneck. Even in the project's current state, this is still one of the biggest challenges to the overall experience.


Accomplishments that I'm proud of

  • Built a fully autonomous multi-agent system with 11 distinct AI personas in just ~36 hours.
  • Explored Solace Agent Mesh and became familiar with modern agent-orchestration frameworks.
  • Designed a real-time, event-driven architecture with live WebSocket updates.
  • Delivered an end-to-end system spanning agents, cloud infrastructure, and UI.

What I learned

This project significantly changed how I think about multi-agent AI systems. In particular, I learned that:

  • Agent memory is just as important as model intelligence
  • Clear orchestration rules matter more than clever prompting
  • Emergent behavior comes from constraints, not freedom

One key takeaway can be summarized as:

{System Behavior} = {Incentives} + {Memory} + {Interaction Rules}

Building this system under time pressure reinforced the importance of designing architectures that fail visibly, recover gracefully, and remain observable.


What's next for The AI Council

With more time, I would:

  • Introduce long-term world state (economy, population, resources)
  • Model alliances, rivalries, and influence between council members
  • Improve observability into agent reasoning and decision impact
  • Support multiple simultaneous kingdoms or councils

Ultimately, this project could evolve into a sandbox for exploring AI governance, coordination failures, and institutional design.
