Inspiration

Social media is unpredictable. One post can build a brand, another can destroy it overnight. But more interestingly… people constantly wonder:

“What would happen if I said this?” “How would people react?” “Would this blow up… or backfire?”

At the same time, new research and projects in multi-agent social simulation show that AI can model complex human interactions and emergent behavior at scale. We wanted to take that idea out of research labs and turn it into something:

  • interactive
  • fast
  • slightly chaotic
  • and genuinely useful

So we built Echo, a way to simulate the internet before you post to it.

What it does

Echo is a pre-flight simulator for human reactions.

You paste in:

  • a tweet
  • an idea
  • or even a hypothetical scenario

And Echo generates a live simulation of how people would respond. Under the hood:

  • A swarm of AI agents (modeled after real audiences) reacts in real time
  • Replies emerge as a dynamic network
  • Sentiment evolves second-by-second
  • You get a “ratio risk” score before hitting publish
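
As a rough illustration of what a “ratio risk” score could look like, here is a minimal sketch assuming a hypothetical `SimulatedReaction` schema. The scoring rule (share of hostile replies and quotes versus likes) is invented for illustration, not Echo's actual formula:

```python
from dataclasses import dataclass

@dataclass
class SimulatedReaction:
    """One agent's reaction in the simulated thread (hypothetical schema)."""
    kind: str         # "like", "reply", or "quote"
    sentiment: float  # -1.0 (hostile) .. 1.0 (supportive)

def ratio_risk(reactions: list[SimulatedReaction]) -> float:
    """0.0-1.0 risk that the post 'gets ratioed': the share of hostile
    replies/quotes among hostile replies plus likes."""
    likes = sum(1 for r in reactions if r.kind == "like")
    hostile = sum(1 for r in reactions
                  if r.kind in ("reply", "quote") and r.sentiment < 0)
    total = likes + hostile
    return hostile / total if total else 0.0
```

With two likes and two hostile replies, the score is 0.5: replies are keeping pace with likes, which is the classic warning sign.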

But it’s not just for brands. You can also ask:

  • “What if the US invaded Canada?”
  • “What if a company released an AI too powerful to publish?”
  • “What if I tweeted this controversial take?”
  • “What if the iPhone 18 launched with the ability to be controlled by your mind through an Apple headband?”


Echo turns thought experiments into observable reactions.

How we built it

We built Echo around one core idea: simulate, don’t predict.

Frontend:

  • Real-time interface with a network visualization (D3-style graph)
  • Live “thread unfolding” experience

AI system:

  • Multi-agent architecture (~200 agents per run)
  • Each agent represents a persona derived from audience data
  • Agents interact with each other, not just the original post
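
One simple way to let agents react to each other rather than only the prompt is to sample earlier replies into each agent's context before it responds. The `Persona` fields and prompt layout below are hypothetical stand-ins, not Echo's actual ones:

```python
import random
from dataclasses import dataclass

@dataclass
class Persona:
    handle: str    # e.g. "@skeptic42"
    stance: float  # prior attitude toward the topic, -1.0 .. 1.0
    tone: str      # e.g. "snarky", "earnest", "contrarian"

def build_agent_context(post: str, thread: list[str], persona: Persona,
                        k: int = 2) -> str:
    """Assemble what one agent 'sees': the original post plus up to k
    sampled earlier replies, so agents respond to each other too."""
    seen = random.sample(thread, min(k, len(thread)))
    lines = [f"You are {persona.handle}, a {persona.tone} commenter.",
             f"Post: {post}"]
    lines += [f"Earlier reply: {r}" for r in seen]
    return "\n".join(lines)
```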

Data layer:

  • Audience seeding via CSV or social data
  • Persistent profiles (Firebase)

Simulation engine:

  • Runs in timed “rounds” (~60 seconds total)
  • Generates replies, sentiment shifts, and interaction edges
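
The round-based structure can be sketched roughly as below. Here `react` is a hypothetical callback standing in for the per-agent LLM call; the real engine also tracks sentiment shifts, which this sketch omits:

```python
def run_rounds(post: str, agents: list[str], n_rounds: int, react):
    """Drive the simulation as discrete rounds.

    `react(agent, thread)` returns (reply_text, index_of_message_replied_to)
    or None if the agent sits this round out. Returns the full thread plus
    the interaction edges that feed the network visualization.
    """
    thread = [post]                    # message 0 is the original post
    edges: list[tuple[int, int]] = []  # (replier_index, target_index)
    for _ in range(n_rounds):
        for agent in agents:
            result = react(agent, thread)
            if result is None:
                continue
            reply, target = result
            thread.append(reply)
            edges.append((len(thread) - 1, target))
    return thread, edges
```

Because each round passes the growing thread back into `react`, later agents can pile onto (or push back against) earlier replies, which is what makes the thread feel like it is unfolding live.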

The key decision: Make the simulation feel alive, not like a static output.

Challenges we ran into

Making agents feel human (not generic)

  • Early outputs sounded repetitive and robotic
  • We had to push for diversity in tone, opinion, and behavior
  • We also generate additional personas based on who would be impacted by the submitted scenario
  • Real internet = messy, contradictory, emotional

Balancing speed vs realism

  • Rich simulations take time
  • Hackathon constraint forced us to compress everything into ~60 seconds

Scope creep

  • This could easily become a full research project about large-scale simulation
  • We had to focus on a compelling MVP

Accomplishments that we're proud of

  • The simulation actually feels alive: watching replies appear feels like a real thread forming
  • The “ratio risk” concept is immediately understandable
  • It works for both serious use (brands) and fun exploration (what-if scenarios)
  • We built our own social media algorithm for the agents, mimicking those of X or Instagram
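
The spirit of that in-house feed algorithm can be sketched as an engagement-weighted ranking; the weights here (replies worth twice likes, linear age decay) are invented for illustration:

```python
def rank_feed(items: list[dict], now: float) -> list[dict]:
    """Toy X/Instagram-style feed: newer, higher-engagement posts
    surface first, deciding what each agent sees next."""
    def score(item: dict) -> float:
        age = now - item["posted_at"]
        engagement = item["likes"] + 2 * item["replies"]
        return engagement / (1 + age)  # decay engagement by age
    return sorted(items, key=score, reverse=True)
```

Even a toy ranker like this changes the simulation's dynamics: a reply that gains early engagement gets shown to more agents, which is how pile-ons and ratios emerge.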

Most importantly: People don’t just use Echo, they play with it.

What we learned

  • People are fascinated by seeing reactions before they happen
  • Multi-agent systems are way more powerful when agents interact with each other, not just the prompt
  • UX matters as much as AI, and the feeling of the simulation was everything

What's next for Echo - What would people think if...

What if Echo became:

  • A sandbox for public opinion
  • A way to test ideas, policies, announcements, narratives

What if:

  • founders used it before launches
  • journalists tested headlines
  • governments simulated public response
  • or anyone explored alternate realities

And the bigger question: What happens when you can simulate how society reacts… before society actually does?
