The core of the system is a large population of agentic personas that react to different stimuli and opinions.

These agents are instantiated from the ANES study (https://electionstudies.org/data-center/), which provides rich data including demographics, multiple-choice answers about respondents' opinions, and free-response text about candidates and positions. This data was programmatically extracted from CSVs and PDFs using Gemini and Retrieval Augmented Generation, yielding 3,457 rows in total.
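A minimal sketch of how extracted ANES rows could be loaded into persona records. The column names (`age`, `race`, `income_bracket`, etc.) and the `Persona`/`load_personas` names are illustrative assumptions, not the project's actual schema:

```python
import csv
from dataclasses import dataclass

@dataclass
class Persona:
    """One simulated respondent built from an extracted ANES row (hypothetical schema)."""
    age: int
    race: str
    income_bracket: str
    state: str
    party_id: str
    education: str
    candidate_opinion: str  # free-response text about candidates/positions

def load_personas(path: str) -> list[Persona]:
    """Read the extracted CSV into persona records, skipping incomplete rows."""
    personas = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if not row.get("age"):
                continue  # drop rows missing core demographics
            personas.append(Persona(
                age=int(row["age"]),
                race=row["race"],
                income_bracket=row["income_bracket"],
                state=row["state"],
                party_id=row["party_id"],
                education=row["education"],
                candidate_opinion=row["candidate_opinion"],
            ))
    return personas
```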

Tech Stack:

Python for backend

Supabase for database

FastAPI/REST for server routing

Gemini for the AI model

Redis for temporary state storage

1. Persona-Based AI Agents

Agent Creation: Each agent is instantiated with a rich persona embedding demographic traits (age, race, income, geography, political leaning, education level, etc.).

Behavior Modeling: Agents simulate opinions, emotional reactions, and behavior based on:

Persona traits

Historical datasets or polling data

Learned patterns (optionally refined through LLM fine-tuning or behavioral conditioning)

2. Corpus Ingestion + Chunking

User Input: The user first submits a corpus: a block of text such as a speech, debate transcript, or marketing material.

Chunking and Storage: The corpus is automatically chunked into meaningful sections and temporarily stored in Redis, enabling efficient multi-stage processing.
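The chunking and staging step could look like the sketch below. The sentence-based splitting heuristic and the `corpus:{id}:chunks` key layout are assumptions; `store_chunks` expects any `redis.Redis`-style client:

```python
import re

def chunk_corpus(text: str, max_words: int = 120) -> list[str]:
    """Split a corpus into chunks of whole sentences, capped at max_words each."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for s in sentences:
        words = len(s.split())
        if current and count + words > max_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(s)
        count += words
    if current:
        chunks.append(" ".join(current))
    return chunks

def store_chunks(r, corpus_id: str, chunks: list[str], ttl: int = 3600) -> None:
    """Stage chunks in Redis under a per-corpus list key with an expiry
    (r is a redis.Redis-style client; key layout is hypothetical)."""
    key = f"corpus:{corpus_id}:chunks"
    r.delete(key)
    r.rpush(key, *chunks)
    r.expire(key, ttl)
```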

3. Axis Generation

Insight Prompt: After submitting the corpus, the user is prompted to define what kind of reaction or insight they want to model (e.g., "Approval of this speech," "Who won the debate").

Axis Creation: An LLM generates a continuous axis based on the user’s prompt (e.g., from Strong Disapproval to Strong Approval, or Kamala Won to Trump Won).
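Axis creation might be sketched as below. The prompt wording and JSON shape are assumptions; `generate` is any text-in/text-out callable (e.g., a thin wrapper around a Gemini call), which keeps the function testable without a live model:

```python
import json

# Hypothetical prompt template; the real system's wording may differ.
AXIS_PROMPT = """Given the user's insight request below, define a continuous
reaction axis. Respond with JSON: {{"negative_pole": "...", "positive_pole": "..."}}.
Request: {request}"""

def generate_axis(request: str, generate) -> tuple[str, str]:
    """Ask the LLM for the two poles of the reaction axis."""
    raw = generate(AXIS_PROMPT.format(request=request))
    spec = json.loads(raw)
    return spec["negative_pole"], spec["positive_pole"]
```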

4. Chunk-by-Chunk Agent Simulation

Simulation Engine: Using the chunked corpus and the generated axis:

Agents simulate their level of reaction to each chunk, scoring on a -10 to +10 scale.

Reactions are influenced by persona traits and previous context, enabling temporal dynamics across chunks.
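One way to wire persona traits and prior context into each reaction, sketched below. The prompt format, the neutral fallback, and both function names are assumptions; clamping enforces the -10 to +10 scale the section describes:

```python
import re

def build_reaction_prompt(persona: dict, chunk: str, axis: tuple[str, str],
                          prior_scores: list[int]) -> str:
    """Compose the per-chunk prompt: persona traits, the axis, and the
    agent's score history so reactions stay temporally consistent."""
    neg, pos = axis
    traits = ", ".join(f"{k}: {v}" for k, v in persona.items())
    history = ", ".join(map(str, prior_scores)) or "none yet"
    return (
        f"You are a survey respondent with these traits: {traits}.\n"
        f"Axis: -10 = {neg}, +10 = {pos}. Your previous scores: {history}.\n"
        f"React to the following passage with a single integer from -10 to 10.\n"
        f"Passage: {chunk}"
    )

def parse_score(reply: str) -> int:
    """Extract the integer score from the model's reply, clamped to [-10, 10]."""
    match = re.search(r"-?\d+", reply)
    if match is None:
        return 0  # treat unparseable replies as neutral (an assumption)
    return max(-10, min(10, int(match.group())))
```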

5. Aggregation and Visualization

Result Aggregation: The system aggregates agent responses to provide:

Global sentiment/reaction over time

Segmented breakdowns (e.g., by generation, race, income)

Changes in opinion from one chunk to the next
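The three aggregates above (global trajectory, segment breakdowns, chunk-to-chunk deltas) can be computed from flat per-agent results. The record shape and function name below are assumptions:

```python
from collections import defaultdict
from statistics import mean

def aggregate(results):
    """results: list of dicts like
    {"agent_id": ..., "segment": ..., "chunk": int, "score": int}.
    Returns the global mean trajectory, per-segment trajectories,
    and chunk-to-chunk deltas."""
    global_scores = defaultdict(list)
    segment_scores = defaultdict(lambda: defaultdict(list))
    for r in results:
        global_scores[r["chunk"]].append(r["score"])
        segment_scores[r["segment"]][r["chunk"]].append(r["score"])
    trajectory = {c: mean(s) for c, s in sorted(global_scores.items())}
    by_segment = {seg: {c: mean(s) for c, s in sorted(chunks.items())}
                  for seg, chunks in segment_scores.items()}
    # Delta for the first chunk is defined as 0 (no previous chunk).
    deltas = {c: trajectory[c] - trajectory.get(c - 1, trajectory[c])
              for c in trajectory}
    return trajectory, by_segment, deltas
```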

Visualization Dashboards:

Reaction trajectories over the corpus

Segment-specific breakdowns

Choropleth/geographic mapping

Emotional distribution curves

Replay Mode: Visualize how opinions evolved chunk by chunk through the corpus.

6. System Architecture

Backend:

Corpus ingestion and chunking module

Redis integration for temporary state

Axis generation module via LLM

Agent memory stores (for longer context simulations)

Reflection and reaction scoring modules

Frontend:

Corpus upload interface

Axis prompt input

Simulation run controls (e.g., "Simulate on 10,000 agents")

Interactive, time-evolving visualizations

Optional:

API Access for external systems to run simulations

Example User Flow

Upload a corpus (e.g., "First Presidential Debate Transcript").

System chunks and stores it in Redis.

User defines insight prompt: "Who won the debate?"

LLM generates the axis: Trump Victory ←→ Kamala Victory.

System simulates chunk-by-chunk agent reactions (-10 to +10 scale).

User views:

Overall winner perception

How perceptions shifted during specific key moments

Segment-specific analyses (e.g., Gen Z vs Boomers)

Emotional distributions at each stage

Target Use Cases

Political campaign analysis

Media messaging simulation

Public policy reaction modeling

Debate performance analysis

Academic research on opinion dynamics

Brand messaging testing on sensitive topics
