Inspiration
What if you could simulate an entire trading floor of diverse market participants and watch them react to breaking news before the market fully prices the information?
The Bloomberg Terminal provides real-time financial data to professional traders and finance professionals, and it generates over $10 billion in annual revenue.
Meanwhile, new marketplaces like Forum, Kalshi, and Polymarket let us trade on more than just stock prices: culture, sports, and real-world events. And AI agents now let us simulate the behavior of 100 different traders in parallel, generating data on how participants on these platforms react to news.
We wanted to build the Bloomberg Terminal for prediction markets, complete with dependency graphs between headlines and markets, and AI simulations of market reactions to news.
What it does
We have 2 main features:
First is the event dependency graph. Every day, traders are bombarded with millions of headlines, and each one could be relevant to the markets they are trading in. Imagine having to read through all those headlines yourself and decide how each one affects your portfolio, if at all. Using AI, we analyze the latest news headlines against the prediction markets of your choice to build a dependency graph, instantly linking relevant information to the markets you are monitoring. The Bloomberg Terminal has huge dependency graphs connecting millions of news headlines with stock prices. Using AI, we can replicate that in minutes.
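As a rough sketch of what such a graph might look like in code, here is one plausible way to represent news-to-market dependency edges and group them by target market. The field names (`source_id`, `strength`, `direction`) and example IDs are illustrative assumptions, not the project's exact schema.

```python
from dataclasses import dataclass

# Hypothetical dependency edge linking a news headline to a market.
# Field names and value ranges are assumptions for illustration.
@dataclass(frozen=True)
class DependencyEdge:
    source_id: str   # e.g. a news headline ID
    target_id: str   # e.g. a prediction-market ID
    strength: float  # 0.0-1.0, how strongly the news bears on the market
    direction: int   # +1 pushes the probability up, -1 pushes it down

def build_adjacency(edges):
    """Group edges by target market so each market can look up its inputs."""
    graph = {}
    for e in edges:
        graph.setdefault(e.target_id, []).append(e)
    return graph

edges = [
    DependencyEdge("news:fed-rate-cut", "mkt:recession-2025", 0.8, -1),
    DependencyEdge("news:jobs-report", "mkt:recession-2025", 0.5, +1),
]
adj = build_adjacency(edges)
```

With the graph keyed by market, a market's model update only needs to consult the edges pointing at it, rather than rescanning every headline.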
Second is the Event Intelligence Terminal. It ingests prediction markets and real-world news events and simulates hundreds of agents with different personas. Each agent's sentiment is shaped by its intrinsic biases, topic tilts, and calibrated noise, producing an aggregate model probability that is compared against the current market-implied probability to surface a BUY, SELL, or HOLD signal for every market. A drill-down mode re-simulates individual markets using two rounds of LLM-powered agents that read each other's prior stances and update their views, producing richer narratives and more nuanced signals. In short, the headline's new information yields an updated market probability, and the model recommends BUY, SELL, or HOLD based on how that probability compares to the market's.
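The aggregate-and-signal step above can be sketched as follows. The persona fields, noise level, signal threshold, and blending formula here are assumptions for illustration, not the project's exact implementation; the seeded RNG mirrors the reproducibility approach described later.

```python
import random

# Illustrative sketch: aggregate persona reactions into a model probability
# and compare it to the market-implied probability. Thresholds and the
# bias + tilt * news_score formula are assumptions, not the real code.
def simulate_signal(market_prob, personas, news_score, threshold=0.05, seed=42):
    rng = random.Random(seed)  # seeded randomness for reproducibility
    reactions = []
    for bias, tilt in personas:
        # Each agent's reaction: intrinsic bias + topic tilt applied to the
        # news score, plus calibrated noise.
        reactions.append(bias + tilt * news_score + rng.gauss(0, 0.02))
    shift = sum(reactions) / len(reactions)
    model_prob = min(max(market_prob + shift, 0.0), 1.0)
    if model_prob - market_prob > threshold:
        signal = "BUY"
    elif market_prob - model_prob > threshold:
        signal = "SELL"
    else:
        signal = "HOLD"
    return model_prob, signal
```

For example, bullish news (`news_score > 0`) read by agents with positive topic tilts pushes the model probability above the market's, surfacing a BUY.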
How we built it
- The frontend is a multi-page Streamlit app with four views: a Dashboard for signal overview and market drill-downs, a Markets page for managing prediction contracts, a News page with live NewsAPI headline fetching, and a Dependencies page featuring an interactive directed graph rendered with streamlit-agraph.
- The simulation engine in simulation.py implements the full pipeline: keyword/phrase scoring tables, topic inference via token overlap, composite multi-news scoring weighted by dependency edges, and persona-based agent reactions using frozen dataclasses and seeded randomness for reproducibility.
- For the AI layer, model.py wraps OpenAI's chat completions API with in-memory caching, structured JSON output parsing, and graceful fallback to heuristics when the API key is missing or a call fails.
- Edge generation in edge_analysis.py uses GPT-4o-mini with carefully crafted system prompts to infer news-to-market and market-to-market causal links, validating target IDs and clamping strength/direction values.
- All state (markets, news events, and dependency edges) is persisted as a single JSON file, loaded into Streamlit session state at startup, and mutated through CRUD helpers that cascade deletions across the graph.
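The cache-then-fallback pattern in model.py might look roughly like this. The function name, cache shape, and fallback are hypothetical, and the actual OpenAI call is stubbed behind a `call_api` parameter rather than reproduced.

```python
import json

# Hedged sketch of the model.py pattern described above: in-memory caching,
# structured JSON parsing, and graceful fallback to heuristics. Names are
# illustrative; the real OpenAI chat-completions call is stubbed out.
_cache = {}

def ask_model(prompt, call_api=None, fallback=lambda p: {"score": 0.0}):
    if prompt in _cache:
        return _cache[prompt]  # serve repeated prompts from memory
    try:
        if call_api is None:
            raise RuntimeError("no API key configured")
        raw = call_api(prompt)    # e.g. an OpenAI chat-completions request
        result = json.loads(raw)  # expect structured JSON output
    except Exception:
        result = fallback(prompt)  # degrade gracefully to a heuristic
    _cache[prompt] = result
    return result
```

The key property is that a missing API key or a failed call never crashes the app: the terminal keeps producing (heuristic) signals either way.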
Challenges we ran into
We pivoted quite a lot while deciding what would create actual edge in the market and could still be built in under 6 hours. We finally settled on this.
Accomplishments that we're proud of
Building this from start to finish.
What we learned
Spawning a bunch of AI agents and running them in parallel, how edge is discovered in (theoretically) efficient markets, and backtesting.
What's next for AgentMarketSim
Right now we are only looking at a limited set of prediction markets and a limited number of headlines (a mix of real and generated). We are also using a free news API, which delays headlines by 24 hours. A real terminal that discovers real market edge should base its predictions only on breaking news that was just released. When you can simulate entire market behavior within 10 seconds of a headline dropping, that's where you get a competitive edge.
Built With
- gpt-4o
- python
- streamlit