Inspiration

Prediction markets are powerful because they compress public belief into a price. But the hard part is not just creating a market. The hard part is knowing whether the market is actually grounded in evidence.

A market can move quickly on headlines, speculation, stale sources, or incomplete information. That creates a trust gap: the price says one thing, but the real-world evidence may say another.

Verity Oracle was inspired by that gap. We wanted to build the verifiability layer prediction markets deserve: an agentic system that can import market signals, collect evidence, challenge its own verdict, and publish a human-readable, machine-verifiable resolution artifact.

What it does

Verity Oracle audits market truth.

The system imports a live or shadow market, gathers evidence from real web sources, normalizes that evidence, runs a resolver agent, runs an adversarial challenger agent, and publishes a final cited.md.

The final artifact includes the market question, evidence chain, timestamps, confidence scores, resolver verdict, challenger result, paper action, Chainguard image digest, Guild agent reference, and a sha256 hash for tamper evidence.
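The sha256 tamper-evidence hash can be sketched as follows. This is a minimal illustration, not the actual artifact format: the seal comment style and function names are assumptions, but the idea is the same, the hash covers the artifact body, so any edit to the evidence chain or verdict breaks verification.

```python
import hashlib

def seal_artifact(body: str) -> str:
    """Append a sha256 line so the artifact carries its own tamper-evidence hash.
    The HTML-comment seal format here is illustrative, not the real cited.md layout."""
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    return body + f"\n<!-- sha256: {digest} -->\n"

def verify_artifact(sealed: str) -> bool:
    """Recompute the hash over everything before the seal line and compare."""
    body, _, tail = sealed.rstrip("\n").rpartition("\n")
    if not tail.startswith("<!-- sha256: "):
        return False
    expected = tail[len("<!-- sha256: "):-len(" -->")]
    return hashlib.sha256(body.encode("utf-8")).hexdigest() == expected
```

A reader (or a machine) can rerun the hash at any time; if the file was edited after publication, verification fails.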

The demo uses paper-only balances and paper-only actions. It does not place real trades, does not settle real-money markets, and does not operate as a gambling product.

How it works

Verity Oracle runs a five-agent tournament.

The Market Creator imports or creates the market question and source policy. The Evidence Gatherer uses TinyFish to browse live web sources and extract relevant evidence. Nexla Express normalizes messy evidence into a consistent event format. Redis acts as the hot bus for market state, evidence streams, deduplication, odds history, and challenge queues. Ghost stores the warm evidence ledger, including resolutions, agent runs, challenge records, and generated cited outputs.
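The "consistent event format" step can be sketched like this. The field names and the raw input shape are hypothetical (the real Nexla Express contract is not shown here); the point is that messy extracted records collapse into one stable schema, and a stable dedup key falls out of it.

```python
from dataclasses import dataclass
import hashlib

@dataclass(frozen=True)
class EvidenceEvent:
    # Hypothetical normalized shape; field names are illustrative, not the actual contract.
    market_id: str
    source_url: str
    claim: str
    observed_at: str   # ISO 8601 timestamp
    confidence: float  # extractor confidence in [0.0, 1.0]

    def dedup_key(self) -> str:
        """Stable key: the same claim from the same source hashes identically."""
        raw = f"{self.market_id}|{self.source_url}|{self.claim}"
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def normalize(raw: dict, market_id: str) -> EvidenceEvent:
    """Map a messy extracted record into the consistent event format:
    trim URLs, collapse whitespace in claims, coerce the score to a float."""
    return EvidenceEvent(
        market_id=market_id,
        source_url=raw.get("url", "").strip(),
        claim=" ".join(raw.get("text", "").split()),
        observed_at=raw.get("timestamp", ""),
        confidence=float(raw.get("score", 0.5)),
    )
```

Because the key is deterministic, downstream deduplication and challenge queues can treat repeated sightings of the same claim as one piece of evidence.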

Guild.ai governs the resolver and challenger agents. The Resolver proposes a verdict from the evidence. The Challenger tries to contradict that verdict before finalization. WunderGraph Cosmo exposes hot Redis state and warm Ghost records through one federated MarketView query.

InsForge supports the application layer by generating verdict narratives and storing paper user actions, paper ledger state, and cited output records. Chainguard Images provide the secure resolver runtime, and every final verdict includes resolver image provenance.

What makes it innovative

The core innovation is adversarial, attested market reasoning.

Most market tools show prices. Verity Oracle asks whether the price is grounded.

The system does not stop at an AI answer. It produces an auditable artifact. Every verdict has evidence, timestamps, challenge history, resolver provenance, and a file hash. That makes the output inspectable by humans and verifiable by machines.

The most important design choice is the challenger agent. The resolver is never trusted alone. A separate adversarial agent reviews the verdict, searches for contradictions, and records whether the resolution survived challenge.
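The adversarial pass can be sketched in a few lines. This is a simplification under assumed inputs: stance labels and the confidence threshold are illustrative, and the real challenger is an LLM agent rather than a filter, but the contract is the same: the challenger scans for contradicting evidence and records whether the resolution survived.

```python
def challenge(verdict: str, evidence: list[dict]) -> dict:
    """Adversarial review sketch: collect evidence items whose stance
    contradicts the resolver's verdict with meaningful confidence,
    and record whether the verdict survived the challenge.
    The 0.6 threshold is an illustrative assumption."""
    contradictions = [
        e for e in evidence
        if e["stance"] != verdict and e["confidence"] >= 0.6
    ]
    return {
        "verdict": verdict,
        "contradictions": contradictions,
        "survived": len(contradictions) == 0,
    }
```

Whatever the challenge result, it is recorded in the final artifact, so a verdict that barely survived looks different from one that was never contested.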

How we built it

We built Verity Oracle as a modular agentic system.

TinyFish handles live web browsing and evidence extraction. Nexla Express provides the evidence normalization contract. Redis Cloud powers hot state using RedisJSON, Streams, Sets, and TimeSeries-style structures. Ghost acts as the warm Postgres evidence ledger for resolutions, agent runs, challenge records, and cited outputs.
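The Set-based dedup gate works roughly like this. In the deployed system this would be a Redis Set (SADD returns 1 for a new member, 0 for a repeat); here a local Python set stands in so the logic is visible without a server.

```python
class HotDedup:
    """Sketch of the evidence dedup gate. A plain set stands in for a
    Redis Set: admit() mirrors SADD's new-vs-repeat semantics."""

    def __init__(self) -> None:
        self.seen: set[str] = set()

    def admit(self, dedup_key: str) -> bool:
        """Return True the first time a key is seen, False on repeats."""
        if dedup_key in self.seen:
            return False
        self.seen.add(dedup_key)
        return True
```

Only admitted events flow on to the resolver, which keeps repeated sightings of the same claim from inflating the evidence weight.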

Guild.ai publishes and governs the resolver and challenger agents. WunderGraph Cosmo federates hot and warm data into one MarketView API. InsForge supports narrative generation and app level paper ledger flows. Chainguard Images secure the resolver runtime and provide the digest attached to every verdict.

The frontend dashboard lets a judge run the full flow, inspect evidence, view the adversarial agent result, and open the generated cited.md.

Challenges we faced

The biggest challenge was making the system powerful while keeping it safe. Prediction markets involve real money and regulation, so we separated market analysis from real trading. Verity Oracle can use live or realistic market signals, but the demo only creates paper actions and paper credits.

Another challenge was separating the system into the right layers. Redis handles fast state. Ghost handles evidence records. Guild handles agent governance. WunderGraph handles the unified API. InsForge handles application level product state and narratives. Chainguard handles resolver provenance.

We also had to make the agent process visible. Instead of hiding everything behind one final answer, the demo shows evidence extraction, normalization, deduplication, resolver output, challenger review, cited artifact generation, and attestation.

What we learned

We learned that trust in AI agents requires more than a good answer. It requires evidence, reproducibility, contradiction checks, and provenance.

We also learned that prediction markets are a perfect test case for agentic verification because the market price is public, the evidence is scattered, and the resolution must be trusted.

The larger lesson is that AI agents should not only act. They should leave behind a verifiable receipt.

What is next

Next, Verity Oracle could support more market platforms, richer market imports, historical resolver calibration, stronger evidence source ranking, public verifier pages, and automated dispute workflows.

The long term vision is simple: every important market, forecast, or public claim should be backed by a cited, challenged, and attested evidence artifact.
