Inspiration

As LLMs grow in influence, we noticed increasing risks from data poisoning and wanted a systematic way to detect and address it.

What it does

Antidote Intelligence uses agent-based reasoning to identify and mitigate poisoned data in LLM training sets by applying the scientific method.

How we built it

We built a desktop app using Python and Electron. Agents analyze datasets, form hypotheses, test them, and flag suspicious data based on observed effects.

Challenges we ran into

Designing effective agent workflows and simulating realistic poisoning attacks without existing datasets were both tough.

Accomplishments that we're proud of

Our agents successfully detected and explained multiple types of simulated poisoning, demonstrating a clear path to scalable defenses.

What we learned

Structured reasoning (via the scientific method) makes AI safety tools more reliable and transparent.

What's next for Antidote Intelligence

We plan to support real-time dataset monitoring, expand poisoning scenarios, and test on real-world LLM training pipelines.

Built With

electron, python
