Inspiration
Online spaces are increasingly shaped by AI-generated content, bots, and low-effort “slop,” making it harder for real conversations to thrive. Moderators are overwhelmed, platforms react too late, and users lose trust. We were inspired to build sntri.AI to help communities proactively identify harmful, deceptive, and low-quality AI content—before it derails discourse.
What it does
sntri.AI is an AI-powered moderation and analysis platform that detects AI bots, spammy or low-effort content, and offensive or policy-violating comments in real time. It assigns risk scores, flags suspicious behavior patterns, and provides clear explanations so moderators can make fast, informed decisions without relying on black-box judgments.
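As a minimal sketch of the idea above (names, fields, and thresholds are illustrative assumptions, not sntri.AI's actual API), a per-comment risk report with a score, flags, and a plain-language explanation might look like:

```python
from dataclasses import dataclass, field

@dataclass
class ModerationResult:
    """Hypothetical shape of a per-comment risk report."""
    comment_id: str
    risk_score: float                            # 0.0 (benign) to 1.0 (high risk)
    flags: list = field(default_factory=list)    # e.g. ["likely_bot", "link_spam"]
    explanation: str = ""                        # human-readable rationale for moderators

    @property
    def needs_review(self) -> bool:
        # Illustrative threshold; a real system would tune this per community.
        return self.risk_score >= 0.7

result = ModerationResult(
    comment_id="c123",
    risk_score=0.82,
    flags=["repetitive_posting", "link_spam"],
    explanation="Account posted 40 near-identical comments in 10 minutes.",
)
```

Keeping the explanation as a first-class field, rather than an afterthought, is what lets moderators act on a flag without trusting a black box.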
How we built it
We combined natural language processing with behavioral pattern analysis to evaluate both what is being said and how accounts behave over time. The system uses lightweight classifiers for real-time scanning, deeper contextual analysis for flagged content, and a dashboard that surfaces trends, alerts, and moderation insights in a clear, actionable way.
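The two-stage design described above can be sketched as follows. This is a toy illustration under assumed heuristics and weights (the marker list, thresholds, and behavioral formula are invented for the example, not the real models): a cheap first pass scans every comment, and only suspicious items get a deeper pass that folds in account behavior.

```python
# Toy stand-in for the lightweight real-time classifier.
SPAM_MARKERS = ("buy now", "free crypto", "click here")

def quick_scan(text: str) -> float:
    """Lightweight first pass: fast enough to run on every comment."""
    hits = sum(marker in text.lower() for marker in SPAM_MARKERS)
    return min(1.0, hits / 2)

def deep_analysis(text: str, account_age_days: int, posts_per_hour: float) -> float:
    """Second pass: combines the content score with behavioral signals."""
    content = quick_scan(text)
    # New accounts posting at high volume look more bot-like (toy weighting).
    behavior = min(1.0, posts_per_hour / 20) * (1.0 if account_age_days < 7 else 0.3)
    return 0.6 * content + 0.4 * behavior

def moderate(text: str, account_age_days: int, posts_per_hour: float) -> float:
    score = quick_scan(text)
    if score >= 0.5:  # only escalate suspicious items to the expensive pass
        score = deep_analysis(text, account_age_days, posts_per_hour)
    return score
```

The escalation gate is the point: most content only ever touches the cheap path, so the expensive contextual analysis stays affordable at scale.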
Challenges we ran into
One of the biggest challenges was balancing accuracy with fairness—avoiding false positives while still catching harmful content early. Another challenge was designing explanations that are transparent and understandable to moderators, rather than just outputting a “yes/no” decision from the model.
Accomplishments that we're proud of
Built a working moderation pipeline that analyzes content and behavior together
Reduced noise by prioritizing high-risk cases instead of flooding moderators with alerts
Created explainable flags that help humans stay in control of moderation decisions
Designed a scalable foundation that can adapt to different community rules and platforms
What we learned
We learned that moderation is as much a human-centered problem as a technical one. Transparency, trust, and configurability matter just as much as model performance. We also gained hands-on experience building AI systems that operate responsibly in real-world social contexts.
What's next for sntri.AI
Next, we plan to expand platform integrations, improve multilingual support, and refine bot-detection through long-term behavioral modeling. We also want to give communities more customization—letting them define what “low-quality” or “harmful” means for their own spaces.