Inspiration

The inspiration for DetectABully came from watching thriving Discord communities deteriorate due to toxic behavior while good community members were falsely flagged by existing moderation bots. We realized most moderation systems operate on a punishment-only model: they are reactive, and they often drive away the people who make communities great. That sparked our central idea: what if, instead of just catching bad behavior, we could reward good behavior and teach AI systems to trust community members who've proven themselves?

What it does

DetectABully is an AI-powered Discord moderation bot that combines the Google Perspective API and OpenAI to detect toxicity, harassment, and rule violations with high accuracy. Unlike traditional bots that only punish, it features a revolutionary Community Immunity System in which users earn points for positive behavior and gain protection from AI false positives. The bot provides progressive punishment (warnings, then timeouts, then kicks), rich logging with AI analysis, and 11 slash commands for complete server management.
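The progressive punishment ladder above can be sketched as a simple escalation table. All names here are illustrative stand-ins, not the bot's actual code:

```python
# Hypothetical sketch of warn -> timeout -> kick escalation per member.
from dataclasses import dataclass

ACTIONS = ["warn", "timeout", "kick"]  # ordered from mildest to harshest

@dataclass
class MemberRecord:
    violations: int = 0  # confirmed violations so far

def next_action(record: MemberRecord) -> str:
    """Pick the next punishment tier and record the violation.

    Escalation caps at the harshest action rather than wrapping around.
    """
    action = ACTIONS[min(record.violations, len(ACTIONS) - 1)]
    record.violations += 1
    return action

rec = MemberRecord()
print(next_action(rec))  # warn
print(next_action(rec))  # timeout
print(next_action(rec))  # kick
print(next_action(rec))  # kick (stays at the maximum tier)
```

A real bot would persist `violations` in the database and map `timeout`/`kick` onto the corresponding Discord moderation calls.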

How we built it

We built DetectABully using Python with a modular architecture, integrating dual AI APIs (Google Perspective + OpenAI) with intelligent fallback systems and PostgreSQL for persistence. The core innovation was designing the Community Immunity algorithm that calculates real-time user reputation through positive point tracking and dynamic threshold adjustments. We implemented async operations for performance, comprehensive error handling for reliability, and created an intuitive command interface with rich Discord embeds for transparency.
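A minimal sketch of the immunity idea described above, assuming a points-based reputation that raises the toxicity score at which the bot acts. The function names and constants are hypothetical, not the actual algorithm:

```python
# Illustrative Community Immunity calculation: trusted members need a
# higher toxicity score before the bot takes action, with a hard cap so
# immunity can never disable moderation entirely.
BASE_THRESHOLD = 0.80  # Perspective-style toxicity score that triggers action
MAX_BONUS = 0.15       # ceiling on how much immunity can raise the threshold

def effective_threshold(points: int, points_for_max: int = 500) -> float:
    """Scale the threshold bonus linearly with earned points, capped."""
    bonus = MAX_BONUS * min(points, points_for_max) / points_for_max
    return BASE_THRESHOLD + bonus

def should_flag(toxicity: float, points: int) -> bool:
    return toxicity >= effective_threshold(points)

# A borderline 0.85 score flags a newcomer but not a long-trusted member.
print(should_flag(0.85, points=0))    # True
print(should_flag(0.85, points=500))  # False
```

The cap is the key design choice: it protects trusted members from borderline false positives while keeping clearly toxic messages actionable for everyone.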

Challenges we ran into

Our biggest challenge was balancing the immunity system to reward good behavior without making it exploitable or unfair to newer users. Integrating two different AI APIs with varying response formats and rate limits required building a unified detection pipeline with robust fallback mechanisms. We also had to optimize database performance for real-time point calculations and ensure the bot gracefully handles Discord permission limitations while maintaining security.
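The unified detection pipeline with fallback could look roughly like this; the scorer functions are stand-ins for the real async HTTP calls to Perspective and OpenAI:

```python
# Hedged sketch of a fallback pipeline: try the primary backend, fall
# back to the secondary on timeouts or rate limits, and normalize both
# to a single 0..1 toxicity score.
import asyncio

async def perspective_score(text: str) -> float:
    raise TimeoutError("simulated rate limit")  # pretend the API failed

async def openai_score(text: str) -> float:
    return 0.2  # stand-in score from the secondary backend

async def detect_toxicity(text: str) -> float:
    """Return a toxicity score from the first backend that responds."""
    for scorer in (perspective_score, openai_score):
        try:
            return await scorer(text)
        except Exception:
            continue  # try the next backend
    return 0.0  # all backends down: fail open rather than mis-punish

print(asyncio.run(detect_toxicity("hello")))  # 0.2
```

Failing open when every backend is down is one reasonable policy; a stricter server might instead queue the message for human review.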

Accomplishments that we're proud of

To our knowledge, DetectABully is the first Discord moderation bot with a positive reinforcement system: while other bots focus solely on punishment, we built a system that rewards and protects good community members. We successfully shipped a production-ready bot with dual AI integration, real-time immunity calculations, and comprehensive administrative tools. Most importantly, we created something that doesn't just moderate content but actively builds healthier communities through gamification and trust-building.

What we learned

We discovered that combining multiple AI services produces more nuanced and accurate content moderation than relying on a single system. Database optimization and async programming are crucial for real-time applications that need to process hundreds of messages without lag. Most importantly, we learned that positive reinforcement is more effective than punishment: users respond better to recognition and earned trust than to threats and penalties.

What's next for DetectABully

We plan to expand DetectABully with machine learning capabilities that adapt to each server's unique culture and communication patterns. Future features include cross-server reputation tracking, advanced behavioral analytics to predict and prevent toxicity escalation, and integration with platforms beyond Discord. Our ultimate vision is an AI-powered community health ecosystem that helps online spaces become more welcoming and inclusive through intelligent positive reinforcement.

Built With

python · postgresql · discord · google-perspective-api · openai
