Inspiration

We were inspired by the raid on the HenHacks 2025 Discord server, which happened just a week ago and could have been prevented by Orwell. Beyond that, any social media or communication platform inevitably attracts bad actors who want to steal information or harm others through cyberbullying, so we wanted to build an improved auto-moderation system that eases the workload of human moderators, since repeatedly reviewing harmful content can take a toll on mental health.

What it does

Orwell has low- and high-level filtering built in to protect servers from malicious actors. Every message is run through multiple checks to determine whether it is safe and complies with the server's ruleset, which is defined by the moderation team and stored in a database. Using AI, Orwell takes action against members who violate the rules, kicking, banning, or timing out offenders as appropriate.

How we built it

We built Orwell using Python, the Discord API, the Gemini API, Smalltalk, and MongoDB. We brainstormed a skeleton for the bot's basic functionality, aiming to combine rudimentary checks against malicious content with more general AI-powered suggestions so we could prohibit a more diverse set of content.

Challenges we ran into

Smalltalk was difficult to work with simply because it is an older, obscure language with few resources to help. We also had difficulty debugging against Discord's API, as the errors were unhelpful. Additionally, managing version control as a team was arduous because of our lack of experience with branching.

Accomplishments that we're proud of

We are proud of Orwell's ability to adapt to a changing ruleset and make tough decisions on its own; ideally, it can take a large load off human moderation teams. We are also proud of how much we learned about Python, various frameworks, and database management along the way.

What we learned

We learned about creating a skeleton for a complex app from scratch and implementing it in an unfamiliar library (discord.py). We also learned how to use Gemini's API to query for machine-powered suggestions, in this case for punishment and moderation actions, and how to use a non-relational database to track server data and store AI ruleset information. We learned a lot about integrating multiple APIs into one project.

What's next for Orwell Auto Moderation

Orwell could gain the added functionality of logging every action the bot takes and allowing more human interactivity, letting an end user give feedback on the AI's suggestions. We think a more user-friendly interface, with buttons instead of commands and a guided initialization process, would help new users who are less tech-savvy.
