Inspiration

As a data professional, I wanted to turn my work with player-generated content into something meaningful, something with real impact. Azure Guardian was born from the desire to protect all players, regardless of age, race, or background, from hate-fueled attacks. The project promotes safer, more inclusive communities and embraces the arrival of Xbox Copilot as a stepping stone toward deeper AI integration in gaming.

What it does

Azure Guardian is an AI-powered moderation system for online games that detects and blocks toxic text and voice chats in real time to protect gaming communities.

How we built it

I used a combination of Azure services (Language, Speech, and Cosmos DB) along with a Python backend that simulates realistic scenarios of online toxicity. The system detects hate speech in both text and voice chat, including leetspeak and disguised slurs, and blocks it instantly. All interactions are logged for potential review or escalation, and the setup is designed to integrate easily into gaming platforms.
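To make the flow concrete, here is a minimal sketch of the text path. The write-up names the Language service; this sketch substitutes the Azure AI Content Safety SDK (`azure-ai-contentsafety`), which exposes a hate category directly, and uses `azure-cosmos` for the incident log. The endpoint variables, database and container names, and severity threshold are all placeholder assumptions, not the project's actual configuration.

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential
from azure.cosmos import CosmosClient

# Placeholder configuration: real endpoints/keys would come from the host platform.
safety = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)
cosmos = CosmosClient(os.environ["COSMOS_ENDPOINT"], os.environ["COSMOS_KEY"])
incidents = cosmos.get_database_client("guardian").get_container_client("incidents")

SEVERITY_BLOCK_THRESHOLD = 2  # assumed cutoff; would be tuned per community


def moderate_text(player_id: str, message: str) -> bool:
    """Return True if the message may be delivered, False if it was blocked."""
    result = safety.analyze_text(AnalyzeTextOptions(text=message))
    hate = next(
        (c for c in result.categories_analysis if c.category == TextCategory.HATE),
        None,
    )
    blocked = hate is not None and (hate.severity or 0) >= SEVERITY_BLOCK_THRESHOLD
    if blocked:
        # Log the interaction so moderators can review or escalate later.
        incidents.create_item({
            "id": f"{player_id}-{os.urandom(4).hex()}",
            "playerId": player_id,
            "message": message,
            "severity": hate.severity,
            "action": "blocked",
        })
    return not blocked
```

Keeping the blocking decision and the logging in one function is what makes review and escalation possible: every blocked message leaves a record.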

Challenges we ran into

A major challenge was training the system to detect nuanced or hidden forms of hate speech, while maintaining high accuracy.
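One practical technique for that problem is to normalize common character substitutions before the text reaches the classifier. The sketch below is a simplified illustration; the substitution table is my assumption and deliberately small, not the project's actual mapping.

```python
import re

# Common leetspeak substitutions (illustrative, far from exhaustive).
LEET_MAP = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s", "!": "i",
})


def normalize(message: str) -> str:
    """Fold leetspeak and stretched spellings into plain text so that
    disguised slurs like 'n1gg3r' resolve to their undisguised form."""
    text = message.lower().translate(LEET_MAP)
    # Collapse runs of three or more repeated characters ("loooser" -> "looser").
    return re.sub(r"(.)\1{2,}", r"\1\1", text)
```

The classifier scores the normalized text, while the original message is what gets logged, so moderators always see exactly what the player typed.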

On a deeper level, the ethical weight was heavy: using real hate speech for testing is emotionally taxing. If writing these test cases was hard for me, imagine what victims feel in real scenarios. Azure Guardian isn't just tech; it's about morality, empathy, and using AI to create safe, respectful digital spaces.

Accomplishments that we're proud of

I'm proud to have built a system that genuinely helps protect players — especially the most vulnerable — from anonymous trolls, smurfs, and toxic communities. Beyond blocking messages, Azure Guardian provides traceability and moderation tools, enabling communities to take action and foster safer, more welcoming environments.

What we learned

I deepened my understanding of Azure's AI capabilities, improved my code's security, and optimized for real-world performance. I also learned how to integrate the solution dynamically with game systems and voice chat platforms. Most importantly, I gained insight into how technology can support ethical, human-centered design.

What's next for Azure Guardian

I'm planning to add new features that empower players to review and report incidents blocked by Guardian, giving them ownership over their experience. These include automatic logs, easy report buttons, and documentation of flagged hate speech.
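As a sketch of what those logs and reports might contain, here is one possible shape for a player-reviewable incident record. Every field name here is hypothetical; this is not a shipped schema.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class IncidentReport:
    """Hypothetical record a player could review and report from."""
    incident_id: str
    player_id: str            # who sent the blocked message
    target_id: str | None     # who received it, if known (Python 3.10+ syntax)
    original_text: str        # what was typed, kept as evidence
    category: str             # e.g. "hate"
    severity: int
    action: str = "blocked"
    reported_by_player: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


report = IncidentReport(
    incident_id="inc-001",
    player_id="player-123",
    target_id=None,
    original_text="<blocked message>",
    category="hate",
    severity=4,
)
print(asdict(report))  # dict form, ready to store as a Cosmos DB document
```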

The goal? Treat online hate as seriously as real-world hate — because virtual spaces are real too. The digital world deserves the same respect, safety, and accountability as the physical one.

Built With

Azure AI Language, Azure AI Speech, Azure Cosmos DB, Python


Updates


I, Vinícius Borges, author of the Azure Guardian project, categorically state that none of the hate-speech examples used in the tests (such as "fuck you bitch", "n1gg3r", or similar) reflect my personal opinions or values. On the contrary, I abhor any and all forms of hate attacks, discrimination, and verbal violence. These examples were used exclusively to simulate real content-moderation scenarios, which unfortunately occur every day on online platforms, with the sole purpose of testing and demonstrating the system's effectiveness in detecting and blocking this kind of behavior. Azure Guardian was created to combat exactly this, promoting a safer and more respectful digital environment. I reiterate my commitment to inclusion and mutual respect.
