Directions for running:

Running the Java file parses chat logs into a spreadsheet that you can import into LightSide: http://ankara.lti.cs.cmu.edu/side/

The Problem:

All too often in the gaming community, people are made to feel excluded by toxic and inappropriate comments from opposing players, and even from the teammates they are queued with. This most commonly happens to women, making them uncomfortable participating in text and voice chat. We’d like to make a positive change in the gaming community so it becomes a welcoming environment for everyone.

Our Solution:

Good Gamers is a game-queueing algorithm that matches users with other users who choose not to act out of prejudice towards fellow gamers. This lets gamers who have been targets of prejudiced actions or messages feel comfortable playing the game, talking in chat, and having fun.

It works by applying our machine learning model to chat logs to detect and rate the content of players' messages, building a unique profile for each player that can be used to queue people with similar ratings together. Games that already use skill-based matchmaking can adopt this easily: they match players by content rating in exactly the same way they match by skill rating.
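To make the matchmaking idea concrete, here is a minimal Java sketch of pairing players by content rating the same way skill-based matchmakers pair by skill rating. The class, field, and method names are illustrative assumptions, not code from the actual project.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: queue players together only when their content
// ratings are within a tolerance, mirroring skill-based matchmaking.
public class ContentMatcher {
    // Content rating per player, e.g. 0.0 (toxic) to 1.0 (consistently respectful).
    private final Map<String, Double> ratings = new HashMap<>();

    public void setRating(String player, double rating) {
        ratings.put(player, rating);
    }

    // True if the two players' ratings are close enough to queue together.
    public boolean canQueueTogether(String a, String b, double tolerance) {
        Double ra = ratings.get(a);
        Double rb = ratings.get(b);
        if (ra == null || rb == null) return false;  // unknown players never match
        return Math.abs(ra - rb) <= tolerance;
    }
}
```

With a tolerance of 0.1, two consistently respectful players would be queued together, while a player with a much lower rating would not be matched with them.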

Troubles we encountered and how we built our project:

We had no prior experience with machine learning, so finding and learning LightSide took commitment. Once we understood the interface, we were excited to fill it with our own data to build our very own prejudice-detecting bot, but that also proved difficult. After hours of searching for chat logs, our best bet was League of Legends' former reporting tribunal. Unfortunately, that site was taken down years ago, and after an attempt to use the Wayback Machine to access archived chat logs, we found there still wasn't enough data. We then used "OverRustleLogs" to obtain Twitch chat logs, which we read with a Java program and formatted into a spreadsheet. We refined that spreadsheet over many hours to ensure our model achieved optimal accuracy in identifying prejudice.
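The log-to-spreadsheet step above can be sketched in a few lines of Java: strip the timestamp/username prefix from each raw chat line and emit one quoted CSV row per message, with a header column, so the result imports cleanly into LightSide. The log-line format and the class name here are assumptions for illustration, not the project's actual code.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;

// Hypothetical sketch of converting raw chat logs into a CSV that a
// spreadsheet tool or LightSide can import, one message per row.
public class LogToCsv {
    // Turns a raw line like "[2018-01-01 12:00:00 UTC] user: some message"
    // into a quoted CSV cell containing just the message text.
    public static String toCsvRow(String logLine) {
        int sep = logLine.indexOf(": ");
        String message = (sep >= 0) ? logLine.substring(sep + 2) : logLine;
        // Escape embedded quotes per CSV convention, then wrap in quotes.
        return "\"" + message.replace("\"", "\"\"") + "\"";
    }

    // Streams every non-empty log line to the output as a CSV row.
    public static void convert(BufferedReader in, PrintWriter out) throws IOException {
        out.println("text"); // single header column for the message text
        String line;
        while ((line = in.readLine()) != null) {
            if (!line.isEmpty()) out.println(toCsvRow(line));
        }
    }
}
```

Each resulting row then gets a hand-applied label (prejudiced or not) before the spreadsheet is fed to LightSide as training data.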

Our machine:

We are excited to say our machine learning model identifies prejudice with 75% accuracy. Its training set has nearly 1000 entries, each labeled by hand with some aid from previous iterations of the model.

Built With

  • java
  • lightside
  • twitch-logs