Content moderation plays a vital role in addressing a wide range of social problems in online spaces by promoting safety, civility, accuracy, inclusivity, and integrity. However, moderation must be balanced against free speech and expression so that platforms remain open to diverse viewpoints and discussions. Rather than simply punishing users, moderation should also teach proper communication, educating users on how to avoid creating harmful messages.

AI content moderators offer advantages over human moderators in scalability, consistency, speed, 24/7 availability, context detection, cost-effectiveness, and reduced bias. However, most current AI moderators can process either text or images, but not both, so separate applications are needed to moderate multiple content types.

Another problem is that most moderators produce only a binary benign-or-toxic verdict and do not explain why certain content is inappropriate. That explanation is essential for educating users so they can avoid creating harmful messages in the first place.

To address these problems, we developed the Multimodal Moderator. This AI moderator is a single application that can check whether text or images are appropriate. It can “understand” the message and explain why certain content is inappropriate.

NOTE:
This app has no code or GitHub repository because it is a Discord bot created with the Zapier no-code platform. The video shows the workflow of our Zapier Zap.
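For readers curious what equivalent logic might look like outside Zapier, here is a minimal, hypothetical sketch of the moderation step. It assumes the OpenAI Python SDK and a multimodal model ("gpt-4o" here); both the library and the prompt are illustrative choices, not the project's actual stack, since the real bot is built entirely from no-code Zapier steps.

```python
# Hypothetical sketch only: the actual project is a no-code Zapier workflow.
# Assumes the OpenAI Python SDK and a multimodal model; these are
# illustrative stand-ins, not the project's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a content moderator. Decide whether the submitted text or image "
    "is appropriate for a general audience. Reply with a verdict "
    "(APPROPRIATE or INAPPROPRIATE) followed by a short explanation that "
    "educates the user about why, so they can avoid harmful messages."
)

def moderate(text=None, image_url=None):
    """Moderate a text message, an image URL, or both in a single call."""
    content = []
    if text:
        content.append({"type": "text", "text": text})
    if image_url:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content

# Example: moderate a Discord message and its attached image together.
print(moderate(text="check this out", image_url="https://example.com/img.png"))
```

Because a single multimodal model sees the text and the image in one request, one application can cover both content types and return an educational explanation alongside the verdict, which is the same idea our Zapier workflow implements with no-code steps.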
