This hackathon provided the answer to a barrier I faced and inspired me to create an open-source solution that will help me and other indie developers be part of the metaverse.

I've been working on Idea Engine for over a year. The app encourages users to create interactive stories, games and experiences within VR by importing photos, 3D assets, audio and bulk text. It's a perfect fit for the metaverse, but how could I cope with the moderation of such a large and creative environment?

What it does

Metaverse Moderator (MeMo) is a cloud-based solution for all your text moderation needs, providing web services for easy integration from VR apps and a web page to perform your moderation tasks.

The metaverse creator experience

Metaverse creators will often be inexperienced end users. Before your creation is pushed to the metaverse, MeMo is called to analyse your project.

  • You're informed and educated about any hate speech in your story and where it can be found.
  • You're shown how your story will be objectively classified for users (sentiment / traits).
  • You're informed if your project requires moderator approval before it goes live. Moderation is required when certain personal data has been detected (bank details, emails, etc., preventing illegal activities) or when sufficient levels of hate speech are detected together with a negative sentiment. Lower-risk items can also be pushed onto the queue while still being made public.
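The gate described above can be sketched as a small decision function. This is a minimal sketch of the idea, not MeMo's actual code: the function name, PII category names and thresholds are all my own assumptions.

```python
# Assumed PII category labels; the real service may use different names.
HIGH_RISK_PII = {"BankAccountNumber", "Email"}

def moderation_decision(pii_categories, hate_speech_hits, sentiment, sentiment_score):
    """Decide whether a story can go live immediately.

    Returns one of: "requires_approval", "live_pending_review", "live".
    Thresholds here are illustrative assumptions.
    """
    # Certain personal data always blocks publication until a moderator approves.
    if any(cat in HIGH_RISK_PII for cat in pii_categories):
        return "requires_approval"
    # Sufficient hate speech combined with a negative sentiment also blocks.
    if hate_speech_hits >= 3 and sentiment == "negative" and sentiment_score > 0.5:
        return "requires_approval"
    # Lower-risk items go onto the queue but are still made public.
    if hate_speech_hits > 0 or pii_categories:
        return "live_pending_review"
    return "live"
```

The key design point is that high-risk PII is an unconditional block, while hate speech only blocks when the sentiment analysis agrees the text is negative.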

The VR user experience

When users browse the metaverse and see your story, a summary screen helps them understand if the experience is well suited to them, their age and their mood. This helps protect users from inappropriate content.

  • Overall sentiment - whether it's a neutral, positive or negative experience.
  • Sentiment score - how strongly positive or negative it is.
  • Emotional traits - despair, delight etc., ordered by dominance in the text.
  • Content warnings - may include depictions of racism, sexism etc.
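The summary screen above amounts to a small payload assembled from the analysis results. A sketch, with field names of my own choosing (not MeMo's real schema):

```python
def build_summary(sentiment, score, trait_counts, warnings):
    """Assemble the summary-screen fields from analysis results.

    trait_counts maps each emotional trait to how often it appears,
    so traits can be ordered by dominance in the text.
    """
    traits = sorted(trait_counts, key=trait_counts.get, reverse=True)
    return {
        "sentiment": sentiment,        # neutral / positive / negative
        "sentiment_score": score,      # how strongly positive or negative
        "emotional_traits": traits,    # ordered by dominance in the text
        "content_warnings": warnings,  # e.g. depictions of racism
    }
```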

You can report a story. Select a hate speech category, enter a comment and push it onto the MeMo moderation queue.

You can post a review. If you include selected personal information or any hate speech, you are informed of the text you need to change before submitting. This protects creators from hate speech.

Reviews also have sentiment ratings, so as well as the typical star rating, you can understand the sentiment of the reviewer's text (positive / negative strength). This will be interesting because people can give 5 stars and then list lots of issues, or 3 stars while giving praise. I'm excited to see the results.
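One way to surface the star-vs-text mismatches described above is a simple check; this is a hedged sketch with thresholds I've invented, not MeMo's implementation:

```python
def review_mismatch(stars, sentiment, sentiment_score, threshold=0.6):
    """Flag reviews where the star rating and the text's sentiment disagree,
    e.g. 5 stars with strongly negative text, or a low rating with praise.

    The 0.6 threshold and the 4/2 star cut-offs are illustrative assumptions.
    """
    if stars >= 4 and sentiment == "negative" and sentiment_score >= threshold:
        return True
    if stars <= 2 and sentiment == "positive" and sentiment_score >= threshold:
        return True
    return False
```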

The Moderator Experience

View the moderation queue, where each item has a status: OK (it's live but needs checking), blocked (requires approval to be made public), or reported (a user claims it's abusive).

  • Fetch a story into the analysis area to see the full text and submit it for analysis.
  • Use a drop-down menu to jump to any personal info or hate speech and view it in context.
  • View all the other details, such as sentiment score (positive, negative) and emotional traits.
  • Approve or reject the story.
  • Paste your own text into the analysis area and submit it as a story or review to see what users will be shown.
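The queue statuses above could be modelled as a small enum, with blocked and reported items sorted ahead of routine checks. A sketch under my own naming assumptions:

```python
from enum import Enum

class QueueStatus(Enum):
    OK = "ok"              # live but needs checking
    BLOCKED = "blocked"    # requires approval to be made public
    REPORTED = "reported"  # a user claims it's abusive

def urgent_first(queue):
    """Order queue items so blocked and reported stories are actioned
    before routine post-publication checks."""
    priority = [QueueStatus.BLOCKED, QueueStatus.REPORTED, QueueStatus.OK]
    return sorted(queue, key=lambda item: priority.index(item["status"]))
```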

Hopefully, many experiences will go live without being sent for moderation, and developers will know their work has been checked by AI. This saves indie developers time.

How we built it

I used Azure Static Web Apps, storage solutions and Azure Functions to provide the VR integration and moderation platform. My main web function calls four APIs: sentiment analysis, emotional traits, PII and hate speech. Elements from these APIs are selectively merged into a single response for MeMo or the VR app.
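The selective merge can be sketched as below. The response shapes I'm reading from (`label`, `score`, `entities`, etc.) are assumptions for illustration, not the services' real schemas:

```python
def merge_analysis(sentiment, traits, pii, hate):
    """Merge the four API responses into the single payload MeMo returns.

    Each argument is the parsed JSON from one service call; only the
    fields the VR app and the MeMo page need are kept.
    """
    return {
        "sentiment": sentiment["label"],
        "sentiment_score": sentiment["score"],
        "emotional_traits": traits["ordered"],
        # (category, offset) pairs let the UI jump to each hit in context.
        "pii": [(e["category"], e["offset"]) for e in pii["entities"]],
        "hate_speech": [(m["category"], m["offset"]) for m in hate["matches"]],
    }
```

Keeping the merge in one function means the VR app and the moderation page consume an identical payload.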

Challenges we ran into

Time constraints were my main challenge: this was my first ever Azure static web app, and I also wanted to integrate it into Idea Engine to validate its usefulness.

Accomplishments that we're proud of

Delivering everything as cloud-based solutions for easy reuse and scalability. Making it easier for other indie developers to branch into metaverse solutions by using AI to reduce the burdens of moderation. My experimental approach to objective reviews using sentiment ratings.

What we learned

A thorough understanding of NLP APIs and offerings, and how to create Azure static web apps.

What's next for Metaverse Moderator

  • Investigate further classifications using Azure's "Document classification" API.
  • Add an additional web service for QueueSubmission rather than having the VR app write to the table.
  • Fine-tune the logic of when stories require moderation.
  • Release Idea Engine with MeMo fully integrated, to test and improve.
