
Fostering Trust & Safety for the Gaming Community

Gaming is leveling up in a serious way. With 3.4 billion gamers worldwide, the $187.7 billion industry is larger than the global music, film and on-demand entertainment sectors combined. As the gaming community continues to grow and players become more connected than ever before, the risks they face are increasing as well. With online gaming evolving into a new kind of social platform, there are many factors to consider in your trust and safety strategy:

  • user volumes in the millions, which can overwhelm traditional human-based moderation services
  • user-generated content (UGC), including 3D models and level scripting
  • voice and video chat
  • new legislation

Game development companies invest significant resources to protect users from inappropriate and harmful material, yet securing an ever-expanding gaming ecosystem with enormous volumes of user-generated content has proven to be a formidable challenge. Gaming companies are trying to strike a balance between providing immersive, next-generation connected experiences and adhering to consumer protection laws.

This is where content moderation services come into play. These services are one tool that game developers can use when designing and implementing an effective security strategy to protect online communities without limiting engagement and creativity.

Stef Corazza, Head of Roblox Studio at Roblox, says, in this video from the most recent Game Developers Conference (GDC), that generative AI (GenAI) can help humans effectively oversee the gaming environment. Roblox, for instance, features real-time translation and moderation of chat “to make sure that the content is kosher and non-toxic.” Such work is not a task for people, he says: “If you have to detect the bad behavior in 100 milliseconds, there’s no way a human can do it.”
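To illustrate the latency point, here is a minimal, hypothetical sketch in Python. The model call, score threshold and 100-millisecond budget are assumptions for illustration, not a description of Roblox's implementation; the idea is simply that automated screening must return a decision within a real-time budget, with a safe fallback (holding the message) when it cannot.

```python
import asyncio

LATENCY_BUDGET_S = 0.1  # the ~100 ms window mentioned above (assumed budget)

async def score_toxicity(message: str) -> float:
    """Stand-in for a real-time toxicity model or moderation API call."""
    await asyncio.sleep(0.02)  # simulated inference latency
    return 0.9 if "placeholder_toxic_phrase" in message.lower() else 0.1

async def screen_chat_message(message: str) -> str:
    try:
        score = await asyncio.wait_for(score_toxicity(message), LATENCY_BUDGET_S)
    except asyncio.TimeoutError:
        # If the check cannot finish in time, hold the message rather than show it.
        return "held_for_review"
    return "blocked" if score >= 0.8 else "delivered"

if __name__ == "__main__":
    print(asyncio.run(screen_chat_message("gg, well played")))
```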

Building the Ideal Trust & Safety Platform

When designing the ideal trust and safety approach for game development, your content moderation tool must be built around data-driven decision-making, the well-being of moderators and the throughput of moderated content. The tool should adapt to different product policies and content tolerance levels while gathering information from many data points and running analytics. There are five fundamental building blocks of a content moderation platform to consider:

  1. Policies and Regulations: the cornerstones of any platform as community rules
  2. Automated Moderation Pipeline: uses artificial intelligence (AI) to label incoming content as objectionable, accelerating the moderation process and reducing (or entirely eliminating) the need for humans to interact with harmful content (see the sketch after this list)
  3. Human Moderators: labeling, verification and edge-case resolution still require people, even for companies at the advanced level
  4. Data Analytics: helps to understand the content, providing performance and accuracy metrics
  5. System Management: the place where you can control and configure the whole system and gain operational insights
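The article does not prescribe a specific implementation, but a minimal sketch can make the pipeline concrete. The example below is hypothetical: the classifier, threshold values and decision labels are assumptions, not part of any product described here. It shows how hard policy rules and an AI-generated score can be combined so that clear-cut violations are removed automatically and borderline items are escalated to human review.

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would come from product policy and model calibration.
AUTO_REMOVE_THRESHOLD = 0.95   # confident violation: remove without any human exposure
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: escalate to a moderator queue

@dataclass
class ModerationResult:
    decision: str   # "remove", "human_review" or "approve"
    score: float    # model-estimated probability of a policy violation
    reason: str

def classify_violation(content: str) -> float:
    """Stand-in for a trained model or third-party moderation API (assumption)."""
    suspicious_phrases = {"placeholder_slur", "placeholder_threat"}  # illustrative only
    hits = sum(1 for phrase in suspicious_phrases if phrase in content.lower())
    return min(1.0, 0.5 * hits)

def moderate(content: str, banned_terms: set[str]) -> ModerationResult:
    # 1. Policies and regulations: hard community rules always apply first.
    if any(term in content.lower() for term in banned_terms):
        return ModerationResult("remove", 1.0, "matched banned term")

    # 2. Automated pipeline: an AI model scores the remaining content.
    score = classify_violation(content)
    if score >= AUTO_REMOVE_THRESHOLD:
        return ModerationResult("remove", score, "high-confidence violation")
    if score >= HUMAN_REVIEW_THRESHOLD:
        # 3. Human moderators handle the edge cases the model is unsure about.
        return ModerationResult("human_review", score, "uncertain, escalate to a person")
    return ModerationResult("approve", score, "below review threshold")

# 4.-5. Data analytics and system management would consume the ModerationResult
# records to track accuracy, throughput and the effect of policy changes over time.
```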

Based on how far a gaming company has progressed in implementing these foundational aspects, it can be placed at one of three trust and safety levels:

  • Starter: basic human moderation, no tools for recognizing harmful content, policies and regulations in their infancy, lots of inappropriate content
  • Medium: basic AI recognition tools and a certain level of community policies in place
  • Advanced: an organization-owned platform with mature policies, a large staff of moderators and investment in trained AI models; most harmful content is deleted before anyone, or at least most users, can see it

For a company that is considered ‘advanced’ in its trust and safety strategy, the content moderation platform ties all five of these building blocks together.

The Power of AI & the Human Element

With the boom in user-generated content, AI has become an important moderation tool for gaming companies; however, it cannot moderate on its own: it needs rules and models to operate ethically. To be predictably effective, businesses need to deploy a hybrid model of human and AI moderation on their platform. If the system is missing any of the building blocks listed above, or has other limitations, humans need to step in and help, either by manually labeling violations or by making the decisions themselves. At this stage, it is important to prevent humans from being continually exposed to highly graphic content.

According to Nico Perony, Director of AI Research at Unity, moderation is essential, and partnering with technology is an effective way to approach the process. In our GDC panel he said that Unity uses AI and machine learning to locate toxic comments in gaming environments. “We’re using AI to augment human moderators, not only to signal bad behavior, but also and more importantly, to encourage positive behavior and positive play.”

There are several AI solutions on the market that can address this concern; however, to reach that ‘advanced’ level, companies will likely need to adapt these generic solutions to their own needs. AI can enrich content metadata to enable better decision-making for humans while limiting their exposure to graphic content. These mechanisms should deliver content to the most relevant human agent in a safe way, keeping moderation speed and accuracy in balance.
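As a rough illustration of that idea, the hypothetical sketch below enriches a flagged item with machine-generated metadata (category, severity, whether a blurred preview should be shown) and routes it to the queue of the most relevant moderation team. The category names, severity scale and routing table are assumptions for illustration, not a description of any specific vendor's solution.

```python
from dataclasses import dataclass

@dataclass
class FlaggedItem:
    content_id: str
    category: str    # e.g. "graphic_violence", "harassment", "spam" (assumed labels)
    severity: float  # 0.0-1.0, produced by the automated pipeline

# Hypothetical mapping from violation category to the specialist team best placed to review it.
ROUTING_TABLE = {
    "graphic_violence": "graphic-content-team",
    "harassment": "community-safety-team",
    "spam": "general-queue",
}

def enrich_and_route(item: FlaggedItem) -> dict:
    """Attach decision-support metadata and pick the most relevant human queue."""
    return {
        "content_id": item.content_id,
        "queue": ROUTING_TABLE.get(item.category, "general-queue"),
        # High-severity graphic material is shown blurred first, so moderators
        # opt in to full exposure only when they actually need it.
        "blur_preview": item.category == "graphic_violence" and item.severity >= 0.5,
        "priority": "high" if item.severity >= 0.8 else "normal",
    }
```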

Looking Ahead to the Future

Online play gives players and communities the opportunity to have memorable and enjoyable experiences, especially at times when traditional social interaction may not be possible. Permitting a completely unmoderated space is a reckless practice; however, balancing users’ desire to participate through their own content with a secure ecosystem for the community is not easy. Game developers shoulder an important responsibility beyond entertainment: protecting their online communities from potentially harmful content. With the right combination of community guidelines and ethical AI, an effective content moderation platform can provide a safe space for everyone.
