We love competitive games. Riot has been dedicated to creating competitive games since the company was founded in 2006. Competition, by nature, brings out incredible passion in players. It’s what makes the highs and lows of playing games feel so meaningful. But it can also be the fuel that causes some players to attack others, disrupting the gaming experience for everyone. 

We know there are issues. We’ve seen the clips, we’ve heard from players about behavior in our games, and we’ve experienced it ourselves when we queue up. While we can’t change the human condition, we can try to shift the way players interact in our games with the aim of creating a better gaming experience.

We are actively working on ways to make our games safer, more inclusive, more fair, and, at the end of the day, more fun for everyone.

There is no shortage of challenges that come with this goal, and no easy solutions either.

This is where Player Dynamics comes in. The goal of Player Dynamics as a design discipline is to build gaming structures that foster more rewarding social experiences and avoid harmful interactions from the jump. Simply put, the discipline aims to answer the question: “How do we foster and sustain healthy communities online?” 

If you’re interested in learning more about the craft of Player Dynamics and how it impacts game design, check out this two-part series we published earlier in the year.

In this update, we’re going to dive into the numbers behind Player Dynamics and what they tell us about creating healthy interactions in game. Player Dynamics is designed to be a Riot-wide discipline, so we have a few different teams focused on the craft. The Central Player Dynamics team is just that: it works at the center of all of our games, developing systems that can impact both current titles and ones in the R&D stage. Each game also has a dedicated team working to address challenges unique to that title.

 


Reports Across All Our Regions and Games

We are extremely humbled that hundreds of millions of people around the world love playing Riot’s games. An astounding number of games are played, and players report other players for a spectrum of reasons, ranging from legitimate concerns to ordinary outcomes like a teammate playing well or poorly. In 2021, we averaged about 240 million reports a month, for a total of slightly under 3 billion reports across our titles in the regions Riot publishes in.

Three billion is a really, really large number. If reviewing these reports were every Rioter’s only job, 365 days a year, each person would still need to review about six reports per minute to keep up.

Plus, each report can’t be treated as a clear signal: quite often, players issue reports we don’t want to act on. Sometimes the behavior being reported feels bad but doesn’t deserve a penalty, like playing poorly in a match; sometimes the reports themselves are deliberately malicious.

However, our goal is that every report gets investigated. That means we have to get creative and build automated solutions that can detect disruptive behavior at scale. These systems need to be able to distinguish behavior that warrants penalties from behavior that doesn’t. For some behaviors, like going AFK, this is easier; for others, like intentional feeding or trolling, it’s really challenging. We’re working continuously to improve these methods of detection, but there’s a long way to go.
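To make the contrast concrete, here’s a minimal sketch of why AFK detection is the easy case: it can be a simple rule over input data, while harder behaviors lean on a trained model’s score. Every name and threshold below is illustrative, not a description of Riot’s actual systems.

```python
# Illustrative only: a rule handles the easy case (AFK), while a
# trained model's score handles harder behaviors like inting.
from dataclasses import dataclass

@dataclass
class Report:
    category: str        # "afk", "comms", "inting", ...
    idle_seconds: float  # longest stretch with no inputs
    model_score: float   # offline classifier output, 0..1

def warrants_penalty(report: Report) -> bool:
    if report.category == "afk":
        # Easy case: long stretches with no inputs are unambiguous.
        return report.idle_seconds > 180
    # Hard cases (inting, trolling): rely on a model, with a
    # conservative threshold so a bad game alone isn't penalized.
    return report.model_score > 0.95
```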

Before we dive into some of the things we’re working on and game-specific stats, we want to share some learnings that inform our Player Dynamics strategy. The first: there’s a difference between someone having a bad day and someone who repeatedly disrupts the experience.

Across the industry, the breakdown falls around 95/5. That means 95% of people who are disruptive in games are only disruptive sometimes. For those players, warnings and light penalties are usually enough to prevent them from reoffending. The other 5% are consistently and intentionally disruptive. We have zero tolerance for people who queue up just to ruin other players’ gaming experience.

Another important point: penalties work. Of the players who received a penalty in 2021, fewer than 10% received another one within the calendar year.

Actions have consequences and many players do change after receiving those consequences. 

Equipped with this data and some of these learnings from around the gaming industry, we’re developing a variety of new ways to evaluate player behavior. Here are some of the things we currently have in the works.

 


Looking Forward

Automated Voice Evaluation

Currently we rely on repeated player reports and manual processes to determine when voice chat abuse has occurred. But a manual process requires constant monitoring and can only look at so many instances, which is why we are working to develop automated voice evaluation.

Similar to our text evaluation systems, voice evaluation is designed to help us automatically catch bad actors who are using voice comms to disrupt the gaming experience. Each report we receive helps inform this system to make sure it can detect the wide range of ways people around the world use voice chat to communicate. We want to catch the disruption while also making sure there’s no interruption to the hype after that crucial clutch. 
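As a rough sketch of what such a pipeline might look like, the natural shape is speech-to-text followed by the same kind of scoring used for chat. Both helper functions below are hypothetical stand-ins; the post doesn’t describe the actual implementation.

```python
# Hypothetical pipeline shape: transcribe a reported voice clip,
# then score the transcript with a text evaluation model.

def transcribe(audio: bytes, language: str) -> str:
    """Stand-in for a speech-to-text step."""
    raise NotImplementedError

def score_toxicity(text: str, language: str) -> float:
    """Stand-in for a text evaluation model (0 = fine, 1 = abusive)."""
    raise NotImplementedError

def should_escalate(audio: bytes, language: str) -> bool:
    """True if a reported clip should be escalated for review/action."""
    transcript = transcribe(audio, language)
    return score_toxicity(transcript, language) > 0.9  # conservative cut
```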

Our Central Player Dynamics team is working to put this together. The first game it will impact is VALORANT and, once it works well, it will be expanded to other titles that use voice comms. 

Improving Our Text Evaluation

For most of our games, text is the main way people communicate with teammates and opponents. It’s also a big source of potential disruption. We have invested heavily, and will continue to invest, in improving the way we monitor text in game, including both player names and chat. To better monitor inappropriate names, we’re continuing to invest in machine learning and adding additional language support that will allow us to automatically catch harmful text in-game, at scale, for players around the world.

In addition, we are expanding our zero tolerance word list. Some words are simply not okay to use in-game, ever. We are adding more variant spellings and languages to this list, since swapping a letter for a number doesn’t change the intent behind the word.
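As a small illustration of what handling variant spellings can look like, here’s a sketch that normalizes common letter-for-number swaps and repeated characters before checking a list. The substitution map and placeholder word are made up for the example; the real list is private.

```python
# Illustrative normalization before matching a zero-tolerance list.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

ZERO_TOLERANCE = {"slurexample"}  # placeholder; the real list is private

def is_zero_tolerance(token: str) -> bool:
    normalized = token.lower().translate(LEET_MAP)
    # Collapse repeated characters ("sl0000r"-style padding).
    collapsed = "".join(c for i, c in enumerate(normalized)
                        if i == 0 or c != normalized[i - 1])
    return normalized in ZERO_TOLERANCE or collapsed in ZERO_TOLERANCE
```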

We will be taking the new processes created by Central Player Dynamics and applying them across all our titles, including replacing League’s legacy text detection system, which should mean a marked improvement in text evaluation for League players.

Credibility-Driven Report Evaluation

We are expanding our ability to detect outliers who receive significantly more reports over multiple games than the average population while slipping by our automated detection systems. This system currently focuses only on disruptive behavior in comms, but we’re in the process of expanding it to gameplay offenses and inappropriate names. 

Expanding requires careful investigation of reporting habits and conservative tuning to avoid unjustly penalizing players. However, we have had great results so far and expect this will be valuable in identifying high-impact but hard-to-detect disruption, such as intentional feeding.
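One simple way to frame this kind of outlier detection is a z-score over per-player report rates, with a deliberately high cutoff to stay conservative. This is a sketch of the general technique; the post doesn’t say which statistic the system actually uses.

```python
# A z-score sketch of report-rate outlier detection (illustrative).
from statistics import mean, stdev

def report_rate_outliers(reports_per_100_games: dict[str, float],
                         z_cutoff: float = 3.0) -> list[str]:
    """Players reported far more often than the population average."""
    rates = list(reports_per_100_games.values())
    mu, sigma = mean(rates), stdev(rates)
    return [player for player, rate in reports_per_100_games.items()
            if sigma > 0 and (rate - mu) / sigma > z_cutoff]
```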

Real Time Evaluation

We are working towards the ability to take action on chat-based offenses in real time. Imagine a system that could help people check themselves when they start to send an inappropriate message to their teammates. This would enable players to adjust their behavior mid-match, but we want to make sure it doesn’t hurt the player experience, so we’ll be trying different approaches across different games to find the right fit. We began working on this in 2022 and will deploy it more broadly once it’s working correctly.
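A sketch of what a pre-send check could look like: score the message before delivery and, past certain thresholds, nudge or block instead of sending. The scoring function is a hypothetical stand-in for a real-time model, and the thresholds are illustrative.

```python
# Hypothetical pre-send hook: nudge or block before a message sends.

def score_message(text: str) -> float:
    """Stand-in for a real-time chat model (0 = fine, 1 = abusive)."""
    raise NotImplementedError

def on_chat_submit(text: str) -> str:
    score = score_message(text)
    if score > 0.9:
        return "block"    # zero-tolerance content never sends
    if score > 0.6:
        return "confirm"  # nudge: "Are you sure you want to send this?"
    return "send"
```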

Investment in ProSocial

ProSocial behavior is, simply put, the intent to benefit others.

In games, that means a focus on rewarding players who improve the gaming experience for others, not just punishing the ones who disrupt the experience.

We are in the midst of developing a new framework, in collaboration with other major game developers, on ways to focus our efforts on rewarding positive behavior in addition to mitigating disruptive behavior. This is an expanding topic and one that we feel will make a meaningful positive shift in online gaming communities. We will have more to share on this topic in the future so keep your eyes peeled!

Industry Partners and Communities

Disruptive behavior isn’t a problem that is unique to games. We’ll continue to work with partners inside and outside of gaming who believe in creating safe communities and fostering positive experiences in online spaces, including the Fair Play Alliance and #TSCollective.

By working together with partners, we can share knowledge and grow our solutions to the complex problems that impact not only our players but everyone who interacts online.

 


The Stats Behind the Game

Central Player Dynamics 

Central Player Dynamics works with all of our games and focuses on detecting disruption in communication between players. Reports related to text and voice communication are evaluated using CPD’s systems, while gameplay-specific reports, like AFK or intentional feeding (inting), are handled by the individual game teams.

Text reports make up the vast majority of reports that run through CPD. There were 120 million games with at least one text report, which resulted in 13 million games where a penalty was issued. These penalties ranged from warnings to 365+ day bans, depending on the nature of the transgression and the player’s history of previous transgressions.

League of Legends and Teamfight Tactics

Our League team is currently issuing about 700,000 penalties a month across text detection, AFK detection, and inting detection.

LeaverBuster, our AFK detection system, monitors every game to make sure players who quit early and impact their teams are punished for it.

We use tiers so players who AFK more receive harsher penalties. And for ranked games where your teammates go AFK, we provide early surrender and LP mitigation so you aren’t punished for your teammates' tilt. 
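As a sketch of how tiered escalation can work, here’s an illustrative lookup where repeat offenses within a recent window map to harsher penalties. The thresholds and penalties are made up; LeaverBuster’s actual tiers differ.

```python
# Illustrative escalation tiers for repeat AFKs (not LeaverBuster's
# real values): more recent offenses -> harsher penalty.
AFK_TIERS = [
    (1, "warning"),
    (3, "5-minute queue delay"),
    (5, "15-minute queue delay"),
    (7, "14-day ranked restriction"),
]

def afk_penalty(recent_afk_count: int) -> str:
    penalty = "no action"
    for threshold, tier_penalty in AFK_TIERS:
        if recent_afk_count >= threshold:
            penalty = tier_penalty
    return penalty
```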

But leaving is just one way to tilt teammates; another is feeding. This can be a bit more difficult to track, so we use a machine learning model that tracks seven different data points across all champions to confidently detect when someone is intentionally feeding and not just playing poorly. As we’ve continued to update it, false positives have become extremely rare.
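The post doesn’t name the seven data points, so this sketch only shows the shape of the approach: a per-match feature vector scored by a trained, per-champion model, with a very conservative threshold. Every field and number here is an assumption for illustration.

```python
# Illustrative feature vector for inting detection; the real seven
# data points aren't public, so these fields are guesses.
from dataclasses import dataclass

@dataclass
class MatchFeatures:
    deaths_per_minute: float
    damage_dealt_share: float
    gold_spent_ratio: float    # gold spent vs. gold earned
    item_sell_events: int
    ward_score: float
    time_near_enemies: float   # running it down vs. normal deaths
    movement_entropy: float    # erratic, purposeless pathing

def inting_probability(features: MatchFeatures, champion: str) -> float:
    """Stand-in for a trained per-champion model's output (0..1)."""
    raise NotImplementedError

def is_intentional_feeding(features: MatchFeatures, champion: str) -> bool:
    # Very conservative cut so poor play alone is rarely flagged.
    return inting_probability(features, champion) > 0.98
```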

If you want to learn more about how the League team is working on player behavior, check out this post from earlier in 2022. 

VALORANT

In addition to voice chat, VALORANT’s Social and Player Dynamics team is also focused on AFKs and inting. Right now, about 27 of every 1,000 VALORANT players are showing up AFK. Some of these are bots trying to grind out XP, but we’ve begun seeing these bots end up in lobbies filled with other AFK bots, and if no damage is done, no experience is earned.

For players who are still at their keyboard but are intentionally throwing the game, our inting detection currently has one method with another in the works. 

The current method takes in all the inputs after the game and decides whether a player’s poor performance was intentional. But this method only catches bad actors after the fact; it doesn’t help when you’re down 11 rounds and understandably not having a good time.

So the VALORANT team is working on real-time inting detection. There’s a lot of gray area here, though, since bad play can have many causes and intentional throwing accounts for only a small share of them. Once the VALORANT team has reduced false positives to a small number, we will roll out this new method to work alongside the post-game detection.

Wild Rift 

Wild Rift’s processes have evolved in 2022. Previously, AFK detection simply checked whether players were making any inputs at all. Because some players were evading this simple check, we added new layers to make sure a player is actually in the game and performing useful inputs, not just moving forward.
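A sketch of what “layers” can mean in practice: check raw inputs first, then whether those inputs are actually useful. All fields and thresholds are illustrative.

```python
# Illustrative layered AFK check: inputs alone aren't enough.
from dataclasses import dataclass

@dataclass
class PlayerActivity:
    inputs_per_minute: float        # any button/stick activity
    distinct_actions: int           # moves, abilities, attacks, recalls
    objective_participation: float  # 0..1 share of fights joined

def is_effectively_afk(a: PlayerActivity) -> bool:
    if a.inputs_per_minute < 1:       # layer 1: no inputs at all
        return True
    if a.distinct_actions < 3:        # layer 2: inputs, but trivial
        return True                   #   (e.g. just walking forward)
    return a.objective_participation < 0.05  # layer 3: never engages
```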

2022 also brought a new inting system to Wild Rift, which uses machine learning to determine whether a player’s poor play is intentional. Since March 2022, the system has caught just under 2,000 instances of players intentionally throwing games. As the machine learning, well, learns, this number will likely increase as more inting players are flagged.

And finally, there’s wintrading detection. This looks at a variety of factors, including what we call “co-against” players: players who are constantly playing with and against the same group of people. By looking at patterns in co-against players, the length of games, and the win-loss record for co-against lobbies, the detection system can identify wintrading.
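To make the “co-against” idea concrete, here’s a small sketch that counts how often the same pair of players lands on opposite teams; unusually frequent pairs would then get their game lengths and win-loss symmetry examined. The data shapes and cutoff are illustrative.

```python
# Illustrative co-against counting for wintrading detection.
from collections import Counter
from itertools import product

Match = tuple[set[str], set[str]]  # (team A player ids, team B player ids)

def co_against_counts(matches: list[Match]) -> Counter:
    """How often each pair of players faced each other."""
    pairs = Counter()
    for team_a, team_b in matches:
        for a, b in product(team_a, team_b):
            pairs[tuple(sorted((a, b)))] += 1
    return pairs

def suspicious_pairs(matches: list[Match], min_meetings: int = 10) -> list:
    # Pairs meeting far more often than matchmaking chance allows;
    # game-length and win-loss checks would follow.
    return [pair for pair, n in co_against_counts(matches).items()
            if n >= min_meetings]
```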

The Importance of Transparency

Going from one game to a bunch of titles brought plenty of new challenges with it. With more titles on the horizon, we’re working to instill Player Dynamics thinking in the earliest stages of game design to curate better communities from the jump. 

At the same time, we believe it’s important to be transparent about the data we are seeing across all of our titles. These are complicated problems, and there’s no way to truly solve them completely. That said, we are committed to improving the gaming experience for all of our players and will be posting more regular updates about the work we are doing towards that end.

As always, thank you all for playing.