Stop Normalizing Player Harassment

Last week I was in San Francisco for the 2018 Game Developers Conference, and I actually learned some things. This year I went with the Independent Games Summit pass, which got me into the indie-themed talks on Monday and Tuesday as well as the sponsored talks. I highly recommend the pass: it’s much cheaper than the full pass and the talks were high quality. It does sell out, though, so you’ll need to buy it early. I’m always very interested in player psychology, so I enjoyed several sessions from the Advocacy and Fair Play Alliance tracks. The single best session I went to this year was “Microtalks in Player Behavior,” presented by several academic and industry researchers and focused on player harassment and negative/toxic behaviors.

Melissa Boone started with an overview of Microsoft’s research into player harassment. Many players reported developing a “thick skin” to deal with harassment, but from follow-up reports it was clear that this wasn’t really working. Voice chat was found to be particularly dangerous because it reveals traits like gender and background that are frequent targets for harassment. Also, many players wouldn’t bother reporting slurs: they were so common that reporting them felt like tattling.

Andrew Przybylski from Oxford reported results from an academic study of cyberbullying in mobile gaming. In a sample of 2,000 14- to 15-year-olds in the UK, 33% had been bullied in mobile games, and 9% had been bullied persistently. Being male, being nonwhite, and having previously been bullied all made people more likely to be victims. 40% indicated they were significantly affected by the bullying, but sadly only 4% reported the issue to the developer, as the vast majority of mobile games do not have any way of reporting issues like this.
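To be clear about how small that missing piece is, here’s a minimal sketch of an in-game abuse report submission in TypeScript. The `AbuseReport` shape and the `/api/reports` endpoint are my own invented illustration, not anything from the talk; the point is just that capturing who, what, and a bit of context is a modest amount of plumbing.

```typescript
// Hypothetical shape of an in-game abuse report (invented for illustration).
interface AbuseReport {
  reporterId: string;        // player filing the report
  offenderId: string;        // player being reported
  category: "harassment" | "hate_speech" | "cheating" | "other";
  matchId?: string;          // optional match context for moderators
  chatExcerpt?: string;      // recent chat lines, if available
  createdAt: string;         // ISO timestamp
}

// Submit the report to a (hypothetical) backend endpoint.
async function submitAbuseReport(report: AbuseReport): Promise<boolean> {
  const response = await fetch("https://example.com/api/reports", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
  return response.ok;
}
```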

Natasha Miller reported results from Blizzard’s research into player harassment in Overwatch. At the start of the investigation, 90% of players reported receiving harassment, while 50% felt that reporting was completely ineffective. Blizzard first tried sending emails to reporters after enforcement, but this had no effect on perception. They switched to in-game messaging, which was much more effective, and they also added in-game warnings for perpetrators. Together, these changes improved perception of report effectiveness by 40% and reduced toxic chat by 25%. These numbers aren’t super solid because they could easily be confounded by changes in enforcement, which were not discussed, but they do match what I’ve heard from Riot before, where in-client messaging was key to creating change.
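To make that feedback loop concrete, here’s a rough sketch of what closing the loop in-client might look like. The names here (`EnforcementAction`, `NotificationService`, the message text) are my own invention and not Blizzard’s actual system; the design point is simply that both the confirmation to the reporter and the warning to the offender are delivered inside the game client rather than by email.

```typescript
// Hypothetical types, invented for illustration.
interface EnforcementAction {
  reportId: string;
  reporterId: string;
  offenderId: string;
  penalty: "warning" | "chat_restriction" | "suspension";
}

// Assumed interface for whatever delivers in-client notifications.
interface NotificationService {
  sendInGameMessage(playerId: string, message: string): Promise<void>;
}

// Close the loop after enforcement: confirm to the reporter, warn the offender.
async function closeReportLoop(
  action: EnforcementAction,
  notifier: NotificationService
): Promise<void> {
  // In-client confirmation to the reporter (per the talk, far more effective
  // than email at changing perception of report effectiveness).
  await notifier.sendInGameMessage(
    action.reporterId,
    "Thanks for your report. Action has been taken against a player you reported."
  );

  // In-game warning to the offender, tied to the behavior, not the reporter.
  await notifier.sendInGameMessage(
    action.offenderId,
    `Your recent behavior violated community guidelines and resulted in a ${action.penalty.replace("_", " ")}.`
  );
}
```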

Katherine Lo gave a really interesting talk comparing industry approaches to player harassment with the D.A.R.E. anti-drug program of the 90s, which was largely a failure. One of that program’s negative effects was that, by emphasizing that “drugs are everywhere,” it helped to normalize the behavior and made teens who abstained feel socially isolated. Relatedly, exaggerating the scope of online harassment has been shown to make men, specifically, think the problem is less important. A Zero Tolerance policy also groups minor issues with more severe ones, which can encourage progression from one to the other because they are “just as bad.” Lastly, programs that try to discourage teen behavior from the top down do not generally work, as teens are naturally resistant to arbitrary authorities. It’s important for developers to establish that the judgment that a behavior is not socially acceptable comes from peers, not from “game cops.” When handing out punishment, transparency also helps: it makes the punishment feel less arbitrary, which reduces resistance. Katherine’s talk had the most interesting ideas but the least hard data, and I’m really interested to see it expanded into some proper empirical studies.

Lastly, Naomi McArthur gave an update on player toxicity research from Riot. She discussed how personality factors and cognitive biases can increase toxicity, but much of her research focused on the importance of ingroup/outgroup dynamics. On a team of 5 players you want to create a social ingroup that is fighting against the enemy outgroup, but the pressures of ranked matches and game mechanics make this difficult. When a team of 5 had one premade duo, it showed no more toxicity than a team with no duos, but 2 premade duos showed a 36% increase because it pitted the two duos against each other socially. She also mentioned the importance of community norm perception: if players perceived that intentionally feeding kills to the enemy team was a “common problem,” they were significantly more likely to take part in it themselves.

All of these talks hit on a common theme: if abuse of other players is perceived as normal, players are more likely to commit it. This makes sense, because if your peers seem to find something socially acceptable, you’re more likely to do it yourself. Punishments from above can help to reduce this, but they can only go so far. The real key is to make it clear that a player’s peers do not approve of the abusive behavior. Sure, there are some dedicated trolls who actively enjoy violating social norms, but they are not the direct source of most of the abusive behavior in games. The research shows that the vast majority of players do not approve of player harassment, and game developers need to find ways to make it clearer that harassment is not accepted by the player community at large.