Riot Games’ and Ubisoft’s recently announced joint venture, the Zero Harm in Comms research initiative, has raised more than a few eyebrows online. How, people ask, can two companies now infamous for their poor internal cultures claim to combat online toxicity in any meaningful way, when they took so long to address their own problems? Others say that unless something fundamentally changes in the way these games are designed, any outcome from this initiative is meaningless.
These are both valid points, and they indicate where a major part of the problem with online toxicity stems from. Games like League of Legends and Rainbow Six Siege rely on a hypercompetitive and inherently antisocial design approach that rewards aggressive, highly skilled players. It’s telling that for some people in the League community, toxicity is not a bug but a feature: if you’re ‘tilt proof’ and can endure abuse or harassment, that endurance is viewed as a positive player ability.
Toxic vs Toxic
Not only that, but League is infamous for its addictive gameplay. There may well be a degree of fearmongering around video game addiction, but there’s no denying that a downward spiral of frustration at losing matches is unlikely to engender a positive mental attitude in players. As with any kind of nasty behaviour, treating toxicity as simply an inherent issue in a segment of your audience denies the role the game itself plays in encouraging a toxic attitude by its very design.
AI may be an impressive-sounding solution to the problem of online toxicity, but the fact is that it’s an incredibly complex thing to navigate. It’s easy to bar obscenities from chat, which is practically standard in the gaming industry now. But what about people who use specific words and phrases, ‘dog whistles’, to signal to others who know what they mean, usually at the expense of the target of their ill-will? Would you need to keep the AI updated constantly? Would it need to become increasingly stringent until it’s essentially censoring anything close to a flagged phrase?
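To see why the ‘easy’ part is easy and the rest is not, here is a minimal sketch of the kind of keyword filter that is already standard in games chat. The word list and function names are illustrative placeholders, not any real moderation system’s data:

```python
import re

# Hypothetical blocklist of obscenities; a real one would be far larger.
BLOCKLIST = {"badword1", "badword2"}

def filter_message(message: str) -> str:
    """Mask exact blocklisted words with asterisks, leaving the rest alone."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKLIST else word
    return re.sub(r"[a-zA-Z0-9]+", mask, message)

print(filter_message("you badword1"))        # exact match: masked
print(filter_message("you b4dword1"))        # trivial misspelling: slips through
print(filter_message("nice weather today"))  # coded phrase: looks entirely innocent
```

The sketch makes the asymmetry obvious: exact matches are trivial to catch, but a single swapped character defeats the filter, and a dog-whistle phrase is indistinguishable from ordinary chat without context the filter simply doesn’t have.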
The fact is, people who want to troll within online games will always find a way, for one simple reason: attention. Knowing they’ve hurt someone else’s feelings or made other people laugh at their target is what gives them gratification, and whilst schadenfreude is something we’re all guilty of, some people take it to the extreme. These are not people you can simply programme away, because they take just as much pleasure in finding a way around the rules to abuse others as they do in the act itself.
Preemption vs prevention
The idea of ‘preemptive moderation’ also sounds as if it would present many more problems than it would solve. After all, how would this function? Would patterns in a player’s regular communications affect the moderation system’s predisposition towards them? Not only that, but as some people have pointed out, stringently moderating their audience would be counterproductive, as it would risk alienating a fiercely loyal audience that, most importantly, is also a paying one. This is where mobile may have an advantage: communication is more difficult on a mobile device in most games, whereas on PC the ease of text chat makes it a relatively simple matter to flame someone.
Proactive solutions would be a better idea, and preemptive moderation is not that, especially since there is as yet no indication of how you would even moderate or discipline a player before they’ve broken any rules. Riot Games and Ubisoft would be better served exploring the root cause of toxicity in their games, whether that is indeed an inherent problem with their audience or something triggered by their titles’ design. Either way, it’s not a problem you can feed into a computer and hope for a solution. It’ll take time and, more importantly, money, and I’m not confident Riot Games and Ubisoft are willing to spend either.