Ubisoft (Assassin’s Creed) and Riot Games (League of Legends) have teamed up for a major research collaboration, with the two companies working to create safer online spaces by regulating toxicity and shaping moderation tools around game data. The new “Zero Harm in Comms” project is designed around collective action and aims to “create a cross-industry shared database and tagging ecosystem that brings together in-game data, which will better train preemptive moderation tools based on AI to detect and mitigate disruptive behavior.”
Data will be collected from various online spaces, with information then used as the basis to train AI moderation tools around the world. As the language changes online, this data will reflect current trends and the “disruptive behaviors” of those who create toxic spaces for other players.
“Disruptive player behavior is a problem that we take very seriously, but also one that is very difficult to solve,” Yves Jacquier, executive director of Ubisoft La Forge, said of the company’s goals in a press release. “Through this technology partnership with Riot Games, we are exploring how to better prevent in-game toxicity as designers of these environments with a direct connection to our communities.”
Riot acknowledges that disruptive behavior is not unique to online games, but believes change has to start somewhere. Toxic behaviors online can influence the actions of people in real social settings and have cyclical impacts on users.
The company has already taken drastic measures, such as disabling cross-team chat (also known as general chat) in League of Legends in an effort to combat toxicity between players.
By identifying the root causes of this toxicity and working to create stricter limits for moderation, Ubisoft and Riot Games hope to use their game data to create more “positive experiences in online spaces.”
As Ubisoft and Riot Games step up their efforts, other companies like Microsoft are also looking to boost their online transparency and security. Recently, an Xbox-led transparency reporting initiative revealed that the company proactively disciplined around 4.3 million bot accounts in its online social spaces in 2022.
It is hoped that renewed attention to safety in these spaces will reduce overall toxicity and create a sense of welcome for everyone. Ubisoft and Riot plan to share the results of their initial research phase in 2023.