The company behind Call of Duty is using data collected from players of its most popular titles to create an artificial intelligence that would curb trolling in multiplayer games. To this end, Activision has teamed up with Caltech, the California Institute of Technology.
Professors and researchers are taking part in a two-year project that will teach the AI to detect, prevent, and respond to toxic behavior, using data collected from players in Warzone 2 and Modern Warfare 2’s online modes as the basis. Call of Duty’s developers, in turn, will implement the artificial intelligence in their games.
“Whether it’s trolling, racism, sexism, doxing, or just widespread harassment, the Internet has a big problem with bad behavior. As the global network has grown, harmful behavior has become more extreme, and it has become clear that moderators need better tools at their disposal,” Caltech’s statement explains.
.@AnimaAnandkumar and @rmichaelalvarez are teaming up with @Activision on a two-year research project that aims to create an #AI that can detect abusive online behavior and help the company's support and moderation teams to combat it. https://t.co/wOSSoJwJgF
— Caltech (@Caltech) December 15, 2022
“Partnering with Activision not only gives researchers access to data on how people interact in online games, but also to the expertise of developers. The project will contribute to the creation of artificial intelligence that detects online abuse and will help Activision’s moderation teams combat it,” the statement further reads.
Computer science professor Anima Anandkumar, who has previously trained artificial intelligence to fly drones and study coronaviruses, will lead the project alongside Michael Alvarez, a professor of political science and computational social science who has used machine learning tools to study political trends on social media.
Supporting them will be a team of engineers from Activision. A key task for the researchers is to understand how players interact with each other, what language they use, and what biases they hold; this understanding will help teach the AI to recognize trolling and other inappropriate behavior.
So far, it is unclear what powers the algorithm will have. Most likely it will serve only as a tool that flags problems to moderators, though it is also possible that the system will be given some degree of autonomy.