Esports and video games have grown from a niche hobby into a multibillion-dollar industry that spans the world. It is no longer an unexpected sight to see an arena filled with 50,000 fans watching their favorite esports teams compete. Despite this incredible growth, however, esports and the wider gaming sphere remain plagued by inequality and hate. From developers that maintain toxic workplace cultures to individual games so rife with hate that players from certain marginalized groups will not play them at all, these problems must be addressed if esports is to become a truly global industry in which everyone can compete.
Extremist hate messages are increasingly pumped through mainstream online platforms, including those that host virtual games, even innocuous children's games. They are also pushed in mainstream physical spaces, such as music shows and mixed martial arts gyms. In contrast to the recent past, when extremist hate was largely tucked away in hard-to-access racist rallies, publications, and group meetings, it now appears, unsolicited, in the daily lives and on the screens of ordinary Americans, including young children with limited ability to evaluate and process such messages. Educational interventions are needed to reduce susceptibility to misinformation and disinformation, to mitigate harm, and to prevent radicalization among those, especially young people, who encounter extremist hate messages online or in person.
There is considerable concern about the damaging social and individual consequences of the growing number of actions and expressions of extremist hate by military personnel and veterans. It is unclear to what extent such extremism reflects intentional infiltration of the military by aspiring domestic terrorists seeking access to, and training in, high-powered weaponry, and to what extent it is the product of independent, self-reinforcing hate-based subcultures. We will bring together experts at CMU, Pitt, and adjacent institutions such as the VA to jointly conduct research on, and develop interventions for, online and in-person extremist hate subcultures in the military and among veterans, and to explore connections between gun violence and extremist hate.
Many approaches to identifying, studying, and intervening in hateful activities online rely on detecting signals of hatred and verbal aggression in text. Existing tools are limited, however, and often fail to keep pace with evolving language. For example, effective enforcement of content moderation policies on social media platforms is difficult because extremist hate groups use constantly evolving coded language, because a message's hatefulness can depend on context and the background of the speaker, and because online communities themselves change rapidly. As a result, content that should be removed under a platform's policy may be missed entirely or removed only after a significant delay. Beyond hate speech detection, there is also a need for early indicators of emerging hate communities and for techniques to disrupt their formation and the activities that lead to it. This working group is developing collaborative research projects that combine computational analysis of large-scale datasets with expertise from linguists, sociologists, psychologists, political scientists, anthropologists, and others to help understand and counter the formation of hate communities and to moderate hateful content, as illustrated in the sketch below.
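To make the moderation difficulty concrete, the following minimal sketch (an illustration under assumed conditions, not the working group's actual tooling or any platform's real policy) shows how a static keyword blocklist, the simplest form of text-based detection, catches literal matches but misses leetspeak substitutions and newly coined coded terms; the strings slurA, s1urA, and pepper are hypothetical placeholders rather than real slurs or codes.

```python
# Minimal sketch of why static, keyword-based moderation struggles with
# evolving coded language. The terms below are harmless placeholders,
# not a real hate lexicon, and the approach is purely illustrative.
import re

# A fixed blocklist, standing in for a static moderation policy.
BLOCKLIST = {"slurA", "slurB"}

def flag_message(text: str, lexicon: set) -> bool:
    """Return True if any lexicon term appears as a whole token in the text."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    lexicon_lower = {term.lower() for term in lexicon}
    return any(token in lexicon_lower for token in tokens)

messages = [
    "you are a slurA",        # caught: literal match against the blocklist
    "you are a s1urA",        # missed: leetspeak substitution evades the match
    "you are such a pepper",  # missed: a coded term the list has never seen
]

for msg in messages:
    print(flag_message(msg, BLOCKLIST), "|", msg)

# One (still fragile) mitigation: fold newly observed coded terms back into
# the lexicon, which is why such lists must evolve along with the language.
BLOCKLIST.add("pepper")
print(flag_message("you are such a pepper", BLOCKLIST))  # now caught
```

The sketch is deliberately naive: it cannot account for context, speaker background, or community-specific meaning, which is precisely why the group pairs computational analysis with expertise from linguists, sociologists, and other domain researchers.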