Why Moderators Can’t Protect Online Communities on Their Own

Olivier Sibai, Kristine de Valck, Marius Luedicke

Research output: Contribution to specialist publication › Article

Abstract

The data on online abuse is sobering: Nearly one in three teens has been cyberbullied, and one in five women has experienced misogynistic abuse online. Overall, some 40% of all internet users have faced some form of online harassment. Why have online communities failed so dramatically to protect their users? An analysis of 18 years of data on user behavior and its moderation reveals that the failure stems from five misconceptions about toxicity held by the people responsible for moderating online behavior: that people experiencing abuse will leave, that incidents of abuse are isolated and independent, that abuse is not an inherent part of community culture, that rivalries in communities are beneficial, and that self-moderation can and does prevent abuse. These misconceptions drive current moderation practices. For each misconception, the authors present findings that both debunk the myth and point to more effective ways of managing toxic online behavior.
Original language: English
Specialist publication: Harvard Business Review
Publication status: Published - 5 Nov 2024

Keywords

  • online communities
  • toxicity
  • moderation
  • platform governance
  • trolling
  • online violence
  • flaming
