In its early life the internet inspired optimism that it would improve the world and its people, but that has been supplanted by alarm about harmful, often viral words and images. Though the vast majority of online content is still innocuous or beneficial, the internet is also polluted by hatred: some individuals and groups suffer harassment or attacks, while others are exposed to content that inspires them to hate or fear other people, or even to commit mass murder.
Hateful and harmful messages are so widespread online that the problem is not specific to any culture or country, nor can such content be easily classified under terms like "hate speech" or "extremism": it is too varied. Even the people who produce harmful content, and their motivations for doing so, are diverse. Online service providers (OSPs) have built systems to diminish harmful content, but those systems are inadequate for the complex task at hand and have fundamental flaws that cannot be solved by tweaking the rules, as the companies have been doing so far. The stakeholders who have the least say in how speech is regulated are precisely those who are subject to that regulation: internet users. "I've come to believe that we shouldn't make so many important decisions about speech on our own," Mark Zuckerberg, the CEO and a founder of Facebook, wrote last year. He is correct.
Daunting though the problem is, there are many opportunities for improvement, and they have been largely overlooked. The widespread distress about harmful content is itself an opportunity: it means millions of people are paying attention, and it will take broad participation to build online norms against such content. Mass participation of this kind is neither far-fetched nor unfamiliar: many beneficial campaigns and social movements have been born and developed through mass participation online.
This paper offers a set of specific proposals for better describing harmful content online and for reducing the damage it causes, while protecting freedom of expression. The proposals are aimed mainly at OSPs, since they regulate the vast majority of online content; taken together, OSPs operate the largest system of censorship the world has ever known, controlling more human communication than any government. Governments, for their part, have tried to berate or force the companies into changing their policies, with limited and often repressive results. For these reasons, this paper focuses on what OSPs should do to diminish harmful content online.
The proposals focus on the rules that form the basis of each regulation system, as well as on other crucial steps in the regulatory process, such as communicating rules to platform users, giving multiple stakeholders a role in regulation, and enforcing the rules.