Volume, speed, and accessibility are among the features the internet introduced that have significantly changed, and in many ways improved, the information ecosystem. The amount of information shared online and the reach of that information are unprecedented and have been hailed as a democratizing force. The ability to store and find information online has dramatically enhanced people’s access to information, political participation, research and learning capacities, hiring practices, due diligence, and so forth. While international standards on freedom of expression, content, and dissemination have remained mostly unchanged, volume, speed, and accessibility are novel. But are they sources of new redressable harms?
Historically, the law has looked to volume, speed, and accessibility to assess damages. These features do not speak to the nature of the speech—whether it is harmful or not, legal or illegal—but rather to the nature and scope of the damages potentially generated by illegal or legally reprehensible content. As a matter of law, the content is what causes the harm. Therefore, if the content is ruled to be harmful—and the harm is legally redressable and attributable—then most courts around the world would look to volume, speed, accessibility, and permanence to determine the damages. In a defamation lawsuit, for example, advocates and judges would first need to establish the defamatory nature of the statements. Only then would they look at the accessibility and reach of those statements to assess damages. The volume or replication of the speech would also serve as a basis for calculating the compensation owed to the injured party. However, speed and accessibility would not, in and of themselves, render the statement defamatory.
Agustina Del Campo heads the Center for Studies on Freedom of Expression and Access to Information (CELE) at the University of Palermo in Argentina.
With the advent of the internet, however, there seems to be increasing consensus that speed, volume, and accessibility should be considered specific sources of harm. Yet advocates and judges have faced severe difficulties in fitting these categories within existing civil and constitutional standards. Volume and speed often cannot be attributed to a single individual; these phenomena do not depend on the willingness or intention of the content’s author. A piece of content may go viral without its author intending it, or it may fail to go viral despite the author’s best efforts. Virality is achieved through the concurrent conduct of third parties who often act in uncoordinated and unrelated ways. Since virality does not represent individual conduct, nor can it be effectively attributed, causality cannot be established. Thus, legal principles dictate that such content cannot generate liability the way that publishing, editing, or selling a specific work can.
Courts, legislators, and advocates have been unable to properly characterize volume, speed, and accessibility, let alone deal with them directly. Instead, they have looked for ways to factor these features into existing law without addressing the underlying shortcomings of existing legal frameworks. It is not unusual to find court decisions or bills tweaking and tuning legal standards to account for these factors, usually addressing them indirectly and opaquely.
Moreover, what has been popularized as harmful but legal content often refers to content that, at an individual level, does not amount to illegal or redressable damage but, at a collective level, may be deemed unjust or unfair to targeted victims given the speed at which the content spreads and the number of views it garners. This messy, hybrid, and contradictory category of speech emerged from the difficulty of cataloging content that was potentially harmful to vulnerable groups, democracy, public health, safety, and the economy, among other interests, but failed to reach the threshold for illegal or legally reprehensible content. In many instances, legitimate content becomes problematic when aggregated with other similar content and/or made available to wider audiences. There are many current examples of this trend—where the content is not illegal by itself but can allegedly produce societal harms when aggregated—including anti-vaccine movements, COVID-19 disinformation, and electoral disinformation.
One critical example relates to harassment on the internet. The accumulated gender violence directed at women online is often manifest, immediate, and visible, yet hard to address from a policy or legal perspective. In these situations, individual pieces of content may not produce significant harm, nor do they amount to legally reprehensible content; the aggregate effect, however, can be problematic. These questions were recently addressed before a district court in Amsterdam in a case concerning the online harassment of a Dutch newspaper columnist. In this case, C. Gargard, a columnist at the Dutch newspaper NRC, had received 7,600 misogynist, discriminatory, and distressing messages, of which she identified and presented 200 as evidence. The prosecutor filed charges against twenty-four defendants, who were subsequently convicted by the court of a variety of criminal offenses, including incitement of assault, incitement of murder and manslaughter, incitement of discrimination, and incitement of defamation. The court found that the volume of messages increased the risk of harm for these crimes. The defendants argued that they had no intent to commit any of these crimes and that their content, taken individually, did not violate the law. The court determined that, given the broader context in which the statements were issued, the twenty-four defendants were liable for different crimes or misdemeanors depending on each case. In isolation, it is probable that restricting each individual message would not meet the three-part test (legality, necessity, and proportionality) for limitations on expression under Article 10 of the European Convention on Human Rights. But were the convictions proportional to the level of online harassment experienced by the victim?
Given the difficulties of these issues, multiple stakeholders have turned to intermediaries for potential solutions. New bills are introduced daily around the globe to deal with internet companies’ liability for harmful but legal, or awful but lawful, content. The European Union’s Digital Services Act (DSA) is probably the most recent law enacted in this area. The DSA mandates that companies assess the risks their services could generate and take steps to mitigate those risks. While this may make sense for speech deemed illegal in the EU (one of the many categories the law asks companies to assess and mitigate), when it comes to awful but lawful or harmful but legal content, the questions the Amsterdam district court faced will sooner or later come up for companies to decide. Should volume, speed, and permanence be considered risky per se? What would that mean for the internet as we know it, or for the democratic advances the internet has facilitated?
Modern legal systems did not anticipate that legal conduct could become illegal based on the number of viewers or the number of people who shared the same statement. Nor did they consider it relevant whether a statement would remain accessible over time. As Yale Law School professor Robert Post explained:
The scale of the internet produces forms of harm that may best be characterized as stochastic. Previously we asked whether particular speech acts might cause particular harms. The internet has rendered this kind of question almost obsolete. Speech that is simultaneously distributed to billions of persons may produce harm in ways that cannot meaningfully be conceptualized through the lens of discrete causality. We will need instead to think in terms of statistical probability of harm. Yet at present we lack any legal framework capable of assessing such stochastic harms in ways that will not drastically over-regulate speech.
There are two ways to interpret Post’s assertion. One is that, given the volume and speed of the harms produced, legal systems cannot effectively address them through ex post causality analysis. This is the argument that Stanford Law School’s Evelyn Douek made in a 2022 article: “When the pursuit of formalism stands in the way of achieving other governance goals, like speed of decision making or responsiveness to prevailing social conditions, it will harm rather than enhance legitimacy and perceptions of accountability and effectiveness.” She was discussing private content moderation, however, rather than state restrictions on speech. Another interpretation is that causality in some or many of these cases cannot be factually established because it is not the content that gives rise to the harm but rather the speed, volume, and permanence.
State limitations based on speed, accessibility, and volume are incompatible with current freedom of expression standards globally. This may be why so many recent laws and bills addressing freedom of expression online fail to comply with international human rights law. If freedom of expression is key to democratic societies, this right should protect not only popular opinions but also unpopular and even shocking statements. Further attempts to address harms raised by speed, volume, and permanence should first acknowledge the existing gaps. A number of questions could be raised thereafter: What would be the implications of changing existing global standards for freedom of expression? Could these new harms be legally addressed without undermining the philosophical premises underlying the right to freedom of expression and its importance in democratic societies? And what other models could serve as frameworks for new, human rights–respecting solutions?
This essay reflects a larger research agenda I am pursuing on online harms and their compatibility with freedom of expression principles. As I get deeper into the research, I am increasingly convinced that if volume, speed, and accessibility are to be considered independent sources of harm, the way to achieve this within the existing human rights framework lies in how these questions and issues are framed. The first step is to attain clarity about the incentives and trade-offs of adopting one solution over another. It is important that the legal community acknowledge the gaps in how modern legal systems deal with these new phenomena, particularly if judges and courts aspire to continue incorporating human rights principles into their decisions.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.