Internet companies alone can’t prevent online harms

As the world hunkers down to curb the spread of the COVID-19 pandemic, the Internet’s ability to keep us in contact with friends, family, colleagues, and the wider world has once again proved its unique value as a communications platform. But the Internet can also be used to perpetrate bullying and abuse, to spread hate and misinformation, and to undermine safety and civility. Preventing these and other online harms is a worthy objective, but also a difficult one. The upcoming Online Harms legislation in the United Kingdom and the EARN IT Act in the United States are two early attempts at confronting this challenge, but we question their approach.

Until now, governments have concentrated on addressing harms caused by illegal conduct online. With some success, prohibitions on unlawful sexual images of minors, the sale of illicit drugs and weaponry, and the sharing of copyright-infringing material have been extended to the online environment through laws such as 18 USC 2258B, Section 230 of the Communications Decency Act, and the Digital Millennium Copyright Act in the United States, and the e-Commerce Directive in the European Union. By defining a process under which Internet platforms are required to take responsibility for such content, these laws have resulted in the broad eradication of unlawful sexual images of minors from major Internet platforms.

Given that we are often told that the Internet is awash in such content, that last statement may seem bold. But although most people are concerned about the risk of encountering unlawful images, few ever do. In Twitter’s latest transparency report, only 0.3% of accounts reported for abuse were reported for child sexual exploitation, by far the smallest share of any category. By 2018, only 0.5% of the child abuse content found by the Internet Watch Foundation (IWF) was on social networking websites. In 2018 surveys conducted in Denmark and Sweden, fewer than one in ten respondents had ever encountered such content online, and the figure is falling: between 1996 and 2018, the IWF saw a 99.7% reduction in the proportion of illegal content hosted on UK-based platforms.

So while the open availability of child abuse content on major platforms is a serious problem, Internet companies’ zero-tolerance policies and robust takedown practices mean that it is thankfully no longer a common one.

“Online harms” doesn’t just mean unlawful content

Since unlawful content on major platforms has been brought under control, governments on both sides of the Atlantic are saying that more is needed to address online harms that go beyond it. In other words, we are no longer talking about companies dealing with child sex abuse material that is universally accepted as unlawful. We’re talking about a much broader category of content. And naturally enough, governments have turned first to the same tool that they used to successfully combat unlawful content: passing responsibility over to Internet platforms.

In the United States, directly regulating online speech is fraught with difficulty for the government, due to the First Amendment’s guarantee of freedom of expression. International human rights law also limits governments’ ability to censor speech, for example by requiring that they do so by the least restrictive means possible, and only to advance a legitimate objective through clearly defined laws.

These safeguards don’t directly apply to Internet platforms, which have much more freedom to develop their own rules about what content they allow or disallow. This explains why governments prefer to nudge platforms into addressing online harms rather than regulating harmful content directly.

But whereas illegal content should be clearly defined, which makes it easier for companies to address, there is a wide range of content that does not lend itself to legal definition because it is inherently subjective and vague. If it were banned by law, legitimate speech would inevitably get caught. That’s what the UK government calls “legal but harmful” content, and what the EARN IT Act includes within the undefined phrase “online sexual exploitation of children.” The broader social harms associated with such content may be very real. But requiring Internet platforms to take responsibility for addressing them comes with a slew of risks and costs that policymakers need to take heed of.

The EARN IT Act

The EARN IT Act fails to get this formula right: it is probably unconstitutional, because it nudges platforms too hard to adopt measures that will over-restrict lawful speech. At the same time, those measures aren’t the ones best adapted to preventing the five separate harms that the legislation seeks to address: enticement, grooming, sex trafficking, sexual abuse of children, and the proliferation of online child sexual abuse material.

Major Internet platforms already use image and video hash filtering, to a greater or lesser extent, to address the last of these harms, which is also the best defined of them. This well-tested technique detects and eliminates images of minors that have previously been identified as illegal. While we have concerns about the transparency of the process by which these hash databases are maintained, and about the accountability of those who manage them, these are manageable problems. As Prostasia Foundation and other child protection groups and experts wrote to the Senate Judiciary Committee this March, “A narrower bill devoted towards improving the accuracy, transparency, and ease of implementation of this system by Internet platforms of all sizes would be worthy of our support.”
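
For readers curious how hash filtering works mechanically, the sketch below shows the core matching step in Python. It is a minimal illustration under simplifying assumptions, not any platform’s actual implementation: real deployments use perceptual hashes (such as Microsoft’s PhotoDNA) that still match after an image is resized or re-encoded, and the KNOWN_HASHES set is a hypothetical stand-in for the curated databases maintained by organizations such as NCMEC and the IWF.

```python
import hashlib

# Hypothetical stand-in for the curated databases of fingerprints of
# previously identified illegal images. In real systems these databases
# are maintained by dedicated organizations, not by the platform itself.
KNOWN_HASHES: set[str] = {
    "0" * 64,  # placeholder entry, not a real fingerprint
}

def fingerprint(image_bytes: bytes) -> str:
    """Fingerprint an upload's raw bytes.

    A cryptographic hash keeps this sketch self-contained, but unlike
    the perceptual hashes used in practice (e.g. PhotoDNA), it will
    fail to match copies that have been resized or re-encoded.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Return True if an upload matches a known-illegal fingerprint."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

Notice that the hard questions raised above, such as who adds entries to the database, how entries are audited, and how errors are corrected, live entirely in the database rather than in the matching code. That is why the transparency and accountability of those who maintain the hash databases matter so much.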

Broader than unlawful images

But the Act is much broader than that: it allows a committee of 19 people appointed by the government to issue rules, vetted by the Attorney General and fast-tracked through Congress, requiring Internet companies to address any of the five harms that the EARN IT Act specifies. For example, it is widely anticipated that these rules may specify that Internet companies should not use secure, end-to-end encryption in apps or services that can be used by minors. The resulting damage to the privacy and security of the Internet from this rule alone would be devastating.

Thankfully, the tools held by Internet providers are not the only ones that society has to address social problems such as grooming, sex trafficking, and sexual abuse. Support for survivors of child sexual abuse is just one of them. The March letter from Prostasia and others to the Judiciary Committee further recommends:

an holistic approach that includes the provision of comprehensive sexual education, training and education for bystanders, support to those who are at risk of offending, and the treatment and management of those who have previously offended, among other interventions.

The EARN IT Act does nothing to support these interventions. As the joint letter notes, there are simple ways that it could have done so. For example, $2 million was allocated for child sexual abuse prevention research in the Fiscal Year 2020 Labor-HHS-Education funding bill; this allocation could have been doubled. Why are we overlooking the promise of current research into child sexual abuse prevention in favor of an approach that puts all of our eggs in one basket: censorship?

The United Kingdom’s Online Harms proposal

On the other side of the pond, the United Kingdom has proposed that Internet companies should have a ‘duty of care’ to their users, overseen by a regulator. Under the government’s plans, companies would be expected to take undefined steps to prevent equally undefined harms to users. Undefined because, beyond illegal content, the government itself acknowledges that some of the harms it would like to bring within scope are “legal but harmful” and lack clear definitions. As per the government’s own table, these include bullying, the advocacy of self-harm, and disinformation, to cite the most obvious examples. Notwithstanding this, the government has said that Ofcom, the UK broadcasting regulator, could define all of these things. In other words, Ofcom would potentially have incredibly wide discretion to decide what’s proper and what’s not.

Faced with a backlash in the responses to its consultation on the Online Harms White Paper, the government has softened its stance. It now says that all the regulator would do in relation to harmful content is ensure that companies have appropriate “systems and processes” in place, with companies defining “harmful” content in their terms of service. But that too is potentially highly problematic, since upload filters or a ban on end-to-end encryption could be in the regulator’s sights. In the absence of a draft Bill, however, it is hard to know. Meanwhile, the government has already announced that child abuse images would be dealt with in a separate code of conduct, and it is widely expected that companies will have to adopt a range of tools to protect children. This is likely to include age verification technologies to prevent children from accessing “inappropriate” content.

If the ICO’s Age Appropriate Design Code is any guide, however, the code dealing with child sexual abuse material is highly likely to raise significant free speech and privacy concerns. And while the government has made much of the recent adoption of the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse, those Principles remain high-level and wouldn’t be enforceable against companies that fail to adopt the tools prescribed by Ofcom. The government has also said it would look to support “non-legislative” measures such as child online safety and media literacy training. While this is positive, unless the government provides sufficient funding for these efforts, they will do little to prevent harm to children.

Conclusion

We can’t expect harm prevention to be accomplished with the same single tool from the regulatory toolbox that we used to eliminate illegal content. The various harms that manifest themselves online are so vastly different that many different tools will be needed. Internet companies cannot take the place of sex educators, therapists, social workers, researchers, media literacy experts, and parole officers, and we should be wary of the government encouraging them to attempt to do so.

Relying on speech regulation to address a broad range of online harms is shortsighted. The ability of governments to regulate speech is very limited, and rightly so. Laws such as the EARN IT Act and the UK’s Online Harms legislation are intended to circumvent these limits by deputizing Internet companies to act on the government’s behalf. But the more directly a law tells Internet companies how to police speech, the more constitutionally dubious it is. And the more indirectly it does so, the more worried we should be about whether companies will act in a fair and accountable way.

Society has much more freedom to act effectively against harms such as child sexual exploitation through prevention initiatives that don’t involve vague laws with unintended consequences for free speech. But neither the EARN IT Act’s censorship committee nor Ofcom, the likely future online harms regulator, is remotely capable of executing these initiatives, and especially not through the agency of Internet platforms. If the US and UK governments are serious about countering content that is harmful to children, they would do well to provide adequate funding for harm prevention and education programs, as well as support for victims of abuse, rather than chase easy headlines announcing the latest crackdown on Big Tech.
