Europe’s chat control mandate begins

Secret artificial intelligence (AI) surveillance tools have been developed by tech companies at the urging of governments, and quietly deployed against millions of Internet users to scan their private communications. However, these “chat control” tools have been found to violate Europe’s e-Privacy regulation, which went into effect last December. The contradiction highlights the way that moral panics around child sexual abuse have been used to justify state overreach and invasion of privacy.

The controversial chat control regulation passed by the European Parliament in July 2021 was intended to address this. In the end, it did so only in part, by legalizing the use of certain tech company surveillance tools and disallowing the use of others (here’s more about the regulation). While the compromise was framed as a loss for chat control opponents, an underappreciated win was that the indiscriminate use of AI-based chat control tools to detect grooming in private communications was explicitly ruled off-limits.

But how was this mass surveillance technology ever considered justified? Prostasia Foundation was there from the beginning, and can exclusively shed light on the role of governments in pushing this illegal chat control technology.

How governments pushed chat control technology

Tech companies have struggled to overcome an ignoble history of acceding to overreaching government censorship and surveillance demands. Governments have an even more ignoble history of packaging vice and censorship laws with “think of the children” wrapping paper. It was in response to one such law, FOSTA/SESTA, that Prostasia Foundation was formed in 2018.

In November of that same year, Prostasia representatives attended an event titled Preventing Child Online Grooming: Working Together for Maximum Impact, which was organized by the UK Home Office, the WePROTECT Global Alliance, and Microsoft. Among the other attendees were representatives of the U.S. Department of Justice. The event was held to launch the development of the chat control tool that became Microsoft’s Project Artemis, a cross-industry AI tool for detecting text-based child grooming, based on code developed for the Xbox.

UK Home Secretary Sajid Javid, who attended the meeting in person at Microsoft’s Redmond campus, was under fire back at home for calling perpetrators of gang-based sexual exploitation “sick Asian pedophiles.” Although these grooming gangs didn’t use the Internet to perpetrate their abuse, pushing responsibility for preventing grooming onto tech platforms was a convenient deflection for Javid. He made no secret of his intention to force platforms into compliance if they refused to adopt such tools voluntarily.

When Project Artemis was eventually released in 2020, it was promoted as an open product that any tech platform with a chat function could request access to. Yet when Prostasia Foundation requested access to the tool for evaluation and use in our peer support chat service, we were denied by Thorn, the private organization that Microsoft had installed as gatekeeper of the technology. By then, the experimental technology had already been quietly rolled out and was being promoted for inclusion in the chat control regulation.

Not all chat control tools are the same

Once it came to light in October 2018 that Europe’s e-Privacy regulation would likely make this technology illegal, the European Commission faced accusations from government-linked child safety groups of “putting pedophiles’ privacy ahead of fighting online child abuse.” The Commission rapidly hatched a scheme to pass a “temporary derogation” from the e-Privacy regulation. This would allow chat control tools to continue to be used for a period of up to three years while a more permanent solution was found.

Inconveniently, however, it soon became apparent that such a scheme would violate European human rights law by legalizing a regime in which the private communications of people who are not suspected of any crime are scanned by experimental, inaccurate AI systems and potentially flagged to police as evidence of child abuse. Such scanning would be a massive imposition on the privacy of adults and children alike.

The Commission’s background research found that the tools could not all be placed in the same basket when analyzing their human rights impacts. In particular, as Prostasia Foundation has always argued, it is reasonable and proportionate for tech companies to use well-tested technologies such as PhotoDNA to voluntarily scan unencrypted content that is uploaded to their servers, to ensure that it doesn’t match against a set of image hashes (digital fingerprints) of known child sexual abuse material. After all, they would otherwise be hosting such material, which has no legitimate place online.
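
To make the distinction concrete, here is a minimal sketch of how hash-based upload scanning works in principle. PhotoDNA itself is a proprietary perceptual hash designed to survive resizing and re-encoding, so the cryptographic hash below is only a stand-in for the matching step, and the hash values and function names are hypothetical:

```python
import hashlib

# Hypothetical set of fingerprints of known illegal images, as would be
# supplied by a hash-list provider. PhotoDNA is a proprietary *perceptual*
# hash designed to survive resizing and re-encoding; the cryptographic
# hash used here is purely a stand-in for the matching logic.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(data: bytes) -> str:
    """Fingerprint an uploaded file (stand-in for a perceptual hash)."""
    return hashlib.sha256(data).hexdigest()

def matches_known_material(data: bytes) -> bool:
    """Return True if the upload matches a previously verified fingerprint."""
    return fingerprint(data) in KNOWN_HASHES
```

The key property is that nothing about the content is interpreted or predicted: an upload either matches the fingerprint of a previously verified image, or it is left alone.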

But the same reasoning doesn’t justify the adoption by tech companies of secret AI tools such as Project Artemis to scan private conversations for signs that they might involve child grooming, or AI image recognition tools that scan private photographs in the hope of identifying previously unknown abuse images. Although the so-called Five Eyes governments (the US, UK, Canada, Australia and New Zealand) insisted these experimental tools be included in the scope of the chat control regulation, the European Parliamentary Research Service found that they could not satisfy the requirements for inclusion, such as being well-established, regularly in use, reliable, and protective of privacy.

The final chat control regulation

Recognizing these distinctions, Prostasia Foundation established three red lines for the chat control regulation. First, scanning of uploaded content should only be permitted to identify known illegal images. Second, there must be no use of AI tools to scan conversations. Finally, there must be no authorization or mandate for the scanning of encrypted messages. In the version of the regulation that finally passed in July 2021, the European Parliament fully respected only one of these red lines: that encrypted communications would not be breached.

To our disappointment, the final compromise did allow the limited use of AI tools for scanning both images and text conversations under the oversight of national privacy regulators. However, some limits were imposed. One of these was that a report initiated by an Internet company’s AI surveillance system would have to be reviewed by a human being before being forwarded to law enforcement. Automatic reporting to law enforcement is obviously unacceptable. But exposing private images and chats to tech company workers is hardly a privacy-protective safeguard.

Another limit was an outright ban on the use of AI tools to analyse the meaning of text communications. Since this is precisely how Project Artemis works, the chat control regulation effectively put a nail in the coffin of that much-vaunted project, at least in respect of its use within the European Union. Automated analysis of communications for human review is still allowed, but it can only be based upon “objectively identified risk factors” such as an adult sending an unsolicited message to an unrelated child.
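
What such metadata-based flagging might look like can be sketched in a few lines. The fields and rule below are hypothetical illustrations, not the regulation’s actual criteria; they simply show a check built on who is messaging whom, with the message text deliberately left out:

```python
from dataclasses import dataclass

@dataclass
class Message:
    # Hypothetical metadata fields; a real platform would derive these
    # from its own account and contact-graph data. The message text
    # itself is deliberately absent from this structure.
    sender_is_adult: bool
    recipient_is_minor: bool
    sender_in_recipient_contacts: bool
    is_first_contact: bool  # no prior conversation between the parties

def flag_for_human_review(msg: Message) -> bool:
    """Flag a message using only objective risk factors, never its meaning.

    Mirrors the regulation's example of an objective risk factor: an
    adult sending an unsolicited message to an unrelated child.
    """
    return (
        msg.sender_is_adult
        and msg.recipient_is_minor
        and not msg.sender_in_recipient_contacts
        and msg.is_first_contact
    )
```

Unlike a Project Artemis-style classifier, nothing in this sketch ever reads the words that were exchanged.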

Regardless of the safeguards in the final text, there is still a significant danger that chat control technologies will infringe the privacy of sensitive private communications and photos of adults and minors alike. In particular, although the regulation states that it is not intended to govern “consensual sexual activities in which children may be involved and which can be regarded as the normal discovery of sexuality in the course of human development,” there is no mechanism provided to prevent intimate communications between teens and their similar-age partners from being exposed to tech company employees. Employees could then easily leak or otherwise misuse these conversations. In the name of protecting children, chat control technologies can put children at risk.

What happens next

As mentioned, the chat control regulation is only a temporary derogation from European privacy law, acting as a bridge that legalizes the continued use of surveillance tools in the fight against child sexual abuse while longer-term legal arrangements are negotiated. Consultations for that long-term European initiative for the detection, removal and reporting of child sexual abuse online wrapped up in April, and we can expect to see a draft regulation on that topic in September 2021. It is widely expected that the regulation will not only continue to legalize the voluntary surveillance and reporting practices of tech companies, but will transform at least some of those practices into mandatory obligations.

In Prostasia’s submission to the European Commission’s consultation, we affirmed the positions we had taken on the temporary chat control regulation. Scanning of uploads for known child abuse images should continue to be promoted as a voluntary regime, and AI tools should be ruled off-limits. We pointed out “that such tools tend to be biased against minorities, meaning that most false positives will disproportionately impact LGBTQ+ people, BIPOC people, and sex workers, who already face discrimination and overcensorship.”

We also stressed the importance of allowing the continued use of end-to-end encrypted communications, which, in light of the legalization of AI-based chat control tools, could be the only way for intimate communications to be conducted in true privacy. We wrote:

Either Europe continues to support a free and open Internet which includes access to end-to-end encrypted services, or else it opts for a network without end-to-end encryption in which all communications are subject to surveillance. There is no in-between option. Furthermore, the choice between these two options should be crystal clear: prohibiting end-to-end encryption would infringe the fundamental human right of privacy as guaranteed by Articles 7 and 8 of the EU Charter of Fundamental Rights, as well as Article 8 of the European Convention on Human Rights.

Conclusion

The passage of the chat control regulation provides an ugly example of the mainstream child protection sector doing its worst to minimize and dismiss legitimate concerns over the privacy of children and adults alike. European lawmakers raising these concerns reported being shamed for not being “sufficiently committed to fighting child sexual abuse.” Thorn, which sells surveillance technologies to Internet platforms, had its celebrity founder Ashton Kutcher make theatrical social media appeals about “evidence of the rape of children going unseen.”

Yet the end result, while disappointing for both sides in some respects, was actually rather remarkable. Against all odds, it demonstrated that the familiar and manipulative rhetoric of government-linked child safety groups does actually have limits. In particular, Europe’s firm rejection of the UK government’s favored Project Artemis grooming detection tool amounts to a significant post-Brexit humiliation for the British and their (now notionally independent) WePROTECT Global Alliance, which should help put a stop to the further unchecked expansion of these secret surveillance tools.

Our hope is that this demonstration of the limits of an approach based around censorship and surveillance will see fresh attention being devoted to prevention initiatives. As we wrote in our submission to the European Commission, “The stigma that surrounds the topic of child sexual abuse gives significant power to special interest groups who would see human rights safeguards loosened. Europe must resist this pressure, and follow an evidence-based, public health approach to this problem, that respects the human rights of all.”


If your rights are being infringed, you can find resources and support here.

Notable Replies

  1. Chie says:

    Taken from the Politico article:

    MEPs also decried the pressure they were under to approve the bill, calling it “moral blackmail.”

    “Whenever we asked critical questions about the legislative proposals, immediately the suggestion was created that I wasn’t sufficiently committed to fighting child sexual abuse,” Dutch MEP Sophie in ‘t Veld said a day before the vote.

    While child sex abuse, grooming, etc. are valid concerns, the manner in which the EU is going about addressing them is fundamentally and overtly flawed.
    I predict that a significant number of shortfalls and inapplicable situations, from overwhelming amounts of false positives, to legal spats, to free speech challenges, and perhaps even organized attacks by nefarious or privacy-conscious actors, will prompt a revisiting of these rules, and perhaps even a rollback of the rules in question.

    We cannot allow flimsy and fallacious tools, such as morals-based reasoning and coercion, to serve as justification for such a heavy-handed approach. Privacy is one of those things you only care about the moment it’s gone, and the thought of AI screening, or worse, actual persons going through your messages to arbitrarily determine whether or not they are worrisome, could have extreme ramifications that will only act as a “foot in the door” for more intrusive regulation.

  2. This kind of crap is the beginning of the end, IMO.

    A number of years back, the UK established a blocklist of sites that was mandatory for UK ISPs to use. The law establishing the list promised that it would include only CSAM and terrorist-related materials. Only the “worst of the worst” sites and materials would be included, the pols promised.

    Fast forward a few years, and you had rightsholders in court arguing that a blocklist was already in existence, and why shouldn’t it be possible to add the names of copyright-infringing sites like the Pirate Bay? The judge agreed, and now sites that are used for, or accused of, promoting copyright infringement are regularly added to the blocklist as well.

