Editor’s Note: Prostasia is reprinting useful material from our newsletter for blog readers. This essay first ran in the November 2021 newsletter. To keep up to date on everything happening at Prostasia, you can sign up for the newsletter here.
No legitimate website owner wants to host unlawful sexual images of minors. Such content is illegal and harmful, and it will disgust and drive away most website visitors.
Last year Prostasia Foundation reviewed some of the automatic tools available to assist website owners in ensuring that their websites aren’t misused to host such images. However, access to these tools, and to the databases of image fingerprints (or hashes) that power them, is tightly restricted. This means that many websites, especially those hosted outside the USA, don’t have an easy way to filter out unlawful images. They are reliant instead on user reports or human moderation.
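The hash-matching approach these tools rely on can be illustrated with a minimal sketch. Real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; for simplicity, this hypothetical example uses an exact cryptographic hash (SHA-256), so it only catches byte-identical copies. The database entry shown is a placeholder, not a real fingerprint from any hotline.

```python
import hashlib

# Hypothetical database of fingerprints of known unlawful images.
# In practice a website operator receives these hashes from a hotline;
# the hashes alone are enough to match files, without ever handling
# the original images. This example entry is the SHA-256 of b"test".
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def is_known(data: bytes) -> bool:
    """Check an uploaded file against the hash database."""
    return fingerprint(data) in KNOWN_HASHES

# Usage: screen an upload before publishing it.
upload = b"test"  # placeholder for uploaded image bytes
if is_known(upload):
    print("blocked: matches a known-image fingerprint")
else:
    print("accepted")
```

The key limitation for website owners outside the access-controlled programs is exactly the one described above: without the `KNOWN_HASHES` database, this kind of automated screening is impossible, however simple the matching logic is.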
Project Arachnid, a web crawler operated by the Canadian Centre for Child Protection (C3P), seemed to offer a solution to this problem when it launched in 2017. Rather than requiring website owners to scan their own websites for illegal content, the Project Arachnid bot would do so in the course of crawling the Internet. It would then send an abuse notification to the website owner if any content matched images known to C3P’s Cybertip.ca reporting hotline. C3P received a share of $10 million in grants in 2020 to fund the further expansion of Project Arachnid.
Unfortunately, C3P’s irresponsible decision to extend its Project Arachnid abuse notification system to cover lawful and legitimate content has rendered the system next to useless. As a result, Prostasia Foundation now warns that website owners who receive abuse notifications from Cybertip.ca can no longer be sure that the reported content is actually illegal or abusive.
We’ll explain exactly how this happened and why, but first—who is C3P anyway?
About the Canadian Centre for Child Protection
On the surface, C3P is an independent nonprofit organization. Indeed, it began as a grassroots, volunteer-led organization, founded under the name Child Find Manitoba by the mother of a child who had been abducted and murdered. Its United States counterpart, the National Center for Missing & Exploited Children (NCMEC), was founded in very similar circumstances. But both organizations quickly evolved beyond their independent roots to become closely interlinked with government.
Criminologist Steven Kohm describes C3P as “facilitating networks of policing, surveillance, and control that link public and private bodies.” It does so, however, without the mechanisms of transparency and accountability that we would normally expect from a body exercising such functions. He writes:
C3P works out of view of the general public. Despite being given national ‘tip line’ status by Canadian Parliament in 2011, and despite receiving significant funding from several Canadian provinces and territories and Canada’s federal government, C3P remains uncoupled from the state and therefore resistant to researcher requests for information about most aspects of its operations.
Through a detailed study of C3P’s public communications, Kohm has identified the tactics C3P uses to position itself as an authority on child protection in Canada. He says that these include:
- “Distorted and exaggerated claims” about the extent of the problem.
- A “focus predominantly on the role of technology in facilitating the offense, rather than addressing who offends and why.”
- A tendency to draw “on emotion and common myths about stranger danger to advocate for legal, social, and behavioral change.”
C3P emphasizes surveillance as a primary solution to the problem of child sexual abuse. It advises parents that they should “know your child’s log-in and password” and impress upon the child that they have “no right to privacy.” The organization also advocates with international policymakers against online privacy measures such as end-to-end encryption.
C3P favors censorship of a broad range of adult content. In 2019, a Canadian report to Costa Rican police resulted in the arrest of a 17-year-old girl for posting explicit artwork to her blog. C3P also spreads anti-pornography pseudoscience that compares adult content to “cocaine, alcohol, and methamphetamines.” In 2021, it resigned its membership in INHOPE, the international network of reporting hotlines, because MindGeek, the parent company of Pornhub, had joined as a member in order to safeguard its own platform from illegal content.
An increasingly brazen power grab
In December 2019, C3P put forward a new framework for the removal of content online. This would include the removal of content that is “harmful” but not illegal. As a part of this framework, it promoted the idea that trusted hotlines—notably its own Cybertip.ca—should be able to direct Internet companies to remove such content without internal or external review. “Industry must act on removal notices without subjectivity or unevenness when notified by a trusted/verified hotline,” the organization insisted.
The premise that too little content is being reported for removal because the criteria for doing so are too narrow is baffling. In reality, far more reports of actual child abuse material are being made than authorities have the capacity to deal with. This has forced investigators to prioritize only the most serious cases, such as the penetrative abuse of prepubescent minors.
Despite this, following the release of its new framework, C3P began using its Project Arachnid bot to issue abuse reports on a broader set of lawful content that C3P considered harmful. A close reading of these abuse reports shows that they no longer claim the reported content is illegal. Instead, they say that it is “sexual, harmful, and/or abusive,” that it appears to involve “a person under the age of 18,” and that it “may also satisfy legal definitions of illegal material in some countries.”
Despite this careful wording, many of the website and network operators who received these notices have assumed that the notices still refer to verified illegal content, as they did before 2020. Some may also have been misled into believing that C3P is an authorized government actor. This misunderstanding has resulted in an increase in unjustified online censorship, which can be attributed solely to C3P’s actions.
Abuse reports misused
To give just a few recent examples: in January 2021, C3P sought to censor a whistleblower whose website revealed that an anti-pornography activist had fabricated a story about child sex trafficking. Beginning in March 2021, reverse image-search websites were targeted by the Project Arachnid bot, despite the fact that these websites do not themselves host any image content. In June, an image-hosting website complained about repeated takedown requests for a frame from the children’s movie Pippi Longstocking.
Most recently, in August 2021, C3P took down a longstanding art and culture blog by making a false report concerning an image taken from a 1960s postcard of an indigenous family. Although the abuse report states that “the content is harmful due to the context or location in which it is being made publicly available,” the context in which this image was used could not have been more innocent: it appeared, along with other similar images, in a detailed ethnographic blog article about indigenous women and girls.
Compounding the harm, C3P’s false report was not sent to the website owner, who could have removed the specific image. Instead, it went to an upstream network operator, which responded by blocking the server from the Internet using a technique called null routing, taking down the entire website and other websites hosted on the same server along with it. In a recently published paper, C3P admitted to using null routing against CSAM, while concealing the technique’s broader use against legitimate websites.
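Null routing operates at the network layer: the operator installs a “blackhole” route for the server’s IP address, so all traffic destined for it is silently discarded. As a rough sketch of why this is such a blunt instrument, here is what such a route looks like on a Linux-based router (the address 203.0.113.45 is a documentation-range placeholder, not any real server, and the commands require administrative privileges):

```shell
# Install a blackhole (null) route for a single host.
# Every packet destined for this address is now dropped,
# taking down ALL websites served from that machine at once,
# not just the page that was the subject of the report.
ip route add blackhole 203.0.113.45/32

# Inspect the route; delete it to restore connectivity.
ip route show 203.0.113.45/32
ip route del blackhole 203.0.113.45/32
```

Because the route matches an entire host rather than a single URL, null routing cannot distinguish one reported image from every other site and service sharing that server, which is exactly the collateral damage described above.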
Also complicit in this act of censorship was Cloudflare, the network security company, which disclosed the IP address of the server, an address that had been shielded from public view as a security precaution. Our revelation that C3P has abused its position to unmask the details of lawful websites gives Cloudflare good reason to review C3P’s “trusted notifier” status.
Due to the goodwill that Project Arachnid had built up when it was reporting only verified child abuse images, many network operators continue to respond to Cybertip.ca notices without question. In light of its documented misuse of abuse reports to target legitimate and lawful content, this practice must now be reassessed. Going forward, website owners who receive Cybertip.ca abuse reports should avoid acting on them without independently reviewing the content that is the subject of the report, and checking with their local hotline if necessary.
Additionally, we are rescinding our previous advice that organizations that perform PhotoDNA scanning for unlawful images of minors should use the Cybertip.ca hash database. In light of C3P’s misuse of its Project Arachnid abuse reporting bot, Cybertip.ca can no longer be trusted as a source of reliable data for PhotoDNA scanning.
C3P’s standards of what content is “harmful” are private, arbitrary, and influenced by its overtly sex-negative agenda. Despite the pseudo-public function it exercises, C3P remains a private organization with no accountability for its actions. As a growing number of documented cases demonstrate, it has manifestly exploited its position to enact policies of overbroad censorship. It should no longer be regarded as a trusted actor in the child protection sector.