Why is it important not to report art or fiction as if it were real child abuse? Two reasons. First, it places a strain on the platforms’ trust and safety teams, whose principal job is—or ought to be—protecting real human beings from harm. Reporting content that doesn’t directly place a child in harm’s way clogs up the system that real victims depend upon for their rescue.
A second reason not to report images as child abuse is that Internet platforms pass these reports on to national child abuse reporting hotlines (we wrote more about these hotlines, which are closely linked to government, in last month’s newsletter). These hotlines have far-reaching powers. They use reports submitted by the public to construct blocklists that are shared across the world, and can result in a user who shares those images being automatically flagged to investigators.
We know that some of these groups, such as NCMEC and the Canadian Centre for Child Protection, are accepting reports about artwork that clearly doesn’t depict real children, either because they don’t review the reports properly, or because their national law includes art and fiction within the legal definition of child pornography. Although we don’t know the figures for those groups, it has been reported that in 2018, 2% of tips to the Irish hotline were for virtual images or even text fiction.
Reporting artwork to these groups can therefore put the poster and the artist at risk of being arrested for a child pornography offense if they live in a country where such artwork is illegal. For example, a referral from Canadian authorities led to a 17-year-old girl from Costa Rica being arrested over artwork on her blog. No matter how offended you may be by such artwork, this is a disproportionately harmful response.
Discussions about child sexual abuse and minor attraction
The same caution needs to be applied to reporting text content, such as discussions of child sexual abuse and minor attraction. Frequently, users who identify as minor-attracted use social media networks to access peer and professional support. Some of these users, when first discovering these unwanted feelings, are as young as 14 years old. Yet although they are not acting on these feelings, their accounts are frequently harassed and reported for child exploitation. This can impede them from receiving the help that they need, and push them into darker corners of the Internet.
Although it isn’t appropriate to report the accounts of people who aren’t actively engaged in illegal behavior or its promotion, it’s absolutely understandable that there are many people for whom posts about child abuse or minor attraction can be triggering or hurtful. So it’s entirely appropriate for them to draw a personal boundary and to block those accounts.
Because developing a personal list of blocked accounts can be a time-consuming and error-prone process, some users get together with others to form their own communities in which they share tips about which accounts should be blocked or reported. We recently spoke to the administrator of one such popular community account, the self-styled Online Bureau of Internet Justice or OBIJ, to talk about their approach to the coordinated community reporting on Discord and Twitter. (Note that Prostasia Foundation does not endorse any actions taken by the OBIJ. Responses have been edited for length and clarity.)
Prostasia: What got you interested in policing content on Twitter, and why do you think this is necessary?
OBIJ: I guess it started with the OBIJ about a year ago. My friend had a problem with a stalker who would harass him everywhere and so the two of us, with a bunch of friends, managed to make him leave us alone. Since then we thought we were cool enough to try and help other people. We’ve had a bunch of cases ranging from serious threats, doxxing, and irl [in real life] crimes to simple trolling. Since about July/August I began focusing on Pro Contact MAPs and accounts that post illegal content. I personally don’t consider the OBIJ to be a “pedophile hunter group” especially now considering what the term means nowadays. Our reason for existence is that we are a group of guys who wanna help people who get unfair or abusive treatment online when the mods aren’t helping (which is often the case).
Prostasia: You recently made a decision not to pursue Twitter accounts of people who identify as non-offending MAPs, provided that they aren't infringing Twitter terms of service. Can you explain more about this decision?
OBIJ: Well the OBIJ is about helping people who are getting harassed and fighting people who harass others, right? Going after random accounts that tweet nothing and spamming them with slurs sounds a bit hypocritical, and many argued that it goes against our motto. While we do not support the normalization of pedophilia, we understand that MAPs did not choose to be who they are, as pedophilia develops in people like anything else and it is not curable. So while we don’t want to support pedophilia, we don’t want to attack people who were born a certain way. So we decided to choose a middle ground that felt right. We will target accounts that either break ToS, state laws, or generally harass other people.
Prostasia: How do you respond to the criticism that groups like yours are drawing attention to prohibited content, and that abusers might follow you to access such content? How can you limit this risk?
OBIJ: We are aware of such criticisms and we are trying to figure out a way to limit the risks. So far we are limiting the risks by doing the most extreme cases privately without public support, and giving the job to only trusted OBIJ Agents. But we are open to suggestions and ideas as we are thinking of other ways.