These principles are intended to assist Internet platforms of all sizes in adopting a more nuanced and better-informed approach to the moderation and censorship of sexual content, with a view to protecting children from sexual abuse while also upholding their rights and the rights of others. They were developed by consensus through a multi-stakeholder consultation process held between May and November 2019, drawing on work by Access Now. A background paper provides further information about that process.
1. Prevention of harm
Sexual content should be restricted where it causes direct harm to a child. Indirect harms should not be the basis for blanket content restriction policies unless those harms are substantiated by evidence and adequate measures are taken to avoid human rights infringements.
2. Evaluation of impact
Companies should evaluate the human rights impacts of their restriction of sexual content, meaningfully consult with potentially affected groups and other stakeholders, and take appropriate follow-up action to mitigate or prevent those impacts.
3. Transparency
Companies and others involved in maintaining sexual content policies, databases, or blocklists should describe in detail the criteria used to assess such content, especially when those policies prohibit content that is lawful in any of the countries where they are applied.
4. Proportionality
Users whose lawful sexual conduct infringes platform policies should not be referred to law enforcement, and their lawful content should not be added to shared industry hash databases, blocklists, or facial recognition databases.
5. Context
The context in which lawful sexual content is posted, and whether there are reasonable grounds to believe that the persons depicted have consented to be depicted in that context, should be considered before deciding to restrict or promote it.
6. Non-discrimination
Content moderation decisions should be applied to users based on what they do, not who they are.
7. Human decision
Content should not be added to a hash database or blocklist without human review. Automated content restriction should be limited to confirmed illegal images identified by a content hash.
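To make the narrow scope of this principle concrete, here is a minimal sketch, assuming a platform-maintained set of hashes of images already confirmed illegal by human review. The function names and data are hypothetical, and SHA-256 stands in for the perceptual hashes (such as PhotoDNA or PDQ) that industry hash databases typically use.

```python
# A minimal sketch of hash-limited automated restriction (Principle 7).
# All names and data here are hypothetical. SHA-256 gives exact matching
# only; production systems typically use perceptual hashes (e.g. PhotoDNA
# or PDQ) so that re-encoded copies of the same image still match.
import hashlib

def file_hash(data: bytes) -> str:
    """Return a hex digest of the uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def should_auto_restrict(data: bytes, confirmed_illegal_hashes: set[str]) -> bool:
    """Restrict automatically only on a match against hashes of images
    already confirmed illegal by human review; anything else must be
    escalated to a human moderator rather than actioned by machine."""
    return file_hash(data) in confirmed_illegal_hashes

# Entries are added to this set only after human review, per the principle.
confirmed_illegal_hashes: set[str] = set()

upload = b"...uploaded image bytes..."
if should_auto_restrict(upload, confirmed_illegal_hashes):
    print("Automated restriction: matches a confirmed illegal image hash.")
else:
    print("No hash match: defer to human review before restricting.")
```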
8. Notice
Users should be notified when their content is added to a hash database or blocklist, or is subject to context-based restrictions, unless such notification would be prohibited by law.
9. Remedy
Companies should prioritize content removal requests from persons depicted in images taken of them as children, and provide users with the means to filter out unwanted sexual content.