This week, United States Attorney General William Barr cited the need to address child exploitation as one of the factors motivating a mooted review of Section 230 of the Communications Decency Act (CDA 230), the law which provides that Internet companies aren’t responsible for what their users say or do online. There are many dimensions to the problem of child exploitation, ranging from inappropriate comments on Instagram photos to child grooming on Fortnite, but the one that captures the most public attention is the sharing of illegal sexual images of minors online.
CDA 230 has nothing to do with that problem, however. These images were never protected by CDA 230 to begin with: platforms have always been obliged to take them down as soon as they were discovered. And platforms already have a robust way to remove them immediately and without human intervention, at least when an image has previously been encountered and flagged as illegal. This is done using hash-scanning software such as Microsoft’s PhotoDNA, in conjunction with lists of image hashes contributed by the platforms themselves or collected by third-party Internet hotlines for reporting child abuse images. When a user attempts to upload content to a platform that uses these hash lists as part of its moderation process, the content is blocked and the user is reported to law enforcement; in the United States, such reporting is mandated by law.
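To make that mechanism concrete, the following is a minimal sketch of the upload-time check described above. It is not PhotoDNA: PhotoDNA is proprietary and uses a robust perceptual hash that still matches after resizing or re-encoding, whereas this illustration uses an ordinary cryptographic hash, and the names (KNOWN_ILLEGAL_HASHES, handle_upload) are hypothetical, chosen purely to show the flow of matching an upload against a vetted hash list and blocking and reporting on a hit.

```python
import hashlib

# Simplified stand-in for a vetted hash list supplied by hotlines or platforms.
# Real deployments use perceptual hashes (such as PhotoDNA) that survive
# resizing and re-encoding; a cryptographic hash, used here only for
# illustration, matches exact bytes only.
KNOWN_ILLEGAL_HASHES: set[str] = set()  # placeholder; populated from a hash list


def hash_image(data: bytes) -> str:
    """Return a hex digest of the uploaded image bytes."""
    return hashlib.sha256(data).hexdigest()


def handle_upload(data: bytes) -> bool:
    """Return True if the upload may proceed, False if it was blocked.

    On a match, a real platform would both drop the content and file a
    report (in the US, a CyberTipline report is required by law).
    """
    digest = hash_image(data)
    if digest in KNOWN_ILLEGAL_HASHES:
        print(f"blocked upload {digest[:12]}…; report queued for authorities")
        return False
    return True
```

The key point for the argument that follows is that this check is only as trustworthy as the hash list it consults: whatever is on the list gets blocked and reported automatically, with no human looking at the image at upload time.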
So far, so good. But the arrangement has a big problem: who makes sure that these hash lists contain only illegal content? Because the images are illegal to view, the platforms aren’t allowed to share them after reporting them. This creates a risk that lawful content, or content such as cartoons that are not images of real child abuse, will also end up on these lists, and that it will be impossible to get it off again.
In fact, we know that this happens regularly and that the Internet hotlines are complicit in it. When the National Center for Missing & Exploited Children (NCMEC), the United States hotline, receives a report to its CyberTipline that relates to a user in a foreign location, it sends the complete report, including the user’s personal details (such as their IP address), to the foreign police force without any verification that the content in question is illegal. As a result, up to 90% of the images passed on by NCMEC are later assessed to be innocent. Robert Jones of Britain’s National Crime Agency has testified that the inclusion of these innocent images, including cartoons, is “not really what this regime is designed to detect.”
Canada’s Internet hotline, Cybertip.ca, operated by the Canadian Centre for Child Protection, has also been accepting reports of cartoons and forwarding them on to authorities. In May 2019, a 17-year-old Costa Rican girl was arrested for posting drawings to her blog. The arrest came in response to a report passed on by Canadian authorities. When Prostasia Foundation contacted the Centre about this, it refused to confirm or deny responsibility for the referral that led to the girl’s arrest. However, it did say:
While we do not share details on specific reports submitted to Cybertip.ca, it might be helpful for you to understand our process. Through Cybertip.ca, the public can report concerns about the online sexual exploitation of children. We forward any potential concerns to the appropriate law enforcement agency and/or child welfare. These authorities determine whether to proceed with an investigation.
Yet even flagging “potential concerns” amounts to a determination of potential illegality, which exposes the user to the risk of prosecution. It is natural that foreign police forces will take these “potential concerns” seriously, and it is disingenuous for the Canadian Centre to wash its hands of responsibility. Unlike real images of child abuse, which directly harm children, the harm caused by drawings of child abuse is entirely subjective and far more difficult for a non-judicial agency to determine.
Since April 2010, the United Kingdom’s Internet hotline, the Internet Watch Foundation (IWF), has been accepting reports of “non-photographic child sexual abuse images” such as cartoons, provided that these are hosted in the United Kingdom. Unlike the American and Canadian hotlines, it does not forward these reports to foreign authorities. Whether it adds them to the hash lists that it offers to its member Internet platforms (including foreign platforms) remains unclear, and we have asked the IWF for clarification on this point.
In November 2019, the UK-based host of an art blog was arrested for hosting child pornography because the blog included two comic-strip panels in a long and academic discussion of the line between legitimate art and child pornography. The panels in question were from the Ignatz Award-nominated semi-autobiographical comic Daddy’s Girl by Debbie Drechsler, and they depict her own experiences of incestuous child sexual abuse. We have been unable to confirm whether the IWF was involved in the arrest of the man, who writes:
Do I have the right to tell Debbie that her work is now child pornography? It probably took her 20 years to drum up the courage to put her feelings down on paper and now 20 years on with the book still freely available, some IT technician has decided that he does not like a picture from that book.
Cartoons that depict minors sexually can be offensive, and in some countries, such as Costa Rica and the United Kingdom, they are also illegal. Yet Prostasia Foundation’s position is that Internet hotlines ought not to be engaged in the business of censoring such art or recommending cases for the police to prosecute. As the censorship of Debbie Drechsler’s comic illustrates, there is no clear line between art that graphically depicts child sexual abuse and “obscene” pornography.
If such a line is to be drawn, it should be drawn by courts, not by private, quasi-governmental organizations. These organizations’ special censorship powers enable content to be instantly eliminated from online platforms, and referred to the police, with little accountability or transparency. This is a “nuclear option” that we only tolerate because photos or videos that depict real children being abused are uniquely abhorrent and directly harmful, and must be eliminated quickly to minimize that harm. They are also relatively easily identified—either they depict a real child being abused, or they don’t. These same considerations do not apply to artworks, no matter how offensive they may be.
That doesn’t mean that nothing can be done about art depicting minors that is potentially offensive or triggering. Prostasia Foundation’s No Children Harmed certification standard sets out guidelines for how such artworks should be handled. Like all adult-oriented content, they should be hidden by default from minors (and from others who choose not to see them). But additionally, the standard requires members to “specifically label, flag, or mark any lawful content that depicts imaginary minors engaged in sexual activity or that depicts nudity of real or imaginary minors.”
Prostasia Foundation is calling on INHOPE, the international organization of Internet reporting hotlines, to instruct its members that they should not be treating artistic images as equivalent to photographs and videos of real child sexual abuse. Specifically, such content should not be added to hash lists to be automatically censored and should not trigger reporting to law enforcement authorities unless required by the law where the hotline operates.
Let’s ensure that the vital work of eliminating real child abuse images from the Internet isn’t diluted into a moral war against art and artists. It’s too important for that, and the costs of getting it wrong are too great. You can help to send this message to INHOPE and its members by signing our petition.