How the War Against Child Abuse Material Was Lost

The battle to purge child abuse images from the Internet has been lost. That doesn’t mean that we can’t or shouldn’t continue to work towards the elimination of image-based abuse. But it is widely acknowledged by law enforcement, reporting hotlines, and prevention groups alike that this can’t be achieved merely by censoring images from the Internet and by criminalizing those who access or share them—which are the only strategies that society has focused on until now.

Completely censoring abuse images from the Internet has proved intractable because it would require the surveillance of all communication channels, including those that are end-to-end encrypted. It simply isn’t possible for communications to be both securely encrypted and mass-scanned for child abuse images, and not even the proposed Lawful Access to Encrypted Data Act would require such a gaping backdoor to be installed in secure communication apps and services. And even if such a mandate were put in place, it would be futile: free, open-source encryption software is now ubiquitous. Secure communications are here to stay.
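To make the incompatibility concrete, here is a minimal sketch of an end-to-end encrypted exchange in Python using the PyNaCl library. The library choice and all names are illustrative assumptions, not drawn from any particular messaging app; the point is simply that the relay in the middle only ever handles ciphertext, so there is nothing for it to scan:

```python
# Illustrative sketch only: a toy end-to-end encrypted exchange using
# PyNaCl (pip install pynacl). Library choice and names are assumptions.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"hello, bob")

def relay(blob: bytes) -> bytes:
    """The server's entire view of the message: opaque bytes.

    Without a private key it cannot decrypt, hash-match, or filter the
    payload, which is why mass scanning and end-to-end encryption
    cannot coexist.
    """
    return blob

# Only Bob, holding the matching private key, can recover the plaintext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(relay(ciphertext))
assert plaintext == b"hello, bob"
```

Any scanning mandate would therefore have to run on the endpoints themselves or break the encryption outright, which is precisely the gaping backdoor at issue.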

As for criminalization, at some point, higher penalties for image-based abuse no longer have any further deterrent effect—and we reached that point long ago. Under existing state and federal laws in the United States, those convicted of possessing abuse images can easily receive a longer sentence than those convicted of the hands-on sexual abuse of a child. Yet as penalties for possession offenses have skyrocketed, rates of offending have increased along with them. Criminalization also has lifelong harmful effects on families and communities. Up to 15% of offenders are children themselves—and in some cases the victim and the perpetrator are one and the same.

Governments, nonprofits, and tech companies have failed

Responsibility for the harm that children suffer through the creation and circulation of abuse images lies solely with those who create and circulate them. Those harms are real, and we can’t simply ignore them. But responsibility for our failure to contain this crisis lies with those who were entrusted to address it. Since governments, large child safety nonprofits, and technology companies have all doubled down on the two-pronged approach of censorship and criminalization, they all share the blame for its failure.

For governments, the emotional topic of child sexual abuse is routinely invoked to justify repressive laws and policies that could never otherwise secure passage. An example is FOSTA/SESTA, a law originally promoted as a solution to child sex trafficking, but which in fact targeted adult sex workers for criminalization and censorship. Aside from the harm that it did to sex workers—which can hardly be regarded as an unintended consequence—the law has also made the investigation and prosecution of real child sex trafficking cases more difficult than before, and resulted in censorship of content about abuse prevention.

A censorship-first approach has also been promoted by the large child safety nonprofits. NSPCC, the government-chartered child safety group from the United Kingdom, has been the driving force behind a campaign to hold Internet companies responsible for child sexual abuse. Its American counterpart NCMEC, which is also government-linked, was a key supporter of FOSTA/SESTA. It now also supports the EARN IT Act, a bill that would expand the censorship of sexual content even further, but which is opposed by child sexual abuse prevention groups.

Technology companies have long borne the brunt of demands from governments and their allied child safety groups to adopt their censorship agenda. Over the past decade, they have increasingly capitulated by co-opting and partnering with these pro-censorship groups. In 2017, Facebook was the first tech company to join NCMEC in supporting FOSTA/SESTA. Most of the tech company representatives at this month’s summit of INHOPE, the association of abuse reporting hotlines, are also alumni of government-linked groups—Twitter’s from NCMEC, and Google’s from the NSPCC. In short, large tech companies have not offered an effective check on the government’s agenda, but have swallowed it whole.

A lesson from the music industry

Napster, the original peer-to-peer music sharing app that was released in 1999, created a massive headache for the music industry when its revolutionary model of music distribution led to an explosion in copyright infringement. The industry’s initial response to this was exactly the same as the response that society has taken to the problem of child abuse imagery—censorship and criminalization. Indeed, many of the same underlying technologies are used to censor child abuse images as those that were developed to control digital content piracy.
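That overlap in technique is easy to illustrate. Below is a toy perceptual “average hash” in Python, a deliberately simplified stand-in for the fingerprinting systems used in both domains (such as Content ID for copyright or PhotoDNA for abuse images); the function names and the 8x8 grid format are assumptions for illustration only:

```python
# Illustrative toy only: a perceptual "average hash" over an 8x8 grayscale
# grid. Real systems (e.g. PhotoDNA, Content ID) are far more robust, but
# share the core idea: reduce media to a compact fingerprint, then match
# new uploads against a blocklist of known fingerprints.

def average_hash(pixels: list[list[int]]) -> int:
    """Fold an 8x8 grid of 0-255 grayscale values into a 64-bit hash."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (p >= mean)  # 1 bit per pixel: above/below mean
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits; a small distance means 'the same image.'"""
    return bin(a ^ b).count("1")

# Unlike an exact cryptographic hash, this matching tolerates small edits
# (re-encoding, resizing, brightness shifts), which is what lets the same
# machinery transfer from piracy control to image blocklisting.
known = average_hash([[(r * 8 + c) * 4 for c in range(8)] for r in range(8)])
upload = average_hash([[(r * 8 + c) * 4 + 3 for c in range(8)] for r in range(8)])
assert hamming(known, upload) <= 5  # still detected despite the edit
```

The limits are shared too: fingerprint matching only catches content that is already known and visible to the scanner, which is exactly where the censorship strategy ran aground in both domains.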

But the industry soon learned that these approaches didn’t work, and that in some ways they made the problem worse. Consumers resented being treated as criminals, adopted music piracy as part of the counter-cultural identity of their generation, and simply moved to another file-sharing app as soon as one was shut down. In the movie The Social Network, when Napster co-founder and Facebook’s founding president Sean Parker claims that Napster “brought down the record company,” Eduardo Saverin objects, “Sorry, you didn’t bring down the record companies. They won.” Parker responds, “In court.”

Eventually, the music industry came around to the idea that they needed to compete with Napster on its own terms, by providing a better, equally convenient alternative. When they finally did so by licensing affordable music streaming and downloads, the piracy problem largely went away by itself.

What we should be doing instead

The government-linked child safety sector and its tech allies have yet to reach the same realization as the music industry. And so they persist in the idea that ever-tougher criminal penalties, combined with the increased surveillance that would be required to make them practical to enforce, will eventually be sufficient to eliminate abuse. But after more than 20 years of this experiment, it’s finally time to call it a failure. If we continue down this path, things aren’t going to get better; they’ll continue to get worse.

To actually make progress towards solving the problem of child abuse online, we need to do what the music industry eventually did: build a better pathway for people who are drawn towards this material. Erecting border walls and surveillance posts around the Internet sends the wrong message to these people, and will only encourage them to circumvent these measures. Rather than trying to ensure that abuse images can’t be accessed or shared, we need to focus on ensuring that there are better alternatives, so that fewer people feel the need to seek those images out.

Convincing people that viewing or sharing such images is harmful and wrong is a necessary and important part of achieving this outcome. But as with the war on drugs, “Just say no” goes only so far—the allure of the taboo is palpable. And as police are now realizing, this allure can extend even to offenders who aren’t otherwise sexually attracted to children (in other words, those who don’t fit the psychological profile of pedophiles). In short, there are a lot more people willing to perpetrate image-based abuse than even experts previously believed.

What does a better alternative look like for these people? In the broadest sense, anything that could prevent them from abusing a real child should be considered as a viable alternative. In some cases, this just means education, so that they realize their behavior comes at a cost to children: viewing abuse images is not a victimless crime, and many people who offend still don’t understand that. For some, that understanding alone provides enough incentive to stop. Others may require peer or professional support to make that connection and to adjust their behavior accordingly.

For others still, it may also help to be able to explore their taboo thoughts and feelings through victimless outlets such as art, fiction, role play, or sex toys. For any given individual, using these outlets is not an indication of an unusual sexual interest; across the broader population, however, some users of these materials may be relying on them as a coping mechanism. Prostasia Foundation is the only child protection organization raising funds for research on whether such outlets could be a tool for diverting these people away from offending against real children, as initial research suggests may be the case.

Active opposition to alternatives that could prevent offending

Far from promoting or supporting research into such alternatives, the government-linked child safety groups actually want them banned. Historically, many of these groups were associated with the baseless Satanic ritual abuse panic that was a precursor to QAnon, and still today they remain intolerant of sexual minorities and of sexual expressions that are commonly (and wrongly) stigmatized. Their stated justification for criminalizing such expressions is that they are linked with real child sexual abuse; however, there is no evidence supporting this claim.

The NSPCC, for example, rails against 18+ pornography and sex dolls that are too “young looking.” NCMEC allows those reporting child abuse images to include “anime, drawing, cartoon, virtual or hentai” images, and includes these and other lawful images in reports to foreign police forces. The Canadian Centre for Child Protection, which does the same, once reported a 17-year-old Costa Rican girl over cartoon images that she posted online, resulting in her arrest by authorities. The reporting hotline association INHOPE has refused to put an end to these practices.

Due to their close partnership with these groups, governments and tech companies have fallen into line behind their stigma-driven policies. During 2019, a raft of laws banning sex dolls was passed in the United States and overseas, despite ongoing research into their therapeutic applications. Under pressure from a stigmatizing press report initiated by a conservative activist who represents the NSPCC and other British groups, Facebook cracked down on the adult DD/lg lifestyle community. Other tech companies have been making similar censorship moves; Reddit, for example, banned almost twice as much content for “minor sexualization” in 2019 as in 2018, due to an expansion of its policies and enforcement practices to include fictional content such as 18+ ageplay and manga art.

To be clear, this means that governments and big tech companies are not only failing to address the circulation of real child abuse images through their blinkered preoccupation with criminalization and censorship; by extending that censorship to lawful and possibly therapeutic outlets for some people who might otherwise be drawn to illegal content, they could actually be making the problem worse. Additionally, by establishing a precedent that content should be banned because it is immoral, rather than because it is harmful, they have played into the hands of those whose agenda includes banning other “immoral” content such as 18+ pornography.

Independent platforms are leading the way

It may be too late to disentangle large tech companies from the puritan agenda of the government-linked censorship cartel. At least for now, that agenda is being contested elsewhere, such as in the courts, where FOSTA/SESTA remains under constitutional challenge. But in the meantime, our hope for a more evidence-based approach to the prevention of online child abuse lies with smaller platforms.

For example, Fanexus, a soon-to-be-launched social media platform for fandom communities and creators, and Assembly Four, a sex worker and technologist collective that operates platforms for sex workers and their clients, are both dedicated to providing censorship-free spaces for their respective communities online. But at the same time, they are also working proactively to ensure that these platforms are not misused to perpetrate the abuse or exploitation of minors.

Because so much attention has been devoted to censorship and criminalization as the solution to child sexual abuse, we are still navigating the contours of a more prevention-focused approach. Legally and technically, what can be done to limit the availability of unlawful images of minors online, and what can’t? How can platforms moderate content without resorting to a checkbox approach that embeds harmful stereotypes and assumptions? What does safeguarding look like for platforms that allow fictional content that references child abuse? What content warnings are sufficient for such material when it may be triggering for survivors?

These are questions we must now engage with seriously. Answering them will require an investment in research, and a willingness to engage with stigmatized topics and communities rather than sweeping them off our platforms and into darker corners of the Internet. The importance of this investment has been overlooked for so long because many people falsely believe that prevention isn’t possible—but it is.

We can effectively work towards the elimination of image-based abuse, but not through mass surveillance and censorship or by further enabling the expansion of the carceral state. Instead, we’ll solve it step by step as more individuals who now resort to the use of unlawful sexual images of minors decide for themselves that a better alternative exists for them… and as generations to come follow in their footsteps.

We look forward to the day when we can call the battle against image-based abuse a success, and we invite you to join us in fighting it.

By Jeremy Malcolm, Trust & Safety Consultant and Internet Policy Expert
