

The Future of Europe’s Fight Against Child Sexual Abuse

Like much of how the Internet is governed, the way we detect and remove child abuse material online began as an ad hoc set of private practices. In 1996, an early online child protection group posted to the Usenet newsgroup alt.binaries.pictures.erotica.children (yes, such a thing really existed) to try to discourage people from posting such “erotica,” on the assumption that the Internet couldn’t be censored. But that was never quite true: eventually, Internet providers decided that allowing such a newsgroup to be hosted on their servers wasn’t such a good idea, and they blacklisted this and similar newsgroups. But by then, the World Wide Web had overtaken Usenet in popularity, and the game began again.

Very quickly, governments woke up and decided that the removal of images of child abuse and child nudity from the Internet was something they needed to be involved in. In the United States, lawmakers looked to an existing government-linked nonprofit, the National Center for Missing and Exploited Children (NCMEC). NCMEC had been formed in 1984 in the early stages of what became an unprecedented media blitz on the issue of child abduction and sex trafficking, fueled by notorious cases such as that of Adam Walsh, who also gave his name to the national sex offense registry law.

In 1998, in the middle of the Internet’s explosive early growth phase, NCMEC created the CyberTipline as a place for Internet users to report incidents of suspected child sexual exploitation. Over the next decade, NCMEC, in partnership with other U.S. federal agencies such as the FBI and ICE, established a network of alliances with foreign law enforcement agencies and began forwarding them any CyberTipline reports that seemed to involve their countries.

In 2008, then-Senator Joe Biden sponsored the PROTECT Our Children Act, which made reporting suspected child pornography to the CyberTipline compulsory for Internet platforms. However, there was no mandate for them to proactively scan their servers for such material; they simply reported it if and when they became aware of it.

Hash scanning to the rescue

This reliance on user reporting quickly became a bottleneck. Even after platforms had removed illegal material from their servers, it would reappear—and they would have to wait for another user report to find it again. Very soon, large platforms began digitally “fingerprinting” unlawful images when first reporting them, so that they could be automatically removed if they reappeared. But in the cat-and-mouse game between platforms and abusers, the latter quickly learned that making minor changes to an image could defeat such automatic scanning.
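To see why exact fingerprinting was so brittle, it helps to recall that a conventional cryptographic hash changes completely when even a single byte of a file changes. A minimal Python sketch, using placeholder bytes rather than real image data, illustrates the problem:

import hashlib

# Two "images" that differ by a single byte: for example, one pixel
# tweaked, or the file re-saved with slightly different metadata.
original = b"...image bytes..." + b"\x00"
altered = b"...image bytes..." + b"\x01"

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
# The two digests have nothing in common, so a database of exact
# hashes cannot recognize the altered copy as the same image.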

A breakthrough came in 2009, when Microsoft Research and Hany Farid, a professor at Dartmouth College, developed a new tool called PhotoDNA, designed to prevent known unlawful images from being uploaded even if small changes had been made to them. Internet platforms began sharing PhotoDNA-compatible fingerprints (or hashes) of images that were reported to NCMEC, until eventually NCMEC took over this function as well, becoming the maintainer of a shared database of hashes of unlawful images in 2014.
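PhotoDNA itself is proprietary, and its algorithm is considerably more sophisticated than anything that fits in a few lines. Still, the general idea behind this family of “perceptual” hashes can be sketched. The toy difference hash below (our own illustration, not Microsoft’s algorithm) records whether each pixel in a small grayscale grid is brighter than its right-hand neighbor; minor edits flip few of these bits, so near-duplicates stay within a small Hamming distance of one another:

def dhash(pixels):
    """Compute a toy difference hash from a 2D grid of grayscale values."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count the bit positions where two hashes disagree."""
    return sum(x != y for x, y in zip(a, b))

# A tiny grayscale "image" and a slightly brightened copy of it. A real
# system would first downscale the photo to a small grid like this one.
image = [[10, 20, 30, 40, 50],
         [50, 40, 30, 20, 10],
         [10, 20, 30, 40, 50],
         [50, 40, 30, 20, 10]]
brightened = [[p + 5 for p in row] for row in image]

print(hamming(dhash(image), dhash(brightened)))  # 0: the hashes still match

An exact hash of these two grids would differ completely, while the perceptual hash does not. In practice, a match is any hash within a threshold Hamming distance rather than an exact equality, which is also where false positives can creep in.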

What does any of this have to do with Europe? Well, nothing much—and that has become a problem for Europe. Although some platforms that operated in Europe participated in this NCMEC-centered scheme, the unvetted reports that NCMEC was forwarding back to European authorities were of extremely low quality, with up to 90% of them representing innocent, lawful content such as cartoons and family photos. (NCMEC has disputed this, claiming that its reports are about 63% accurate—still hardly an inspiring figure.) Responsibility for sifting the wheat from the chaff fell to European law enforcement authorities or abuse reporting hotlines. In 1999, these hotlines were organized into a European Commission-funded network, INHOPE, which NCMEC also later joined.

One of the key national European reporting hotlines was Britain’s Internet Watch Foundation (IWF), which had originally been formed to tackle the problem on Usenet, and which had begun collecting image hashes in 2015. Unlike NCMEC, which was a creation of statute, the IWF had been formed by Internet platforms themselves, many of which used its tightly curated hash lists in preference to NCMEC’s. In 2019, NCMEC and the IWF began sharing their hash databases with each other.

Rise of the machines

Despite its evolution, this regime for the filtering of unlawful sexual images of minors remained an ad hoc and largely private arrangement—but governments wanted more control. In November 2018, at the height of a campaign against “Wild West” Internet platforms led by UK tabloids and the government-linked child protection organization the NSPCC, the UK Home Office co-sponsored a tech industry hackathon aimed at developing artificial intelligence (AI) based surveillance tools for the detection of child grooming, which could be “licensed for free to resource-constrained smaller and medium-sized tech companies.” Prostasia Foundation representatives attended the formal parts of the event, but were excluded from the hackathon itself. The eventual custodian of the completed tool, Thorn, also refused to license it to us for use in the peer support discussion group that we host.

Meanwhile, other AI surveillance tools were under development in response to government demands that not only existing images of child sexual abuse, but also never-before-seen images, be automatically detected and eliminated from Internet platforms. During 2018, both Google and Facebook began using proprietary tools that purported to be able to identify unlawful images of children. Despite concerns about the accuracy and privacy implications of such tools, as well as the lack of transparency and accountability around their operation, these experimental tools were quietly moved into production. Google also licensed its tool to other platforms (though, again, refused to license it to Prostasia).

The ePrivacy Directive reality check

This extension of private surveillance in the name of child protection came to a screeching halt in December 2020, when Europe’s ePrivacy Directive was extended to cover private messaging services. As it turns out, the mass and indiscriminate use of surveillance tools by Internet platforms against their users, in the absence of any prior suspicion that they have done anything wrong, infringes their fundamental human right to privacy. This placed not only the use of experimental AI surveillance tools, but even the much more accurate and well-tested PhotoDNA scanning, under a legal cloud.

The groundwork for a long-term solution to this dilemma had already been laid, in the form of a strategy for a more effective fight against child sexual abuse that the European Commission released in July 2020. As part of this strategy, the Commission planned to establish a new regime for the reporting and removal of unlawful sexual images of minors by Internet platforms, which would build in the privacy protections and democratic safeguards that the ad hoc private regime lacked.

Anticipating that this would not be ready before the ePrivacy Directive’s expanded scope took effect, in September 2020 the European Commission proposed a temporary derogation from the ePrivacy Directive that would allow scanning for child sexual abuse online to continue until 2025 at the latest. Among the few safeguards included were that the derogation would be “limited to technologies regularly used” for this purpose, and that such technologies should “limit the error rate of false positives to the maximum extent possible.”

However, although tech companies agreed to this proposal, the European Parliament’s Civil Liberties, Justice and Home Affairs (LIBE) committee found its safeguards to be insufficient. The committee proposed a compromise with somewhat stronger safeguards, which would still have allowed AI surveillance tools to scan both private messages and photos, provided that flagged content was reviewed by a human being before being forwarded to law enforcement authorities.

As this compromise was unacceptable to the Council and the Commission, the new rules took effect with no temporary derogation in place. Facebook immediately ceased scanning for unlawful images in its messaging services for European users—although other tech platforms, including Google and Microsoft, continued scanning in the hope that the temporary derogation would still be agreed shortly.

The future

In February 2021, while negotiations over the temporary derogation continue, the European Commission opened a consultation on its long-term strategy. One of the main purposes of the consultation is to gather feedback on plans to establish a new legal regime under which Internet platforms would be required (or perhaps merely encouraged) to detect known child sexual abuse material (and perhaps previously unknown material and suspected grooming) and to report it to public authorities.

Although notionally independent from the temporary derogation negotiations, the reality is that there will be enormous pressure for whatever is agreed as a temporary measure to be grandfathered into the final legislative scheme. As things stand, two groups in the European Parliament are all that stand in the way of legalizing the scanning of private chats, emails, and photos using unproven artificial intelligence algorithms. It bears remembering that these AI tools have been in use for less than three years, and were adopted only under enormous political pressure on tech companies to “solve” the problem of child sexual abuse—a task which, even with the best of intentions, they are simply incapable of performing.

Allowing this to happen would be an exercise in child protection theater, and a disaster for civil liberties. Yes, it is important to establish a new legal regime that accommodates the voluntary scanning of uploaded content for known unlawful sexual images of minors, using well-tested tools such as PhotoDNA. But this should not be taken as an opportunity to also legalize the use of intrusive and untested artificial intelligence algorithms that provide no demonstrated benefit to child safety.

Conclusion

In our single-minded focus on surveillance and censorship as solutions to the problem of child sexual abuse, we have finally hit a wall: the fundamental human rights that protect us all. If we really mean to protect children from abuse, pushing against that wall isn’t the answer. Instead, we need to broaden our approach, and consider how investing in prevention could hold the answer to a longer-term, sustainable reduction in child sexual abuse, online and offline.

Thankfully, the European Commission, with advice from experts in the field, has broadened the scope of its ongoing consultation beyond the establishment of a legal regime for the reporting and removal of abuse images, to also include the possible creation of a European centre to prevent and counter child sexual abuse, which would provide holistic support to Member States in the fight against child sexual abuse. This centre could help support research into what motivates individuals to become offenders, evaluate the effectiveness of prevention programs, and promote communication and the exchange of best practices between practitioners. Having advocated for such an approach since our formation, we will be expressing our support for it in our response to the consultation.

Prostasia Foundation will also be holding a free webinar on March 15 with Member of the European Parliament Dr Patrick Breyer, and clinical psychologist Crystal Mundy, to discuss all angles of the future of the fight against child sexual abuse in Europe, and to provide participants with the background information they need to provide fully informed and comprehensive responses to the Commission’s consultation.

By Jeremy Malcolm, Trust & Safety Consultant and Internet Policy Expert

