

Just Another ‘Black Box’? First Thoughts on Twitter’s Trust and Safety Council

On Tuesday, Twitter announced the creation of the Trust and Safety Council, a body comprising 40 organisations and individuals from civil society and academia, tasked with “ensur[ing] that people feel safe expressing themselves on Twitter”. The move is clearly a response to specific criticism of Twitter and its perceived inadequacies in dealing with hate speech—a theme so popular and well-trodden that it recently spawned a parody account.

It also reflects a broader tonal shift in narratives around the internet, and especially social media, which has taken place over the last few years. The digital environment is increasingly spoken of in terms of a ‘cesspool’, where free expression—corrupted by perceived anonymity—has spilled over into endemic abuse, for which the only remedy is increased moderation or censorship. Some human rights constituencies—particularly those working with minorities and women, the primary targets of online abuse—now see the internet almost exclusively as a negative space, as I discovered recently while attending a conference on minority rights.

Silicon Valley is also paying attention to this trend. Late last year, Google’s chairman Eric Schmidt wrote an op-ed in the New York Times which argued for “tools to help de-escalate tensions on social media—sort of like spell-checkers, but for hate and harassment”. At Davos, Facebook COO Sheryl Sandberg argued that targeted campaigns of ‘counter-speech’ could help defeat ISIS. In a climate of heightened fears over extremism, there is an increasing willingness to see these businesses not as platforms for expression, but as censors, or arbiters of appropriate behaviour. As a human rights defender, I find this troubling.

In the offline world, the distinction between speech which is objectively understood to incite hatred and violence (prohibited), and speech which is offensive (protected), has been broadly established through debate, discussion and advocacy over many decades. Courts at the national and international level have learned over time to enforce this boundary in accordance with agreed human rights norms.

Twitter’s Trust and Safety Council represents a wholly different approach. The stakeholders on the Council have—as far as can be surmised from the rather laconic press release—been chosen by Twitter, through a closed process. There is no indication whether the activities of the Council, or the algorithms and processes their advice may inform, will be made public. Where is the legal oversight and independent accountability? How will people who feel unjustly censored be able to challenge decisions? These questions are so far unanswered.

I have other concerns related to the composition of the Council itself. With the exception of a few (excellent) groups who work on free expression, the overwhelming majority of members are focused, in one way or another, on the restriction of hate speech. While the work these groups do is no doubt valuable and important, their numerical dominance on a body tasked with finding the “right balance between fighting abuse and speaking truth to power” seems problematic.

The Council’s geographical composition is also a concern. In spite of its claimed diversity, by far the largest single constituency represented is groups from the US—with the remainder overwhelmingly from the global North. In many countries around the world, hate speech provisions are used to criminalise legitimate dissent and political opposition. Will US groups, acclimatised to the strongest freedom of speech protections in the world, be able to appreciate these nuances? Given that this body will apparently operate with no legal oversight, its internal balance, both geographically and normatively, is crucial—and I am not convinced it is right.

There is no doubt that hate speech is a real problem—and we would, of course, welcome a move to genuinely open Twitter up to external influence. But without the vital structures of oversight, accountability and representation outlined above, it is hard to see how it will functionally differ from the myriad other ‘black boxes’ which invisibly shape our lives—both online and offline—through algorithms and opaque decision-making processes. The closed deliberations of civil society groups, however well-meaning, are not enough.

By Andrew Puddephatt, Executive Director at Global Partners Digital


Comments

“Late last year, Google’s chairman Eric Schmidt” – Frank Bulk, Feb 11, 2016 7:54 PM

Late last year, Google’s chairman Eric Schmidt wrote an op-ed in the New York Times which argued for “tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment”.

So what are they thinking of—adding a 15-minute delay timer after hitting “post” when the content contains hate speech, to allow the poster to reconsider?

