There are some who see the regulation of social media platforms as an attack on the open internet and free speech, and who argue that the way to protect both is to let those platforms continue to self-regulate. While it is true that the open internet is the product of the same freedom to innovate that the platforms have sprung from, it is equally the product of the cooperative, multi-stakeholder organisations where common policy and norms are agreed.
These organisations exist at almost every layer of the internet. At the physical layer we have the IEEE; the ITU-T sets standards for encoding; at the protocol layer there is the IETF; for web standards, the W3C; IP addressing and routing are covered by the RIRs; and domain name policy by ICANN. Without these organisations, and without their adoption of some form of multi-stakeholder, cooperative approach, the internet would not have happened. While these organisations were established with a narrower, technical remit, they have mostly recognised the need to deal with issues of security and trust and have developed policies accordingly.
However, and this is a big however, nothing like this exists for social media platforms, either at the technical level of data interoperability or at the more important level of policy. This might not matter so much were it not that some of these platforms have grown so large, and their actions become so powerful, that they are having a significant impact on global society.
So, the first question is: how are effective policies for the social media industry as a whole going to be developed and adopted? There's no doubt now that this will not happen in any reasonable timescale through self-regulation and public pressure.
Will they instead cooperate as part of a multi-stakeholder process to create standards and norms that address the serious issues of security and trust that affect them? Given that the behaviours that cause these issues of security and trust are the very same behaviours that drive the growth of these platforms, I think that is also very unlikely.
With these issues deserving of urgent attention, that leaves government regulation as the only way now to address them and protect all of us from the adverse impact these platforms are having. Ideally, that regulation would force the social media platforms to follow the rules of an open, cooperative, multi-stakeholder organisation where they can participate along with all other stakeholders. That would be the way of the open internet. If this is not possible, then more specific legislation is needed; otherwise, more general legislation will probably appear, such as hate speech laws, which will actually threaten the open internet rather than support it.
The second question is: what exactly are these critical issues of security and trust that need industry policy controls in place? What are the issues that the social media platforms have proven to us they cannot be trusted to fix themselves as quickly as required?
The first place to start is with content recommendation algorithms, which we now know are causing radicalisation, hatred and violence by recommending extreme content to promote maximum engagement. If that's news to you, then the quick primer is that the platforms have algorithms that look at which users are most engaged and which content triggers that engagement, and then promote that content to anyone they can. The sad fact is that content peddling hate, outrage and conspiracy theories generates strong engagement, and so for some years the algorithms have been recommending that content while the platforms have either ignored or been oblivious to the harm caused.
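To make that feedback loop concrete, here is a minimal, hypothetical sketch of engagement-driven ranking. The field names, scores and ranking rule are illustrative assumptions, not any platform's actual algorithm, but they show the core problem: when the only objective is engagement, outrage wins.

```python
# A minimal, hypothetical sketch of engagement-driven ranking.
# All names and numbers are illustrative assumptions, not any
# platform's actual recommendation system.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    topic: str
    engagement_rate: float  # observed clicks/shares per impression


def recommend(posts: list[Post], feed_size: int = 3) -> list[Post]:
    """Rank purely by observed engagement, with no penalty for harm.

    Content that outrages reliably scores high on engagement, so it
    rises to the top of every feed this produces.
    """
    return sorted(posts, key=lambda p: p.engagement_rate, reverse=True)[:feed_size]


if __name__ == "__main__":
    posts = [
        Post("p1", "gardening tips", 0.02),
        Post("p2", "outrage / conspiracy", 0.11),  # outrage engages strongly
        Post("p3", "local news", 0.04),
    ]
    for post in recommend(posts):
        print(post.post_id, post.topic, post.engagement_rate)
```

Run this and the outrage post tops the feed simply because it is the most clicked; nothing in the objective weighs the harm it does, which is exactly the design choice at issue.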
As these algorithms are crucial to the growth that these platforms rely on, they are not going to radically overhaul them voluntarily. Left to their own devices, the platforms would rather tackle the problem at the edges, e.g. a complaints mechanism for the most extreme content, than prevent the problem in the first place and forgo the growth.
The next pressing issue is that of multiple sock-puppet accounts and bots, created to suggest that far more people support a particular idea than really do, and so influence others by deception. We know that the social media platforms know which accounts these are; in some cases they directly enable them through their APIs and commercial models. But we also know that their commercial success depends on their headline user numbers, and a mass cull that hits those numbers hard is not in their interests. They will continue to carry out small culls every now and then to feign interest, but never enough to spook the markets.
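As a gesture at why the platforms cannot plead ignorance, here is a toy sketch of one well-known coordination signal: many distinct accounts posting identical text within a short window. The function name, thresholds and data layout are assumptions made for illustration, not any platform's internal tooling.

```python
# A toy sketch of one coordination signal for sock-puppet detection:
# clusters of distinct accounts posting identical text within a short
# window. Thresholds and field layout are illustrative assumptions.

from collections import defaultdict


def flag_coordinated(posts, min_accounts=3, window_seconds=300):
    """Group posts by identical text; flag any text posted by at least
    `min_accounts` distinct accounts within `window_seconds`."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))

    suspicious = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])  # order by timestamp
        accounts = {account for account, _ in entries}
        span = entries[-1][1] - entries[0][1]
        if len(accounts) >= min_accounts and span <= window_seconds:
            suspicious.append((text, sorted(accounts)))
    return suspicious


if __name__ == "__main__":
    posts = [  # (account, text, unix timestamp)
        ("bot_01", "Everyone supports Policy X!", 100),
        ("bot_02", "Everyone supports Policy X!", 130),
        ("bot_03", "Everyone supports Policy X!", 160),
        ("alice", "Nice weather today", 200),
    ]
    for text, accounts in flag_coordinated(posts):
        print(f"{len(accounts)} accounts posted: {text!r}")
```

Real detection is of course far more sophisticated, but even a signal this crude catches the crudest amplification campaigns, which underlines that the barrier to acting is commercial, not technical.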
A longer-running issue is how data is shared and resold, and what visibility and control people have over that. If there's one thing we should have learned from the Cambridge Analytica scandal, it is that there is layer upon layer of denial here that no amount of self-regulation will ever prevent. In this specific case, full exposure and public outcry had only a limited effect, whereas the threat of EU fines was taken far more seriously.
Another area of increasing importance, and with little prospect of being addressed by self-regulation, is that of undisclosed advertising by "influencers" and "brands", commercially facilitated by the platforms. In the regulated media world this was addressed decades ago, for well-understood reasons, and we should be applying exactly the same rules to social media.
Last of all, for now, we have the burning question of who is a "fit and proper person" to run such a company, a test that many countries apply to a range of activities, including traditional media ownership. It is about time the same test was applied to social media platforms, taking into account all the previous scandals the current set of founders have overseen.
To conclude, at the other layers of the internet, the multi-stakeholder standards and policy organisations act as an important control mechanism. The absence of such an organisation for the social media platforms means an out-of-control industry. The best outcome would be for the platforms to self-regulate through an open multi-stakeholder process, but that prospect seems utterly remote. The only way forward now is legislation, either forcing them into such a process or tackling a small set of the most urgent issues of security and trust, thereby pre-empting much broader and potentially damaging legislation.