|
In the same way monarchs are proclaimed, by powerful stakeholders attending a coronation and raising no objection, the UN's International Telecommunication Union (ITU) took on a mandate last week to coordinate AI safety worldwide, with most industry leaders and the relevant UN agencies present when it did so. It is as apt a role for the UN as nuclear, maritime or air safety. But since Member State priorities differ widely, leaders need to ensure that any safety rules developed there build trust in AI rather than merely regulate it; as we know, regulation can be many things, not all of them additive. To that end, there are two important risks to consider and to mitigate through industry action.
First, “coordination” tends toward regulation. The ITU can convene treaty-making conferences and knows how to use its coordinating function to assert real regulatory power, from creating international standards (successfully) to Internet governance (unsuccessfully, so far), so both the risk and the reflex are there. What’s more, the majority of UN stakeholder governments are as sceptical (pro-sovereignty?) about AI as they were about past innovations that ended up benefitting society, such as broadband, the Internet, or cloud computing, and they will seek to ensure that nothing about AI moves faster than they are ready to allow. Depending on their capacity to assess, benefit from, or test the limits of AI safety, this could slow down deployment in ways that create huge and damaging gaps between nations.
Second, today’s ITU is led by a seasoned UN operative with roots in, and a belief in, private-sector creativity and fair competition, but her leadership won’t last forever. Meanwhile, the ITU’s other top-ten funders, in combination with the voices of smaller Member States, can create a form of swamping rights that risks delaying the experimentation with AI safety needed for the international community to succeed at this goal. The balance of influence should therefore be struck between these state actors and the private-sector purveyors of AI. If it is, the outcome will be fairer, faster deployment of AI with strong, internationally recognised safety norms.
These risks are worth mitigating. To help this new ITU mission, industry will need to coordinate materially, always a tall order among competitors. What that groundwork really requires, however, is buy-in from all the big AI purveyors to use the UN’s convening power effectively, not least to accelerate its decision-making, so that AI safety, that indispensable condition for global problem-solving, can follow and take root everywhere, fairly and at speed.
The ITU is an independent treaty organisation that has long faced a diminishing role in the ICT space, because only a handful of countries participate and the work occurs elsewhere. That is not going to change.
Furthermore, it is the EU’s AI Act that gives rise to the real concern in the AI sector today, because of its assertion of extraterritorial authority and its onerous provisions. In the face of the Bletchley agreement, it is the DSIT consultative proceeding that should prevail on the technical and operational front, and the CoE’s new AI treaty instrument on the human rights front. Meanwhile, everyone else will seek relevance by dipping their oars into the AI pond.