
The World’s First Treaty on AI: Our Thoughts and the Way Forward

This month, the Council of Europe’s Committee on Artificial Intelligence (CAI) wrapped up negotiations on a groundbreaking draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The draft Convention marks a milestone as the world’s first binding treaty on AI and a significant step forward in the global governance of AI, complementing the recently approved EU AI Act, albeit with a more global reach and a more distinct focus on the protection of human rights.

Its significance lies not only in its potential to serve as a global standard on AI governance, but also in the breakneck speed of its negotiation by an increasingly diverse group of states and stakeholders. The body tasked with developing the treaty, the CAI, comprises the 46 member states of the Council of Europe, as well as observer states from most regions of the world, including Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the USA, and Uruguay. It also includes representatives of Council of Europe bodies, other international organisations, the private sector, and civil society. Global Partners Digital has been a longstanding observer at the CAI and its predecessor, CAHAI, where we’ve consistently advocated for a robust framework that adequately protects human rights and championed the perspectives of those not officially part of the negotiations, such as civil society organisations from the Global Majority.

Now that the final draft is ready (with the important caveat that finalisation and full approval are yet to come), how should we interpret it? Our perspective can be summed up as cautiously optimistic. The document is certainly not perfect, and falls short of several central points we’ve championed alongside other CSOs, particularly:

  • its treatment of the private sector, giving states the ability not to apply specific provisions of the Convention directly to private entities;
  • exemptions for the protection of national security interests;
  • an absence of red lines and unambiguous criteria for assessing risks; and
  • a lack of standalone provisions on labour, health and the environment.

However, despite these shortcomings, we believe that the outcome itself represents a much-needed effort to establish internationally agreed-upon norms and standards for AI systems.

What is needed now is more sustained engagement by all stakeholders, specifically civil society organisations, to guarantee effective implementation of the Convention and ensure that it provides a protective function for human rights to the greatest extent possible.

Below, we offer a more comprehensive overview of the core aspects and implications of the draft Convention, as well as some concrete next steps on implementation.

An overview of the Convention: general obligations and principles

We welcome that the Convention will establish a range of obligations and principles to ensure that activities within the lifecycle of AI systems are fully consistent with respect for human rights. Chapter II includes general obligations on the protection of human rights, the integrity of democratic processes, and respect for the rule of law. Chapter III includes specific principles on transparency and oversight, accountability and responsibility, equality and non-discrimination, and privacy and personal data protection, among others. There is also a standalone chapter on remedies, which will require state parties to guarantee the availability of accessible and effective remedies for violations of human rights resulting from AI systems, as well as procedural safeguards aimed at bolstering guarantees under international and domestic law.

These requirements build on existing international legal obligations and do not aim to replace them. They will assist in the full realisation of human rights while addressing the unique challenges posed by AI systems. For example, provisions on transparency and oversight will require decision-making processes and the overall operation of an AI system to be understandable to particular actors. This will be further supported by the principle of accountability and responsibility, which requires the ability to trace and attribute responsibility for particular outcomes. These types of provisions, in conjunction with the requirement to guarantee effective remedies, aim to overcome the complexity and opacity of AI systems in order to avoid and mitigate negative impacts. Provisions relating to equality and non-discrimination, and to privacy and personal data protection, are equally crucial given their importance in the context of AI systems and their necessity for the fulfilment of other Convention obligations.

The Convention will be underpinned by a risk and impact management framework, requiring parties to adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by AI systems. This is critical for the effectiveness and operation of the Convention, and enables state parties to take measures that are tailored to the level of risk posed by particular AI systems, or by those deployed in particular contexts or fields. However, the Convention’s approach to this element leaves something to be desired: it provides only high-level and seemingly flexible guidance for states undertaking such assessments. The CAI will develop a legally non-binding methodology for the assessment of AI systems in order to support the implementation of the Convention, but there will be no obligation for states to align their assessments beyond the requirements set out in the Convention itself.

This raises questions about the coherence and alignment of actions taken by different state parties. State parties will be required to adopt measures under their domestic legal frameworks to give effect to the provisions of the Convention, but given the absence of a clear and unambiguous means of risk classification, particularly with respect to bans, there is potential for divergent approaches and varying levels of protection. The final text requires state parties to assess the need for a moratorium, ban or other appropriate measures, but does not mandate any of them, which could further contribute to fragmentation at the domestic level. We can envision, for example, that EU countries will be required to ban the use of particular AI systems under the EU AI Act (e.g., for social scoring), whereas others may not take such measures.

The issue of scope: private sector & national security exemptions

The most contentious issue throughout negotiations was that of scope—whether the Convention would cover the private sector at all, and if so, to what extent. Leaks of the negotiations and commentary by journalists indicate that several states, primarily the United States, were keen to see the Convention not directly applying to private entities. This approach was challenged by various members of CAI, notably its civil society observers, including GPD, who have consistently advocated for a treaty that equally covers the public and private sectors, as well as rejecting blanket exemptions regarding national security. This has taken the form of direct engagement at plenary sessions of the CAI and public-facing advocacy through drafting and signing joint statements.

Ultimately, a compromise was reached: the Convention will cover both the public and private sectors, yet allows states to choose how they will implement its obligations with respect to the private sector. They can either directly apply the obligations set forth in the treaty or take “other appropriate measures”, which could range from non-binding codes to self-regulation. Countries will be required to specify their choice in a declaration upon becoming state parties to the Convention, which can be amended at any point afterwards. The final text further specifies that the Convention will not apply to AI systems with national security implications, but with an understanding that such activities must be consistent with applicable international law.

These outcomes are undoubtedly better than a full exclusion of the private sector and sweeping exemptions on the grounds of national security, but they remain unsatisfactory, particularly given the heightened human rights risks posed by AI systems that are developed, deployed or made publicly available by private entities, or used in the context of national security. The final text also fails to establish specific provisions on the labour conditions of those involved in the design and development of AI systems, and there is no standalone provision for the protection of health and the environment; both of these important considerations are only briefly mentioned in the preamble and in other, less operative provisions. This suggests that the Convention may not adequately consider the broader impacts or priorities of countries outside the Global North, as previously highlighted by our partners from the Global Majority.

Next steps: implementation and the need for sustained engagement

The final text will now be examined by the Ministers’ Deputies and transmitted to the Committee of Ministers for adoption at its Ministerial Session in May 2024. The Convention will then be open for signature by member states of the Council of Europe, as well as by non-member states such as the USA, Canada and others that participated in the CAI. Once the Convention enters into force, there will be a procedure allowing other states that were not part of the Council of Europe or the CAI process to accede.

In the meantime, there is still work to be done by the CAI, which will continue to meet to develop the legally non-binding methodology for undertaking risk and impact assessment. Beyond this, there are a number of avenues for engagement to ensure that the Convention is implemented in a way that provides the highest level of protection for human rights.

The final text requires that implementation be undertaken in a non-discriminatory manner, and includes provisions on public consultation and on the promotion of adequate digital literacy and skills by state parties. The treaty will also establish a follow-up mechanism in the form of a Conference of the Parties, along with reporting obligations and oversight mechanisms for compliance. There may, therefore, be opportunities for civil society and other stakeholders to support state parties in these activities and hold them to account.

The Conference of the Parties will be composed of representatives of state parties and tasked with overseeing the implementation of the treaty. This will include identifying any problems relating to reservations, considering potential amendments, making specific recommendations on interpretation, and facilitating cooperation with relevant stakeholders. This opens a potential door for civil society and other stakeholders to monitor and contribute to its activities. While the exact modalities are yet to be determined, similar mechanisms have been established under other Council of Europe treaties, such as the Budapest Convention, where they have been instrumental in promoting rights-respecting approaches to cybercrime legislation.

Beyond the Conference of the Parties, stakeholders might have a role to play at the national level with respect to the establishment of oversight mechanisms and reporting. The draft Convention mandates that state parties adopt or maintain effective mechanisms to oversee compliance with its obligations, and stakeholders can play a key role in guaranteeing that these bodies are truly independent and impartial, as well as ensuring they have the necessary powers, expertise and resources. Stakeholders might also consider how they can monitor reporting, as each state party will be required to provide a report to the Conference of the Parties. Lastly, stakeholders can advocate at the national level for potential state parties to apply the Convention as broadly as possible, with direct application of its obligations and principles to the private sector, and follow up with those that have made declarations limiting protections.

Conclusion

This draft Convention marks a pivotal moment in the international community’s efforts to address the risks posed by AI systems. GPD is not pleased with every element of the final text, and has ongoing concerns about the inclusion of stakeholders throughout the negotiations. Yet it is precisely these realities that reinforce the need for continued and sustained engagement by civil society and other stakeholders, despite the time and resources such engagement demands. Otherwise, the noble aspirations of this Convention to adequately protect human rights in the context of AI might not be fully realised.

By Ian Andrew Barber, Legal Expert at Global Partners Digital
