
What Would a Human Rights-Based Approach to AI Governance Look Like?

Over the past year, discussions around artificial intelligence (AI) have saturated media and policy environments. Perspectives on it vary widely: from boosterist narratives, which posit the limitless potential of AI-powered technologies to help overcome social inequalities and accelerate industrial development, to apocalyptic framings, which suggest that a (speculative) ‘artificial general intelligence’ could make humans extinct.

In the middle, civil society groups have been emphasizing the critical, real-life opportunities and challenges that AI presents for individuals and their human rights in the here and now. We posit that the trajectory of AI is unlikely to lead to either utopia or apocalypse; rather, the technologies it comprises have both rights-supporting and rights-oppressive potential. To take one example, generative AI models like ChatGPT could, with effective governance, release us from routine industrial tasks and unlock time for human creativity and free expression. They also have the potential to drive disinformation, disrupt education, and weaken cybersecurity.

How, then, can we harness the potential benefits of AI while avoiding its risks to human rights? Governments are currently racing to enact regulation to address precisely this question. At the time of writing, no AI-specific frameworks, whether national, regional or global, are yet operational. The general frameworks proposed so far are anchored in ethics and have a mixed record in considering human rights. At this juncture, when so many frameworks are under development and so little precedent exists, civil society has a small but critical window to try and shape them in a more rights-respecting and equitable direction.

In response to this challenge, Global Partners Digital—the organisation I work for—has developed a rights-based policy approach to AI governance built on five core principles. Designed to be used both directly by policymakers and in advocacy by other actors, the principles set out clear, actionable considerations that can be applied to any regulatory process around AI.

Principle 1. Build policy approaches grounded in international human rights law

Policy approaches to the design, development and deployment of AI systems should be firmly rooted within the existing international human rights framework and should not undermine or seek to replace existing human rights standards. This is because the international human rights framework and the specific rights guaranteed under it—including the rights to life, privacy, freedom of expression, association, peaceful assembly, freedom of movement, non-discrimination, and effective remedy—are already applicable in the context of AI systems.

There are challenges in applying the international human rights law framework to AI systems. This is due both to the complexity and opacity of AI systems and to the fact that international human rights protections and obligations are often broadly worded, making them difficult to interpret and apply in the context of new technologies. Such challenges have spurred some entities to propose alternative approaches to AI governance, including approaches grounded purely in ethics (such as the Recommendation on the Ethics of AI adopted by UNESCO). An ethical approach can be a useful complement, but it is detrimental if regarded as a substitute for a human rights-based approach: it risks undermining the applicability of the existing international human rights law framework, which carries a normative value, geopolitical recognition and status that any alternative approach would be unlikely to match. There is also a risk that ethical approaches to AI policy may suggest that the international human rights framework is inappropriate or insufficient, which could encourage the development of standards that lack consensus, or are even inconsistent with the existing human rights framework.

Policy approaches must therefore reaffirm the existing international human rights framework and seek to enable the full realisation of such rights in order to address the unique challenges posed by AI systems. This can be accomplished through clarification of the scope and applicability of particular rights, as well as the imposition of tailored requirements or obligations to enable practical mechanisms for protecting human rights. However, this should only take place where it is determined that existing frameworks and standards cannot provide sufficient and comprehensive protection for human rights in the context of AI development and deployment.

Principle 2. Develop a risk-based approach

Not all AI systems or particular uses of AI pose the same type or level of risk to individuals’ human rights. Risk-based approaches require, at a minimum, some form of assessment to determine and classify risk levels, so that any new obligations are proportionate to the identified risk. But while there are some sectors—including law enforcement, healthcare, military use and migration control—that demand particular attention or concern, the design, development and deployment of AI is often not limited to a particular sector. This is what we are seeing with foundation models, which capture general learning patterns to be used later as the basis for more specific AI systems, and can be deployed across products and services in different fields.

A risk-based approach must therefore both recognise the general applicability of AI technologies and sensitively assess their impacts in different use cases. By doing this, we can accurately identify risks to human rights and mitigate them through the imposition of appropriate obligations—whether they relate to transparency, accountability or otherwise—across the whole AI lifecycle. This risk-based approach should be embedded across the public and private sectors.

A risk-based approach should also include the ability to impose prohibitions or moratoriums on AI systems when it is determined that they present an unacceptable threat to human rights. This could include, for example, AI systems using biometrics to identify, categorise or infer the personality or emotions of individuals, leading to mass surveillance, or AI systems used for ‘social scoring’.

The task of assessing risk to human rights should fall on those actors best placed to identify it throughout the AI lifecycle, including designers, developers and deployers. They should be responsible for conducting ongoing evaluation and for communicating—to each other, impacted communities, oversight authorities, and the general public—the results of any assessment exercise.
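To make the idea of proportionate, tiered obligations more concrete, the sketch below shows how a deployer might map an AI use case to a risk tier and an associated set of duties. It is purely illustrative and not drawn from any existing or proposed framework: the tier names, sensitive sectors and obligations are hypothetical placeholders, and a genuine human rights risk assessment would be qualitative, context-sensitive and ongoing rather than rule-based.

```python
# Purely illustrative sketch: mapping an AI use case to a hypothetical risk
# tier and proportionate obligations. Tier names, sectors and obligations are
# placeholders, not a proposed regulatory scheme.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"   # candidate for prohibition or moratorium


SENSITIVE_SECTORS = {"law enforcement", "healthcare", "military", "migration"}

OBLIGATIONS = {
    RiskTier.MINIMAL: ["basic transparency notice"],
    RiskTier.LIMITED: ["transparency notice", "technical documentation"],
    RiskTier.HIGH: ["human rights impact assessment", "human oversight",
                    "logging and auditability", "remedy mechanism"],
    RiskTier.UNACCEPTABLE: ["prohibition or moratorium"],
}


@dataclass
class UseCase:
    sector: str                       # e.g. "healthcare", "law enforcement"
    affects_rights_or_benefits: bool  # decision affects rights, benefits or status
    remote_biometric_inference: bool  # biometric identification/emotion inference
    social_scoring: bool


def classify(use_case: UseCase) -> RiskTier:
    """Toy tiering logic; a real assessment would be qualitative and ongoing."""
    if use_case.social_scoring or use_case.remote_biometric_inference:
        return RiskTier.UNACCEPTABLE
    if use_case.sector in SENSITIVE_SECTORS:
        return RiskTier.HIGH
    if use_case.affects_rights_or_benefits:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


case = UseCase("migration", True, False, False)
tier = classify(case)
print(tier.value, OBLIGATIONS[tier])  # high ['human rights impact assessment', ...]
```

The point of the sketch is simply that obligations scale with the identified tier, and that an "unacceptable" tier exists as a route to prohibition or moratorium rather than mitigation.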

Principle 3. Embed openness and inclusivity

Data representativeness (the diversity of people represented in data) and quality (accuracy and relevance of data) are necessary prerequisites for AI systems to be designed in an open and inclusive manner. Those in charge of the design and deployment of AI systems should be able to provide transparent information about the provenance of the training data, the values applied to data selection, the quality assurance process that data went through, and the link between the inputted data and the populations or context within which the AI system would be deployed. A broad range of perspectives and interests should also be taken into account, reflecting differences in culture, language, expertise, and socio-economic conditions.

Designers and deployers of AI systems should devote specific resources and attention to monitoring and mitigating disproportionate impacts on particular groups caused by bias and discrimination in the AI system. Mechanisms for redress should be made available in those cases. To that end, existing legal frameworks dealing with non-discrimination in different fields (including employment access, consumer affairs, or healthcare) should be strengthened and used to guide the deployment of AI systems. In jurisdictions where such frameworks do not exist, they should be created.
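One simplified way to operationalise this kind of monitoring is to compare favourable-outcome rates across groups and flag large gaps for human review. The sketch below is a hypothetical illustration only: the 0.8 threshold echoes the "four-fifths" heuristic used in some employment-discrimination contexts rather than any legal standard, and rate comparisons alone can neither establish nor rule out discrimination.

```python
# Hypothetical illustration: flag groups whose favourable-outcome rate falls
# well below the best-served group's rate. A gap is a signal for human review,
# not proof of discrimination; the 0.8 threshold is a heuristic, not a law.
from collections import defaultdict


def flag_disparate_impact(outcomes, threshold=0.8):
    """outcomes: iterable of (group, favourable: bool) decision records."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favourable[group] += int(ok)
    rates = {g: favourable[g] / totals[g] for g in totals}
    reference = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * reference}


sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
print(flag_disparate_impact(sample))  # {'group_b': 0.33...} vs group_a at 0.67
```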

Principle 4. Ensure transparency throughout the process

Those deploying AI systems to perform particular tasks—whether public or private bodies—must clearly inform affected individuals when a decision that affects them has been made by or with an AI system. This includes “hybrid” decisions, where an AI system was used to predict, augment or flag decisions for human review, or where a human has reviewed a suggestion made by an AI system. This disclosure is important to ensure that individuals can appeal any decisions they believe were made in error.

It is also essential to be transparent with users and regulators about how an AI system works. For example, many current LLMs have been released without sufficient information for consumers about their actual capabilities, limitations and data provenance. As a result, users have been misled by erroneous generative AI outputs, such as nonexistent academic citations and legal precedents, fake profiles and misleading claims presented as fact.

Because of the opacity of the models themselves, AI developers must be transparent about exactly how their system or model was trained, developed and tested, so that accountability can be exercised, and remedy provided, for system outputs and impacts. This disclosure should include, at a minimum, the following items (a rough structured sketch follows the list):

  • information about how the training dataset was acquired or built, and by whom;
  • assumptions that underpinned its labelling or coding for use in machine training;
  • the nature of quality assurance performed to check data quality or weed out degraded or ‘noisy’ samples;
  • information about the AI model itself, including the type of learning algorithm and reward mechanisms used, and the number of parameters and training/testing iterations;
  • information about how the model was fine-tuned to complete relevant tasks through specific data inputs and reinforcement learning; and
  • details on how the model’s robustness or accuracy was assessed through testing and controls before determining it was safe to launch (for example, through red teaming).
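As a rough illustration of how these minimum items might be recorded, the sketch below structures them as a machine-readable disclosure record, in the spirit of "model cards" and "datasheets for datasets". The field names and sample values are hypothetical; any real disclosure regime would define its own schema.

```python
# Rough sketch with hypothetical field names: the minimum disclosure items
# listed above, captured as a structured record a developer could publish.
from dataclasses import dataclass


@dataclass
class TrainingDataDisclosure:
    sources: list[str]          # how the training dataset was acquired or built
    built_by: str               # and by whom
    labelling_assumptions: str  # assumptions behind labelling/coding for training
    quality_assurance: str      # checks for degraded or 'noisy' samples


@dataclass
class ModelDisclosure:
    data: TrainingDataDisclosure
    learning_algorithm: str     # type of learning algorithm and reward mechanisms
    parameter_count: int
    training_iterations: int
    fine_tuning: str            # task-specific data inputs and reinforcement learning
    safety_evaluation: str      # robustness/accuracy testing, e.g. red teaming


# Illustrative placeholder values only.
disclosure = ModelDisclosure(
    data=TrainingDataDisclosure(
        sources=["licensed corpus", "public web crawl"],
        built_by="example-lab (hypothetical)",
        labelling_assumptions="human preference labels for helpfulness",
        quality_assurance="deduplication and removal of corrupted samples",
    ),
    learning_algorithm="transformer with a learned reward model",
    parameter_count=7_000_000_000,
    training_iterations=300_000,
    fine_tuning="instruction tuning plus reinforcement learning from human feedback",
    safety_evaluation="internal red teaming and benchmark accuracy testing",
)
print(disclosure.data.sources)
```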

Transparency requirements can create costs for AI developers and implementers. However, these costs should be considered as an integral part of the AI system’s development and deployment. To help ease this burden, regulation should provide guidance on how companies of all sizes—including small businesses—can implement transparency requirements in a proportionate manner.

Principle 5. Hold designers and deployers of AI accountable for risks and harms

Any decisions around AI design and deployment should consider potential human and environmental harms, as well as how to remedy and mitigate them in a timely manner. In addition to risk assessment and mitigation, appropriate mechanisms should be available for handling grievances and providing effective remedy for individuals and groups adversely affected by the performance of AI systems. Accountability mechanisms should avoid diluting responsibility among different actors and entities within the AI lifecycle. Liability should be clearly and proportionately assigned to the different entities which are best positioned to prevent or mitigate harm in the AI system’s performance.

Accountability mechanisms for AI systems in their research and testing phases—or for their implementation in downstream or third-party products—might differ from those appropriate for mass market products. For mass market products, accountability mechanisms should be able to both provide necessary quality and safety assurance to prevent consumer harm once in the market, and offer mechanisms of remedy for impacted consumers in instances where those measures have not been taken.

What next for AI governance?

AI regulation is spreading rapidly at the national, regional and global levels, with most of it currently emerging in global North jurisdictions. The most advanced current efforts are led by Europe: one at the regional level, with the EU’s proposed risk-based AI Act, likely to be adopted at the end of 2023; the other at the global level, through the work of the Council of Europe’s Committee on Artificial Intelligence (CAI), which is currently developing the world’s first treaty on AI. Though emerging from a European body, this instrument, expected in 2024, has the potential to become a global standard that can be adopted by countries outside the Council of Europe.

In the UN system, the ongoing Global Digital Compact (GDC) process, led by the UN Secretary General, has proposed a High-Level Advisory Body for AI (the Body), which would bring together state experts, relevant UN entities, industry, academia and civil society groups to advance recommendations for the international governance of AI. This proposal also includes a digital human rights advisory mechanism facilitated by the Office of the High Commissioner for Human Rights (OHCHR), which would provide practical guidance on human rights and technology issues.

Through these governance efforts, the UN is taking a leading role in responding to the intensifying public debate around the appropriate modes and forums for global AI oversight. However, we were disappointed that the recent call for nominations to constitute the Body fell short both in its timeframe and in the information provided about the criteria and selection process. Both elements are key to ensuring that the Body’s work is human rights-based, open, inclusive and transparent.

Industry standards also continue to be relevant in establishing good practice for governance. Given the central role played by industry in developing and implementing AI, industry associations and multistakeholder working groups can help inform more nuanced and effective governance approaches by sharing key learnings.

Whatever form international governance eventually takes, it is imperative that it not be shaped solely by global North leadership, but instead have the active engagement of a range of global South actors, including governments, companies and civil society at large.

By Maria Paz Canales, Head of Legal, Policy and Research at GPD
