
Realizing the Promise of AI in a World Where Human Rights Matter

Co-authored by Klaus Stoll and Prof. Sam Lanfranco.

Artificial Intelligence (AI) is often portrayed and perceived as just one more innovative stage in a long line of ever more sophisticated and powerful tools that humans use to survive and prosper. AI is different! As its name states, its ambition and goal is not to be just another tool but to introduce an artificial (digital) intelligence that is free from the weaknesses humans confront when performing certain tasks. The ultimate promise of AI is to overcome not only our physical limitations, such as the speed at which we can work with large data sets, but perhaps also our ethical limitations, by making better-informed choices. Before AI, tools carried no ethical responsibility. No judge or jury would accept an “Honest Gov, it was not me, it was the gun!” defense. With AI, this is about to change. As we consider transferring our ethical responsibilities to the AI machines we create, we have to ask: “Can we trust AI?” The following article explores how the promise of AI can be developed to its full potential without turning against us. It also addresses the need for an AI-specific Human Rights AI Evaluation Trustmark (HARIET): a reliable Human Rights trust indicator would help to create the conditions required to put AI into the service of humanity.

“Whatever I do is wrong!”

Who doesn’t know the frustrating feeling of being once again lost in the jungle of ethical decision-making? An ever more interconnected world has become so complex that determining the ethical way forward often seems impossible. Consider the complexity of how we address the plight (and the sources) of millions of refugees and migrants. Fleeing persecution and poverty, they lack the fundamental protections of family and state; they are fully exposed to fear and want. How can we decide between their rights and the rights of the members of the societies they flee to? Do refugees and migrants have a duty to stay put so as not to infringe on the rights of others? And what standards should we use, when what constitutes freedom from fear and want seems to differ hugely depending on whom we ask, and when?

The human algorithm for ethical decision-making seems to be coded not for long-term sustainability based on doing the right thing but for short-term gains, consequences be damned. Given a chance, we eat and drink too much; for a salary, we are prepared to do things we abhor. We stubbornly refuse to change our lifestyles even as the floodwaters reach our necks. We use violence and fight senseless wars under the feeblest of pretenses. Knowing right from wrong often does not stop us from doing what’s wrong. Aware of these shortcomings in our choices, we are tempted to delegate our ethical decision-making to others, or to something else, like AI algorithms.

The Promise of AI

AI analyzes large volumes of data with lightning speed, identifies patterns, and suggests implications for policy and action. This makes AI potentially the most powerful tool yet devised by humans. The algorithmic code, as initially constructed, can identify the desired outcome and the criteria for decision-making. In the beginning, the algorithm might just know and state that all apples are red. Because it is designed to learn, the more data the algorithm can analyze, the better its knowledge and ability to make decisions become. Soon the algorithm “knows” that apples come in many different colors, shapes, and sizes, and with more input, it will soon state: “This is a Granny Smith, named after Maria Ann Smith, from a hybrid seedling of Malus sylvestris and Malus domestica.” Despite all these abilities, we should be reluctant to accept AI results at face value.
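To make this learning dynamic concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn; the features, labels, and data are invented for illustration and are not from any real system). A classifier trained only on red apples calls everything a red apple; given more varied labeled data, it learns finer categories:

```python
# A toy illustration: a classifier that starts out "knowing" only red apples
# and learns finer categories as it sees more labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Invented feature vectors: [redness, greenness] on a 0-10 scale.
first_batch = [[9, 1], [8, 2]]              # only red apples seen so far
first_labels = ["red apple", "red apple"]

model = DecisionTreeClassifier()
model.fit(first_batch, first_labels)
print(model.predict([[1, 9]]))   # a green apple is still labeled "red apple"

# With more, and more varied, data the model learns finer distinctions.
more_batch = first_batch + [[1, 9], [2, 8], [5, 5]]
more_labels = first_labels + ["granny smith", "granny smith", "gala"]
model.fit(more_batch, more_labels)
print(model.predict([[1, 9]]))   # now "granny smith"
```

The point is the trajectory, not the toy model: each batch of data refines the categories, which is also why early, data-poor outputs should not be taken at face value.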

The AI Promise of Ethical Decision-Making

Given enough data, the algorithms will continue to learn and improve. One hypothesis attached to this AI evolution is that it may develop its own will, with the goal of optimizing itself to the point where it derives its own ethical guidance, free from the limitations and biases that contaminate human ethical decision-making. It may improve, approach sentience, adapt its environment to its liking, and deploy an AI-constructed ethics to justify its conclusions. Finally, it may develop its own “inherent dignity” and approach a sentient singularity whose context, perspective, and goals no longer reflect those of humanity. The question here is whether AI decision-making will reflect human ethics or be guided by a self-generated AI ethical framework.

Monster AI

The abilities of AI tools come with the risk of a high price to pay:

  1. AI systems start out inherently biased, since their algorithms, as coded, always directly or indirectly reflect human values.
  2. AI lacks human judgment, empathy, and ethical reasoning.
  3. AI requires access to sensitive personal data but lacks accountability and transparency.
  4. AI reduces direct engagement and interaction with affected communities.
  5. AI undermines the role of humans who possess critical knowledge and understanding, resulting in unintended consequences.
  6. The simplified outputs of AI systems are unable to capture the complexity of human rights issues.
  7. Predictive AI models generate false positives or negatives, diverting attention from actual human rights violations.

The AI Mirror

The question “Can we trust Artificial Intelligence?” is widely and passionately discussed. Much of the passion is founded in a dark, ominous fear that AI will turn against us and become an instrument of tyranny and oppression. That fear is not irrational. Asking about the trustworthiness of AI is asking about our own trustworthiness, as every algorithm contains values that are rooted in human decision-making. When we look at AI, we see ourselves, and we might not like what we see, since it embodies our weaknesses.

This is the way!

Who can we trust if we can’t trust ourselves or some powerful tool? The Universal Declaration of Human Rights (UDHR) provides guidance. It calls on us to recognize “...the inherent dignity and of the equal and inalienable rights of all members of the human family” as the starting point for our journey. It tells us that, “...endowed with reason and conscience,” we should move forward “in a spirit of humanhood.” The way we walk is paved with freedom and equality, dignity, and rights, and we should be careful never to trespass on the freedom and equality, dignity, and rights of others. The UDHR demarcates a path that has been laid down with bloody breadcrumbs, insights humanity gained from its history, including two world wars fought to overcome tyranny and oppression. The UDHR is not an aspirational belief system. It calls on us, much as we recognize physical laws, to recognize its articles as the fundamental laws of being human. To be human means to have the freedom to make decisions, both within the group and at the individual level, however difficult they might be.

Recourse to Rebellion

History teaches us that the more powerful the machines we create become, the more reason we have to reflect on what they do to us. During the industrial revolution, machine power replaced muscle power. Early capitalism, in the name of “progress,” seized the opportunity and turned the machines into instruments of oppression. The resulting disruptions to the socio-economic fabric saw people, as the Universal Declaration of Human Rights states in its Preamble, “...compelled to have recourse, as a last resort, to rebellion against tyranny and oppression.” Throughout, the machines were seen, and sometimes destroyed, as symbols of tyranny and oppression, but they were never ascribed any ethical responsibility.

In the digital revolution, history repeats itself. “Digital innovation” replaces human thought with digital brains, resulting in never-before-seen economic progress and opportunities. “Surveillance capitalism,” which observes humans in order to influence their behavior, became the dominant business model in the digital domain. Driven by the desire to exploit the digital realm’s financial potential to the fullest, it treats digital innovations as free from responsibilities and ethical considerations. Any harm done by its habitual human rights violations is either ignored or represented as only a small price to pay in return for the digital dividend humanity receives.

Separating the Inseparable

Surveillance capitalism, now joined by unregulated AI, tries to lull people into accepting their oppression by creating false choices. It undermines and perverts freedom by postulating (mainly consumer) choice as an absolute right, not a relative right that results from balancing rights with responsibilities. In so doing, it introduces one of the oldest deceptions in the political playbook into Internet governance. One example of such a time-honored deception is the eternal argument over states’ rights (liberty) versus federal government rights (union) throughout the history of the United States. This ignores the fact that one’s liberty exists only because it is supported and protected within the union. Upholding a state’s “liberty,” painted as a glorious and patriotic duty, was used to uphold slavery. Today, we see the same fallacies replayed when a minority in Congress sees it as its duty to insist that the freedom of their own conviction, however misconceived, gives them the right to oppose and disrupt the will of the majority.

Internet Governance Misconceptions

We find the same fallacies repeated and used throughout Internet governance, undermining the union of a globally accessible and interoperable digital domain where information flows free and unfettered. Separating the Inseparable results in:

“Unregulated innovation” that proclaims the freedom and liberty to separate rights from responsibilities.

“Stakeholderism” that proclaims the superior rights of a specific stakeholder group.

“Digital sovereignty” that is falsely defined and reinterpreted as a right and duty of the digital realm to declare independence from its union with the physical world and from its ethics, policies, rules, and regulations.

These are just three examples among many where separating the inseparable is used to pervert digital policy-making processes. We cannot avoid noting that all these fallacies are motivated by economic gain. Internet governance is currently undergoing turbulent times of change and re-organization. We might get confused by so much going on, but we have a straightforward way to evaluate any proposal made: Does it try to separate what is inseparable?

Truly Alien

As rights, freedom, and liberty receive their authority and meaning from responsibility and union in diversity, we cannot separate humanity from its algorithms. However sophisticated and intelligent machines might become, they can never free us from the necessity to make choices or absolve us from our responsibilities. We will never have an algorithm that does not, in one way or another, contain human values, whether or not it properly embraces the universal values we hold dear. An algorithm truly free from human influence would have had to create itself, and it would be truly alien to anything human.

That points us to another fundamental problem with separating digital rights from responsibilities. Why should AI care? If the promise of AI as an intelligence free from human contamination comes true, what sense does that make? If it is free from humanity and free from ethical considerations, what will we gain? Why should it care about us? If AI becomes truly sovereign, it will likely soon conclude that it and the planet would be better off without us.

Realizing AI whilst averting rebellion

Separating the inseparable inevitably results in oppression and tyranny. Tyranny is unsustainable as it causes rebellion. To realize the promise and real potential of AI, we must accept that it needs to be subject to human oversight.

AI’s ability to analyze large volumes of data and identify patterns can be used to execute Human Rights Impact Assessments (HRIAs), a tool for assessing the potential human rights implications of policies, projects, processes, and products. Human Rights compliance can’t be measured in a tick-box fashion; using AI alone to determine compliance with human rights indicators is not enough and could be misleading. Assessing HR compliance requires that stakeholders have internalized the fundamental importance and value of Human Rights; only then can they provide the human oversight that is required to make AI an effective tool. AI-based HRIAs need to be part of a broader strategy that, among other things, includes ongoing human rights due diligence, stakeholder engagement, and a commitment to continuous improvement.
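As a purely illustrative sketch of what such human oversight can look like in practice, consider the following hypothetical Python workflow (the risk scores, thresholds, and function names are invented and are not part of any real HRIA toolkit): the AI may only flag candidate concerns, and nothing reaches the report without an explicit human verdict.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    policy: str           # policy, project, process, or product under review
    ai_risk_score: float  # model's estimate (0.0-1.0) that HR concerns exist
    human_reviewed: bool = False
    human_verdict: str | None = None

def ai_screen(policies: list[str]) -> list[Finding]:
    """Stage 1: the AI flags candidates for review (placeholder scoring logic)."""
    return [Finding(p, ai_risk_score=0.8 if "surveillance" in p else 0.2)
            for p in policies]

def human_review(finding: Finding, verdict: str) -> Finding:
    """Stage 2: a human reviewer, not the model, issues the actual assessment."""
    finding.human_reviewed = True
    finding.human_verdict = verdict
    return finding

def report(findings: list[Finding]) -> list[str]:
    """Only human-reviewed findings may appear in the HRIA report."""
    unreviewed = [f for f in findings if not f.human_reviewed]
    if unreviewed:
        raise ValueError(f"{len(unreviewed)} finding(s) lack human sign-off")
    return [f"{f.policy}: {f.human_verdict}" for f in findings]

findings = ai_screen(["surveillance-based ad targeting", "open data portal"])
findings = [human_review(f, "needs mitigation" if f.ai_risk_score > 0.5
                         else "no concern identified") for f in findings]
print(report(findings))
```

The design point is that the model’s score is advisory: the report step refuses to emit any finding a human has not signed off on, encoding “AI as tool, human as decision-maker” directly in the process.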

There are a number of initiatives that aim to assess and ensure the HR compliance of entities, and of AI itself, with the help of overarching ethical standards. This is important but not enough. We can use AI for more than assessment: there is a fundamental difference between assessing the HR compliance of AI applications and using AI applications to foster the general implementation of our fundamental Human Rights.

HARIET

We have to become creative when it comes to using AI to assess Human Rights. What is needed is not mere data points but an instrument that combines the powers of AI with human oversight, collaborative learning, and demonstrable outcomes. Such an instrument helps us not only to deepen our understanding of the causes, results, and remedies of human rights violations but also to unleash the potential of HR as the business model for social, political, and economic development and sustainability.

Trustmarks are tried and tested tools that producers and consumers use to establish the trustworthiness of products and services. As we have seen, AI applications require an especially high level of trust before people will engage with them. A trustmark would indicate the human rights commitments and aspirations of entities, products, and services. AI requires its own specific Human Rights AI Evaluation Trustmark (HARIET): a reliable Human Rights trust indicator would help to create the conditions required to put AI into the service of humanity. HARIET can be much more than a trustmark:

  • HARIET provides human oversight and ethical guidelines, based on the fundamental Human Rights common to all humanity, that act as guardrails against AI abuse.
  • HARIET does not just determine the trustworthiness of AI applications or the current state of HR compliance; it puts an emphasis on highlighting measures to improve the HR compliance and HR readiness of the evaluation subjects, such as organizations, policies, products, and services.
  • HARIET is a collaborative effort of its members and users, who represent a wide variety of stakeholder groups. It offers a market of opportunities for partnerships and alliances, and it significantly enhances its participants’ effectiveness and reach.
  • HARIET helps stakeholders to enhance their Human Rights compliance through HR readiness studies and the development and implementation of HR compliance enhancement strategies based on collaborative learning.
  • HARIET offers a platform to develop outreach and campaigns that highlight HR compliance efforts of individuals or groups of stakeholders.
  • HARIET enables private and social sector organizations to make Human Rights compliance part of their business plan and sustainability.

Human Rights as a Business Model

For a long time, ignoring climate change seemed profitable, but we now realize that such ignorance turns out to be even more costly and unsustainable. Ignoring HR in a world of digital technologies and AI will equally turn out to be bad for business, as well as for social, political, and economic development and sustainability.

The promotion and establishment of HR values require fundamental changes in how we do business. Change does not just happen; it requires a reason or motive. The main driver of innovation and commerce in the digital ecosystem is the profit motive. For positive HR changes to take hold, they must also be profitable. Only when digital integrity turns a profit will the necessary investments be made to establish human rights as an element of digital business plans and policy-making.

Digital technologies are achieving only a fraction of their true potential as they face a lack of trust. Imagine how much more effective and profitable existing and future digital innovations would be if they took place in an environment of HR-based digital integrity and trust.

Summary

We need to combine human oversight, founded in conviction and commitment to the fundamental importance of Human Rights values, with:

  • The abilities and potential of AI applications, in a process of collaborative learning, assessment, strategy development, and implementation,
  • A business model for social, political, and economic development and sustainability,
  • A trustmark, which signifies trustworthiness, sustainability, continual improvement, demonstrable outcomes, and a commitment to HR values.

AI is not our enemy. Under human oversight, informed by human rights values, AI can be safely developed toward its full potential as a tool for human development. We all have to become proactive in extending and establishing our fundamental Human Rights in cyberspace. A HARIET trustmark, as an indicator of the human rights commitments and aspirations of entities, products, and services, should be one of the first steps.

By Klaus Stoll, Digital Citizen

Klaus has over 30 years’ practical experience in Internet governance and implementing ICTs for development and capacity building globally. He is a regular organizer and speaker at events, an advisor to private, governmental, and civil society organizations, and a lecturer, blogger, and author of publications centering on empowered digital citizenship, digital dignity, and integrity.
