
AI Has No Time for “Human” Rights

Popular media have recently reported a White House initiative asserting companies’ “moral obligation” to limit the risks of AI products. True enough, but the issues are far broader.

At the core of the debate around AI—will it save us or destroy us?—are questions of values. Can we tell AI how to behave safely for humans, even if in the future it has a “mind of its own”? It is often said that AI algorithms should be “aligned with human values.” What instructions, then, should be given to those who write the algorithms? As of now, there is nothing approaching a consensus on what constitutes “human” values. Are they the goals of human desire determined by social science? Are they common behaviors rooted in evolutionary biology? A summation of the wisdom of faiths and beliefs over time? Or should AI just be allowed to learn them on its own by observing human history and behavior? Equally important, who gets to decide? At best, a good-faith process might result in a lowest common standard of “human values.” There is, of course, no obvious reason to believe that “AI values” will be the same as “human values.”

In addition to efforts to conform AI to “human values,” there are widespread attempts to provide AI with rules to govern specific behaviors in particular situations, which by analogy to human behavior is called “AI ethics.” The very idea of “ethics” assumes a degree of intentional choice on the part of the actor, so until AI achieves some degree of conscious personhood and autonomy, use of the word “ethics” in AI is largely metaphorical. For now, it is about rules built into specific algorithms for decision-making in particular contexts, which will reflect the values of the humans doing the programming. But human ethics provides no easy answer.

Ethics comes in many schools and disciplines. There are at least twenty different ethical approaches, with normative ethics based broadly either on the outcome of an act or on the inherent value of the act itself. Among them are, in no particular order: utilitarian, situational, virtue, duty, evolutionary, care, pragmatic and post-modern ethics. As the sketch below illustrates, different ethical approaches may lead to different outcomes in the same situation. So it will be quite important to train an AI how to behave, particularly in ethically ambiguous situations where decisions might have to be quick and decisive—with great harm to persons and property in the balance. Some ethical approaches (e.g., utilitarian, situational, and pragmatic ethics) may have outcomes that leave some people worse off by justifying a greater good or arguing special circumstances.
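To make that concrete, here is a minimal, purely illustrative sketch in Python. It is my own toy construction, not anything proposed in this post or drawn from any real system: two simplified decision rules, one outcome-based and one duty-based, face the same scenario and choose differently. Every name and number in it is a hypothetical.

```python
# Toy sketch only: a hypothetical scenario scored under two rule sets.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    lives_saved: int      # aggregate benefit of choosing this option (assumed)
    violates_duty: bool   # True if the act itself breaks a moral rule (assumed)

def utilitarian_choice(options):
    """Outcome-based rule: pick whatever maximizes aggregate benefit."""
    return max(options, key=lambda o: o.lives_saved)

def duty_based_choice(options):
    """Act-based rule: first rule out any option that violates a duty,
    then choose the best of what remains."""
    permissible = [o for o in options if not o.violates_duty]
    return max(permissible, key=lambda o: o.lives_saved)

# One scenario, two verdicts.
scenario = [
    Option("divert harm onto one bystander", lives_saved=5, violates_duty=True),
    Option("do not intervene", lives_saved=1, violates_duty=False),
]

print(utilitarian_choice(scenario).name)  # -> "divert harm onto one bystander"
print(duty_based_choice(scenario).name)   # -> "do not intervene"
```

The point is not the code but what it makes visible: the same situation yields opposite decisions depending on which rule set the humans doing the programming chose in advance.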

In this context of conceptual confusion, where are human rights? It seems clear that human rights are not the same as “human values” or “AI ethics.” Yet core “human rights” are also a complex social construction, deemed by the United Nations in 1948 to apply equally to all humans at all times in all places. They are more extensive and specific than “human values” and, in principle, more grounded, unequivocal and inflexible than ethics. Not everyone agrees on them, some seeing them as simply an embodiment of western liberal political values. To the extent UN bodies have commented on AI, they have suggested that human rights be broadly construed as guidelines or aspirational goals.

Right now, human rights are not only not at the center of the policy debates on AI, they are barely in the conversation. It is not yet clear how AI can or will engage with values, but it is critical for the future that human rights be salient in AI training, so that AI supports, rather than conflicts with, them. That conversation should be public and transparent in multiple forums because it affects every human being, and it needs to happen now!

  1. The above post was previously published on the IITF Human Rights in the Digital Domain Blog.

By Richard Taylor, Palmer Chair and Professor of Telecommunications Studies and Law Emeritus, Penn State University


Comments

Mark Datysgeld  –  May 15, 2023 10:34 AM

Professor Taylor, thank you for the thoughts.

What I have been observing from mainstream generative AI companies is a concern with placating western English speakers on controversial questions, as that is where they feel the financial pressure that could impact them negatively would come from.

However, a lot of the open source development of the technology is being produced by developers from the Global South with different concerns and ideas… in my opinion, this is rapidly creating a rift between what commercial and open source generative AI looks like. Perhaps it’s something worth looking into in the future.

Best,

Richard Taylor  –  May 15, 2023 10:43 AM

Hi Mark, Thanks for your thoughts. I understand OpenAI is a hot topic right now, and I think The Global South may play an important role in the U.N.'s Digital Compact discussions. Different visions of AI may be a "wild card". Richard

Bridging the rift  Klaus Stoll  –  May 16, 2023 1:22 AM

If a rift between commercial and open source generative AI is developing, we should try to find answers to two questions: 1. Why is that such a bad thing? And 2. What can be done to bridge the rift?

Please allow me to add one further comment: There is a lot of talk about AI but not much knowledge about AI in the general digital user community. There is a huge need for general awareness and capacity building. Part of this needs to be awareness of the values on which the new AI is actually based.

Richard Taylor  –  May 16, 2023 5:37 AM

Hi Klaus,

I agree—the professional community needs to be up to speed, but equally the public of all ages needs “AI literacy” information, including the value choices, both implicit and explicit, and who decides on the values. Somewhere there are human judgments behind them, which can be contested across different world views. Human rights need to be part of that education and conversation.
Richard
