Articles 1-5: Basic and Personal Rights. Co-authored by Klaus Stoll and Prof. Sam Lanfranco. [1]
Digital governance, like all governance, needs to be founded on guiding principles from which all policy making is derived. There are no more fundamental principles to guide our policy making than those of the Universal Declaration of Human Rights (UDHR). This article is Part 2 of a series exploring the application of the UDHR to rights issues in the cyberspaces of the Internet ecosystem. The previous article in the series explored the foundations of the UDHR. [2]
This Part 2 discusses Articles 1-5, which focus on human freedom, equality, dignity and rights. In subsequent parts (to be published here), we will look at the remainder of the 30 UDHR Articles and conclude with an overall analysis of how the UDHR can inform policy development in the digital age. [3]
Article 1: All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. [4]
Birth
We are born to this earth. We had no part in the decision that made us human beings and citizens of planet earth. Today we arrive at birth with a digital identity and the start of the multi-faceted digital personas that are associated with it. Our digital presence may have started before our physical birth, when our parents announced and discussed online our imminent arrival with family and friends. Assembled from a myriad of sources, our digital personas consist of the digital data that is associated with us, each persona constructed by others, for their own purposes. Even for a person who never uses digital technologies, a digital identity and personas exist, based on data collected by means that did not require the person's input. Within the Internet's data cloud resides more than just our deliberate digital actions (purchasing activity, banking, etc.). Our digital personas are built with a blend of observed behavioral data and ambient data that we do not control. [5]
The data cloud that surrounds a human being and the digital persona, generated by algorithms from that data, are separate but inseparable. [6]
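To make the notion concrete, here is a minimal sketch, in Python, of how such a persona might be assembled by simple data aggregation (one of the two composition methods footnote [5] mentions). All data sources, categories, and values here are hypothetical illustrations, not a description of any actual profiling system.

```python
from dataclasses import dataclass, field

# Hypothetical record types; real profiling systems draw on far richer sources.
@dataclass
class DataPoint:
    source: str    # e.g. "retail-loyalty-card", "relative's social post"
    category: str  # "deliberate", "behavioral", or "ambient"
    value: str

@dataclass
class DigitalPersona:
    subject_id: str
    traits: dict = field(default_factory=dict)

def build_persona(subject_id: str, data_cloud: list) -> DigitalPersona:
    """Assemble a persona by simple aggregation. Note that the subject is
    never consulted -- exactly the point the article makes."""
    persona = DigitalPersona(subject_id)
    for point in data_cloud:
        persona.traits.setdefault(point.category, []).append(
            (point.source, point.value))
    return persona

# Even someone who never goes online accumulates ambient data:
cloud = [
    DataPoint("retail-loyalty-card", "behavioral", "buys infant formula"),
    DataPoint("relative's social post", "ambient", "newborn announced"),
]
print(build_persona("person-001", cloud).traits)
```

The sketch illustrates why the persona is "separate but inseparable": it is a derived artifact that lives apart from the person, yet every entry traces back to them.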
Freedom and Equality
There are two fundamental values that are inseparable from our physical and digital personas:
Equal treatment as digital citizens means we must ensure equal access to cyberspace for all [7]. This means the creation of a digital infrastructure that does not favor the rich or privileged, a space where no one is prevented from access because of social or economic conditions. Digital access needs to be viewed much as a public good, with delivery like a public utility. [8]
Equality goes further than technical access. We must ensure that the technologies are not biased and do not favor one user over another because of socio-economic status, age, gender, or other cultural characteristics. This requires equitable access to levels of digital literacy, and the non-discriminatory treatment of data flows on networks. For example, in the Network Neutrality debate, the non-discriminatory treatment of data flows has been argued to be both a basic human right of every digital citizen and essential to keeping the Internet a level playing field for innovators, service providers, and service users. [9]
Dignity
Our dignity is a direct consequence of our freedom and equality. The dignity of a digital persona is violated when control is lost over the data that makes up one's digital personas, personas that link back to one's digital identity. This damage to one's digital dignity results when opaque digital business process algorithms are used to produce digital personas independent of the wishes of the person. With opaque internal parameters that may reflect biases within the algorithm, the multiple versions of one's digital persona can result in abuse. This abuse can be minimized, if not totally avoided, if a person has complete control over one's personal data and over how that data is used.
Rights
Another consequence of our freedom and equality is that our rights are inseparable from our being as persons. How these rights are interpreted and manifest themselves can vary by context and can evolve over time, but the fundamental values expressed in the UDHR that guide that process are unchangeable.
With rights come obligations. [10] Our personal dignity as digital citizens depends on how well we exercise both rights and obligations. Our digital obligations are based on our digital rights and are (or should be) established through proper policy making processes. Under no circumstances should policy making, or its outcomes, negate or violate our fundamental human rights as expressed in the UDHR.
The lack of legitimate and effective policy making processes around digital rights leaves us with limited control over our digital identity and digital personas. The loss of digital integrity restricts the exercise of our digital rights and duties. With surveillance technologies, the Internet of Things (IoT) and AI algorithms, our digital data grows and our personas proliferate. Digital personas created by others are subject to neither permission nor validation. One's lack of ownership of one's personal data violates the most fundamental of human rights.
Reason and Conscience
Article 1 speaks of human beings who "are endowed with reason and conscience and should act towards one another." It expresses what makes us, as users of digital technologies, different from the digital technology itself.
Digital technologies, and especially Artificial Intelligence (AI) [11], can learn, by which is meant an ability to process Big Data quickly and aggregate data for various uses. Such learning has various dimensions, but much is based on pattern recognition, inference or deduction. As powerful as this will be for processing Big Data, it is still far from the ability to reason or to have a conscience. Whatever promising claims AI enthusiasts make, the machines they create, no matter how clever, will never attain true human reason and conscience. They will always lack self-integrity and dignity, and empathy and respect for others.
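To ground the distinction, consider a minimal sketch, in Python, of what such "learning" often amounts to in practice: memorized examples plus a distance comparison. The behavioral features and labels below are hypothetical, and real systems are vastly more elaborate, but the underlying operation is still pattern matching, not reasoning.

```python
# A toy nearest-neighbor classifier: "learning" as pattern matching.
# Features and labels below are hypothetical illustrations.

def nearest_neighbor_label(examples, query):
    """Return the label of the stored example closest to `query`.
    No reasoning occurs: just memorization plus a distance comparison."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: squared_distance(ex[0], query))
    return label

# Hypothetical behavioral features: (hours online per day, purchases per week)
examples = [((1.0, 0.0), "light user"), ((6.0, 5.0), "heavy user")]
print(nearest_neighbor_label(examples, (5.5, 4.0)))  # -> heavy user
```

The classifier "decides" without any notion of why the pattern holds, which is precisely the gap between pattern recognition and reason that the paragraph above describes.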
Automated decision systems and AI are adopted for reasons of efficiency and often touted for their ability to do good. But, as with any multipurpose technology, there is a real and dangerous downside. [12] Automated decision systems and AI algorithms are created in non-transparent and non-consultative ways. The prime stakeholder, the person whose digital identity is being fashioned into personas, is not consulted. Opaque algorithms, with unidentified biases, make decisions that profoundly affect human lives.
It is essential that there be transparency and accountability for that data use and those persona determinations. Without actual oversight, where is the cautionary check to test whether automated decisions are wrong, discriminatory or biased? How are such mistakes corrected before citizens' lives are negatively impacted? Such decision systems, deployed without full consent and information, can and have harmed citizens [13]. Attempts to appropriate personal digital data, in the absence of use transparency and owner consent, must be strongly opposed.
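As one concrete illustration of the kind of cautionary check called for above, here is a minimal sketch, in Python, of a disparate impact audit based on the "four-fifths rule" heuristic used in US fairness auditing. The decision records and group labels are hypothetical; a real audit would need access to the system's actual inputs and outcomes, which is precisely what the trade secrecy discussed next obstructs.

```python
# A minimal disparate-impact audit: compare approval rates across groups.
# Decision records below are hypothetical.

def disparate_impact_ratio(decisions):
    """Each record is (group_label, approved). Returns the ratio of the
    lowest group approval rate to the highest; values below ~0.8 are a
    red flag under the four-fifths rule."""
    counts = {}
    for group, approved in decisions:
        total, approvals = counts.get(group, (0, 0))
        counts[group] = (total + 1, approvals + int(approved))
    rates = [approvals / total for total, approvals in counts.values()]
    return min(rates) / max(rates)

decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 50 + [("group_b", False)] * 50)
print(f"{disparate_impact_ratio(decisions):.2f}")  # 0.62 -- below 0.8
```

Such a check is trivial to run, but only for those who can see the decisions; without transparency, even this elementary safeguard is unavailable.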
The call to test, validate and audit automated decision systems triggers intellectual property concerns. The secrecy around digital algorithms, valuable for increasingly questionable digital business practices, has prevented the scrutiny of business practice applications. This trade secret issue has been used to prevent diagnosing and fixing flawed systems and decision outcomes.
We might bestow authority on digital technologies, and we might grant them decision-making rights, but these will always be exercises of human intent. We will always have to recognize that it is our ability to reason, and our conscience, that are ultimately responsible for the decisions the machines make. Digital technologies cannot be used in ways that escape our reason and conscience. Giving digital technologies rights over fellow humans is an abdication of our responsibilities, deeply fraudulent and highly dangerous.
Spirit of Humanhood
Reason and conscience tell us that despite all apparent differences, we share a joint humanity. We learn early on that we live better when we learn to live together and care for one another. Freedom, equality, dignity and rights are the elements that make up the spirit of humanhood. The challenge before us is to understand how the digital cyberspaces of the global Internet are to be used for good. While protecting our rights, we must learn that cyberspace must have digital integrity for us as humans to have personal and collective integrity.
Article 2: Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty.
Article 2 reinforces Article 1 by stating that everyone is entitled to all the rights and freedoms of the UDHR. No distinctions can be made between persons based on biological factors (race, sex, colour), on culture (language, religion, political or other opinion), or on the circumstances of our birth (nationality, wealth, social status, class). Article 2 further states that equality is also not affected by "the country or territory to which a person belongs."
Based on the principle of separate but inseparable, this is equally valid for a person's digital identity, digital personas, and one's ability to exercise one's digital citizenship. No individual or stakeholder group should be able to deny the rights of other digital citizens with regard to privacy and ownership of personal data, or to base such denial on distinctions of biology, culture, origin, wealth, or country/territory affiliation.
Article 3: Everyone has the right to life, liberty and security of person.
Life
In the context of digital technologies, literal and digital life are co-dependent. Digital life, liberty and security mean access to, and the right to control, one's personal data and its uses. This, in turn, impacts one's literal life.
The right to life in the context of digital technologies consists of the free right of access, our rights as digital citizens with respect to the security, privacy and integrity of our digital being, and our rights with regard to the digital personas constructed by those with whom we interact in business, governance, and society.
Liberty
Within the context of liberty in cyberspace, we need to start with a discussion of access, a term to which we will have to return time and time again.
Access to cyberspace is a fundamental right of every person. It is essential to the exercise of digital citizenship. Free or affordable Internet access (libraries [14], telecenters, cell phones) is the essential foundation for a digital citizen's engagement to protect their literal and digital rights. Access requires an enabling policy environment that supports affordable access.
Policies that restrict a citizen's Internet access (such as termination of "repeat infringers" of copyright, or three-strikes policies) can violate digital citizens' rights to life, liberty and security of person. Access to digital products and services has become integral to daily living and can be a question of life or death. For example, when vital medication is only affordable for a patient through online pharmacies, there are several policy and regulatory areas that require attention if the person's right to life is to be respected. [15]
Trade secret or non-disclosure language in third-party license agreements for automated decision systems or related software poses policy problems. Such provisions prevent access to and testing of the underlying operational methods that determine how decisions about personas and other data outputs are made. The public remains uninformed about how its private data is being used. Any underlying data derived from citizens (whether containing identifiable information or not) should only be available to others with the permission of the person, and when the person is fully informed of the intended uses.
Nobody should be forced to give up their fundamental rights of equality and freedom in exchange for goods and services, or for the ability to receive the entitlements of citizenship and residency (e.g., welfare and succor). [16] The impacts of digital process decisions, for the individual and the community, need to be understood and addressed now, to prevent further social stratification and the creation of new digital poorhouse residents. [17]
Security
Here we must ask: security, safety and protection from what? Operating as a functional and competent digital citizen requires, besides safe and reliable access, security from everything that corrupts or controls one's personal data. Attacks that threaten our digital integrity can result in serious harm to our physical body and literal life. [18]
Article 4: No one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms.
Digital Slavery [19]
While physical slavery and servitude may seem quite distant from our digital presence in cyberspace, digital techniques are being used to violate literal human rights (sex trafficking, child pornography, etc.) and result in human slavery. Digital slavery is at another level, and digital citizenship needs to be protected from it. Digital slavery occurs when, without permission, a person's digital data is appropriated and digital personas are constructed for the particular purpose of influencing or manipulating the behavior of that person. One's digital persona is placed in the service of others, without permission or compensation. The exploitation of personal data, including surveillance and data mining, comprises the digital practices on which the digital slave economy is based. Increasingly, a digital slave model of a manipulated and compliant voter is eroding the structures of democratic governance. The nascent development of the rules of digital governance should both be based on the UDHR and respect those rights in the digital ecosystem.
The right to privacy is an inalienable right of every digital citizen. Personal data must remain under the control of the literal person. Practices that seek access to and use of personal digital data (such as user agreements) require explicit and clear terms of agreement if they are not to become instruments of a digital slave trade, with outcomes resulting in literal or digital slavery or servitude.
A person is subjected to digital slavery when their data is collected, stored, and processed without their consent and/or knowledge. The result is digital control by an entity over the personal data of another person. A digital slave trade takes place when the personal data of digital citizens is traded between entities without the person's consent.
A state of digital servitude (near slavery) is also reached when a digital application’s quasi-monopoly status means that to communicate, interact, or conduct business in the Internet ecosystem, the digital citizen is forced to accept digital slavery permissions as a condition for using that specific digital application.
Digital slavery also exists when government services can only be accessed through digital means that require the person to provide personal data not relevant to the service being sought.
Article 5: No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment.
Digital Torture
It may seem a stretch to address the topic of digital torture, but a brief examination here is instructive. Think of digital torture as deliberate acts to stress one's digital identity and digital presence. A state of digital torture, or of cruel, inhuman or degrading treatment or punishment, exists when digital technologies are used to subject digital citizens to such treatment.
Digital technologies, by their very nature, enable malicious behavior that subjects digital citizens to many forms of online cruel, inhuman or degrading treatment. The ability for mass online communication, often anonymous, is used to spread misinformation with the goal of harassing and persecuting others. [20] The development of digital governance structures will be charged with putting instruments and measures into place that prevent such behaviors.
In this brief review and reflection on the first five Articles of the UDHR, major issues regarding personal digital rights and obligations have been identified. In each case, guidance for good policy and good governance points back to the UDHR as a foundational cornerstone for the definition and elaboration of the digital rights and duties of digital residence (citizenship) in the Internet ecosystem. We suggest that the inclusive properties of a multistakeholder approach to policy development are well suited here.
[1] The authors contributed this article solely in their personal capacity, to promote discussion around the UDHR, digital rights and digital citizenship. The authors can be reached at <[email protected]> and <[email protected]>.
[2] Part 1 is available at: http://www.circleid.com/posts/20191210_internet_governance…
[3] This series of articles is presented a bit like preparing the foundation for a house, where the house is the "house of regulations and rights" in the digital age. An understanding of the desired digital rights, and of the pitfalls of policy and regulation, is required to build a sturdy and relevant platform of digital rights.
These articles are also a contribution to the upcoming 75th UN UDHR anniversary and a start of an Internet ecosystem-wide discussion around digital rights and policy development. Comments are welcomed. (Send comments with "UDHR" in the subject line to <[email protected]>.) Comments will be used to update this digital rights discussion in subsequent articles. The goal is to kickstart progress toward a much-needed International Covenant on Digital Civil and Political, Economic, Social and Cultural Rights.
[4] We note that the language of the UDHR does not conform to contemporary notions of gender-neutral language.
[5] Digital Persona is here defined as: any assembly of transactional, behavioral, and ambient data composed to ascribe characteristics to the person, such persona to be used for economic, political, social or other purposes. Composition may be by simple data aggregation or by using artificial intelligence algorithms.
[6] Incorrect data can have dire consequences in terms of one’s virtual persona, but even correct data can be problematic in a literal sense. For example: A child diagnosed at birth with a serious health defect can face health costs without the benefits of the pooled risks of health insurance. The existence of this data and its use in non-transparent algorithms available to insurers, governments, schools and employers (whomever regulations will allow) will affect every aspect of the person’s existence, from birth to death, and possibly beyond.
[7] This will be discussed further in Article 3 below.
[8] This is an area for policy deliberation. Public utility theory supports large scale infrastructure at regulated prices in order to achieve economies of scale without concentrating monopoly power. Public goods theory assumes additional users at near-zero marginal cost, a situation that is increasingly approximated with cloud storage and 5G technology.
[9] Network neutrality—the idea that Internet service providers (ISPs) should treat all data that travels over their networks fairly, without improper discrimination in favor of particular apps, sites or services—is a principle that must be upheld to protect the future of our open Internet. It’s a principle that’s faced many threats over the years, such as ISPs forging packets to tamper with certain kinds of traffic or slowing down or even outright blocking protocols or applications. See Electronic Frontier Foundation https://www.eff.org/issues/net-neutrality
[10] Digital obligations and digital duties are used interchangeably here.
[11] We differentiate between Artificial Narrow Intelligence or "ANI," basically smart algorithms that make decisions based on the data input they receive, and Artificial General Intelligence or "AGI," where machines are basically smarter than us. Today, most AI is ANI, and AGI remains mainly a dream (or nightmare) about the future.
[12] https://www.aclu.org/issues/privacy-technology/...
[13] Massachusetts learned that big data was not always trustworthy in 2013 when volunteers used Boston’s “Street Bump” app to report potholes. Data indicated more potholes in wealthier areas than poorer ones. The reason for that false result was that wealthier residents were more likely to own a smartphone and use the app. See http://ritaallen.org/blog/.... The Boston pothole example is a cautionary tale. Automated data systems elsewhere in the country have already harmed citizens in critical areas such as child welfare, education and housing. In 2015, Indiana awarded a lucrative contract to IBM to automate the state’s welfare eligibility requirements, which wound up erroneously kicking disabled people off Medicaid. In 2016, Arkansas began using an automated decision system that harmed hundreds of disabled residents by improperly restricting their access to home health care services. Legal Aid sued the state, and the court found such a system unconstitutional and flawed. Even today, the U.S. federal government is promoting rules that are antithetical to the protective goals of H.R. 2701 and S. 1876. HUD is considering adopting new rules that would insulate landlords, banks, and insurance companies from liability for the use of algorithmic models, regardless of the discriminatory consequences. See https://www.eff.org/deeplinks/2019/09/dangerous-hud-proposal-would…
[14] https://www.ifla.org/digital-plans
[15] For example: The “BRUSSELS PRINCIPLES ON THE SALE OF MEDICINES OVER THE INTERNET”, https://www.brusselsprinciples.org/
[16] This is of relevance in the context of AI. See footnote 9 above.
[17] As the saying goes, those who do not learn from history are doomed to repeat it. Our rich history of well-meaning social welfare projects in Massachusetts can be an instructive lesson in addressing the risks from AI and automated decision systems. For example, Boston established the first U.S. poor house in 1662. Over the next two hundred years, poor houses sprouted up all over the country as a way of “managing” poverty. Citizens signed a “pauper’s oath” giving up their fundamental rights, including the right to vote. Although well-intended, poor houses became known for their squalid and inhumane conditions. Families were reduced to “cases” to be managed. The risks presented by automated digital decision systems and AI need to be understood and addressed now, to reduce both digital and literal disempowerment, and to prevent the creation of new digital poorhouses.
[18] As mentioned before. See footnote 3 above.
[19] It is important to note here that the authors are not trying to compare the incomparable and set slavery on an equal footing with digital exploitation. Despite all the suffering digital exploitation causes, the two can never be equivalent, nor are they comparable. Nevertheless, we can take history as a warning and inspiration and use the lessons learned as navigational and interpretive aids.
There are many joint characteristics of slavery and digital exploitation that justify the use of the phrase "digital slavery." Both see human beings as commodities and ignore basic human rights. Both form the basis of an economic ecosystem that justifies exploitation. This is reminiscent of the situation in the US before and during the American Civil War. The South, reaping the economic gains, condoned the exploitation and enslavement of humans while much of the North outlawed the practice. Similarly, today we have corporate and government stakeholders that pursue digital exploitation, arguing it is necessary for digital innovation and prosperity, while other stakeholders argue that such practices are exploitive and should be prohibited.
Both earlier human exploitation and contemporary digital exploitation present economic benefits as a justification, and effectively ignore what should be the labor or data rights of the provider. Similarly, then for anti-slavery and now for digital rights, the growth of leadership to champion the rights to one's digital properties has been slow, as has general engagement in the protection of digital rights. Much of current, almost ad hoc, digital governance represents mechanisms appropriated by special interest groups. These mechanisms give no voice and no power to those whose digital rights are appropriated and whose digital citizenship is compromised.
A cautionary tale here is the era of "Jim Crow" in the post-emancipation United States where, for more than half a century, the proponents of exploitation used the law to undermine emancipation and restore human discrimination. Efforts for emancipation from digital slavery will require similar vigilance. The high stakes demand unrelenting awareness, inspired leadership, and capacity building for good digital governance, integrity in digital business practices, and respectful digital behavior, as well as a strong will among digital citizens, ethical economic alternatives, and political processes that put what's right before what's politically expedient. Today the defenders of digital rights and integrity have the additional strength of the Universal Declaration of Human Rights (UDHR) as a foundation for their struggles to advance digital rights and digital citizenship.
[20] This raises the difficult issues of false news, vaccine denial, and the like.