Imagine that you are considering the purchase of your first self-driving car. You anticipate the benefits of sensors and steering that avoid accidents, conserve energy and keep you in contact with emergency personnel should you need help. You unlock the door, settle into the driver’s seat and are about to engage the ignition when a question pops into your mind: “Is it really safe?” To answer that question, we first need to understand that the car is not being controlled by Artificial General Intelligence, or “AGI,” where machines are basically smarter than us. That technology is still some years off. Instead, when it comes to self-driving cars, we are talking about Artificial Narrow Intelligence, or “ANI”: smart algorithms that make decisions based on the data they receive from sensors situated throughout the car. It is therefore not surprising that the biggest data miners are also the biggest promoters and players in the self-driving car market. The sensors are continuously tested and should be reasonably safe. Like the “autopilot” in today’s jet planes, self-driving cars have redundancy built in: if one sensor breaks down, another takes over its functions. The systems are essentially “fail-safe,” and humans can take over at any time.
Given the intense safety testing of self-driving cars and their legacy from airplane “autopilot” technology, we probably do not have to worry about their algorithms. Or do we? What matters here is which data the car collects and how its algorithm processes that data—just as in a search engine or shopping site. The essential question becomes not “Is it safe?” but “What are the values that inform the algorithm’s decision making when it processes data?”
The complexity of the question posed above is best explained by the following example. A driverless car gets into a situation where it cannot stop and must decide whether to hit an old man, a young mother or a cat, or brake so hard that the car’s passengers are endangered. What path will the algorithm tell the car to take? That depends on the respective weights of the data points enshrined in the car’s algorithm. How the car responds becomes a matter of ethics. Who decides which factors are the most important in determining the risks and probable outcomes of each possibility—injuring the old man, the young mother, the cat or the passengers? If the algorithm’s programming is most influenced by the car manufacturer’s interests, the safety of the passengers is likely the priority. What follows might simply be decided by what is known as a “random decision generator”: the algorithm eliminates the passengers as an option, and the random decision generator is left to choose among the others.
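The “random decision generator” described above can be sketched in a few lines of Python. This is a purely illustrative toy, not any real vehicle’s code; the option names and the manufacturer-friendly “protected” rule are assumptions drawn from the example:

```python
import random

# Toy sketch of the "random decision generator" from the example above.
# The manufacturer-friendly algorithm first removes the passengers from
# the list of options, then picks blindly among whatever remains.

OPTIONS = ["old man", "young mother", "cat", "passengers"]

def random_decision(options, protected="passengers"):
    """Drop the protected option, then choose uniformly at random."""
    remaining = [o for o in options if o != protected]
    return random.choice(remaining)
```

Whoever the car ends up hitting, the choice is arbitrary: no data, no values and no ethics enter the decision at all.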
In a time of lightning-fast data exchanges, perhaps we can implement a better solution than the random decision generator. A sophisticated algorithm can recognize the situation and the variables involved in it. It then requests all relevant data and, once received, transforms the information into positive and negative merit points. From these points, it calculates its instructions to the car.
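Such a merit-point scheme might look like the following sketch. Every candidate, factor and weight here is an invented assumption; the essay’s point is precisely that we do not know the real ones:

```python
# Toy sketch of the hypothetical "merit point" algorithm described above.
# Positive points protect a candidate; negative points expose them.
# All factors and weights are invented for illustration only.

CANDIDATES = {
    "old man":      {"high insurance": -2, "low life expectancy": -3, "nuclear secrets": +10},
    "cat":          {"low insurance value": -1, "might jump away": -2, "online stardom": +8},
    "young mother": {"poor education": -1, "criminal record": -2, "future Einsteins": +4},
    "passengers":   {"manufacturer liability": +100},  # the maker protects its customers
}

def total_merit(points):
    """Sum positive and negative merit points into a single score."""
    return sum(points.values())

def choose_victim(candidates):
    """Instruct the car to hit the candidate with the lowest total score."""
    return min(candidates, key=lambda name: total_merit(candidates[name]))
```

With these made-up weights, the young mother scores lowest; change one number and the outcome flips. That is exactly the point: whoever sets the weights decides who gets hit.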
In our case, the algorithm might receive data showing that the old man carries the highest insurance of all involved and would be the one most able to absorb the risk and damages. His health data and life expectancy might result in negative merit points, as his loss to society would have the least impact. However, not all is lost: the algorithm also receives data on the old man’s status as a bearer of top-secret governmental nuclear secrets. His life or death has national security implications, so he may live!
What about the cat? It has little insurance value. It is just a cat, and it can be reasonably argued that a human life is more valuable than that of an animal. The cat might literally spring into action, using its natural agility to miraculously jump out of the way of the car, a possibility that ironically results in negative merit points. But the algorithm also receives data on the cat’s status as an online video star with millions of followers. If it were killed, not only would the owner suffer a huge monetary loss; a considerable number of fan suicides are also predicted with indisputable algorithmic accuracy. Purr on!
Who does that leave as a potential accident victim? The young mother? She has a lot of negative merit points: her education is poor, she is unemployed, she has a criminal record and she smokes. Further, the data suggest that breast cancer rates are high in her family. Her life expectancy (assuming the cat does not have too many of its nine lives left) and the costs to society of caring for her kids if she dies are factors that speak for her. And besides, doesn’t her genetic data show that she may be the mother of several Einsteins? She might just have enough merit points to make it.
The last option is risking the car and its passengers. Over my dead body!
This example shows us in a drastic way that we have become mostly unaware and passive bystanders in the digital realm. We do not know what data has been collected about us to make crucial decisions about our lives. Nobody asked us for permission, let alone our ethical values. The imperative of the monetary gain of our digital overlords has decreed our new role as mindless and soulless packages, no different from the parcels in the back of a delivery van.
The question becomes, “Who assigns specific values to national safety, popularity, parenting, emotional support and likely future innovation?” Looking closer, what seems at first an extremely complicated and ultimately unsolvable ethical conundrum is something individuals and societies have learned to resolve every day.
When we move around, using whatever mode of transportation, we accept a set of rules that have been pre-determined by a policy-making process influenced by the history of all travelers. Nearly all aspects of traffic, from the way we cross a street to the level of exhaust we allow our vehicles to emit, are regulated. These regulations are guided by public policy concerns and social values. There will always be those who try to thwart these rules (VW’s “clean” diesel), but the rules are generally observed, allowing some space for individual interpretation; driving 85 miles an hour on the New Jersey Turnpike, for example, is a time-honored custom. But that is a topic for another essay.
The rights and duties of global digital citizens, and the digital tools they invent and use, should be regulated and governed in the same way as other aspects of public safety. Who is a global digital citizen? Anyone who employs technology as part of their daily living. This means everyone, even those who are not connected to the internet. Individuals without internet access may still be subject to technology policy as they interact with their governments, seek social support or medical assistance, or have any other interaction where data is collected, stored and used to make decisions. We are all global digital citizens, including the prospective purchaser of the self-driving car.
Global Digital Citizenship, or GDC, is an ethical construct that combines the values of openness, exchange and co-operation, and technical and ethical integrity, with the rights and duties of the physical and digital residencies of citizens. GDC provides us with basic guiding principles for moving forward, including, but not limited to: the fundamental right of digital users to have full control over their data; unrestricted innovation that does not infringe the rights of other digital users in any way (the “Do no harm” imperative); full personal responsibility of innovators for the actions of the applications and devices they introduce; and the right of digital users to be forgotten.
Our digital user experience shows us that we cannot expect the fundamental rights of those involved in cyber traffic to be respected. The ethics practiced by current-day algorithms do not take the common good, or even basic human rights like privacy and security, into account, nor do they try to avoid harm. To determine the values that inform their ethical decision making, we need look no further than the ethical values of the business models of those responsible for programming the algorithms. The algorithms’ ethics are informed by the overriding principle of protecting and enforcing the interests of their creators, so the probable command given by the algorithm of the driverless car in our example will be: Exterminate them all! Better to have no witnesses who can give evidence, ask questions and sue us.
To survive as citizens of the concrete and digital worlds, we need rules and regulations derived from open, accessible and fair policy making that takes the interests of all stakeholders into account. We also need reasons to believe that stakeholders will respect the rules and that those who violate them will be punished. Doing business depends on mutual trust and respect. Efforts to replace these fundamental values with dependencies, customer manipulation and exploitation, through the deployment of digital tools such as data mining and false news, can only be counteracted through empowerment and capacity building for all digital users.
Should a passenger have the right and opportunity to override an algorithm’s decisions for any reason? Ethical decision making is not always based on and motivated by rationality. A passenger who feels ultimately responsible for the situation might decide that it should be the passenger who suffers the harm rather than harming others. Maybe the passenger prefers to stop for cats, but not for old people? Should we make the ethical-consensus algorithm, or the random decision generator, our “deus ex machina,” or should our personal freedom decide and bear responsibility for the consequences?
From the point of view of the data mining companies that provide the data for the self-driving car, the question of personal liberty and free decision making is irrelevant, even impertinent. They collected, and now own, our data and everything that can be derived from it. In their view, there is no “bad” data or data beyond their reach. The more data there is, and the higher its quality, the more accurate and “better” their predictions and decisions become. “Better,” we note, for them and their customers, not for those who became the unaware and unintentional data providers (you and me), who are now exposed and must live and die with the decisions and actions derived from it. We have become disenfranchised from every stage of the process: collection, rendering, use and outcomes.
There may come a time when self-driving cars become mandatory by law because statistics show their beneficial impact. Similarly, stakeholders in the cyberworld, particularly governments, may determine that they need to regulate the activities of digital users by algorithms, as this would protect digital users and serve the common good. This could be predicated on the assumption that such algorithms have been developed through an ethical evolution necessitated by technology itself. If we go down that path, are we ultimately denying ourselves the fundamental human rights of freedom of choice and the pursuit of happiness? The answer is a frightening “yes,” as digital users are reduced to subjects of algorithms that are completely outside their control. The free-thinking citizen has three options: conform, resist, or leave the planet! GDC always includes the integrity of all personal freedoms within the framework of rights and responsibilities. We may refer to this as digital integrity. The fact that we have not established these basic rights in the Cyberspaces is worrisome.
We are navigating the Cyberspaces via our connected automobiles, whose least concern is our wellbeing, and whose manufacturers are loath to accept responsibility for the actions and consequences of their devices.
The applications in the Cyberspaces make us a lot like passengers in driverless cars. Data mining provides the input that algorithms use to steer us. False news stories are the billboards on the side of the road that tell us how to get to the things we are supposed to like. Firewalls prevent us from taking roads that others have deemed too dangerous for us. We put our wellbeing into the hands of actors whose motivations and decision-making processes we are unaware of and excluded from. We are asked to trust without having any basis on which to anchor our trust. We know that we are not trusted, as before we can use the app, we must press the “accept” button for terms and conditions whose consequences we are unable to interpret or fully understand. We are asked to get into the driverless car; we will make a journey, but it is not we who oversee our destination!
The manipulative and cynical relationship between provider and user reflects the pattern of disenfranchised policymaking that centers on arcane interest groups and power elites who gather to make consequential decisions on how the internet operates. Digital users are asked to put their trust in governance mechanisms they are not aware of and are unable to influence. There are “multistakeholder” dialogues taking place throughout the globe of which most of the globe is unaware. The fundamental question becomes, “Who can play such a pivotal role in legitimate Internet Governance?” This question might be premature. Before internet governance mechanisms can be put in place, there must be widespread empowerment of all digital users through awareness and capacity building. This empowerment needs to be initiated and maintained by enthusiastic representatives from all digital stakeholder groups whose motives are guided not by special interests but by the knowledge of shared responsibility for the common good. With the growth of awareness and capacity building will come a growth of rights and duties based on digital dignity. As digital dignity becomes more widespread, the global digital citizenship will be able to decide on and implement inclusive and just internet governance mechanisms.
Let’s get back to the driverless car analogy. Today’s digital users are forced to be passengers in driverless cars whose origin they are unaware of and whose destination is uncertain. They are prevented from putting their heads under the hood to see how things are made and how they work.
What happens when the driverless car crashes? A prominent social-media site recently had to admit that it shared and sold private user data without any customer protections in place. At first, the company denied any wrongdoing, stating that everything was legal, as users had explicitly accepted these practices when they agreed to the company’s terms and conditions. Faced with ever-mounting criticism, the company’s CEO went on a tour of the capitals of the world, adopting a “big brother’s clever little sister” approach and assuring everybody that they had learned a valuable lesson and that the company had begun to implement more data boundaries and better controls. The CEO’s tour was followed by a global media campaign with the same message. But, besides the inconvenience to the CEO’s schedule (there is still an empty chair with your name on it in the British House of Commons, Mr. Zuckerberg!), nothing changed; business as usual, and even business practices worse than before, continued! The company counts on the digital ignorance of its users and the political leadership, and on their inability to ask the right questions.
In another example, some domain name resellers engage in exploitative business practices that can endanger the stability and security of the DNS. Some resellers sell privacy protection to domain name registrants—those who own domain names. If a registrant declines this protection, their private contact information is sold to direct marketers of products such as web design and hosting, and the registrant receives hundreds of unsolicited emails and phone calls. Registering a domain name can become a life-altering, traumatic experience. Every new registration becomes a choice between avoiding registration at all costs, registering and paying the bully, or keeping your dignity and suffering the molestations.
The DNS is in danger, not because there is anything wrong with the technology or the industry that runs it, but because data miners want to control the DNS in order to access more data more freely. It is a goldmine for them, and the functions, security and stability of the DNS can only be a secondary concern. Did the IANA transition open the door for the data miners to take over the DNS by removing the last governmental oversight of the DNS and placing it in the hands of ICANN and its community? Can ICANN.org avoid being usurped by data miners? Can the ICANN community avoid the undue influence of those in its midst who represent the interests of the data miners? Can the domain name industry withstand the temptation to become data miners themselves (as some of their telecommunications friends and neighbors, such as Verizon, did)? Will the DNS itself avoid becoming a data mining operation because that is its most profitable use? The answer to all these questions is: Not without empowered digital citizens!
All these questions lead us to the most worrisome question of them all: Has it already happened, and we are just not aware of it?
Companies might think that they will get away with these practices, but as in all situations where people’s free will and rights are oppressed, there will come a point where the disenfranchised make the pressure cooker explode. The pressure inside the Internet ecosystem is rising. The alarm whistle on top of the pressure cooker has gone off and is getting louder and louder. It is not too late to take the pot off the flame by implementing widespread, cross-sectoral digital integrity as embodied by GDC. There must be a call for a greater discussion, with reasonable implementation, that balances privacy and security.
As for the purchaser of the self-driving car, all she really wants to do is drive around with all the safety support she can get, but with the ability to keep her hand on the wheel, kick the tires and occasionally stick her head under the hood.
Happy Christmas everyone!