On February 8, 1996, John Perry Barlow published “A Declaration of the Independence of Cyberspace” from Davos. Inspired by the “Digital Revolution” and the dot-com boom, he predicted a new “home of Mind,” a cyber world without governments. “Governments of the Industrial World,” he wrote, “you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”
Twenty-five years later, we know that Barlow, who died in 2018, was both right and wrong. He was right in predicting a “new world.” But he was wrong to expect that this would be a “home” without governments. Whether Barlow was right or wrong, however, is probably not the most interesting question. More interesting is the process his proclamation triggered, not the projection itself.
To understand the “history of the Internet,” one can go back to October 4, 1957. The “Sputnik Shock” pushed the Eisenhower administration to establish not only NASA but also ARPA, the “Advanced Research Projects Agency.” ARPA operated under the US Department of Defense (DoD) and was tasked with making the United States more resilient against foreign attacks. Both agencies became success stories: In 1969, NASA sent the first men to the moon, and ARPA presented ARPANET, a decentralized communication network. In the 1960s, the RAND Corporation, working together with the DoD, had recognized the vulnerability of centralized communication networks. The idea was to develop a decentralized network that would overstretch foreign adversaries’ capacity if they planned to destroy the communication system. On October 29, 1969, ARPANET connected computers in Stanford, Los Angeles, Santa Barbara, and Utah. For some people, this date is the birthday of the Internet.
1969 was also the year when the Nixon administration invited the Soviet Union to enter into “Strategic Arms Limitation Talks” (SALT) to bring the nuclear arms race under control. This had consequences for ARPANET. The project did not disappear, but it was no longer a top priority for the DoD.
There was an interesting side effect. The graduate students who were involved in ARPANET continued to think outside the box. The idea of a network with no power in the center but knowledge at the edges, a network that enables free communication among everybody, anywhere, regardless of frontiers, was an attractive concept for a new generation which, after the painful years of the Vietnam War, had its own ideas about democracy, freedom, and self-determination. For them, this toy became a tool to build something new that enhanced freedom and went beyond traditional borders. New protocols and innovative applications enabled the emergence of a new virtual world with RFCs, TCP/IP, the DNS, the “@”, the “dot,” and self-organized institutions such as the IETF and IANA. This “new world” was self-regulated by a “netiquette,” based on the concept of individual and borderless freedom and populated by “good guys.” It was not disconnected from the “rest of the world,” but the majority did not really understand what that “network of networks” was about.
The bridge-building to the “rest of the world” started in 1991 with Tim Berners-Lee’s HTTP protocol. The World Wide Web created new business opportunities that triggered the dot-com boom and the vision of a “New Economy.” The Clinton administration (1993–2000) quickly realized that “the Internet” was much more than a “technical toy.” US Vice President Al Gore’s “National Information Infrastructure” initiative (NII) of 1993 recognized the far-reaching economic, political, and social implications.
Barlow was not the first to reflect on the broader implications of the “digital revolution.” In the 1970s and 80s, Zbigniew Brzezinski’s “Technetronic Era,” Ithiel de Sola Pool’s “Technologies of Freedom,” and Alvin Toffler’s “Third Wave” started the discussion. In the 1990s, Manuel Castells’ “Network Society,” Nicholas Negroponte’s “Being Digital,” and Frances Cairncross’s “Death of Distance” were eye-openers. The Silicon Valley “Cluetrain Manifesto” of 1999 took inspiration from the 95 theses of Martin Luther, who had kick-started the Reformation in Europe 500 years earlier. “We reject kings, presidents and voting. We believe in rough consensus and running code,” David Clark had already said in 1992.
In other words: Barlow’s declaration was not exceptionally new. Nevertheless, his statement was a special one. His reference to the US “Declaration of Independence” of 1776 made it much more political. Barlow knew how to use words and talk to people: he had written songs for the rock band the Grateful Dead.
At first, Barlow’s vision inspired many constituencies. I remember a discussion at Harvard where Charles Nesson mobilized the imagination of his audience to recall the moment in Philadelphia’s Independence Hall when the US Constitution was drafted and the institutions of US democracy were designed. “We have now to build the democratic institutions for a digital 21st century,” he said. It was the time when ICANN was seen as a pilot project for “cyberdemocracy” and prepared “global elections” for its Board of Directors. It was “governance without governments.” Decision-making power was in the hands of the providers and users of services. Governments were relegated to an advisory body, the Governmental Advisory Committee (GAC), whose advice was not binding on the Board.
Barlow argued in his declaration: “We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us, nor do you possess any methods of enforcement we have true reason to fear. Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.”
Five years later, reality brought his vision from the Swiss mountains down to earth. In 2001, the dot-com bubble burst, and 9/11 turned the theoretical debate around “cyberdemocracy” into a very practical discussion about “cybersecurity.”
Within ten years, the number of Internet users grew from one million to one billion. The borderless opportunities of the interconnected world were used not only by the “good guys”; they also enlarged the space for criminal activities by vandals, hate preachers, pedophiles, terrorists, money launderers, and other “bad guys.” The new publications had more pessimistic titles: “The Future of the Internet and How to Stop It” (Jonathan Zittrain) or “The Darkening Web” (Alexander Klimburg). As Jeff Moss, the founder of Black Hat, once argued: “We created innovations to keep the governments out. With the new applications, big money came in. Big money attracted the criminals. And with criminals in cyberspace, it is only natural that governments came back.”
Was Barlow wrong? Yes and no. Even if governments are back, they are back in a different way. The world is now a cyberworld. The economy is a digital economy. The new complexity of the global Internet Governance Ecosystem cannot be managed in the traditional way anymore. In 2005, at the UN World Summit on the Information Society (WSIS), the heads of state of 193 UN member states recognized that the governance of the Internet needs the involvement of all stakeholders, including the private sector, the technical community, and civil society. It requires the “sharing” of policy development and decision making. The so-called multistakeholder model became the blueprint for global Internet governance.
Even if the model has many conceptual weaknesses and is stress-tested by numerous new challenges, there is broad recognition that governments alone will not find solutions to the problems of the digital age. Referring indirectly to Barlow’s declaration, the High-Level Panel on Digital Cooperation, established by UN Secretary-General António Guterres, titled its 2019 final report “The Age of Digital Interdependence.” In this sense, the “return of governments” is more than the swinging back of a pendulum. It is not “government or the community”; it is “government and the community.” Humanity is now on a new layer and still has to figure out how this new digital cyber world functions, how it can be governed, and how “sharing” can be organized in a political environment dominated by power struggles and moneymakers.
Today’s digital revolution is now often described as the “4th Industrial Revolution.” Looking backward, does it make sense to learn some lessons from the “1st Industrial Revolution”?
When the industrial age started in the first half of the 19th century, a 30-year-old German rocked the world by arguing that this industrial revolution was much more than steamboats, trains, electricity, factories, and the telegraph. He predicted a “new economy” and a “new society.” In 1848, Karl Marx called his declaration the “Communist Manifesto.” But Marx was soon confronted with the realities of his time. In a speech in London on April 14, 1856, he recognized the deep contradictions: “In our days, everything seems pregnant with its contrary: Machinery, gifted with the wonderful power of shortening and fructifying human labor, we behold starving and overworking it. By some strange, weird spell, the newfangled sources of wealth are turned into sources of want; the victories of art seem bought by the loss of character. At the same pace that humanity masters nature, man seems to become enslaved to other men or to his own infamy. Even the pure light of science seems unable to shine but on the dark background of ignorance.”
History did not work out as Marx expected. However, 100 years later, the world was “fully industrialized.” The kingdoms that ruled the world when Marx was a young journalist no longer existed. They had been replaced by rather different types of “republics”: on the one side, democracies based on respect for human rights and the rule of law; on the other side, autocracies based on a one-party system, with a single man at the top dictating to the rest of the country what to do. Even worse, after two world wars, 1948 saw the start of a cold war between the two blocs. And it took nearly another half-century until the heads of state of the “two blocs” declared democracy the winner of the industrial age.
Their vision in the “Charter of Paris” (1990) reads as follows: “Ours is a time for fulfilling the hopes and expectations our peoples have cherished for decades: steadfast commitment to democracy based on human rights and fundamental freedoms; prosperity through economic liberty and social justice; and equal security for all our countries. We undertake to build, consolidate and strengthen democracy as the only system of government of our nations. Democratic government is based on the will of the people, regularly expressed through free and fair elections. Democracy has as its foundation respect for the human person and the rule of law. Democracy is the best safeguard of freedom of expression, tolerance of all groups of society, and equal opportunity for each person. Democracy, with its representative and pluralist character, entails accountability to the electorate, the obligation of public authorities to comply with the law, and justice administered impartially. No one will be above the law.”
Isn’t this a nice vision? Peace and understanding, prosperity, economic liberty, and social justice for everybody from Vancouver to Vladivostok? And this “vision” came from governments, not from dreamers like John Perry Barlow. However, this vision, too, did not survive the stress test of reality.
In 1990, when the “Charter of Paris” was signed, the World Wide Web was opening the door into the “digital age.” Thirty years later, the “fathers of the Internet” are grandfathers. Their children have commercialized, politicized, and weaponized cyberspace. The visions of yesterday have disappeared behind the horizon. Today’s realities tell us that all the outstanding achievements, the new applications and services that made our lives freer, easier, richer, and more comfortable, have a dark flip side. Social networks risk becoming censors; search engines risk becoming global watchdogs; we are surrounded by mass surveillance, biometric control systems, and a swamp of fake news and hate speech. New profitable applications destroy traditional businesses, and it is unclear whether this is “creative destruction” (Schumpeter) or the road towards a more deeply divided society. We have to struggle with cybercrime, misinformation, market dominance, digital trade wars, and lethal autonomous weapon systems. Will platform regulations, digital taxation, norms of state behavior in cyberspace, and rules for an ethical approach to artificial intelligence help manage our future? What will the grandchildren of the Internet do with this new generation of problems, which John Perry Barlow did not touch in his declaration?
History doesn’t repeat itself. Nobody knows what our world will look like 25 years from now. One can certainly expect a “fully digitalized world” in 2046. But will this world be a “civilization of the Mind”? Will every individual have affordable access to the Net? Will we enjoy the fruits of a successful “green and digital deal”? Will digital progress have improved our environment, education, and healthcare? Will there be “decent work” for everybody? Will the world be more “humane and fair”? Or will we have to struggle through a digital “cold war” between cyber-democracies and cyber-autocracies?
Having visions and dreams for the future is always a good thing. They are needed to inspire people, broaden their views, and stimulate the imagination. But one should also be aware that reality can take a different road. Today is a result of yesterday; tomorrow is a result of today. Winston Churchill once said: “A nation that forgets its past has no future.” I would therefore recommend that tomorrow’s professors add Barlow’s “Declaration of the Independence of Cyberspace” to their students’ reading lists.