On 8 February 1996, two events created the original sin of digital and, increasingly, AI governance, shaping tech developments to this day. In Davos, John Perry Barlow’s Declaration of the Independence of Cyberspace cast the internet as a sovereign realm beyond the authority of states. The same day in Washington, D.C., the US Communications Decency Act entered into force, granting internet platforms an unprecedented legal shield from liability for content they host. Taken together, these moves seeded an enduring assumption that technological development should outrun, and often sit outside, politics, law, and the governance instruments societies have built over millennia.
In Davos, Barlow’s Declaration of the Independence of Cyberspace pronounced rather dramatically:
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
This was a foundational myth, a political fantasy that spawned a generation of thought arguing the internet meant the ‘end of geography.’ Thousands of articles, books, theses, and speeches have argued that we need new governance for the ‘brave new world’ of the digital.
This intellectual and policy house of cards was built on the assumption that cyberspace exists apart from physical space. It was (and is) a wrong assumption. There is no cyberspace. Every email, every post, every AI query is ultimately a physical event: pulses of electrons carrying bits and bytes through undersea cables, Wi-Fi networks, data centres, and internet infrastructure. Anything we do online ultimately happens in the physical world, under the jurisdiction of one of 193 countries. Barlow’s declaration was a call to lawlessness disguised as liberty, persuading a generation that the digital realm lay beyond long-standing legal and ethical traditions.
On that same day, President Clinton signed into law the US Communications Decency Act (CDA), which had been adopted by the US Congress. Buried within it was Section 230, which granted internet platforms an unprecedented immunity: they could not be treated as publishers or speakers of the content they hosted.
For the first time in history, commercial entities were granted a broad shield from liability for the very business from which they profited. It was a departure from a well-established principle of legal liability, under which a newspaper answers for the text it publishes and a broadcaster for its transmissions. This sweeping exemption was justified as protection for a nascent industry against costly lawsuits over the content it hosted, and it did help the internet grow. In the meantime, small tech firms became huge companies with market capitalisations in the trillions of US dollars. Yet the legal immunity remains, as if they were still garage start-ups, creating one of the main tensions in the modern economy and law: a multi-trillion-dollar industry structurally divorced from the legal consequences of its operations.
These two events, the myth of statelessness and the immunity from legal liability, fed each other. The fantasy of a separate cyberspace provided the ideological cover for the exceptional legal treatment worldwide. Why burden these pioneers with old-world laws if they were building a new world?
Digital governance’s original sin was challenged immediately in 1996 by US judge Frank H. Easterbrook, who argued that we do not need a dedicated law of the internet, just as we never needed a ‘law of the horse’ when horses were the dominant means of transport. The internet should be regulated by applying existing legal principles.
Time has proved Easterbrook right. Law is not about technological means, whether smoke signals, horse transport, or the internet. Law, in its core function, regulates relations among human beings and the entities they create (companies, states), including in their use of technology.
The myth of cyberspace has since been dismantled by reality. Today, we are more anchored in geography than ever, with high-precision location and activity tracking. Barlow’s thesis on the ‘end of geography’ is rarely mentioned in speeches and articles. However, despite concerns from both Republicans and Democrats, Section 230 of the CDA remains in force, extending into the age of AI.
Protected by Section 230 of the CDA, AI platforms can launch large language models and diffusion models into the world with minimal oversight, shielded by the same logic: we are not the speakers, we are merely the conduit.
The result is a glaring, deadly asymmetry with other industries. A car manufacturer must issue recalls for flaws. A pharmaceutical company bears liability for its products. But an AI company can release a system that amplifies hatred, disseminates lethal misinformation, or directly contributes to an increase in suicides through unchecked interaction, and face no comparable legal responsibility for the harms it enables. The burden of proof and the weight of tragedy fall solely on users and victims, not on the architects of the systems.
As AI raises the political, societal, and economic stakes, we have to revisit digital governance’s original sin and return to a fundamental legal principle developed over millennia of human history: if you create, operate, and profit from a technology, you must be accountable for its foreseeable impacts.
This is not about stifling innovation. It is about aligning innovation with responsibility, as humanity has done with every other transformative technology throughout history. The age of legal exceptionalism should end. The age of accountability must begin with addressing the powerful impact of AI on society.