
The EU AI Act: A Critical Assessment

The proposed European Union (EU) Artificial Intelligence Act has been extolled in the media as bold action by a major legislative body against the perceived dangers of emerging computer technology. The Act presently consists of an initial 2021 proposal for a Regulation with annexes, plus Amendments adopted on 14 June 2023. This regulatory behemoth is entwined with a multitude of other recent major EU regulations and comprises nearly 100 clarifying “recitals,” followed by another hundred clauses and ten annexes. It is so complex, and changing so rapidly, that no consolidated text appears to exist; the ensemble continues to evolve among some invisible set of players.

The AI Act now contains more than fifty vague definitions. Its provisions include almost every possible kind of regulatory mechanism: mandated capability requirements, prohibited practices, standards, conformity assessments, certifications, registrations, new EU offices and boards, codes of conduct, monitoring, reporting, enforcement, and penalties. Pursuant to an essentially unintelligible scope clause that continues to change, the Act’s obligations apply to anyone and everyone in the world who places AI systems on the market or puts them into service, including providers, deployers, importers, distributors, and authorised representatives.

The definition of what constitutes an AI system is so abstruse as to encompass almost any computer-based activity—“a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.” In many ways, the AI Act itself is constantly regenerating, both through changes to the legislative text and through the outsourcing of details to a myriad of new activities and organisations.
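
To make that breadth concrete, here is a deliberately trivial, hypothetical illustration (a thermostat recommender invented for this post, not drawn from the Act): a few lines of ordinary Python that take human-provided data and inputs, apply a hard-coded logic- and knowledge-based rule, and produce a system-generated recommendation influencing the environment. Read against the definition, even this arguably qualifies as an AI system.

```python
# Hypothetical example: a trivial rule-based "recommender" that arguably
# satisfies every element of the Act's AI-system definition. It takes
# human-provided data and inputs, applies a logic- and knowledge-based
# approach (a hard-coded comfort rule), and produces a system-generated
# recommendation influencing the environment it interacts with.

COMFORT_RANGE_C = (19.0, 24.0)  # "knowledge" encoded by the programmer

def recommend_setting(current_temp_c: float) -> str:
    """Infer how to achieve the objective: keep the room comfortable."""
    low, high = COMFORT_RANGE_C
    if current_temp_c < low:
        return "turn heating on"
    if current_temp_c > high:
        return "turn cooling on"
    return "no change needed"

if __name__ == "__main__":
    # Human-provided input in; a "recommendation" out.
    print(recommend_setting(float(input("Current room temperature (C): "))))
```

If a thermostat script can be read into the definition, so can a vast share of ordinary software, which is precisely the vagueness problem.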

The purpose of the AI Act is a laudable socio-political statement—“to promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and the rule of law, and the environment from harmful effects of artificial intelligence systems in the Union while supporting innovation.” However, the attributes of the Act as it is coming into existence arguably constitute “failure by design.” Here is why.

Flawed legal constructs

It will be essentially impossible for the AI Act to be implemented in its present form because of an array of significant legal impediments and infirmities.

Jurisdiction and Conflict of Law. The EU decided to move ahead on its own massive regulatory regime for AI rather than collaborate with other jurisdictions on a common approach in intergovernmental venues. Nations outside the EU are clearly taking different approaches, beginning notably with the ITU, the only global intergovernmental body that has been significantly engaged in both AI law and standardisation. Last year, essentially all the world’s nations added Res. 214, which takes a more positive and judicious approach to AI technologies, to the ITU’s basic global ICT treaty instrument. The ITU has brought nations together on an array of AI activities, including its AI for Good events, and over the past several years has developed AI technical standards for both radiocommunications and ICT networks. The work has resulted in a number of beneficial AI systems and implementations.

For global collaboration on AI law, the Council of Europe created a Committee on Artificial Intelligence (CAI) in early 2022. Its objective is to encourage the application of AI based on human rights, the rule of law, and democracy, and it consists of representatives from 46 member states as well as a number of observer states and other organisations. The CAI’s draft “[Framework] Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law” was recently released and contains its own definition of artificial intelligence.

The fundamental difference in perspective was also underscored recently by the head of the UK’s national cybersecurity organisation, the NCSC. Ms. Cameron articulated a cautious, sensible approach to the subject, advocating a ‘secure by design’ model in which vendors take more responsibility for embedding cyber security into their technologies, and their supply chains, from the outset. Toward this end, the UK is hosting a global summit on AI.

China and the United States are also clearly taking different approaches to AI. China’s AI regulatory measures issued this year focus primarily on media-related “generative” AI products and services, including rules on “deep synthesis” technologies. The United States, meanwhile, has taken a structured approach, establishing a national initiative with multiple agencies and committees built around several strategic pillars: innovation, advancing trustworthy AI, education and training, infrastructure, applications, and international cooperation. The last pillar has involved engagement with the OECD, the Global Partnership on AI, G7 and G20 dialogues, a bilateral UK-US partnership, the AI Partnership for Defense, and the US-EU Trade and Technology Council. The NTIA recently undertook a public policymaking proceeding in which more than 1,400 comments were filed.

Especially noteworthy for both the US and China is the close coupling of their AI approaches to strategic knowledge and national security considerations, which weigh significantly in the choice of regulatory regimes. China’s first concerns go to “reflecting Socialist Core Values” and preventing “subversion of state power, overturning of the socialist system; incitement of separatism; harm to national unity….” The United States’ first concerns focus on “continued US leadership in AI R&D; lead[ing] the world in the development and use of trustworthy AI systems in public and private sectors; prepar[ing] the present and future US workforce for the integration of artificial intelligence systems across all sectors of the economy and society,” and include multiple defense-related activities. The latter prominently include the NSA’s framing of AI cybersecurity as the next frontier, as well as Cyber Command strategies.

Perhaps the broadest and most structured approach to AI regulation is that recently undertaken by the Australian government. Australia’s discussion paper on responsible AI begins by examining the opportunities and challenges, follows with an extensive examination of what other regions and countries are doing, and closes with ideas on how to manage the potential risks. It eschews outright regulation and relies instead on five possible “elements”: published AI impact assessments, especially for high-risk AI; notice to users when AI is used; human oversight assessments; explanations of how an AI system functions; and AI employee training.

In addition to the fundamental regulatory divergences, it is apparent that the national agencies that grant patent rights have generally taken a different approach as well. A simple global patent search reveals that more than 100,000 patents have been granted for AI systems across multiple national jurisdictions. The presumption here is that these jurisdictions, in granting the patents, found the implementations novel and beneficial; many have subsequently been introduced into commerce.

The AI system marketplace is evolving and expanding worldwide at an almost incomprehensible rate. Products, code, and implementations are publicly and ubiquitously available around the globe in countless offerings. Most of the significant code-development sites today host AI tools. GitHub points to 18,000 public repositories. AI code generators are so numerous that CodeSummit lists a “top twenty.” SourceForge lists hundreds of AI coding assistants, categorised by deployment mechanism and category. Essentially all of the major ICT platform and cloud vendors have huge developer programmes underway, including Amazon, Google, IBM Cloud, Microsoft, Alibaba, H2O.ai, DataRobot, Oracle, Salesforce, Wipro-Holmes, and Meta, for starters.
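
How low the barrier to entry has become is easy to demonstrate. The sketch below, offered purely as an illustration using the open-source scikit-learn library and its bundled iris dataset, builds, evaluates, and serialises a complete working machine-learning system in about a dozen lines; the saved model file is an artefact that can be copied and deployed in any jurisdiction.

```python
# Minimal sketch: a complete machine-learning system assembled entirely from
# freely available open-source tooling, the kind of artefact now produced
# by the thousand in public repositories worldwide.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Bundled demo data stands in for any real-world dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a classifier and check it on held-out data.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Serialising the model yields a distributable artefact: once published,
# it can be copied, embedded, and run anywhere, under any jurisdiction.
joblib.dump(model, "classifier.joblib")
```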

Lastly, there is the reality that the EU does not exist in some kind of walled market sandbox. It cannot ex post facto impose a massive regulatory regime on the global ICT mesh of networks, devices, services, and providers, which already has AI code running everywhere and of which the EU is itself a part. The few provisions in the AI Act that attempt to deal with these jurisdiction and conflict-of-law impediments are all found in recital admonitions rather than in the body of the Act.

For example, recital (6) states that “the notion of AI system in this Regulation should be clearly defined and closely aligned with the work of international organisations working on artificial intelligence to ensure legal certainty, harmonization and wide acceptance….” Recital (10) poses the ultimate conflict-of-law challenge: “the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union.”

The AI Act has several vague jurisdictional exceptions, notably the recital (11) note that “this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States.” It notes, however, that “this exception should nevertheless be limited to trusted countries and international organisations that share Union values.”

The new recital (41a) somewhat circularly states that “A number of legally binding rules at European, national and international level already apply or are relevant to AI systems today,” and then cites the EU’s own law combined with vague references to “UN Human Rights treaties, … Council of Europe conventions, … and national law.”

Lastly, there is the recital (61d) admonition that “when adopting common specifications, the Commission should strive for regulatory alignment of AI with likeminded global partners, which is key to fostering innovation and cross-border partnerships within the field of AI, as coordination with likeminded partners in international standardisation bodies is of great importance.”

Reasonableness, rationality, and vagueness. Most jurisdictions with strong human rights protections operate under the juridical standard of “reasonableness.” It is “a standard of review often used by courts for making a determination as to the constitutionality or lawfulness of legislation and regulations, particularly in common law jurisdictions, and through which judges will assess whether the questioned law or practice can be justified vis-à-vis the objectives targeted and the constitutional rights to be protected.” A stricter test, “rationality,” is also sometimes used in place of reasonableness.

Although some provisions of the AI Act would pass muster under these tests, others would not. Provisions that vicariously impose prohibitions or arrays of onerous obligations, backed by enormous monetary penalties, on essentially anyone and everyone associated with the deployment and use of AI code across all the network meshes of the world seem patently unreasonable, if not irrational.

Especially relevant here is the matter of vagueness and the related requirement that those subject to a law must be capable of knowing what it requires. This is often expressed as the “void-for-vagueness” doctrine, which curbs the arbitrary and discriminatory enforcement of regulatory provisions. A law must be understandable not only to those required to obey it but also to those charged with enforcing it. Although an international “void for vagueness” doctrine has never been fully articulated, it is implicit in the concept of binding international legal norms. As Franck’s treatise, The Power of Legitimacy Among Nations, notes, “in order for an international legal standard to be legitimate, it must provide reasonably clear guidance concerning the nature of the obligation that it imposes.”

By almost any measure, the AI Act is an enormous collection of vague definitions and requirements, beginning with the new definition of an AI system. Then there is the reality that it is not actually possible to implement the requirements operationally or technically. AI systems and code have essentially become a new form of “dark matter” that pervades the ICT universe.

Flawed standards making

Both the recitals and the implementing provisions of the AI Act effectively outsource everything, beginning with the scope of application and the definitions, to an extensive, constantly evolving assortment of EU institutions and external groups that are tasked with somehow developing something potentially implementable. The vagueness begins with the basic definition of what constitutes an AI system and extends to almost every other entity and thing in the Act.

Unfortunately, standards-making for ICT is the EU’s huge Achilles’ heel. At a time when ICT standards work has massively moved to all manner of private-sector consortia that compete vigorously to attract industry participants, the EU standards-making establishment remains a relic of the distant past. It is dominated by a handful of insular and substantially closed national entities that lobby for monopoly standards status in the EU in order to maintain practices, including human rights violations, recently found unlawful by the EU’s own Court of Justice Advocate General.

Notwithstanding these deficiencies, the AI Act seems to seek its salvation somehow through standards, which are invoked profusely throughout the text. This begins in recital (13), which states that “common normative standards for all high-risk AI systems should be established.” The entire task, however, is summed up in recital (61): “Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council should be a means for providers to demonstrate conformity with the requirements of this Regulation. To ensure the effectiveness of standards as policy tool for the Union and considering the importance of standards for ensuring conformity with the requirements of this Regulation and for the competitiveness of undertakings, it is necessary to ensure a balanced representation of interests by involving all relevant stakeholders in the development of standards. The standardisation process should be transparent in terms of legal and natural persons participating in the standardisation activities.”

However, some of the exceptions are themselves vague, such as recital (61b), which states: “When AI systems are intended to be used at the workplace, harmonised standards should be limited to technical specifications and procedures.” Then there are the exceptions in recital (61c), which states: “The Commission should be able to adopt common specifications under certain conditions when no relevant harmonised standard exists or to address specific fundamental rights concerns. Through the whole drafting process, the Commission should regularly consult the AI Office and its advisory forum, the European standardisation organisations and bodies or expert groups established under relevant sectorial Union law as well as relevant stakeholders, such as industry, SMEs, start-ups, civil society, researchers and social partners.” And lastly, there is the recital (61d) admonition, quoted earlier, that the Commission “should strive for regulatory alignment of AI with likeminded global partners” when adopting common specifications.

The body of the AI Act is filled with references to “harmonised standards,” “relevant specifications,” and “technical specifications or existing standards.” Thus, somehow, a universe of new and existing EU and other institutions is being co-opted into implementing a regime that is legally, operationally, and technically impossible to implement. This remains the ultimate conundrum here, and it is already visible in the existing work and its attendant challenges.

The challenges begin with extant problems within the standards-organisation ecosystem, which in the AI arena is extensive. Almost every standards venue has some kind of AI specifications and standards work underway. A small number of those bodies earn enormous revenues from sales of their standards by keeping them sequestered from public view behind paywalls, and thus employ relatively small, closed processes among the participants willing to give away their intellectual property to the publishers. Those bodies also tend to seek out monopoly relationships with regulatory authorities to sustain their paywall model.

Examples of this behaviour can already be seen with respect to the AI Act. The European Commission’s recent Analysis of the preliminary AI standardisation work plan in support of the AI Act lists only 12 standards, all of them ISO/IEC or CEN/CENELEC publications collectively costing thousands of euros to view. This was not unexpected: CEN/CENELEC JTC 21 was given an exclusive remit by the EC to develop the standards, and it consists almost entirely of a handful of CEN/CENELEC national member bodies who must struggle with making some sense of an AI Act yet to be finalised. The EU’s largest and most diverse ESO, the industry-driven ETSI, is not included, nor are any other AI standards entities, including the ITU. The standards process actually underway flies in the face of the AI Act’s call for broad participation and openness.

The recent appellate opinion delivered on 22 June 2023 by European Court of Justice Advocate General Medina, in a human rights lawsuit against the EC over its paywalled standards-making practices, finds those practices unlawful. AG Medina’s finding exacerbates the AI Act’s multiple legal infirmities and undercuts its assertions of inclusion, consumer responsiveness, and human rights promotion.

However, the larger issue is whether the EU’s AI regulatory regime can be accomplished through some kind of technical-standards magic combined with the myriad other regulatory mechanisms the EU is throwing at a rapidly expanding global AI marketplace. The AI genie is clearly long out of the bottle, and it is unlikely that the entire world will bow to EU jurisdiction and rules.

This view also seems consonant with those of legal scholars. Consider, for example, John O. McGinnis’ chapter on The Folly of Regulating against AI’s Existential Threat in the 2022 Cambridge Handbook of Artificial Intelligence:

There is an overwhelming case against the current regulation of AI for existential risks. The regulation would compromise the progress in AI because regulators could not tell which lines of research make existential threats. Part of the reason is that these risks are not imminent and are not probable, thus making identification even harder. Finally, regulating at the national level might empower rogue nations to threaten the national security of well-functioning democracies. But international regulation is not possible, because it is difficult, if not impossible, to verify that prohibited lines of research are not occurring within another nation’s territory. Encouraging with subsidies the development of AI that is not an existential threat is the best way forward, because it will build up knowledge of potential dangers.

Other Approaches

One of the more significant omissions in the EU AI Act is its failure to recognise and embrace the Zero Trust Model, which never appears in the EU material related to AI. The latest iteration of CISA’s Zero Trust Maturity Model notes that there is an AI component without elaborating: the model encourages the use of advanced analytics, artificial intelligence, and machine learning for better detection of threats and breaches. Implementing the Zero Trust Model is a core strategy of the United States.

The AI Act’s stated purpose “to promote…trustworthy artificial intelligence” seems fundamentally at odds with the Zero Trust Model, which disavows the notion that any ICT capability or service is trustworthy. Under that model, AI is always presumed untrustworthy, and the focus shifts instead to tools that detect AI misuse and measure the context-dependent risk posed by different users. Although the AI Act takes a risk-management approach, it fails to embrace most of the pillars of the Zero Trust Maturity Model.
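
As a hedged sketch of what that shift in focus looks like in practice: rather than certifying a system as trustworthy up front, a zero-trust posture treats every interaction as suspect and scores it against a learned baseline. The example below uses scikit-learn’s IsolationForest anomaly detector; the “request features” (requests per minute and payload size) and the baseline are invented for illustration and are not prescribed by CISA’s model.

```python
# Zero-trust posture in miniature: no request is presumed trustworthy.
# Each one is scored against a learned baseline and anomalies are flagged.
# The traffic features below (requests/min, payload KB) are invented
# purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" traffic: ~60 requests/min, ~4 KB payloads.
baseline = rng.normal(loc=[60.0, 4.0], scale=[10.0, 1.0], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

incoming = np.array([
    [58.0, 4.2],    # looks like ordinary traffic
    [600.0, 80.0],  # burst of large payloads: possible automated misuse
])
for features, verdict in zip(incoming, detector.predict(incoming)):
    status = "anomalous: escalate" if verdict == -1 else "within baseline"
    print(f"request {features}: {status}")
```

The detection-first framing sits comfortably alongside the Act’s risk-management language, but it starts from the opposite premise: measure and contain untrusted behaviour rather than certify trust.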

To the EU’s credit, it has not staked its AI strategy entirely on the AI Act. Arguably, the most significant and likely most effective EU AI action is the draft AI Liability Directive, which enables causes of action to proceed in judicial systems when actual harm occurs. The European Economic and Social Committee has also published a favourable opinion on the Liability Directive.

Conclusion

The EU has clearly moved far out front in the global AI regulatory ecosystem. Politically, the move is consonant with the EU’s activist image, and, as the Australian consultative paper notes, the EU gets high marks for a structured, risk-based approach to AI.

However, even in its unfinished state, the AI Act runs the risk of being a “failure by design.” Simply throwing every legacy regulatory mechanism at the wall to see what sticks, notwithstanding so many omissions and infirmities, will not govern the AI universe. It is an endeavour whose principal merit is demonstrating the limits of activist regulators and their enforcement abilities. As the Australian consultative paper notes, the EU’s regulatory excesses have already been throttled back. The AI Act will end up establishing the limits of the EU’s AI governance.

The one exception to this assessment—facilitating civil actions by harmed parties against wrongdoers—is a course of action that will work and deserves support and praise.

Disclaimer: The views expressed in this article are solely those of the author, who has worked as an engineer-lawyer in the regulatory, strategic analysis, and technical standards fields for fifty years, and should not be attributed to any organisation in which he works or participates.

By Anthony Rutkowski, Principal, Netmagic Associates LLC

The author has for many years been a leader in international cybersecurity bodies developing global standards and legal norms.

Comments

Perspective – Richard Taylor, Jul 5, 2023 11:51 AM

Thank you. This is a helpful review at a time when perspective is badly needed. Your insights, as well as those of Adam Thierer and others, provide some balance in the face of AI moral panic.
