Governing the Invisible: AI Risks in Telecom Infrastructure Outpace Global Legal Frameworks

As artificial intelligence becomes deeply woven into the operational fabric of telecommunications, legal systems across the globe are struggling to keep up. A recent cross-jurisdictional study by Avinash Agarwal and colleagues examines regulatory preparedness in ten countries, revealing that most legal frameworks remain unequipped to address the novel risks AI introduces into critical digital infrastructure.

New threats: Telecommunications networks increasingly rely on AI for tasks such as network optimization, cybersecurity, and predictive maintenance. Yet, this growing dependence carries new threats—model drift, opaque decision-making, and susceptibility to adversarial attacks—that traditional telecom, data protection, and cybersecurity laws largely ignore. Regulatory focus remains rooted in conventional risks like data breaches and unauthorized access, while algorithmic bias and systemic failures go unaddressed.

Governance gap: The study highlights a worrying governance gap: AI is operating at the core of critical infrastructure, but legal oversight remains fragmented and reactive. Even as some countries introduce national AI strategies or incident reporting systems, these tend to target general data or content issues, not the sector-specific challenges of AI in telecoms. Only a handful of jurisdictions, such as China and the United States, have begun integrating AI-specific guidelines—but these remain mostly limited to content regulation or federal agency use, not core telecom functions.

Patchy regulation: In India, for instance, the newly enacted Telecommunications Act of 2023 introduces advanced security provisions and incident reporting rules, yet lacks specific mechanisms for AI failures. Similarly, Indonesia’s cyber laws acknowledge AI’s strategic importance but have yet to translate this into binding regulation. Across jurisdictions, voluntary databases such as the AI Incident Database fill some gaps but lack standardization and legal enforceability.

The authors argue for more anticipatory regulation, calling for a harmonized legal approach that recognizes AI’s distinctive failure modes and integrates lessons from other high-risk industries like aviation. Without such frameworks, the growing sophistication of AI systems could render existing safeguards obsolete—leaving vital networks exposed to invisible, and potentially catastrophic, failures.

By CircleID Reporter


