As artificial intelligence becomes deeply woven into the operational fabric of telecommunications, legal systems across the globe are struggling to keep up. A recent cross-jurisdictional study by Avinash Agarwal and colleagues examines regulatory preparedness in ten countries, revealing that most legal frameworks remain unequipped to address the novel risks AI introduces into critical digital infrastructure.
New threats: Telecommunications networks increasingly rely on AI for tasks such as network optimization, cybersecurity, and predictive maintenance. Yet, this growing dependence carries new threats—model drift, opaque decision-making, and susceptibility to adversarial attacks—that traditional telecom, data protection, and cybersecurity laws largely ignore. Regulatory focus remains rooted in conventional risks like data breaches and unauthorized access, while algorithmic bias and systemic failures go unaddressed.
Governance gap: The study highlights a worrying governance gap: AI is operating at the core of critical infrastructure, but legal oversight remains fragmented and reactive. Even as some countries introduce national AI strategies or incident reporting systems, these tend to target general data or content issues, not the sector-specific challenges of AI in telecoms. Only a handful of jurisdictions, such as China and the United States, have begun integrating AI-specific guidelines—but these remain mostly limited to content regulation or federal agency use, not core telecom functions.
Patchy regulation: In India, for instance, the newly enacted Telecommunications Act, 2023 introduces advanced security provisions and incident reporting rules, yet lacks specific mechanisms for AI failures. Similarly, Indonesia’s cyber laws acknowledge AI’s strategic importance but have yet to translate this into binding regulation. Across jurisdictions, voluntary databases like the AI Incident Database fill some gaps but lack standardization and legal enforceability.
Anticipatory regulation: The authors argue for a more anticipatory, harmonized legal approach that recognizes AI’s distinctive failure modes and integrates lessons from other high-risk industries such as aviation. Without such frameworks, the growing sophistication of AI systems could render existing safeguards obsolete, leaving vital networks exposed to invisible, and potentially catastrophic, failures.