Artificial intelligence is rapidly reshaping cyber warfare, according to a new report from Google’s Threat Intelligence Group, which warns that hostile states and criminal networks are using generative AI to automate hacking, disguise malware and scale disinformation campaigns.
The report describes a shift from experimental use of AI toward industrial-scale deployment. For the first time, Google says it identified a zero-day vulnerability likely developed with AI assistance. The flaw, designed to bypass two-factor authentication on a widely used administration tool, was allegedly intended for mass exploitation before Google intervened.
Chinese and North Korean actors have shown particular interest in AI-assisted vulnerability research. Meanwhile, Russian-linked groups are reportedly using AI-generated “decoy logic” to hide malicious code inside malware. One Android backdoor, dubbed PROMPTSPY, employed Google’s Gemini model to autonomously interpret a victim’s screen and execute commands without human supervision.
Disinformation campaigns: The report also highlights AI’s growing role in propaganda. Pro-Russia campaigns have allegedly used AI-generated voice cloning and fabricated media clips to impersonate journalists and manufacture political narratives at scale.
Supply chains: At the same time, AI systems themselves are becoming targets. Criminal groups have compromised software packages tied to popular AI development tools, stealing cloud credentials and infiltrating corporate systems through supply-chain attacks.
Google argues that frontier AI models remain difficult to compromise directly, but the surrounding ecosystem—plugins, connectors and third-party tools—has become increasingly vulnerable. The company says it is responding with automated defensive systems, including AI agents designed to discover and patch software flaws before attackers can exploit them.