Cyberattacks are entering a new phase of automation and adaptability, driven not by human ingenuity alone, but by large language models (LLMs) embedded directly into malicious code. In its latest threat intelligence bulletin, Google’s Threat Intelligence Group (GTIG) details a striking evolution: malware families that are no longer merely aided by AI, but fundamentally co-executed with it.
Self-mutating malware: The most novel discovery involves PROMPTFLUX, a dropper written in VBScript that interfaces live with Google’s Gemini API. Using a module called “Thinking Robot,” the malware prompts the model to return machine-parsable output, instructing it to regenerate the malware’s own source code hourly to evade detection. Notably, the malware requests the “-latest” model tag, ensuring access to Gemini’s most recent stable release and effectively treating the LLM as a self-updating obfuscation engine. Although still experimental, the recursive design marks a significant departure from conventional polymorphism.
Other malware, such as PROMPTSTEAL, attributed to Russia’s APT28, uses the Hugging Face-hosted Qwen2.5-Coder model to generate system reconnaissance commands at runtime. This marks the first known case of malware querying an external LLM during active operations. Deployed under the guise of a benign image generator, PROMPTSTEAL runs the AI-generated one-liners in the background, exfiltrating system data without the hard-coded commands that signature-based detection relies on.
Prompt manipulation: Threat actors are also adopting increasingly deceptive prompt engineering techniques to bypass AI safety filters. Chinese and Iranian state-affiliated groups have been observed posing as students in capture-the-flag (CTF) competitions, or as academics writing cybersecurity papers. In one case, an Iranian actor accidentally leaked operational secrets—including an active command-and-control domain—by pasting live infrastructure code into Gemini for debugging help.
Commoditized tooling: The underground market for generative-AI tooling is also maturing. According to GTIG, vendors now offer tiered AI-assisted malware services, complete with obfuscation-as-a-service, phishing kits, and access to generative APIs, mirroring SaaS business models. Some tools even advertise their ability to automate exploit research or generate deepfakes to circumvent KYC checks.
Google has responded by disabling abused assets, retraining classifiers, and hardening Gemini’s safety architecture. While still in its early stages, the development of live LLM-in-the-loop tooling represents a systemic risk that traditional security controls are ill-equipped to address.