
AI-Powered Malware Evolves: Google Uncovers Live Use of Generative Models in Active Intrusions

Cyberattacks are entering a new phase of automation and adaptability, driven not by human ingenuity alone but by large language models (LLMs) embedded directly into malicious code. In its latest threat intelligence bulletin, Google’s Threat Intelligence Group (GTIG) details a striking evolution: malware families that are no longer merely written with AI assistance, but that call AI models as part of their execution.

Self-mutating malware: The most novel discovery involves PROMPTFLUX, a dropper written in VBScript that queries Google’s Gemini API at runtime. Through a module called “Thinking Robot,” the malware sends the model machine-parsable prompts instructing it to regenerate the malware’s own source code hourly to evade detection. Notably, the malware pins the “-latest” model tag, ensuring it always reaches Gemini’s most recent stable release—effectively treating the LLM as a self-updating obfuscation engine. Although still experimental, the recursive design marks a significant departure from conventional polymorphism.
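
For context, the “-latest” suffix is simply a documented model alias in the public Gemini REST API, resolved on Google’s side to the newest release of a model line. The sketch below is illustrative only: it assumes Python with the requests library and a GEMINI_API_KEY environment variable, and the alias shown is one public example, not anything recovered from the malware.

    # Minimal sketch: addressing a Gemini model through a "-latest" alias.
    # The alias is resolved server-side, so code pinned to it automatically
    # tracks new model releases without any client-side update.
    import os
    import requests

    MODEL = "gemini-1.5-flash-latest"  # illustrative public alias
    url = ("https://generativelanguage.googleapis.com/v1beta/"
           f"models/{MODEL}:generateContent")
    payload = {"contents": [{"parts": [{"text": "Explain what polymorphic malware is."}]}]}

    resp = requests.post(url, params={"key": os.environ["GEMINI_API_KEY"]}, json=payload)
    resp.raise_for_status()
    print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])

Because the alias resolution happens on the server, any code built around it silently inherits each new model release, which is what makes the tag attractive as a self-updating component.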

Other malware, such as PROMPTSTEAL—attributed to Russia’s APT28—uses the Hugging Face-hosted Qwen2.5-Coder model to generate system reconnaissance commands on the fly during execution, which GTIG describes as the first observed case of malware querying an external LLM in active operations. Deployed under the guise of a benign image generator, PROMPTSTEAL runs the AI-generated one-liners in the background, exfiltrating system data while leaving no hard-coded command strings for signature-based detection to catch.
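
Mechanically, querying a hosted model at runtime is nothing more than an authenticated HTTPS POST. The hypothetical sketch below uses Hugging Face’s public serverless Inference API with a deliberately benign prompt; the checkpoint name, placeholder token, and prompt are illustrative assumptions, not details taken from PROMPTSTEAL.

    # Minimal sketch: a runtime text-generation query against a model
    # hosted behind Hugging Face's serverless Inference API.
    import requests

    # Illustrative checkpoint; not taken from the malware.
    API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"
    headers = {"Authorization": "Bearer hf_..."}  # placeholder access token

    resp = requests.post(API_URL, headers=headers,
                         json={"inputs": "Write a one-line comment describing a for loop."})
    resp.raise_for_status()
    print(resp.json()[0]["generated_text"])

Because the model’s output arrives over TLS and is consumed immediately, the durable indicator for defenders tends to be behavioral rather than static: an unexpected process making outbound requests to an LLM inference endpoint.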

Prompt manipulation: Threat actors are also adopting increasingly deceptive prompt engineering techniques to bypass AI safety filters. Chinese and Iranian state-affiliated groups have been observed posing as students in capture-the-flag (CTF) competitions or as academics writing cybersecurity papers. In one case, an Iranian actor accidentally leaked operational secrets—including an active command-and-control domain—by pasting live infrastructure code into Gemini for debugging help.

The underground market for generative-AI tooling is also maturing. According to GTIG, vendors now offer tiered AI-assisted malware services—complete with obfuscation-as-a-service, phishing kits, and access to generative APIs—mirroring SaaS business models. Some tools even advertise the ability to automate exploit research or generate deepfakes that circumvent know-your-customer (KYC) checks.

Google has responded by disabling abused assets, retraining classifiers, and hardening Gemini’s safety architecture. While still in its early stages, live LLM-in-the-loop tooling represents a systemic risk that traditional security controls are ill-equipped to address.

By CircleID Reporter

CircleID’s internal staff reporting on news tips and developing stories. Do you have information the professional Internet community should be aware of? Contact us.

