ChatGPT, the recently released language model, has quickly gained popularity and is being used for tasks ranging from automation to music composition. Alongside useful features such as fast, easy-to-follow code examples, it can also be used to create sophisticated malware that itself contains no obviously malicious code. According to new research, “ChatGPT could easily be used to create polymorphic malware.” This article explores how this power can be used for both good and bad purposes.
ChatGPT generates responses from a vast collection of training data gathered in 2021 and earlier. When asked for code, it produces modified or inferred code based on the parameters the user sets, rather than reproducing examples it has previously learned. However, ChatGPT has built-in content filters that prevent it from answering questions on potentially problematic subjects, such as code injection.
A recent study found that the content filter is not applied when the system is accessed through its API: “the API bypasses every content filter there is.” The reason for this is unclear, but the API route also makes tasks easier, since the web version can bog down on more complex requests. The study further found that the system can be fed pseudocode and asked to generate the corresponding shellcode.
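The difference between the two routes comes down to how the request reaches the model: the web interface sits behind OpenAI's front-end checks, while an API client posts a raw JSON request directly to the completions endpoint. The sketch below is purely illustrative and sends no request; it only constructs the payload shape used by the public OpenAI completions API of that era (the prompt text and parameter values are this article's assumptions, not from the study):

```python
import json

def build_completion_payload(prompt, model="text-davinci-003"):
    """Build the JSON body a client would POST to
    https://api.openai.com/v1/completions. Nothing is sent here;
    this only shows that an API caller hands the model a raw
    prompt, with no web-UI layer in between."""
    payload = {
        "model": model,        # completions-era model name
        "prompt": prompt,      # free-form text, passed through as-is
        "max_tokens": 256,     # illustrative limits
        "temperature": 0.7,
    }
    return json.dumps(payload)

body = build_completion_payload("Explain what a content filter does.")
```

An actual request would add an `Authorization: Bearer <API key>` header and POST this body; the point is simply that the filtering the researchers observed in the browser is not an inherent property of the model endpoint itself.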