Now You See Me, Now You Don't: Using LLMs to Obfuscate Malicious JavaScript
Dec. 20, 2024, 4:41 p.m.
Description
This article describes an adversarial machine learning algorithm that uses large language models (LLMs) to generate novel variants of malicious JavaScript at scale. The algorithm iteratively rewrites malicious code to evade detection while preserving its functionality, prompting the LLM to apply transformations such as variable renaming, dead code insertion, and whitespace removal. The technique significantly reduced detection rates on VirusTotal. To counter it, the researchers retrained their classifier on LLM-rewritten samples, improving real-world detection by 10%. The study highlights both the threats and the opportunities LLMs present in cybersecurity: they can be used to create evasive malware variants, but also to strengthen defensive capabilities.
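To make the loop concrete, below is a minimal, self-contained Python sketch of the iterative rewrite-and-check cycle the description summarizes. It is not the researchers' pipeline: the three transformation functions are deterministic stand-ins for rewrites that an LLM would perform, and toy_classifier is a hypothetical signature-style check standing in for their trained ML detector.

import random
import re

def rename_variables(js: str) -> str:
    # Stand-in for the "variable renaming" rewrite: maps each declared
    # identifier to a generic name. In the article, an LLM performs this.
    names = re.findall(r"\b(?:var|let|const)\s+([A-Za-z_$][\w$]*)", js)
    for i, name in enumerate(dict.fromkeys(names)):
        js = re.sub(rf"\b{re.escape(name)}\b", f"v{i}", js)
    return js

def insert_dead_code(js: str) -> str:
    # Stand-in for "dead code insertion": adds a branch that never executes.
    return "if (false) { console.log('unreachable'); }\n" + js

def remove_whitespace(js: str) -> str:
    # Stand-in for "whitespace removal": collapses all runs of whitespace.
    return re.sub(r"\s+", " ", js).strip()

TRANSFORMS = [rename_variables, insert_dead_code, remove_whitespace]

def toy_classifier(js: str) -> bool:
    # Hypothetical detector that flags a known identifier; the study's
    # detector is a trained malicious-JavaScript classifier.
    return "stealCookies" in js

def evade(js: str, max_rounds: int = 10) -> str:
    # Apply randomly chosen behavior-preserving rewrites until the
    # detector stops firing or the round budget is exhausted.
    for _ in range(max_rounds):
        if not toy_classifier(js):
            break
        js = random.choice(TRANSFORMS)(js)
    return js

sample = "var stealCookies = function () { return document.cookie; };"
print(evade(sample))

The defensive step the description mentions is the mirror image of this loop: samples it produces are labeled malicious and fed back into the classifier's training set, which is how the researchers hardened their detector against the rewrites.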
Date
Published: Dec. 20, 2024, 3:25 p.m.
Created: Dec. 20, 2024, 3:25 p.m.
Modified: Dec. 20, 2024, 4:41 p.m.
Indicators
4f1eb707f863265403152a7159f805b5557131c568353b48c013cad9ffb5ae5f
3f0b95f96a8f28631eb9ce6d0f40b47220b44f4892e171ede78ba78bd9e293ef
03d3e9c54028780d2ff15c654d7a7e70973453d2fae8bdeebf5d9dbb10ff2eab
http://jakang.freewebhostmost.com/korea/app.html
bafkreihpvn2wkpofobf4ctonbmzty24fr73fzf4jbyiydn3qvke55kywdi.ipfs.dweb.link
Attack Patterns
FraudGPT
WormGPT
T1027.001 (Obfuscated Files or Information: Binary Padding)
T1588.001 (Obtain Capabilities: Malware)
T1587.001 (Develop Capabilities: Malware)
T1588.002 (Obtain Capabilities: Tool)
T1059.007 (Command and Scripting Interpreter: JavaScript)
T1140 (Deobfuscate/Decode Files or Information)
T1027 (Obfuscated Files or Information)
Additional Information
Korea, Republic of