GenAI Security Technical Blog Series 6/6: Secure AI by Design - A Double-Edged Sword


 

 

This blog was written by: Jay Chen, Asher Davila Loranca, Brody Kutt, Haozhe Zhang, Yiheng An, Yu Fu, Qi Deng, Royce Lu

Technical editors: Nicole Nichols, Sam Kaplan, and Aryn Pedowitz

 

 

Executive Summary

 

The rapid evolution of Generative AI (GenAI) brings about a double-edged sword. While revolutionizing industries with its transformative potential, it simultaneously presents significant security challenges. Cybercriminals are increasingly leveraging GenAI, exploiting its capabilities to enhance their malicious activities. This rise in GenAI-powered attacks enables adversaries to streamline and automate each stage of the Cyber Kill Chain, from reconnaissance and phishing to code generation and malware deployment. The result? Attacks are becoming more effective, harder to detect, and occurring at an alarming rate, scale, and sophistication.

 

Defenders must proactively combat this growing threat by turning the tables and utilizing GenAI to their own advantage. This includes leveraging GenAI to expand their security coverage, accelerate threat detection and response times, and develop more robust, proactive defense mechanisms. A key advantage for defenders lies in their access to high-quality, domain-specific data. By effectively collecting, processing, and feeding this data to GenAI models, organizations can empower their defenses with precise, context-aware decision-making capabilities.

 

The cybersecurity landscape will continue its rapid evolution as both attackers and defenders harness the power of GenAI. To stay ahead of the curve, organizations must prioritize continuous innovation and adaptation of their security strategies. Solutions like those from Palo Alto Networks, which are powered by Precision AI, offer a proactive approach, leveraging AI to anticipate, detect, and mitigate threats more efficiently, thus maintaining a robust security posture against the escalating threat of AI-driven cyber attacks.


The rollout of our Secure AI by Design product portfolio has begun. To see how we can help secure AI applications, please see the Palo Alto Networks Can Help section below.

 

 

Introduction

 

This blog concludes our GenAI Security Framework series, shifting our focus from defense to offense. While previous installments explored securing GenAI models, applications, and infrastructure, this piece delves into the adversarial potential of GenAI, examining how malicious actors can exploit this technology.

 

As a refresher, our series laid out a comprehensive GenAI security framework covering models, applications, and infrastructure.

 

 

Building on this foundation, we now shift our focus to the attackers. In this blog, we will explore how GenAI can be weaponized to both accelerate traditional attacks and enable new forms of cyber threats. We will examine emerging attacks powered by GenAI and discuss how this technology can be abused to facilitate each stage of the Cyber Kill Chain.

 

Emerging Attacks Elevated by GenAI

 

Like any publicly available technology, GenAI can be harnessed for both positive and negative purposes. On the one hand, GenAI can effectively contextualize information, aiding in decision-making processes and generating beneficial content for society. On the other hand, it can also be exploited for malicious activities. GenAI has the potential to generate misinformation, craft phishing schemes, synthesize realistic but malicious images or videos, and even produce malicious code. Research has also shown that GenAI can be used to perform penetration testing, further demonstrating its capability to automate aspects of cyber offense.

 

In mid-2023, WormGPT began surfacing on various cybercriminal forums. WormGPT is a chatbot service similar to ChatGPT but without safety restrictions. Its creator claims WormGPT was specially fine-tuned to assist users in generating malicious content, including malware, hate speech, and misinformation. Researchers have successfully used it to create convincing phishing texts for business email compromise (BEC). Following WormGPT, Evil-GPT and Fraud-GPT emerged on the Dark Web, both claiming to be superior versions of WormGPT.

 

Microsoft and OpenAI revealed that multiple nation-state cybercriminal groups, including Forest Blizzard (Russia), Emerald Sleet (North Korea), Crimson Sandstorm (Iran), Charcoal Typhoon (China), and Salmon Typhoon (China), attempted to use OpenAI services for malicious purposes throughout 2023. These threat actors utilized LLMs for tasks such as understanding satellite technologies, crafting phishing content, and writing scripts.

 

In early 2024, CNN reported that a finance worker at a multinational firm was tricked into transferring $25 million to scammers who used a type of GenAI technology called deepfakes to impersonate the company’s CFO during a video call.

 

A plethora of academic research has also demonstrated the possibility of using GenAI to carry out cyber attacks. Researchers from UIUC showed that LLM agents can autonomously identify and exploit vulnerabilities in a website without prior knowledge or human intervention. The same group later developed a multi-agent framework in which teams of LLM agents successfully exploited zero-day and one-day vulnerabilities in real-world systems. Researchers from Nanyang Technological University developed an LLM-powered penetration testing framework, PENTESTGPT, which successfully completed various Capture The Flag (CTF) challenges. Additionally, Meta's CyberSecEval 2 benchmark demonstrates that recent LLMs, such as GPT-4, are more effective at exploiting vulnerabilities when granted access to Python and other code interpreters to execute the code they generate in response to a prompt.

 

The speed and scale of LLM-powered cyber attacks will continue to rise. To bolster our IT infrastructure against these intensifying attacks, it is imperative to understand how threat actors could leverage GenAI to empower their arsenals. In the next section, we outline how threat actors may utilize GenAI to facilitate cyber attacks.

 

The Cyber Kill Chain Supercharged by GenAI

 

The Cyber Kill Chain is a framework developed by Lockheed Martin to delineate the various stages of a cyber attack. These stages include Reconnaissance, Weaponization, Delivery, Exploitation, Installation, Command and Control (C2), and Actions on Objectives.

 

This framework aids organizations in understanding and mitigating cyber threats by breaking down the attack process into distinct, manageable phases. By mapping attacks to these stages, security teams can more effectively prevent, detect, and respond to potential breaches.
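To illustrate how a defender might operationalize this stage-by-stage view, the sketch below groups security alerts by kill-chain stage so they can be triaged by how far an intrusion has progressed. All alert types and the mapping itself are hypothetical, illustrative examples, not part of any product:

```python
from enum import Enum


class KillChainStage(Enum):
    """The seven stages of the Lockheed Martin Cyber Kill Chain."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7


# Hypothetical mapping from alert types to kill-chain stages.
ALERT_STAGE_MAP = {
    "port_scan": KillChainStage.RECONNAISSANCE,
    "phishing_email": KillChainStage.DELIVERY,
    "new_persistence_key": KillChainStage.INSTALLATION,
    "beacon_traffic": KillChainStage.COMMAND_AND_CONTROL,
    "bulk_data_upload": KillChainStage.ACTIONS_ON_OBJECTIVES,
}


def triage(alerts):
    """Group alerts by kill-chain stage, earliest stage first.

    Alerts with unrecognized types are dropped rather than guessed at.
    """
    by_stage = {}
    for alert in alerts:
        stage = ALERT_STAGE_MAP.get(alert["type"])
        if stage is not None:
            by_stage.setdefault(stage, []).append(alert)
    return dict(sorted(by_stage.items(), key=lambda kv: kv[0].value))
```

Grouping alerts this way lets a security team see at a glance whether an attacker is still probing the perimeter or has already established command and control, which changes the urgency of the response.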

 

Given the rapidly advancing capabilities of GenAI and the real threats it has already posed, we envision that GenAI could one day act as the “mastermind” that plans, directs, and executes a cyber attack. The subsections below dive into each stage of the attack chain to illustrate how GenAI can facilitate each stage of the operation. We have not directly addressed the "Exploitation" stage, principally because successful exploitation depends on the execution of the planning and preparation stages, which are covered extensively below.

 

Reconnaissance

Reconnaissance is the initial phase where the attacker gathers information about the target. It involves researching the organization to identify potential entry points, vulnerabilities, and valuable data. Common techniques used in this stage include scanning networks, harvesting employee names and emails, and collecting open-source intelligence. The goal is to understand the target's defenses and weaknesses to plan the attack.

 

GenAI's Impact: GenAI can automate and enhance reconnaissance through:

 

  • Automated Information Gathering: GenAI can crawl the web, social media platforms, and even dark web forums to collect information about targets. This includes identifying employees, technologies used, organizational structure, and potential vulnerabilities based on publicly available data.
  • Vulnerability Discovery: GenAI can analyze the target’s public source code repositories and other online assets to look for known and unknown vulnerabilities. It can also be used to identify misconfigurations and security gaps in publicly exposed systems.
  • Content Analysis: GenAI can analyze public documentation, configuration files, and user forums of commercial products for clues about potential misconfiguration present in a target of interest.
  • Behavioral Analysis for Profiling: GenAI can profile the online behavior and interactions of key personnel to predict their susceptibility to phishing or other forms of social engineering attacks.

 

Weaponization

During the weaponization stage, attackers leverage the information gathered to create a tailored exploit. This could involve crafting a phishing email with a malicious attachment, building a specific malware payload, or exploiting a known software vulnerability.

 

GenAI's Impact: GenAI can make weaponization faster and more effective:

 

  • Tailored Malware Generation: GenAI can quickly generate a large number of unique malware strains, rendering signature-based detection methods ineffective. This includes the creation of polymorphic malware, which alters its code with each iteration to evade detection. Additionally, GenAI can assist in obfuscating malware code, making it more challenging for security tools to analyze and detect malicious code.
  • Accelerated Exploit Development: GenAI can assist in turning a discovered vulnerability into a working exploit, as demonstrated in research. This accelerates the weaponization process, making it challenging for defenders to patch vulnerabilities before they're exploited.
  • Automated Attack Orchestration: GenAI can help script the coordination of multi-stage attacks, automating the deployment of various components such as exploit payloads or data exfiltration mechanisms.
  • Stealth Techniques: GenAI can generate scripts for simple anti-forensic techniques, such as log file manipulation to cover tracks, or basic encryption to secure exfiltrated data.

 

Delivery

The delivery stage is where the attacker transmits the weaponized payload to the target. Common delivery methods include phishing emails, malicious websites, or infected USB drives. This phase represents the point at which the attacker enters the target’s environment.

 

GenAI's Impact: GenAI can make delivery methods more convincing and harder to detect:

 

  • Convincing Phishing Lures: GenAI can craft highly believable phishing emails that appear to come from trusted sources, enticing users to click on malicious links or open infected attachments. This includes tailoring emails to individual targets based on their online profiles and activities. Additionally, GenAI can mimic the writing style and tone of specific individuals or organizations, enhancing the credibility of phishing emails and reducing the likelihood of suspicion.
  • Synthetic Content Generation: GenAI can create realistic-looking websites, social media profiles, and news articles that host malware or redirect users to malicious sites, making it harder to discern legitimate content from malicious ones. Additionally, GenAI can produce convincing audio or video messages that appear to come from trusted sources such as industry leaders, politicians, or influencers, instructing targets to download malicious files or visit harmful websites.
  • Interactive Phishing and Automated Chatbots: GenAI can create phishing emails that engage the target in a conversation, increasing the likelihood that the target eventually downloads a malicious attachment or visits a malicious website. Alternatively, GenAI can deploy bots on social media platforms or chatbots that impersonate customer service representatives, engaging targets in conversations that lead them to malicious links or downloads.

 

Installation

In the installation phase, the attacker establishes a foothold in the target’s environment by installing malware. This often involves installing backdoors, creating new user accounts, or modifying system settings to maintain access. The malware is typically designed to remain undetected while providing ongoing control over the compromised system.

 

GenAI's Impact: GenAI can create stealthier and more persistent malware:

 

  • Stealthy Backdoor Creation: GenAI can create backdoors that are difficult to detect and remove. This could include generating code that blends in with legitimate system processes or using steganographic techniques to hide malicious code within seemingly harmless files.
  • Malware with Evasion Capabilities: GenAI can create malware with anti-forensic capabilities, such as log wiping, timestamp modification, and stealth techniques to avoid detection by security tools. This makes it harder for defenders to analyze and respond to an attack.
  • Evasion Techniques: GenAI can automate basic evasion techniques, such as altering file timestamps or hiding files in non-standard directories to evade detection by file-based security measures. Moreover, GenAI can automate sophisticated evasion methods, such as utilizing machine learning algorithms to dynamically adjust file attributes and behaviors based on real-time detection feedback.

 

Command & Control

With persistence established during the installation stage, attackers need to set up a channel for remote communication with the compromised system. This allows them to send commands, exfiltrate data, and potentially move laterally within the network.

 

GenAI's Impact: GenAI can make command and control (C2) communication more resilient and harder to detect:

 

  • Sophisticated Communication Channels: GenAI can create custom dynamic communication protocols that are difficult for traditional network monitoring tools to detect. These protocols can continuously evolve to avoid detection. AI can also embed C2 data within seemingly benign files or network traffic, such as images, videos, or social media posts, making detection by traditional security tools more challenging.
  • Adaptive C2 Infrastructure: GenAI may create convincing decoy C2 servers and traffic patterns, confusing and misdirecting defensive efforts. By managing the continuous creation and rotation of new C2 servers, domains, and IP addresses, GenAI can make it significantly more challenging for defenders to execute takedowns. Furthermore, GenAI systems can autonomously identify and exploit vulnerable systems, dynamically expanding the C2 infrastructure as needed. These techniques collectively enhance the complexity of detecting and mitigating C2 infrastructure, posing a substantial challenge to cybersecurity defenses.
  • Traffic Patterning: GenAI can disguise malicious traffic by mimicking legitimate user behavior or blending in with normal network activity, reducing the likelihood of anomalous detection by intrusion detection systems (IDS).
  • Command Automation: GenAI can automate the creation of scripts for executing commands on compromised systems. This can include creating new user accounts with administrative privileges, altering system settings, or manipulating files to achieve specific objectives.

 

Actions on Objectives

The final stage involves the attacker taking action to achieve their ultimate goals, which could involve data theft, system destruction, or further network compromise. In this phase, the attacker extracts valuable information, disrupts operations, or conducts fraudulent transactions.

 

GenAI's Impact: GenAI can automate and optimize the attacker's objectives:

 

  • Data Exfiltration: GenAI can quickly identify and extract valuable data from compromised systems by searching for specific patterns, keywords, or data formats within large datasets. For example, it can locate and steal intellectual property, personal data, or confidential business information.
  • System Disruption or Destruction: GenAI can generate scripts and payloads to disrupt or destroy the compromised systems. These scripts can target specific components, such as databases or operating systems, to maximize damage.
  • Data Analysis and Reporting: GenAI can generate reports or summaries based on data collected during the reconnaissance and exploitation phases, providing insights for the attackers' further decision-making.

 

Conclusion

 

The emergence of GenAI in the cybersecurity landscape has ushered in a new era of potential threats, significantly lowering the entry barrier for cyber attacks and accelerating every stage of the cyber kill chain. From reconnaissance to command and control, GenAI's capabilities empower attackers to operate at unprecedented scales, potentially targeting thousands of victims simultaneously. As these AI-powered attacks grow more adaptive and sophisticated, evolving in tandem with GenAI's advancing capabilities, the cybersecurity community faces an urgent need to bolster its defenses.

 

GenAI technologies present substantial challenges to traditional cybersecurity measures. Attackers can leverage AI to rapidly adapt their tactics, evade detection through advanced obfuscation techniques, and exploit vulnerabilities across different environments. This adaptability makes conventional security approaches increasingly inadequate.

 

To counter this escalating threat landscape, it is crucial to employ AI for defensive purposes. The concept of "fighting AI with AI" has never been more pertinent, as organizations strive to outsmart adversaries with more intelligent and precise AI-driven solutions. By harnessing AI defensively, organizations can improve their ability to detect, prevent, and mitigate emerging threats in real time.
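As a minimal, purely illustrative sketch of the defensive principle behind "fighting AI with AI," the example below flags hosts whose outbound traffic deviates sharply from a fleet baseline. It uses simple z-score statistics rather than a production model, and every value and threshold is hypothetical; real AI-driven detection learns far richer baselines across many features, but the per-entity baselining idea is the same:

```python
from statistics import mean, stdev


def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Flag hosts whose outbound byte counts sit more than z_threshold
    standard deviations above the fleet baseline.

    baseline: list of typical per-host outbound byte counts
    observed: dict mapping host name -> current outbound byte count
    Returns a list of (host, z_score) pairs for anomalous hosts.
    """
    mu = mean(baseline)
    sigma = stdev(baseline)
    flagged = []
    for host, value in observed.items():
        # Guard against a zero-variance baseline.
        z = (value - mu) / sigma if sigma else 0.0
        if z > z_threshold:
            flagged.append((host, round(z, 2)))
    return flagged
```

For example, a host suddenly uploading fifty times its usual volume would score far above the threshold and surface immediately, which is the kind of signal that can expose data exfiltration during the Actions on Objectives stage.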

 

For instance, Palo Alto Networks' proprietary AI system, Precision AI, combines machine learning, deep learning, GenAI, and other AI techniques to predict and block attacks. It draws on extensive security datasets collected from cloud, endpoint, and network sources. Specifically, Palo Alto Networks operates a series of AI-based detection engines within its Advanced Threat Prevention (ATP) cloud to analyze traffic for advanced C2 and malware threats in real time, providing protection against zero-day threats. AI-powered ATP can stop 22% more unknown malware in real time than traditional methods. Additionally, the cloud-delivered WildFire malware analysis service leverages data and threat intelligence from the industry’s largest global community, applying advanced AI analysis to automatically identify unknown threats and halt attackers. Palo Alto Networks’ Advanced URL Filtering uses inline cloud-based deep learning detectors to assess suspicious web content in real time, protecting users from zero-day threats by detecting evasion techniques, targeted attacks, and new, unknown web-based threats.

 

The multifaceted AI-powered defense strategy represents a crucial step in staying ahead of the evolving threat landscape shaped by GenAI. By leveraging the power of AI, we can enhance our cybersecurity defenses and better protect against the sophisticated threats posed by AI-powered attackers.

 

Palo Alto Networks Can Help

 

The rollout of our Secure AI by Design product portfolio has begun. 

 

We can help you solve the problem of protecting your GenAI infrastructure with AI Runtime Security, which is available today. AI Runtime Security is an adaptive, purpose-built solution that discovers, protects, and defends all enterprise applications, models, and data from AI-specific and foundational network threats.

 

AI Access Security secures your company’s GenAI use and empowers your business to capitalize on its benefits without compromise. 

 

Prisma® Cloud AI Security Posture Management (AI-SPM) protects and controls AI infrastructure, usage, and data. It maximizes the transformative benefits of AI and LLMs without putting your organization at risk. It also gives you visibility and control over the three critical components of your AI security: the data you use for training or inference, the integrity of your AI models, and access to your deployed models.

 

These solutions will help enterprises navigate the complexities of Generative AI with confidence and security.

 

 
