
GenAI Security Technical Blog Series 4/6: Secure AI by Design - Understanding GenAI App Infrastructure


This blog was written by Haozhe Zhang, Brody Kutt, Yiheng An, Yu Fu, Qi Deng, Royce Lu, and Scott Emo.

The rollout of our Secure AI by Design product portfolio has begun. If you want to see how we can help secure AI applications, please see the Palo Alto Networks Can Help section below.

Introduction


Generative AI (GenAI) is redefining possibilities across diverse industries, including education, entertainment, marketing, legal, and healthcare. While these sectors already face varied security risks, GenAI introduces novel challenges and compounds existing complexities. With the right AI security portfolio, you should be able to secure AI by design rather than patch it after the fact.


To secure AI apps, we need to understand that the infrastructure underpinning GenAI differs from other modern architectures in three key ways:

 

  1. It requires extensive computational power and specialized hardware for model training and inference. 
  2. It necessitates robust data management practices to handle vast amounts of diverse training data securely. 
  3. It involves sophisticated model deployment strategies to ensure scalability and real-time responsiveness.   

 

In this context, four infrastructure security concerns are critical to address: Access Management, Insecure Plugins, Supply Chain Attacks, and Model Denial-of-Service. In the following segments, we will discuss each in its traditional context and highlight the unique needs of GenAI systems.

 

Access Management

 

In modern system architectures, access management issues are typically among the first concerns security engineers want to address. This is because the aftermath of an access management failure can be unpredictable and depends on which components are included in the system architecture. When there is a database or filesystem, improper access management may lead to information leaks. When there are command dispatchers and task executors, improper access management may lead to malicious command execution. As we will discuss, these concerns also apply to GenAI system infrastructure.

 

Exploring the Concept and Challenges

 

Typical GenAI system infrastructure contains web services, storage (including filesystems, databases, and caches), network services (such as load balancers, proxies, and API gateways), and computational resources for processing and generating responses. These components work together cohesively to generate each image or word.

 

As data flows throughout the system, inadequately managed access to any of these components could expose sensitive data or allow unauthorized users to manipulate the system's behavior, leading to severe security breaches or system misuse. For example, unauthorized access to the database could result in the leakage of confidential training data, while compromised network services might allow attackers to intercept or alter data in transit. Additionally, insufficient access controls on computational resources could enable malicious actors to deploy malware or backdoors, severely impacting the system's security and reliability.

 

Strategies And Mitigation

 

Access management issues can be addressed or mitigated with appropriate security frameworks, such as legacy access control models or a Zero Trust security framework. In GenAI systems, these security models and the Zero Trust framework should be tailored to enforce strict verification for every access request and to limit access to only what is necessary. This approach enhances security by closely monitoring who gets in and what they can do, effectively reducing the risk of breaches.

 

Legacy Models

In traditional access management frameworks, several legacy access control models provide options for securing systems and data. These techniques still prove useful in GenAI applications: they have stood the test of time, though they are not a silver bullet. As AI system technology continues to progress, these traditional models may need to give way to newer frameworks like the Zero Trust model, but they remain valuable mechanisms for regulating access control (a short illustrative sketch follows the list below).

 

  • Discretionary Access Control (DAC): DAC allows resource owners to grant or restrict access to others at their own discretion. Placing decision-making in the hands of those who own or manage resources offers a flexible approach well suited to dynamic environments such as GenAI systems. However, the reliance on individual discretion raises concerns about potential security vulnerabilities, particularly if the decision-makers lack comprehensive security awareness. When ACL maintainers are not adequately informed about security practices, significant breaches and other negative consequences can follow.
  • Mandatory Access Control (MAC): MAC is defined by its adherence to stringent access policies enforced on predefined classifications of data and user credentials. This model excels in environments where security is paramount, effectively preventing unauthorized access through its rigid policy framework. Nevertheless, the inflexibility of MAC can pose challenges to the adaptive and collaborative needs inherent in GenAI systems.
  • Role-Based Access Control (RBAC): RBAC simplifies permission management by assigning access rights according to user roles within an organization. This model is particularly effective in large-scale GenAI projects, where it streamlines the administration of access rights. However, its reliance on static roles may limit its utility in scenarios requiring more nuanced access decisions.
  • Attribute-Based Access Control (ABAC): ABAC introduces a highly adaptable framework by evaluating a combination of user, system, and contextual attributes to make access decisions. This granularity makes ABAC well suited to the complexities of GenAI systems, where access requirements can evolve rapidly. The trade-off, however, involves more intricate policy management and the potential for increased system overhead.
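To make the contrast between these models concrete, below is a minimal, illustrative Python sketch of an RBAC check alongside an ABAC-style policy. The role names, attributes, resources, and thresholds are hypothetical examples, not part of any specific product.

```python
# Minimal, illustrative sketch of RBAC vs. ABAC checks.
# Roles, attributes, and resources below are hypothetical examples.

# --- RBAC: permissions are derived solely from the user's role ---
ROLE_PERMISSIONS = {
    "data_scientist": {"read:training_data", "run:inference"},
    "ml_engineer": {"read:training_data", "run:inference", "deploy:model"},
    "viewer": {"run:inference"},
}

def rbac_allowed(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())

# --- ABAC: the decision also considers user, resource, and context attributes ---
def abac_allowed(user: dict, resource: dict, context: dict) -> bool:
    # Example policy: only users on managed devices, during business hours,
    # may read datasets whose sensitivity does not exceed their clearance.
    return (
        context.get("device_managed", False)
        and 9 <= context.get("hour", 0) < 18
        and user.get("clearance", 0) >= resource.get("sensitivity", 99)
    )

if __name__ == "__main__":
    print(rbac_allowed("viewer", "deploy:model"))  # False: role lacks the permission
    print(abac_allowed(
        {"clearance": 3},
        {"sensitivity": 2},
        {"device_managed": True, "hour": 10},
    ))  # True: all attribute conditions are met
```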

 

Zero Trust

Zero Trust is a practical security framework that addresses access management challenges in GenAI systems. Its core philosophy assumes that no entity should be automatically trusted, regardless of whether it is inside or outside the network perimeter. This philosophy is especially relevant in GenAI environments, where the dynamic nature of AI operations and data flows requires a more adaptive and vigilant approach to security.

 

A properly implemented Zero Trust Architecture (ZTA) usually applies the principles below to ensure better access management (a minimal policy-check sketch follows the list):

 

  • Principle of Least Privilege: ZTA enforces strict access controls and minimal privileges, ensuring that GenAI systems grant access only to verified and authorized entities.
  • Continuous Verification: ZTA requires continuous monitoring and verification of all entities, which can help GenAI systems dynamically assess and authenticate access requests.
  • Micro-Segmentation: By segmenting access to resources, ZTA can limit lateral movement within GenAI systems, reducing the risk of unauthorized access.
  • Security Posture Assessment: Building on continuous verification, GenAI systems can constantly assess the security posture of their assets. This involves real-time analysis of user behaviors, device integrity, network traffic, and access patterns to identify anomalies or deviations that might indicate a security risk.
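As a rough illustration of how these principles might combine at a policy enforcement point, the sketch below evaluates every request against identity, device posture, and least-privilege scope before granting access, and logs each decision to support continuous verification. All names, scopes, and stubs are hypothetical; a real deployment would integrate an identity provider, a device-posture service, and a SIEM.

```python
# Illustrative Zero Trust-style policy check for a GenAI API gateway.
# Identities, scopes, and in-memory stubs are hypothetical examples.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("zta")

# Least privilege: each identity gets only the scopes it needs.
GRANTED_SCOPES = {
    "svc-inference": {"model:query"},
    "svc-pipeline": {"model:query", "data:read"},
}

def evaluate_request(identity: str, scope: str, device_trusted: bool) -> bool:
    """Never trust by default: verify identity, posture, and scope on every call."""
    allowed = (
        identity in GRANTED_SCOPES             # known, verified identity
        and device_trusted                     # device posture check
        and scope in GRANTED_SCOPES[identity]  # least-privilege scope
    )
    # Continuous verification: every decision is logged for posture assessment.
    log.info("identity=%s scope=%s device_trusted=%s allowed=%s",
             identity, scope, device_trusted, allowed)
    return allowed

if __name__ == "__main__":
    evaluate_request("svc-inference", "data:read", device_trusted=True)  # denied
    evaluate_request("svc-pipeline", "data:read", device_trusted=True)   # allowed
```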

 

Insecure Plugins

 

Insecure plugins are another critical concern that must be addressed for the infrastructure security of GenAI systems. Many people have access to GenAI systems such as ChatGPT and Bard but are unsure how to get the most out of them; plugins and extensions make this easy. Plugins are being created in growing numbers and play an increasingly important role in the GenAI ecosystem. However, each plugin comes with a varying level of security, and the attack surface of a GenAI system widens with each new plugin.

 

Risks Associated With Plugins and Extensions

 

On one hand, plugins and extensions play an important role in GenAI ecosystems by extending GenAI's capabilities. For example, a ChatGPT plugin released by the travel technology company Expedia uses LLMs to build travel plans in an innovative way. On the other hand, the integration of insecure plugins into GenAI systems introduces additional risks, with outcomes ranging from information disclosure to remote code execution.

 

One example relates to a security flaw that previously existed in the Google Workspace extension of Google Bard. Researchers managed to exploit the flaw and exfiltrate data. In two blogs, the researchers demonstrated how the extension could be used to perform unauthorized actions; specifically, they showed how it could read sensitive information from the user's system and send it to an untrusted location. This underscores the potential risks such plugins pose to users, since they can serve as vectors for unauthorized data access and transmission.

 

In another example, researchers from Salt Security conducted a series of investigations into ChatGPT's plugins and identified several vulnerabilities, including the potential for malicious plugin installation and account takeover. Such plugins can act as attack vectors, potentially exposing user data or compromising user accounts.

 

In addition, researchers from Washington University in St. Louis and the University of Washington shared research on an evaluation framework for OpenAI's ChatGPT plugins. In their paper, they put forward a methodology and framework for the security evaluation of LLM plugin ecosystems and analyzed several attack surfaces with real-world test cases.

 

Mitigation Strategies

 

For plugin developers and maintainers:

 

  • Development Best Practices: Apply a shift-left methodology to ensure a secure start to the software lifecycle for plugins and extensions. This means integrating security practices early in the development process, including threat modeling, secure coding, and automated testing from the initial stages of design and implementation (see the validation sketch after this list).
  • Proactive Vulnerability Scanning: Use automated scanning tools and penetration testing to discover security flaws before plugins enter the production release stage.
  • Incident Response: Maintain an efficient incident response process to cover unexpected security problems.
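As one small example of what shift-left secure coding can look like in practice, the sketch below validates a plugin request against an explicit allowlist and schema before anything is passed to downstream systems. The action names, hosts, field names, and limits are hypothetical placeholders.

```python
# Illustrative input validation for a hypothetical GenAI plugin endpoint.
# Action names, allowlisted hosts, fields, and limits are examples only.
from urllib.parse import urlparse

ALLOWED_ACTIONS = {"search_flights", "search_hotels"}
ALLOWED_HOSTS = {"api.example-travel.com"}   # hypothetical upstream API
MAX_QUERY_LEN = 256

class ValidationError(ValueError):
    pass

def validate_plugin_request(payload: dict) -> dict:
    """Reject anything outside the explicit allowlist before it reaches downstream calls."""
    action = payload.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValidationError(f"unsupported action: {action!r}")

    query = payload.get("query", "")
    if not isinstance(query, str) or not (0 < len(query) <= MAX_QUERY_LEN):
        raise ValidationError("query must be a non-empty string within the length limit")

    callback = payload.get("callback_url")
    if callback is not None and urlparse(callback).hostname not in ALLOWED_HOSTS:
        raise ValidationError("callback_url host is not allowlisted")

    return {"action": action, "query": query, "callback_url": callback}
```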

 

For GenAI system maintainers:

 

  • Plugin Audit: Before a plugin is integrated into the GenAI system, a thorough security audit must be performed.
  • Robust and Secure Architecture: A well-designed architecture with proper isolation mechanisms helps mitigate or limit the damage caused by insecure plugins.
  • Monitoring: A monitoring system for abnormal behavior is needed so that, when an incident happens, it can be detected and acted on immediately (see the isolation-and-monitoring sketch after this list).
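The sketch below illustrates one way a GenAI system maintainer might wrap plugin calls: outbound destinations are restricted to an allowlist, calls are time-boxed, and every invocation is logged so abnormal behavior can be flagged. The plugin interface, allowlist, and thresholds are hypothetical.

```python
# Illustrative isolation-and-monitoring wrapper around plugin calls.
# Allowlist, timeout, and the plugin call interface are hypothetical.
import logging
import time
from urllib.parse import urlparse

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("plugin-guard")

OUTBOUND_ALLOWLIST = {"api.example-plugin.com"}
CALL_TIMEOUT_SECONDS = 5.0

def guarded_plugin_call(plugin_name: str, url: str, call_fn):
    """Run a plugin call only against allowlisted hosts and record how it behaved."""
    host = urlparse(url).hostname
    if host not in OUTBOUND_ALLOWLIST:
        log.warning("blocked %s: host %s not allowlisted", plugin_name, host)
        raise PermissionError(f"{plugin_name} may not reach {host}")

    start = time.monotonic()
    result = call_fn(url, timeout=CALL_TIMEOUT_SECONDS)
    elapsed = time.monotonic() - start

    # Monitoring hook: slow or unusual calls are flagged for review.
    if elapsed > CALL_TIMEOUT_SECONDS * 0.8:
        log.warning("%s slow call (%.2fs) to %s", plugin_name, elapsed, url)
    log.info("%s called %s in %.2fs", plugin_name, url, elapsed)
    return result
```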

 

Supply Chain Attacks

 

GenAI systems usually feature complex components, libraries, and software dependencies. This intricate web of dependencies introduces a significant risk in the form of supply chain attacks.

 

Potential Impact on GenAI Systems

 

Supply chain attacks on legacy infrastructures usually target familiar components like database management systems and network services. GenAI systems, while also often suffering from security flaws in those components, can be compromised in additional, less familiar ways. Below are some common examples.

 

  1. Compromised Software Dependencies: When an external library or package is compromised, it acts as a malicious vector that exposes the entire system to security risks. For example, in December 2022, a compromised dependency in the PyTorch-nightly package led to a significant supply chain attack: the compromised package collected and uploaded sensitive information from affected systems, demonstrating the heightened risk and complexity associated with software dependencies in GenAI systems.
  2. Compromised Infrastructure Components: Vulnerabilities in foundational systems like databases or web servers can be exploited, leading to widespread system breaches. For example, compromised computation resources can be hijacked for unauthorized tasks, impacting performance and costs, and a breach could expose training or user data, potentially violating privacy regulations and compromising intellectual property. Securing all underlying infrastructure components is crucial to prevent cascading failures and ensure system integrity.
  3. Poisoned Datasets: Adversaries can inject malicious data inside training sets to negatively affect the output of models. This is particularly concerning for GenAI systems that continuously learn from new data, as poisoned inputs could gradually alter the model's behavior over time. The effects might range from subtle biases to more overt issues like generating harmful content or making incorrect decisions in critical applications.
  4. Poisoned Models: Malicious alterations inside GenAI models themselves can result in biased outcomes or create backdoors for further attacks. These compromised models might appear to function normally in most cases, making detection challenging. The implications are far-reaching, potentially affecting decision-making processes in various sectors, from finance to healthcare, and eroding trust in AI systems overall.

 

Detailed further in this blog, ChatGPT was once impacted by a security issue in a third-party library that led to an information disclosure vulnerability. In March 2023, the vulnerability was disclosed: it allowed users to access other users' messages, a case of horizontal privilege escalation. The root cause was improper implementation logic in redis-py, a third-party library that serves as a connector between Python programs and Redis servers.

 

Mitigation Strategies

 

Below are some helpful strategies to safeguard GenAI systems against supply chain security concerns:

 

  1. Dependency Management: Proactive dependency management is an effective way to reduce the risks associated with vulnerabilities in third-party libraries and packages. It helps keep untrusted and malicious components out of the system's supply chain.
  2. Data and Model Integrity Assurance: To defend against poisoned datasets and models, integrity assurance should be deployed to ensure they are not maliciously altered. This includes employing techniques such as cryptographic hashing and digital signatures to verify the integrity of data and model files (a minimal hash-verification sketch follows this list). Additionally, adopting secure data pipelines, continuous monitoring, and anomaly detection can help identify and mitigate attempts at data poisoning or model tampering. These practices are critical in GenAI systems to maintain trustworthiness, reliability, and security, ensuring the models produce accurate and unbiased outputs.
  3. Secure Development Lifecycle Integration: A secure development lifecycle for GenAI systems makes security an integral part of the development process and helps identify and remediate vulnerabilities early. A secure development pipeline helps the system exclude known vulnerable components and reduces the risk of introducing new vulnerabilities.
  4. Incident Response: Develop a clear and actionable incident response plan that suits the GenAI system. When a supply chain attack happens, a rapid response can help minimize the impact of the breach.
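For the integrity-assurance point above, a common baseline is to verify every model or dataset artifact against a trusted manifest of hashes before it is loaded. Below is a minimal sketch using SHA-256; the manifest format, file names, and hash values are hypothetical examples.

```python
# Minimal artifact integrity check: compare SHA-256 hashes of model/dataset
# files against a trusted manifest before loading them. File names, paths,
# and hash values are hypothetical placeholders.
import hashlib
from pathlib import Path

TRUSTED_MANIFEST = {
    "models/summarizer.safetensors": "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
    "data/train.jsonl": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(base_dir: Path) -> None:
    """Raise if any artifact's hash does not match the trusted manifest."""
    for rel_path, expected in TRUSTED_MANIFEST.items():
        actual = sha256_of(base_dir / rel_path)
        if actual != expected:
            raise RuntimeError(f"integrity check failed for {rel_path}")

# verify_artifacts(Path("/opt/genai"))  # call before loading models or data
```

In practice the manifest itself should be protected, for example by distributing it over a trusted channel or signing it, so an attacker cannot simply replace both the artifact and its expected hash.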

 

Model Denial-of-Service

 

Denial-of-Service attacks occur in different contexts and systems and are conducted through various methods of exploitation. The basic idea behind these methods is similar: direct enough noise or traffic at particular parts of a network to freeze or crash its operation, making certain resources, such as websites, services, or applications, unavailable to users.

 

In the context of GenAI systems, which include various AI models, Model Denial-of-Service refers to attacks that cause large language models (LLMs) to consume so many resources that the performance of the model service degrades or its cost increases.

 

Examples of Model Denial-of-Service

 

Denial-of-Service can occur when context expansion consumes excessive computational or network resources.

 

Security researchers at Dropbox published a blog discussing their research on OpenAI's GPT models. They issued expensive repeat requests by crafting prompts that asked GPT to repeat certain phrases forever and, as a consequence, observed abnormally long response times and large volumes of output. The researchers concluded that denial-of-service is possible because it is easy to find short prompts that generate a full context window of output.

 

Beyond DoS caused by excessive resource consumption, glitch tokens can also be used to conduct DoS when they are processed by GenAI models.

 

In this article, researchers discovered a set of anomalous tokens that trigger a previously undocumented failure mode in GPT-2 and GPT-3 models. When used in prompts, these tokens lead to unusual and often bizarre completions that contradict the models' intended functions. Based on these findings, we can assume that if such glitch tokens are inserted into a model's source data, then once a RAG system's vector database is updated, the tokens will be added to the information store and may cause unexpected LLM behavior.

 

Mitigation Strategies

 

Mitigating Model Denial-of-Service is not easy because of the variety of unique exploitation techniques that exist. Below are some effective responses to model DoS attacks (a rate-limiting and prompt-validation sketch follows the list).

 

  • Implement Anomaly Detection: Use anomaly detection to monitor for unusual behavior that could indicate a DoS attack, enabling early intervention.
  • Rate Limiting: Apply rate limiting on API calls to prevent any single user from monopolizing resources, ensuring fair resource allocation.
  • Input Validation: Validate and sanitize inputs to AI models to block known problematic tokens or patterns that can lead to resource exhaustion.
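As a rough illustration of the last two points, the sketch below combines a simple per-user sliding-window rate limiter with basic prompt checks: a length cap and a denylist of known problematic tokens. The limits and denylist entries are hypothetical placeholders, not a vetted list of real glitch tokens.

```python
# Illustrative per-user rate limiting plus prompt validation for a model API.
# Limits and denylist entries are hypothetical placeholders.
import time
from collections import defaultdict

MAX_PROMPT_CHARS = 4000
DENYLISTED_TOKENS = {"<example_glitch_token>"}   # placeholder entries only

REQUESTS_PER_MINUTE = 30
_buckets: dict[str, list[float]] = defaultdict(list)

def allow_request(user_id: str, now: float | None = None) -> bool:
    """Sliding window: at most REQUESTS_PER_MINUTE calls per user per 60 seconds."""
    now = time.monotonic() if now is None else now
    window = [t for t in _buckets[user_id] if now - t < 60.0]
    if len(window) >= REQUESTS_PER_MINUTE:
        _buckets[user_id] = window
        return False
    window.append(now)
    _buckets[user_id] = window
    return True

def validate_prompt(prompt: str) -> None:
    """Reject oversized prompts and prompts containing denylisted tokens."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds the maximum allowed length")
    if any(tok in prompt for tok in DENYLISTED_TOKENS):
        raise ValueError("prompt contains a denylisted token")
```

In a real deployment, these checks would sit in front of the model service alongside output-length caps and anomaly detection on response times and token counts.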

 

Palo Alto Networks Can Help

 

The rollout of our Secure AI by Design product portfolio has begun. 

 

We can help you solve the problem of protecting your GenAI infrastructure with AI Runtime Security, which is available today. AI Runtime Security is an adaptive, purpose-built solution that discovers, protects, and defends all enterprise applications, models, and data from AI-specific and foundational network threats.

 

AI Access Security secures your company’s GenAI use and empowers your business to capitalize on its benefits without compromise. 

 

Prisma® Cloud AI Security Posture Management (AI-SPM) protects and controls AI infrastructure, usage and data. It maximizes the transformative benefits of AI and large language models without putting your organization at risk. It also gives you visibility and control over the three critical components of your AI security — the data you use for training or inference, the integrity of your AI models and access to your deployed models.

 

These solutions will help enterprises navigate the complexities of Generative AI with confidence and security.

 
