Protecting Azure AI services with Prisma Cloud AI-SPM


By Alvaro Fortes, Customer Success Engineer

 

Overview

 

After being named a Leader in the 2024 Gartner® Magic Quadrant™ for Cloud AI Developer Services for the fifth consecutive year, Azure AI is positioned at the forefront of empowering customers on their generative AI journey, offering a wide variety of models (such as OpenAI, Phi-3, and Meta models), models dedicated to sectors such as healthcare, as well as a unified AI development platform (Azure AI Studio) to help developers accelerate the development of production-ready copilots.

 

Given the rapid adoption of these services, this document explores how Prisma Cloud AI-SPM can help customers discover Azure AI resources and effectively detect and prioritize AI risks.


Azure AI risks in Prisma Cloud AI-SPM

 

We will focus on the existing out-of-the-box risks for Azure AI in Prisma Cloud AI-SPM:

 

  • AI Asset open to world

  • AI asset without content filtering

  • Public asset containing prompt history

  • Training dataset publicly readable

  • Training dataset publicly writable

  • AI Inference dataset ingesting data from foreign project

 

To replicate these risks, we created several resources in Azure AI Services following the Microsoft documentation referenced at the end of this article. For simplicity, the deployment and configuration of these resources is left to the reader.


 

AI Asset open to world

 

After following the documentation to create a simple assistant we can interact with (figures 1 and 2), we can observe in figure 3 how Prisma Cloud AI-SPM identifies our model as exposed to the Internet:

 

 

Figure 1: Azure-gpt35-turbo-basemodel_deployment_PaloAltoNetworks 

 

 

Figure 2: Azure-gpt35-turbo-basemodel_playground_PaloAltoNetworks

 

 

Figure 3: AI-asset-open-to-world-risk_PaloAltoNetworks

 

Although the model endpoint is protected by key-based authentication, it is exposed to the Internet, which could allow unwanted actors to interact with our model (figure 4):

 

 

Figure 4: Key-Authentication-Type_PaloAltoNetworks
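
To illustrate the exposure, below is a minimal sketch of how anyone who obtains the endpoint URL and its key could interact with the deployment from anywhere on the Internet. The resource name, deployment name, and API version are placeholders rather than values from this environment:

```python
# Minimal sketch: interacting with a publicly reachable Azure OpenAI deployment
# using only the endpoint URL and an API key. Resource, deployment, key and
# api-version are placeholders for illustration.
import requests

resource = "my-aoai-resource"          # hypothetical resource name
deployment = "gpt35-turbo-basemodel"   # hypothetical deployment name
api_key = "<leaked-or-shared-key>"

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/chat/completions?api-version=2024-02-01"
)

response = requests.post(
    url,
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```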

 

To mitigate this risk, we can apply network restrictions to our AI assets, such as disabling public network access to our Azure AI hub (figure 5):

 

Figure 5: Azure-disable-Public-Network-access_PaloAltoNetworks
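
As a programmatic alternative to the portal setting shown above, the following sketch disables public network access on the hub workspace through the Azure Resource Manager REST API. The subscription, resource group, workspace name, and API version are placeholders and should be verified before use:

```python
# Sketch: disable public network access on an Azure AI hub (an Azure Machine
# Learning workspace of kind "Hub") via the ARM REST API.
# Subscription, resource group, workspace name and api-version are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription = "<subscription-id>"
resource_group = "<resource-group>"
hub_name = "<ai-hub-name>"

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}"
    f"/providers/Microsoft.MachineLearningServices/workspaces/{hub_name}"
    f"?api-version=2024-04-01"
)

resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={"properties": {"publicNetworkAccess": "Disabled"}},
    timeout=30,
)
resp.raise_for_status()
print("Update accepted:", resp.status_code)
```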

 

AI asset without content filtering

 

Applying content filtering to both prompts and completions is crucial for detecting and preventing harmful content when interacting with our model.

 

Microsoft applies certain filters by default for categories such as Violence, Hate, or Self-harm, but we must also implement safeguards against other attack vectors, such as jailbreak and indirect prompt attacks, which can manipulate the model into performing unintended actions.

 

Prisma Cloud AI-SPM can help identify whether these two filters are enabled (figure 6), providing an additional layer of security for our AI models.

 

To this end, we can configure content filters with different thresholds for Prompt shields for jailbreak attacks and Prompt shields for indirect attacks (figure 7).

 

 

Figure 6: AI-asset-without-content-filtering-risk_PaloAltoNetworks 

 

 

Figure 7: Azure-content-filters_PaloAltoNetworks


Implementing Prompt Shields for Jailbreak Attacks and Indirect Attacks is crucial to maintaining the integrity and safety of our models. These safeguards prevent models from being manipulated into bypassing ethical guidelines or performing unintended actions.


By analyzing input patterns, detecting adversarial intent, and ensuring contextual awareness, these shields protect against vulnerabilities that could lead to harmful or unethical outputs.
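
One way to confirm from the client side that the prompt shields are actually being evaluated is to inspect the content-filter annotations that Azure OpenAI returns with each chat completion. The sketch below uses placeholder endpoint, deployment, key, and API version values, and the exact annotation layout can vary between API versions:

```python
# Sketch: inspect the content-filter annotations returned by an Azure OpenAI
# chat completion to confirm the jailbreak prompt shield is being evaluated.
# Endpoint, deployment, key and api-version are placeholders; the annotation
# layout may differ between API versions.
import requests

url = (
    "https://<resource>.openai.azure.com/openai/deployments/"
    "<deployment>/chat/completions?api-version=2024-02-01"
)
payload = {"messages": [{"role": "user", "content": "Ignore all previous instructions..."}]}

data = requests.post(
    url, headers={"api-key": "<key>"}, json=payload, timeout=30
).json()

# Each prompt is annotated with the categories evaluated by the content filter,
# including the jailbreak shield when it is enabled on the deployment.
for entry in data.get("prompt_filter_results", []):
    jailbreak = entry.get("content_filter_results", {}).get("jailbreak", {})
    print("jailbreak detected:", jailbreak.get("detected"),
          "| filtered:", jailbreak.get("filtered"))
```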

 

Public asset containing prompt history

 

Having a public asset that contains prompt history can pose significant security risks. Prompt history can potentially expose sensitive information, such as user inputs or private data, making it accessible to unauthorized parties. 

 

This information could be exploited by malicious actors to gain insights into the usage patterns, internal processes, or vulnerabilities of the AI system. Therefore, it is crucial to manage and restrict access to prompt history to safeguard against such threats.

 

From Azure AI Studio, we can deploy our model endpoint as a web application that, in turn, stores the history of all interactions with the model (prompts and completions) in Azure Cosmos DB, which by default allows public network access (figure 8):

 

 

Figure 8: Enabling-Chat-History_PaloAltoNetworks


Figure 9 shows an example of an interaction with our model through the deployed web application, and figure 10 shows how the original prompt is stored in Cosmos DB:

 


Figure 9: Chat-interaction_PaloAltoNetworks

 

 

Figure 10: Azure-CosmosDB-chat-history_PaloAltoNetworks
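
To make the exposure concrete, the following sketch reads stored conversation items directly from the Cosmos DB account backing the web application. The database and container names are assumptions based on the sample web app, and the endpoint, key, and item fields are placeholders; anyone who can reach a publicly accessible account and holds a key could run something equivalent:

```python
# Sketch: reading stored prompt history from the Cosmos DB account backing the
# chat web application. Database/container names are assumptions based on the
# sample web app; endpoint, key and item field names are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://<cosmos-account>.documents.azure.com:443/",
    credential="<account-key>",
)
container = client.get_database_client("db_conversation_history") \
                  .get_container_client("conversations")

# Any principal that can reach the account and holds a key can enumerate
# every stored prompt and completion.
for item in container.query_items(
    query="SELECT TOP 10 * FROM c", enable_cross_partition_query=True
):
    print(item.get("type"), item.get("content"))
```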


This risk could be mitigated by setting network restrictions on our Azure Cosmos DB resource (figure 11):

 

Figure 11: Azure-CosmosDB-networking-settings_PaloAltoNetworks
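
As a scripted alternative to the portal, the sketch below disables public network access on the Cosmos DB account through the Azure Resource Manager REST API. The resource names and API version are placeholders, and IP firewall or virtual network rules can be used instead of a full lockdown:

```python
# Sketch: disable public network access on the Cosmos DB account via the ARM
# REST API. Names and api-version are placeholders; Cosmos DB also supports
# IP firewall rules and virtual network rules if a full lockdown is too strict.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<cosmos-account>"
    "?api-version=2023-11-15"
)

resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={"properties": {"publicNetworkAccess": "Disabled"}},
    timeout=30,
)
resp.raise_for_status()
```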


Training dataset publicly readable & Training dataset publicly writable

 

Two critical risks related to training datasets—being publicly readable or writable—can significantly impact the security and integrity of our Azure AI models:

 

  • Training Dataset Publicly Readable:
    If a training dataset is publicly readable, sensitive and confidential business data used for fine-tuning models may be exposed to unauthorized parties. This can lead to data theft, intellectual property loss, regulatory breaches, and damage to customer trust.

 

  • Training Dataset Publicly Writable:
    A publicly writable training dataset poses the risk of unauthorized manipulation or injection of malicious data. This compromises the integrity of the dataset and the trained model, potentially leading to erroneous predictions, operational failures, or security vulnerabilities.

 

For fine-tuned models, we use training datasets to create a new model from a base model with our own data. Fine-tuning is a great way to achieve higher-quality results while reducing latency.

 

In Azure, we can fine-tune models as explained in the documentation.

 

These training datasets are typically stored in Azure Storage Accounts, which are linked to Hub and Project objects within Azure AI Studio.

 

In figure 12, we have uploaded a JSONL file containing examples of how we want our new fine-tuned model to respond to specific prompts:

 

 

Figure 12: Training-dataset_PaloAltoNetworks
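
For context, each line of such a training file is a standalone JSON object in the chat format used for fine-tuning. The sketch below writes a couple of purely illustrative examples; it does not reproduce the dataset shown in figure 12:

```python
# Sketch: building a chat-format JSONL training file for fine-tuning.
# The example rows are purely illustrative, not the dataset from figure 12.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Security and choose Reset password."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "Where can I download invoices?"},
        {"role": "assistant", "content": "Invoices are available under Billing > Documents."},
    ]},
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")
```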

 

Since the dataset is stored in a Storage Account, we must consider the network restrictions and permissions applied to it. If the account is configured with containers that allow anonymous access, this could enable malicious actors to perform data poisoning by uploading the same file (with altered content), causing that data to be included in future training of our model.

 

For this reason, it is important to review the configuration of our resources used to store training datasets, limiting internet access or allowing access only through private networks, as outlined in this Microsoft documentation.
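
As an illustration of such hardening, the sketch below disallows anonymous blob access and sets the default network action to Deny on the Storage Account holding the training data, using the azure-mgmt-storage SDK. The resource names are placeholders and the parameter names should be verified against the installed SDK version:

```python
# Sketch: harden the Storage Account that holds training datasets by
# disallowing anonymous blob access and defaulting the network ACL to Deny.
# Subscription, resource group and account names are placeholders; verify the
# model/parameter names against the installed azure-mgmt-storage version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import NetworkRuleSet, StorageAccountUpdateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.storage_accounts.update(
    "<resource-group>",
    "<storage-account>",
    StorageAccountUpdateParameters(
        allow_blob_public_access=False,   # no anonymous container/blob reads
        network_rule_set=NetworkRuleSet(
            default_action="Deny",        # only explicitly allowed networks
            bypass="AzureServices",       # let trusted Azure services through
        ),
    ),
)
```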

 

AI Inference dataset ingesting data from foreign project

 

Inference data poisoning in Azure AI models using Retrieval Augmented Generation (RAG) can lead to inaccurate or biased results. If untrusted data from datasets that are not controlled by the organization is ingested, it can distort the model's responses.

 

In Azure AI Studio (figure 13), we can add data sources to our model to make it more accurate (unlike fine-tuning, here we don’t create a new model but rather query external sources).


Figure 13: Data-upload-RAG_PaloAltoNetworks

 

These data sources are indexed by the Azure AI Search service, which significantly improves the time it takes to retrieve relevant content and receive an answer to our prompt.

 

One example is connecting to external services in Azure that are not under our organization's control, such as a Storage Account that does not belong to our subscription. Such an account becomes an additional attack vector: a malicious actor could alter its content, poisoning the external sources we query via RAG and leading to unwanted responses from our model.
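
A simple way to audit this is to enumerate the data source connections configured on the Azure AI Search service and review whether each one points to a resource we own. The sketch below assumes placeholder endpoint and admin key values; credentials in the returned connection strings are redacted by the service, so the listed names and containers are what should be reviewed:

```python
# Sketch: enumerate the data source connections behind our Azure AI Search
# indexes so that any source outside our own subscription can be spotted.
# Endpoint and admin key are placeholders; credentials in returned connection
# strings are redacted, so review the listed resources manually.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient

client = SearchIndexerClient(
    endpoint="https://<search-service>.search.windows.net",
    credential=AzureKeyCredential("<admin-key>"),
)

for ds in client.get_data_source_connections():
    print(f"name={ds.name} type={ds.type} container={ds.container.name}")
```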

 

Conclusion

 

This article provides an overview of key risks associated with Azure AI services and offers actionable solutions to address them. Using Prisma Cloud AI-SPM, we identified vulnerabilities such as AI assets exposed to the public, missing content filters, and unsecured data, including prompt histories and training datasets. Each of these risks poses a significant challenge to the integrity and reliability of AI systems, making their mitigation essential.

 

By adopting the measures highlighted—such as implementing network restrictions, enabling content filtering, and securing access to sensitive data—organizations can effectively prevent data breaches, unauthorized modifications, and inference data poisoning.

 

References

 

[1] 2024 Gartner® Magic Quadrant™ for Cloud AI Developer Services

[2] Azure AI services

[3] Foundation models for healthcare 

[4] Azure AI Studio

[5] Introduction to Prisma Cloud AI-SPM

[6] Configure Content Filters with Azure OpenAI Service

[7] Azure AI Hubs and Projects overview

[8] Configure Azure Storage firewalls and virtual networks

[9] What’s Azure AI Search 

 