Prisma AIRS is a unified platform engineered to provide end-to-end security for the entire Artificial Intelligence (AI) ecosystem. It delivers protection across the full AI lifecycle, safeguarding the critical components that power modern AI applications. In one platform, Prisma AIRS lets organizations discover their AI ecosystem, assess AI risk, and protect against threats. The following are the five pillars of Prisma AIRS:
- AI Model Security — scanning models for vulnerabilities such as tampering, malicious scripts, and deserialization attacks.
- AI Red Teaming — automated penetration testing using an adaptive red-team agent that stress-tests AI apps the way a real attacker would.
- Posture Management — surfacing risks from excessive permissions, sensitive data exposure, or misconfigurations before they are exploited.
- Runtime Security — stopping prompt injection, malicious code, toxic content, data leaks, hallucinations, and resource overload the moment they happen.
- AI Agent Security — protecting your no-code and low-code AI agents against entirely new threat classes such as identity impersonation, memory manipulation, and tool misuse.
This how-to guide walks you through integrating the Prisma AIRS Model Security function into your Continuous Integration/Continuous Deployment (CI/CD) pipeline.
The goal is to automatically enforce security policies and prevent vulnerable AI model artifacts from proceeding to deployment.
The scanning capability fits into the Assess/Pre-deployment phase of the workflow to ensure a trustworthy model.
By integrating AI Model Security into your CI/CD pipeline, you shift left: vulnerabilities and risks are detected early, before models reach deployment. Security policy enforcement (failing the build) acts as an automated gate that maintains security integrity.
Because the goal of the model scan is to confirm a model is safe to deploy, it is best placed in the Test/Validation stage of your pipeline.
| CI/CD Stage | Purpose | Optimal Stage for Model Scan |
|---|---|---|
| Source | Developers commit code to the repository (e.g., Git). | Typically, this stage involves initial code/dependency scanning. |
| Build | Compiles the source code and creates the executable model artifact. | The model artifact is now created and available for scanning. |
| Test | Runs automated tests (unit, integration, performance) on the built artifacts. | Model scan location: the model security scan should run here alongside other quality and security checks, such as functional and unit tests. The script's exit code determines whether the policy is enforced (pipeline fails). |
| Deploy | Deploys the application/model to a production environment. | Only models that PASS the scan proceed to this stage. |
Before starting the integration, ensure the prerequisites are in place: a Python environment with the pan-modelsecurity SDK installed, the security group (AI profile) ID from your Prisma AIRS tenant, and the model artifact to be scanned.
Reference - https://docs.paloaltonetworks.com/ai-runtime-security/ai-model-security/model-security-to-secure-you...
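As a quick sanity check before wiring the scan into a pipeline, you can verify the basic prerequisites locally. The following is a minimal sketch of a hypothetical helper (not part of the SDK); the model path is an assumed example:
Python
# preflight_check.py — a minimal sketch (hypothetical helper, not part of the SDK)
# that verifies the scan step's basic prerequisites before a pipeline run.
import importlib.util
import pathlib
import sys

MODEL_PATH = pathlib.Path("./artifacts/my_model.pkl")  # assumed example path

# Confirm the SDK is importable in this environment
if importlib.util.find_spec("pan_modelsecurity") is None:
    sys.exit("pan-modelsecurity is not installed; run: pip install pan-modelsecurity")

# Confirm the model artifact to be scanned actually exists
if not MODEL_PATH.is_file():
    sys.exit(f"Model artifact not found: {MODEL_PATH}")

print("Prerequisites look good.")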
The integration involves three main actions in your CI/CD pipeline: (1) creating the scan script, (2) adding a pipeline step that executes the script and fails the build on a violation, and (3) wiring that step into your CI/CD platform.
The following sample python script, model_scan.py, demonstrates how to initialize the Scanner SDK, trigger the scan, and enforce the security policy by checking the scan results.
Python
import sys
import argparse
import json

from pan_modelsecurity import Scanner, AiProfile


def parse_arguments():
    """Parse command-line arguments for the model path and security group ID."""
    parser = argparse.ArgumentParser(description="Palo Alto Model Security Scan.")
    parser.add_argument(
        "--model-path",
        required=True,
        help="Path to the model artifact to be scanned."
    )
    parser.add_argument(
        "--security-group-id",
        required=True,
        help="The ID of the security group."
    )
    return parser.parse_args()


def run_model_scan(model_path: str, security_group_id: str):
    """Initialize the SDK, scan the model, and enforce the security policy."""
    try:
        # 1. Initialize the Scanner client and the AI profile (security group)
        scanner = Scanner()
        ai_profile = AiProfile(profile_name=security_group_id)
        print(f"Starting scan for model: {model_path} against profile: {security_group_id}...")

        # 2. Trigger the scan
        scan_response = scanner.sync_scan(
            ai_profile=ai_profile,
            model_uri=f"file://{model_path}"
        )

        # Save the full report for audit/debugging
        with open("model_scan_report.json", "w") as f:
            json.dump(scan_response, f, indent=4)

        # 3. Policy enforcement check: fail the CI/CD pipeline if issues are detected
        policy_violated = False
        for finding in scan_response.get("findings", []):
            error = finding.get("error", "")
            if error:
                policy_violated = True
                print(f"Policy violation detected: {error}")
                break

        if policy_violated:
            print("FAIL: model did not pass the security scan.")
            sys.exit(1)
        else:
            print("PASS: model passed the security scan.")
            sys.exit(0)

    except Exception as e:
        print(f"Unexpected error: {e}")
        sys.exit(1)


if __name__ == "__main__":
    args = parse_arguments()
    run_model_scan(args.model_path, args.security_group_id)
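If you want to unit-test the gate logic itself without calling the Scanner SDK, you can mirror the violation-detection rule in a small helper. The sketch below is a hypothetical refactoring for illustration (has_policy_violation is not part of the SDK); it assumes only the report structure the script above consumes:
Python
# test_policy_gate.py — a minimal sketch; has_policy_violation is a hypothetical
# helper that mirrors the violation-detection rule used in model_scan.py.

def has_policy_violation(scan_response: dict) -> bool:
    """Return True if any finding in the scan response carries a non-empty error."""
    return any(f.get("error") for f in scan_response.get("findings", []))

def test_clean_report_passes():
    # No findings: the gate should let the model through
    assert not has_policy_violation({"findings": []})

def test_error_finding_fails():
    # A finding with an error should trip the gate and fail the pipeline
    report = {"findings": [{"error": "unsafe deserialization detected"}]}
    assert has_policy_violation(report)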
The CI/CD pipeline needs a step that executes the Python script and fails the job if a violation is detected.
Use the following command in your CI/CD configuration file, replacing the model path and security group ID with your specific values:
Bash
python model_scan.py \
--model-path ./artifacts/my_model.pkl \
--security-group-id groupid
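Because model_scan.py writes the full report to model_scan_report.json, you can also summarize the findings after a run for auditing. The sketch below assumes only the report layout the script above writes (a "findings" list whose entries may carry an "error" field):
Python
# summarize_report.py — a minimal sketch for post-run auditing; assumes the
# report layout written by model_scan.py.
import json

with open("model_scan_report.json") as f:
    report = json.load(f)

findings = report.get("findings", [])
errors = [item.get("error") for item in findings if item.get("error")]

# Print a one-line summary, then each violation
print(f"{len(findings)} finding(s), {len(errors)} policy violation(s)")
for err in errors:
    print(f"  - {err}")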
The sample Python script (or an equivalent) can be run as a standard step on almost any CI/CD platform that supports shell commands and Python environments.
| CI/CD Platform Category | Example Tools | Integration Method |
|---|---|---|
| Continuous Integration (CI) tools | Jenkins, CircleCI, TeamCity, Bamboo | Configure a job/step to execute the `python model_scan.py` command after the model artifact is built or downloaded. |
| Version control/pipeline platforms | GitHub Actions, GitLab CI, Azure Pipelines | Define the Python script execution as a step within your workflow definition file (e.g., `.github/workflows/*.yml` or `.gitlab-ci.yml`). |
The goal is to leverage the automation capabilities of these platforms to run the security check every time a new model version is generated and stop the pipeline if the check fails.
For example, in a GitHub Actions workflow (a YAML file under .github/workflows/):
YAML
name: Model Security Scan Pipeline

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  model_scan_job:
    runs-on: ubuntu-latest
    steps:
      - name: 📦 Checkout Repository
        uses: actions/checkout@v4

      - name: 🐍 Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: 🛠️ Install Dependencies
        # Assumes model_scan.py and the model artifact are present in the repo
        run: pip install pan-modelsecurity

      - name: 🛡️ Run Prisma AIRS Model Scan (Policy Enforcement)
        # The job fails if the script exits with a non-zero code (policy violation)
        run: |
          python model_scan.py \
            --model-path ./artifacts/my_model.pkl \
            --security-group-id ${{ secrets.PRISMA_SECURITY_GROUP_ID }}

      - name: 📄 Upload Scan Report Artifact
        if: always()  # Upload even if the scan fails, for auditing
        uses: actions/upload-artifact@v4
        with:
          name: model-scan-report
          path: model_scan_report.json
In a GitLab CI/CD pipeline (.gitlab-ci.yml), you define a job in the test stage to execute the script.
YAML
stages:
  - build
  - test
  - deploy

model_security_scan:
  stage: test
  image: python:3.12  # Use a Python Docker image
  script:
    - echo "Installing dependencies..."
    - pip install pan-modelsecurity
    - echo "Running Prisma AIRS Model Scan..."
    # The script execution is the security gate; a non-zero exit code fails the job.
    - python model_scan.py --model-path ./artifacts/my_model.pkl --security-group-id $PRISMA_SECURITY_GROUP_ID
  artifacts:
    when: always  # Collect the report even on failure, for auditing
    paths:
      - model_scan_report.json
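Azure Pipelines, also listed in the table above, follows the same pattern. Here is a minimal sketch of an azure-pipelines.yml, assuming PRISMA_SECURITY_GROUP_ID is supplied as a secret pipeline variable:
YAML
trigger:
  branches:
    include: [ "main" ]

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.12'

  - script: pip install pan-modelsecurity
    displayName: Install dependencies

  # The scan is the security gate; a non-zero exit code fails the job.
  - script: python model_scan.py --model-path ./artifacts/my_model.pkl --security-group-id $(PRISMA_SECURITY_GROUP_ID)
    displayName: Run Prisma AIRS Model Scan

  - task: PublishBuildArtifacts@1
    condition: always()  # Publish the report even on failure, for auditing
    inputs:
      PathtoPublish: model_scan_report.json
      ArtifactName: model-scan-report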
By performing these actions, you shift security left: the check runs automatically every time a new model version is generated, and the pipeline stops if the check fails. This ensures that only models that PASS the security scan proceed to the deployment stage.