Prisma Cloud Articles
Vulnerabilities, or CVEs (Common Vulnerabilities and Exposures), are publicly disclosed security flaws that threat actors can exploit to gain unauthorized access to systems or networks. They remain present in programs and operating systems until an organization remediates them. For many organizations, one of the first steps in cloud and container security is discovering and patching the vulnerabilities in their environments.
View full article
Prisma Cloud collects data about the cloud resources in your cloud accounts and lets you query that data to answer common security questions, such as "show me the EC2 volumes that are not encrypted." These queries are written in Resource Query Language (RQL) and can be built, debugged, and run on the Investigate page in Prisma Cloud.
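For illustration only (this example is not taken from the article): the example question above can be expressed as an RQL config query and, if you prefer automation over the Investigate page, submitted to the CSPM API. The endpoint path, header name, and payload shape below are assumptions based on the public Prisma Cloud API documentation, and API_URL and TOKEN are placeholders.

    # Sketch: run an RQL config query against the CSPM API (assumes a valid JWT in TOKEN).
    API_URL="https://api.prismacloud.io"   # replace with your Prisma Cloud stack URL
    RQL="config from cloud.resource where api.name = 'aws-ec2-describe-volumes' AND json.rule = encrypted is false"

    curl -s -X POST "${API_URL}/search/config" \
      -H "x-redlock-auth: ${TOKEN}" \
      -H "Content-Type: application/json" \
      -d "{\"query\": \"${RQL}\", \"timeRange\": {\"type\": \"relative\", \"value\": {\"unit\": \"hour\", \"amount\": 24}}}"

The same RQL string can be pasted directly into the Investigate page to debug it interactively.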
View full article
An "Attack Path" refers to a sequence of steps or a series of vulnerabilities and misconfigurations that an attacker exploits to achieve their malicious objectives within a cloud environment. 
View full article
By Omoniyi Jabaru, Senior Customer Success Engineer

Overview

Prisma Cloud can scan container images in public and private repositories on public and private registries. A registry is a system for storing and distributing container images; the best-known public registry is Docker Hub. One of the registries Prisma Cloud customers use most often is JFrog Artifactory. This article describes how Prisma Cloud works with this registry.

Purpose

JFrog Artifactory requires that every image added to the main repository be added as a separate registry scan entry in Prisma Cloud. When you add more than one image to a main repository, you must create one registry scan entry per image for each to be scanned properly. Wildcards are not supported by Prisma Cloud at this time.

Before You Begin

To accomplish this, you will need to:
- have a running instance of the Prisma Cloud CWP Console
- have an on-prem JFrog Artifactory set up with a private IP
- have a container engine running and at least one Defender deployed in a different VPC with a public and a private IP

Create VPC peering for VPCs in different accounts and/or the same Region

Figure 1: VPC peering_PaloAltoNetworks

To request a VPC peering connection with VPCs in different accounts and the same Region:
1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
2. In the navigation pane, choose Peering Connections.
3. Choose Create peering connection.
4. Configure the information as follows, and choose Create peering connection when you are done:
   - Name: You can optionally name your VPC peering connection. Doing so creates a tag with a key of Name and a value that you specify. This tag is only visible to you; the owner of the peer VPC can create their own tags for the VPC peering connection.
   - VPC ID (Requester): Select the VPC in your account with which to create the VPC peering connection.
   - Account: Choose Another account.
   - Account ID: Enter the ID of the AWS account that owns the accepter VPC.
   - VPC ID (Accepter): Enter the ID of the VPC with which to create the VPC peering connection.

Figure 2: VPC Create Peering_PaloAltoNetworks

To accept a VPC peering connection with VPCs in different accounts and the same Region

A VPC peering connection in the pending-acceptance state must be accepted by the owner of the accepter VPC before it becomes active. You cannot accept a VPC peering connection request that you have sent to another AWS account. If you are creating a VPC peering connection in the same AWS account, you must both create and accept the request yourself. If the VPCs are in different Regions, the request must be accepted in the Region of the accepter VPC.

To accept a VPC peering connection:
1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
2. Use the Region selector to choose the Region of the accepter VPC.
3. In the navigation pane, choose Peering Connections.
4. Select the pending VPC peering connection (the status is pending-acceptance), and choose Actions, Accept Request. Note: If you cannot see the pending VPC peering connection, check the Region. An inter-Region peering request must be accepted in the Region of the accepter VPC.
5. In the confirmation dialog box, choose Yes, Accept.
6. A second confirmation dialog is displayed; choose Modify my route tables now to go directly to the route tables page.

Figure 2a: VPC peering DNS_PaloAltoNetworks
Figure 2b: VPC peering route_PaloAltoNetworks

Configuring routes for a VPC peering connection
1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
2. In the navigation pane, choose Route Tables.
3. Select the route table that is associated with the subnet in which your instance resides. Note: If you do not have a route table associated with that subnet, select the main route table for the VPC, as the subnet then uses this route table by default.
4. Choose Routes, Edit, Add Route.
5. For Destination, enter the IPv4 address range to which the network traffic in the VPC peering connection must be directed. You can specify the entire IPv4 CIDR block of the peer VPC, a specific range, or an individual IPv4 address, such as the IP address of the instance with which to communicate. For example, if the CIDR block of the peer VPC is 10.0.0.0/16, you can specify a portion such as 10.0.0.0/28.
6. Select the VPC peering (pcx) connection from Target, and then choose Save.

Figure 2c: VPC peering routes_PaloAltoNetworks
Figure 2d: VPC peering routes3_PaloAltoNetworks

Allow access from the source communication VPC on the Security Group of the service

Create an EC2 instance: https://docs.aws.amazon.com/efs/latest/ug/gs-step-one-create-ec2-resources.html

You need to add an entry for the new network address range on the security group of the services you permit access to/from the other VPC.
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Security Groups.
3. In the list, select the security group and choose Actions, Edit inbound rules.
4. Choose Add rule.
5. For Type, choose the type of protocol to allow. If you choose a custom TCP or UDP protocol, you must manually enter the port range to allow. If you choose a custom ICMP protocol, you must choose the ICMP type name from Protocol, and, if applicable, the code name from Port range. If you choose any other type, the protocol and port range are configured automatically.

Figure 2e: VPC peering inbound_PaloAltoNetworks

Allow access to the source communication VPC on the Security Group of the service
6. For Source, do one of the following:
   - Choose Custom and then enter an IP address in CIDR notation, a CIDR block, another security group, or a prefix list from which to allow inbound traffic. I used my custom CIDR as seen in the screenshot below.
   - Choose Anywhere to allow all inbound traffic of the specified protocol to reach your instance. This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as an allowed source. This is acceptable for a short time in a test environment, but it is unsafe for production environments. In production, authorize only a specific IP address or range of addresses to access your instance.
7. For Description, optionally specify a brief description for the rule.

Figure 2f: VPC peering inbound2_PaloAltoNetworks

For more information, review the AWS VPC documentation: Work with VPC peering connections

Confirm VPC peering is working
1. SSH into the source (scanner) EC2 instance.
2. Ping the EC2 instance in the target VPC on its private subnet.
3. Optionally, SSH into the target EC2 instance (not necessary).

Figure 3a: ping target_PaloAltoNetworks
Figure 3b: ssh target_PaloAltoNetworks

Install a Container Defender on the scanner EC2 instance
1. Go to Compute > Manage > Defenders > Defenders: Deployed and select Manual deploy.
2. Under Deployment method, select Single Defender.
3. In Defender type, select Container Defender - Linux.
4. Select the way Defender connects to Console.
5. (Optional) Set a custom communication port for the Defender to use.
6. (Optional) Set a proxy for the Defender to use for communication with the Console.
7. (Optional) Under Advanced Settings, enable Assign globally unique names to Hosts when you have multiple hosts that can have the same hostname (for example, autoscale groups or overlapping IP addresses). After setting the option to ON, Prisma Cloud appends a unique identifier, such as the ResourceId, to the host's DNS name. For example, an AWS EC2 host would have a name such as Ip-171-29-1-244.ec2internal-i-04a1dcee6bd148e2d.
8. Copy the install script command from the right-side panel, which is generated according to the options you selected.
9. On the host where you want to install Defender, paste the command into a shell window and run it.

Verify the install

In the Console, go to Manage > Defenders > Defenders: Deployed. Your new Defender should be listed in the table, and the status box should be green and checked. For more information, review the Prisma Cloud Defender documentation: Install a Single Container Defender

Install NGINX on the private JFrog instance

Registry scanning requires a secure (HTTPS) connection, so you need to set up an NGINX reverse proxy in front of Artifactory. A reverse proxy configuration can be generated in the Artifactory UI under Administration > Artifactory > HTTP Settings; copy it into your NGINX configuration. You will need your own SSL certificate and key, placed in the directory specified in the NGINX configuration. A sample configuration is shown below (Figure 4) for reference.

To install NGINX on an EC2 instance running a Linux distribution such as Amazon Linux, CentOS, Ubuntu, or Debian, follow these general steps:
1. Connect to your EC2 instance: use SSH to connect to the instance. You can connect from the scanner instance since the VPCs are peered.
2. Update the package database to ensure it is up to date: sudo yum update -y
3. Install NGINX using the package manager. The package name may vary slightly depending on your Linux distribution.
   - For Amazon Linux, CentOS, or RHEL: sudo yum install nginx -y
   - For Ubuntu or Debian: sudo apt install nginx -y
4. Start the NGINX service: sudo systemctl start nginx
5. In the conf.d path, create a jfrog.conf and add the configuration described at: https://jfrog.com/help/r/artifactory-quick-installation-guide-linux-archive/ssl

Figure 4: jfrog.conf_PaloAltoNetworks

6. Verify the NGINX configuration: nginx -t

Enable Route 53 to point to the private JFrog IP

Set up a Route 53 private hosted zone:
1. Navigate to the Route 53 console in the AWS Management Console.
2. Choose "Create Hosted Zone".
3. Specify a domain name (e.g., example.com).
4. Select "Private hosted zone".
5. Choose the VPC(s) that you want to associate with the private hosted zone.
6. Click "Create".

Create a record set:
1. Inside the newly created private hosted zone, choose "Create Record Set".
2. In the "Name" field, enter the subdomain you want to map (e.g., www).
3. Select the type of record you want to create (e.g., A for an IPv4 address).
4. In the "Value" field, enter the private IP address.
5. Click "Create" to create the record set.

Update VPC DNS settings:
1. In the VPC settings, ensure that the VPC's DNS settings are configured to use the Route 53 resolver.
2. In the Route 53 console, under "Resolver", you can find the Resolver endpoints and rules.
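If you prefer the AWS CLI to the console for the Route 53 steps above, the hosted zone and record can be created with commands like the following sketch. The VPC ID and hosted zone ID are placeholders; the hostname and private IP match the example environment described in the summary below, and are not values you should reuse as-is.

    # Sketch: create a Route 53 private hosted zone and an A record with the AWS CLI.
    # VPC ID and hosted zone ID are illustrative placeholders.
    aws route53 create-hosted-zone \
      --name jmontufar.org \
      --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
      --hosted-zone-config Comment="Private zone for JFrog" \
      --caller-reference "jfrog-$(date +%s)"

    # Point the JFrog hostname at the Artifactory instance's private IP
    # (replace the hosted zone ID with the one returned by the previous command).
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z0123456789EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "private-jfrog.jmontufar.org",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "10.0.138.85"}]
          }
        }]
      }'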
Solution Architecture

Figure 5: solution architecture_PaloAltoNetworks

Summary

1. The scanner instance attempts to resolve the DNS name private-jfrog.jmontufar.org of the JFrog instance.
2. Route 53 responds that private-jfrog.jmontufar.org corresponds to the server with IP address 10.0.138.85.
3. The scanner instance then initiates a TLS negotiation with 10.0.138.85, including the DNS name private-jfrog.jmontufar.org in the request.
4. NGINX identifies the requested DNS name as belonging to the default route and begins the TLS negotiation, providing the server certificate.
5. Because the certificate installed on NGINX is a wildcard certificate (*.jmontufar.org) and the requested DNS name is private-jfrog.jmontufar.org, the scanner instance recognizes the certificate as valid and proceeds.
6. Upon successful TLS negotiation, NGINX forwards scanning requests from the scanner instance to the private JFrog instance.
7. The scanner instance then transmits the report back to the Prisma Cloud Compute Console.

Scan Confirmation

Push images to the on-prem JFrog:
docker pull alpine:latest
docker tag alpine:latest <jfrog-domain>/<repository-name>/<image-name>:latest
docker push <jfrog-domain>/<repository-name>/<image-name>:latest

Figure 6: docker_PaloAltoNetworks

Prisma Cloud registry scan settings

Figure 6a: registry scan_PaloAltoNetworks

Prisma Cloud vulnerability report

Figure 6b: vuln report_PaloAltoNetworks

Conclusion

By integrating Prisma Cloud with JFrog Artifactory, you can enhance your container security posture by continuously scanning images for vulnerabilities and compliance issues. This integration allows seamless monitoring and remediation, ensuring that your containerized applications remain secure throughout their lifecycle.

References

Create with VPCs in different accounts and the same Region
Route 53
JFROG Nginx Proxy

About the Author

Omoniyi Jabaru is a Senior Customer Success Engineer specializing in Prisma Cloud, Next-Generation Firewall, AWS, Azure, GCP, containers, and Kubernetes. He uses simple approaches to break down complex problems into solutions for global enterprise customers and leverages his multi-industry knowledge to inspire success.
View full article
Many organizations need to create, read, update, and delete their cloud infrastructure. Terraform is an easy way to provision and deploy infrastructure resources such as servers, databases, and network components.

By using Terraform, you no longer have to log in to the Prisma Cloud console and set everything up manually. You can simply write a Terraform configuration and apply it directly from the command line.

In this article, we would like to illustrate how you can onboard your AWS accounts using the Prisma Cloud Terraform provider.
View full article
A common customer question is how to view host vulnerabilities in the Asset Inventory for each Cloud Service Provider. Host vulnerabilities are easily identified in the Runtime Security module by selecting Monitor > Vulnerabilities > Hosts.

Most Cloud Service Providers have a managed Kubernetes offering: Azure has AKS, Google offers GKE, AWS has EKS, and Red Hat offers OpenShift. In this article we will focus specifically on EKS. The container workloads for all of these managed offerings run on host machines, and those machines can contain vulnerabilities.

The Prisma Cloud Command Center (Figure 1) and Vulnerabilities (Figure 2) dashboards are the first high-level dashboards that provide visibility into vulnerabilities; their purpose is to identify top issues by severity for hosts, images, and repositories. To narrow the scope and filter on EKS worker nodes in Cloud Security, it is recommended to explore the Asset Inventory.
View full article
Palo Alto Networks Prisma Cloud (CSPM and CWPP) can not only help organizations discover the impacted resources, but can also help prevent the exploit from happening.

Vulnerabilities, or CVEs, are publicly disclosed security vulnerabilities that threat actors can exploit to gain unauthorized access to systems or networks. CVEs remain present in programs and operating systems until an organization remediates them. The list of known vulnerabilities continues to grow daily, and the prioritization of these vulnerabilities changes rapidly as exploits are found.

This article will guide you on leveraging Prisma Cloud to gain visibility into the cloud resources affected by any vulnerabilities/CVEs. We will use Log4Shell and/or SpringShell as example vulnerabilities to demonstrate how Prisma Cloud can help you understand your attack surface.
View full article
The Prisma Cloud Darwin release enables you to use out-of-the-box dashboards as well as custom dashboards, with the ability to track and monitor your cloud security posture, from vulnerabilities to compliance. In this article, we will discuss the existing OOTB dashboards and the capability to create custom dashboards in Prisma Cloud.
View full article
This document presents a step-by-step guide for automating the deployment of the Prisma Cloud Windows container Defender to Google Kubernetes Engine (GKE) Windows nodes. You will set up a Kubernetes cluster with a Windows node pool and leverage Google Cloud startup scripts on the Windows VMs to install the Prisma Cloud container Defenders.
View full article
This guide describes how to configure agentless vulnerability and compliance scanning for virtual machines in Microsoft Azure subscriptions.   This article will use a credential dedicated to the agentless scanning process.  
View full article
This document describes how to configure Azure RBAC to provide fine-grained access to Azure resources and visibility in Prisma Cloud.

With Azure RBAC, you can create a role definition that outlines the permissions to be applied to the Prisma Cloud app registrations. This article specifically addresses the use of Azure RBAC predefined roles to manage access to Azure resources.

Azure resources offer two authorization systems: Azure role-based access control (Azure RBAC) and the access policy model. Azure RBAC has several built-in roles you can assign to service principals and managed identities.

- Azure resources authorized by the access policy model
- Azure resources authorized by Azure RBAC (recommended authorization)

The Prisma Cloud role created for Azure ingestion with Terraform currently uses the access policy model, which requires adding permissions one at a time. Azure recommends role-based access control (Azure RBAC), which lets you configure permissions for Prisma Cloud using predefined Azure roles that contain a set of permissions. With Azure RBAC, any updates to a role's permissions apply automatically, without manual adjustments.
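As a hedged illustration (not from the article itself), assigning one of those built-in roles to the Prisma Cloud app registration with the Azure CLI might look like the sketch below. The role name, application ID, and subscription ID are placeholders; the roles actually required are the ones called out in the Prisma Cloud onboarding documentation.

    # Sketch: assign a built-in Azure RBAC role to the Prisma Cloud service principal.
    # Role name, application (client) ID, and subscription ID are illustrative placeholders.
    az role assignment create \
      --assignee "<prisma-cloud-app-client-id>" \
      --role "Reader" \
      --scope "/subscriptions/<subscription-id>"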
View full article
How to disable or enable default or custom policies
View full article
This guide describes how to configure agentless vulnerability and compliance scanning of virtual machines in Microsoft Azure subscriptions. This example uses Prisma Cloud Enterprise Edition (PCEE, Compute SaaS) which has a different configuration process from using the same feature in the Compute Edition (Self-Hosted). Additionally, we will be onboarding and scanning a single Azure subscription.
View full article
“Auto Create Account Groups” is a useful feature for managing a large number of GCP projects and folders.

If various teams create folders and projects in your organization, it makes sense to have a separate account group for each team and to create separate alert rules based on those account groups. This helps keep each team's alerts isolated and makes it easier to take proactive action to mitigate those alerts.

In this article, we illustrate an example using a GCP account with nested folders and projects in a GCP Organization. The name of the GCP Organization is “example.world”.
View full article
Prisma Cloud allows you to create policies to ensure that your cloud security posture management complies with best practices and the needs of your organization. These policies create alerts that need to be evaluated and that indicate which cloud objects need to be updated for compliance.

Managing these alerts is a task that many organizations find difficult as the number of alerts increases. Prisma Cloud allows you to define auto-remediation to correct certain alerts. However, organizations often require much more customization and integration with the other tools they are using.

This article continues from the previous article, “Enhanced Alert Remediation” using XSOAR via CSPM, which discussed how to integrate Prisma Cloud with Cortex XSOAR, and builds on the concepts introduced there. Here we dive into what happens after Prisma Cloud alerts become Cortex XSOAR incidents, and how playbooks can be used not only to help remediate, but also to create an organized flow for how these violations should be delegated.
View full article
Prisma Cloud allows you to create policies to ensure that your Cloud Security Posture Management is in compliance with best practices and the needs of your organization.  These policies create alerts which need to be evaluated and also indicate which cloud objects need to be updated to be in compliance.    Managing these alerts is a task that many organizations find difficult as the number of alerts increases. Prisma Cloud allows you to define an auto-remediation to correct certain alerts.  However, oftentimes an organization requires much more customization and integration with other tools that they are using.    This article describes how to increase your alert automation and integrate with other tools by using a security orchestration, automation, and response (SOAR) platform from Palo Alto Networks.
View full article
A common customer question is how to view host vulnerabilities in the Asset Inventory for each Cloud Service Provider. In this article we will focus on Azure, with follow-up articles for GCP and AWS.

Kubernetes is a popular container orchestration tool, and most Cloud Service Providers have a managed offering: Azure has AKS, Google offers GKE, AWS has EKS, and Red Hat offers OpenShift. The container workloads for all of these managed offerings run on host machines, and those machines can contain vulnerabilities.
View full article
Identity and Access Management (IAM) refers to the processes and tools for managing user access to resources and enforcing security policies. IAM is crucial for securing the modern enterprise because it enables organizations to control who can access which resources. By enforcing strong IAM policies, companies can apply the principle of least privilege, meaning users and resources are granted only the minimum permissions necessary to do their jobs. This limits how far an attack can spread laterally in the event of compromised credentials.

Prisma Cloud offers capabilities to embed IAM into the software delivery lifecycle. It can scan infrastructure-as-code for misconfigurations and enforce least privilege during deployment. Additionally, Prisma Cloud can monitor permissions at runtime and alert on anomalies that indicate privilege creep or excessive permissions. By leveraging the CIEM module within Prisma Cloud, organizations can confidently monitor access while minimizing risk.

This article provides RQL to create sample policies based on IAM requirements and demonstrates how a simple IAM RQL query can be continually extended to add additional IAM functionality.
View full article
This document provides guidance on how to configure Single Sign-On (SSO) between Prisma Cloud Enterprise and Microsoft Entra ID (formerly known as Azure Active Directory, or Azure AD) and use Just-in-Time (JIT) provisioning to automatically create users in Prisma Cloud based on their AD group assignments.
View full article
Visibility is a crucial part of cybersecurity because “if you cannot see the asset, then you cannot protect it.” Prisma Cloud Workload Protection has a Radars section that helps visualize digital assets in a cloud environment.
View full article
A secrets manager is a secure, centralized tool or service used in information technology and cybersecurity to store, manage, and access sensitive information, commonly referred to as "secrets." These secrets can include credentials, API keys, encryption keys, certificates, and other sensitive data that applications and services require for secure operation. Secrets management systems vary depending on the platform or service you use; for example, cloud providers offer AWS Secrets Manager, Google Cloud Secret Manager, and Azure Key Vault, each tailored to its respective cloud ecosystem.

Containers often require sensitive information, such as passwords, SSH keys, encryption keys, and so on. Prisma Cloud integrates with many common secrets management platforms to securely distribute secrets from those stores to the containers that need them.
View full article
The Prisma Cloud Runtime Security DaemonSet auto-deploy feature uses a kubeconfig file generated from a Kubernetes service account with limited permissions.

Purpose

If you want to streamline the deployment of Defender DaemonSets to a cluster, or you lack direct kubectl access to your cluster, you can conveniently deploy Defender DaemonSets directly from the Console UI.

The auto-defend feature also lets you easily upgrade any Defender you have deployed before, so you can perform the upgrade from the Console UI or automate it by making API calls to the appropriate Console endpoints.
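As a rough, hedged sketch of the kind of limited-permission kubeconfig the feature relies on (the namespace, service account name, and ClusterRole below are illustrative placeholders, not the documented permission set; the exact RBAC rules required are listed in the Prisma Cloud documentation):

    # Sketch: create a limited service account and assemble a kubeconfig for it.
    kubectl create namespace twistlock
    kubectl -n twistlock create serviceaccount defender-deployer

    # Bind the service account to whatever role carries the documented permissions
    # (shown here against a hypothetical ClusterRole named "defender-deployer-role").
    kubectl create clusterrolebinding defender-deployer-binding \
      --clusterrole=defender-deployer-role \
      --serviceaccount=twistlock:defender-deployer

    # Generate a token (Kubernetes 1.24+) and capture the cluster endpoint to build
    # a kubeconfig file that can be uploaded to the Console for auto-deploy.
    TOKEN=$(kubectl -n twistlock create token defender-deployer)
    SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

    kubectl config set-cluster target --server="$SERVER" --kubeconfig=defender-kubeconfig
    kubectl config set-credentials defender-deployer --token="$TOKEN" --kubeconfig=defender-kubeconfig
    kubectl config set-context target --cluster=target --user=defender-deployer --kubeconfig=defender-kubeconfig
    kubectl config use-context target --kubeconfig=defender-kubeconfig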
View full article
Introducing infrastructure as code scanning into your GitOps flow with Prisma Cloud Code Security.
View full article
This document shows how to deploy the Prisma Cloud Compute Console in a Kubernetes cluster on any cloud provider and use an NGINX Ingress controller as a proxy for the Console. Purpose: For many enterprises, moving production workloads into Kubernetes brings additional challenges and complexities around application traffic management. An Ingress controller abstracts away the complexity of Kubernetes application traffic routing and provides a bridge between Kubernetes services and external ones.
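For context, a typical way to stand up the community NGINX Ingress controller itself (not specific to this article, and assuming Helm is available on your workstation) looks like the following sketch; the article then exposes the Compute Console Service behind this controller.

    # Sketch: install the community NGINX Ingress controller with Helm.
    # Chart repository and release name are the upstream defaults, shown for illustration.
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm install ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx --create-namespace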
View full article
The Prisma Cloud product from Palo Alto Networks has a number of threat landscape views along with preventative tools to help mitigate the risks of a vulnerability, including zero-day vulnerabilities.

We will examine how Prisma Cloud can notify you of a CVE, which API calls can be used to find the resources affected by a CVE, and how to create a custom CVE to support zero-day vulnerabilities. This article will demonstrate how you, as a security professional, can get a better understanding of the threat landscape of your environment. For the purposes of example, we will use Log4j as our zero-day threat in this article.
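As a hedged illustration of the kind of API call involved (not quoted from the article): the Compute API can be asked which resources are impacted by a given CVE. The endpoint path below reflects the public Compute API documentation and may vary by Console version; the Console address and credentials are placeholders, and CVE-2021-44228 is the Log4Shell identifier used purely as an example.

    # Sketch: query the Compute API for resources impacted by a specific CVE.
    CONSOLE="https://<compute-console-address>"
    curl -sk -u "<username>:<password>" \
      "${CONSOLE}/api/v1/stats/vulnerabilities/impacted-resources?cve=CVE-2021-44228"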
View full article
A best practice in security is alerting on the assets that you find most critical. The distinction between a vulnerability and an exploit is that a vulnerability is a weakness that can potentially be exploited.
View full article
To get the most out of your investment in Prisma™ Cloud, you need to add your cloud accounts to Prisma Cloud. This process requires that you have the correct permissions to authenticate and authorize the connection and retrieval of data.
View full article
Many teams rely on automation to streamline their Security Operations Center. Automation allows customers to scale their operations as their cloud presence grows and allows data from Prisma Cloud to be integrated into a customer’s existing workflow for managing cloud security. The Prisma Cloud API is also used by Cortex XSOAR playbooks for alert remediation and alert report generation.
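As a small, hedged sketch of what such automation can start from (not taken from the article): authenticate to the CSPM API with an access key pair and pull recent alerts. API_URL and the key pair are placeholders, the endpoint paths reflect the public Prisma Cloud API documentation, and the example assumes jq is installed.

    # Sketch: authenticate to the CSPM API and list alerts from the last 24 hours.
    API_URL="https://api.prismacloud.io"   # replace with your Prisma Cloud stack URL
    TOKEN=$(curl -s -X POST "${API_URL}/login" \
      -H "Content-Type: application/json" \
      -d '{"username": "<access-key-id>", "password": "<secret-key>"}' | jq -r '.token')

    curl -s "${API_URL}/v2/alert?detailed=false&timeType=relative&timeAmount=24&timeUnit=hour" \
      -H "x-redlock-auth: ${TOKEN}"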
View full article
Throughout the security lifecycle of an application or cloud environment, it is important to understand the tools available to each security professional. One of the best tools for any security professional is scripting. Scripting lets you create a program that automates an individual task and, when coupled with the Prisma Cloud Workload Protection Platform (CWPP), lets you complete your use cases with ease. All it takes to create a script is an understanding of the tools available to you, practice, and study of the available documentation for the API calls that can interface with your script.

Through the CWPP API and this article, you can begin to establish a new way to solve your company's problems while expanding your problem-solving toolkit. In this article, we use a SaaS CWPP Console for the examples, a text editor that can save plain-text files for scripting, and a Linux command line, available in the macOS Terminal or on Windows with the Windows Subsystem for Linux.

When interacting with a command line, you type directly into the command prompt. As an example, for those of you who have not yet worked with a Linux command line: you can navigate to different directories using the "cd" (change directory) command, you can determine the path to your current directory by typing "pwd" (print working directory), and you can list the files in the current directory using "ls".
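As a hedged first step (not quoted from the article), a single CWPP API call is a convenient connectivity test to build a script around; the Console address and credentials below are placeholders.

    # Sketch: a first CWPP API call to script around.
    # /api/v1/version simply returns the Console version, making it a handy smoke test.
    CONSOLE="https://<compute-console-address>"
    curl -sk -u "<username>:<password>" "${CONSOLE}/api/v1/version"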
View full article
Prisma Cloud Compute agentless scanning enables you to quickly gain comprehensive visibility into vulnerability and compliance risks without having to install an agent on each host.

Cloud environments are dynamic in nature. Prisma Cloud gives you the flexibility to choose between agentless and agent-based security. At this time, Prisma Cloud supports agentless scanning of VMs on AWS, GCP, and Azure.

This article outlines the process of setting up Prisma Cloud Compute agentless scanning for Google Cloud Platform (GCP) Compute Engine instances to discover vulnerability and compliance issues.
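As a small, hedged aside (not from this article): agentless scanning of Compute Engine instances depends on the Compute Engine API being enabled in the target project, which you can confirm or enable with the gcloud CLI. The project ID below is an illustrative placeholder, and the complete list of required APIs, roles, and permissions is in the Prisma Cloud documentation.

    # Sketch: make sure the Compute Engine API is enabled in the project to be scanned.
    gcloud services enable compute.googleapis.com --project my-gcp-project
    gcloud services list --enabled --project my-gcp-project --filter="compute.googleapis.com"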
View full article