Prisma Cloud Articles
By Omoniyi Jabaru, Senior Customer Success Engineer

Overview

Prisma Cloud can scan container images in public and private repositories on public and private registries. A registry is a system for storing and distributing container images; the best-known public registry is Docker Hub. One of the registries Prisma Cloud customers use most often is JFrog Artifactory. This article describes how Prisma Cloud works with this registry.

Purpose

JFrog Artifactory requires that every image added to the main repository be added as a new registry scan inside Prisma Cloud. When you add more than one image to a main repository, you need to add one registry scan per image for each to be scanned properly. Prisma Cloud does not support wildcards for this at this time.

Before You Begin

To accomplish this, you will need:
- a running instance of the Prisma Cloud CWP Console
- an on-premises JFrog Artifactory instance set up with a private IP
- a running container engine, with at least one Defender deployed in a different VPC with public and private IPs

Create VPCs in Different Accounts and/or the Same Region

Figure 1: VPC peering_PaloAltoNetworks

To request a VPC peering connection with VPCs in different accounts and the same Region:

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
2. In the navigation pane, choose Peering Connections.
3. Choose Create peering connection.
4. Configure the information as follows, and choose Create peering connection when you are done:
   - Name: Optionally name your VPC peering connection. Doing so creates a tag with a key of Name and the value you specify. This tag is only visible to you; the owner of the peer VPC can create their own tags for the connection.
   - VPC ID (Requester): Select the VPC in your account with which to create the VPC peering connection.
   - Account: Choose Another account.
   - Account ID: Enter the ID of the AWS account that owns the accepter VPC.
   - VPC ID (Accepter): Enter the ID of the VPC with which to create the VPC peering connection.

Figure 2: VPC Create Peering_PaloAltoNetworks

To accept a VPC peering connection with VPCs in different accounts and the same Region

A VPC peering connection in the pending-acceptance state must be accepted by the owner of the accepter VPC before it becomes active. You cannot accept a VPC peering connection request that you have sent to another AWS account. If you are creating a VPC peering connection in the same AWS account, you must both create and accept the request yourself. If the VPCs are in different Regions, the request must be accepted in the Region of the accepter VPC.

To accept a VPC peering connection:

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
2. Use the Region selector to choose the Region of the accepter VPC.
3. In the navigation pane, choose Peering Connections.
4. Select the pending VPC peering connection (the status is pending-acceptance), and choose Actions, Accept Request. Note: If you cannot see the pending VPC peering connection, check the Region. An inter-Region peering request must be accepted in the Region of the accepter VPC.
5. In the confirmation dialog box, choose Yes, Accept. A second confirmation dialog appears; choose Modify my route tables now to go directly to the route tables page.

Figure 2a: VPC peering DNS_PaloAltoNetworks
Figure 2b: VPC peering route_PaloAltoNetworks

Configuring routes for a VPC peering connection:

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
2. In the navigation pane, choose Route Tables.
3. Select the route table associated with the subnet in which your instance resides. Note: If you do not have a route table associated with that subnet, select the main route table for the VPC, as the subnet then uses this route table by default.
4. Choose Routes, Edit, Add Route.
5. For Destination, enter the IPv4 address range to which the network traffic in the VPC peering connection must be directed. You can specify the entire IPv4 CIDR block of the peer VPC, a specific range, or an individual IPv4 address, such as the IP address of the instance with which to communicate. For example, if the CIDR block of the peer VPC is 10.0.0.0/16, you can specify a portion such as 10.0.0.0/28.
6. Select the VPC peering (pcx) connection from Target, and then choose Save.

Figure 2c: VPC peering routes_PaloAltoNetworks
Figure 2d: VPC peering routes3_PaloAltoNetworks

Allow Access from the Source Communication VPC on the Security Group of the Service

Create an EC2 instance: https://docs.aws.amazon.com/efs/latest/ug/gs-step-one-create-ec2-resources.html

You need to add an entry for the new network address on the security group of the services you permit access to/from the other VPC.

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Security Groups.
3. In the list, select the security group and choose Actions, Edit inbound rules.
4. Choose Add rule.
5. For Type, choose the type of protocol to allow. If you choose a custom TCP or UDP protocol, you must manually enter the port range to allow. If you choose a custom ICMP protocol, you must choose the ICMP type name from Protocol and, if applicable, the code name from Port range. If you choose any other type, the protocol and port range are configured automatically.

Figure 2e: VPC peering inbound_PaloAltoNetworks

Allow Access to the Source Communication VPC on the Security Group of the Service

6. For Source, do one of the following:
   - Choose Custom, and then enter an IP address in CIDR notation, a CIDR block, another security group, or a prefix list from which to allow inbound traffic. I used my custom CIDR, as seen in the screenshot below.
   - Choose Anywhere to allow all inbound traffic of the specified protocol to reach your instance.
     This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as an allowed source. This is acceptable for a short time in a test environment, but it is unsafe for production environments. In production, authorize only a specific IP address or range of addresses to access your instance.
7. For Description, optionally specify a brief description for the rule.

Figure 2f: VPC peering inbound2_PaloAltoNetworks

For more information, review the AWS VPC documentation: Work with VPC peering connections

Confirm VPC Peering Is Working

1. SSH into the source scanner EC2 instance.
2. Ping the EC2 instance in the target VPC on its private subnet.
3. Optionally, SSH into the target EC2 instance.

Figure 3a: ping target_PaloAltoNetworks

Figure 3b: ssh target_PaloAltoNetworks

Install a Container Defender on the Scanner EC2

1. Go to Compute > Manage > Defenders > Defenders: Deployed and select Manual deploy.
2. Under Deployment method, select Single Defender.
3. In Defender type, select Container Defender - Linux.
4. Select the way Defender connects to Console.
5. (Optional) Set a custom communication port for the Defender to use.
6. (Optional) Set a proxy for the Defender to use for communication with the Console.
7. (Optional) Under Advanced Settings, enable Assign globally unique names to Hosts when you have multiple hosts that can share the same hostname (such as autoscale groups and overlapping IP addresses). After setting the option to ON, Prisma Cloud appends a unique identifier, such as ResourceId, to the host's DNS name. For example, an AWS EC2 host would have a name such as Ip-171-29-1-244.ec2internal-i-04a1dcee6bd148e2d.
8. Copy the install script command from the right-side panel, which is generated according to the options you selected.
9. On the host where you want to install Defender, paste the command into a shell window and run it.

Verify the Install

1. In the Console, go to Manage > Defenders > Defenders: Deployed.
2. Your new Defender should be listed in the table, and the status box should be green and checked.

For more information, review the Prisma Cloud Defender documentation: Install a Single Container Defender

Install NGINX on the Private JFrog Instance

Registry scanning requires a secure (HTTPS) connection, so we need to set up an NGINX reverse proxy in front of Artifactory. A reverse proxy configuration can be generated in the Artifactory UI under Administration > Artifactory > HTTP Settings; copy it into your NGINX configuration. You will need your own SSL certificate and key, placed in the directory specified in the NGINX configuration.

To install NGINX on an EC2 instance running a Linux distribution such as Amazon Linux, CentOS, Ubuntu, or Debian, follow these general steps:

1. Connect to your EC2 instance: use SSH to connect to the instance. You can connect from the scanner instance, since the VPCs are peered.
2. Update the package database to ensure it is up to date:
   sudo yum update -y
3. Install NGINX using the package manager. The package name may vary slightly depending on your Linux distribution.
   For Amazon Linux, CentOS, or RHEL:
   sudo yum install nginx -y
   For Ubuntu or Debian:
   sudo apt install nginx -y
4. Start the NGINX service:
   sudo systemctl start nginx
5. In the conf.d path, create a jfrog.conf file with the configuration described at the link below:
   https://jfrog.com/help/r/artifactory-quick-installation-guide-linux-archive/ssl

Figure 4: jfrog.conf_PaloAltoNetworks

6. Verify the NGINX configuration:
   nginx -t

Enable Route 53 to Point to the Private JFrog IP

Set up a Route 53 private hosted zone:

1. Navigate to the Route 53 console in the AWS Management Console.
2. Choose "Create Hosted Zone".
3. Specify a domain name (e.g., example.com).
4. Select "Private hosted zone".
5. Choose the VPC(s) that you want to associate with the private hosted zone.
6. Click "Create".

Create a record set:

1. Inside the newly created private hosted zone, choose "Create Record Set".
2. In the "Name" field, enter the subdomain you want to map (e.g., www).
3. Select the type of record you want to create (e.g., A for an IPv4 address).
4. In the "Value" field, enter the private IP address.
5. Click "Create" to create the record set.

Update VPC DNS settings:

1. In the VPC settings, ensure that the VPC's DNS settings are configured to use the Route 53 resolver.
2. In the Route 53 console, under "Resolver", you can find the Resolver endpoints and rules.

Solution Architecture

Figure 5: solution architecture_PaloAltoNetworks

Summary

1. The scanner instance attempts to resolve the DNS name private-jfrog.jmontufar.org of the JFrog instance.
2. Route 53 responds that private-jfrog.jmontufar.org corresponds to the server with IP address 10.0.138.85.
3. The scanner instance then initiates a TLS negotiation with 10.0.138.85, including the name private-jfrog.jmontufar.org in the request.
4. NGINX identifies the requested name as belonging to the default route and begins the TLS negotiation, providing the server certificate.
5. Because the certificate installed on NGINX is a wildcard certificate (*.jmontufar.org) and the requested name is private-jfrog.jmontufar.org, the scanner instance recognizes the certificate as valid and proceeds.
6. Upon successful TLS negotiation, NGINX forwards scanning requests from the scanner instance to the private JFrog instance.
7. The scanner instance then transmits the report back to the Prisma Cloud Compute Console.
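The wildcard-certificate acceptance described in the summary can be illustrated with a short sketch. This is a simplified model of leftmost-label wildcard matching as commonly applied in TLS certificate validation, not the scanner's actual implementation; the hostnames are the ones from this walkthrough.

```python
def wildcard_matches(hostname: str, pattern: str) -> bool:
    """Check a certificate name pattern (e.g. *.jmontufar.org) against a hostname.

    Simplified rule: a wildcard may only replace the entire leftmost label,
    and it matches exactly one label.
    """
    host_labels = hostname.lower().split(".")
    pattern_labels = pattern.lower().split(".")
    if len(host_labels) != len(pattern_labels):
        return False  # a wildcard never spans multiple labels
    if pattern_labels[0] != "*" and pattern_labels[0] != host_labels[0]:
        return False
    # All labels after the leftmost one must match exactly.
    return host_labels[1:] == pattern_labels[1:]

print(wildcard_matches("private-jfrog.jmontufar.org", "*.jmontufar.org"))  # True
print(wildcard_matches("a.b.jmontufar.org", "*.jmontufar.org"))            # False
```

This is why the *.jmontufar.org certificate covers private-jfrog.jmontufar.org but would not cover a deeper name such as a.b.jmontufar.org.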
Scan Confirmation

Push images to the on-premises JFrog registry:

docker pull alpine:latest
docker tag alpine:latest <jfrog-domain>/<repository-name>/<image-name>:latest
docker push <jfrog-domain>/<repository-name>/<image-name>:latest

Figure 6: docker_PaloAltoNetworks

Prisma Cloud registry scan settings

Figure 6a: registry scan_PaloAltoNetworks

Prisma Cloud Vulnerability Report

Figure 6b: vuln report_PaloAltoNetworks

Conclusion

By integrating Prisma Cloud with JFrog Artifactory, you can enhance your container security posture by continuously scanning images for vulnerabilities and compliance issues. This integration allows seamless monitoring and remediation, ensuring that your containerized applications remain secure throughout their lifecycle.

References

Create with VPCs in different accounts and the same Region
Route 53
JFrog NGINX Proxy

About the Author

Omoniyi Jabaru is a Senior Customer Success Engineer specializing in Prisma Cloud, Next-Generation Firewall, AWS, Azure, GCP, containers, and Kubernetes. He uses simple approaches to break down complex problems into solutions for global enterprise customers and leverages multi-industry knowledge to inspire success.
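As a footnote to the Scan Confirmation steps, the docker tag and docker push commands compose a fully qualified image reference of the form <jfrog-domain>/<repository-name>/<image-name>:<tag>. A minimal sketch of that composition follows; the helper name and the docker-local repository are illustrative assumptions, and the domain is the example one used in this article.

```python
def image_reference(jfrog_domain: str, repository: str, image: str, tag: str = "latest") -> str:
    """Compose the reference used by docker tag / docker push:
    <jfrog-domain>/<repository-name>/<image-name>:<tag>
    """
    if not all([jfrog_domain, repository, image, tag]):
        raise ValueError("all components must be non-empty")
    return f"{jfrog_domain}/{repository}/{image}:{tag}"

print(image_reference("private-jfrog.jmontufar.org", "docker-local", "alpine"))
# private-jfrog.jmontufar.org/docker-local/alpine:latest
```

Because the domain part of the reference is what docker uses to pick the registry, the pushed image lands behind the same NGINX/Route 53 path that the scanner uses.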
This document presents a step-by-step guide for automating the deployment of the Prisma Cloud Windows container Defender to Google Kubernetes Engine Windows nodes. You will set up a Kubernetes cluster with a Windows node pool and leverage Google Cloud startup scripts on Windows VMs to install the Prisma Cloud container Defenders.
A secrets manager is a secure, centralized tool or service used in information technology and cybersecurity to store, manage, and access sensitive information, commonly referred to as "secrets". These secrets can include credentials, API keys, encryption keys, certificates, and other sensitive data that applications and services require for secure operation. Secrets management systems vary depending on the platform or service you use; for example, cloud providers offer services tailored to their respective ecosystems, such as AWS Secrets Manager, Google Cloud Secret Manager, and Azure Key Vault. Containers often require sensitive information, such as passwords, SSH keys, and encryption keys. Prisma Cloud integrates with many common secrets management platforms to securely distribute secrets from those stores to the containers that need them.
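Inside a container, a secret distributed from one of these stores typically surfaces as an environment variable or a mounted file. The sketch below shows one common way to consume it either way; the variable name and mount path are assumptions for illustration, not Prisma Cloud defaults.

```python
import os
from pathlib import Path
from typing import Optional

def read_secret(env_var: str, file_path: str) -> Optional[str]:
    """Return a secret from an environment variable, falling back to a mounted file."""
    value = os.environ.get(env_var)
    if value:
        return value
    path = Path(file_path)
    if path.is_file():
        # Injected secret files often end with a newline; strip surrounding whitespace.
        return path.read_text().strip()
    return None  # secret was not injected

# Example: look for DB_PASSWORD in the environment, then under /run/secrets.
secret = read_secret("DB_PASSWORD", "/run/secrets/db_password")
```

Keeping the lookup in one helper makes it easy to switch injection mechanisms without changing application code, and avoids baking secrets into the image itself.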
The Prisma Cloud Runtime Security DaemonSet auto-deploy feature uses a kubeconfig file generated from a Kubernetes service account with limited permissions.

Purpose

If you aim to streamline the deployment of Defender DaemonSets to a cluster, or lack direct kubectl access to your cluster, you can conveniently deploy Defender DaemonSets directly from the Console UI. The auto-defend feature also lets you easily upgrade any Defender you have deployed before: perform the upgrade from the Console UI, or automate it by making API calls to the appropriate Console endpoints.
This document showcases how to deploy the Prisma Cloud Compute Console in a Kubernetes cluster on any cloud provider and use an NGINX Ingress controller as a proxy for the Console.

Purpose

For many enterprises, moving production workloads into Kubernetes brings additional challenges and complexity around application traffic management. An Ingress controller abstracts away the complexity of Kubernetes application traffic routing and provides a bridge between Kubernetes services and external ones.