I see a lot of customers using Compute, and I often get asked about a good way for an organization to get started with it. It is difficult to give a “one-answer-fits-all” response, since no two customers are built the same.
I will break down a real-life example of setting up the Compute Console for a DEV and a PROD team, each with their own set of Rules and Alerts, so you can see how to create a Filter that can be applied throughout the Compute workflow.
Let's suppose our DEV and PROD teams are assigned the following resources in their organization:
DEV - Hosts, Clusters, Registry, Code Repository, Serverless Functions
PROD - Clusters and Serverless Functions
The DEV team handles all of the initial development: coding templates (Terraform/Kubernetes/CloudFormation), building images that are pushed to the Registry and then used to launch containers/VMs, and coding and deploying Serverless functions for testing. The PROD team only launches the resources the application needs to run, so only Clusters and Serverless Functions are in use.
We want to define a filter for those resources. Using an Account ID is often easiest, but you can also map the “Collection” to the names of your containers/functions/hosts, etc. In this scenario, to keep things simple, I am going to map each “Collection” to the Account ID associated with that team's resources: all of DEV's resources live in Account ID “11111111-DEV” and all of PROD's resources live in Account ID “22222222-PROD”.
Manage > Collections and Tags
You will want to create two Collections, one per team, each mapped to the Account IDs, container names, image names, etc. of the resources you need to alert on. These Collections can then be applied in the Radar/Defend/Monitor sections to filter the related compute resources.
Collections are predefined filters for segments of your environment. They’re centrally defined, and they’re used in rules and views across the product.
Since we already know the Account IDs of the teams we need to set up, and each team's resources live entirely in its own account, I went ahead and created the two Collections, each pointing to one of those IDs. This maps each Collection to every compute-related resource that exists within that Account.
We can further refine our Collections by adding additional fields, like the names of the containers or images. For example, if you follow a specific naming convention for your container names, you can add those to narrow the Collection's scope, i.e. eks-devteam-app, eks-devteam-db, eks-prodteam-app, etc. There are many possible combinations; it depends on how your teams are structured, which compute resources they manage, and how they want to be alerted.
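To make the scoping idea concrete, here is a minimal sketch in plain Python (not a Prisma API) of how Account IDs and wildcard name patterns combine to decide whether a resource falls inside a Collection. The Collection shapes and the `in_collection` helper are illustrative assumptions built from the example accounts and naming convention above:

```python
from fnmatch import fnmatch

# Hypothetical Collection definitions: each maps a team to an Account ID
# and a set of wildcard patterns for container names.
collections = {
    "DEV":  {"account": "11111111-DEV",  "containers": ["eks-devteam-*"]},
    "PROD": {"account": "22222222-PROD", "containers": ["eks-prodteam-*"]},
}

def in_collection(collection: dict, account: str, container: str) -> bool:
    """True if a resource's account and name both fall inside the Collection's scope."""
    return account == collection["account"] and any(
        fnmatch(container, pattern) for pattern in collection["containers"]
    )

# A DEV container in the DEV account matches the DEV Collection...
print(in_collection(collections["DEV"], "11111111-DEV", "eks-devteam-app"))   # True
# ...but a PROD container in the PROD account does not.
print(in_collection(collections["DEV"], "22222222-PROD", "eks-prodteam-db"))  # False
```

The key point the sketch shows: adding name patterns narrows the scope, so a resource must satisfy every field you define on the Collection.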
Now that we have the Collections created and defined, we can set up the Rules/Policies that will be applied to each team's resources. In the Defend section, we define what to look for: vulnerabilities within your applications and/or containers, compliance issues, or runtime events occurring at any given time. Make sure you have a Rule in place for each section you want to monitor (Vulnerabilities, Compliance, Runtime, WAAS, Access and/or CNNF) within the areas you are defending. So if you have Defenders installed on your Hosts/VMs/Instances, go to the Hosts/Host Policy sections. If you have containerized environments running Docker, Kubernetes, OpenShift, etc., set up a Rule in the Container Policy and/or Images/Containers and Images sections. Each section has sub-areas for rules covering deployed Hosts/Images, and may also show a CI section for that type of use.
If your developers use Jenkins or other DevOps pipelines, you will also need to set up a CI rule to deliver a pass or fail in those builds. This applies when you build an image and scan it before it is pushed to the registry: Prisma can return a pass or fail code to your pipeline depending on the settings you configure.
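The pipeline gate works off an exit code: zero passes the build, non-zero fails it. Here is a rough sketch of that gating logic only; the findings list, severity ranking, and `FAIL_THRESHOLD` are illustrative assumptions, not the actual scanner output format or the CI rule's real fields:

```python
# Illustrative scan findings; a real CI scan would produce these from the image.
findings = [
    {"severity": "high", "cve": "CVE-2023-0001"},
    {"severity": "low",  "cve": "CVE-2023-0002"},
]

# Severity ranking and the hypothetical threshold configured on the CI rule.
RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}
FAIL_THRESHOLD = "high"  # fail the build at high severity or above

def ci_verdict(findings, threshold=FAIL_THRESHOLD) -> int:
    """Return a shell-style exit code: 0 = pass the build, 1 = fail it."""
    worst = max((RANK[f["severity"]] for f in findings), default=-1)
    return 1 if worst >= RANK[threshold] else 0

exit_code = ci_verdict(findings)
print("FAIL" if exit_code else "PASS")  # FAIL: a high-severity CVE was found
```

Your pipeline step simply propagates that exit code, so the build stops before a failing image ever reaches the registry.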
Repeat this process for other sections you require, i.e. Compliance, Runtime, WAAS, Access and/or CNNF (Self-Hosted only).
Now we have two custom Rules created, each protecting its respective resources. If you click Entities in Scope, you can see the specific resources protected by that rule. When Prisma finds a match while working through the rules/policies, it stops and does not evaluate any further. The “Order” of the rules is therefore important: make sure your custom rules sit on top, above any Default rules or other rules scoped to the resources in question.
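The first-match behavior is worth internalizing, because a broad Default rule sitting above your custom rule will silently absorb everything. A minimal sketch of that top-to-bottom evaluation (the rule shapes here are hypothetical, not Prisma's internal format):

```python
from fnmatch import fnmatch

# Rules are evaluated top to bottom; the first rule whose scope matches wins.
rules = [
    {"name": "DEV custom rule",  "scope": "eks-devteam-*"},
    {"name": "PROD custom rule", "scope": "eks-prodteam-*"},
    {"name": "Default - all",    "scope": "*"},  # catch-all must stay last
]

def matching_rule(rules, resource: str) -> str:
    for rule in rules:  # order matters: evaluation stops at the first match
        if fnmatch(resource, rule["scope"]):
            return rule["name"]
    return "no rule: resource is unprotected"

print(matching_rule(rules, "eks-devteam-app"))   # DEV custom rule
print(matching_rule(rules, "some-other-host"))   # Default - all
```

If the Default catch-all were moved to the top of the list, every resource would match it first and the custom rules would never apply.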
Here, we are going to view the Events/Incidents/Vulnerabilities/Compliance rates for all Compute resources being protected. The Console UI is your first line of defense and where all Alerts are surfaced initially. Without any integrations or alert profiles configured, you will need to manage Alerts here within the Console.
There is an option to select the “Collection”, so you can filter those events, incidents, etc. down to the specific DEV/PROD resources we created earlier.
Repository, Registry and Serverless
Since our DEV team is also using a Code Repository, a Container Registry for their images, and Lambda functions for testing and production, we will also need to configure Rules and Function Scans to properly monitor all of the resources used to deploy the live application.
With the rules created, Prisma Cloud Compute analyzes each protected resource against each existing rule. Order matters here as well: rules higher in the list take precedence. Without a rule in place, you will not see alerts/events for those resources.
Manage > Alerts
If you want your teams to handle alerts themselves, without you needing to intervene on their behalf, you can create an Alert Profile to deliver those specific alerts directly to them.
We offer multiple provider types to integrate with, i.e. email, ServiceNow, Jira, PagerDuty, Webhook, etc., and in our upcoming Iverson release we are allowing customers to take advantage of any previously configured integrations (SaaS/Enterprise only).
You choose the Provider, configure its details, and finally select a trigger containing the alerts you want delivered to your team. In the example images, we add triggers on each Alert rule, making sure to select the rules for each respective team.
Screenshots showing alert profile configuration, including Alert trigger for their respective rules that were previously set up earlier.
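If you go the Webhook route, the team's endpoint just needs to accept the alert as POSTed JSON. A minimal receiver sketch; the payload fields (`severity`, `ruleName`, `message`) are illustrative assumptions, not Prisma's exact webhook schema, so check what your Alert Profile actually sends:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def format_alert(alert: dict) -> str:
    # Hypothetical fields; inspect the real payload your Alert Profile delivers.
    return f"[{alert.get('severity')}] {alert.get('ruleName')}: {alert.get('message')}"

class AlertHandler(BaseHTTPRequestHandler):
    """Accepts POSTed alert JSON from the Console and logs a one-line summary."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length))
        print(format_alert(alert))
        self.send_response(200)   # acknowledge so the sender doesn't retry
        self.end_headers()

# To run, listen where the Webhook provider in the Alert Profile points, e.g.:
#   HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```

From there the team can route the alert into whatever tooling they already use, without anyone touching the Console.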
The audit aggregation period listed in the Alert section applies to runtime events and Incidents, which can occur at any given time. Vulnerability and compliance scans run every 24 hours by default, but you can modify this under Manage > System > Scan and set the interval to a different number of hours.
You can see why the setup for Compute is difficult to reduce to a “one-answer-fits-all”, but the goal here was to walk through one example scenario so you have a clearer view when applying it to your own organization.
We offer multiple webinars for the Compute side of Prisma, along with some new tools that can help you see your usage/utilization within Prisma. Let us help ensure you are taking full advantage of everything you are paying for, and that you are able to fully manage it as well.