This document provides detailed steps for forwarding Cortex XDR events/telemetry to Google Security Operations (SIEM/Chronicle).
If you have any further questions, you can contact google-tech@paloaltonetworks.com.
The telemetry data is pulled into an intermediary bucket in the customer tenant and the native integration is set up from there.
The following diagram demonstrates this.
The customer creates a GCS bucket in the Customer/MDR tenant; we will refer to this project as Project1.
For this example, we are using the following bucket (your bucket name will be different):
cortex-xdr-events-destination - used to temporarily hold the XDR telemetry data.
Make sure that the bucket is in the same region as the customer's Chronicle region.
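If you prefer the CLI over the console, a minimal sketch of the bucket creation; the project name, region, and bucket name are the example values used above, so substitute your own:

# Create the intermediary bucket in the same region as your Chronicle tenant
gcloud storage buckets create gs://cortex-xdr-events-destination \
    --project=Project1 \
    --location=us-central1 \
    --uniform-bucket-level-access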
Set up Cortex XDR event forwarding and download the service account key; we will call it xdr_sa_key.json going forward. For the complete guide to event forwarding, please refer to this link. The following screenshot shows the customer performing this action. At the end of this step, the customer must have the xdr_sa_key.json service account key file downloaded.
Create a secret called EVENT_FRWD_CRTX_KEY and add the contents of the service account JSON (xdr_sa_key.json) as the value of the secret.
Review the details added and Submit.
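The same secret can also be created from the CLI; a sketch, assuming gcloud is authenticated against the customer project:

# Create the secret with the downloaded key file as its value
gcloud secrets create EVENT_FRWD_CRTX_KEY \
    --replication-policy="automatic" \
    --data-file=xdr_sa_key.json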
The Cortex XDR service account created during the event forwarding setup already has access to the source bucket. Now grant that service account the Storage Object Admin and Storage Legacy Bucket Reader roles on the bucket created in Step 1 (cortex-xdr-events-destination). Also grant the Chronicle service account created during feed creation (see the feed setup in Chronicle below) the Storage Object Viewer role (roles/storage.objectViewer) on the same bucket.
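A sketch of the equivalent gcloud commands; the two service account emails below are placeholders, so substitute the actual Cortex XDR and Chronicle accounts from your tenant:

# Placeholder service accounts - replace with the real emails from your tenant
XDR_SA="cortex-xdr@example-project.iam.gserviceaccount.com"
CHRONICLE_SA="chronicle-feed@example-project.iam.gserviceaccount.com"

# Cortex XDR account: write and manage objects in the destination bucket
gcloud storage buckets add-iam-policy-binding gs://cortex-xdr-events-destination \
    --member="serviceAccount:${XDR_SA}" --role="roles/storage.objectAdmin"
gcloud storage buckets add-iam-policy-binding gs://cortex-xdr-events-destination \
    --member="serviceAccount:${XDR_SA}" --role="roles/storage.legacyBucketReader"

# Chronicle feed account: read-only access to ingest the objects
gcloud storage buckets add-iam-policy-binding gs://cortex-xdr-events-destination \
    --member="serviceAccount:${CHRONICLE_SA}" --role="roles/storage.objectViewer"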
git clone https://github.com/PaloAltoNetworks/google-cloud-cortex-chronicle.git
REGION=us-central1 # update this to the region you want
REPO_NAME=panw-chronicle # The repo name to create
IMAGE_NAME=sync_cortex_bucket # The image name to create
GCP_PROJECT_ID=xdrxxxxxtion # update this to your project ID
JOB_NAME=cloud-run-job-cortex-data-sync # The Cloud Job name to create
PROJECT_NUMBER=80xxxxx9 # update this to your project number
# JOB ENV VARIABLES
SRC_BUCKET=xdr-us-xxxxx-event-forwarding # update this to Cortex XDR GCS bucket
DEST_BUCKET=cortex-xdr-events-destination # Update to the GCS name you created
SECRET_NAME=EVENT_FRWD_CRTX_KEY # Need to match exactly the secret you created
JOB_SCHEDULE_MINS=30 # How often (in minutes) the sync job runs
chmod 744 deploy.sh
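After updating the variables and granting the secret permission shown in the next step, run the script; this mirrors the uninstall invocation at the end of this document (confirm the exact order in the repository's README):

./deploy.sh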
Grant the Cloud Run job's runtime service account access to the secret through Secret Manager -> Permissions, assigning the Secret Manager Secret Accessor role:
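The equivalent CLI call, as a sketch; the member shown is the project's default compute service account built from the PROJECT_NUMBER placeholder above, so substitute whichever service account your Cloud Run job actually runs as:

# Allow the job's service account to read the secret at runtime
gcloud secrets add-iam-policy-binding EVENT_FRWD_CRTX_KEY \
    --member="serviceAccount:80xxxxx9-compute@developer.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"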
Regular Usage & Monitoring
This solution primarily uses the following billed resources: Cloud Storage (the intermediary bucket), Cloud Run (the sync job), Artifact Registry (the container image), and Secret Manager (the service account key).
Assuming deployment in us-central1, about 10 GB of data persistently stored every day, and a Cloud Run job that runs every 10 minutes for 30 days, the estimated cost is about USD 10 per month.
These are approximate costs; yours may vary based on the amount of data and the frequency of the job.
You can set up a Lifecycle Management rule to delete objects after 14 days. Follow the steps shown in the screenshots below.
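The same rule can be applied from the CLI; a minimal sketch using a lifecycle configuration file:

# lifecycle.json - delete objects 14 days after creation
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 14}
    }
  ]
}
EOF
gcloud storage buckets update gs://cortex-xdr-events-destination \
    --lifecycle-file=lifecycle.json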
Go to the installation folder, provide execute permissions to uninstall.sh, and execute it using the command
chmod 744 uninstall.sh && ./uninstall.sh