Palo Alto Cortex XDR Event Forwarding to Google SecOps (Chronicle)


This document provides detailed steps for forwarding Cortex XDR events and telemetry to Google Security Operations (Google SecOps SIEM/Chronicle).

You can contact google-tech@paloaltonetworks.com if you have any further questions.

 

Cortex XDR Event Forwarding to Google SecOps (Chronicle) Solution

 

The telemetry data is pulled into an intermediary bucket in the customer tenant and the native integration is set up from there.

 

The following diagram demonstrates this.

[Diagram: Cortex XDR telemetry pulled into an intermediary GCS bucket in the customer tenant, then ingested by the native Chronicle feed]

 

Steps to Set Up the Integration (with an example)

Create Required GCS Bucket

The customer creates a GCS bucket in the customer/MDR tenant project; let’s call this project Project1.

For this example, we are using the following bucket (your bucket name will be different):

cortex-xdr-events-destination - used to hold the XDR telemetry data temporarily.

 

Make sure that the bucket is in the same region as the customer’s Chronicle region.
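
For reference, a minimal sketch of creating such a bucket with the gcloud CLI (the bucket name and region here are from this example; substitute your own):

gcloud storage buckets create gs://cortex-xdr-events-destination --location=us-central1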

[Screenshot: GCS bucket creation]

 

Set up Cortex XDR Event Forwarding 

Set up Cortex XDR event forwarding and download the service account key; we will call it xdr_sa_key.json going forward. For the complete guide to event forwarding, please refer to this link. The following screenshot shows the customer performing this action. At the end of this step, you should have:

  1. Storage Path (GCS bucket URL)
  2. Service account JSON key

[Screenshot: Cortex XDR event forwarding configuration]

Secret Manager Setup

Create a secret called EVENT_FRWD_CRTX_KEY and add the contents of the service account key file xdr_sa_key.json as the value of the secret.
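
A minimal sketch of the same step with the gcloud CLI (assumes xdr_sa_key.json is in the current directory):

gcloud secrets create EVENT_FRWD_CRTX_KEY --replication-policy="automatic"

gcloud secrets versions add EVENT_FRWD_CRTX_KEY --data-file="xdr_sa_key.json"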

[Screenshot: creating the secret in Secret Manager]

 

Set up Native Chronicle Feed Integration

  • Create a new feed by navigating to SIEM Settings > Feeds > ADD NEW.

[Screenshot: adding a new feed]

 

  • Provide a feed name and select the options as shown below. Click GET A SERVICE ACCOUNT, then click Next.

 

[Screenshot: feed name and source options]

 

  • Provide the bucket name and select the options as shown below. Add a namespace if that’s relevant to you or your customer. It is recommended to add an ingestion label. Copy the service account name.

[Screenshot: feed bucket details]

  • Review the details added and click Submit.

[Screenshot: feed review screen]

 

  • The feed should now be available under Feeds (the feed name in the example below is different, as we are using a feed created earlier).

[Screenshot: feed listed under Feeds]

 

  • For now, disable the feed. We will enable it later.

[Screenshot: disabling the feed]

 

IAM Setup

The Cortex XDR service account created during the event forwarding setup already has access to the source bucket. Grant that service account the Storage Object Admin and Storage Legacy Bucket Reader roles on the bucket created in Step 1 (cortex-xdr-events-destination). Also grant the Chronicle service account created during feed creation (copied when adding the feed above) the Storage Object Viewer role (roles/storage.objectViewer) on the same bucket.
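
A sketch of these grants with the gcloud CLI; both service account emails below are placeholders, so substitute the Cortex XDR and Chronicle service accounts from the earlier steps:

gcloud storage buckets add-iam-policy-binding gs://cortex-xdr-events-destination --member="serviceAccount:<cortex-xdr-sa>" --role="roles/storage.objectAdmin"

gcloud storage buckets add-iam-policy-binding gs://cortex-xdr-events-destination --member="serviceAccount:<cortex-xdr-sa>" --role="roles/storage.legacyBucketReader"

gcloud storage buckets add-iam-policy-binding gs://cortex-xdr-events-destination --member="serviceAccount:<chronicle-sa>" --role="roles/storage.objectViewer"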

 

[Screenshot: bucket-level IAM permissions]

 

Set up the Solution (One-time setup)

  • Enable the following APIs (a CLI sketch follows this list):
    1. Cloud Run
    2. Artifact Registry
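
A minimal sketch of enabling both APIs from Cloud Shell (assumes the correct project is already selected):

gcloud services enable run.googleapis.com artifactregistry.googleapis.com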
  • Open Cloud Shell and download the code using:

git clone https://github.com/PaloAltoNetworks/google-cloud-cortex-chronicle.git

  • Run cd google-cloud-cortex-chronicle/ to enter the repository. The contents of this directory are shown below.

[Screenshot: contents of the google-cloud-cortex-chronicle directory]

  • Open the file env.properties with the editor of your choice and update the values of the variables as shown below. JOB_SCHEDULE_MINS can be adjusted based on the size and frequency of the data pushed by Cortex.

REGION=us-central1 # update this to the region you want
REPO_NAME=panw-chronicle # the repository name to create
IMAGE_NAME=sync_cortex_bucket # the image name to create
GCP_PROJECT_ID=xdrxxxxxtion # update this to your project ID
JOB_NAME=cloud-run-job-cortex-data-sync # the Cloud Run job name to create
PROJECT_NUMBER=80xxxxx9 # update this to your project number
# JOB ENV VARIABLES
SRC_BUCKET=xdr-us-xxxxx-event-forwarding # update this to the Cortex XDR GCS bucket
DEST_BUCKET=cortex-xdr-events-destination # update this to the GCS bucket name you created
SECRET_NAME=EVENT_FRWD_CRTX_KEY # must exactly match the secret you created
JOB_SCHEDULE_MINS=30

  • Provide execute permissions to deploy.sh using the command:

chmod 744 deploy.sh

  • Run the script using ./deploy.sh. This step does the following:
    1. Creates an Artifact Registry repository
    2. Builds an image for a Cloud Run job
    3. Pushes the image to Artifact Registry
    4. Creates a Cloud Run job using this image
    5. Creates a trigger that runs this Cloud Run job every JOB_SCHEDULE_MINS minutes (configured in env.properties)
  • After the script finishes, grant the job’s service account access to the Secret Manager secret you created earlier (the service account is shown in the script output; see below).
 

[Screenshot: deploy script output showing the service account]

 

Grant permission through Secret Manager > Permissions (role: Secret Manager Secret Accessor):
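
Equivalently, a sketch with the gcloud CLI (the service account email is a placeholder; use the one from the script output):

gcloud secrets add-iam-policy-binding EVENT_FRWD_CRTX_KEY --member="serviceAccount:<cloud-run-job-sa>" --role="roles/secretmanager.secretAccessor"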

 

[Screenshot: granting the Secret Manager Secret Accessor role]

 

Verify setup

 

  • Verify that the artifacts mentioned above were created.
  • You can wait for JOB_SCHEDULE_MINS minutes or perform the following steps to force-execute the job. This only needs to be done once, to test.
  • Go to the Cloud Run job. Initially, it might show “no executions” in the status.

[Screenshot: Cloud Run job with no executions]

  • Force Execute

[Screenshot: forcing a job execution]
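
You can also trigger an execution from Cloud Shell; a sketch, using the job name and region from env.properties:

gcloud run jobs execute cloud-run-job-cortex-data-sync --region=us-central1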

 

  • Check logs

[Screenshot: Cloud Run job logs]
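
The logs can also be read from the CLI; a sketch (the job name is from env.properties):

gcloud logging read 'resource.type="cloud_run_job" AND resource.labels.job_name="cloud-run-job-cortex-data-sync"' --limit=50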

 

  • Now check the destination bucket.
    • It should contain the files downloaded from the XDR bucket.

[Screenshot: destination bucket contents]
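
A quick sketch of listing the bucket from the CLI:

gcloud storage ls gs://cortex-xdr-events-destination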

  • Download one of the files, unzip it, and note down one or more event IDs, as shown below.

[Screenshot: event IDs inside a downloaded file]
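
A sketch of downloading and decompressing a file (the file name is a placeholder, and the forwarded files are assumed to be gzip-compressed):

gcloud storage cp gs://cortex-xdr-events-destination/<file-name>.gz .

gunzip <file-name>.gz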

 

  • Now go to Chronicle > SIEM Settings > Feeds > [Your Feed Name] > Enable Feed.

[Screenshot: enabling the feed]

 

  • Search for the event ID in Chronicle with a RAW search or UDM search (you may have to wait a few minutes for UDM search). You should find the event in Chronicle.

[Screenshot: event found in Chronicle search]

 

 

Regular Usage & Monitoring

  • You do not have to change anything going forward for this integration.
  • The feed should remain enabled from this point forward unless you want to troubleshoot.
  • You can change the schedule based on your requirements. A 30-minute schedule is recommended initially, while there is still a large amount of data in the source bucket; gradually reduce it to about 5 minutes as things stabilize. You can change this directly on the trigger, as shown below.

[Screenshot: updating the trigger schedule]
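
Assuming the trigger created by deploy.sh is a Cloud Scheduler HTTP job, the schedule can also be changed from the CLI; a sketch (the trigger name is a placeholder; use the one created by the script):

gcloud scheduler jobs update http <trigger-name> --schedule="*/5 * * * *" --location=us-central1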

  • Monitor the job’s execution history in Cloud Run.

[Screenshot: Cloud Run job execution history]

 

Cloud Billing Costs

This solution predominantly uses the following billable resources:

  • GCS bucket
  • Cloud Run job
  • Artifact Registry

Assuming the deployment region is us-central1, about 10 GB of data persistently stored every day, and a Cloud Run job that runs every 10 minutes for 30 days, the estimated cost is about USD 10 per month.

These are approximate costs. Your costs may vary based on the amount of data and the frequency of the job.

 

Troubleshooting

  • If data is not available in Chronicle:
    1. Check whether the feed is enabled.
    2. Check the Cloud Run job logs.
    3. The very first run may have to deal with GBs of data if event forwarding was enabled for many days before this setup. Check the job logs and wait for it to finish.
    4. Wait a few minutes if the event is from a recent file.
    5. Sometimes Cortex sends older files, so try expanding the search time range by a few hours.
  • Troubleshooting the Cloud Run job:
    1. Check the logs for any errors.
    2. If the job logs no files, please check with Cortex XDR support, as the files may not be present in the source bucket.
  • How to reduce GCS costs?

You can set up a lifecycle management rule to delete objects after 14 days. Follow the steps shown in the screenshots below.

[Screenshots: creating a lifecycle rule that deletes objects after 14 days]
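
Equivalently, a sketch of setting the same rule from the CLI (the 14-day age matches this example):

cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 14}}]}
EOF
gcloud storage buckets update gs://cortex-xdr-events-destination --lifecycle-file=lifecycle.json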

 

Uninstallation

 

Go to the installation folder, provide execute permissions to uninstall.sh, and execute it using the command:

chmod 744 uninstall.sh && ./uninstall.sh