
Playbook of the Week: Using ChatGPT in Cortex XSOAR



 

This blog was written by Sameh Elhakim.

 

This post is also available in: 日本語 (Japanese)

 

You might have used ChatGPT to help you write a script or generate an image. So now that you know Cortex XSOAR has a ChatGPT integration, are you wondering how you might apply it to your security operations to facilitate incident response? 

Quick Note: For more information on how we are incorporating AI across our Cortex portfolio to drive the autonomous modern SOC, please refer to the XSIAM Solution brief.

 

Here are a few examples of how you can start using ChatGPT within your XSOAR playbooks to deliver information in a user-friendly way:

  • Incident analysis delivered to security analysts in readable, natural language.
  • Improved incident ticket responses with information on analysis, impact, and recommendations.
  • For MSSPs, clients receive a description and analysis that reads as if it were written by a human, which improves clarity and user satisfaction, and ChatGPT can respond far faster than a human analyst.

Before we dive into the playbook, let’s have a look at how ChatGPT rewrites the incident details from an ingested alert, adding richer context and following the format provided:

 

ChatGPT Request

We are using ChatGPT 3.5 in this request and the playbook.

Using ChatGPT 3.5

 

 

ChatGPT Response

Analysis

ChatGPT Response Analysis

 

Impact Analysis

Impact Analysis

 

Actions/Recommendations

Actions and recommendations

Additional actions and recommendations

Amazing, right?

Now it is time to integrate ChatGPT into an automated playbook. This playbook is built using the standard ticketing template, which covers:

  • Details (Analysis)
  • Impact
  • Actions/Recommendations

You can tweak the ChatGPT response by giving it a different set of output criteria. Quick tip: put the criteria in bullet points so ChatGPT can format its response accordingly. Note: when using ChatGPT to present data, we recommend following your organization’s data classification policies.
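
For illustration, a prompt with bulleted output criteria might look something like the sketch below. The section names mirror the ticketing template above; the placeholder fields and sample values are purely illustrative and are not the exact prompt used in the playbook.

```python
# Illustrative prompt template only - not the exact prompt from the playbook.
# The placeholders ({incident_title}, {hostname}, {indicators}) stand in for
# whatever data your own playbook tasks collect.
PROMPT_TEMPLATE = """You are assisting a SOC analyst. Analyze the incident below and
answer using exactly these sections:
  - Details (Analysis): what happened, in plain language
  - Impact: the potential business and security impact
  - Actions/Recommendations: concrete next steps, as bullet points

Incident title: {incident_title}
Affected host: {hostname}
Enriched indicators: {indicators}
"""

prompt = PROMPT_TEMPLATE.format(
    incident_title="Suspicious PowerShell execution",
    hostname="WIN10-FINANCE-07",
    indicators="185.220.101.4 (malicious), hxxp://malware.example/payload.ps1",
)
print(prompt)
```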

 

 

How to Use the ChatGPT Integration in a Playbook

We will use the following integration (OpenAI ChatGPT v3):

https://xsoar.pan.dev/docs/reference/integrations/open-ai-chat-gpt-v3

 

Generate an OpenAI API key

Note: Using the OpenAI API requires a pay-as-you-go subscription after the free trial ends.

  1. Log in to your OpenAI account using the following link:

https://platform.openai.com/docs/introduction

  2. Click your profile at the top right, then select View API keys

Menu with API keys

 

  3. Click Create new secret key

Creating new secret key

 

In the window that opens, press Create new secret key to confirm.

Window to create new secret key

 

Copy the new secret key before the pop-up closes, as it will not be accessible again once you close the window.

Next window to create new secret key
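
If you want to confirm the key works before wiring it into XSOAR, a quick test with OpenAI’s official Python SDK (the v1.x client shown here; older 0.x versions use a different call signature) looks roughly like the sketch below. The environment variable name is just a convention.

```python
import os

from openai import OpenAI  # pip install openai (v1.x SDK)

# Read the key from an environment variable rather than hard-coding it.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Minimal round trip against the same model family used in this playbook.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(response.choices[0].message.content)
```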

 

Configure OpenAI ChatGPT v3 Instance

  1. Download the content pack from the Cortex Marketplace
  2. Add an Instance

Adding an Instance

 

  3. Paste the copied secret key, then press Save & exit

Saving the instance

 

 

Use Case

Now that you have configured your ChatGPT integration instance, you can use it in a playbook. You can modify the playbook tasks as needed to suit your automation use cases.

 

Incident Enrichment

The enrichment is done in two separate phases:

  • Indicator extraction: Gather additional details, such as logs, and extract artifacts from your SIEM solution.
  • Indicator enrichment: Enrich the extracted indicators using a threat intel feed such as Unit 42 Intel or VirusTotal.

Indicator enrichment tasks in playbook
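
To make the two phases concrete, here is a standalone Python sketch of the same idea (it is not the playbook code itself): a simple regex stands in for XSOAR’s indicator extraction, and the hypothetical lookup_reputation() stub stands in for a real threat intel lookup such as Unit 42 Intel or VirusTotal.

```python
import re

# Phase 1 stand-in: pull artifacts (here, just IPv4 addresses) out of raw alert/SIEM text.
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_indicators(raw_alert_text: str) -> list:
    return sorted(set(IPV4_RE.findall(raw_alert_text)))

# Phase 2 stand-in: a hypothetical stub in place of a real threat intel feed.
def lookup_reputation(indicator: str) -> str:
    known_bad = {"185.220.101.4"}  # illustrative value only
    return "malicious" if indicator in known_bad else "unknown"

alert = "Outbound connection from 10.1.2.3 to 185.220.101.4 over TCP/443"
enriched = {ioc: lookup_reputation(ioc) for ioc in extract_indicators(alert)}
print(enriched)  # {'10.1.2.3': 'unknown', '185.220.101.4': 'malicious'}
```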

 

Incident Analysis

For incident analysis, you send all of the data collected in the previous tasks to ChatGPT, as shown below, to determine the severity of the incident:

ChatGPT analysis section of playbook
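
In the playbook this request is made by the ChatGPT integration task, but outside XSOAR the equivalent API call looks roughly like the sketch below (v1.x openai SDK; the incident fields and values are illustrative only).

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Illustrative stand-in for the data collected by the earlier playbook tasks.
incident_context = {
    "title": "Suspicious PowerShell execution",
    "hostname": "WIN10-FINANCE-07",
    "indicators": "185.220.101.4 (malicious), hxxp://malware.example/payload.ps1",
}

prompt = (
    "Analyze the following incident, assess its severity, and respond with these "
    "sections: Details (Analysis), Impact, Actions/Recommendations.\n"
    f"Title: {incident_context['title']}\n"
    f"Host: {incident_context['hostname']}\n"
    f"Enriched indicators: {incident_context['indicators']}\n"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```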

 

ChatGPT Task Configuration

As mentioned earlier, you can configure your prompts to ChatGPT however you like, but here are some tips for optimal output:

  • Keep it short
  • Provide specific instructions for how the output should be displayed
  • Provide output criteria in bulleted format
  • Add your data parameters (e.g., incident title or hostname of the compromised machine)

ChatGPT prompt output in playbook

 

ChatGPT Prompt

Once you have configured the task with your input parameters, the result should look very similar to the ChatGPT web output.

 

ChatGPT prompt output

 

ChatGPT Response Output

It is important to monitor how many tokens are being used to communicate with ChatGPT via the API integration. OpenAI calculates the cost of API usage based on the total number of tokens used in your API calls (prompt + answer).

 

OpenAI provides more details about how it defines and counts tokens.
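
If you want to estimate usage before a prompt is sent, OpenAI’s open-source tiktoken tokenizer can count tokens locally; a minimal sketch follows (the count covers the prompt text only, and chat requests add a few tokens of per-message overhead).

```python
import tiktoken  # pip install tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    """Approximate number of tokens `text` consumes for the given model."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

prompt = "Analyze the following incident and assess its severity..."
print(count_tokens(prompt))  # the response tokens also count toward the billed total
```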

 

ChatGPT response information in incident War Room
 

Incident Response

In this phase, any malicious indicators are blocked on Cortex XDR or the firewall. The decision is based on each indicator's reputation/score as determined during the enrichment phase.

Incident response actions based on incident severity
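
As a rough illustration of that decision, the sketch below reads the reputation scores that the enrichment phase left in context (XSOAR's DBotScore, where 3 means bad/malicious) and collects the indicators to block. It shows the selection logic only; the actual blocking is done by the Cortex XDR or firewall tasks in the playbook, and the context shape may vary with your integrations.

```python
# Selection logic only - blocking itself is handled by Cortex XDR / firewall tasks.
# DBotScore values: 0 = unknown, 1 = good, 2 = suspicious, 3 = bad/malicious.
import demistomock as demisto  # the XSOAR runtime injects `demisto` for real automations

dbot_scores = demisto.context().get("DBotScore") or []
if isinstance(dbot_scores, dict):  # a single entry can come back as a dict
    dbot_scores = [dbot_scores]

to_block = [entry.get("Indicator") for entry in dbot_scores if entry.get("Score") == 3]
demisto.results("Indicators queued for blocking: " + (", ".join(to_block) or "none"))
```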

 

Incident Closure

As part of incident resolution, an email with the ChatGPT response details is sent to the SOC analyst, and a ServiceNow ticket is generated and updated with closure notes from the ChatGPT output.

Incident closure actions
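
A rough sketch of that closure logic as a single automation is shown below. It assumes a generic mail sender integration (the send-mail command) and the ServiceNow v2 content pack; the context key, recipient, ticket ID, and the close_notes argument are illustrative, so verify the exact command and argument names against your installed integrations.

```python
# Illustrative closure step - in the playbook these are separate tasks.
import demistomock as demisto  # the XSOAR runtime injects `demisto` for real automations

# Hypothetical context key holding the ChatGPT write-up from the analysis task.
chatgpt_summary = demisto.context().get("ChatGPTResponse", "No ChatGPT response recorded")

# Email the SOC analyst with the ChatGPT response details.
demisto.executeCommand("send-mail", {
    "to": "soc-analyst@example.com",        # illustrative recipient
    "subject": "Incident closure summary",
    "body": chatgpt_summary,
})

# Update the ServiceNow ticket with closure notes from the ChatGPT output.
demisto.executeCommand("servicenow-update-ticket", {
    "id": "INC0012345",                     # illustrative ticket ID
    "close_notes": chatgpt_summary,         # argument name may differ in your content pack
})
```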

 

Sample Email

This is an example of the email received by the analyst from XSOAR with the ChatGPT response output.

 

Sample email

 

Note:

ChatGPT is one of many LLMs (large language models) we are working to integrate into Cortex XSOAR. Stay tuned for upcoming blogs on other LLM integrations, such as Google Vertex AI.

 

Learn More

Don’t have Cortex XSOAR? Download our free Community Edition today to test out this playbook and hundreds more automation packs for common use cases you deal with daily in your security operations or SOC.

 