LLM Prompts to Boost Your XSOAR Productivity


As both tools can automate tasks and save you time, Cortex XSOAR and large language models (LLMs) like OpenAI’s ChatGPT are a natural pairing. This blog post will present practical prompts that can help you supercharge your Cortex XSOAR skills. Included with this post is a Cortex XSOAR playbook file, from which you can copy out all the prompts discussed below and tailor them to meet your needs.

 

Quick Note: For more information on how we are incorporating AI across our Cortex portfolio to drive the autonomous modern SOC, please refer to the XSIAM Solution brief.

 

Before getting started with LLMs, it is important to assess the related security and privacy risks. Familiarize yourself with any policies your organization has governing the use of LLMs. Take note of which types of data you are permitted to submit, and to which LLM services. Cortex XSOAR currently has community-supported packs to integrate with OpenAI and Google Vertex AI. If your organization uses different platforms to work with LLMs, talk to your Customer Success team for guidance on integrating them with Cortex XSOAR. The prompts included here are intended for practical use, but they should be treated as experimental and used with caution in accordance with your organization’s policies. 

 

The prompts covered in this post were validated on July 14, 2023 with OpenAI models gpt-3.5-turbo-16k (for Chat) and text-davinci-003 (for Completions). See the OpenAI documentation for an overview of all available models.  

 

Tips for Prompt Engineering Success

 

When you construct your prompts, providing the LLM with examples is key. Be hyper-specific about the data you want back and the format you want it in. LLMs are sensitive to subtle changes in prompt tone, format, and structure. The prompts covered here are shared as a reference, but treat them as a jumping-off point: they will need to be adapted to suit your use cases.

 

For many of these prompts, such as generating a custom automation, a deterministic, high-probability result is preferred. In these cases, set the temperature to a low value like 0, which reduces the variability or “creativity” of the output, either by passing temperature=0 as an argument or by customizing the integration code if necessary. For other prompts, such as creating a quiz or getting use case ideas, it is best to increase the temperature to inject more randomness into the generation.
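For reference, here is a minimal sketch of a low-temperature request made with the openai Python library as it existed at the time of writing (pre-1.0); if you go through the Cortex XSOAR OpenAI integration instead, the equivalent change happens in the integration's parameters or code:

import openai

openai.api_key = "YOUR_API_KEY"  # supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[{"role": "user", "content": "Write me an XSOAR automation YML ..."}],
    temperature=0,  # deterministic, high-probability output for code generation
)
print(response["choices"][0]["message"]["content"])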

 

Be sure to enable Quiet Mode on tasks that send a prompt to the LLM: the input data may be very large, which can bog down your Work Plan and take up excessive space on your XSOAR server.

 

Many advanced prompts require sending more data to the LLM than the OpenAI gpt-3.5-turbo default limit of 4096 tokens allows. To handle this, use a model with a higher token limit, like OpenAI gpt-3.5-turbo-16k, or embed the prompt data into a vector store using a framework like LangChain, which is beyond the scope of this blog.
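If you are unsure whether your prompt data fits, you can count tokens before sending it. A minimal sketch using OpenAI's tiktoken library (the 16384-token limit and the 1000-token headroom below are assumptions; adjust them for your model and expected response size):

import tiktoken

def fits_in_context(prompt: str, model: str = "gpt-3.5-turbo-16k", limit: int = 16384) -> bool:
    # Encode the prompt with the tokenizer for the target model and compare against the limit,
    # leaving some headroom for the completion itself.
    encoding = tiktoken.encoding_for_model(model)
    token_count = len(encoding.encode(prompt))
    return token_count + 1000 < limit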

 

Prompts Overview

 

At a high level, here is the list of LLM prompts that will be covered in detail below:

 

  1. Write a custom automation: Given input and desired output, generate a new automation, such as a transformer or General Purpose Dynamic Section script. (engineer role)
  2. Brainstorm use case ideas: Based on your role, get a list of ideas for new use cases to implement in Cortex XSOAR. (any role) 
  3. Get study help: Generate a custom quiz to help you study. Ask questions to get clarification as you are exposed to new concepts in training. (any role) 
  4. Document a playbook: Given a playbook, write a description for it, to create internal documentation or simply to understand what it does. Or ask specific questions about the playbook. (engineer or analyst role)
  5. Analyze/summarize incident data: Given incident details, analyze them and determine the severity of the incident. (analyst role)
  6. Auto-generate threat intel reports: Given details on a vulnerability (CVE) or an indicator, auto-generate a Threat Intel Report. (analyst role)

 

Prompt 1 – Write a Custom Automation

As stated above, it is critical to provide the LLM with an example to get a usable result. Luckily, when creating Cortex XSOAR content like automations, there are usually similar out-of-the-box examples to pull from. When leveraging LLMs to write a custom automation, identify the existing automation that is most similar to the one you want to generate. Download the automation, copy the complete contents of the YML file, and paste them into the prompt template below. Be sure to copy the YML data that contains both the code and the script settings (arguments, outputs, etc.), as opposed to just the code that can be copied out of the UI. If necessary, refer to specific qualities of the example to get the LLM to generate a valid result.

 

Warning: Carefully review any code generated by LLMs before uploading it to Cortex XSOAR and running it.

 

Prompt Template 1.a – Write a Custom Transformer

Given this input for <input_1>:

<example_structure_for_input_1>

 

And this input for <input_2>:

<example_structure_for_input_2>

 

[...]

 

Write me an XSOAR automation YML to generate output in the following format:

<desired_output_format>

 

In the YML, all fields are required, except "args" and "outputs" are not required if they do not have a value. "version" should be set to -1. "tags" must contain "transformer". The value for "comment" must be in quotes like the example. If you do not know what to put for a field, use the same value as the example.

 

Here is an example XSOAR automation YML so you know the format to follow:

<example_automation_yml>

###

 

Automation YML:

 

For example, use this prompt to create an automation that transforms a list of questions and a list of answers from the output of the SlackBlockBuilder automation into a single list in the format [<Question 1>: <Answer 1>, <Question 2>: <Answer 2>, …]. The custom transformer GenerateOutput, generated by prompt 1.a, in use:

 

[Figure 1]
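For comparison, a rough hand-written sketch of what the code portion of such a transformer could look like (the argument names are illustrative, and the LLM-generated version will differ):

import demistomock as demisto
from CommonServerPython import *

def main():
    args = demisto.args()
    # "value" is the left-hand input a transformer receives; "answers" is an illustrative extra argument
    questions = argToList(args.get('value'))
    answers = argToList(args.get('answers'))
    combined = [f'{q}: {a}' for q, a in zip(questions, answers)]
    return_results('[' + ', '.join(combined) + ']')

if __name__ in ('__main__', '__builtin__', 'builtins'):
    main()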

 

Prompt Template 1.b – Write a Custom General Purpose Dynamic Section (GPDS) Script

Given data in the following format:

<example_structure_for_input>

 

Write me an XSOAR automation YML for a General Purpose Dynamic Section that creates a <desired_output_including_format>.

 

In the YML, all fields are required. "version" should be set to -1. If you do not know what to put for a field, use the same value as the example. <Refer to specific qualities of example automation if necessary.> <Provide specific instructions to correct the output if necessary.>

 

Here is an example XSOAR automation YML so you know the format to follow, up to and not including "###":

<example_automation_yml>

###

 

Automation YML:

 

For example, use this prompt to create an automation that takes the results of the urlscan.io integration url command and displays them in the incident layout as markdown. The urlscan.io Results GPDS script generated by prompt 1.b:

 

[Figure 2]
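One common pattern for a GPDS script is to build a markdown string and return it as a markdown entry. A rough sketch along those lines (the context key is an assumption; use whatever path your urlscan.io results are actually stored under):

import demistomock as demisto
from CommonServerPython import *

def main():
    # Hypothetical context key; adjust to where the urlscan.io url command writes its results
    scan_results = demisto.context().get('URLScan') or []
    if not isinstance(scan_results, list):
        scan_results = [scan_results]
    md = tableToMarkdown('urlscan.io Results', scan_results)
    # Return a markdown entry so the dynamic section renders it in the layout
    demisto.results({
        'ContentsFormat': formats['markdown'],
        'Type': entryTypes['note'],
        'Contents': md,
    })

if __name__ in ('__main__', '__builtin__', 'builtins'):
    main()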

 

Debugging

Using LLMs to generate custom automations is far from a perfect science. That being said, many common issues can be overcome by making your prompt more specific. Here are some common issues you may encounter and what to try:

 

  • Misunderstanding of how Cortex XSOAR result formatting works, or invalid output formatting in general: Provide a different example automation that uses the desired output formatting. You may need to add an instruction to the prompt to direct the LLM to a specific quality of the example automation.
  • Incorrect usage of a Cortex XSOAR-specific function, such as tableToMarkdown: Add an example or explanation of how the function works to the prompt (for instance, the usage snippet after this list), or instruct the LLM not to use the function.
  • Random imports: Add an instruction to the prompt stating the import is not needed, or manually remove the import after the code has been generated by the LLM.
  • Trailing or leading whitespace: Instruct the LLM to leave off the whitespace, or use the Trim transformer on the code after it has been generated by the LLM.
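For example, a short, correct tableToMarkdown usage snippet that could be pasted into the prompt as an explanation (the sample data is made up):

from CommonServerPython import tableToMarkdown

# tableToMarkdown(title, data, headers=...) returns a markdown table string
data = [
    {'Name': 'example.com', 'Score': 1},
    {'Name': 'malicious.example', 'Score': 3},
]
md = tableToMarkdown('Indicator Scores', data, headers=['Name', 'Score'])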

 

Prompt 2 – Brainstorm Use Case Ideas

The generic Cortex XSOAR use cases list may not be suitable for all organizations. Given the varied roles and responsibilities of Cortex XSOAR customers, it is far more useful to get use case suggestions that are tailored to your team’s role and the specific products your organization uses.

 

Prompt Template 2 – Get Use Case Ideas

I am a <role>. My organization uses <list_tools_you_have_integrated_with_XSOAR>. <Briefly_describe_your_team>, and we are mainly responsible for <list_key_responsibilities>, etc. What are some use cases we could implement in XSOAR to automate our repetitive workflows and save time?

 

Cortex XSOAR use case ideas for a NetOps engineer whose organization uses Palo Alto firewalls, Jira, and Exabeam, generated by prompt 2:

 

[Figure 3]

 

Prompt 3 – Get Study Help

If you are studying for the Palo Alto Networks Certified Security Automation Engineer (PCSAE) exam or just trying to beef up your XSOAR knowledge in general, the prompts below let you use the LLM as a training assistant. Have the LLM prepare a multiple-choice quiz, or simply ask it for clarification on a concept that is unclear. For best results, especially with the quiz, feed in source data copied from the relevant section of the Cortex XSOAR documentation.

 

Prompt Template 3.a – Generate Mini-Quiz about Cortex XSOAR

I am learning XSOAR. Write a quiz with 3 questions on the topic of <topic>. Base the quiz only on the context information included below. AVOID "All of the above" QUESTIONS. Each question must have exactly one unique answer. Output format: output the questions first, then the string "~~" on its own line, then the correct answers, exactly like this template:

 

1. <Question>
a) <Option>
b) <Option>
c) <Option>
d) <Option>

2. <Question>
a) <Option>
b) <Option>
c) <Option>
d) <Option>

3. <Question>
a) <Option>
b) <Option>
c) <Option>
d) <Option>

~~

Answer 1: <Answer>

Answer 2: <Answer>

Answer 3: <Answer>

###

 

Context to base the quiz on:

<source_data_copied_from_XSOAR_documentation>

###

 

Quiz:

 

Data Collection task for quiz about Threat Intelligence Management (TIM) generated by prompt 3.a:

 

[Figure 4]
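To drive a Data Collection task like this one, the completion has to be separated into the question block and the answer key, which is what the "~~" delimiter in the template is for. A minimal parsing sketch:

def split_quiz(raw_quiz: str):
    # Split an LLM completion produced by prompt 3.a into (questions, answer_key)
    # using the "~~" delimiter the prompt asks for.
    questions, _, answers = raw_quiz.partition('~~')
    return questions.strip(), answers.strip()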

 

Prompt Template 3.b – Ask Clarification Question

I am learning XSOAR. Explain the concept of <topic>, list and define the key components, and describe what it is used for.

 

Explanation of TIM generated by prompt 3.b:

 

[Figure 5]

 

Prompt 4 – Document a Playbook

Many of us have been in the unfortunate position of inheriting work started by a colleague who has now left the organization without leaving behind any documentation on what they did. When it comes to playbooks and automations, it can be difficult to untangle what the content does, especially when you did not write it yourself. The following prompts can help you understand what a given playbook does and could be used to auto-generate internal documentation for the playbook (or at least populate the “Description” field in the Playbook Settings). Download the playbook, copy the complete contents of the YML file, and paste them into the prompt template.

 

Prompt Template 4.a – Describe a Playbook

Analyze the following XSOAR playbook YML. Create documentation for the playbook using the following template:

1- Purpose of playbook

2- Overview of actions taken by the playbook

 

<playbook_yml>

 

Description of out-of-the-box playbook Cortex XDR Malware - Investigation And Response generated by prompt 4.a:

 

[Figure 6]

 

Prompt Template 4.b – Ask a Question of a Playbook

Review the following XSOAR playbook YML and answer the question: <question>

 

<playbook_yml>

 

Answer to question “How does the playbook handle incident classification before the incident is closed out and what are the classification options?” from prompt 4.b:

 

[Figure 7]

Prompt 5 – Analyze/Summarize Incident Data

LLMs can help reduce the manual workload of typing up incident summary notes by taking advantage of the rich data that is already available in an incident. The auto-generated text can be used to populate an incident layout with a human-readable summary, update a ticket, or send an email summary to an analyst or to SOC management. Using LLMs makes it much easier to scale such functionality across multiple use cases, compared to manually writing summaries for different incident types. For more detail on this use case, see our blog post “Playbook of the Week: Using ChatGPT in Cortex XSOAR”.

 

For best results with this use case, give the prompt the specific format in which you want the result. This prompt template proposes a sample format, which should be modified to suit the needs of your organization.

 

Prompt Template 5 – Get Incident Summary 

Give detailed analysis of the following using the following form:

1- Analysis (Incident Description)

2- Impact Analysis

3- Action or Recommendations (Must be detailed)

 

There is a security incident with the following information:

1- Title: <incident_name>

2- Indicator of compromise: <incident_IOCs>

3- Indicator of compromise reputation: <incident_IOC_scores>

4- Type of machine compromised: <endpoint_type>

5- IP of Compromised Machine: <impacted_IP>

6- Hostname of Compromised Machine: <impacted_hostname>

7- Time of Incident: <incident_timestamp>

8- MitreAttack Technique: <incident_MITRE_ATT&CK_technique>

9- MitreAttack Attack Group: <incident_MITRE_ATT&CK_group>

 

Sample incident analysis summary generated by prompt 5:

 

[Figure 8]
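To assemble this prompt automatically inside an automation, the numbered placeholders can be filled from incident data. A sketch of that idea (only "name" and "occurred" are standard incident fields here; the remaining values depend on your incident type and context, so those lookups are left as placeholders):

import demistomock as demisto
from CommonServerPython import *

def build_summary_prompt():
    incident = demisto.incident()
    # "name" and "occurred" are standard incident fields; the other values would come from
    # context or custom fields specific to your incident type (shown here as placeholders).
    fields = [
        ('Title', incident.get('name')),
        ('Indicator of compromise', '<incident_IOCs>'),
        ('Indicator of compromise reputation', '<incident_IOC_scores>'),
        ('Type of machine compromised', '<endpoint_type>'),
        ('IP of Compromised Machine', '<impacted_IP>'),
        ('Hostname of Compromised Machine', '<impacted_hostname>'),
        ('Time of Incident', incident.get('occurred')),
        ('MitreAttack Technique', '<incident_MITRE_ATT&CK_technique>'),
        ('MitreAttack Attack Group', '<incident_MITRE_ATT&CK_group>'),
    ]
    header = (
        'Give detailed analysis of the following using the following form:\n'
        '1- Analysis (Incident Description)\n'
        '2- Impact Analysis\n'
        '3- Action or Recommendations (Must be detailed)\n\n'
        'There is a security incident with the following information:\n'
    )
    body = '\n'.join(f'{i}- {label}: {value}' for i, (label, value) in enumerate(fields, 1))
    return header + body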

 

Prompt 6 – Auto-Generate Threat Intel Reports

LLMs can help automate the traditionally manual process of writing Threat Intel Reports. Since generally available LLMs are not trained on up-to-the-minute data, it is necessary to pass data about the threat intel report topic (CVE, indicator, etc.) into the prompt from an external source, like a published blog post. Copy and paste the raw threat intel data into the prompt template below. Be sure the prompt includes the desired output format for your threat intel report field, typically markdown. For more detail on this use case, see our Customer Success webinar “TIM Advanced Features.”

 

Prompt Template 6 – Write Threat Intel Report

Write a brief threat intelligence report on vulnerability <cve> using the data below. Style the report as markdown paragraphs, with an executive summary at the beginning, and an ending conclusion.

 

Source data for report:

<source_threat_intel_data>

###

 

This example pulls from a threat brief on CVE-2023-34362 published by Unit 42, and generates markdown to produce this Cortex XSOAR threat intel report:

 

[Figure 9]

 
