This blog was written by Sameh Elhakim.
You might have used ChatGPT to help you write a script or generate an image. So now that you know Cortex XSOAR has a ChatGPT integration, are you wondering how you might apply it to your security operations to facilitate incident response?
Quick Note: For more information on how we are incorporating AI across our Cortex portfolio to drive the autonomous modern SOC, please refer to the XSIAM Solution brief.
Here’s an example of how you can start using ChatGPT within your XSOAR playbooks to deliver information in a user-friendly way:
Before we dive into the playbook, let's look at how ChatGPT rewrites the incident details from an ingested alert, adding richer context in accordance with the format provided. We are using GPT-3.5 in this request and throughout the playbook.
Amazing, right?
Now it is time to integrate it into an automated playbook. This playbook is built using the standard ticketing template which covers:
You can tweak the ChatGPT response by giving it a different set of output criteria. Quick tip: Put the criteria in bullet points so ChatGPT can format it accordingly. Note: When using ChatGPT for presenting data, we recommend following your organization’s data classification policies.
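The bullet-point tip above can be sketched as a small prompt-building helper. This is a hypothetical illustration, not the integration's actual code; the function name and criteria are assumptions:

```python
# Hypothetical sketch: building a ChatGPT prompt whose output criteria
# are listed as bullet points, so the model can mirror that structure.
def build_prompt(incident_summary, criteria):
    """Combine raw incident text with bullet-point output criteria."""
    bullets = "\n".join(f"- {c}" for c in criteria)
    return (
        "Rewrite the following incident details for an analyst audience.\n"
        f"Incident details:\n{incident_summary}\n\n"
        "Format the answer according to these criteria:\n"
        f"{bullets}"
    )

prompt = build_prompt(
    "Suspicious login from new geolocation for user jdoe",
    ["Plain-language summary", "Likely impact", "Recommended next steps"],
)
```

Each criterion becomes its own bullet, which tends to produce an answer with one section per bullet.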
We will use the following integration (OpenAI ChatGPT v3):
https://xsoar.pan.dev/docs/reference/integrations/open-ai-chat-gpt-v3
Note: Using the OpenAI API requires a pay-as-you-go subscription after the free trial ends.
https://platform.openai.com/docs/introduction
To add a new secret key, press Create new secret key. Copy the new key before closing the pop-up, as it will not be accessible again once the window is closed.
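Under the hood, the integration authenticates each request with that secret key as a bearer token. Here is a minimal sketch of the request shape, following OpenAI's public chat completions API; the key value is a placeholder and nothing is actually sent:

```python
# Sketch of the raw API call made with the secret key from above.
# The endpoint and payload shape follow OpenAI's chat completions API;
# OPENAI_API_KEY is a placeholder -- never hard-code real keys.
OPENAI_API_KEY = "sk-..."

def build_chat_request(prompt, model="gpt-3.5-turbo"):
    """Return the (url, headers, json_body) for one chat completion call."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {OPENAI_API_KEY}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return url, headers, body

url, headers, body = build_chat_request("Summarize this incident for an analyst.")
```

In XSOAR you would store the key in the integration instance's credentials rather than in code.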
Now that you have configured your ChatGPT integration instance, you can use it in a playbook. You can modify the playbook tasks as needed to suit your automation use cases.
The enrichment is done in two separate phases:
For incident analysis, you will send all the data collected by previous tasks to ChatGPT, as follows, to determine the severity of the incident:
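One way to picture this step: flatten the collected context into key/value lines and append the severity question. This is a hypothetical sketch; the field names are illustrative, not XSOAR's actual context keys:

```python
# Hypothetical sketch of the incident-analysis task: data gathered by
# earlier playbook tasks is rendered into one prompt asking ChatGPT to
# classify severity. Field names here are illustrative only.
def severity_prompt(collected):
    """Render collected enrichment data plus the severity question."""
    lines = "\n".join(f"{k}: {v}" for k, v in collected.items())
    return (
        "Given the following enrichment data, classify the incident "
        "severity as Low, Medium, High or Critical and explain why.\n"
        f"{lines}"
    )

prompt = severity_prompt({
    "source_ip_reputation": "malicious",
    "affected_user": "jdoe",
    "alert_count_24h": 14,
})
```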
As we mentioned earlier, you can configure your prompts to ChatGPT however you like, but here are some tips for optimal output:
Once you have configured the task with your input parameters, the output should look very similar to the ChatGPT web output.
It is important to monitor how many tokens are being used to communicate with ChatGPT via the API integration. OpenAI calculates the cost of API usage based on the total number of tokens used in your API calls (prompt + answer).
OpenAI provides more details about how they define and count tokens.
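For a back-of-the-envelope view of that cost model, here is a rough estimator. The ~4-characters-per-token heuristic and the per-1K-token rate are illustrative assumptions only; check OpenAI's pricing page and tokenizer documentation for exact figures:

```python
# Rough token/cost estimator. OpenAI bills per token (prompt + answer).
# Both the 4-characters-per-token heuristic and the rate below are
# illustrative assumptions, not official figures.
def estimate_tokens(text):
    """Very rough estimate: about one token per 4 characters of English."""
    return max(1, len(text) // 4)

def estimate_cost_usd(prompt, answer, rate_per_1k=0.002):
    """Approximate cost of one call at a hypothetical per-1K-token rate."""
    total = estimate_tokens(prompt) + estimate_tokens(answer)
    return total / 1000 * rate_per_1k

# ~1000 prompt tokens + ~1000 answer tokens at $0.002 per 1K tokens
cost = estimate_cost_usd("p" * 4000, "a" * 4000)
```

For exact counts, OpenAI publishes the tokenizer it uses, which is more accurate than any character-based heuristic.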