Custom Signatures With ChatGPT and AI Security


 

 

We all know ChatGPT can write code and articles, but can it automate the specialized task of threat mitigation? We set out to test the capabilities of AI by asking it to generate a Palo Alto Networks custom signature.

 

Before diving into the experiment, let's establish some foundational context on the technology that drives these platforms.

 

Content:

 

  1. AI Basics
  2. DNS example, starting from scratch!
  3. HTTP example, comparison with already created signatures
  4. AI Security
  5. Ending words

 

 

1. AI Basics

 

A Large Language Model (LLM) is a type of artificial intelligence (AI) that uses machine learning to understand and generate human-like text. For example, GPT is the underlying LLM, and ChatGPT is the conversational application built on top of it.

 

Not all language models are large: Small Language Models (SLMs) are compact AI models trained for specific, narrower tasks, requiring fewer computing resources.

 

It is important to note that AI models can also be trained on, and generate, non-human language data. For instance, an AI could be specialized to analyze structured data, such as adding calculated columns to a table based on a CSV file.
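As a toy illustration of that structured-data use case, here is a minimal Python sketch (with made-up data) that adds a calculated column to CSV rows:

```python
import csv
import io

# A toy CSV of unit prices and quantities (hypothetical data).
raw = "item,price,qty\nwidget,2.50,4\ngadget,9.99,2\n"

reader = csv.DictReader(io.StringIO(raw))
rows = []
for row in reader:
    # Add a calculated column: line total = price * quantity.
    row["total"] = f"{float(row['price']) * int(row['qty']):.2f}"
    rows.append(row)

print([r["total"] for r in rows])  # → ['10.00', '19.98']
```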

 

The LLM is just one component of a complete AI platform. Other crucial elements ensure functionality and accuracy:

 

  • AI Orchestrator: This system selects the most appropriate LLM or SLM to answer your specific request.

  • RAG (Retrieval-Augmented Generation): RAG allows the AI to retrieve and use up-to-date or internal documents to formulate a response. It prevents the AI from relying solely on its original, potentially stale training data.

  • System Prompt: This set of instructions defines the rules, persona, and limitations of the AI. It limits what users can ask and often acts as the trigger for the RAG system when internal context is required.

  • MCP (Model Context Protocol) Servers: These are used to connect the AI to external, real-time systems to answer questions requiring live data, such as querying the current weather or stock prices.
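To make the division of labor concrete, here is a hypothetical Python sketch of how such a platform might assemble a request: the system prompt sets the rules, a toy RAG step retrieves context, and everything is combined into one message list for the selected model. All names and the keyword-overlap retrieval logic are illustrative, not any vendor's actual implementation.

```python
# Hypothetical system prompt: defines the persona and limits of the assistant.
SYSTEM_PROMPT = ("You are a Palo Alto Networks signature assistant. "
                 "Answer only security-related questions.")

def retrieve_context(question, knowledge_base):
    """Toy RAG step: return documents sharing a keyword with the question."""
    words = set(question.lower().split())
    return [doc for doc in knowledge_base if words & set(doc.lower().split())]

def build_messages(question, knowledge_base):
    """Assemble the message list the orchestrator would send to the LLM."""
    context = retrieve_context(question, knowledge_base)
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if context:
        messages.append({"role": "system",
                         "content": "Context:\n" + "\n".join(context)})
    messages.append({"role": "user", "content": question})
    return messages

kb = ["dns-req-section matches the DNS request body",
      "http-req-params-data matches HTTP parameters"]
msgs = build_messages("Which context matches a DNS request?", kb)
print(len(msgs))  # system prompt + retrieved context + user question
```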

 

[Screenshot: AI platform components]

 

We can streamline the process of writing custom signatures by utilizing Prompt Engineering. This involves adding a specific system prompt under our personalization settings that will be automatically appended to all our inputs. This custom prompt ensures the AI generates highly targeted and consistent output tailored to our specific needs.

 

[Screenshots: custom system prompt under the ChatGPT personalization settings]

 

 

Reference:

 

Given the extensive, high-quality content available from industry leaders such as Amazon AWS and Microsoft, we will omit basic introductory links. We encourage readers new to the subject to consult these widely available foundational resources.

 

2. DNS example, starting from scratch!

 

Now that we have refined our system prompt through Prompt Engineering, let's put the AI to the test. We will ask the model to generate the necessary Palo Alto Networks custom signature to block DNS requests destined for the publicly testable domain, example.com.

 

[Screenshot: ChatGPT-generated regex for the DNS signature]

 

This looks good; the pattern is even PCRE-compliant! But how do we configure this regex?

 

[Screenshot: ChatGPT's configuration steps for the regex]

 

The custom regex is now ready for validation. The signature is designed to be applied using the appropriate context: dns-req-section (see: Custom Application IDs and Signatures: dns-req-section)
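For reference, DNS question names travel on the wire as length-prefixed labels, which is what a pattern applied in the dns-req-section context actually sees. The pattern below is a hypothetical reconstruction in the same spirit as the ChatGPT output, testable with plain Python:

```python
import re

# Hypothetical PCRE-style pattern of the kind used in dns-req-section:
# on the wire, "www.example.com" is encoded as \x03www\x07example\x03com\x00,
# so matching \x07example\x03com\x00 covers the domain and all subdomains.
pattern = re.compile(rb"\x07example\x03com\x00")

def encode_qname(fqdn):
    """Encode an FQDN into DNS wire format (length byte + label, NUL-terminated)."""
    out = b""
    for label in fqdn.split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

print(bool(pattern.search(encode_qname("www.example.com"))))  # True: subdomain matched
print(bool(pattern.search(encode_qname("example.com"))))      # True: apex matched
print(bool(pattern.search(encode_qname("example.org"))))      # False: different domain
```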

 

I have configured the environment to demonstrate two distinct DNS capture options on the NGFW:

  1. Direct Route: Traffic is sent directly to the public DNS server (8.8.4.4) via a static route (added with the host's route add command), capturing the initial DNS request.

  2. DNS Proxy: Traffic is directed through the NGFW's configured DNS Proxy IP (192.168.1.91).

 

Note on DNS Proxy Configuration

During setup, I encountered an unexpected behavior where the DNS Proxy required an interface to be set to DHCP to "inherit" the DNS configuration. Since the inherited DNS was invalid, I used Security rules to ensure all final domain resolution traffic was directed to the intended server (8.8.4.4).

 

For more details on the DNS Proxy feature, please consult the knowledge base:

[How to Configure DNS Proxy on a Palo Alto Networks Firewall]

 

[Screenshots: custom signature configuration and DNS traffic validation]

 

 

The preceding steps confirmed that the custom signature generation was successful.

If we were to combine all our requirements into a single, comprehensive query, such as:

 

"Generate a Palo Alto Networks NGFW vulnerability signature that blocks DNS FQDN queries for the www.example.com domain and all its subdomains. Also, provide the configuration steps and the correct context for deployment."

We would receive the full solution in one exchange.

 

This leads to the most important best practice for using LLMs:

 

  • Provide Maximum Context: LLMs are stateless. They do not "remember" previous interactions. For the model to answer your second question, the AI Orchestrator must send the entire context (your original question, the model's previous answer, and your new question) in one block.

  • Save Time and Cost (Tokens): Since the entire conversation is resent with every new query, each exchange consumes tokens (computational currency). By providing all necessary details in a single input, you reduce the number of exchanges, making the process faster and more cost-effective.
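A toy calculation (with made-up token counts) shows why this matters: because the whole history is resent on every turn, the tokens billed per turn keep growing, while a single combined query is billed only once.

```python
# Toy illustration (hypothetical numbers): each turn resends the whole
# conversation history, so the tokens billed per request keep growing.
turn_tokens = [200, 150, 180]  # tokens added by each new question/answer exchange

billed = []
history = 0
for t in turn_tokens:
    history += t            # the new exchange joins the context...
    billed.append(history)  # ...and the entire context is sent each turn

print(billed)       # → [200, 350, 530]
print(sum(billed))  # → 1080 tokens billed across three exchanges
# A single combined query of ~530 tokens would be billed just once.
```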

 

3. HTTP example, comparison with already created signatures

 

For this second example, we will provide ChatGPT with a foundation of existing knowledge. I will use the content from my previous article, How to Write Palo Alto Networks Custom Vulnerability and Application Signatures with Examples | Palo..., as the input context.

 

Recommendation: For the best understanding of the AI's output, I recommend reviewing that article first.

 

My prompt was: "I want a Palo Alto Vulnerability Signature that blocks the parameter "user" if it exceeds 14 characters." As you can see below, the result was almost correct; only the "{15,}" needed to be "{14,}". I even worded the prompt incorrectly, and ChatGPT still understood what I meant 😁
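Since PAN-OS patterns are PCRE-like, the quantifier behavior is easy to verify with plain Python: {15,} fires only on values of 15 or more characters, while {14,} also fires on exactly 14.

```python
import re

# Compare the two candidate quantifiers for the "user" parameter value.
r15 = re.compile(r"user=[^&]{15,}")
r14 = re.compile(r"user=[^&]{14,}")

exactly_14 = "user=" + "A" * 14
exactly_15 = "user=" + "A" * 15

print(bool(r15.search(exactly_14)), bool(r15.search(exactly_15)))  # False True
print(bool(r14.search(exactly_14)), bool(r14.search(exactly_15)))  # True True
```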

 

[Screenshot: ChatGPT's signature for the "user" parameter]

 

 

 

Next, we look at the "pass" parameter. During testing, I discovered a limitation that ChatGPT overlooked: unlike the "user" parameter, a simple regex like pass=[^&]{14,} is insufficient on its own. A more effective approach is a combination signature: two separate signatures tied to a single condition. Because ChatGPT keeps a conversation cache, I could ask about this second parameter without repeating the full context; it remained aware of our earlier discussion of the "user" parameter.
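The AND condition of a combination signature can be sketched in plain Python. This is only an illustration of the logic; on the NGFW, the member signatures and their shared condition are defined in the combination signature itself.

```python
import re

# Hypothetical member signatures of a combination signature: both must
# match the same payload before the combined signature fires.
user_sig = re.compile(r"user=[^&]{15,}")
pass_sig = re.compile(r"pass=[^&]{15,}")

def combination_hit(payload):
    """AND condition: fire only if every member signature matches."""
    return bool(user_sig.search(payload)) and bool(pass_sig.search(payload))

print(combination_hit("user=" + "A" * 20 + "&pass=" + "B" * 20))  # True
print(combination_hit("user=" + "A" * 20 + "&pass=short"))        # False
```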

 

[Screenshot: ChatGPT's combination signature for the "pass" parameter]

 

As this demonstrates, domain expertise and rigorous testing remain essential. You cannot simply 'copy and paste' ChatGPT-generated signatures into a production environment. When a signature fails in testing, you must have the technical depth to understand why it failed and how to refine it manually.

 

4. AI Security

 

  • Leveraging AI models like OpenAI’s GPT via the ChatGPT interface is a powerful starting point. You can even go a step further by building custom Python clients to interact with multiple Large Language Models (LLMs) beyond the OpenAI API—such as DeepSeek—to compare their outputs.

  • However, using AI safely requires more than just standard access. Most AI providers implement a "system prompt" to secure the model by prepending instructions to your request before it hits the LLM. While this acts as a form of "normalization," it is often insufficient. Sophisticated users can bypass these guardrails using clever techniques known as prompt injections. If you want to see this in action, I highly recommend testing your skills on Gandalf | Lakera – Test your AI hacking skills. 

  • Beyond injection, the threat landscape includes AI-specific DoS/DDoS attacks (designed to exhaust tokens or compute), prompt hijacking, and training data poisoning. A great resource for exploring these vulnerabilities is the OWASP GenAI Security Project.

  • This is where the Palo Alto Networks AI Security Profile becomes essential. It acts as an AI Gateway to monitor and filter traffic. It can block malicious requests or use Data Loss Prevention (DLP) to stop responses containing sensitive data if a prompt triggers an exfiltration attempt.

  • Furthermore, Palo Alto offers advanced solutions like Prisma AIRS, where security is deployed directly on Nvidia DPUs (Data Processing Units). These are essentially "Smart NICs" designed to offload and accelerate data-centric tasks, protecting the AI infrastructure without taxing the primary CPU.
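As a sketch of the multi-LLM client idea mentioned above, the snippet below builds chat requests for two OpenAI-compatible providers. The endpoint URLs and model names are illustrative assumptions, and no network call is made.

```python
# Hypothetical client targeting multiple OpenAI-compatible chat APIs.
# Endpoint URLs and model IDs are illustrative; verify against each
# provider's own API documentation before use.
PROVIDERS = {
    "openai":   {"url": "https://api.openai.com/v1/chat/completions",
                 "model": "gpt-4o"},
    "deepseek": {"url": "https://api.deepseek.com/chat/completions",
                 "model": "deepseek-chat"},
}

def build_request(provider, prompt):
    """Return the endpoint URL and JSON body for a one-shot chat request."""
    cfg = PROVIDERS[provider]
    return cfg["url"], {
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    }

url, body = build_request("deepseek", "Write a PAN-OS custom signature prompt")
print(body["model"])
```

Sending the same prompt through each entry in PROVIDERS and diffing the answers is a simple way to compare model outputs.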

 

[Screenshot: AI Security profile]

 

Reference:

Security Profile: AI Security

Prisma® AIRS | The World’s Most Comprehensive AI Security Platform

 

 

 

5. Ending words

I wish you all happy holidays!

 


 

Last Updated: 12-19-2025