Use Ansible AWX to automate the Palo Alto NGFW's management and even Process/Daemon Restarts!

General Articles
9 min read
Cyber Elite

This article is a continuation of my previous one, 'Automating the Palo Alto NGFW's Process/Daemon Restarts'. While Tcl Expect is one of the classic methods for automating legacy devices, modern infrastructure demands more robust solutions like Ansible AWX for better scalability and management.

 
1. Overview
2. Ansible AWX installation
3. GUI access to AWX
4. Creating the AWX playbook in a repo and pulling to AWX
5. Create and trigger Ansible AWX automation
6. Ansible AWX API and Terraform Integration
7. Palo Alto API module example (bonus!)
Summary
 
 
1. Overview
 

Ansible is an open-source, agentless IT automation engine used for configuration management, application deployment, and task orchestration. It uses human-readable YAML playbooks to define infrastructure states, making complex workflows simple and repeatable. AWX is the open-source upstream project for Red Hat Ansible Automation Platform (AAP)—formerly known as Ansible Tower. It provides the Automation Controller (the GUI and API) that standard CLI-based Ansible lacks, enabling easier collaboration and scaling.

nikoolayy1_3-1776425825137.png

 

While the official Palo Alto Networks Ansible Collection is excellent for configuration management, it is primarily XML API-based. As I discussed in my previous article, restarting specific system daemons is a low-level operation that requires direct CLI access. Since the API can't perform these restarts, we pivot to Ansible's native expect functionality. This allows us to automate interactive SSH sessions and issue the exact CLI commands needed to kick-start a hanging process.

 

2. Ansible AWX installation

 

To get started, you'll need to install AWX. Since AWX is composed of several containers, Kubernetes (K8s) is the preferred choice for production. For a home lab, I recommend two paths: Docker Compose (https://github.com/ansible/awx/blob/devel/tools/docker-compose/README.md) or AWX on a Kind cluster (Kubernetes in Docker) (https://docs.ansible.com/projects/awx-operator/en/latest/installation/kind-install.html).

 

I chose the Kind cluster approach as it’s the standard for testing the AWX Operator and provided a great opportunity to get hands-on with K8s.

 

I recommend using a 40GB Ubuntu VM. A common pitfall is finding that only 20GB is actually allocated to the filesystem. If your automation job pods fail with a 'DiskFull' error (check this with kubectl describe pods [pod-name] -n awx), you’ll likely need to resize your Logical Volume.

Here is the quick workflow to expand your space:

 

  1. Resize the Physical Volume: pvresize /dev/sda3

  2. Extend the Logical Volume: lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv

  3. Resize the Filesystem: resize2fs /dev/ubuntu-vg/ubuntu-lv

 

 

kubectl get pods -A
NAMESPACE            NAME                                              READY   STATUS              RESTARTS   AGE
awx                  automation-job-20-dndqn                           0/1     ContainerCreating   0          2s
awx                  awx-demo-migration-24.6.1-8f5fh                   0/1     Completed           0          140m
awx                  awx-demo-postgres-15-0                            1/1     Running             0          142m
awx                  awx-demo-task-7f7665864-qbwtf                     4/4     Running             0          141m
awx                  awx-demo-web-75b9757c57-mhlsd                     3/3     Running             0          141m
awx                  awx-operator-controller-manager-7ddd859f8-s88nb   2/2     Running             0          146m
ingress-nginx        ingress-nginx-controller-f5784567-cfvm6           1/1     Running             0          146m
kube-system          coredns-66bc5c9577-6r4tp                          1/1     Running             0          146m
kube-system          coredns-66bc5c9577-x42qr                          1/1     Running             0          146m
kube-system          etcd-kind-control-plane                           1/1     Running             0          146m
kube-system          kindnet-5kzlp                                     1/1     Running             0          146m
kube-system          kindnet-75l77                                     1/1     Running             0          146m
kube-system          kube-apiserver-kind-control-plane                 1/1     Running             0          146m
kube-system          kube-controller-manager-kind-control-plane        1/1     Running             0          146m
kube-system          kube-proxy-4fsxz                                  1/1     Running             0          146m
kube-system          kube-proxy-s7xw2                                  1/1     Running             0          146m
kube-system          kube-scheduler-kind-control-plane                 1/1     Running             0          146m
local-path-storage   local-path-provisioner-7b8c8ddbd6-wjjxg           1/1     Running             0          146m

 

Because the official AWX documentation for Kind deployments isn't updated frequently, I had to create a custom kustomization.yaml. Currently, the official docs reference a repository for the kube-rbac-proxy where the manifest is no longer available. Below is the custom configuration I used to bypass this broken dependency and ensure a successful deployment. Once saved as kustomization.yaml, apply it with kubectl apply -k . from the same directory, then create the AWX custom resource as described in the operator docs.

 

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=2.19.1

# Set the image tags to match the git version from above
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.19.1

  - name: gcr.io/kubebuilder/kube-rbac-proxy
    newName: registry.k8s.io/kubebuilder/kube-rbac-proxy
    newTag: v0.15.0

# Specify a custom namespace in which to install AWX
namespace: awx

 

 

3. GUI access to AWX

 

Once the deployment is up, you can retrieve the auto-generated admin password with the following command:

 

kubectl get secret awx-demo-admin-password -n awx -o jsonpath="{.data.password}" | base64 -d
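Kubernetes stores Secret data base64-encoded, which is why the command above pipes the jsonpath output through base64 -d. A quick illustration of that same decoding step in Python (the secret value here is made up, not a real password):

```python
import base64

# Stand-in for what `kubectl get secret ... -o jsonpath="{.data.password}"`
# returns: the value from the Secret's data map, still base64-encoded.
encoded = base64.b64encode(b"s3cretPassw0rd").decode()

# The `| base64 -d` step reverses the encoding to recover the plaintext.
decoded = base64.b64decode(encoded).decode()
print(decoded)  # -> s3cretPassw0rd
```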

 

You can then access the GUI via your VM’s IP address on port 32000 (NodePort). Interestingly, I encountered an issue where the auto-generated secret stopped working over time. If you find yourself locked out, you can manually reset the admin password by exec-ing into the web container:

 

kubectl exec -it deployment/awx-demo-web -n awx -- awx-manage changepassword admin

 

nikoolayy1_0-1776430115994.png

 

 

4. Creating the AWX playbook in a repo and pulling to AWX 

 

Once your AWX instance is running, you need to sync your playbooks. The most efficient way is to create an AWX Project linked to a Git repository.

While AWX can host playbooks locally (using the 'Manual' SCM type), it is difficult to configure with a Kind deployment. Unlike Docker Compose or production Kubernetes, where you can easily mount host volumes directly into containers, Kind’s architecture (running Kubernetes inside a Docker container) makes local volume mounting much more cumbersome. For this reason, I recommend pushing your playbooks to a Git repository (like GitHub or GitLab) for seamless synchronization.

 

nikoolayy1_0-1776427311767.png

 

 

nikoolayy1_1-1776427340257.png

 

 

 

- name: Palo Alto SSH with expect
  hosts: all
  gather_facts: false
  tasks:
    - name: Restart Service
      ansible.builtin.expect:
        command: >-
          ssh -tt
          -o StrictHostKeyChecking=no
          -o UserKnownHostsFile=/dev/null
          -o PubkeyAuthentication=no
          -o PreferredAuthentications=keyboard-interactive,password
          -o IdentitiesOnly=yes
          -p 22
          admin@{{ ansible_host | default(inventory_hostname) }}
        responses:
          'Are you sure you want to continue connecting \(yes/no/\[fingerprint\]\)\?': "yes"
          '(?i)(\|\s*)?password:': "{{ cli_pass }}"
          'admin@.*[>#]\s*$': "set cli scripting-mode on\nset cli pager off\ndebug software restart process web-server\nexit\n"
          'press RETURN': "\n"
        timeout: 90
        echo: true
      register: out
      changed_when: false
    - name: Print output
      ansible.builtin.debug:
        msg: "{{ out.stdout }}"
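The trickiest part of an expect playbook is getting the prompt regexes right. The response patterns above can be sanity-checked offline against sample prompt strings before you burn job runs on trial and error (the sample prompts below are illustrative, not captured output):

```python
import re

# Two of the response patterns from the playbook above.
password_re = re.compile(r'(?i)(\|\s*)?password:')
prompt_re = re.compile(r'admin@.*[>#]\s*$')

# Illustrative strings resembling what the SSH session prints.
assert password_re.search("Password:")                # case-insensitive match
assert prompt_re.search("admin@PA-VM> ")              # operational-mode prompt
assert prompt_re.search("admin@PA-VM# ")              # configure-mode prompt
assert not prompt_re.search("some log line admin@x")  # no trailing > or #
print("all prompt patterns behave as expected")
```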

 

You can find the full code and example playbooks in my GitHub repository:

 

awx-example-playbooks/awx-palo-alto-restart-service.yml at main · Nikoolayy1/awx-example-playbooks

 

This specific playbook uses the ansible.builtin.expect module to handle the interactive CLI prompts required for service restarts—a task that standard API-based modules cannot perform.

 

5. Create and trigger Ansible AWX automation

 

Next, create an Inventory in AWX using the variables provided below. While you can attach these variables directly to the host, I recommend creating a Group and placing the variables there—this makes it much easier to scale if you have multiple firewalls.

For the 'Host' field, you can use the firewall's IP address or its FQDN. If your Kind cluster or Docker host is configured with the correct DNS server, using the FQDN is the preferred method for long-term management.

Once the Inventory is set, create your Job Template, link it to your project and inventory, and run it to see the magic happen!

 

nikoolayy1_3-1776427650468.png

 

 

---
ansible_connection: local
cli_user: admin
cli_pass: xxxx
cli_port: 22
ssh_opts: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o PubkeyAuthentication=no -o PreferredAuthentications=keyboard-interactive,password -o IdentitiesOnly=yes"

 

nikoolayy1_0-1776427856727.png

 

nikoolayy1_1-1776427894618.png

 

 

nikoolayy1_2-1776427406108.png

 

 

6. Ansible AWX API and Terraform Integration

 

Ansible AWX also provides a robust REST API, which allows it to integrate seamlessly with other tools like Terraform. In a modern best-of-breed stack, Terraform handles the Infrastructure as Code (IaC)—provisioning the public or private cloud resources—while Ansible takes over for granular configuration management. Because Terraform is declarative and relies on a state file, it is designed to maintain a persistent environment. This makes it difficult to use for procedural operational tasks, such as restarting a specific service, as these actions don't represent a change in the infrastructure's permanent state.

 

Historically, triggering Ansible from Terraform required using local or remote provisioners on the same machine. However, this 'old way' is difficult to scale and maintain due to the complex SSH requirements and tight coupling. By using the AWX API, you can decouple these tools: Terraform provisions the firewall, and then issues a simple API call to AWX to trigger the necessary configuration or restart playbooks.

Ansible Provisioner | Integrations | Packer | HashiCorp Developer

 

nikoolayy1_1-1776425217015.png

 

 

nikoolayy1_2-1776425239275.png

 

nikoolayy1_0-1776429105471.png

 

resource "null_resource" "awx_launch" {
  provisioner "local-exec" {
    command = <<EOT
curl -ks -u '${var.user}:${var.pass}' \
  -H 'Content-Type: application/json' \
  -X POST \
  https://awx.local/api/v2/job_templates/${var.job_template_id}/launch/ \
  -d '{"limit":"palo"}'
EOT
  }
}
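The same launch call can of course be scripted outside Terraform against the AWX REST API. The sketch below only builds the request object so its structure can be inspected without a live AWX instance; the host name, job template ID, and credentials are placeholders:

```python
import base64
import json
import urllib.request

# Placeholders -- substitute your AWX host, job template ID, and credentials.
awx_host, template_id = "awx.local", 7
user, password = "admin", "xxxx"

url = f"https://{awx_host}/api/v2/job_templates/{template_id}/launch/"
body = json.dumps({"limit": "palo"}).encode()  # same payload as the curl -d above

# curl -u builds this same HTTP Basic Authorization header behind the scenes.
auth = base64.b64encode(f"{user}:{password}".encode()).decode()

req = urllib.request.Request(
    url,
    data=body,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {auth}",
    },
)

print(req.method, req.full_url)
# To actually fire the job: urllib.request.urlopen(req)
```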

 

 

 

7. Palo Alto API module example (bonus!)

 

A common issue with the default Execution Environment (EE) in AWX is that the standard job container often lacks the specific Python packages (like pan-os-python) required for Palo Alto’s native API modules. To solve this, you can make your own container or use mine, which I've hosted here:

 

ghcr.io/nikoolayy1/custom-awx-ee:latest.

 

Just add this as an Execution Environment in the AWX settings, select it in your Job Template, and you're ready to go!

 

I also enabled an AWX Survey in the template. This provides a user-friendly prompt where you can enter any operational command to be executed, making the template versatile for more than just service restarts.

 

Important Note on Variables: Don't forget that the connection variables for API-based modules differ from the SSH ones. Ensure your Inventory or Group variables include the correct API credentials.
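For illustration, a minimal group-variables sketch for the API-based panos_op playbook in this article (the variable names follow that playbook; the values are placeholders):

```yaml
---
# Group vars for the API-based (panos_op) playbook -- values are placeholders.
ansible_connection: local   # the module talks HTTPS to the firewall, no SSH session
palo_user: admin
palo_pass: xxxx
pa_cmd: "show system info"  # example operational command for the Survey prompt
```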

 

nikoolayy1_0-1776444353495.png

 


 

 

FROM quay.io/ansible/awx-ee:24.6.1

USER root

RUN python3 -m pip install --no-cache-dir \
    pan-python \
    pandevice \
    xmltodict

RUN ansible-galaxy collection install paloaltonetworks.panos

USER 1000

 

 

nikoolayy1_1-1776443811117.png

 

 

- name: Palo Alto dynamic command
  hosts: all
  connection: local
  gather_facts: false
  vars:
    device:
      ip_address: "{{ ansible_host }}"
      username: "{{ palo_user }}"
      password: "{{ palo_pass }}"
  tasks:
    - name: Run dynamic command
      paloaltonetworks.panos.panos_op:
        provider: "{{ device }}"
        cmd: "{{ pa_cmd }}"
      register: result
    - debug:
        msg: "{{ result.stdout }}"
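The PAN-OS XML API wraps every op-command result in a response envelope with a status attribute, which the panos_op module parses for you. If you ever need to consume the raw XML yourself, the Python standard library is enough; the sample document below is a hand-made illustration of the envelope shape, not captured firewall output:

```python
import xml.etree.ElementTree as ET

# Hand-made sample resembling a PAN-OS op-command response envelope.
sample = """
<response status="success">
  <result>
    <system>
      <hostname>PA-VM</hostname>
      <sw-version>11.1.0</sw-version>
    </system>
  </result>
</response>
"""

root = ET.fromstring(sample)
print(root.get("status"))                         # -> success
print(root.findtext("./result/system/hostname"))  # -> PA-VM
```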

 

If you also plan to use Ansible AWX with the general Palo Alto API modules, you will need to pull the collections as well, as described in awx/docs/collections.md at devel · ansible/awx.

 

 
nikoolayy1_1-1776429115682.png

 

Again, a link to my playbooks repo:

 

awx-example-playbooks/README.md at main · Nikoolayy1/awx-example-playbooks

 

 

 

Summary

 

The benefits of this setup extend into high-level security operations. Cortex XSOAR natively supports the Ansible Automation Platform (AAP) API, meaning the playbooks we’ve discussed for process and daemon restarts can be triggered automatically by XSOAR as part of an incident response or self-healing workflow. You can find more on that integration here: Ansible Automation Platform | Cortex XSOAR.

 

I hope you enjoyed reading this article as much as I enjoyed building the lab! Stay tuned—my next playground will likely involve GitHub Actions CI/CD, with a potential article on that coming soon.

 

nikoolayy1_2-1776428874050.png

 

 

 

Comments
Community Team Member

Great work on this ! Automation is a top priority for so many of us right now, and this AWX deep-dive is exactly the kind of content our members need. Thanks for the contribution! 👏

Community Manager

Great work👏

Community Team Member

Thanks for sharing @nikoolayy1 !

Community Manager

Really appreciate the content here @nikoolayy1 ! Great read.

Community Manager

Awesome automation insights. Thanks! 👏

Last Updated:
‎05-08-2026 04:56 PM