ansible zone creation failing



L1 Bithead

I'm trying to run the following tasks in my play to create a number of new L3 subinterfaces on ae2 and then add them to the appropriate security zone. Whether I assign the zone as part of panos_l3_subinterface or in a separate task (the commented-out line vs. the second task below), I get the same error message.

My understanding is that if the zone doesn't exist yet, it should be created automatically. However, it keeps complaining about the layer3 mode/reference:

failed: [my-panorama] (item=adding zone: Z-VL-1200 ) => {"ansible_loop_var": "item", "changed": false, "item": {"brm_srv_fw_interface": "ae2", "brm_srv_fw_vlan_ip_address": "10.218.0", "kam_srv_fw_interface": "ae2", "kam_srv_fw_vlan_ip_address": "10.11.0", "srv_fw_zone_name": "Z-VL-1200", "vic_srv_fw_interface": "ae2", "vic_srv_fw_vlan_ip_address": "10.11.128", "vlan_id": "1200", "vlan_name": "Seg-VL-1200"}, "msg": "Failed apply:  Z-VL-1200 -> network -> layer3 '[ae2.1200]' is not a valid reference\n Z-VL-1200 -> network -> layer3 is invalid"}

 

tasks:
  - name: make Interfaces in Test-Template
    panos_l3_subinterface:
      provider: '{{ PANO_Provider }}'
      enable_dhcp: no
      name: "ae2.{{ item.vlan_id }}"
      tag: '{{ item.vlan_id }}'
      vr_name: vr-inside
      #zone_name: '{{ item.srv_fw_zone_name }}'
      ip: ["{{ item.kam_srv_fw_vlan_ip_address }}.1/24"]
      template: Test-Template
    loop: '{{ build_vlan }}'
    loop_control:
      label: "adding interface: ae2.{{ item.vlan_id }}"

  - name: make zones on Test-Template
    panos_zone:
      provider: '{{ PANO_Provider }}'
      zone: '{{ item.srv_fw_zone_name }}'
      mode: "layer3"
      enable_userid: yes
      interface: "[ae2.{{ item.vlan_id }}]"
      template: Test-Template
    loop: '{{ build_vlan }}'
    loop_control:
      label: 'adding zone: {{ item.srv_fw_zone_name }}'
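A side note on the panos_zone task (an assumption on my part, using the same names as the play above): quoting the brackets makes `interface` a single string, so PAN-OS receives the literal text "[ae2.1200]", which matches the quoted value in the error message. Writing it as a real YAML list, and creating the zones before the interfaces that reference them, would look something like:

```yaml
- name: make zones on Test-Template
  panos_zone:
    provider: '{{ PANO_Provider }}'
    zone: '{{ item.srv_fw_zone_name }}'
    mode: "layer3"
    enable_userid: yes
    # a real YAML list of member interfaces, not the string "[ae2.1200]"
    interface: ["ae2.{{ item.vlan_id }}"]
    template: Test-Template
  loop: '{{ build_vlan }}'
  loop_control:
    label: 'adding zone: {{ item.srv_fw_zone_name }}'
```

With the zones in place first, the interface task can then attach each subinterface via zone_name instead.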

 

1 accepted solution

Accepted Solutions

Fixed the issue. It turns out that if the aggregate interface was created manually and was NOT placed into a virtual router, subinterfaces created through scripting hit this error. The Panorama GUI seems to let you bypass this, but most of my firewalls had their AE interfaces created by a partner and they were never put into a VR. Adding them to a VR allowed the scripts to run.
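For anyone hitting the same thing, a minimal sketch of making sure the parent aggregate is a VR member before the subinterface tasks run (this assumes the panos_virtual_router module from the same Palo Alto Networks Ansible content; the VR and template names here mirror my play and may differ in yours):

```yaml
- name: ensure ae2 is a member of vr-inside before building subinterfaces
  panos_virtual_router:
    provider: '{{ PANO_Provider }}'
    name: vr-inside
    # note: this declares the VR's member interface list as given,
    # so include any other interfaces the VR should keep
    interface: ["ae2"]
    template: Test-Template
```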

 


2 Replies

L1 Bithead

In case anyone can help here: I ran a simpler play with -vvvv enabled.

The full traceback is:
  File "/tmp/ansible_panos_l3_subinterface_payload_qkpcbcde/ansible_panos_l3_subinterface_payload.zip/ansible/modules/panos_l3_subinterface.py", line 273, in main
  File "/usr/local/lib/python3.6/site-packages/pandevice/network.py", line 325, in set_zone
    update, running_config, return_type, False, mode=mode)
  File "/usr/local/lib/python3.6/site-packages/pandevice/base.py", line 1522, in _set_reference
    obj.update(reference_var)
  File "/usr/local/lib/python3.6/site-packages/pandevice/base.py", line 633, in update
    retry_on_peer=self.HA_SYNC)
  File "/usr/local/lib/python3.6/site-packages/pandevice/base.py", line 3486, in method
    raise the_exception
fatal: [my-panorama]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "adjust_tcp_mss": null,
            "api_key": null,
            "comment": null,
            "create_default_route": false,
            "dhcp_default_route_metric": null,
            "enable_dhcp": false,
            "ip": [
                "10.11.1.1/24"
            ],
            "ip_address": null,
            "ipv4_mss_adjust": null,
            "ipv6_enabled": null,
            "ipv6_mss_adjust": null,
            "management_profile": null,
            "mtu": null,
            "name": "ae2.1200",
            "netflow_profile": null,
            "password": null,
            "port": 443,
            "provider": {
                "api_key": null,
                "ip_address": "vic-panora-lpr1",
                "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "port": 443,
                "serial_number": null,
                "username": "ansible-play"
            },
            "state": "present",
            "tag": 1200,
            "template": "xxx-KAM-SRV-FW",
            "username": "admin",
            "vr_name": "default",
            "vsys": null,
            "zone_name": "TEST3"
        }
    },
    "msg": "Failed setref:  TEST3 -> network -> layer3 'ae2.1200' is not a valid reference\n TEST3 -> network -> layer

 

This was the play:

- name: adds security zones, interfaces and firewall rules
  hosts: my-panorama
  connection: local
  gather_facts: False
  
  
  vars_files:
    - 'build-vlan.yml'
    - 'firewall-rules.yml'
    - 'my-secrets.yml'
    #- 'ios.yml'


  #{{ item.patunnel_name }}
  #  -e 'ansible_python_interpreter=/usr/bin/python3'

  roles:
    - role: PaloAltoNetworks.paloaltonetworks

  tasks:
    - name: make Interfaces in Test-Template 
      panos_l3_subinterface:
        provider: '{{ PANO_Provider }}'
        enable_dhcp: no
        name: "ae2.1200"
        tag: 1200
        zone_name: TEST3
        ip: ["10.11.1.1/24"]
        template: xxx-KAM-SRV-FW


 
