Issue with the network driver of PAN-OS 10.1.3 deployed in Azure


L3 Networker

Hi Folks,

 

We have a PA-VM-100 series firewall deployed in the Azure cloud.

 

We have three NICs mapped to the firewall interfaces as follows:

NIC 1 <----> Management interface
NIC 2 <----> Untrust interface (Ethernet 1/1)
NIC 3 <----> Trust interface (Ethernet 1/2)

 

Recently we upgraded the firewall from PAN-OS 10.0.4 to PAN-OS 10.1.3.

 

After that we started facing a strange issue: when we ping Ethernet 1/2 from any device deployed in Azure, we see high latency and roughly 35 percent of the ping packets are dropped.
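
For reference, we measured this with a plain ping from a Linux VM in the same VNet (10.0.2.4 is a placeholder for the Ethernet 1/2 address):

ping -c 100 10.0.2.4

The summary line consistently reported around 35% packet loss.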

 

But when we took a packet capture on the firewall, we could see that it was responding to every ping request it received, and no packets were being dropped by the firewall itself.
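
In case anyone wants to reproduce the capture, this is roughly the sequence we ran on the firewall CLI (the source and destination IPs are placeholders; remember to turn the capture and filter off afterwards):

debug dataplane packet-diag clear all
debug dataplane packet-diag set filter match source 10.0.2.10 destination 10.0.2.4
debug dataplane packet-diag set filter on
debug dataplane packet-diag set capture stage receive file rx.pcap
debug dataplane packet-diag set capture stage transmit file tx.pcap
debug dataplane packet-diag set capture on
debug dataplane packet-diag set capture off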

 

Upon checking the global counters we saw the drop counter below:

 

pkt_recv_flush_link 73726 9 drop packet pktproc Packets dropped due to link down in dpdk mode
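
For anyone following along, the counter above came from the standard delta query for drop counters:

show counter global filter delta yes severity drop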

 

We powered off the VM, removed the NICs on the Azure side, and attached new ones, but we still faced the latency issue on Ethernet 1/2.

 

Upon further research we learned that the firewall uses two network interface drivers, Packet MMAP and DPDK, to interact with the underlying VM host interfaces. DPDK is used by default, and the firewall can be switched to the Packet MMAP driver by disabling DPDK.

 

https://docs.paloaltonetworks.com/compatibility-matrix/vm-series-firewalls/sr-iov-and-dpdk-drivers
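
If I have the command right, you can check which packet IO mode is currently active with:

show system setting dpdk-pkt-io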

 

So we disabled the DPDK driver on the firewall using the command below and then rebooted. After the firewall switched to Packet MMAP mode, there were no more packet drops or latency.

 

set system setting dpdk-pkt-io off
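
For completeness, the change only takes effect after a reboot, so the full sequence was roughly:

set system setting dpdk-pkt-io off
request restart system

After the reboot we ran the show command above again to confirm the firewall was in Packet MMAP mode.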

 

Is this a bug in PAN-OS 10.1.3 or expected behaviour? Is this an issue on the firewall side or the Azure side?

 

We'd like to understand this better.

 

Any inputs would be helpful.

1 REPLY

Cyber Elite

@tamilvanan,

I've actually started advising customers not to perform major version upgrades on Azure VM-Series firewalls, and to build out a new template instead. This is far from the only issue I've seen with major version upgrades on Azure deployments, and deploying and configuring a fresh template is extremely easy. I'm not sure what it is about Azure that seems to cause more problems with major version upgrades, but in my experience it does.

DPDK is preferred over MMAP in these deployments from a performance standpoint. Whether that performance actually matters to your deployment depends on a number of factors, so you could be perfectly fine running MMAP instead of DPDK and never notice a difference. If having it off is working for you, I'd just leave it off for now.

 

What I would personally do is build out a new VM-100, copy the configuration over, and verify the new firewall. Then just assign the IPs you have on the current VM-100 and migrate them over to the new VM. Like I said, I've taken to doing this for all major version upgrades to avoid Azure-specific VM-Series issues post-upgrade. Something about it just seems to cause me more problems on Azure nodes.
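
For what it's worth, the IP move can be scripted with the Azure CLI. A rough sketch, assuming placeholder names (myRG, old-fw, old-untrust-nic, new-untrust-nic, fw-untrust-pip), worth verifying against your CLI version:

# stop the old firewall so its addresses can be released
az vm deallocate -g myRG -n old-fw

# dissociate the public IP from the old untrust NIC's ip-config
az network nic ip-config update -g myRG --nic-name old-untrust-nic -n ipconfig1 --remove publicIpAddress

# associate the same public IP with the new firewall's untrust NIC
az network nic ip-config update -g myRG --nic-name new-untrust-nic -n ipconfig1 --public-ip-address fw-untrust-pip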
