Latency on Internal Interface

L4 Transporter

We are running PAN-OS 8.0.7. When we ping a trusted (internal) interface, the latency jumps up and down. Any clues?


root@test-machine:~# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=63 time=3.46 ms
64 bytes from icmp_seq=2 ttl=63 time=1.25 ms
64 bytes from icmp_seq=3 ttl=63 time=19.9 ms
64 bytes from icmp_seq=4 ttl=63 time=17.9 ms
64 bytes from icmp_seq=5 ttl=63 time=18.6 ms
64 bytes from icmp_seq=6 ttl=63 time=16.6 ms
64 bytes from icmp_seq=7 ttl=63 time=3.39 ms
64 bytes from icmp_seq=8 ttl=63 time=2.25 ms
64 bytes from icmp_seq=9 ttl=63 time=11.5 ms
64 bytes from icmp_seq=10 ttl=63 time=17.9 ms
64 bytes from icmp_seq=11 ttl=63 time=9.48 ms
64 bytes from icmp_seq=12 ttl=63 time=11.3 ms
64 bytes from icmp_seq=13 ttl=63 time=7.23 ms
64 bytes from icmp_seq=14 ttl=63 time=14.1 ms
64 bytes from icmp_seq=15 ttl=63 time=3.14 ms
64 bytes from icmp_seq=16 ttl=63 time=1.15 ms
64 bytes from icmp_seq=17 ttl=63 time=9.94 ms
64 bytes from icmp_seq=18 ttl=63 time=18.0 ms
64 bytes from icmp_seq=19 ttl=63 time=16.8 ms
64 bytes from icmp_seq=20 ttl=63 time=14.9 ms
64 bytes from icmp_seq=21 ttl=63 time=12.5 ms
64 bytes from icmp_seq=22 ttl=63 time=4.19 ms
64 bytes from icmp_seq=23 ttl=63 time=1.17 ms
64 bytes from icmp_seq=24 ttl=63 time=11.1 ms
64 bytes from icmp_seq=25 ttl=63 time=10.6 ms
64 bytes from icmp_seq=26 ttl=63 time=8.15 ms
64 bytes from icmp_seq=27 ttl=63 time=7.21 ms
64 bytes from icmp_seq=28 ttl=63 time=6.46 ms
64 bytes from icmp_seq=29 ttl=63 time=2.64 ms

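For what it's worth, the spread in a capture like the one above can be summarized with a small helper. A sketch using standard sed/awk (not from the thread): it reads ping output on stdin and reports min/avg/max plus the average difference between consecutive samples as a rough jitter figure.

```shell
# ping_stats: summarize RTTs from ping output read on stdin.
ping_stats() {
  # Extract the numeric value from each "time=X ms" field...
  sed -n 's/.*time=\([0-9.]*\) ms.*/\1/p' |
  # ...then compute min, mean, max, and mean consecutive-sample delta.
  awk '
    NR == 1 { min = max = $1 }
    {
      sum += $1
      if ($1 < min) min = $1
      if ($1 > max) max = $1
      if (NR > 1) { d = $1 - prev; if (d < 0) d = -d; jit += d }
      prev = $1
    }
    END {
      if (NR == 0) { print "no samples"; exit 1 }
      printf "samples=%d min=%.2f avg=%.2f max=%.2f jitter=%.2f\n",
             NR, min, sum / NR, max, (NR > 1 ? jit / (NR - 1) : 0)
    }'
}
```

Usage: `ping -c 30 <gateway> | ping_stats`, or run it over a saved log with `ping_stats < ping.log`.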
L7 Applicator

Re: Latency on Internal Interface

Hi @Farzana!


Are you experiencing slowdowns for traffic passing through the firewall, and is there packet loss or a lot of retransmissions?

What's the current load on the dataplane?


Pings to a dataplane interface get the lowest possible priority during packet processing, so if your dataplane is under any kind of load you may see some latency.

Is there a direct link from your host to the firewall, or is there a routing device or switch in between?

Did you verify that the speed and duplex settings of the firewall interface match the device it is connected to?
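For reference, both of these checks can be run from the PAN-OS CLI; a sketch (the interface name ethernet1/1 is an example — substitute your trust interface):

```
> show running resource-monitor second
> show interface ethernet1/1
```

`show running resource-monitor` reports dataplane core utilization over time, and `show interface` includes the negotiated speed and duplex for the port.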

L3 Networker

Re: Latency on Internal Interface

Same "issue" here. 


This latency when pinging the interface address started after the upgrade to PAN-OS 8.0.x.

Previous versions only showed higher latency when the firewall or interface was under heavy load.

The mgmt interface does not show the latency.


Throughput and latency when going "through" the device are normal, and the same as on previous PAN-OS versions.

Also, even when dataplane usage is at 0, the latency bounces between 2 and 12 ms.



L7 Applicator

Re: Latency on Internal Interface


Just wanted to state that I can't reproduce the issue on 3020s, 3050s, or 200s, all running 8.0.7. Do you have a physical firewall, or is this on a VM model? 

L3 Networker

Re: Latency on Internal Interface

@BPry For me it's a VM-50 standalone and VM-100 HA clusters.

All our physical firewalls are still on 7.1.1x.

The latency on the interfaces' IPs happens on all 8.0.x versions.

L7 Applicator

Re: Latency on Internal Interface


Interesting. Maybe this is actually limited to the VMs and doesn't affect hardware units? 

L3 Networker

Re: Latency on Internal Interface

@BPry I think you are right, and that it only affects the VM models.

L4 Transporter

Re: Latency on Internal Interface



Just wanted to let you know I have logged a case with TAC regarding the issue. 

So far we have seen that with DPDK off, the latency drops. We are still investigating the issue.

I will update it here.

L3 Networker

Re: Latency on Internal Interface

@Farzana  Any news on this?

L4 Transporter

Re: Latency on Internal Interface

Hi @Gertjan-HFG,


Thank you for the follow up. TAC asked to perform the following.


1. Perform a ping test and take a packet capture filtered to only the source and destination, in both directions, for 15 minutes (note down the time).

2. Capture the output for the ethernet1 interface from the ESXi host at the start and end of the 15-minute test:
net-stats -l | grep <Firewall name>

Sample output below

PortNum   Type SubType SwitchName   MACAddress        ClientName
33554978  5    9       DvsPortset-0 00:49:3b:ee:e3:14 FW.eth5

The output above gives the port details (PortNum and SwitchName) for the firewall.

Once the port details are obtained, run the command below for the eth1 interface:
cat /net/portsets/<SwitchName>/ports/<PortNum of eth1>/vmxnet3/rxSummary


3. Take the output of the command below every 5 minutes:

debug dataplane pow performance
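Step 3 above can be automated; a minimal sketch in POSIX sh (the ssh target admin@PA-VM in the example is a placeholder for your firewall):

```shell
# sample_loop COUNT INTERVAL CMD...: run CMD COUNT times, INTERVAL seconds
# apart, timestamping each sample so it can be lined up with the capture.
sample_loop() {
  count=$1; interval=$2; shift 2
  i=1
  while [ "$i" -le "$count" ]; do
    printf '=== sample %s at %s ===\n' "$i" "$(date -u '+%Y-%m-%d %H:%M:%S')"
    "$@"
    i=$((i + 1))
    # Sleep only between samples, not after the last one.
    if [ "$i" -le "$count" ]; then sleep "$interval"; fi
  done
}

# Example (placeholder host): three samples, 5 minutes apart
# sample_loop 3 300 ssh admin@PA-VM 'debug dataplane pow performance'
```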


However, after running the command below, the latency dropped to expected levels, so our customer did not go ahead with further testing. The case is closed.


admin@PA-VM> show system setting dpdk-pkt-io 

Device current Packet IO mode: DPDK 
Device DPDK Packet IO capable: yes 
Device default Packet IO mode: DPDK 

admin@PA-VM> set system setting dpdk-pkt-io off 
Enabling/disabling DPDK Packet IO mode requires a device reboot. Do you want to continue? (y or n) 

Device is now in non-DPDK IO mode, please reboot device 


