Latency on Internal Interface


L4 Transporter

Hello,

 

We are running PAN-OS 8.0.7. When we ping a trusted interface, the latency bounces up and down. Any clues?

 

root@test-machine:~# ping 10.2.2.100
PING 10.2.2.100 (10.2.2.100) 56(84) bytes of data.
64 bytes from 10.2.2.100: icmp_seq=1 ttl=63 time=3.46 ms
64 bytes from 10.2.2.100: icmp_seq=2 ttl=63 time=1.25 ms
64 bytes from 10.2.2.100: icmp_seq=3 ttl=63 time=19.9 ms
64 bytes from 10.2.2.100: icmp_seq=4 ttl=63 time=17.9 ms
64 bytes from 10.2.2.100: icmp_seq=5 ttl=63 time=18.6 ms
64 bytes from 10.2.2.100: icmp_seq=6 ttl=63 time=16.6 ms
64 bytes from 10.2.2.100: icmp_seq=7 ttl=63 time=3.39 ms
64 bytes from 10.2.2.100: icmp_seq=8 ttl=63 time=2.25 ms
64 bytes from 10.2.2.100: icmp_seq=9 ttl=63 time=11.5 ms
64 bytes from 10.2.2.100: icmp_seq=10 ttl=63 time=17.9 ms
64 bytes from 10.2.2.100: icmp_seq=11 ttl=63 time=9.48 ms
64 bytes from 10.2.2.100: icmp_seq=12 ttl=63 time=11.3 ms
64 bytes from 10.2.2.100: icmp_seq=13 ttl=63 time=7.23 ms
64 bytes from 10.2.2.100: icmp_seq=14 ttl=63 time=14.1 ms
64 bytes from 10.2.2.100: icmp_seq=15 ttl=63 time=3.14 ms
64 bytes from 10.2.2.100: icmp_seq=16 ttl=63 time=1.15 ms
64 bytes from 10.2.2.100: icmp_seq=17 ttl=63 time=9.94 ms
64 bytes from 10.2.2.100: icmp_seq=18 ttl=63 time=18.0 ms
64 bytes from 10.2.2.100: icmp_seq=19 ttl=63 time=16.8 ms
64 bytes from 10.2.2.100: icmp_seq=20 ttl=63 time=14.9 ms
64 bytes from 10.2.2.100: icmp_seq=21 ttl=63 time=12.5 ms
64 bytes from 10.2.2.100: icmp_seq=22 ttl=63 time=4.19 ms
64 bytes from 10.2.2.100: icmp_seq=23 ttl=63 time=1.17 ms
64 bytes from 10.2.2.100: icmp_seq=24 ttl=63 time=11.1 ms
64 bytes from 10.2.2.100: icmp_seq=25 ttl=63 time=10.6 ms
64 bytes from 10.2.2.100: icmp_seq=26 ttl=63 time=8.15 ms
64 bytes from 10.2.2.100: icmp_seq=27 ttl=63 time=7.21 ms
64 bytes from 10.2.2.100: icmp_seq=28 ttl=63 time=6.46 ms
64 bytes from 10.2.2.100: icmp_seq=29 ttl=63 time=2.64 ms
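
One way to quantify the jitter (assuming a standard Linux iputils ping, as in the output above) is to let ping print its own summary statistics; the mdev value in the last line gives a rough measure of the variation:

# 100 pings at 0.2 s intervals; keep only the two summary lines (loss and rtt min/avg/max/mdev)
ping -c 100 -i 0.2 10.2.2.100 | tail -n 2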


9 Replies

Cyber Elite

Hi @Farzana!

 

Are you experiencing slowdowns in packets passing through the firewall? Is there packet loss or a lot of retransmissions?

What's the current load on the dataplane?

 

Pings to a dataplane interface get the lowest possible priority during packet processing, so if your dataplane is under any kind of load you may see some latency.

Is there a direct link from your host to the firewall, or is there a routing device or switch in between?

Did you verify that the speed and duplex settings of the firewall interface match the device it is connected to?
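
For reference, the dataplane load and the negotiated speed/duplex can be checked with the standard PAN-OS CLI commands below (the interface name is only an example, use your trust interface):

show running resource-monitor minute
show interface ethernet1/2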

Tom Piens
PANgurus - Strata specialist; config reviews, policy optimization

L3 Networker

Same "issue" here. 

 

This latency when pinging the interface address started after upgrading to PAN-OS 8.0.x.

Previous versions only showed higher latency when the firewall or interface was under high load.

The mgmt interface does not show this latency.

 

Throughput and latency when going "through" the device are normal and the same as on previous PAN-OS versions.

Also, even when dataplane usage is 0, the latency bounces between 2 and 12 ms.

 

 

Cyber Elite

@Farzana,

Just wanted to state that I can't reproduce the issue on 3020s, 3050s, or 200s, all running 8.0.7. Do you have a physical firewall, or is this on a VM model?

@BPry For me it's a standalone VM-50 and VM-100 HA clusters.

All our physical firewalls are still on 7.1.1x.

Latency for the interface IPs happens on all 8.0.x versions.

@Gertjan-HFG,

Interesting. Maybe this is actually limited to the VMs and doesn't affect hardware units?

@BPry I think you are right; it only affects the VM models.

Hi,

 

Just wanted to let you know I have logged a case with TAC regarding the issue. 

So far we have seen that when DPDK is off, the latency drops. We are still checking the issue.

I will update it here.

@Farzana  Any news on this?

Hi @Gertjan-HFG,

 

Thank you for the follow-up. TAC asked us to perform the following:

 

1. Perform a ping test and take a packet capture filtered to only the source and destination, in both directions, for 15 minutes (kindly note down the time).


2. Capture the output for the ethernet1 interface from the ESXi host at the start of the test and again at the end, after 15 minutes (see the sketch after step 3):
net-stats -l | grep <Firewall name>

Sample output below:

ID Number   Type  SubType  DVS Port No    MAC-Address        Interface
33554978    5     9        DvsPortset-0   00:49:3b:ee:e3:14  FW.eth5

The above output gives the port details for the firewall.

Once the port details are obtained, run the command below for the eth1 interface:
cat /net/portsets/<DVS Port No>/ports/<ID Number of eth1>/vmxnet3/rxSummary

 

3. Capture the output of the command below every 5 minutes:

debug dataplane pow performance
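
For step 2, a rough sketch of how the two snapshots could be collected from the ESXi shell; the portset name and port ID below are just the values from the sample output and would be replaced with the entries for the eth1 port:

# Snapshot the vmxnet3 receive counters for the firewall port at the start
# and end of the 15-minute test window (example values from the sample output above)
PORTSET=DvsPortset-0
PORT_ID=33554978
cat /net/portsets/$PORTSET/ports/$PORT_ID/vmxnet3/rxSummary > rxSummary_start.txt
sleep 900    # 15-minute test window
cat /net/portsets/$PORTSET/ports/$PORT_ID/vmxnet3/rxSummary > rxSummary_end.txt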

 

However, after running the commands below (turning off DPDK), the latency dropped to the expected level, so our customer did not go ahead with further testing. The case is closed.

 

admin@PA-VM> show system setting dpdk-pkt-io 

Device current Packet IO mode: DPDK 
Device DPDK Packet IO capable: yes 
Device default Packet IO mode: DPDK 

admin@PA-VM> set system setting dpdk-pkt-io off 
Enabling/disabling DPDK Packet IO mode requires a device reboot. Do you want to continue? (y or n) 

Device is now in non-DPDK IO mode, please reboot device 
admin@PA-VM>
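
If a later PAN-OS release fixes the underlying issue, DPDK packet IO can presumably be re-enabled with the inverse of the command above (this is an assumption based on the syntax shown, and it should likewise prompt for a reboot):

admin@PA-VM> set system setting dpdk-pkt-io on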

 

 

  • 1 accepted solution
  • 17954 Views
  • 9 replies
  • 0 Likes