04-19-2019 04:33 PM - edited 04-19-2019 04:44 PM
Hi All
At home I run a PA-200, but I've been toying with the idea of moving to a PA-VM (both on lab licences). The VM's WebUI runs 100x quicker and the MGT interface responds fine, but the actual FW (dataplane) interfaces show much higher ping latency.
When pinging a FW interface on my PA-200 I get response times averaging 1-2 ms, but on the VM I see anything from 3 to 20 ms. If I SSH into each firewall and ping outbound from its FW interface (ping source <INT> host 8.8.8.8), the PA-200 comes back at about 15 ms, whereas from the VM I see 25-30 ms. Whilst not enough to make it unusable, the fact that even local traffic shows such a big increase is concerning.
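For clarity, this is roughly what I'm running from each CLI (the source IPs are the interface addresses from the outputs further down, shown here just as an illustration):

admin@PA-200> ping source 192.168.10.1 host 8.8.8.8
admin@PA-VM> ping source 192.168.12.102 host 8.8.8.8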
This is a fresh VM install, running PAN-OS 9.x with absolutely no policies or filtering. I thought it might be a hardware issue, but I've tried it on two different ESXi servers (a Dell PowerEdge T320 and a Dell 3010 i7), and both dataplanes show very low CPU usage.
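For what it's worth, I'm judging dataplane load from the resource monitor, with something along the lines of:

admin@PA-VM> show running resource-monitor second last 5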
Is this normal? (if so, I'll just suck it up)
Could it be an issue with v9.x?
Could anybody make recommendations?
Ping from my PC to PA-200, Interface 1/2:
Kelvins-MacBook-Pro-3:~ kelvin$ ping 192.168.10.1
PING 192.168.10.1 (192.168.10.1): 56 data bytes
64 bytes from 192.168.10.1: icmp_seq=0 ttl=63 time=1.543 ms
64 bytes from 192.168.10.1: icmp_seq=1 ttl=63 time=1.590 ms
64 bytes from 192.168.10.1: icmp_seq=2 ttl=63 time=1.323 ms
64 bytes from 192.168.10.1: icmp_seq=3 ttl=63 time=1.590 ms
Ping from my PC to PA VM, Interface 1/2:
PING 192.168.12.102 (192.168.12.102): 56 data bytes
64 bytes from 192.168.12.102: icmp_seq=0 ttl=63 time=10.034 ms
64 bytes from 192.168.12.102: icmp_seq=1 ttl=63 time=5.832 ms
64 bytes from 192.168.12.102: icmp_seq=2 ttl=63 time=5.988 ms
64 bytes from 192.168.12.102: icmp_seq=3 ttl=63 time=10.965 ms
Ping from my PC to PA VM, MGT Interface:
PING 192.168.12.101 (192.168.12.101): 56 data bytes
64 bytes from 192.168.12.101: icmp_seq=0 ttl=63 time=0.754 ms
64 bytes from 192.168.12.101: icmp_seq=1 ttl=63 time=0.650 ms
64 bytes from 192.168.12.101: icmp_seq=2 ttl=63 time=0.849 ms
64 bytes from 192.168.12.101: icmp_seq=3 ttl=63 time=0.780 ms
FYI, the VM's Interface 1/2 and Management interface share the same physical port on the host.
04-19-2019 05:41 PM
Search and ye shall find 😄
I found this thread: https://live.paloaltonetworks.com/t5/General-Topics/Latency-on-Internal-Interface/m-p/199444#M59109
Somebody else experienced something similar, and the fix seems to be running the following command via the CLI:
admin@PA-VM> set system setting dpdk-pkt-io off
Enabling/disabling DPDK Packet IO mode requires a device reboot. Do you want to continue? (y or n)
Device is now in non-DPDK IO mode, please reboot device
admin@PA-VM>
This worked for me. After a reboot, my pings were down to 1-2 ms on the VM!
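If anyone wants to confirm which mode they're in before and after the reboot, I believe the current packet IO mode can be checked with:

admin@PA-VM> show system setting dpdk-pkt-io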