High DataPlane CPU on PAN-OS 7.1.9



L4 Transporter

Hi everybody

 

I have upgraded a PA-3050 from 7.1.8 to 7.1.9. Everything seems to be OK, but the DataPlane CPU is above 90%:

 

Management CPU 16%
Data Plane CPU 95%
Session Count 37023 / 524286

 

I noticed that when the session count is lower, the CPU also decreases, but this behaviour didn't happen in the previous release.
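To watch that correlation I am just polling the CPU history and the session rate side by side; roughly these two commands (a sketch, and the exact interval keywords may vary slightly between releases):

FW1(active)> show running resource-monitor minute last 15
FW1(active)> show session info | match rate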

 

Does anybody know if there is a problem with release 7.1.9?

 

Best regards

 

COS

1 accepted solution

Accepted Solutions

Hi,

 

The best way forward would be to open a TAC case for this issue. High CPU is not easy to troubleshoot without the special tools that TAC has.

View solution in original post

8 REPLIES

L6 Presenter

This could be something unique to your environment that is causing high DP CPU on the new 7.1.9 release, or even an unknown bug.

 

https://live.paloaltonetworks.com/t5/Featured-Articles/How-to-Troubleshoot-High-Dataplane-CPU/ta-p/7...

 

Do you think reverting back to the previous release is an option for you?

Anyway, whatever you do, grab a tech support file first so you have some info to show.
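If it helps, the tech support file can be generated and exported in one step from the CLI; something along these lines, where the user, host, and path are placeholders for your own environment:

> scp export tech-support to admin@192.0.2.10:/var/tmp/

The firewall builds the tech support bundle and then copies it to the given SCP destination.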

Hello

 

I have already read the post you sent, but I don't know how to interpret the cause of the problem. I have pasted some of the output below.

 

The previous version, 7.1.8, didn't have this problem, but it had another issue with UIA that forced me to upgrade.

 

 

Resource monitoring sampling data (per second):

CPU load sampling by group:
flow_lookup : 100%
flow_fastpath : 100%
flow_slowpath : 100%
flow_forwarding : 100%
flow_mgmt : 100%
flow_ctrl : 100%
nac_result : 100%
flow_np : 100%
dfa_result : 100%
module_internal : 100%
aho_result : 100%
zip_result : 100%
pktlog_forwarding : 100%
lwm : 0%
flow_host : 100%

CPU load (%) during last 60 seconds:
core 0 1 2 3 4 5
* 100 100 100 100 100
* 100 100 100 100 100
* 100 100 100 100 100
* 99 100 100 100 100
* 100 100 100 100 100
* 100 100 100 100 100
* 100 100 100 100 100
* 100 100 100 100 100
* 100 100 100 100 100
* 100 100 100 100 100
* 100 100 100 100 100
* 100 100 100 100 100
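For anyone who wants to reproduce the sampling block above, it comes from the resource monitor; something along these lines (second granularity over the last 60 samples; the keyword set may differ slightly per release):

FW1(active)> show running resource-monitor second last 60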

 

 

FW1(active)> show session info

target-dp: *.dp0
--------------------------------------------------------------------------------
Number of sessions supported: 524286
Number of active sessions: 32607
Number of active TCP sessions: 26561
Number of active UDP sessions: 5031
Number of active ICMP sessions: 29
Number of active BCAST sessions: 0
Number of active MCAST sessions: 0
Number of active predict sessions: 137
Session table utilization: 6%
Number of sessions created since bootup: 31876226
Packet rate: 860/s
Throughput: 686 kbps
New connection establish rate: 369 cps
-------------------------------------------------------------
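While the CPU is pegged I am also grabbing the global drop counters; a sketch, and the filter below is just where I would start:

FW1(active)> show counter global filter delta yes severity drop

With delta yes, only counters that have changed since the last run are shown, which makes it easier to spot what is actively dropping.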


FW1(active)> debug dataplane pool statistics


Hardware Pools
[ 0] Packet Buffers : 52002/57344 0x8000000030c00000
[ 1] Work Queue Entries : 217701/229376 0x8000000037c00000
[ 2] Output Buffers : 1007/1024 0x800000000fc00000
[ 3] DFA Result : 3996/4000 0x8000000039800000
[ 4] Timer Buffers : 4096/4096 0x8000000039be8000
[ 5] PAN_FPA_LWM_POOL : 1024/1024 0x800000000fd00000
[ 6] ZIP Commands : 1023/1024 0x800000000fd40000
[ 7] PAN_FPA_BLAST_PO : 1024/1024 0x800000000ff40000

Software Pools
[ 0] software packet buffer 0 ( 512): 31571/32768 0x800000004b9b5680
[ 1] software packet buffer 1 ( 1024): 32263/32768 0x800000004c9d5780
[ 2] software packet buffer 2 ( 2048): 78152/81920 0x800000004e9f5880
[ 3] software packet buffer 3 (33280): 19354/20480 0x8000000058a45980
[ 4] software packet buffer 4 (66048): 304/304 0x8000000081459a80
[ 5] Shared Pool 24 ( 24): 1002369/1040000 0x8000000082781780
[ 6] Shared Pool 32 ( 32): 1023311/1050000 0x8000000084346e80
[ 7] Shared Pool 40 ( 40): 139028/140000 0x8000000086753800
[ 8] Shared Pool 192 ( 192): 2151362/2240000 0x8000000086d33780
[ 9] Shared Pool 256 ( 256): 137189/140000 0x80000000a0fe7080
[10] ZIP Results ( 184): 981/1024 0x80000000bf2f7300
[11] CTD AV Block ( 1024): 32/32 0x80000000dbf0d380
[12] Regex Results (11544): 7648/8000 0x80000000dbf36100
[13] SSH Handshake State ( 6512): 64/64 0x80000000e32dc680
[14] SSH State ( 3200): 512/512 0x80000000e3342480
[15] TCP host connections ( 176): 15/16 0x80000000e34d2e80
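As far as I understand the pool output, the numbers read as free/total, so the Packet Buffers line above means 52002 of 57344 buffers are currently free. The interactive display below is the system statistics viewer, launched with something like:

FW1(active)> show system statistics session

Pressing 'a' switches it to the per-application view shown further down.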


You can type the following key to switch what to display
--------------------------------------------------------
'a' - Display application statistics
'h' - Display this help page
'q' - Quit this program
's' - Display system statistics
System Statistics: ('q' to quit, 'h' for help)

Device is up : 3 days 13 hours 3 mins 25 sec
Packet rate : 666/s
Throughput : 545 Kbps
Total active sessions : 35628
Active TCP sessions : 29485
Active UDP sessions : 4870
Active ICMP sessions : 41
Top 20 Application Statistics: ('q' to quit, 'h' for help)

Virtual System: vsys1
application sessions packets bytes
-------------------------------- ---------- ------------ ------------
linkedin-base 54911 3862899 2239254176
_NS_443 3095971 97506885 49368689662
wetransfer 33 2132843 2060482904
rtmp 102 2498409 1920167963
mediafire 16 1699521 1902291088
flumotion 1104 3564910 1859883495
http-audio 1847 1870987 1714584568
shoutcast 178 3245209 1601866741
citrix 158765 8593462 1506670681
ms-ds-smb 56826 16171358 10076315423
ms-onedrive-uploading 37 33602440 35837235826
dns 5081890 11618122 1435727329
ms-onedrive-downloading 126 1439873 1393143794
google-base 951801 63108687 39851792945
snmpv2 1425622 10870599 1149238331
google-maps 15062 1686969 1099259879
hotmail 10315 1882599 1084961227
paloalto-wildfire-cloud 1780 1126579 1077311913
soap 55168 1891095 1065351780
quic 263565 83977499 74065072117
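One thing that jumps out of that table: quic alone accounts for roughly 84 million packets and 74 GB. If it fits your policy, a common tuning step is to deny quic so that browsers fall back to standard HTTPS, which the firewall can classify and inspect properly. Whether it helps the CPU here is another question, but it removes a large unknown from the table. A hypothetical sketch (rule name and placement are mine; check the syntax against your release):

FW1(active)> configure
FW1(active)# set rulebase security rules Block-QUIC from any to any source any destination any application quic service any action deny
FW1(active)# commit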


Best regards

Hi,

 

The best way forward would be to open a TAC case for this issue. High CPU is not easy to troubleshoot without the special tools that TAC has.

FYI: 7.1.10 is out, so you might give it a go.

Same problem on a PA-5060: low session count, low CPS, low throughput, 80% DP CPU at peak.

 

Active TAC case.

Hi Philip & COS

 

Just wondering how you guys went with this? I have a planned upgrade in a few days and would like to avoid this occurring, or at least know what to do if it happens.

 

Regards

Darren

PAN TAC does not think we have an issue... I'm going to try a downgrade and get further proof.

 

Will let you know.

 

Edit:

FYI, we are using 3-5% of the available TP throughput at peak, and the same goes for sessions and CPS.

Thanks for the response, Philip. We were going to go with 7.1.9 but went to 7.1.10 instead, and all is good so far. Good luck with it, and keep us posted if you have the time.
