How to determine if high dataplane is an issue


L3 Networker

Our dataplane CPU usage is constantly at or above 90%.

 

We have a PA-3020 running PAN-OS 9.1.6.

 

It is usually high only during business hours; after hours it returns to normal.

It has not affected firewall performance or any traffic yet.

However, we are worried about what could be causing it and how we can optimize it.

We have followed this KB article, but we are not able to tell what is causing it or how to optimize it:

https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000ClRTCA0

 

admin@PA-3020-secondary(active)> show running resource-monitor

Resource monitoring sampling data (per second):

CPU load sampling by group:
flow_lookup : 88%
flow_fastpath : 88%
flow_slowpath : 88%
flow_forwarding : 88%
flow_mgmt : 88%
flow_ctrl : 88%
nac_result : 88%
flow_np : 88%
dfa_result : 88%
module_internal : 88%
aho_result : 88%
zip_result : 88%
pktlog_forwarding : 88%
lwm : 0%
flow_host : 88%

CPU load (%) during last 60 seconds:
core 0 1 2 3 4 5
* 88 87 87 87 87
* 88 87 87 87 87
* 87 86 87 87 88
* 93 95 94 94 94
* 93 93 93 93 92
* 96 96 96 95 96
* 94 95 93 94 94
* 93 94 94 94 94
* 91 88 89 91 88
* 91 90 90 90 90
* 95 95 95 95 94
* 97 98 97 98 98
* 97 97 96 96 96
* 92 91 91 91 92
* 89 89 89 89 88
* 90 87 88 87 87
* 91 90 91 91 91
* 94 94 94 94 93
* 91 90 91 92 92
* 86 87 83 83 84
* 80 82 84 81 81
* 86 88 86 88 86
* 81 82 79 82 81
* 89 91 92 91 92
* 88 87 87 86 87
* 91 89 89 88 90
* 96 96 96 96 97
* 92 92 92 92 92
* 96 95 95 96 96
* 94 93 93 93 93
* 92 91 91 92 91
* 92 91 92 90 91
* 91 87 88 89 88
* 94 93 93 94 93
* 86 82 84 83 83
* 90 91 91 91 91
* 95 95 95 95 95
* 99 99 99 99 99
* 93 93 93 93 94
* 89 90 90 91 91
* 86 88 87 88 87
* 89 88 88 88 89
* 97 96 96 96 97
* 98 98 98 98 98
* 97 96 96 96 95
* 99 98 98 98 98
* 97 96 96 95 96
* 97 97 97 96 96
* 93 93 94 93 93
* 93 93 94 93 94
* 90 91 92 92 92
* 93 93 93 93 93
* 96 96 96 96 96
* 93 93 92 93 93
* 96 96 96 95 96
* 90 90 90 91 92
* 91 91 92 91 91
* 91 91 90 91 90
* 93 92 91 92 92
* 95 95 94 94 95
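As a rough illustration of how to read the 60-second table above, the sketch below averages a few of the sampled rows (an excerpt, not the full table) to confirm the load is sustained across all cores rather than a brief spike on one:

```python
# Hypothetical sketch: average a few rows excerpted from the
# "CPU load (%) during last 60 seconds" table above (cores 1-5;
# core 0 is shown only as "*" in the CLI output).
samples = [
    [88, 87, 87, 87, 87],
    [93, 95, 94, 94, 94],
    [99, 99, 99, 99, 99],
    [86, 87, 83, 83, 84],
]

def average_load(rows):
    """Mean CPU load (%) across all cores and all sampled intervals."""
    flat = [v for row in rows for v in row]
    return sum(flat) / len(flat)

print(f"average dataplane load: {average_load(samples):.1f}%")
# Roughly 91% for this excerpt: a sustained load, not a transient spike.
```

A sustained, evenly spread load like this usually points at overall traffic volume rather than a single misbehaving core or process.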

 

admin@PA-3020-secondary(active)> show session info

target-dp: *.dp0
--------------------------------------------------------------------------------
Number of sessions supported: 262142
Number of allocated sessions: 19694
Number of active TCP sessions: 13951
Number of active UDP sessions: 5296
Number of active ICMP sessions: 57
Number of active GTPc sessions: 0
Number of active GTPu sessions: 0
Number of pending GTPu sessions: 0
Number of active BCAST sessions: 0
Number of active MCAST sessions: 0
Number of active predict sessions: 559
Number of active SCTP sessions: 0
Number of active SCTP associations: 0
Session table utilization: 7%
Number of sessions created since bootup: 3127924831
Packet rate: 49729/s
Throughput: 261195 kbps
New connection establish rate: 503 cps
--------------------------------------------------------------------------------
Session timeout
TCP default timeout: 3600 secs
TCP session timeout before SYN-ACK received: 5 secs
TCP session timeout before 3-way handshaking: 10 secs
TCP half-closed session timeout: 120 secs
TCP session timeout in TIME_WAIT: 15 secs
TCP session delayed ack timeout: 250 millisecs
TCP session timeout for unverified RST: 30 secs
UDP default timeout: 30 secs
ICMP default timeout: 6 secs
SCTP default timeout: 3600 secs
SCTP timeout before INIT-ACK received: 5 secs
SCTP timeout before COOKIE received: 60 secs
SCTP timeout before SHUTDOWN received: 30 secs
other IP default timeout: 30 secs
Captive Portal session timeout: 30 secs
Session timeout in discard state:
TCP: 90 secs, UDP: 60 secs, SCTP: 60 secs, other IP protocols: 60 secs
--------------------------------------------------------------------------------
Session accelerated aging: True
Accelerated aging threshold: 80% of utilization
Scaling factor: 2 X
--------------------------------------------------------------------------------
Session setup
TCP - reject non-SYN first packet: True
Hardware session offloading: True
Hardware UDP session offloading: True
IPv6 firewalling: True
Strict TCP/IP checksum: True
Strict TCP RST sequence: True
Reject TCP small initial window: False
Reject TCP SYN with different seq/options: True
ICMP Unreachable Packet Rate: 200 pps
--------------------------------------------------------------------------------
Application trickling scan parameters:
Timeout to determine application trickling: 10 secs
Resource utilization threshold to start scan: 80%
Scan scaling factor over regular aging: 8
--------------------------------------------------------------------------------
Session behavior when resource limit is reached: drop
--------------------------------------------------------------------------------
Pcap token bucket rate : 10485760
--------------------------------------------------------------------------------
Max pending queued mcast packets per session : 0
--------------------------------------------------------------------------------
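For what it's worth, the session counters above can be sanity-checked by hand. A minimal sketch, using the numbers copied from the `show session info` output (the truncated integer matches the reported 7%, so the session table itself is nowhere near capacity):

```python
# Numbers copied from the "show session info" output above.
SUPPORTED = 262142
ALLOCATED = 19694

def table_utilization(allocated, supported):
    """Session table utilization (%), truncated as PAN-OS reports it."""
    return int(allocated / supported * 100)

print(table_utilization(ALLOCATED, SUPPORTED))  # 7 -- matches the CLI output
```

With utilization at 7% and only ~500 new connections per second, session count is unlikely to be what is driving the dataplane CPU; throughput and content inspection load are more likely suspects.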

 

 

 

We noticed our root partition usage is high, and even after following the cleanup process in the Palo Alto Networks guide it is still high.

Would this be causing the dataplane issue?

> show system disk-space
 
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       3.8G  3.4G  236M  94% /
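As an aside on reading that output: `df` computes Use% as used / (used + available), rounded up, which is why the reported 94% looks higher than Used/Size (3.4/3.8, about 89%); the gap is typically reserved blocks. A small sketch of that arithmetic:

```python
import math

# Values copied from the "show system disk-space" output above.
USED_GB, AVAIL_GB = 3.4, 0.236

def df_use_percent(used, avail):
    """Use% the way GNU df reports it: used/(used+avail), rounded up."""
    return math.ceil(used / (used + avail) * 100)

print(df_use_percent(USED_GB, AVAIL_GB))  # 94 -- matches the CLI output
```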
Accepted Solution

@Jatin.Singh 

 

It might be due to the high traffic load on the PA-3020.

Since you have a few issues going on, please open a TAC case to see whether you are hitting the maximum capacity of the box.

The KB article you linked for high DP CPU is a very good reference.

 

Regards

MP

Help the community: Like helpful comments and mark solutions.


2 Replies

L3 Networker

Also, no changes were made to the firewall recently.

The only change in the environment is that a lot of people are working from home, and everyone is connected via GlobalProtect.

Once there are fewer GlobalProtect users in the evening, the dataplane returns to normal.
