high dataplane cpu

L3 Networker


Hi,

We're seeing very high dataplane CPU on a newly installed PA-3050. When checking the dp-monitor file we see the output below.

Can anyone tell what could be causing this? Thanks.

PAN-OS 6.1.4

Packet buffer is not high.

Packet descriptor is not high.

Session count is normal.

Throughput is normal.

Packet rate is between 80,000 and 100,000 pps at the problem time.
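Not an official diagnostic, but one quick sanity check on those numbers: relating the throughput and packet-rate counters reported further down in the output (574825 kbps at 95996 pps) gives the average packet size the dataplane is handling, which matters because much of the CPU cost is per-packet rather than per-byte:

```python
# Back-of-the-envelope: average packet size from the counters reported
# in the "show session info" output below. The numbers come from this
# thread; the interpretation (smaller packets => more CPU per Mbps) is
# a general rule of thumb, not an official PAN-OS diagnostic.

throughput_kbps = 574825   # ":Throughput: 574825 kbps"
packet_rate_pps = 95996    # ":Packet rate: 95996/s"

bytes_per_second = throughput_kbps * 1000 / 8
avg_packet_bytes = bytes_per_second / packet_rate_pps
print(f"average packet size: {avg_packet_bytes:.0f} bytes")
```

That works out to roughly 750 bytes per packet, so the load is not an obvious small-packet flood; the high CPU is more likely down to what the cores are doing per packet than to raw packet size.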

Resource monitoring sampling data (per second):

:

:CPU load sampling by group:

:flow_lookup : 97%

:flow_fastpath : 90%

:flow_slowpath : 97%

:flow_forwarding : 97%

:flow_mgmt : 60%

:flow_ctrl : 60%

:nac_result : 97%

:flow_np : 90%

:dfa_result : 97%

:module_internal : 97%

:aho_result : 97%

:zip_result : 97%

:pktlog_forwarding : 98%

:lwm : 0%

:flow_host : 60%

:

:CPU load (%) during last 15 seconds:

:core 0 1 2 3 4 5

: 0 60 97 97 98 98

: 0 59 97 97 97 97

: 0 59 98 98 98 98

: 0 59 98 98 98 98

: 0 56 97 97 98 98

: 0 57 98 98 98 98

: 0 59 97 97 97 97

: 0 57 94 94 95 95

: 0 56 95 95 96 96

: 0 53 95 96 96 96

: 0 49 93 93 94 94

: 0 50 94 94 95 95

: 0 52 95 95 95 95

: 0 61 96 96 96 96

: 0 64 98 98 98 98

:

:Resource utilization (%) during last 15 seconds:

:session:

: 20 20 19 19 19 19 19 19 19 19 19 19 19 19 19

:

:packet buffer:

: 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0

:

:packet descriptor:

: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

:

:packet descriptor (on-chip):

: 7 6 3 3 4 4 4 4 3 5 4 3 3 5 6

:

:

:Resource monitoring statistics (per minute):

:CPU load (%) during last 15 minutes:

:core 0 1 2 3 4 5

: avg max avg max avg max avg max avg max avg max

: 0 0 53 60 95 99 95 99 96 99 96 99

: 0 0 55 66 96 100 96 100 96 100 96 100

: 0 0 55 66 95 100 95 100 96 100 96 100

: 0 0 58 69 97 100 97 100 97 100 97 100

: 0 0 57 65 97 100 97 100 97 100 97 100

: 0 0 59 69 99 100 99 100 99 100 99 100

: 0 0 59 73 97 100 97 100 97 100 97 100

: 0 0 57 65 98 100 98 100 98 100 98 100

: 0 0 54 66 97 100 97 100 97 100 97 100

: 0 0 55 67 97 100 97 100 97 100 97 100

: 0 0 53 63 96 100 96 100 96 100 97 100

: 0 0 52 58 96 99 96 99 96 99 96 100

: 0 0 46 57 93 99 93 99 93 99 93 99

: 0 0 47 62 92 98 92 98 93 98 93 98

: 0 0 55 70 95 99 95 99 95 99 95 99

:Resource utilization (%) during last 15 minutes:

:session (average):

: 19 21 19 20 20 20 21 20 20 19 20 19 18 19 18

:

:session (maximum):

: 20 22 20 21 20 21 22 22 20 20 21 20 19 19 19

:

:packet buffer (average):

: 0 0 0 1 0 1 1 1 0 0 0 0 0 0 0

:

:packet buffer (maximum):

: 1 2 1 1 1 2 1 1 1 1 1 1 1 1 1

:

:packet descriptor (average):

: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

:

:packet descriptor (maximum):

: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

:

:packet descriptor (on-chip) (average):

: 4 5 4 6 4 6 6 6 4 5 4 4 3 5 5

:

:packet descriptor (on-chip) (maximum):

: 8 24 9 22 7 20 14 18 11 17 10 8 8 31 28

:

:--------------------------------------------------------------------------------

:Number of sessions supported: 524286

:Number of active sessions: 103036

:Number of active TCP sessions: 52227

:Number of active UDP sessions: 45340

:Number of active ICMP sessions: 89

:Number of active BCAST sessions: 0

:Number of active MCAST sessions: 0

:Number of active predict sessions: 1981

:Session table utilization: 19%

:Number of sessions created since bootup: 24249791

:Packet rate: 95996/s

:Throughput: 574825 kbps

:New connection establish rate: 2316 cps

:--------------------------------------------------------------------------------

:Session timeout

: TCP default timeout: 3600 secs

: TCP session timeout before SYN-ACK received: 5 secs

: TCP session timeout before 3-way handshaking: 10 secs

: TCP half-closed session timeout: 120 secs

: TCP session timeout in TIME_WAIT: 15 secs

: TCP session timeout for unverified RST: 30 secs

: UDP default timeout: 30 secs

: ICMP default timeout: 6 secs

: other IP default timeout: 30 secs

: Captive Portal session timeout: 30 secs

: Session timeout in discard state:

: TCP: 90 secs, UDP: 60 secs, other IP protocols: 60 secs

:--------------------------------------------------------------------------------

:Session accelerated aging: True

: Accelerated aging threshold: 80% of utilization

: Scaling factor: 2 X

:--------------------------------------------------------------------------------

:Session setup

: TCP - reject non-SYN first packet: True

: Hardware session offloading: True

: IPv6 firewalling: True

: Strict TCP/IP checksum: True

:--------------------------------------------------------------------------------

:Application trickling scan parameters:

: Timeout to determine application trickling: 10 secs

: Resource utilization threshold to start scan: 80%

: Scan scaling factor over regular aging: 8

:--------------------------------------------------------------------------------

:Session behavior when resource limit is reached: drop

:--------------------------------------------------------------------------------

:Pcap token bucket rate : 10485760

:--------------------------------------------------------------------------------

L4 Transporter

Re: high dataplane cpu

It looks like you have about 1,900 predict sessions; take a look at those:

show session all filter type predict


Also, run an ACC report for the last 15 minutes, or for the time period when the dataplane CPU is high.
