DataPlane Restarted unexpectedly


L4 Transporter


In the system log we can see that the dataplane restarted. When I run "show system resources follow", CPU utilization goes to 100%. Please advise; we are running PAN-OS 8.1.7.

(screenshot: Joshan_Lakhani_1-1596621967579.png)

Community Team Member

Hi @Joshan_Lakhani ,

 

It's normal for pan_task to be at 100% (https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000PLZZCA4).

 

You should use "show running resource-monitor" to determine the true dataplane load.

You might also want to check for core files or pull a tech support file for analysis.
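To check for core files and export a tech support file from the CLI, something like the following should work (the SCP target is a placeholder; substitute your own server and path):

admin> show system files
admin> scp export tech-support to user@host:/path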

 

Cheers,

-Kiwi.

 
L2 Linker

pan_task indicates that the dataplane is busy processing packets. A pan_task process runs on each dataplane core and handles traffic in the dataplane.
How to interpret the output of "show system resources" for multicore CPUs:
https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000PLZZCA4

Use "show running resource-monitor" on the CLI to find the true dataplane load.
If you saw high load over the last 24 hours or the last 7 days, please run:
show running resource-monitor hour last 24
show running resource-monitor day last 7

Plain "show running resource-monitor" includes all dataplane information.
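If you want to collect these statistics on a schedule instead of from the console, the same operational command can be fetched over the PAN-OS XML API. A minimal sketch in Python (the hostname, API key, and the exact XML form of the command are assumptions; confirm them with "debug cli on" or the API browser on your firewall):

```python
# Sketch: build the XML-API request for "show running resource-monitor hour last 24".
# Hostname and API key below are placeholders, not real values.
import urllib.parse

def build_op_url(host: str, api_key: str, cmd_xml: str) -> str:
    """Build the XML-API URL for a PAN-OS operational command."""
    params = urllib.parse.urlencode({
        "type": "op",      # operational command
        "cmd": cmd_xml,    # CLI command in its XML form
        "key": api_key,
    })
    return f"https://{host}/api/?{params}"

# Assumed XML form of: show running resource-monitor hour last 24
CMD = ("<show><running><resource-monitor>"
       "<hour><last>24</last></hour>"
       "</resource-monitor></running></show>")

url = build_op_url("firewall.example.com", "MY-API-KEY", CMD)
# Fetch the URL with your HTTP client of choice and parse the XML <result>
# element, which contains the same tables shown in the CLI output above.
```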



L4 Transporter

@kiwi 

 

Thanks for your reply.

 

admin@NITL> show running resource-monitor
> day Per-day monitoring statistics
> hour Per-hour monitoring statistics
> ingress-backlogs show top 5 sessions using at least 2% of packet buffers
> minute Per-minute monitoring statistics
> second Per-second monitoring statistics
> week Per-week monitoring statistics
| Pipe through a command
<Enter> Finish input

admin@NITL> show running resource-monitor
Resource monitoring sampling data (per second):

CPU load sampling by group:
flow_lookup : 4%
flow_fastpath : 4%
flow_slowpath : 4%
flow_forwarding : 4%
flow_mgmt : 4%
flow_ctrl : 5%
nac_result : 4%
flow_np : 4%
dfa_result : 4%
module_internal : 4%
aho_result : 4%
zip_result : 4%
pktlog_forwarding : 4%
lwm : 0%
flow_host : 4%

CPU load (%) during last 60 seconds:
core 0 1 2 3 4 5 6 7
* 4 5 4 4 4 * *
* 3 4 3 3 3 * *
* 2 3 2 2 2 * *
* 2 3 2 2 2 * *
* 3 4 3 2 2 * *
* 2 3 2 2 2 * *
* 2 3 2 3 2 * *
* 3 4 3 3 3 * *
* 2 3 2 2 3 * *
* 2 2 2 1 2 * *
* 1 3 2 1 2 * *
* 3 4 3 3 3 * *
* 3 4 3 3 3 * *
* 3 4 3 3 3 * *
* 3 4 3 3 3 * *
* 3 4 3 3 3 * *
* 2 3 2 3 2 * *
* 3 3 3 3 3 * *
* 3 4 3 2 3 * *
* 3 3 2 2 3 * *
* 4 3 3 4 3 * *
* 4 4 3 4 3 * *
* 3 4 3 3 3 * *
* 3 4 3 3 3 * *
* 3 4 3 3 3 * *
* 2 3 2 2 2 * *
* 3 4 3 3 3 * *
* 3 4 3 2 3 * *
* 3 4 3 3 3 * *
* 3 3 3 3 3 * *
* 3 4 3 3 3 * *
* 5 6 5 5 6 * *
* 3 5 4 4 4 * *
* 3 4 3 3 3 * *
* 2 4 3 3 3 * *
* 3 4 3 2 3 * *
* 4 4 3 3 3 * *
* 3 4 3 3 3 * *
* 3 4 3 3 4 * *
* 3 4 3 3 3 * *
* 3 3 2 3 3 * *
* 5 5 5 4 5 * *
* 7 6 6 6 6 * *
* 7 6 7 7 7 * *
* 7 6 8 8 8 * *
* 6 5 7 7 7 * *
* 13 7 11 13 14 * *
* 11 5 7 10 8 * *
* 4 4 3 4 3 * *
* 2 3 2 3 2 * *
* 3 4 3 3 3 * *
* 3 4 3 3 2 * *
* 3 4 3 3 3 * *
* 3 3 2 3 3 * *
* 3 3 3 3 3 * *
* 3 4 3 3 3 * *
* 2 3 2 2 3 * *
* 3 3 2 2 3 * *
* 2 3 3 2 3 * *
* 2 3 2 2 2 * *

Resource utilization (%) during last 60 seconds:
session:
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

packet buffer:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

packet descriptor:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

packet descriptor (on-chip):
5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5 5 5 5 5 5 5 9 6
5 5 5 5 5 5 5 5 5 5 5 5 5 6 5
5 5 5 5 5 5 5 5 5 5 5 5 5 5 5


Resource monitoring sampling data (per minute):

CPU load (%) during last 60 minutes:
core 0 1 2 3 4 5 6 7
avg max avg max avg max avg max avg max avg max avg max avg max
* * 6 14 6 12 6 14 6 13 6 15 * * * *
* * 4 11 4 10 4 10 4 9 4 11 * * * *
* * 3 8 3 5 3 8 3 7 3 9 * * * *
* * 4 14 4 12 4 13 4 18 4 16 * * * *
* * 4 16 4 10 4 13 4 16 4 17 * * * *
* * 3 8 4 9 3 7 3 7 3 7 * * * *
* * 4 13 4 8 4 13 4 12 4 13 * * * *
* * 6 14 6 13 6 14 6 15 6 15 * * * *
* * 4 11 5 10 4 9 4 11 4 9 * * * *
* * 4 13 4 7 4 13 4 13 4 16 * * * *
* * 3 13 4 8 3 14 3 14 3 14 * * * *
* * 3 4 4 5 3 4 3 4 3 4 * * * *
* * 3 4 3 5 3 4 3 4 3 4 * * * *
* * 3 4 4 5 3 4 3 4 3 4 * * * *
* * 3 5 4 5 3 5 3 5 3 4 * * * *
* * 3 6 4 5 3 5 3 6 3 5 * * * *
* * 3 5 4 5 3 5 3 4 3 5 * * * *
* * 4 5 4 6 3 5 3 5 4 6 * * * *
* * 7 13 7 13 7 12 7 12 7 12 * * * *
* * 3 5 4 6 3 5 3 5 3 5 * * * *
* * 3 6 4 6 3 5 3 5 3 6 * * * *
* * 4 18 4 9 4 18 4 18 4 20 * * * *
* * 3 8 4 7 3 7 3 7 3 9 * * * *
* * 3 6 4 6 3 8 3 7 3 8 * * * *
* * 3 4 3 5 3 4 2 4 3 4 * * * *
* * 3 5 3 6 3 5 3 5 3 5 * * * *
* * 3 6 4 6 3 6 3 7 3 7 * * * *
* * 4 6 4 7 4 6 4 6 4 7 * * * *
* * 4 7 4 8 4 7 3 8 4 7 * * * *
* * 6 12 6 13 6 12 6 12 6 12 * * * *
* * 5 12 4 11 5 14 5 11 5 12 * * * *
* * 4 10 4 8 4 9 4 11 4 9 * * * *
* * 3 8 4 9 3 8 3 8 3 8 * * * *
* * 3 4 3 5 3 4 3 4 3 4 * * * *
* * 5 20 5 18 4 19 5 21 5 20 * * * *
* * 4 19 5 11 4 16 4 16 4 15 * * * *
* * 3 5 4 5 3 4 3 5 3 5 * * * *
* * 4 6 5 7 4 7 4 6 4 7 * * * *
* * 4 6 5 7 4 5 4 6 4 6 * * * *
* * 3 6 4 6 3 8 3 10 3 10 * * * *
* * 4 8 4 6 4 7 4 7 4 7 * * * *
* * 8 20 7 14 7 19 8 20 8 21 * * * *
* * 4 13 4 6 4 12 4 12 4 15 * * * *
* * 4 8 4 6 4 6 4 7 4 7 * * * *
* * 3 8 4 8 3 9 3 9 3 9 * * * *
* * 4 14 4 8 3 12 4 13 4 17 * * * *
* * 3 12 4 7 3 9 3 11 3 10 * * * *
* * 8 12 8 12 8 13 8 14 8 14 * * * *
* * 3 6 4 6 3 5 3 6 3 7 * * * *
* * 3 5 4 5 3 4 3 5 3 4 * * * *
* * 4 9 4 9 4 14 4 11 4 11 * * * *
* * 4 9 5 10 4 9 4 8 4 9 * * * *
* * 7 17 8 18 7 19 8 19 8 18 * * * *
* * 7 23 6 14 7 20 7 23 7 20 * * * *
* * 6 11 8 13 6 11 6 11 6 11 * * * *
* * 4 13 5 8 4 10 4 11 4 12 * * * *
* * 4 10 5 8 4 10 4 11 5 9 * * * *
* * 5 9 5 9 5 8 5 11 5 9 * * * *
* * 4 7 5 9 4 8 4 8 4 7 * * * *
* * 4 13 4 8 4 15 4 13 4 15 * * * *

Resource utilization (%) during last 60 minutes:
session (average):
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 3 3 2 2
2 2 2 2 2 2 2 3 3 2 2 2 2 2 3
3 2 2 2 2 2 2 3 3 3 3 3 2 2 2

session (maximum):
2 2 2 2 3 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 3 3 3 2 2
2 2 2 2 2 2 3 3 3 2 3 3 3 3 3
3 3 2 2 2 2 3 3 3 3 3 3 3 2 2

packet buffer (average):
0 0 0 0 1 1 1 0 0 0 0 0 0 0 0
0 0 0 1 1 1 0 0 0 0 0 0 0 0 0
0 0 1 1 1 0 0 0 0 0 0 0 0 0 0
0 1 1 1 0 0 0 0 0 0 0 0 0 0 0

packet buffer (maximum):
0 0 1 0 1 1 1 1 0 0 0 0 0 0 0
0 0 1 1 2 1 0 0 0 0 0 0 0 0 0
1 0 1 2 1 1 0 0 0 0 0 0 0 1 0
0 1 2 2 1 0 0 0 0 0 0 0 0 0 0

packet descriptor (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

packet descriptor (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

packet descriptor (on-chip) (average):
5 5 5 5 6 5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5 6 6 5 5 5 5 5 5

packet descriptor (on-chip) (maximum):
6 8 5 6 28 10 8 8 5 8 5 5 5 5 5
5 5 5 8 5 5 8 7 8 5 5 5 5 5 5
8 6 5 5 5 11 5 5 8 6 6 8 8 7 8
8 5 6 6 6 7 7 21 25 7 8 5 6 5 6


Resource monitoring sampling data (per hour):

CPU load (%) during last 24 hours:
core 0 1 2 3 4 5 6 7
avg max avg max avg max avg max avg max avg max avg max avg max
* * 4 23 5 23 4 21 4 23 4 21 * * * *
* * 4 30 5 20 4 30 4 27 4 29 * * * *
* * 4 22 5 19 4 20 4 21 4 22 * * * *
* * 5 32 6 30 5 31 5 31 5 34 * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *

Resource utilization (%) during last 24 hours:
session (average):
2 3 2 2 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
session (maximum):
4 4 3 4 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet buffer (average):
0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet buffer (maximum):
2 8 3 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (maximum):
0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (average):
5 5 5 5 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (maximum):
25 89 45 27 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0

Resource monitoring sampling data (per day):

CPU load (%) during last 7 days:
core 0 1 2 3 4 5 6 7
avg max avg max avg max avg max avg max avg max avg max avg max
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *

Resource utilization (%) during last 7 days:
session (average):
0 0 0 0 0 0 0
session (maximum):
0 0 0 0 0 0 0
packet buffer (average):
0 0 0 0 0 0 0
packet buffer (maximum):
0 0 0 0 0 0 0
packet descriptor (average):
0 0 0 0 0 0 0
packet descriptor (maximum):
0 0 0 0 0 0 0
packet descriptor (on-chip) (average):
0 0 0 0 0 0 0
packet descriptor (on-chip) (maximum):
0 0 0 0 0 0 0

Resource monitoring sampling data (per week):

CPU load (%) during last 13 weeks:
core 0 1 2 3 4 5 6 7
avg max avg max avg max avg max avg max avg max avg max avg max
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *

Resource utilization (%) during last 13 weeks:
session (average):
0 0 0 0 0 0 0 0 0 0 0 0 0
session (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (average):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (average):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (average):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0

admin@NITL> show running resource-monitor
> day Per-day monitoring statistics
> hour Per-hour monitoring statistics
> ingress-backlogs show top 5 sessions using at least 2% of packet buffers
> minute Per-minute monitoring statistics
> second Per-second monitoring statistics
> week Per-week monitoring statistics
| Pipe through a command
<Enter> Finish input

admin@NITL> show running resource-monitor hour last 24
Resource monitoring sampling data (per hour):

CPU load (%) during last 24 hours:
core 0 1 2 3 4 5 6 7
avg max avg max avg max avg max avg max avg max avg max avg max
* * 4 23 5 23 4 21 4 23 4 21 * * * *
* * 4 30 5 20 4 30 4 27 4 29 * * * *
* * 4 22 5 19 4 20 4 21 4 22 * * * *
* * 5 32 6 30 5 31 5 31 5 34 * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *

Resource utilization (%) during last 24 hours:
session (average):
2 3 2 2 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
session (maximum):
4 4 3 4 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet buffer (average):
0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet buffer (maximum):
2 8 3 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (maximum):
0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (average):
5 5 5 5 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (maximum):
25 89 45 27 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0

admin@NITL> show session info
target-dp: *.dp0
--------------------------------------------------------------------------------
Number of sessions supported: 196606
Number of allocated sessions: 4301
Number of active TCP sessions: 3529
Number of active UDP sessions: 770
Number of active ICMP sessions: 2
Number of active GTPc sessions: 0
Number of active GTPu sessions: 0
Number of pending GTPu sessions: 0
Number of active BCAST sessions: 0
Number of active MCAST sessions: 0
Number of active predict sessions: 2
Number of active SCTP sessions: 0
Number of active SCTP associations: 0
Session table utilization: 2%
Number of sessions created since bootup: 803968
Packet rate: 1456/s
Throughput: 3458 kbps
New connection establish rate: 42 cps
--------------------------------------------------------------------------------
Session timeout
TCP default timeout: 3600 secs
TCP session timeout before SYN-ACK received: 5 secs
TCP session timeout before 3-way handshaking: 10 secs
TCP half-closed session timeout: 120 secs
TCP session timeout in TIME_WAIT: 60 secs
TCP session delayed ack timeout: 250 millisecs
TCP session timeout for unverified RST: 30 secs
UDP default timeout: 30 secs
ICMP default timeout: 6 secs
SCTP default timeout: 3600 secs
SCTP timeout before INIT-ACK received: 5 secs
SCTP timeout before COOKIE received: 60 secs
SCTP timeout before SHUTDOWN received: 30 secs
other IP default timeout: 30 secs
Captive Portal session timeout: 30 secs
Session timeout in discard state:
TCP: 90 secs, UDP: 60 secs, SCTP: 60 secs, other IP protocols: 60 secs
--------------------------------------------------------------------------------
Session accelerated aging: True
Accelerated aging threshold: 80% of utilization
Scaling factor: 2 X
--------------------------------------------------------------------------------
Session setup
TCP - reject non-SYN first packet: True
Hardware session offloading: True
Hardware UDP session offloading: True
IPv6 firewalling: True
Strict TCP/IP checksum: True
Strict TCP RST sequence: True
Reject TCP small initial window: False
ICMP Unreachable Packet Rate: 200 pps
--------------------------------------------------------------------------------
Application trickling scan parameters:
Timeout to determine application trickling: 10 secs
Resource utilization threshold to start scan: 80%
Scan scaling factor over regular aging: 8
--------------------------------------------------------------------------------
Session behavior when resource limit is reached: drop
--------------------------------------------------------------------------------
Pcap token bucket rate : 10485760
--------------------------------------------------------------------------------
Max pending queued mcast packets per session : 0
--------------------------------------------------------------------------------

admin@NITL> show counter global filter delta yes
Global counters:
Elapsed time since last sampling: 341.100 seconds

name value rate severity category aspect description
--------------------------------------------------------------------------------
pkt_recv 725168 2125 info packet pktproc Packets received
pkt_recv_zero 725168 2125 info packet pktproc Packets received from QoS 0
pkt_sent 712920 2090 info packet pktproc Packets transmitted
pkt_alloc 37818 110 info packet resource Packets allocated
pkt_swbuf_fwd 1092 3 info packet pktproc Packets transmitted using software buffer
pkt_stp_rcv 511 1 info packet pktproc STP BPDU packets received
pkt_lacp_sent 24 0 info packet pktproc LACP Packets transmitted
pkt_lacp_recv 24 0 info packet pktproc LACP Packets received
session_allocated 16249 47 info session resource Sessions allocated
session_freed 14987 43 info session resource Sessions freed
session_installed 16126 47 info session resource Sessions installed
session_predict_dst -1 0 info session resource Active dst predict sessions
session_discard 5926 17 info session resource Session set to discard by security policy check
session_install_error 52 0 warn session pktproc Sessions installation error
session_install_error_s2c 52 0 warn session pktproc Sessions installation error s2c
session_unverified_rst 548 1 info session pktproc Session aging timer modified by unverified RST
session_hash_insert_duplicate 52 0 warn session pktproc Session setup: hash insert failure due to duplicate entry
session_servobj_timeout_override 17692 51 info session pktproc session timeout overridden by service object
flow_rcv_err 10 0 drop flow parse Packets dropped: flow stage receive error
flow_rcv_dot1q_tag_err 4450 13 drop flow parse Packets dropped: 802.1q tag not configured
flow_no_interface 4450 13 drop flow parse Packets dropped: invalid interface
flow_ipv6_disabled 824 2 drop flow parse Packets dropped: IPv6 disabled on interface
flow_policy_deny 3563 10 drop flow session Session setup: denied by policy
flow_tcp_non_syn 117 0 info flow session Non-SYN TCP packets without session match
flow_tcp_non_syn_drop 117 0 drop flow session Packets dropped: non-SYN TCP without session match
flow_fwd_l3_bcast_drop 3 0 drop flow forward Packets dropped: unhandled IP broadcast
flow_fwd_l3_mcast_drop 1842 5 drop flow forward Packets dropped: no route for IP multicast
flow_fwd_l3_noarp 110 0 drop flow forward Packets dropped: no ARP
flow_fwd_zonechange 218 0 drop flow forward Packets dropped: forwarded to different zone
flow_fwd_mtu_exceeded 3 0 info flow forward Packets lengths exceeded MTU
flow_fwd_drop_noxmit 777 2 info flow forward Packet dropped at forwarding: noxmit
flow_qos_pkt_enque 707538 2074 info flow qos Packet enqueued to QoS module
flow_qos_pkt_deque 707538 2074 info flow qos Packet dequeued from QoS module
flow_qos_pkt_flush 2 0 info flow qos Packet flushed from QoS queue due to configuration change
flow_parse_ip_ttlzero 6 0 drop flow parse Packets dropped: Zero TTL in IP packet
flow_ip6_mcast_off 824 2 info flow pktproc Packets received: IPv6 multicast pkts with flow off
flow_parse_ipfrag_df_on 3 0 info flow parse IP fragments with DF/Reserve bit on
flow_ipfrag_recv 33 0 info flow ipfrag IP fragments received
flow_ipfrag_free 17 0 info flow ipfrag IP fragments freed after defragmentation
flow_ipfrag_merge 16 0 info flow ipfrag IP defragmentation completed
flow_ipfrag_swbuf 16 0 info flow ipfrag Software buffers allocated for reassembled IP packet
flow_ipfrag_frag 7 0 info flow ipfrag IP fragments transmitted
flow_ipfrag_restore 1 0 info flow ipfrag IP fragment restore packet
flow_ipfrag_del 16 0 info flow ipfrag IP fragment delete entry
flow_ipfrag_result_alloc 16 0 info flow ipfrag IP fragment result allocated
flow_ipfrag_result_free 16 0 info flow ipfrag IP fragment result freed
flow_ipfrag_entry_alloc 16 0 info flow ipfrag IP fragment entry allocated
flow_ipfrag_entry_free 16 0 info flow ipfrag IP fragment entry freed
flow_action_predict 2 0 info flow pktproc Predict sessions created
flow_predict_session_dup 16 0 info flow session Duplicate Predict Session installation attempts
flow_action_close 7436 21 drop flow pktproc TCP sessions closed via injecting RST
flow_action_reset 13 0 drop flow pktproc TCP clients reset via responding RST
flow_ager_delay_discard 123 0 info flow pktproc session ager to discard session due to rematch
flow_rematch_parent 127 0 info flow pktproc number of rematch deny for parent sessions
flow_arp_pkt_rcv 952 2 info flow arp ARP packets received
flow_arp_pkt_xmt 46 0 info flow arp ARP packets transmitted
flow_arp_pkt_replied 461 1 info flow arp ARP requests replied
flow_arp_pkt_learned 2 0 info flow arp ARP entry learned
flow_arp_rcv_gratuitous 11 0 info flow arp Gratuitous ARP packets received
flow_arp_resolve_xmt 46 0 info flow arp ARP resolution packets transmitted
flow_host_pkt_rcv 2971 8 info flow mgmt Packets received from control plane
flow_host_pkt_xmt 7939 23 info flow mgmt Packets transmitted to control plane
flow_host_decap_err 4 0 drop flow mgmt Packets dropped: decapsulation error from control plane
flow_host_xmt_nozone 5 0 drop flow mgmt Packets dropped: interface has no zone configured
flow_host_service_allow 2740 8 info flow mgmt Device management session allowed
flow_host_service_deny 107 0 drop flow mgmt Device management session denied
flow_host_service_unknown 73 0 drop flow mgmt Session discarded: unknown application to control plane
flow_host_vardata_rate_limit_ok 6 0 info flow mgmt Host vardata not sent: rate limit ok
flow_tunnel_encap_resolve 992 2 info flow tunnel tunnel structure lookup resolve
flow_tunnel_nh_session_insert 1 0 info flow tunnel Tunnel nexthop inserted, session
flow_tcp_cksm_sw_validation 1 0 info flow pktproc Packets for which TCP checksum validation was done in software
appid_ident_by_icmp 453 1 info appid pktproc Application identified by icmp type
appid_ident_by_simple_sig 8876 26 info appid pktproc Application identified by simple signature
appid_post_pkt_queued 16 0 info appid resource The total trailing packets queued in AIE
appid_ident_by_dport_first 4231 12 info appid pktproc Application identified by L4 dport first
appid_ident_by_dport 80 0 info appid pktproc Application identified by L4 dport
appid_proc 11183 32 info appid pktproc The number of packets processed by Application identification
appid_unknown_udp 289 0 info appid pktproc The number of unknown UDP applications after app engine
appid_unknown_fini 30 0 info appid pktproc The number of unknown applications
appid_unknown_fini_empty 319 0 info appid pktproc The number of unknown applications because of no data
nat_dynamic_port_xlat 10740 31 info nat resource The total number of dynamic_ip_port NAT translate called
nat_dynamic_port_release 9919 29 info nat resource The total number of dynamic_ip_port NAT release called
dfa_sw 300558 881 info dfa pktproc The total number of dfa match using software
tcp_drop_packet 1413 4 warn tcp pktproc packets dropped because of failure in tcp reassembly
tcp_pkt_queued 4 0 info tcp resource The number of out of order packets queued in tcp
tcp_case_1 4 0 info tcp pktproc tcp reassembly case 1
tcp_case_2 1258 3 info tcp pktproc tcp reassembly case 2
tcp_drop_out_of_wnd 8 0 warn tcp resource out-of-window packets dropped
tcp_exceed_flow_seg_limit 1413 4 warn tcp resource packets dropped due to the limitation on tcp out-of-order queue size
tcp_fptcp_fast_retransmit 1 0 info tcp pktproc packets for fptcp fast retransmit
ctd_pkt_queued -89 0 info ctd resource The number of packets queued in ctd
ctd_sml_exit 80 0 info ctd pktproc The number of sessions with sml exit
ctd_sml_exit_detector_i 1749 5 info ctd pktproc The number of sessions with sml exit in detector i
ctd_sml_set_suspend 3 0 info ctd pktproc The number of decoder suspend requests
ctd_sml_unset_suspend 1344 3 info ctd pktproc The number of decoder resume requests
appid_bypass_no_ctd 360 1 info appid pktproc appid bypass due to no ctd
ctd_handle_reset_and_url_exit 18 0 info ctd pktproc Handle reset and url exit
ctd_inner_decode_exceed_flow_limit 20 0 info ctd pktproc Inner decoder exceeds limit. Replaced the oldest inner decoder.
ctd_err_bypass 1829 5 info ctd pktproc ctd error bypass
ctd_err_sw 1 0 info ctd pktproc ctd sw error
ctd_switch_decoder 428 1 info ctd pktproc ctd switch decoder
ctd_stop_proc 357 1 info ctd pktproc ctd stops to process packet
ctd_run_detector_i 8537 25 info ctd pktproc run detector_i
ctd_sml_vm_run_impl_opcodeexit 2236 6 info ctd pktproc SML VM opcode exit
ctd_sml_vm_run_impl_immed8000 4817 14 info ctd pktproc SML VM immed8000
ctd_sml_vm_run_eval_zip_ratio 16 0 info ctd pktproc SML VM eval zip ratio
ctd_decode_filter_chunk_normal 1181 3 info ctd pktproc Packets with normal chunks
ctd_decode_filter_QP 58 0 info ctd pktproc decode filter QP
ctd_sml_opcode_set_file_type 5279 15 info ctd pktproc sml opcode set file type
ctd_token_match_overflow 4943 14 info ctd pktproc The token match overflow
ctd_filter_decode_failure_qpdecode 58 0 error ctd pktproc Number of decode filter failure for qpdecode
ctd_bloom_filter_nohit 5948 17 info ctd pktproc The number of no match for virus bloom filter
ctd_bloom_filter_hit 24 0 info ctd pktproc The number of match for virus bloom filter
ctd_fwd_err_tcp_state 7657 22 info ctd pktproc Content forward error: TCP in establishment when session went away
aho_too_many_matches 4 0 info aho pktproc too many signature matches within one packet
aho_too_many_mid_res 10 0 info aho pktproc too many signature middle results within one packet
aho_sw 389937 1143 info aho pktproc The total usage of software for AHO
ctd_exceed_queue_limit 1 0 warn ctd resource The number of packets queued in ctd exceeds per session's limit, action bypass
ctd_appid_reassign 7573 22 info ctd pktproc appid was changed
ctd_decoder_reassign 428 1 info ctd pktproc decoder was changed
ctd_skip_offset_error 1 0 warn ctd resource skip offset error
ctd_url_block 6280 18 info ctd pktproc sessions blocked by url filtering
ctd_process 15899 46 info ctd pktproc session processed by ctd
ctd_pkt_slowpath 332106 973 info ctd pktproc Packets processed by slowpath
ctd_pkt_slowpath_suspend_vm 893 2 info ctd pktproc Packets bypassed CTD at VM stage
ctd_pkt_slowpath_suspend_regex 8 0 info ctd pktproc Packets bypassed CTD at regex stage
ctd_detector_discard 27 0 info ctd pktproc session discarded by detector
ctd_hitcount_period_update 1 0 info ctd system Number of Policy Hit Count periodical update
log_url_cnt 1777 5 info log system Number of url logs
log_urlcontent_cnt 247 0 info log system Number of url content logs
log_uid_req_cnt 845 2 info log system Number of uid request logs
log_av_cnt 1 0 info log system Number of anti-virus logs
log_vulnerability_cnt 33 0 info log system Number of vulnerability logs
log_fileext_cnt 102 0 info log system Number of file block logs
log_traffic_cnt 19079 55 info log system Number of traffic logs
log_pkt_diag_us 16 0 info log system Time (us) spent on writing packet-diag logs
log_http_hdr_cnt 972 2 info log system Number of HTTP hdr field logs
log_email_hdr_cnt 41 0 info log system Number of EMAIL hdr field logs
ctd_http_range_response 250 0 info ctd system Number of HTTP range responses detected by ctd
log_suppress 729 2 info log system Logs suppressed by log suppression
proxy_process 20 0 info proxy pktproc Number of flows go through proxy
proxy_url_request_pkt_drop 2 0 drop proxy pktproc The number of packets get dropped because of waiting for url category request in ssl proxy
proxy_url_category_unknown 1 0 info proxy pktproc Number of sessions checked by proxy with unknown url category
proxy_sessions 1 0 info proxy pktproc Current number of proxy sessions
ssl_client_sess_ticket 19 0 info ssl pktproc Number of ssl session with client sess ticket ext
uid_ipinfo_rcv 26 0 info uid pktproc Number of ip user info received
url_db_request 96 0 info url pktproc Number of URL database request
url_db_reply 4188 12 info url pktproc Number of URL reply
url_session_not_in_ssl_wait 11 0 error url system The session is not waiting for url in ssl proxy
zip_process_total 21041 61 info zip pktproc The total number of zip engine decompress process
zip_process_stop 9 0 info zip pktproc The number of zip decompress process stops lack of output buffer
zip_process_sw 4280 12 info zip pktproc The total number of zip software decompress process
zip_result_drop 1 0 warn zip pktproc The number of zip results dropped
zip_ctx_free_race 1 0 info zip pktproc The number of attempted frees of active zip contexts
zip_hw_in 17960085 52653 info zip pktproc The total input data size to hardware zip engine
zip_hw_out 103552249 303583 info zip pktproc The total output data size from hardware zip engine
tcp_fin_q_pkt_alloc 1108 3 info tcp pktproc packets allocated by tcp FIN queue
tcp_fin_q_pkt_free 1104 3 info tcp pktproc packets freed by tcp FIN queue
tcp_fin_q_hit 204 0 info tcp pktproc packets that trigger FIN queue retransmission
ctd_smb_outoforder_chunks 25 0 info ctd pktproc Number of out-of-order SMB chunks
--------------------------------------------------------------------------------
Total counters shown: 160
--------------------------------------------------------------------------------

admin@NITL> show counter global filter delta yes
Global counters:
Elapsed time since last sampling: 12.10 seconds

name value rate severity category aspect description
--------------------------------------------------------------------------------
pkt_recv 57075 4752 info packet pktproc Packets received
pkt_recv_zero 57075 4752 info packet pktproc Packets received from QoS 0
pkt_sent 57054 4750 info packet pktproc Packets transmitted
pkt_alloc 7713 642 info packet resource Packets allocated
pkt_swbuf_fwd 2286 190 info packet pktproc Packets transmitted using software buffer
pkt_stp_rcv 18 1 info packet pktproc STP BPDU packets received
session_allocated 993 82 info session resource Sessions allocated
session_freed 649 54 info session resource Sessions freed
session_installed 987 82 info session resource Sessions installed
session_discard 273 22 info session resource Session set to discard by security policy check
session_install_error 5 0 warn session pktproc Sessions installation error
session_install_error_s2c 5 0 warn session pktproc Sessions installation error s2c
session_unverified_rst 10 0 info session pktproc Session aging timer modified by unverified RST
session_hash_insert_duplicate 5 0 warn session pktproc Session setup: hash insert failure due to duplicate entry
session_servobj_timeout_override 1419 118 info session pktproc session timeout overridden by service object
flow_rcv_dot1q_tag_err 89 7 drop flow parse Packets dropped: 802.1q tag not configured
flow_no_interface 89 7 drop flow parse Packets dropped: invalid interface
flow_ipv6_disabled 7 0 drop flow parse Packets dropped: IPv6 disabled on interface
flow_policy_deny 114 9 drop flow session Session setup: denied by policy
flow_tcp_non_syn 9 0 info flow session Non-SYN TCP packets without session match
flow_tcp_non_syn_drop 9 0 drop flow session Packets dropped: non-SYN TCP without session match
flow_fwd_l3_mcast_drop 38 3 drop flow forward Packets dropped: no route for IP multicast
flow_fwd_l3_noarp 8 0 drop flow forward Packets dropped: no ARP
flow_fwd_zonechange 22 1 drop flow forward Packets dropped: forwarded to different zone
flow_fwd_mtu_exceeded 40 3 info flow forward Packets lengths exceeded MTU
flow_fwd_drop_noxmit 23 1 info flow forward Packet dropped at forwarding: noxmit
flow_qos_pkt_enque 53345 4441 info flow qos Packet enqueued to QoS module
flow_qos_pkt_deque 53344 4441 info flow qos Packet dequeued from QoS module
flow_ip6_mcast_off 7 0 info flow pktproc Packets received: IPv6 multicast pkts with flow off
flow_ipfrag_recv 160 13 info flow ipfrag IP fragments received
flow_ipfrag_free 120 9 info flow ipfrag IP fragments freed after defragmentation
flow_ipfrag_merge 40 3 info flow ipfrag IP defragmentation completed
flow_ipfrag_swbuf 40 3 info flow ipfrag Software buffers allocated for reassembled IP packet
flow_ipfrag_frag 160 13 info flow ipfrag IP fragments transmitted
flow_ipfrag_restore 40 3 info flow ipfrag IP fragment restore packet
flow_ipfrag_del 40 3 info flow ipfrag IP fragment delete entry
flow_ipfrag_result_alloc 40 3 info flow ipfrag IP fragment result allocated
flow_ipfrag_result_free 40 3 info flow ipfrag IP fragment result freed
flow_ipfrag_entry_alloc 40 3 info flow ipfrag IP fragment entry allocated
flow_ipfrag_entry_free 40 3 info flow ipfrag IP fragment entry freed
flow_action_close 374 31 drop flow pktproc TCP sessions closed via injecting RST
flow_arp_pkt_rcv 24 1 info flow arp ARP packets received
flow_arp_pkt_xmt 2 0 info flow arp ARP packets transmitted
flow_arp_pkt_replied 12 0 info flow arp ARP requests replied
flow_arp_resolve_xmt 2 0 info flow arp ARP resolution packets transmitted
flow_host_pkt_rcv 141 11 info flow mgmt Packets received from control plane
flow_host_pkt_xmt 355 29 info flow mgmt Packets transmitted to control plane
flow_host_service_allow 135 11 info flow mgmt Device management session allowed
flow_host_service_unknown 1 0 drop flow mgmt Session discarded: unknown application to control plane
flow_tunnel_encap_resolve 2109 175 info flow tunnel tunnel structure lookup resolve
appid_ident_by_icmp 3 0 info appid pktproc Application identified by icmp type
appid_ident_by_simple_sig 460 38 info appid pktproc Application identified by simple signature
appid_post_pkt_queued -1 0 info appid resource The total trailing packets queued in AIE
appid_ident_by_dport_first 221 18 info appid pktproc Application identified by L4 dport first
appid_ident_by_dport 130 10 info appid pktproc Application identified by L4 dport
appid_proc 754 62 info appid pktproc The number of packets processed by Application identification
appid_unknown_udp 3 0 info appid pktproc The number of unknown UDP applications after app engine
appid_unknown_fini_empty 8 0 info appid pktproc The number of unknown applications because of no data
nat_dynamic_port_xlat 534 44 info nat resource The total number of dynamic_ip_port NAT translate called
nat_dynamic_port_release 413 34 info nat resource The total number of dynamic_ip_port NAT release called
dfa_sw 23403 1948 info dfa pktproc The total number of dfa match using software
tcp_case_2 169 14 info tcp pktproc tcp reassembly case 2
tcp_fptcp_fast_retransmit 176 14 info tcp pktproc packets for fptcp fast retransmit
ctd_pkt_queued 1 0 info ctd resource The number of packets queued in ctd
ctd_sml_exit_detector_i 104 8 info ctd pktproc The number of sessions with sml exit in detector i
ctd_sml_set_suspend 2 0 info ctd pktproc The number of decoder suspend requests
ctd_sml_unset_suspend 231 19 info ctd pktproc The number of decoder resume requests
appid_bypass_no_ctd 13 1 info appid pktproc appid bypass due to no ctd
ctd_handle_reset_and_url_exit 1 0 info ctd pktproc Handle reset and url exit
ctd_inner_decode_exceed_flow_limit 1 0 info ctd pktproc Inner decoder exceeds limit. Replaced the oldest inner decoder.
ctd_err_bypass 104 8 info ctd pktproc ctd error bypass
ctd_switch_decoder 7 0 info ctd pktproc ctd switch decoder
ctd_stop_proc 5 0 info ctd pktproc ctd stops to process packet
ctd_run_detector_i 417 34 info ctd pktproc run detector_i
ctd_sml_vm_run_impl_opcodeexit 110 9 info ctd pktproc SML VM opcode exit
ctd_sml_vm_run_impl_immed8000 399 33 info ctd pktproc SML VM immed8000
ctd_sml_vm_run_eval_zip_ratio 1 0 info ctd pktproc SML VM eval zip ratio
ctd_decode_filter_chunk_normal 9 0 info ctd pktproc Packets with normal chunks
ctd_sml_opcode_set_file_type 124 10 info ctd pktproc sml opcode set file type
ctd_token_match_overflow 496 41 info ctd pktproc The token match overflow
ctd_bloom_filter_nohit 358 29 info ctd pktproc The number of no match for virus bloom filter
ctd_fwd_err_tcp_state 378 31 info ctd pktproc Content forward error: TCP in establishment when session went away
aho_too_many_mid_res 1 0 info aho pktproc too many signature middle results within one packet
aho_sw 32343 2693 info aho pktproc The total usage of software for AHO
ctd_appid_reassign 725 60 info ctd pktproc appid was changed
ctd_decoder_reassign 7 0 info ctd pktproc decoder was changed
ctd_url_block 305 25 info ctd pktproc sessions blocked by url filtering
ctd_process 975 81 info ctd pktproc session processed by ctd
ctd_pkt_slowpath 30140 2509 info ctd pktproc Packets processed by slowpath
ctd_pkt_slowpath_suspend_vm 634 52 info ctd pktproc Packets bypassed CTD at VM stage
ctd_pkt_slowpath_suspend_regex 2 0 info ctd pktproc Packets bypassed CTD at regex stage
log_url_cnt 114 9 info log system Number of url logs
log_urlcontent_cnt 16 1 info log system Number of url content logs
log_uid_req_cnt 38 3 info log system Number of uid request logs
log_vulnerability_cnt 5 0 info log system Number of vulnerability logs
log_fileext_cnt 7 0 info log system Number of file block logs
log_traffic_cnt 857 71 info log system Number of traffic logs
log_http_hdr_cnt 70 5 info log system Number of HTTP hdr field logs
ctd_http_range_response 22 1 info ctd system Number of HTTP range responses detected by ctd
log_suppress 10 0 info log system Logs suppressed by log suppression
uid_ipinfo_rcv 5 0 info uid pktproc Number of ip user info received
zip_process_total 16 1 info zip pktproc The total number of zip engine decompress process
zip_process_sw 2182 181 info zip pktproc The total number of zip software decompress process
zip_hw_in 10426 868 info zip pktproc The total input data size to hardware zip engine
zip_hw_out 59828 4981 info zip pktproc The total output data size from hardware zip engine
tcp_fin_q_pkt_alloc 68 5 info tcp pktproc packets allocated by tcp FIN queue
tcp_fin_q_pkt_free 68 5 info tcp pktproc packets freed by tcp FIN queue
tcp_fin_q_hit 18 1 info tcp pktproc packets that trigger FIN queue retransmission
ctd_smb_outoforder_chunks 7 0 info ctd pktproc Number of out-of-order SMB chunks
--------------------------------------------------------------------------------
Total counters shown: 109
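
When triaging dataplane issues like the one above, the counters with severity "drop" are usually the first place to look. As a minimal sketch (not an official PAN-OS tool), the pasted output can be parsed and filtered with a few lines of Python; the column layout (name, value, rate, severity, category, aspect, description) is assumed from the sample above:

```python
import re

# Assumed column layout from the sample "show counter global" output:
# name  value  rate  severity  category  aspect  description
LINE_RE = re.compile(
    r"^(?P<name>\S+)\s+(?P<value>-?\d+)\s+(?P<rate>-?\d+)\s+"
    r"(?P<severity>\S+)\s+(?P<category>\S+)\s+(?P<aspect>\S+)\s+(?P<desc>.+)$"
)

def parse_counters(text):
    """Return a list of counter dicts, skipping header and separator lines."""
    counters = []
    for line in text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            d = m.groupdict()
            d["value"] = int(d["value"])
            d["rate"] = int(d["rate"])
            counters.append(d)
    return counters

def drops(counters):
    """Counters with severity 'drop', highest rate first."""
    return sorted((c for c in counters if c["severity"] == "drop"),
                  key=lambda c: c["rate"], reverse=True)

# A few lines copied from the output above, for illustration:
sample = """\
flow_policy_deny 114 9 drop flow session Session setup: denied by policy
pkt_recv 57075 4752 info packet pktproc Packets received
flow_tcp_non_syn_drop 9 0 drop flow session Packets dropped: non-SYN TCP without session match
"""

for c in drops(parse_counters(sample)):
    print(c["name"], c["rate"], "-", c["desc"])
```

Run against the full paste, this would surface `flow_qos_pkt_enque`-style high-rate info counters as noise and leave the drop counters (e.g. `flow_rcv_dot1q_tag_err`, `flow_policy_deny`) for closer review alongside the tech support file.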
