PA-5050 (8.1.11) 100% Dataplane CPU (DP1)


L2 Linker

Hi everybody,

 

We have two Palo Alto 5050s running in an active-passive configuration, with three separate vsys. During working hours our dataplane CPU exceeds 80% utilization: DP0 shows a load of around 40%, but DP1 maxes out at 100%. We tried disabling all logging and next-gen functionality, but it still maxes out at 100%. Does anybody know what could be going on? The packet descriptor (on-chip) pool is also under heavy load on DP1 while DP0 is quiet...

 

show running resource-monitor

DP dp0:

Resource monitoring sampling data (per second):

CPU load sampling by group:
flow_lookup : 43%
flow_fastpath : 42%
flow_slowpath : 43%
flow_forwarding : 43%
flow_mgmt : 30%
flow_ctrl : 36%
nac_result : 45%
flow_np : 42%
dfa_result : 45%
module_internal : 43%
aho_result : 46%
zip_result : 45%
pktlog_forwarding : 33%
lwm : 0%
flow_host : 44%

 

CPU load (%) during last 60 minutes:
core 0 1 2 3 4 5 6 7
avg max avg max avg max avg max avg max avg max avg max avg max
* * 28 39 35 45 39 47 38 46 43 50 43 49 43 49
* * 33 46 39 51 43 54 43 53 47 57 47 57 46 56
* * 33 54 39 59 43 61 42 60 46 62 46 60 45 58
* * 31 42 37 48 42 52 41 50 46 55 46 55 46 54
* * 34 53 40 57 44 58 43 56 47 58 47 55 46 55
* * 31 50 37 55 42 57 41 57 46 58 45 57 45 55
* * 29 42 36 47 40 52 39 52 44 56 44 56 43 56
* * 33 48 39 54 43 57 43 56 47 60 46 59 46 58
* * 34 46 40 52 44 56 43 55 47 61 47 60 46 59
* * 30 45 36 51 40 53 40 53 44 56 44 55 43 54
* * 27 35 33 42 38 47 38 46 42 51 42 50 41 50
* * 27 46 33 51 37 54 37 53 41 54 41 51 41 50
* * 28 38 34 43 38 45 37 45 42 50 42 50 41 50
* * 30 50 35 54 40 57 39 56 43 58 43 57 42 55
* * 24 44 30 48 34 50 34 48 38 52 38 49 37 47

 

Resource utilization (%) during last 60 minutes:
session (average):
23 22 22 22 21 21 21 21 21 21 21 20 20 20 20
20 21 21 20 20 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 23

session (maximum):
23 23 22 22 22 21 21 21 21 21 21 21 20 20 20
21 21 21 21 21 15 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 23

packet buffer (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

packet buffer (maximum):
0 0 1 1 2 1 0 1 1 1 0 1 1 1 0
0 1 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1

packet descriptor (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

packet descriptor (maximum):
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

packet descriptor (on-chip) (average):
2 3 4 4 5 3 2 4 4 3 2 3 2 4 2
2 3 3 3 1 2 2 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 3

packet descriptor (on-chip) (maximum):
9 9 21 12 40 30 8 17 31 16 7 28 12 21 7
10 22 16 12 1 2 2 2 2 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 32

 

DP dp1:

Resource monitoring sampling data (per second):

CPU load sampling by group:
flow_lookup : 100%
flow_fastpath : 100%
flow_slowpath : 100%
flow_forwarding : 100%
flow_mgmt : 100%
flow_ctrl : 100%
nac_result : 100%
flow_np : 100%
dfa_result : 100%
module_internal : 100%
aho_result : 100%
zip_result : 100%
pktlog_forwarding : 100%
lwm : 0%
flow_host : 100%

 

Resource monitoring sampling data (per minute):

CPU load (%) during last 60 minutes:
core 0 1 2 3 4 5 6 7
avg max avg max avg max avg max avg max avg max avg max avg max
* * 94 100 95 100 96 100 95 100 96 100 96 100 96 100
* * 99 100 99 100 99 100 99 100 99 100 99 100 99 100
* * 99 100 99 100 99 100 99 100 99 100 99 100 99 100
* * 94 100 95 100 96 100 96 100 96 100 96 100 96 100
* * 98 100 99 100 99 100 99 100 99 100 99 100 99 100
* * 96 100 97 100 97 100 97 100 97 100 97 100 97 100
* * 92 100 93 100 94 100 94 100 95 100 95 100 94 100
* * 97 100 98 100 98 100 98 100 98 100 98 100 98 100
* * 98 100 98 100 99 100 98 100 99 100 99 100 99 100
* * 94 100 95 100 96 100 96 100 96 100 96 100 96 100
* * 89 100 91 100 92 100 92 100 93 100 93 100 93 100

 

Resource utilization (%) during last 60 minutes:
session (average):
13 13 13 13 13 13 13 13 12 12 12 12 12 12 12
12 12 12 12 11 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 13

session (maximum):
13 13 13 13 13 13 13 13 13 12 12 12 12 12 12
12 12 12 12 12 7 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 13

packet buffer (average):
0 1 1 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

packet buffer (maximum):
1 1 1 1 2 1 1 2 2 1 1 1 0 1 1
1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1

packet descriptor (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (maximum):
1 1 1 0 1 1 1 1 1 1 1 0 0 0 1
0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1

packet descriptor (on-chip) (average):
26 60 54 30 44 37 26 46 53 28 17 13 10 11 17
19 40 31 8 1 2 2 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 35

packet descriptor (on-chip) (maximum):
87 87 86 80 86 76 85 91 86 86 72 60 31 57 89
86 85 90 46 1 2 2 2 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 89

 

-------------------

 

show session info

target-dp: *.dp0
--------------------------------------------------------------------------------
Number of sessions supported: 1600000
Number of allocated sessions: 253251
Number of active TCP sessions: 156978
Number of active UDP sessions: 94896
Number of active ICMP sessions: 30
Number of active GTPc sessions: 0
Number of active GTPu sessions: 0
Number of pending GTPu sessions: 0
Number of active BCAST sessions: 0
Number of active MCAST sessions: 0
Number of active predict sessions: 58
Number of active SCTP sessions: 0
Number of active SCTP associations: 0
Session table utilization: 15%
Number of sessions created since bootup: 3041258
Packet rate: 57222/s
Throughput: 281820 kbps
New connection establish rate: 2240 cps
--------------------------------------------------------------------------------
Session timeout
TCP default timeout: 3600 secs
TCP session timeout before SYN-ACK received: 5 secs
TCP session timeout before 3-way handshaking: 10 secs
TCP half-closed session timeout: 120 secs
TCP session timeout in TIME_WAIT: 15 secs
TCP session delayed ack timeout: 250 millisecs
TCP session timeout for unverified RST: 30 secs
UDP default timeout: 30 secs
ICMP default timeout: 6 secs
SCTP default timeout: 3600 secs
SCTP timeout before INIT-ACK received: 5 secs
SCTP timeout before COOKIE received: 60 secs
SCTP timeout before SHUTDOWN received: 30 secs
other IP default timeout: 30 secs
Captive Portal session timeout: 30 secs
Session timeout in discard state:
TCP: 90 secs, UDP: 60 secs, SCTP: 60 secs, other IP protocols: 60 secs
--------------------------------------------------------------------------------
Session accelerated aging: True
Accelerated aging threshold: 80% of utilization
Scaling factor: 2 X
--------------------------------------------------------------------------------
Session setup
TCP - reject non-SYN first packet: True
Hardware session offloading: True
Hardware UDP session offloading: True
IPv6 firewalling: True
Strict TCP/IP checksum: True
Strict TCP RST sequence: True
Reject TCP small initial window: False
Reject TCP SYN with different seq/options: True
ICMP Unreachable Packet Rate: 200 pps
--------------------------------------------------------------------------------
Application trickling scan parameters:
Timeout to determine application trickling: 10 secs
Resource utilization threshold to start scan: 80%
Scan scaling factor over regular aging: 8
--------------------------------------------------------------------------------
Session behavior when resource limit is reached: drop
--------------------------------------------------------------------------------
Pcap token bucket rate : 10485760
--------------------------------------------------------------------------------
Max pending queued mcast packets per session : 0
--------------------------------------------------------------------------------
Processing CPU: random
Broadcast first packet: no
--------------------------------------------------------------------------------

DP dp0:


Hardware Pools
[ 0] Packet Buffers : 57097/57344 0x80000000e69c0000
[ 1] Work Queue Entries : 228962/229376 0x80000000e4dc0000
[ 2] Output Buffers : 21750/21760 0x80000000ed9c0000
[ 3] DFA Result : 4096/4096 0x800000041f200000
[ 4] Timer Buffers : 4093/4096 0x800000041f600000
[ 5] LWM Pool : 1024/1024 0x80000000fff00000
[ 6] ZIP Commands : 1023/1024 0x800000041f000000
[ 7] Dma Cmd Buffers : 1019/1024 0x80000000e4cc0000

Software Pools
[ 0] Shared Pool 24 ( 24): 1052413/1060000 0x8000000020001080
[ 1] Shared Pool 32 ( 32): 813085/830000 0x8000000021c4f300
[ 2] Shared Pool 40 ( 40): 204737/205000 0x80000000238ce300
[ 3] Shared Pool 192 ( 192): 1703057/1800000 0x8000000024168680
[ 4] Shared Pool 256 ( 256): 104744/105000 0x80000000391dd480
[ 5] software packet buffer 0 ( 512): 25875/26000 0x800000003ac00700
[ 6] software packet buffer 1 ( 1024): 25935/26000 0x800000003b8cbe80
[ 7] software packet buffer 2 ( 2048): 25992/26000 0x800000003d249600
[ 8] software packet buffer 3 (33280): 14999/15000 0x800000004052ad80
[ 9] software packet buffer 4 (66048): 256/256 0x800000005e14c900
[10] ZIP Results ( 184): 1024/1024 0x800000000ff60e00
[11] CTD AV Block ( 1024): 32/32 0x800000007b7a7380
[12] Regex Results (11640): 4095/4096 0x800000007baf1180 36
[13] SSH Handshake State ( 6512): 128/128 0x800000007fb93300
[14] SSH State ( 3200): 1024/1024 0x800000007fc5ee00
[15] TCP host connections ( 184): 15/16 0x800000007ff80380

Shared Pools Statistics

Current local reuse cache counts for each pool:
core 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
24 0 0 0 0 0 317 384 366 370 338 377 1 0 0 0 0
32 0 0 0 0 0 370 382 367 305 346 381 4 0 0 0 0
40 0 28 32 1 2 31 26 31 30 28 21 32 0 0 0 0
192 0 0 53 0 0 767 759 747 740 716 756 66 0 0 0 0
256 0 0 21 32 20 32 26 26 24 32 31 12 0 0 0 0
Local Reuse Pools Shared Pool 24 Shared Pool 32 Shared Pool 40 Shared Pool 192 Shared Pool 256
Cached / Max 2153/6144 2155/6144 262/512 4604/12288 256/512
Cached + Free / Total 1054566/1060000 815239/830000 204999/205000 1707661/1800000 105000/105000

User Quota Threshold Min.Alloc Cur.Alloc Max.Alloc Total-Alloc Fail-Thresh Fail-Nomem Local-Reuse Data(Pool)-SZ
fptcp_seg 49152 0 0 0 72 14640 0 0 12776 16 (24)
inner_decode 6250 0 0 17 39 7271 0 0 7242 24 (24)
detector_threat 262144 0 0 4465 5828 2933325 0 0 2932108 24 (24)
spyware_state 80000 0 0 342 462 114401 0 0 87827 24 (24)
vm_vcheck 205000 0 0 1 490 6211633 0 0 4370406 24 (24)
ctd_patmatch 400000 0 0 609 750 218866 0 0 179965 24 (24)
proxy_l2info 19040 0 0 0 0 0 0 0 0 24 (24)
proxy_pktmr 9520 0 0 0 0 0 0 0 0 16 (24)
vm_field 820000 0 0 14755 15854 36633460 0 0 25883098 32 (32)
prl_cookie 9520 0 0 0 0 0 0 0 0 32 (32)
decode_filter 205000 0 0 1 75 12757 0 0 10013 40 (40)
hash_decode 16384 0 0 0 0 0 0 0 0 104 (192)
appid_session 400000 0 0 61 199 595465 0 0 459545 104 (192)
appid_dfa_state 400000 0 0 122 213 19959 0 0 16834 184 (192)
cpat_state 100000 1440000 50000 0 0 0 0 0 0 184 (192)
sml_regfile 800000 0 0 56340 56412 1874652 0 0 1471390 192 (192)
ctd_flow 400000 0 0 28192 28218 943115 0 0 856084 192 (192)
ctd_flow_state 400000 0 0 7595 8227 561818 0 0 489884 192 (192)
ctd_dlp_flow 100000 1260000 50000 31 66 3425 0 0 3423 192 (192)
proxy_flow 19040 0 0 0 0 0 0 0 0 192 (192)
prl_st 9520 0 0 0 0 0 0 0 0 192 (192)
ssl_hs_st 9520 0 0 0 0 0 0 0 0 184 (192)
ssl_key_block 19040 0 0 0 0 0 0 0 0 192 (192)
ssl_st 19040 0 0 0 0 0 0 0 0 192 (192)
ssl_hs_mac 28560 0 0 0 0 0 0 0 0 variable
timer_chunk 95232 0 0 0 874 2385 0 0 1358 256 (256)
ctd_misc_wait 0 0 0 0 0 0 0 0 0 variable

Memory Pool Size 186368KB, start address 0x8000000000000000
alloc size 5491532, max 5492204
fixed buf allocator, size 190837400
sz allocator, page size 32768, max alloc 4096 quant 64
pool 0 element size 64 avail list 5 full list 13
pool 1 element size 128 avail list 43 full list 87
pool 2 element size 192 avail list 5 full list 21
pool 3 element size 256 avail list 1 full list 0
pool 4 element size 320 avail list 0 full list 11
pool 5 element size 384 avail list 3 full list 1
pool 6 element size 448 avail list 1 full list 14
pool 7 element size 512 avail list 2 full list 7
pool 8 element size 576 avail list 3 full list 1
pool 9 element size 640 avail list 1 full list 2
pool 10 element size 704 avail list 1 full list 1
pool 11 element size 768 avail list 1 full list 1
pool 12 element size 832 avail list 1 full list 0
pool 13 element size 896 avail list 1 full list 0
pool 14 element size 960 avail list 1 full list 0
pool 15 element size 1024 avail list 1 full list 0
pool 16 element size 1088 avail list 1 full list 0
pool 17 element size 1152 avail list 1 full list 0
pool 18 element size 1216 avail list 1 full list 0
pool 26 element size 1728 avail list 1 full list 0
pool 29 element size 1920 avail list 1 full list 0
pool 30 element size 1984 avail list 1 full list 0
pool 32 element size 2112 avail list 1 full list 0
parent allocator
alloc size 7760488, max 7760488
malloc allocator
current usage 7766016 max. usage 7766016, free chunks 5586, total chunks 5823

Mem-Pool-Type MaxSz(KB) Threshold MinSz(KB) CurSz(B) Cur.Alloc Total-Alloc Fail-Thresh Fail-Nomem Local-Reuse(cache)
ctd_dlp_buf 6201 93184 3100 0 0 0 0 0 0 (0)
proxy 69474 0 0 61476 848 976 0 0 0 (0)
clientless_sslvpn 67998 0 0 0 0 0 0 0 0 (0)
l7_data 3032 0 0 0 0 0 0 0 0 (0)
l7_misc 91168 130457 45584 2750768 36038 934094 0 0 51870 (11)
cfg_name_cache 553 130457 276 0 0 0 0 0 0 (0)
scantracker 480 111820 240 491416 3233 47939 22353 0 22353 (0)
appinfo 3672 130457 1836 251472 1014 1014 0 0 0 (0)
parent_info 2565 130457 1282 0 0 0 0 0 0 (0)
user 33500 0 0 1418672 6726 6727 0 0 0 (0)
userpolicy 16000 0 0 115752 2067 6532 0 0 0 (0)
uid_map 500 0 0 0 0 0 0 0 0 (0)
dns 2048 130457 1024 0 0 0 0 0 0 (0)
credential 8192 111820 4096 0 0 0 0 0 0 (0)

Cache-Type MAX-Entries Cur-Entries Cur.SZ(B) Insert-Failure Mem-Pool-Type
ssl_server_cert 16384 9582 766560 0 l7_misc
ssl_cert_cn 25000 11408 942608 0 l7_misc
ssl_cert_cache 1024 0 0 0 proxy
ssl_sess_cache 6000 0 0 0 proxy
proxy_exclude 1024 8 1920 0 proxy
proxy_notify 8192 0 0 0 proxy
username_cache 4096 0 0 0 cfg_name_cache
threatname_cache 4096 0 0 0 cfg_name_cache
hipname_cache 256 0 0 0 cfg_name_cache
ctd_block_answer 16384 0 0 0 l7_misc
ctd_cp 16384 0 0 0 l7_misc
ctd_driveby 4096 0 0 0 l7_misc
ctd_pcap 1024 0 0 0 l7_misc
ctd_sml 8192 8192 393216 0 l7_misc
ctd_url 400000 2194 163504 0 l7_misc
ctd_file_action 100000 4564 477824 0 l7_misc
app_tracker 65536 0 0 0 l7_misc
threat_tracker 4096 0 0 0 l7_misc
scan_tracker 4096 3233 491416 0 scantracker
app_info 17408 1014 251472 0 appinfo
parent_info 43776 0 0 0 parent_info
dns 9520 0 0 0 proxy
dns_v4 10000 0 0 0 dns
dns_v6 10000 0 0 0 dns
dns_id 1024 0 0 0 dns
tcp_mcb 10304 98 7056 0 l7_misc
sslvpn_ck_cache 1000 0 0 0 clientless_sslvpn
user_cache 128000 6726 1418672 0 user
userpolicy_cache 16000 2067 115752 0 userpolicy
uid_map_cache 128000 0 0 0 uid_map
split_tunnel 10000 0 0 0 l7_misc


DP dp1:


Hardware Pools
[ 0] Packet Buffers : 258484/262144 0x80000001d8fb0500
[ 1] Work Queue Entries : 487568/491520 0x80000001d53b0500
[ 2] Output Buffers : 98550/98560 0x80000001f8fbed80
[ 3] DFA Result : 4096/4096 0x800000041f200000
[ 4] Timer Buffers : 4093/4096 0x800000041f600000
[ 5] LWM Pool : 1024/1024 0x800000041fa00000
[ 6] ZIP Commands : 1023/1024 0x800000041f000000
[ 7] Dma Cmd Buffers : 1019/1024 0x80000001d52ae880

Software Pools
[ 0] Shared Pool 24 ( 24): 3614322/3630000 0x8000000080001080
[ 1] Shared Pool 32 ( 32): 4164866/4200000 0x80000000860ef900
[ 2] Shared Pool 40 ( 40): 1049711/1050000 0x800000008f121b00
[ 3] Shared Pool 192 ( 192): 7427684/7605000 0x8000000091d31100
[ 4] Shared Pool 256 ( 256): 119683/120000 0x80000000eaab8480
[ 5] software packet buffer 0 ( 512): 48977/49152 0x8000000020000700
[ 6] software packet buffer 1 ( 1024): 49035/49152 0x8000000021830800
[ 7] software packet buffer 2 ( 2048): 49143/49152 0x8000000024860900
[ 8] software packet buffer 3 (33280): 32764/32768 0x800000002a890a00
[ 9] software packet buffer 4 (66048): 432/432 0x800000006b8b0b00
[10] ZIP Results ( 184): 1023/1024 0x800000041fc2c680
[11] CTD AV Block ( 1024): 32/32 0x80000000ec987380
[12] Regex Results (11640): 4089/4096 0x80000000eccd1100 42
[13] SSH Handshake State ( 6512): 128/128 0x800000000fbe6a80
[14] SSH State ( 3200): 1024/1024 0x800000000fcb2580
[15] TCP host connections ( 184): 15/16 0x800000000ffd3b00

Shared Pools Statistics

Current local reuse cache counts for each pool:
core 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
24 0 0 0 0 0 351 369 365 288 360 382 0 0 0 0 0
32 0 0 0 0 0 384 384 353 383 352 363 0 0 0 0 0
40 0 32 32 0 0 29 32 31 31 32 30 32 0 0 0 0
192 0 732 0 0 40 708 392 689 0 44 634 11 0 0 0 0
256 0 0 32 32 32 32 29 32 32 32 32 32 0 0 0 0
Local Reuse Pools Shared Pool 24 Shared Pool 32 Shared Pool 40 Shared Pool 192 Shared Pool 256
Cached / Max 2115/6144 2219/6144 281/512 3250/12288 317/512
Cached + Free / Total 3616431/3630000 4167073/4200000 1049992/1050000 7430933/7605000 120000/120000

User Quota Threshold Min.Alloc Cur.Alloc Max.Alloc Total-Alloc Fail-Thresh Fail-Nomem Local-Reuse Data(Pool)-SZ
fptcp_seg 131072 0 0 0 73 33447 0 0 24990 16 (24)
inner_decode 18750 0 0 31 37 9935 0 0 9904 24 (24)
detector_threat 1048576 0 0 11821 14018 5676551 0 0 5674363 24 (24)
spyware_state 240000 0 0 578 995 164304 0 0 122112 24 (24)
vm_vcheck 1048576 0 0 11 2310 11811910 0 0 8146043 24 (24)
ctd_patmatch 1200002 0 0 1127 1288 399611 0 0 312043 24 (24)
proxy_l2info 71424 0 0 0 0 0 0 0 0 24 (24)
proxy_pktmr 35712 0 0 0 0 0 0 0 0 16 (24)
vm_field 4194304 0 0 32894 35121 71589225 0 0 50877477 32 (32)
prl_cookie 35712 0 0 0 0 0 0 0 0 32 (32)
decode_filter 1048576 0 0 8 64 45065 0 0 33274 40 (40)
hash_decode 16384 0 0 0 0 0 0 0 0 104 (192)
appid_session 1200002 0 0 89 243 1175178 0 0 826629 104 (192)
appid_dfa_state 1200002 0 0 160 274 28406 0 0 22999 184 (192)
cpat_state 300000 6084000 150000 0 0 0 0 0 0 184 (192)
sml_regfile 2400004 0 0 104568 106766 3717150 0 0 2628941 192 (192)
ctd_flow 1200002 0 0 52284 53370 1871480 0 0 1637151 192 (192)
ctd_flow_state 1200002 0 0 16898 17946 1131006 0 0 868875 192 (192)
ctd_dlp_flow 300000 5323500 150000 71 129 11875 0 0 11855 192 (192)
proxy_flow 71424 0 0 0 0 0 0 0 0 192 (192)
prl_st 35712 0 0 0 0 0 0 0 0 192 (192)
ssl_hs_st 35712 0 0 0 0 0 0 0 0 184 (192)
ssl_key_block 71424 0 0 0 0 0 0 0 0 192 (192)
ssl_st 71424 0 0 0 0 0 0 0 0 192 (192)
ssl_hs_mac 107136 0 0 0 0 0 0 0 0 variable
timer_chunk 95232 0 0 0 662 6737 0 0 4899 256 (256)
ctd_misc_wait 0 0 0 0 0 0 0 0 0 variable

Memory Pool Size 456949KB, start address 0x8000000000000000
alloc size 5714364, max 5714572
fixed buf allocator, size 467913144
sz allocator, page size 32768, max alloc 4096 quant 64
pool 0 element size 64 avail list 9 full list 9
pool 1 element size 128 avail list 15 full list 145
pool 2 element size 192 avail list 9 full list 1
pool 3 element size 256 avail list 2 full list 0
pool 4 element size 320 avail list 3 full list 9
pool 5 element size 384 avail list 2 full list 2
pool 6 element size 448 avail list 3 full list 12
pool 7 element size 512 avail list 3 full list 6
pool 8 element size 576 avail list 2 full list 2
pool 9 element size 640 avail list 2 full list 1
pool 10 element size 704 avail list 1 full list 1
pool 11 element size 768 avail list 1 full list 1
pool 12 element size 832 avail list 1 full list 0
pool 13 element size 896 avail list 1 full list 0
pool 14 element size 960 avail list 1 full list 0
pool 15 element size 1024 avail list 1 full list 0
pool 16 element size 1088 avail list 1 full list 0
pool 17 element size 1152 avail list 1 full list 0
pool 18 element size 1216 avail list 1 full list 0
pool 26 element size 1728 avail list 1 full list 0
pool 29 element size 1920 avail list 1 full list 0
pool 30 element size 1984 avail list 1 full list 0
pool 32 element size 2112 avail list 1 full list 0
parent allocator
alloc size 8284776, max 8284776
malloc allocator
current usage 8290304 max. usage 8290304, free chunks 14026, total chunks 14279

Mem-Pool-Type MaxSz(KB) Threshold MinSz(KB) CurSz(B) Cur.Alloc Total-Alloc Fail-Thresh Fail-Nomem Local-Reuse(cache)
ctd_dlp_buf 18603 228474 9301 0 0 0 0 0 0 (0)
proxy 180907 0 0 61492 848 976 0 0 0 (0)
clientless_sslvpn 67998 0 0 0 0 0 0 0 0 (0)
l7_data 3032 0 0 0 0 0 0 0 0 (0)
l7_misc 216368 319864 108184 3267456 41478 1024019 0 0 22593 (10)
cfg_name_cache 553 319864 276 0 0 0 0 0 0 (0)
scantracker 480 274169 240 0 0 0 0 0 0 (0)
appinfo 3672 319864 1836 251472 1014 1014 0 0 0 (0)
parent_info 2565 319864 1282 0 0 0 0 0 0 (0)
user 33500 0 0 1418672 6726 6727 0 0 0 (0)
userpolicy 16000 0 0 273952 4892 14877 0 0 0 (0)
uid_map 500 0 0 0 0 0 0 0 0 (0)
dns 2048 319864 1024 0 0 0 0 0 0 (0)
credential 8192 274169 4096 0 0 0 0 0 0 (0)

Cache-Type MAX-Entries Cur-Entries Cur.SZ(B) Insert-Failure Mem-Pool-Type
ssl_server_cert 16384 9582 766560 0 l7_misc
ssl_cert_cn 25000 11408 942608 0 l7_misc
ssl_cert_cache 1024 0 0 0 proxy
ssl_sess_cache 6000 0 0 0 proxy
proxy_exclude 1024 8 1920 0 proxy
proxy_notify 8192 0 0 0 proxy
username_cache 4096 0 0 0 cfg_name_cache
threatname_cache 4096 0 0 0 cfg_name_cache
hipname_cache 256 0 0 0 cfg_name_cache
ctd_block_answer 16384 0 0 0 l7_misc
ctd_cp 16384 0 0 0 l7_misc
ctd_driveby 4096 0 0 0 l7_misc
ctd_pcap 1024 0 0 0 l7_misc
ctd_sml 8192 8192 393216 0 l7_misc
ctd_url 1572864 3533 295696 0 l7_misc
ctd_file_action 393216 8306 836472 0 l7_misc
app_tracker 65536 0 0 0 l7_misc
threat_tracker 4096 0 0 0 l7_misc
scan_tracker 4096 0 0 0 scantracker
app_info 17408 1014 251472 0 appinfo
parent_info 43776 0 0 0 parent_info
dns 35712 0 0 0 proxy
dns_v4 10000 0 0 0 dns
dns_v6 10000 0 0 0 dns
dns_id 1024 0 0 0 dns
tcp_mcb 28736 457 32904 0 l7_misc
sslvpn_ck_cache 1000 0 0 0 clientless_sslvpn
user_cache 128000 6726 1418672 0 user
userpolicy_cache 16000 4892 273952 0 userpolicy
uid_map_cache 128000 0 0 0 uid_map
split_tunnel 10000 0 0 0 l7_misc

 

Global counters:
Elapsed time since last sampling: 15.837 seconds

name value rate severity category aspect description
--------------------------------------------------------------------------------
pkt_recv 3974127 250938 info packet pktproc Packets received
pkt_recv_zero 2660025 167962 info packet pktproc Packets received from QoS 0
pkt_sent 5549880 350437 info packet pktproc Packets transmitted
pkt_alloc 3685203 232695 info packet resource Packets allocated
pkt_swbuf_clone 2 0 info packet pktproc Packets replicated using software buffer
pkt_stp_rcv 216 13 info packet pktproc STP BPDU packets received
pkt_pvst_rcv 192 12 info packet pktproc PVST+ BPDU packets received
session_allocated 84507 5335 info session resource Sessions allocated
session_freed 82690 5220 info session resource Sessions freed
session_installed 59889 3781 info session resource Sessions installed
session_predict_dst -8 0 info session resource Active dst predict sessions
session_discard 82 4 info session resource Session set to discard by security policy check
session_install_error 77 4 warn session pktproc Sessions installation error
session_install_error_s2c 183 11 warn session pktproc Sessions installation error s2c
session_inter_cpu_install_error 106 6 warn session pktproc Inter-CPU Session installation error
session_unverified_rst 2915 183 info session pktproc Session aging timer modified by unverified RST
session_pkt_in_closed_state 5 0 info session pktproc Session is closing or closed and still receive TCP pkt
session_reuse_tcp_session 12 0 info session pktproc Session key is reuse for another TCP session
session_inter_cpu_sync_err 6214 392 warn session resource Inter-DP packet does not match a session
session_inter_cpu_refresh_msg_rcv 184018 11619 info session pktproc Inter-DP session refresh msg recevied
session_inter_cpu_refresh_msg_snd 184018 11619 info session pktproc Inter-DP session refresh msg sent
session_inter_cpu_teardown_msg_rcv 23364 1474 info session pktproc Inter-DP session teardown msg recevied
session_inter_cpu_teardown_msg_snd 23364 1474 info session pktproc Inter-DP session teardown msg sent
session_hash_insert_duplicate 183 11 warn session pktproc Session setup: hash insert failure due to duplicate entry
session_flow_hash_not_found_teardown 9624 607 warn session pktproc Session setup: flow hash not found during session teardown
session_flow_refresh_ss_cookie_mismatch 1 0 error session pktproc Session refresh: session sync cookie mismatch during session refresh
session_query_flow_not_found 2 0 error session pktproc Session status query: flow no found
session_servobj_timeout_override 16092 1015 info session pktproc session timeout overridden by service object
session_from_ha_prep_reuse 53 2 info session pktproc Prepare for reuse of from-ha session on getting RST
flow_rcv_err 2 0 drop flow parse Packets dropped: flow stage receive error
flow_rcv_dot1q_tag_err 172 10 drop flow parse Packets dropped: 802.1q tag not configured
flow_no_interface 172 10 drop flow parse Packets dropped: invalid interface
flow_np_pkt_rcv 3266719 206271 info flow offload Packets received from offload processor
flow_np_pkt_xmt 3564533 225076 info flow offload Packets transmitted to offload processor
flow_policy_deny 1537 96 drop flow session Session setup: denied by policy
flow_tcp_non_syn 973 61 info flow session Non-SYN TCP packets without session match
flow_scan_skip 5040 318 info flow session Session setup: scan detection skipped
flow_tcp_non_syn_drop 973 61 drop flow session Packets dropped: non-SYN TCP without session match
flow_fwd_l3_mcast_drop 88 5 drop flow forward Packets dropped: no route for IP multicast
flow_fwd_l3_ttl_zero 428 26 drop flow forward Packets dropped: IP TTL reaches zero
flow_fwd_l3_noarp 1 0 drop flow forward Packets dropped: no ARP
flow_fwd_notopology 1 0 drop flow forward Packets dropped: no forwarding configured on interface
flow_fwd_mtu_exceeded 23731 1497 info flow forward Packets lengths exceeded MTU
flow_fwd_drop_noxmit 996 62 info flow forward Packet dropped at forwarding: noxmit
flow_parse_l4_cksm 1 0 drop flow parse Packets dropped: TCP/UDP checksum failure
flow_parse_l4_tcpfinrst 1 0 drop flow parse Packets dropped: invalid TCP flags (FIN+RST+*)
flow_parse_ipfrag_df_on 4 0 info flow parse IP fragments with DF/Reserve bit on
flow_dos_syncookie_ack_err 47 2 info flow dos TCP SYN cookies: Invalid ACKs received, aggregate profile/zone
flow_dos_pf_noreplyttl 14 0 drop flow dos Packets dropped: Zone protection option 'suppress-icmp-timeexceeded'
flow_dos_pf_strictip 5 0 drop flow dos Packets dropped: Zone protection option 'strict-ip-check'
flow_ipfrag_recv 78234 4939 info flow ipfrag IP fragments received
flow_ipfrag_free 39166 2472 info flow ipfrag IP fragments freed after defragmentation
flow_ipfrag_merge 39094 2468 info flow ipfrag IP defragmentation completed
flow_ipfrag_swbuf 4 0 info flow ipfrag Software buffers allocated for reassembled IP packet
flow_ipfrag_frag 78184 4936 info flow ipfrag IP fragments transmitted
flow_ipfrag_pkt -24 -1 info flow ipfrag packets held by IP fragmentation
flow_ipfrag_restore 39092 2468 info flow ipfrag IP fragment restore packet
flow_ipfrag_del 39093 2468 info flow ipfrag IP fragment delete entry
flow_ipfrag_result_alloc 39093 2468 info flow ipfrag IP fragment result allocated
flow_ipfrag_result_free 39092 2468 info flow ipfrag IP fragment result freed
flow_ipfrag_entry_alloc 39141 2471 info flow ipfrag IP fragment entry allocated
flow_ipfrag_entry_free 39165 2472 info flow ipfrag IP fragment entry freed
flow_predict_installed 11 0 info flow pktproc Predict sessions installed
flow_action_close 76 4 drop flow pktproc TCP sessions closed via injecting RST
flow_arp_pkt_rcv 112 7 info flow arp ARP packets received
flow_arp_pkt_xmt 38 2 info flow arp ARP packets transmitted
flow_arp_pkt_replied 5 0 info flow arp ARP requests replied
flow_arp_rcv_gratuitous 31 1 info flow arp Gratuitous ARP packets received
flow_arp_resolve_xmt 2 0 info flow arp ARP resolution packets transmitted
flow_host_pkt_rcv 301 18 info flow mgmt Packets received from control plane
flow_host_pkt_xmt 7492 472 info flow mgmt Packets transmitted to control plane
flow_host_service_allow 2 0 info flow mgmt Device management session allowed
flow_host_service_unknown 2 0 drop flow mgmt Session discarded: unknown application to control plane
flow_health_monitor_rcv 276 16 info flow mgmt Health monitoring packet received
flow_health_monitor_xmt 276 16 info flow mgmt Health monitoring packet transmitted
flow_tunnel_activate 20 1 info flow tunnel Number of packets that triggerred tunnel activation
flow_tunnel_encap_resolve 20 1 info flow tunnel tunnel structure lookup resolve
flow_session_setup_msg_recv 48604 3069 info flow pktproc Flow msg: session setup messages received
flow_session_refresh_msg_recv 184015 11619 info flow pktproc Flow msg: session refresh messages received
flow_session_teardown_msg_recv 23364 1474 info flow pktproc Flow msg: session remove messages received
flow_predict_add_msg_recv 9 0 info flow pktproc Flow msg: session predict messages received
flow_predict_add_reply_msg_recv 9 0 info flow pktproc Flow msg: session predict reply messages received
flow_predict_request_ack_msg_recv 9 0 info flow pktproc Flow msg: parent session require ack for predict messages received
flow_appidcache_update_msg_recv 132 8 info flow pktproc Flow msg: appid cache update messages received
flow_smlcache_update_msg_recv 19885 1255 info flow pktproc Flow msg: sml cache update messages received
flow_url_category_msg_recv 59 3 info flow pktproc Flow msg: url category messages received
flow_appid_detect_msg_recv 17781 1122 info flow pktproc Flow msg: appid detect update messages received
flow_session_status_query_msg_sent 5 0 info flow pktproc Flow msg: session status query messages sent
flow_session_status_query_msg_recv 5 0 info flow pktproc Flow msg: session status query messages received
flow_session_status_nack_msg_sent 2 0 info flow pktproc Flow msg: session status nack messages sent
flow_session_status_nack_msg_recv 2 0 info flow pktproc Flow msg: session status nack messages received
flow_msg_pkt_combined_sent 24268 1532 info flow pktproc msg and pkt combined sent
flow_msg_pkt_combined_rcv 24301 1534 info flow pktproc msg and pkt combined received
flow_msg_proc_err 6 0 error flow pktproc Flow msg: Fail to process msg received
flow_inter_cpu_pkt_fwd 46117 2911 info flow pktproc Pkt forwarded to other CPU
flow_inter_cpu_msg_fwd 245269 15486 info flow pktproc Msg forwarded to other CPU
flow_sfpwq_enque 2 0 info flow offload Session first packet wait queue: packets enqueued
flow_sfpwq_age 2 0 info flow offload Session first packet wait queue: packets deleted by ager
flow_sfpwq_enque_no_sess_pkt 2 0 info flow offload Session first packet wait queue: packets w/o session being found
flow_rcv_inter_cpu 61523 3884 info flow offload inter-cpu packets received
flow_msg_rcv_inter_cpu 269571 17021 info flow offload inter-cpu message received
flow_netmsg_doorbell_send 36 2 info flow pktproc netmsg doorbell sent
flow_netmsg_enqueue 36 2 info flow pktproc netmsg enqueue succeed
flow_netmsg_dequeue 36 2 info flow pktproc netmsg dequeue succeed
flow_netmsg_dequeue_op 36 2 info flow pktproc netmsg dequeue operation
mprelay_netmsg_SUCCESS 36 2 info mprelay pktproc netmsg enqueue succeed
flow_fpga_rcv_igr_IPCHKSUMERR 12 0 info flow offload FPGA IGR Exception: IPCHKSUMERR
flow_fpga_rcv_igr_IPSIPERR 86 5 info flow offload FPGA IGR Exception: IPSIPERR
flow_fpga_rcv_igr_FRAGERR 47507 2999 info flow offload FPGA IGR Exception: FRAGERR
flow_fpga_rcv_igr_INTFNOTFOUND 87 5 info flow offload FPGA IGR Exception: INTFNOTFOUND
flow_fpga_rcv_egr_TTLIS1 767 47 info flow offload FPGA EGR Exception: TTLIS1
flow_fpga_rcv_pkt_err 767 47 drop flow offload Packet dropped: offload processor parse error
flow_fpga_rcv_err 769 47 drop flow offload Packets dropped: receive error from offload processor
flow_fpga_flow_insert 59327 3745 info flow offload fpga flow insert transactions
flow_fpga_flow_delete 34262 2162 info flow offload fpga flow delete transactions
flow_fpga_flow_update 1891782 119452 info flow offload fpga flow update transaction
flow_fpga_rcv_stats 358983 22666 info flow offload fpga session refresh/stats message received
flow_fpga_rcv_sess_remove 2133 134 info flow offload fpga session remove message received
flow_fpga_rcv_slowpath 97186 6136 info flow offload fpga packets slowpath received
flow_fpga_rcv_fastpath 2537799 160244 info flow offload fpga packets for fastpath received
flow_fpga_rcv_ha 8 0 info flow offload fpga packets for ha received
flow_fpga_delete_ack_c2s_dp0 34080 2151 info flow offload tiger delete ack received for c2s flow on dp0
flow_fpga_delete_ack_s2c_dp0 34080 2151 info flow offload tiger delete ack received for s2c flow on dp0
flow_tcp_cksm_sw_validation 4 0 info flow pktproc Packets for which TCP checksum validation was done in software
appid_ident_by_icmp 133 8 info appid pktproc Application identified by icmp type
appid_ident_by_simple_sig 10744 677 info appid pktproc Application identified by simple signature
appid_post_pkt_queued 75 3 info appid resource The total trailing packets queued in AIE
appid_ident_by_sport_first 3 0 info appid pktproc Application identified by L4 sport first
appid_ident_by_dport_first 15561 981 info appid pktproc Application identified by L4 dport first
appid_ident_by_dport 7 0 info appid pktproc Application identified by L4 dport
appid_proc 21404 1351 info appid pktproc The number of packets processed by Application identification
appid_unknown_max_pkts 1 0 info appid pktproc The number of unknown applications caused by max. packets reached
appid_unknown_udp 54 2 info appid pktproc The number of unknown UDP applications after app engine
appid_unknown_fini 141 8 info appid pktproc The number of unknown applications
appid_unknown_fini_empty 2024 127 info appid pktproc The number of unknown applications because of no data
nat_dynamic_port_xlat 12501 789 info nat resource The total number of dynamic_ip_port NAT translate called
nat_dynamic_port_release 12904 814 info nat resource The total number of dynamic_ip_port NAT release called
dfa_sw 24858 1568 info dfa pktproc The total number of dfa match using software
dfa_sw_min_threshold 24858 1568 info dfa offload Usage of software dfa caused by packet length min threshold
dfa_fpga 353222 22302 info dfa offload The total requests to FPGA for dfa
dfa_fpga_data 220173548 13902477 info dfa offload The total data size to FPGA for dfa
dfa_session_change 9 0 info dfa offload when getting dfa result from offload, session was changed
tcp_drop_packet 535 33 warn tcp pktproc packets dropped because of failure in tcp reassembly
tcp_pkt_queued -116 -7 info tcp resource The number of out of order packets queued in tcp
tcp_out_of_sync 46 2 warn tcp pktproc can't continue tcp reassembly because it is out of sync
tcp_case_1 11 0 info tcp pktproc tcp reassembly case 1
tcp_case_2 4532 285 info tcp pktproc tcp reassembly case 2
tcp_case_3 2 0 info tcp pktproc tcp reassembly case 3
tcp_case_4 2 0 info tcp pktproc tcp reassembly case 4
tcp_drop_out_of_wnd 23 0 warn tcp resource out-of-window packets dropped
tcp_exceed_flow_seg_limit 482 29 warn tcp resource packets dropped due to the limitation on tcp out-of-order queue size
tcp_new_syn 12 0 warn tcp pktproc A new SYN packet in tcp session
ctd_pkt_queued 12 0 info ctd resource The number of packets queued in ctd
ctd_sml_exit 1 0 info ctd pktproc The number of sessions with sml exit
ctd_sml_exit_detector_i 9670 609 info ctd pktproc The number of sessions with sml exit in detector i
ctd_sml_unset_suspend 9180 579 info ctd pktproc The number of decoder resume requests
appid_bypass_no_ctd 478 29 info appid pktproc appid bypass due to no ctd
ctd_handle_reset_and_url_exit 1062 66 info ctd pktproc Handle reset and url exit
ctd_inner_decode_exceed_flow_limit 159 9 info ctd pktproc Inner decoder exceeds limit. Replaced the oldest inner decoder.
ctd_err_bypass 9671 609 info ctd pktproc ctd error bypass
ctd_err_sw 21 0 info ctd pktproc ctd sw error
ctd_switch_decoder 11 0 info ctd pktproc ctd switch decoder
ctd_stop_proc 3553 223 info ctd pktproc ctd stops to process packet
ctd_run_detector_i 13299 839 info ctd pktproc run detector_i
ctd_sml_vm_run_impl_opcodeexit 8633 544 info ctd pktproc SML VM opcode exit
ctd_sml_vm_run_impl_immed8000 518 32 info ctd pktproc SML VM immed8000
ctd_sml_vm_run_eval_zip_ratio 3 0 info ctd pktproc SML VM eval zip ratio
ctd_decode_filter_chunk_normal 7363 464 info ctd pktproc Packets with normal chunks
ctd_decode_filter_QP 33 2 info ctd pktproc decode filter QP
ctd_sml_opcode_set_file_type 1102 68 info ctd pktproc sml opcode set file type
ctd_token_match_overflow 80 4 info ctd pktproc The token match overflow
ctd_filter_decode_failure_qpdecode 33 2 error ctd pktproc Number of decode filter failure for qpdecode
ctd_bloom_filter_nohit 52724 3328 info ctd pktproc The number of no match for virus bloom filter
ctd_sml_cache_conflict 12 0 info ctd pktproc The number of sml cache conflict
ctd_fwd_err_session 84 5 info ctd pktproc Content forward error: invalid session state
ctd_fwd_err_tcp_state 12940 816 info ctd pktproc Content forward error: TCP in establishment when session went away
ctd_header_insert_end_err 8 0 warn ctd pktproc The number of header_insert_failure due to no header-end
ctd_header_insert_dec_fltr_lim 8 0 warn ctd pktproc The number of header_insert_failure due to decode_filter limitation
fpga_pkt -2 0 info fpga resource The packets held because of requests to FPGA
aho_request -1 0 info aho resource The AHO outstanding requests
aho_fpga 354201 22364 info aho resource The total requests to FPGA for AHO
aho_fpga_data 223676067 14123637 info aho resource The total data size to FPGA for AHO
aho_fpga_state_verify_failed 11 0 info aho pktproc when getting result from fpga, session's state was changed
aho_sw_offload_state_verify_failed 1 0 info aho pktproc when getting returned request from sw offload, session's state was changed
aho_too_many_matches 109 6 info aho pktproc too many signature matches within one packet
aho_sw_min_threshold 6479 408 info aho pktproc Usage of software AHO caused by packet length min threshold
aho_sw_max_threshold 336 21 info aho pktproc Usage of software AHO caused by packet length max threshold
aho_sw_offload 5597 352 info aho pktproc The total number of software aho offload
aho_sw 6815 429 info aho pktproc The total usage of software for AHO
ctd_exceed_queue_limit 21 0 warn ctd resource The number of packets queued in ctd exceeds per session's limit, action bypass
ctd_predict_queue_deque 9 0 info ctd pktproc ctd predict queue deque
ctd_predict_queue_enque 9 0 info ctd pktproc ctd predict queue got enque due to predict waiting fpp
ctd_predict_ack_request_sent 9 0 info ctd pktproc ctd predict queue ack request sent
ctd_predict_ack_request_rcv 9 0 info ctd pktproc ctd predict queue ack request received
ctd_predict_ack_reply_sent 9 0 info ctd pktproc ctd predict queue ack reply sent
ctd_predict_ack_reply_rcv 9 0 info ctd pktproc ctd predict queue ack reply received
ctd_appid_reassign 10682 674 info ctd pktproc appid was changed
ctd_appid_reset 3542 223 info ctd pktproc go back to appid
ctd_decoder_reassign 11 0 info ctd pktproc decoder was changed
ctd_url_block 69 3 info ctd pktproc sessions blocked by url filtering
ctd_url_block_cont 6 0 info ctd pktproc sessions prompted with block/cont for url filtering
ctd_process 36428 2299 info ctd pktproc session processed by ctd
ctd_pkt_slowpath 450920 28472 info ctd pktproc Packets processed by slowpath
ctd_pkt_slowpath_suspend_regex 1 0 info ctd pktproc Packets bypassed CTD at regex stage
ctd_decoded_buf -2 0 info ctd pktproc decoded buffer
ctd_hitcount_period_update 2 0 info ctd system Number of Policy Hit Count periodical update
ha_msg_sent 652541 41203 info ha system HA: messages sent
ha_msg_recv 8 0 info ha system HA: messages received
ha_session_setup_msg_sent 59329 3745 info ha pktproc HA: session setup messages sent
ha_session_teardown_msg_sent 34197 2158 info ha pktproc HA: session teardown messages sent
ha_session_update_msg_sent 558911 35291 info ha pktproc HA: session update messages sent
ha_predict_add_msg_sent 11 0 info ha pktproc HA: predict session add messages sent
ha_predict_delete_msg_sent 19 1 info ha pktproc HA: predict session delete messages sent
ha_arp_update_msg_sent 74 4 info ha pktproc HA: ARP update messages sent
ha_sess_upd_notsent_unsyncable 112 7 info ha system HA session update message not sent: session not syncable
ha_err_decap 8 0 error ha system Packets dropped: HA message decapsulation error
ha_err_decap_proto 8 0 error ha system Packets dropped: HA message protocol decapsulation error
log_url_cnt 5007 315 info log system Number of url logs
log_urlcontent_cnt 181 10 info log system Number of url content logs
log_uid_req_cnt 270 16 info log system Number of uid request logs
log_traffic_cnt 36240 2287 info log system Number of traffic logs
log_http_hdr_cnt 698 43 info log system Number of HTTP hdr field logs
ctd_http_range_response 506 31 info ctd system Number of HTTP range responses detected by ctd
log_suppress 112 7 info log system Logs suppressed by log suppression
uid_ipinfo_rcv 400 24 info uid pktproc Number of ip user info received
url_db_request 40 2 info url pktproc Number of URL database request
url_db_reply 185 11 info url pktproc Number of URL reply
url_request_pkt_drop 60 2 drop url pktproc The number of packets get dropped because of waiting for url category request
url_session_not_in_wait 22 0 error url system The session is not waiting for url
url_reply_not_distribute 2 0 warn url pktproc Number of URL reply not distributed to other DPs
zip_process -1 0 info zip resource The outstanding zip processes
zip_process_total 611 38 info zip pktproc The total number of zip engine decompress process
zip_process_sw 7319 462 info zip pktproc The total number of zip software decompress process
zip_hw_in 712403 44982 info zip pktproc The total input data size to hardware zip engine
zip_hw_out 6398333 404011 info zip pktproc The total output data size from hardware zip engine
tcp_modi_q_pkt_alloc 566 35 info tcp pktproc packets allocated by tcp modification queue
tcp_modi_q_pkt_free 578 35 info tcp pktproc packets freed by tcp modification queue
tcp_modi_q_hit 12 0 info tcp pktproc packets that fall in tcp modification queue
tcp_modi_q_ack 3 0 info tcp pktproc packets that acked by tcp modification queue
tcp_fin_q_pkt_alloc 10 0 info tcp pktproc packets allocated by tcp FIN queue
tcp_fin_q_pkt_free 10 0 info tcp pktproc packets freed by tcp FIN queue
ctd_smb_outoforder_chunks 30 1 info ctd pktproc Number of out-of-order SMB chunks
pkt_flow_fastpath 1 0 info packet resource Packets entered module flow stage fastpath
pkt_flow_forwarding -2500 -157 info packet resource Packets entered module flow stage forwarding
pkt_nac_result 707440 44669 info packet resource Packets entered module nac stage result
pkt_flow_np 3266763 206273 info packet resource Packets entered module flow stage np
pkt_module_internal 1 0 info packet resource Packets entered module module stage internal
pkt_zip_result -1 0 info packet resource Packets entered module zip stage result
pkt_pktlog_forwarding -8 0 info packet resource Packets entered module pktlog stage forwarding

20 REPLIES

L4 Transporter

Hi,

 

https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/zone-protection-and-dos-protection/dos-pro...

 

Can you try running the following command and identifying the sessions, to check whether the traffic is inter-vsys traffic?

Hi, thank you for your response.

 

Here is the output:

 

Zone RANGER-to-vsys2 is an inter-vsys zone.

 

-- SLOT: s1, DP: dp0 --
USAGE - ATOMIC: 10% TOTAL: 12%

TOP SESSIONS:
SESS-ID PCT GRP-ID COUNT
116089 5% 1 23
3 55
7 37

SESSION DETAILS
SESS-ID PROTO SZONE SRC SPORT DST DPORT IGR-IF EGR-IF TYPE APP
116089 6 RANGER-to-vsys2 10.215.237.3 57505 74.125.100.135 443 ethernet1/23 ethernet1/21 FLOW ssl


-- SLOT: s1, DP: dp1 --
USAGE - ATOMIC: 92% TOTAL: 97%

TOP SESSIONS:
SESS-ID PCT GRP-ID COUNT
34346535 6% 1 3
3 59
7 71
34428199 5% 1 5
3 50
7 63
34085016 3% 3 64
34101168 3% 3 76
34352675 3% 3 73
34356660 2% 3 42

SESSION DETAILS
SESS-ID PROTO SZONE SRC SPORT DST DPORT IGR-IF EGR-IF TYPE APP
34085016 6 RANGER-to-vsys2 10.215.237.2 3449 2.19.194.227 443 ethernet1/23 ethernet1/21 FLOW ssl
34101168 6 RANGER-to-vsys2 10.215.237.2 53494 172.217.132.40 443 ethernet1/23 ethernet1/21 FLOW ssl
34346535 6 RANGER-to-vsys2 10.215.237.3 16979 74.125.100.9 443 ethernet1/23 ethernet1/21 FLOW ssl
34352675 6 RANGER-to-vsys2 10.220.16.70 57026 52.97.136.214 443 ethernet1/23 ethernet1/21 FLOW ssl
34356660 6 RANGER-to-vsys2 10.215.237.3 4433 198.38.115.175 443 ethernet1/23 ethernet1/21 FLOW netflix-base
34428199 6 RANGER-to-vsys2 10.215.237.2 2645 74.125.100.10 443 ethernet1/23 ethernet1/21 FLOW ssl

Hi,

 

RANGER-to-vsys2 is an inter-vsys zone:

 

-- SLOT: s1, DP: dp0 --
USAGE - ATOMIC: 13% TOTAL: 15%

TOP SESSIONS:
SESS-ID PCT GRP-ID COUNT
32421 3% 1 8
3 38
7 31
332954 2% 1 18
3 22
7 19

SESSION DETAILS
SESS-ID PROTO SZONE SRC SPORT DST DPORT IGR-IF EGR-IF TYPE APP
32421 6 RANGER-to-vsys2 10.215.237.2 43416 198.38.115.136 443 ethernet1/23 ethernet1/21 FLOW netflix-base
332954 6 RANGER-to-vsys2 10.215.237.2 49231 172.217.132.39 443 ethernet1/23 ethernet1/21 FLOW ssl


-- SLOT: s1, DP: dp1 --
USAGE - ATOMIC: 93% TOTAL: 97%

TOP SESSIONS:
SESS-ID PCT GRP-ID COUNT
33687731 4% 3 92
34111845 3% 7 68
34029937 2% 1 2
3 23
7 17
34220192 2% 7 49

SESSION DETAILS
SESS-ID PROTO SZONE SRC SPORT DST DPORT IGR-IF EGR-IF TYPE APP
33687731 6 RANGER-to-vsys2 10.215.237.2 65300 13.107.136.254 443 ethernet1/23 ethernet1/21 FLOW ssl
34029937 6 INSIDE 10.215.237.2 54760 209.85.226.71 443 ethernet1/23 ethernet1/21 FLOW ssl
34111845 6 INSIDE 10.215.237.3 55302 40.68.93.43 443 ethernet1/23 ethernet1/21 FLOW ssl
34220192 6 INSIDE 10.215.237.2 38493 93.184.221.240 80 ethernet1/23 ethernet1/21 FLOW ms-update

Go back to that link above and follow step 2: check whether the sessions shown by

show running resource-monitor ingress-backlogs

are offloaded or not.

 

If a session is not offloaded, that is the problem: it is what is causing the load on that specific DP.
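
For example, taking one of the session IDs from the backlog output above, something like the following should show whether that particular session is being handled in hardware (the session ID is just one taken from the earlier output; the exact fields shown in the session detail vary by platform and PAN-OS release):

show running resource-monitor ingress-backlogs
show session id 34346535
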

Hi NextGen,

 

First, thank you for your time and effort.

 

I checked a couple of sessions; most of them were offloaded and some were not. It's not one specific session that keeps appearing. Each time I execute the command "show running resource-monitor ingress-backlogs dp 1", different sessions appear.

 

Mostly I see sessions coming from our two HTTP forwarding proxy servers. Could it be that the Palo Alto has difficulty handling so many sessions from only two hosts? About 4,000 people connect to the internet through these proxy servers.
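
A quick way to quantify that, assuming PAN-OS 8.1 filter syntax and using one of the proxy addresses from the outputs above, is a per-source session count:

show session all filter source 10.215.237.2 count yes
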

 

Some specific sessions take 10% CPU.

 

And why does DP1 show 100% and DP0 only 40%? Shouldn't it be evenly distributed?

 

 

34065030 6 RANGER-to-vsys2 10.215.237.2 47465 172.217.132.105 443 ethernet1/23 ethernet1/21 FLOW ssl
34184167 6 RANGER-to-vsys2 10.215.237.3 17309 104.73.43.42 443 ethernet1/23 ethernet1/21 FLOW ssl

34161385 6 RANGER-to-vsys2 10.215.237.2 63766 158.179.76.26 443 ethernet1/23 ethernet1/21 FLOW ssl

34589698 6 RANGER-to-vsys2 10.215.237.2 26644 209.85.226.73 443 ethernet1/23 ethernet1/21 FLOW ssl

 

Example of one session:

 

Session 33951667

c2s flow:
source: 10.215.237.3 [RANGER-to-vsys2]
dst: 143.204.106.93
proto: 6
sport: 22009 dport: 443
state: ACTIVE type: FLOW
src user: unknown
dst user: unknown

s2c flow:
source: 143.204.106.93 [EXTERN]
dst: 213.194.44.30
proto: 6
sport: 443 dport: 6118
state: ACTIVE type: FLOW
src user: unknown
dst user: unknown

DP : 1
index(local): : 397235
start time : Tue Apr 7 11:16:32 2020
timeout : 3600 sec
time to live : 3592 sec
total byte count(c2s) : 986401
total byte count(s2c) : 30816459
layer7 packet count(c2s) : 13493
layer7 packet count(s2c) : 20436
vsys : vsys1
application : wetransfer-downloading
rule : RANGER_Prx-FTP-HTTP-HTTPS-01-02
service timeout override(index) : False
session to be logged at end : True
session in session ager : True
session updated by HA peer : False
address/port translation : source
nat-rule : Nat overload FWD-PRX(vsys1)
layer7 processing : completed
URL filtering enabled : True
URL category : online-storage-and-backup
session via syn-cookies : False
session terminated on host : False
session traverses tunnel : False
captive portal session : False
ingress interface : ethernet1/23
egress interface : ethernet1/21
session QoS rule : N/A (class 4)
tracker stage l7proc : ctd decoder bypass
end-reason : unknown

 

I would contact TAC to see if they have better suggestions.

 

Q0: How many sessions are generated by "RANGER_Prx-FTP-HTTP-HTTPS-01-02"?

Q1: Does rule "RANGER_Prx-FTP-HTTP-HTTPS-01-02" have threat, WildFire, SSL decryption, and URL filtering enabled?

Q2: Can you try writing an application override rule for "RANGER_Prx-FTP-HTTP-HTTPS-01-02"? That will bypass all Layer 7 inspection.

 

Based on the rule name, it is allowing FTP, HTTP, and HTTPS traffic. It will be interesting to see the session counts during peak hours, since many of those HTTP and HTTPS sessions are short-lived and may be created at a high rate.
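
A sketch of how those counts could be pulled, assuming PAN-OS 8.1 filter syntax (the rule name is the one from the session detail above, and the application filter is only an example):

show session all filter rule RANGER_Prx-FTP-HTTP-HTTPS-01-02 count yes
show session all filter rule RANGER_Prx-FTP-HTTP-HTTPS-01-02 application ssl count yes
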

 

>> And why does DP1 show 100% and DP0 only 40%? Shouldn't it be evenly distributed?

 

It has to do with the DP hashing algorithm, since DP0 also needs to handle new session creation on the 5000 series. I believe the 5050 only has 2 DPs, so that is pretty much the best it can do.

 

 

 

Hi,

 

Thank you for your reply.

We already contacted TAC (through our supplier) and are awaiting their response after collecting data. We also temporarily disabled all next-gen firewall capabilities on all rules and disabled all logging on all rules. It did not help.

 

Could it be that the hashing algorithm sends all sessions to DP1 based on the source IP of the proxy servers?

 

Maybe it could help if we bypass the proxy servers so all clients will connect directly?

 

 

>> Could it be that the hashing algorithm sends all sessions to DP1 based on the source IP of the proxy servers?

 

DP0 has dual tasks: it handles new session creation and also functions as a regular DP. If the hash algorithm (I never got a clear answer from PAN on how session distribution works on the 5000 series) favors DP1, that is all there is to it.

 

>> Maybe it could help if we bypass the proxy servers so all clients will connect directly?

 

If you can reduce the number of sessions on the firewall, that should help.

 

You can also try an app override for that firewall rule; that may help as well.

 

 

Well,

 

I would expect the firewall to handle this easily. The PA-5050 has a throughput of 5 Gbit/s for next-gen. We are only handling 1 Gbit/s of traffic at the moment and ~220,000 sessions.

 

Bypassing the proxy would only result in all the clients connecting directly to the internet, so it would not change the number of sessions, I'm afraid.

 

The traffic of one user flows through the firewall 4 times (2 different vsys) before reaching the internet: first to the proxy (2x) and then to the internet (2x). Maybe we should optimize this traffic flow so it only passes through the firewall 2x.

 

You are talking about app override. But if we disable all next-gen scanning on a rule, this would do the same, right? We already did this to test whether it would impact performance.

 

show system statistics session:

 

Device is up : 1 day 0 hour 49 mins 26 sec
Packet rate : 196424/s
Throughput : 927118 Kbps
Total active sessions : 218205
Active TCP sessions : 148124
Active UDP sessions : 68588
Active ICMP sessions : 2

 

 

>> I would expect the firewall to handle this easily. The PA-5050 has a throughput of 5 Gbit/s for next-gen. We are only handling 1 Gbit/s of traffic at the moment and ~220,000 sessions.

 

Can you provide a simple diagram of the traffic flow in terms of the firewall zones/interfaces/address spaces in relation to the vsys?

 

> The traffic of one user flows through the firewall 4 times (2 different vsys) before reaching the internet: first to the proxy (2x) and then to the internet (2x). Maybe we should optimize this traffic flow so it only passes through the firewall 2x.

 

A diagram will help..

 

> You are talking about app override. But if we disable all next-gen scanning on a rule, this would do the same, right? We already did this to test whether it would impact performance.

 

The firewall still performs App-ID unless you apply an app override, which turns that rule into an L4 firewall.
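
As a rough sketch of what such an override could look like in configure mode (the names Proxy-Override and custom-proxy-tcp are hypothetical, custom-proxy-tcp would be a custom application defined beforehand, the zones and addresses are taken from the outputs earlier in this thread, and the exact set-command syntax, including where the rulebase sits in a multi-vsys configuration, should be verified against the 8.1 CLI reference):

set rulebase application-override rules Proxy-Override from RANGER-to-vsys2 to EXTERN source [ 10.215.237.2 10.215.237.3 ] destination any protocol tcp port 443 application custom-proxy-tcp
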

Diagram:

 

Internet

      |

PA External Vsys----|

      |                         DMZ servers between vsys

Pa Internal Vsys-----|

      |

    LAN

 

All users are now connected over the internet to the VPN concentrator located in the DMZ segment. So if a client wants to visit a website:

 

Laptop -> VPN Tunnel -> tunnel flows through External PA -> VPN Concentrator -> Internal PA -> Proxy servers on LAN -> Internal PA -> External PA -> Internet...

Which vsys is the DMZ segment assigned to?

 

Why are 2 vsys needed? This could be done with a single vsys with an untrust zone, a DMZ zone, and a LAN zone.

Clients connect through a VPN tunnel from the internet to the VPN concentrator in the DMZ. Then they access the proxy server on the LAN segment. The proxy server sends the requests to the internet through both firewalls.

 

[attached diagram: drawing.png]

Two vsys are not needed in my opinion and don't add additional security, but... legacy. Originally it was a dual-firewall, dual-vendor approach, and we migrated it as is.

 

The DMZ servers have two NICs: one attached to the external FW and one to the internal FW.

 

The two-firewall approach adds a lot of complexity.
