05-02-2010 05:23 AM
Upgraded our PA-500 to 3.1. It came back up around an hour ago and the CPU usage has been on a pretty consistent 65-66% ever since.
I presume this is either normal or a problem - how can I see what it's actually doing, please?
Also even now a couple of hours later almost everything in the ACC shows up as "No matching records"?
It's all working i.e. traffic is coming in/out and being identified.
Thanks.
05-03-2010 01:18 PM
The upgrade includes a log conversion process that can take a long time to complete - in some cases several hours, depending on log size. It is normal to see elevated CPU usage on the dataplane during this process; you shouldn't see utilization spike too high on the management plane. You can check dataplane CPU usage with the command >show running resource-monitor and management plane usage with >show system resources.
05-04-2010 12:32 AM
It seems to have settled at a constant 35%, even when nobody is here and it isn't doing anything.
Not sure if this will hold its formatting but here goes:
top - 08:30:55 up 1 day, 17:44, 1 user, load average: 1.20, 0.40, 0.19
Tasks: 86 total, 1 running, 85 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.9%us, 1.5%sy, 2.6%ni, 94.5%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 1007852k total, 974580k used, 33272k free, 97328k buffers
Swap: 2008084k total, 294128k used, 1713956k free, 596828k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
9991 20 0 4464 1012 800 R 4 0.1 0:00.04 top
2178 15 -5 0 0 0 S 2 0.0 0:06.06 kjournald
1 20 0 1832 556 532 S 0 0.1 0:01.24 init
2 15 -5 0 0 0 S 0 0.0 0:00.00 kthreadd
3 RT -5 0 0 0 S 0 0.0 0:00.03 migration/0
4 15 -5 0 0 0 S 0 0.0 0:00.00 ksoftirqd/0
5 RT -5 0 0 0 S 0 0.0 0:00.06 migration/1
6 15 -5 0 0 0 S 0 0.0 0:00.00 ksoftirqd/1
7 15 -5 0 0 0 S 0 0.0 0:22.56 events/0
8 15 -5 0 0 0 S 0 0.0 0:03.06 events/1
9 15 -5 0 0 0 S 0 0.0 0:00.17 khelper
57 15 -5 0 0 0 S 0 0.0 0:00.12 kblockd/0
58 15 -5 0 0 0 S 0 0.0 0:00.00 kblockd/1
61 15 -5 0 0 0 S 0 0.0 0:00.00 kseriod
65 15 -5 0 0 0 S 0 0.0 0:00.00 ata/0
66 15 -5 0 0 0 S 0 0.0 0:00.00 ata/1
67 15 -5 0 0 0 S 0 0.0 0:00.00 ata_aux
73 15 -5 0 0 0 S 0 0.0 0:00.00 khubd
98 20 0 0 0 0 S 0 0.0 0:02.89 pdflush
99 20 0 0 0 0 S 0 0.0 0:04.74 pdflush
100 15 -5 0 0 0 S 0 0.0 0:12.92 kswapd0
101 15 -5 0 0 0 S 0 0.0 0:00.00 aio/0
102 15 -5 0 0 0 S 0 0.0 0:00.00 aio/1
103 15 -5 0 0 0 S 0 0.0 0:00.00 nfsiod
684 15 -5 0 0 0 S 0 0.0 0:00.00 scsi_eh_0
702 15 -5 0 0 0 S 0 0.0 0:00.89 mtdblockd
725 15 -5 0 0 0 S 0 0.0 0:00.00 rpciod/0
726 15 -5 0 0 0 S 0 0.0 0:00.00 rpciod/1
760 15 -5 0 0 0 S 0 0.0 0:34.24 kjournald
813 16 -4 2004 384 380 S 0 0.0 0:00.46 udevd
1640 15 -5 0 0 0 S 0 0.0 0:08.84 kjournald
1641 15 -5 0 0 0 S 0 0.0 0:00.00 kjournald
1833 20 0 2240 708 656 S 0 0.1 0:00.10 syslogd
1836 20 0 1888 400 348 S 0 0.0 0:00.00 klogd
1844 rpc 20 0 2080 496 492 S 0 0.0 0:00.01 portmap
1862 20 0 2348 728 724 S 0 0.1 0:00.04 rpc.statd
2014 20 0 6700 828 724 S 0 0.1 0:00.00 sshd
2046 20 0 6700 620 616 S 0 0.1 0:00.00 sshd
2055 20 0 3276 600 596 S 0 0.1 0:00.00 xinetd
2078 15 -5 0 0 0 S 0 0.0 0:00.00 lockd
2079 15 -5 0 0 0 S 0 0.0 0:25.65 nfsd
2080 15 -5 0 0 0 S 0 0.0 0:26.95 nfsd
2081 15 -5 0 0 0 S 0 0.0 0:30.20 nfsd
2082 15 -5 0 0 0 S 0 0.0 0:29.34 nfsd
2083 15 -5 0 0 0 S 0 0.0 0:36.20 nfsd
2084 15 -5 0 0 0 S 0 0.0 0:17.87 nfsd
2085 15 -5 0 0 0 S 0 0.0 0:29.94 nfsd
2086 15 -5 0 0 0 S 0 0.0 0:22.88 nfsd
2089 20 0 2356 668 576 S 0 0.1 0:00.16 rpc.mountd
2138 0 -20 54652 6584 3524 S 0 0.7 1:09.25 masterd_core
2141 20 0 1884 448 444 S 0 0.0 0:00.01 agetty
2148 0 -20 21216 2948 2596 S 0 0.3 0:54.86 masterd_manager
2155 15 -5 21068 3116 2704 S 0 0.3 27:29.38 sysd
2156 0 -20 25072 6296 2636 S 0 0.6 5:25.42 masterd_manager
2158 30 10 38476 5124 3292 S 0 0.5 8:07.88 python
2160 20 0 72648 3224 2844 S 0 0.3 0:15.19 sysdagent
2168 20 0 7208 1304 1300 S 0 0.1 0:00.06 tscat
2169 20 0 20912 2640 2540 S 0 0.3 0:01.98 brdagent
2170 20 0 25188 2696 2544 S 0 0.3 0:02.46 ehmon
2171 20 0 20820 2652 2520 S 0 0.3 0:00.26 chasd
2403 0 -20 2900 632 572 S 0 0.1 0:00.29 crond
2416 20 0 302m 123m 41m S 0 12.5 1:56.18 devsrvr
2417 20 0 287m 80m 5888 S 0 8.2 5:10.03 mgmtsrvr
2475 nobody 20 0 59116 2892 2756 S 0 0.3 0:00.52 appweb
2489 20 0 61500 4172 3640 S 0 0.4 0:01.89 ikemgr
2490 20 0 118m 56m 38m S 0 5.7 0:39.62 logrcvr
2491 20 0 85160 3328 3072 S 0 0.3 0:01.55 rasmgr
2492 20 0 76904 2824 2688 S 0 0.3 0:01.73 keymgr
2493 20 0 133m 2904 2708 S 0 0.3 0:00.35 varrcvr
2494 17 -3 35080 3328 3012 S 0 0.3 0:02.47 ha_agent
2495 20 0 76928 3220 2900 S 0 0.3 1:37.16 sslmgr
2496 20 0 52332 3120 2928 S 0 0.3 0:00.36 ldapd
2497 20 0 35204 3416 3064 S 0 0.3 0:00.36 dhcpd
2498 20 0 113m 3980 3432 S 0 0.4 0:03.64 routed
2499 20 0 61912 3524 3128 S 0 0.3 0:00.69 pppoed
2500 20 0 69948 3960 3600 S 0 0.4 0:02.45 authd
3110 20 0 26656 3772 2972 S 0 0.4 0:03.75 snmpd
3276 nobody 20 0 122m 8748 5464 S 0 0.9 1:20.94 appweb
6116 20 0 22204 2504 2064 S 0 0.2 0:00.16 sshd
6118 admin 20 0 22204 1468 1016 S 0 0.1 0:00.04 sshd
6119 admin 20 0 82704 18m 9944 S 0 1.9 0:03.31 cli
7473 nobody 20 0 77560 3784 3372 S 0 0.4 0:02.14 appweb
9988 admin 20 0 2972 672 568 S 0 0.1 0:00.03 less
9990 20 0 3828 1188 1052 S 0 0.1 0:00.04 sh
9992 20 0 1936 540 464 S 0 0.1 0:00.00 sed
15213 20 0 4020 3888 2996 S 0 0.4 0:00.20 ntpd
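For anyone wanting to boil a pasted top dump like the one above down to a single "how busy is the box" number, a small parser along these lines works (a hypothetical helper, not anything shipped with PAN-OS - it just reads the Cpu(s) summary line and subtracts the idle field from 100):

```python
import re

def busy_cpu_percent(top_cpu_line):
    """Return the non-idle CPU percentage from a `top` Cpu(s) summary line."""
    # Pull out the idle field, e.g. "94.5%id"
    m = re.search(r'([\d.]+)%?\s*id', top_cpu_line)
    if m is None:
        raise ValueError("no idle (%id) field found in line")
    return round(100.0 - float(m.group(1)), 1)

# The Cpu(s) line from the paste above:
line = "Cpu(s): 0.9%us, 1.5%sy, 2.6%ni, 94.5%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st"
print(busy_cpu_percent(line))  # 5.5
```

Note that this top output only covers the management plane - here it shows roughly 5.5% busy, which is why the persistent 35% figure must be coming from the dataplane.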
Resource monitoring sampling data (per second):
CPU load sampling by group:
flow_lookup : 5%
flow_fastpath : 5%
flow_slowpath : 5%
flow_forwarding : 5%
flow_mgmt : 1%
flow_ctrl : 1%
fpga_result : 5%
flow_np : 3%
dfa_result : 5%
module_internal : 5%
aho_result : 5%
zip_result : 5%
pktlog_forwarding : 5%
pci : 0%
flow_host : 2%
CPU load (%) during last 60 seconds:
core 0 1 2 3
0 1 15 15
0 1 4 5
0 1 5 5
0 1 4 5
0 1 8 8
0 1 13 13
0 1 9 9
0 1 6 6
0 1 7 8
0 1 6 6
0 1 9 8
0 1 12 13
0 1 13 14
0 1 15 15
0 1 15 15
0 1 14 14
0 1 14 16
0 1 14 14
0 1 12 11
0 1 14 14
0 1 16 17
0 1 17 17
0 1 18 18
0 1 17 17
0 1 18 18
0 1 17 17
0 1 17 16
0 1 22 22
0 1 24 24
0 1 17 17
0 1 12 11
0 1 11 10
0 1 11 11
0 1 11 11
0 1 13 13
0 1 15 15
0 1 15 15
0 1 19 19
0 1 24 23
0 1 21 21
0 1 15 15
0 1 15 15
0 1 22 22
0 1 27 26
0 1 22 22
0 1 16 17
0 1 11 11
0 1 10 10
0 1 12 12
0 1 18 18
0 1 20 20
0 1 18 18
0 1 14 14
0 1 10 10
0 1 10 10
0 1 11 11
0 1 13 13
0 1 18 18
0 1 20 20
0 1 15 15
Resource utilization (%) during last 60 seconds:
session:
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
packet buffer:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (on-chip):
3 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 3 4
2 2 2 3 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 3
Resource monitoring statistics (per minute):
CPU load (%) during last 60 minutes:
core 0 1 2 3
avg max avg max avg max avg max
0 0 1 1 15 27 15 26
0 0 1 1 15 24 15 22
0 0 1 1 10 24 10 24
0 0 1 1 7 24 7 24
0 0 1 1 6 20 6 21
0 0 1 1 6 15 6 15
0 0 1 1 4 10 4 9
0 0 1 1 5 16 5 15
0 0 1 1 4 14 4 14
0 0 1 1 7 23 7 24
0 0 1 1 3 7 4 8
0 0 1 1 6 17 7 16
0 0 1 1 8 36 8 36
0 0 1 1 30 53 30 53
0 0 1 1 30 54 30 54
0 0 1 1 15 47 15 47
0 0 1 1 8 17 8 17
0 0 1 1 6 20 6 19
0 0 1 1 5 14 5 14
0 0 1 1 6 17 6 18
0 0 1 1 5 16 5 16
0 0 1 1 4 13 4 13
0 0 1 1 7 37 7 36
0 0 1 1 4 19 4 19
0 0 1 1 7 27 7 26
0 0 1 1 3 15 4 19
0 0 1 1 4 15 4 15
0 0 1 1 2 8 2 8
0 0 1 1 3 17 3 17
0 0 1 1 3 11 3 11
0 0 1 1 4 14 4 15
0 0 1 1 3 9 3 9
0 0 1 1 3 5 3 6
0 0 1 1 4 25 4 27
0 0 1 1 4 10 4 9
0 0 1 1 4 10 4 10
0 0 1 1 4 12 4 13
0 0 1 1 4 13 4 14
0 0 1 1 5 11 5 11
0 0 1 1 5 15 5 15
0 0 1 1 5 12 5 12
0 0 1 1 5 23 6 25
0 0 1 1 4 9 4 9
0 0 1 1 4 14 4 14
0 0 1 1 5 14 6 14
0 0 1 1 5 17 5 18
0 0 1 1 4 13 4 13
0 0 1 1 2 7 2 7
0 0 1 1 4 14 3 14
0 0 1 1 2 7 2 7
0 0 1 1 3 9 3 10
0 0 1 1 3 20 3 21
0 0 1 1 6 31 6 31
0 0 1 1 1 3 1 3
0 0 1 1 3 10 3 10
0 0 1 1 3 10 3 10
0 0 1 1 6 18 6 18
0 0 1 1 2 9 2 10
0 0 1 1 3 14 3 13
0 0 1 1 2 8 2 8
Resource utilization (%) during last 60 minutes:
session (average):
3 2 2 2 2 1 1 1 2 2 2 2 2 3 3
2 2 2 1 2 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 0 0 1 1 1 1 1 0 1
session (maximum):
3 2 3 2 2 2 1 2 2 2 2 2 2 3 3
3 3 2 2 2 1 1 2 1 1 1 1 1 1 1
1 1 1 1 1 1 2 1 2 1 1 1 2 1 2
2 1 1 1 1 1 1 0 1 1 1 1 1 1 1
packet buffer (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (average):
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 3 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
packet descriptor (on-chip) (maximum):
4 3 3 4 3 3 3 3 3 4 2 2 2 4 4
4 3 3 3 3 3 3 5 2 11 3 2 2 2 2
2 2 4 2 3 2 2 2 2 3 2 2 3 3 2
2 4 2 4 3 3 4 5 2 2 3 2 2 3 3
Resource monitoring statistics (per hour):
CPU load (%) during last 24 hours:
core 0 1 2 3
avg max avg max avg max avg max
0 0 1 1 2 31 2 31
0 0 1 1 1 20 1 20
0 0 0 1 0 13 0 13
0 0 0 1 0 19 0 16
0 0 0 1 0 9 0 9
0 0 0 1 0 6 0 6
0 0 0 1 0 12 0 11
0 0 1 1 0 13 0 13
0 0 1 3 0 32 0 31
0 0 0 1 0 5 0 4
0 0 0 1 0 28 0 28
0 0 0 1 0 19 0 33
0 0 1 1 1 47 1 49
0 0 1 1 1 46 1 45
0 0 1 1 1 20 1 15
0 0 0 1 0 18 0 18
0 0 1 1 1 21 1 21
0 0 0 1 0 15 0 14
0 0 1 1 0 19 0 19
0 0 1 1 1 50 1 46
0 0 1 1 1 26 1 24
0 0 1 1 1 24 1 24
0 0 1 1 1 46 1 44
0 0 1 1 0 7 0 7
Resource utilization (%) during last 24 hours:
session (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
session (maximum):
2 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 1 0 0 0
packet buffer (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet buffer (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (average):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (average):
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2
packet descriptor (on-chip) (maximum):
5 3 3 3 5 3 3 5 2 4 5 5 5 2 6
2 4 3 3 4 4 3 5 4
Resource monitoring statistics (per day):
CPU load (%) during last 7 days:
core 0 1 2 3
avg max avg max avg max avg max
0 0 0 1 0 50 0 46
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
Resource utilization (%) during last 7 days:
session (average):
0 0 0 0 0 0 0
session (maximum):
1 0 0 0 0 0 0
packet buffer (average):
0 0 0 0 0 0 0
packet buffer (maximum):
0 0 0 0 0 0 0
packet descriptor (average):
0 0 0 0 0 0 0
packet descriptor (maximum):
0 0 0 0 0 0 0
packet descriptor (on-chip) (average):
2 0 0 0 0 0 0
packet descriptor (on-chip) (maximum):
5 0 0 0 0 0 0
Resource monitoring statistics (per week):
CPU load (%) during last 13 weeks:
core 0 1 2 3
avg max avg max avg max avg max
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
Resource utilization (%) during last 13 weeks:
session (average):
0 0 0 0 0 0 0 0 0 0 0 0 0
session (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (average):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (average):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (average):
0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (on-chip) (maximum):
0 0 0 0 0 0 0 0 0 0 0 0 0
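To summarize the per-second sampling table from show running resource-monitor above, something like this sketch can average the dataplane core columns (in this dump, cores 2 and 3 appear to be the dataplane cores, since cores 0 and 1 sit near zero - that mapping is an assumption from the paste, not documented fact):

```python
def dp_core_load(rows):
    """Average the load of cores 2 and 3 (assumed dataplane cores) across
    sampling rows of the form "0   1  15  15"; header lines are skipped."""
    samples = []
    for row in rows:
        cols = row.split()
        # Keep only data rows: exactly four integer columns
        if len(cols) == 4 and all(c.isdigit() for c in cols):
            vals = [int(c) for c in cols]
            samples.append((vals[2] + vals[3]) / 2.0)
    return sum(samples) / len(samples)

# First few rows from the per-second table above:
rows = [
    "core  0  1  2  3",
    "0 1 15 15",
    "0 1 4 5",
    "0 1 5 5",
]
print(round(dp_core_load(rows), 1))  # 8.2
```

Run over the full 60-second table, this gives the sustained dataplane load the ACC/WebUI reports as the device CPU figure.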
05-05-2010 12:23 PM
Any ideas please anyone from PA?
05-06-2010 08:50 AM
Hi,
Here also the same problem. High CPU usage on 2 PA2050 boxes.
One box has now been running 3.1.1 for almost a week and still shows 38-50% CPU usage.
This is also the standby box, so there is little or no traffic on this system.
Regards,
O. Bor
05-06-2010 09:59 AM
I was told today via UK support that it's due to a bug/issue with Dynamic URL filtering.
I've not actually been in a position to switch it off to test, and I'm not sure how that could be the cause when there's nothing going through the unit. But if you're able to, could you confirm whether you're using that feature and try without it?
05-07-2010 05:10 AM
Hi,
I have also deleted the logs on my standby box and still see 38% CPU usage with 3-10 sessions and very little or no traffic at all.
I used clear log acc, config, system, threat and traffic, so it can't be the log conversion! Any help or ideas are welcome.
We have no production problems currently, but the high CPU usage is strange when you're used to 10% maximum usage on the 3.0.x versions.
Regards,
O. Bor