02-25-2013 08:38 AM
Hi,
my PA-500 management CPU is at 100%.
PAN-OS release 5.0.2 (same problem with 5.0.1).
If I reboot the firewall, management CPU usage goes down for a few days, then rises again to 100%.
02-25-2013 08:55 AM
Now I cannot go back to 5.0.1.
Here is the output on 5.0.2:
top - 17:55:29 up 3 days, 9:15, 1 user, load average: 7.23, 7.18, 7.23
Tasks: 100 total, 2 running, 98 sleeping, 0 stopped, 0 zombie
Cpu(s): 27.1%us, 24.1%sy, 18.8%ni, 28.0%id, 1.8%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 995872k total, 889880k used, 105992k free, 12516k buffers
Swap: 2008084k total, 166616k used, 1841468k free, 210132k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2476 20 0 183m 72m 64m S 187 7.4 4340:44 useridd
26581 30 10 56488 16m 4272 R 5 1.7 0:01.23 pan_logdb_index
26589 20 0 4468 1028 800 R 4 0.1 0:00.05 top
1 20 0 1836 556 528 S 0 0.1 0:00.96 init
2 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd
3 RT 0 0 0 0 S 0 0.0 0:01.53 migration/0
4 20 0 0 0 0 S 0 0.0 0:00.07 ksoftirqd/0
5 RT 0 0 0 0 S 0 0.0 0:01.78 migration/1
6 20 0 0 0 0 S 0 0.0 0:00.03 ksoftirqd/1
7 20 0 0 0 0 S 0 0.0 0:13.57 events/0
8 20 0 0 0 0 S 0 0.0 0:13.36 events/1
9 20 0 0 0 0 S 0 0.0 0:00.05 khelper
12 20 0 0 0 0 S 0 0.0 0:00.00 async/mgr
112 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers
114 20 0 0 0 0 S 0 0.0 0:00.00 bdi-default
115 20 0 0 0 0 S 0 0.0 0:02.45 kblockd/0
116 20 0 0 0 0 S 0 0.0 0:01.62 kblockd/1
125 20 0 0 0 0 S 0 0.0 0:00.00 ata/0
126 20 0 0 0 0 S 0 0.0 0:00.00 ata/1
127 20 0 0 0 0 S 0 0.0 0:00.00 ata_aux
132 20 0 0 0 0 S 0 0.0 0:00.00 khubd
135 20 0 0 0 0 S 0 0.0 0:00.00 kseriod
156 20 0 0 0 0 S 0 0.0 0:00.00 rpciod/0
157 20 0 0 0 0 S 0 0.0 0:00.00 rpciod/1
172 20 0 0 0 0 S 0 0.0 11:01.95 kswapd0
173 20 0 0 0 0 S 0 0.0 0:00.00 aio/0
174 20 0 0 0 0 S 0 0.0 0:00.00 aio/1
175 20 0 0 0 0 S 0 0.0 0:00.00 nfsiod
732 20 0 0 0 0 S 0 0.0 0:00.10 octeon-ethernet
760 20 0 0 0 0 S 0 0.0 0:00.00 scsi_eh_0
765 20 0 0 0 0 S 0 0.0 0:00.97 mtdblockd
793 20 0 0 0 0 S 0 0.0 0:00.00 usbhid_resumer
833 20 0 0 0 0 S 0 0.0 0:06.93 kjournald
886 16 -4 1996 440 400 S 0 0.0 0:01.15 udevd
1850 20 0 0 0 0 S 0 0.0 0:00.09 kjournald
1851 20 0 0 0 0 S 0 0.0 0:00.00 kjournald
1963 20 0 0 0 0 S 0 0.0 0:04.58 flush-8:0
2043 20 0 2008 664 576 S 0 0.1 0:00.23 syslogd
2046 20 0 1892 380 328 S 0 0.0 0:00.04 klogd
2055 20 0 1872 332 236 S 0 0.0 0:00.06 irqbalance
2063 rpc 20 0 2084 500 444 S 0 0.1 0:00.00 portmap
2081 20 0 2116 592 588 S 0 0.1 0:00.05 rpc.statd
2150 20 0 6868 672 576 S 0 0.1 0:00.01 sshd
2198 20 0 6804 444 440 S 0 0.0 0:00.00 sshd
2207 20 0 3280 584 580 S 0 0.1 0:00.01 xinetd
2226 20 0 0 0 0 S 0 0.0 0:00.00 lockd
2227 20 0 0 0 0 S 0 0.0 0:19.59 nfsd
2228 20 0 0 0 0 S 0 0.0 0:25.22 nfsd
2229 20 0 0 0 0 S 0 0.0 0:22.94 nfsd
2230 20 0 0 0 0 S 0 0.0 0:25.19 nfsd
2231 20 0 0 0 0 S 0 0.0 0:16.96 nfsd
2232 20 0 0 0 0 S 0 0.0 0:26.38 nfsd
2233 20 0 0 0 0 S 0 0.0 0:23.46 nfsd
2234 20 0 0 0 0 S 0 0.0 0:18.89 nfsd
2237 20 0 2360 684 552 S 0 0.1 0:00.59 rpc.mountd
2295 0 -20 65244 5100 2052 S 0 0.5 7:30.70 masterd_core
2305 0 -20 27864 1568 1172 S 0 0.2 0:46.41 masterd_manager
2312 15 -5 36716 2172 1304 S 0 0.2 46:15.02 sysd
2314 0 -20 32356 5708 1196 S 0 0.6 10:05.93 masterd_manager
2320 20 0 92040 6696 2580 S 0 0.7 0:02.82 dagger
2321 30 10 40568 3832 1772 S 0 0.4 9:21.27 python
2322 20 0 76640 3040 1528 S 0 0.3 0:01.25 cryptod
2323 20 0 166m 2124 1436 S 0 0.2 1:48.51 sysdagent
2339 20 0 7212 1280 1088 S 0 0.1 0:00.07 tscat
2340 20 0 71580 1240 1080 S 0 0.1 0:07.05 brdagent
2341 20 0 31780 1256 1092 S 0 0.1 0:07.38 ehmon
02-25-2013 09:12 AM
On 5.0.2 this high CPU is caused by the User-ID process. The issue has been identified and a fix is currently targeted for PAN-OS 5.0.3. On 5.0.1, as far as I know, there are no such known issues, and I am curious to see what is causing the CPU spikes there, but it seems we cannot go back to 5.0.1 to find out.
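To confirm that useridd is the process at fault, you can filter the resource output; the | match pipe is standard PAN-OS CLI, and the sample line below is taken from the dump earlier in this thread:

> show system resources | match useridd
2476 20 0 183m 72m 64m S 187 7.4 4340:44 useridd

A sustained triple-digit %CPU like this points at the User-ID daemon rather than a momentary spike.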
Thanks,
Sandeep T
02-25-2013 01:36 PM
There is an issue with pan_logdb_index in 5.0.1 which can cause some intermittent high CPU usage.
Bob
02-25-2013 02:36 PM
I'm seeing this on 4.1.11. Is there anything reported for this release? I can't see any single process in the system resources output that is chewing up all the CPU, yet the management interface tells me the management CPU is running at 100%.
top - 09:32:45 up 8 days, 21 min, 1 user, load average: 8.15, 9.01, 9.49
Tasks: 98 total, 2 running, 96 sleeping, 0 stopped, 0 zombie
Cpu(s): 24.0%us, 36.4%sy, 3.2%ni, 35.9%id, 0.1%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 995888k total, 868512k used, 127376k free, 13708k buffers
Swap: 2008084k total, 320776k used, 1687308k free, 310732k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2394 20 0 169m 36m 32m S 182 3.8 11730:01 useridd
12802 30 10 3896 1276 1092 R 5 0.1 0:06.13 genindex.sh
13886 20 0 4468 1032 800 R 3 0.1 0:00.05 top
2387 20 0 273m 63m 1616 S 2 6.5 19:46.28 logrcvr
1 20 0 1836 564 536 S 0 0.1 0:02.30 init
2 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd
3 RT 0 0 0 0 S 0 0.0 0:02.85 migration/0
4 20 0 0 0 0 S 0 0.0 0:00.03 ksoftirqd/0
5 RT 0 0 0 0 S 0 0.0 0:02.78 migration/1
6 20 0 0 0 0 S 0 0.0 0:00.08 ksoftirqd/1
7 20 0 0 0 0 S 0 0.0 0:00.39 events/0
8 20 0 0 0 0 S 0 0.0 0:00.08 events/1
9 20 0 0 0 0 S 0 0.0 0:00.03 khelper
12 20 0 0 0 0 S 0 0.0 0:00.00 async/mgr
112 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers
114 20 0 0 0 0 S 0 0.0 0:00.00 bdi-default
115 20 0 0 0 0 S 0 0.0 0:02.19 kblockd/0
116 20 0 0 0 0 S 0 0.0 0:01.28 kblockd/1
125 20 0 0 0 0 S 0 0.0 0:00.00 ata/0
126 20 0 0 0 0 S 0 0.0 0:00.00 ata/1
127 20 0 0 0 0 S 0 0.0 0:00.00 ata_aux
132 20 0 0 0 0 S 0 0.0 0:00.00 khubd
135 20 0 0 0 0 S 0 0.0 0:00.00 kseriod
154 20 0 0 0 0 S 0 0.0 0:00.00 rpciod/0
155 20 0 0 0 0 S 0 0.0 0:00.00 rpciod/1
167 20 0 0 0 0 S 0 0.0 1:13.35 kswapd0
168 20 0 0 0 0 S 0 0.0 0:00.00 aio/0
169 20 0 0 0 0 S 0 0.0 0:00.00 aio/1
170 20 0 0 0 0 S 0 0.0 0:00.00 nfsiod
723 20 0 0 0 0 S 0 0.0 0:00.06 octeon-ethernet
741 20 0 0 0 0 S 0 0.0 0:00.00 scsi_eh_0
743 20 0 0 0 0 S 0 0.0 0:00.00 scsi_eh_1
750 20 0 0 0 0 S 0 0.0 0:01.23 mtdblockd
773 20 0 0 0 0 S 0 0.0 0:00.00 usbhid_resumer
812 20 0 0 0 0 S 0 0.0 0:34.67 kjournald
865 16 -4 2008 392 388 S 0 0.0 0:01.34 udevd
1713 20 0 0 0 0 S 0 0.0 0:01.68 kjournald
1714 20 0 0 0 0 S 0 0.0 0:00.00 kjournald
1800 20 0 0 0 0 S 0 0.0 0:32.48 flush-8:0
1859 20 0 2008 640 572 S 0 0.1 0:00.62 syslogd
1862 20 0 1892 332 328 S 0 0.0 0:00.02 klogd
1870 rpc 20 0 2084 492 488 S 0 0.0 0:00.01 portmap
1888 20 0 2116 652 648 S 0 0.1 0:00.06 rpc.statd
1957 20 0 6852 492 412 S 0 0.0 0:00.01 sshd
2005 20 0 6788 404 400 S 0 0.0 0:00.00 sshd
2014 20 0 3280 616 612 S 0 0.1 0:00.01 xinetd
2033 20 0 0 0 0 S 0 0.0 0:00.00 lockd
2034 20 0 0 0 0 S 0 0.0 1:10.63 nfsd
2035 20 0 0 0 0 S 0 0.0 1:11.76 nfsd
2036 20 0 0 0 0 S 0 0.0 1:14.73 nfsd
2037 20 0 0 0 0 S 0 0.0 1:16.72 nfsd
2038 20 0 0 0 0 S 0 0.0 1:18.89 nfsd
2039 20 0 0 0 0 S 0 0.0 1:08.08 nfsd
2040 20 0 0 0 0 S 0 0.0 1:08.19 nfsd
2041 20 0 0 0 0 S 0 0.0 1:07.54 nfsd
2044 20 0 2360 676 576 S 0 0.1 0:02.35 rpc.mountd
2107 0 -20 62228 4596 1936 S 0 0.5 21:25.51 masterd_core
2117 0 -20 26132 1372 1004 S 0 0.1 0:35.20 masterd_manager
2124 15 -5 38576 5244 1268 S 0 0.5 279:50.74 sysd
2126 0 -20 30324 4828 1072 S 0 0.5 28:22.05 masterd_manager
2130 20 0 90584 7148 1896 S 0 0.7 3:07.52 dagger
2131 30 10 38772 3600 1672 S 0 0.4 25:30.28 python
2132 20 0 76744 3128 1700 S 0 0.3 2:47.90 cryptod
2133 20 0 164m 1932 1332 S 0 0.2 4:57.28 sysdagent
2147 20 0 7212 872 868 S 0 0.1 0:00.08 tscat
2150 20 0 69768 1092 948 S 0 0.1 2:30.10 brdagent
2151 20 0 30080 1156 976 S 0 0.1 1:24.66 ehmon
2152 20 0 45560 1156 1012 S 0 0.1 0:03.49 chasd
2317 20 0 0 0 0 S 0 0.0 0:07.30 kjournald
2349 20 0 2896 624 572 S 0 0.1 0:00.21 crond
2357 20 0 520m 211m 4284 S 0 21.7 82:23.88 mgmtsrvr
2378 20 0 386m 70m 10m S 0 7.3 24:08.62 devsrvr
2386 20 0 90052 2648 1500 S 0 0.3 1:13.27 ikemgr
2388 20 0 97216 2524 1596 S 0 0.3 33:34.36 rasmgr
2389 20 0 97020 1588 1148 S 0 0.2 23:40.02 keymgr
2390 20 0 199m 1912 1224 S 0 0.2 8:14.65 varrcvr
2391 17 -3 54472 1812 1260 S 0 0.2 0:41.29 ha_agent
2392 20 0 84852 1420 1120 S 0 0.1 0:01.00 sslmgr
2393 20 0 55288 1628 1144 S 0 0.2 0:01.62 dhcpd
2395 20 0 73356 1760 1180 S 0 0.2 0:02.43 dnsproxyd
2396 20 0 72588 1512 1084 S 0 0.2 0:02.48 pppoed
2397 20 0 133m 2944 1780 S 0 0.3 1:29.95 routed
2398 20 0 131m 4900 3360 S 0 0.5 76:31.29 authd
2741 20 0 21324 1200 832 S 0 0.1 0:00.19 sshd
2801 darren 20 0 21460 1020 656 S 0 0.1 0:00.09 sshd
2802 darren 20 0 95072 11m 2464 S 0 1.2 0:04.66 cli
2978 nobody 20 0 206m 37m 4340 S 0 3.8 140:35.43 appweb3
2990 20 0 25520 1720 1408 S 0 0.2 0:30.59 snmpd
3889 nobody 20 0 191m 19m 3232 S 0 2.0 517:48.40 appweb3
3902 nobody 20 0 113m 1912 1396 S 0 0.2 0:32.93 appweb3
8993 20 0 1824 448 444 S 0 0.0 0:00.01 agetty
11476 20 0 3744 3624 2756 S 0 0.4 0:00.00 ntpd
12756 20 0 2960 500 412 S 0 0.1 0:00.00 crond
12758 20 0 3720 1116 988 S 0 0.1 0:00.01 genindex_batch.
12765 20 0 31688 4908 2884 S 0 0.5 0:00.41 masterd_batch
13880 darren 20 0 2976 668 564 S 0 0.1 0:00.04 less
13883 20 0 3832 1192 1056 S 0 0.1 0:00.05 sh
13887 20 0 1940 540 464 S 0 0.1 0:00.00 sed
02-25-2013 03:14 PM
Yes, this issue is present in 4.1.11 as well. Please open a case with support and we can provide next steps.
03-07-2013 08:58 AM
On 5.0.1, after logging in to the GUI, the CPU is usually around 60%, but after a couple of minutes it drops to under 10%. On 5.0.2 it stays at 100%.
09-13-2013 11:16 PM
Hi Sandeep,
Can you please let me know how you identified that User-ID was consuming the most CPU from the output of > show system resources?
I am facing this issue again and again at many client sites. Please provide some information on how to identify which process is consuming the most CPU.
Thank you,
Gururaj
09-14-2013 12:39 AM
Hello Gururaj,
You can run > show system resources follow for continuously updated (run-time) output. Also see the articles below for more information:
Re: please explain "show system resources" output
How to Interpret: show system resources
2476 20 0 183m 72m 64m S 187 7.4 4340:44 useridd >>>>>>>>>>>>>>>>> identify the process
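For catching intermittent spikes, the continuously updating form is useful; it refreshes like top until you quit (it wraps top, so q should return you to the CLI prompt):

> show system resources follow

In either form, the columns to watch are %CPU and TIME+. In the line above, 187 %CPU means useridd is using nearly two full cores (the dumps in this thread show a two-core management plane: ksoftirqd/0 and ksoftirqd/1), and 4340:44 of accumulated CPU time shows the load is sustained, not momentary.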
Thanks
Subhankar
09-16-2013 05:12 PM
BobW wrote:
There is an issue with pan_logdb_index in 5.0.1 which can cause some intermittent high CPU usage.
Bob
An issue which continues through to 5.0.6 at least - I'm told it "might" be fixed in 5.0.8, but I'm not holding my breath.
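If you want to check whether the indexer is behind a spike at a given moment, filtering for the indexing processes visible in the dumps above should work (process names taken from those dumps; | match is standard PAN-OS CLI):

> show system resources | match pan_logdb_index
> show system resources | match genindex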
10-03-2013 03:19 AM
I've noticed high CPU too. We've been experimenting with ditching the software User-ID agent on the DCs in favor of the hardware (on-device) User-ID agent. Reading Architecting User Identification Deployments gave me better insight into the different User-ID deployment options. Now I'm considering keeping the software User-ID agents on the DCs, since that suits our environment better.
For example, the hardware User-ID agent can put a lot of CPU demand on both the management server and the DCs, and the traffic between firewall and DC is completely different depending on whether you use the software or the hardware agent.
Careful planning may improve the situation, and if you get management CPU down, the GUI user experience also improves greatly.
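If you are comparing the two modes, these operational commands show the state of the software agents and of on-device server monitoring respectively (both should be available on PAN-OS 5.x, but verify on your release, as output varies):

> show user user-id-agent state all
> show user server-monitor state all

Watching useridd in > show system resources before and after switching modes gives a direct read on the management-plane cost of each approach.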
09-24-2014 12:06 AM
Regarding my situation:
I upgraded the RAM from 1 GB to 2 GB.
I cannot believe my eyes: the GUI is now fast and a commit takes less than 2 minutes.
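This fits the top headers earlier in the thread, which show the box 160-320 MB into swap; heavy swapping on a 1 GB management plane slows everything down. A quick way to check swap pressure on your own box (the sample line is from the earlier dump):

> show system resources | match Swap
Swap: 2008084k total, 166616k used, 1841468k free, 210132k cached

If the used figure stays large, the management plane is memory-starved and more of its working set is being paged.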
09-24-2014 12:25 AM
You can't upgrade the RAM on a 2000 series. I only wish you could!