02-04-2013 12:37 PM
Dears,
My PA-2020 has two User-ID agents working and identifying my AD users, but the management plane is running at 100% all day long.
Any suggestions?
Please find the show system resources output below.
PA-2020 running PAN-OS 5.0.2
top - 18:26:05 up 6 days, 1:33, 1 user, load average: 10.26, 11.02, 12.17 <<<<<<<<<<<<<<<< !!!!!
Tasks: 100 total, 2 running, 98 sleeping, 0 stopped, 0 zombie
Cpu(s): 51.9%us, 46.0%sy, 2.1%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 995872k total, 901792k used, 94080k free, 5996k buffers
Swap: 2212876k total, 647316k used, 1565560k free, 179620k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2373 root 20 0 209m 72m 63m S 140 7.5 10861:51 useridd<<<<<<<<<<<<<<<<< 140% CPU !!!!
21021 nobody 20 0 429m 51m 4808 S 37 5.3 329:34.26 appweb3
2042 root 30 10 4468 964 792 R 4 0.1 0:00.12 top
2371 root 20 0 651m 210m 4076 S 4 21.6 118:50.34 mgmtsrvr
1720 admin 20 0 4532 1164 912 R 1 0.1 0:02.64 top
2405 root 20 0 355m 89m 2192 S 1 9.2 48:59.31 logrcvr
2142 root 15 -5 39636 2920 1240 S 1 0.3 106:28.41 sysd
2151 root 30 10 40568 3644 1692 S 0 0.4 21:50.38 python
2408 root 20 0 247m 2480 1628 S 0 0.2 5:39.85 varrcvr
2415 root 20 0 141m 2640 1760 S 0 0.3 1:17.82 routed
1 root 20 0 1836 560 536 S 0 0.1 0:02.30 init
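For anyone who wants to gather the same data, the snapshot above is the standard top view the firewall returns from its CLI. A minimal sequence, assuming a 5.0.x box (command names are from memory, so verify them against your version's CLI reference):
> show system info          <- confirms the running PAN-OS version
> show system resources     <- one-shot top snapshot of the management plane (the output above)
> less mp-log useridd.log   <- User-ID daemon log, to see what useridd is actually busy doing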
02-06-2013 10:12 AM
We've had a great number of problems (not just management CPU) with 5.0.2 on our PA-500s. It seems to me that 5.0.2 has a number of issues, but that they manifest most dramatically (based on the other posts in this thread) on the PA-500 appliances; CPU and memory capacity would be my guess.
We're rolling back to 4.1.10 tonight on the PA-500s and to 5.0.1 on the VM-100 appliance.
02-06-2013 11:50 AM
Same problem with 5.0.2 and 3020.
Case opened.
02-06-2013 03:15 PM
Actually 5.0.0 seemed fine as far as the CPU issue goes. Unfortunately the DHCP server was borked. It would simply stop granting leases.
5.0.2 is perfect w.r.t. DHCP ;-)
02-08-2013 08:10 AM
Same issue here: PA-2050 running 5.0.2.
PID   USER  PR  NI  VIRT   RES   SHR  S  %CPU  %MEM     TIME+  COMMAND
2331        20   0  244m   75m   64m  S   124   7.8   7094:50  useridd
2329        20   0  624m  258m  5668  S    13  26.6 406:35.06  mgmtsrvr
3672        30  10  4468   992   792  R     8   0.1   0:00.18  top
3647        30  10 46936  1980  1672  R     7   0.2   0:00.18  pan_logdb_index
3686        20   0  4468  1064   800  R     7   0.1   0:00.10  top
3696        20   0 12440  1996  1720  S     7   0.2   0:00.05  wmic
02-08-2013 08:55 AM
Dears,
Yesterday we moved back to 5.0.1 and everything is OK now, except for the real-time QoS graph, which is still not working; that is a known bug in this version.
PA confirmed PAN-OS 5.0.2 has a bug in the useridd process and there is no estimated time for a fix, so I downgraded, and I suggest everyone do the same.
Thanks in advance!!
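For anyone following the same route, the rollback is just the normal software install flow; a rough sketch from memory (verify the exact syntax, export a config backup first, and expect a reboot):
> request system software check                    <- refresh the list of available images
> request system software download version 5.0.1   <- pull the 5.0.1 image down
> request system software install version 5.0.1    <- install it
> request restart system                           <- reboot onto 5.0.1
After the reboot, show system resources should show useridd back down to a few percent if the downgrade took care of it.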
02-08-2013 07:26 PM
essilorbr: Thanks for the followup!
*mumbles something about PaloAlto and the quality assurance team*
02-11-2013 10:56 AM
We have the same problem here on a PA-2050, with only User-ID agents on 8 domain controllers. Downgraded to 5.0.1 - problem solved.
02-13-2013 07:19 PM
This may be irrelevant, but after the 12/02/13 content update the management CPU is no longer at 100%. It is still around 30%, but usable.
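For reference, content updates can also be checked and installed from the CLI; roughly this (again, treat the exact command names as from memory and confirm on your box):
> request content upgrade check                    <- list the content versions available for download
> request content upgrade download latest          <- download the newest content package
> request content upgrade install version latest   <- install it on the management plane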
02-19-2013 08:58 PM
This issue seems to be present in 4.1.11 as well, which is disappointing, as we rolled back from 5.0.2 to 4.1.10 to get away from this.
02-19-2013 10:26 PM
Can confirm the same problem with 4.1.11 on a PA-2050. Case open with support....
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22576 root 20 0 272m 49m 33m S 116 5.1 1007:22 useridd
02-20-2013 08:53 AM
Same observation here...
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1781 20 0 166m 66m 63m S 97 6.6 855:45.60 useridd
...How does this kind of stuff get by QA???
02-20-2013 08:53 AM
PA: How does this stuff get past the QA process?
02-20-2013 02:39 PM
In the image below, the circle on the left shows the management plane CPU with PANOS 5.0.1, the circle on the right shows the management plane with PANOS 5.0.2. So for us at least it actually got worse with the upgrade.
02-20-2013 02:42 PM
Yes, 5.0.2 and 4.1.11 both seem to introduce this issue. Roll back to 5.0.1 or 4.1.10.
02-20-2013 02:45 PM
See my image for what 5.0.1 looks like. The issue persisted in 5.0.1. The issue got worse with the upgrade.