5.0.5 Management CPU?


L4 Transporter

So, I finally got to upgrade my 2020s to 5.0.5 (I hadn't been able to until now owing to operational requirements) after hearing from our PA rep's tech guys that it's mostly stable now and he hasn't heard any bad reports on it.

However, it seems the high management CPU bug/issue is back - my Management CPU is now running pretty constantly at around 60% on the active device.

Has anyone else seen this, or is there something in my config I should look for which might be causing it?

The killer seems to be one or two services:

top - 08:50:00 up 48 min,  1 user,  load average: 1.83, 1.90, 1.78

Tasks: 104 total,   2 running, 102 sleeping,   0 stopped,   0 zombie

Cpu(s): 20.7%us,  6.8%sy, 26.0%ni, 39.8%id,  6.0%wa,  0.1%hi,  0.5%si,  0.0%st

Mem:    995872k total,   969460k used,    26412k free,    27628k buffers

Swap:  2212876k total,     3744k used,  2209132k free,   370064k cached

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

13057           30  10 60224  17m 5624 R   95  1.8   0:02.63 pan_logdb_index

2379            20   0  391m 239m 8120 S    0 24.7   3:35.96 mgmtsrvr

although this doesn't equate to the 60% CPU being reported by the GUI.
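For what it's worth, the two figures can be reconciled from the `Cpu(s):` header line itself: the GUI's "management CPU" roughly tracks 100% minus top's idle figure, because the niced (%ni) indexing work and iowait (%wa) count as busy time even though %us looks low. A minimal sketch, assuming a POSIX shell with awk and using the poster's own sample line:

```shell
# Sum the non-idle share of a top(1) "Cpu(s):" line. With the values
# above, 100 - 39.8%id = 60.2%, which matches the ~60% the GUI reports.
cpu_line='Cpu(s): 20.7%us,  6.8%sy, 26.0%ni, 39.8%id,  6.0%wa,  0.1%hi,  0.5%si,  0.0%st'
echo "$cpu_line" | awk -F'[:,]' '{
  for (i = 2; i <= NF; i++)
    if ($i ~ /%id/) idle = $i + 0   # pick out the idle field numerically
  printf "non-idle: %.1f%%\n", 100 - idle
}'
# prints: non-idle: 60.2%
```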

Could it be just some post-update process which is still running that I should be a bit patient about?

Thanks.

1 accepted solution

Accepted Solutions

L4 Transporter

Just an update for those who are interested.

Although I don't have a response on my case yet, this appears to have been related to the upgrade from 4.1.11-h1 to 5.0.5 - the re-index process that was causing the issue ran insanely high for a couple of days, but has now dropped back to what I would consider "normal", i.e. it's barely noticeable.

View solution in original post

9 REPLIES

L3 Networker

Hello,

You can check the historical CPU utilization with the command below:

>show running resource-monitor

Do you see any cores (core dump files)?

>show system files

If you see cores, or find any other issues with the unit, please open a case with support. Otherwise, CPU utilization of 60% is not considered a concern; it depends on what services are configured on the unit.

Thank you.

No, I see no cores when I run "show system files".

For my own peace of mind, can you define which services may cause the management CPU to run this high? I'm not sure if what I have running would be considered "excessive".

I run the following:

GlobalProtect portal with as many as 30 SSL VPN users

5-10 IPsec VPN tunnels

Web filtering using BrightCloud

Roughly 30 security policies, with a default virus scan applied to traffic

Would this be enough to push the CPU to 60%?

Thanks

L5 Sessionator

Can you look at the output of the following command:

> show system resources follow

press c (shows the full command line of the process driving the CPU high)

press 1 (shows per-core CPU details)

press Shift+H (shows thread info)
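If you have shell access to the management plane (or want to reproduce the same views on any Linux box), a non-interactive sketch of those keystrokes, assuming standard procps `ps`:

```shell
# One-shot equivalents of the interactive top views above.
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 10    # busiest processes first
ps -eLo pid,tid,pcpu,comm --sort=-pcpu | head -n 10    # per-thread view (like Shift+H)
```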

The most obvious one is as follows

7474 root  30  10  3896 1292 1092 R   67  0.1   0:10.42 /bin/bash /usr/local/bin/genindex.sh 2:13:30:01

The percentage fluctuates between about 35 and 70, but tends to stay above the 50-60 mark most of the time.

Thanks

This script does the indexing of the log databases. Please open a case with support so that we can look into why it is causing the CPU to spike.

Thanks, I have done so.

Cheers.

L7 Applicator

You can analyze the output of the command below to check for high data-plane CPU:

PA-firewall> show running resource-monitor      >>>>>> check whether any particular group is consuming higher CPU.

CPU load sampling by group:

flow_lookup                    :     0%

flow_fastpath                  :     0%

flow_slowpath                  :     0%

flow_forwarding                :     0%

flow_mgmt                      :     1%

flow_ctrl                      :     1%

nac_result                     :     0%

flow_np                        :     0%

dfa_result                     :     0%

module_internal                :     0%

aho_result                     :     0%

zip_result                     :     0%

pktlog_forwarding              :     0%

lwm                            :     0%

flow_host                      :     1%
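If you save that per-group sample to a file, a quick awk pass can flag the hottest group. This is a hypothetical post-processing sketch (not a PAN-OS feature), assuming a POSIX shell with awk; the group names and values are from the sample above:

```shell
# Find the dataplane group with the highest load in a saved sample.
sample='flow_lookup 0
flow_mgmt 1
flow_ctrl 1
flow_host 1
dfa_result 0'
echo "$sample" | awk '$2 > max { max = $2; name = $1 } END { print name ": " max "%" }'
# prints: flow_mgmt: 1%
```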

You may also check the CPU utilization pattern to get an idea (high CPU at a specific time of day, a specific traffic flow, etc.).

CPU load (%) during last 60 seconds:

Resource utilization (%) during last 60 seconds:

Resource monitoring sampling data (per minute):

Resource utilization (%) during last 60 minutes:

Resource monitoring sampling data (per hour):

Resource utilization (%) during last 24 hours:

Resource monitoring sampling data (per day):

Resource utilization (%) during last 7 days:

Resource monitoring sampling data (per week):

Resource utilization (%) during last 13 weeks:

  1. packet buffer:
  2. packet descriptor:
  3. packet descriptor (on-chip):

Thanks

Subhankar

sgantait wrote:

You can analyze below mentioned command output to verify the data-plane high CPU.

The data-plane is not the issue; it rarely gets above 10%, even when the device is at its busiest.

The issue is the management plane.

Cheers.

