06-08-2017 01:00 AM - edited 06-08-2017 01:01 AM
While the secondary firewall was active, management plane utilization stayed below 10% for months.
Last week we performed a manual failover, so the primary is active now, and MP utilization has been above 60% the whole time.
The configuration is identical on both members of the HA pair. Both firewalls have 10+ unused FQDN objects, and FQDN refreshes are happening every 30 seconds even though the objects are unused.
Please suggest what might be causing the primary's MP utilization to stay above 50%; it is consistent.
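If it helps, I can also pull the periodic top snapshots that the management plane keeps on disk (assuming they are retained on this model), for example:
> less mp-log mp-monitor.log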
show system resources - from the primary (active) firewall; output taken while MP CPU was at 56%.
top - 11:16:26 up 250 days, 17:59, 1 user, load average: 1.16, 1.01, 0.86
Tasks: 119 total, 2 running, 116 sleeping, 0 stopped, 1 zombie
Cpu(s): 33.0%us, 1.0%sy, 0.1%ni, 65.7%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 3850716k total, 3695272k used, 155444k free, 150692k buffers
Swap: 2008084k total, 11044k used, 1997040k free, 1874152k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6239 nobody 20 0 114m 18m 7424 S 2.0 0.5 19:51.11 appweb3
9050 20 0 8940 1584 1408 S 2.0 0.0 0:00.01 wmic
1 20 0 2084 628 600 S 0.0 0.0 2:22.11 init
2 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 RT 0 0 0 0 S 0.0 0.0 0:44.36 migration/0
4 20 0 0 0 0 S 0.0 0.0 2:11.88 ksoftirqd/0
5 RT 0 0 0 0 S 0.0 0.0 0:44.38 migration/1
6 20 0 0 0 0 S 0.0 0.0 2:03.20 ksoftirqd/1
7 20 0 0 0 0 S 0.0 0.0 50:24.46 events/0
8 20 0 0 0 0 S 0.0 0.0 51:11.03 events/1
9 20 0 0 0 0 S 0.0 0.0 0:00.01 khelper
14 20 0 0 0 0 S 0.0 0.0 0:00.00 async/mgr
164 20 0 0 0 0 S 0.0 0.0 0:17.16 sync_supers
166 20 0 0 0 0 S 0.0 0.0 0:18.36 bdi-default
167 20 0 0 0 0 S 0.0 0.0 0:04.86 kblockd/0
168 20 0 0 0 0 S 0.0 0.0 0:04.22 kblockd/1
170 20 0 0 0 0 S 0.0 0.0 0:00.00 kacpid
171 20 0 0 0 0 S 0.0 0.0 0:00.00 kacpi_notify
172 20 0 0 0 0 S 0.0 0.0 0:00.00 kacpi_hotplug
319 20 0 0 0 0 S 0.0 0.0 0:00.00 ata/0
320 20 0 0 0 0 S 0.0 0.0 0:00.00 ata/1
321 20 0 0 0 0 S 0.0 0.0 0:00.00 ata_aux
322 20 0 0 0 0 S 0.0 0.0 0:00.00 ksuspend_usbd
327 20 0 0 0 0 S 0.0 0.0 0:00.01 khubd
330 20 0 0 0 0 S 0.0 0.0 0:00.01 kseriod
368 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/0
369 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/1
395 20 0 0 0 0 S 0.0 0.0 3:20.52 kswapd0
396 20 0 0 0 0 S 0.0 0.0 0:00.00 aio/0
397 20 0 0 0 0 S 0.0 0.0 0:00.00 aio/1
398 1 -19 0 0 0 S 0.0 0.0 0:00.00 nfsiod
547 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_0
549 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_1
551 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_2
553 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_3
555 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_4
557 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_5
608 20 0 0 0 0 S 0.0 0.0 0:00.00 kpsmoused
618 20 0 0 0 0 S 0.0 0.0 0:00.00 edac-poller
633 20 0 0 0 0 S 0.0 0.0 4:05.82 kjournald
686 16 -4 2220 448 380 S 0.0 0.0 0:00.50 udevd
1307 20 0 0 0 0 S 0.0 0.0 0:00.00 kstriped
1342 20 0 0 0 0 S 0.0 0.0 1:08.68 kjournald
1343 20 0 0 0 0 S 0.0 0.0 0:00.01 kjournald
1407 20 0 0 0 0 S 0.0 0.0 8:54.85 flush-8:0
1582 20 0 2136 724 612 S 0.0 0.0 0:24.79 syslogd
1585 20 0 2084 440 388 S 0.0 0.0 0:00.02 klogd
1594 20 0 2124 340 240 S 0.0 0.0 31:17.93 irqbalance
1602 rpc 20 0 2224 488 460 S 0.0 0.0 0:00.01 portmap
1620 20 0 2248 636 632 S 0.0 0.0 0:00.01 rpc.statd
1685 20 0 2984 596 592 S 0.0 0.0 0:00.01 xinetd
1710 20 0 0 0 0 S 0.0 0.0 0:00.00 lockd
1711 1 -19 0 0 0 S 0.0 0.0 12:19.07 nfsd
1712 1 -19 0 0 0 S 0.0 0.0 12:27.25 nfsd
1713 1 -19 0 0 0 S 0.0 0.0 12:12.59 nfsd
1714 1 -19 0 0 0 S 0.0 0.0 12:06.74 nfsd
1715 1 -19 0 0 0 S 0.0 0.0 12:06.13 nfsd
1716 1 -19 0 0 0 S 0.0 0.0 12:12.76 nfsd
1717 1 -19 0 0 0 S 0.0 0.0 12:15.42 nfsd
1718 1 -19 0 0 0 S 0.0 0.0 12:05.01 nfsd
1721 20 0 3580 1988 592 S 0.0 0.1 0:14.82 rpc.mountd
1775 0 -20 50304 11m 3476 S 0.0 0.3 419:30.03 masterd_apps
1782 20 0 2216 500 496 S 0.0 0.0 0:00.01 agetty
1793 15 -5 21944 4256 2380 S 0.0 0.1 1030:16 sysd
1834 20 0 67412 12m 3984 S 0.0 0.3 38:18.14 dagger
1835 30 10 16500 6252 3148 S 0.0 0.2 210:23.33 python
1842 20 0 118m 5236 3240 S 0.0 0.1 76:30.82 sysdagent
1895 20 0 12580 2776 2192 S 0.0 0.1 38:24.00 ehmon
1896 20 0 75504 4152 2840 S 0.0 0.1 44:58.71 chasd
1905 20 0 4184 2332 1328 S 0.0 0.1 0:00.03 tscat
1980 20 0 77532 6060 3700 S 0.0 0.2 48:41.36 cryptod
2088 20 0 6028 1572 1216 S 0.0 0.0 0:00.01 sshd
2112 20 0 6028 1644 1280 S 0.0 0.0 0:00.97 sshd
2118 20 0 0 0 0 S 0.0 0.0 0:42.11 kjournald
2182 20 0 232m 34m 3932 S 0.0 0.9 88:21.02 cord
2183 20 0 257m 146m 12m S 0.0 3.9 139:20.75 devsrvr
2186 20 0 313m 161m 115m S 0.0 4.3 510:28.47 useridd
2286 20 0 40120 4080 2644 S 0.0 0.1 0:00.03 nginx
2514 nobody 20 0 55928 5504 3048 S 0.0 0.1 45:29.49 nginx
3911 20 0 1295m 71m 20m R 0.0 1.9 795:33.94 mongod
3931 20 0 69360 6212 3884 S 0.0 0.2 637:32.50 ikemgr
3933 20 0 92064 8404 4356 S 0.0 0.2 49:54.16 rasmgr
3934 20 0 88040 4496 3128 S 0.0 0.1 50:23.70 keymgr
3935 20 0 397m 30m 5420 S 0.0 0.8 81:03.46 varrcvr
3937 20 0 71736 4984 3132 S 0.0 0.1 108:07.95 l2ctrld
3939 17 -3 49360 5396 3336 S 0.0 0.1 81:44.49 ha_agent
3940 20 0 105m 13m 3984 S 0.0 0.4 59:38.30 satd
3941 20 0 166m 29m 3812 S 0.0 0.8 53:37.05 sslmgr
3942 20 0 67708 5108 3208 S 0.0 0.1 56:55.65 pan_dhcpd
3943 20 0 65572 6980 3388 S 0.0 0.2 50:19.58 dnsproxyd
3945 20 0 64488 5788 3236 S 0.0 0.2 47:51.57 pppoed
3946 17 -3 144m 14m 4540 S 0.0 0.4 64:59.50 routed
3947 20 0 2182m 24m 5508 S 0.0 0.7 87:14.51 authd
3994 20 0 43956 11m 3372 S 0.0 0.3 227:05.60 snmpd
4016 20 0 57052 6932 3380 S 0.0 0.2 35:30.12 nginx
5309 20 0 16948 1208 340 S 0.0 0.0 0:00.00 syslog-ng
5310 20 0 17384 2708 1392 S 0.0 0.1 0:11.58 syslog-ng
6233 nobody 20 0 138m 35m 8588 S 0.0 0.9 18:18.32 appweb3
6638 20 0 13744 4872 2924 S 0.0 0.1 376:08.64 packet_path_pin
8970 20 0 8940 1580 1408 S 0.0 0.0 0:00.01 wmic
8973 20 0 8860 2680 2188 S 0.0 0.1 0:00.06 sshd
8990 20 0 8940 1844 1652 S 0.0 0.0 0:00.01 wmic
9001 20 0 8940 1584 1408 S 0.0 0.0 0:00.01 wmic
9005 ssokhair 20 0 8860 1500 1004 S 0.0 0.0 0:00.01 sshd
9006 ssokhair 20 0 59600 21m 10m S 0.0 0.6 0:00.32 cli
9039 20 0 8940 1688 1508 S 0.0 0.0 0:00.01 wmic
9040 20 0 9076 2184 1784 S 0.0 0.1 0:00.01 wmic
9045 ssokhair 20 0 5676 964 772 S 0.0 0.0 0:00.01 less
9047 20 0 13540 2400 1588 S 0.0 0.1 0:00.02 sh
9048 20 0 13176 2300 1324 R 0.0 0.1 0:00.01 top
9049 20 0 12900 1836 1108 S 0.0 0.0 0:00.01 sed
13500 20 0 39996 4064 2768 S 0.0 0.1 0:00.03 nginx
13525 nobody 20 0 56952 5628 2156 S 0.0 0.1 13:03.46 nginx
13526 nobody 20 0 56952 5628 2156 S 0.0 0.1 10:13.41 nginx
14534 20 0 890m 291m 8076 S 0.0 7.8 653:22.91 mgmtsrvr
15441 20 0 1478m 469m 11m S 0.0 12.5 35791:06 logrcvr
19604 20 0 3088 1124 584 S 0.0 0.0 0:00.42 crond
26021 20 0 7684 1968 1476 S 0.0 0.1 16:05.86 ntpd
32294 20 0 0 0 0 Z 0.0 0.0 0:00.03 mgmtsrvr <defunct>
show system resources (run again about 40 seconds later)
top - 11:17:07 up 250 days, 18:00, 1 user, load average: 0.98, 0.99, 0.86
Tasks: 120 total, 3 running, 116 sleeping, 0 stopped, 1 zombie
Cpu(s): 33.0%us, 1.0%sy, 0.1%ni, 65.7%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 3850716k total, 3698072k used, 152644k free, 150808k buffers
Swap: 2008084k total, 11044k used, 1997040k free, 1875248k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1895 20 0 12580 2776 2192 S 2.0 0.1 38:24.01 ehmon
2186 20 0 313m 161m 115m S 2.0 4.3 510:28.68 useridd
1 20 0 2084 628 600 S 0.0 0.0 2:22.11 init
2 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 RT 0 0 0 0 S 0.0 0.0 0:44.36 migration/0
4 20 0 0 0 0 S 0.0 0.0 2:11.88 ksoftirqd/0
5 RT 0 0 0 0 S 0.0 0.0 0:44.38 migration/1
6 20 0 0 0 0 S 0.0 0.0 2:03.20 ksoftirqd/1
7 20 0 0 0 0 S 0.0 0.0 50:24.47 events/0
8 20 0 0 0 0 S 0.0 0.0 51:11.04 events/1
9 20 0 0 0 0 S 0.0 0.0 0:00.01 khelper
14 20 0 0 0 0 S 0.0 0.0 0:00.00 async/mgr
164 20 0 0 0 0 S 0.0 0.0 0:17.16 sync_supers
166 20 0 0 0 0 S 0.0 0.0 0:18.36 bdi-default
167 20 0 0 0 0 S 0.0 0.0 0:04.86 kblockd/0
168 20 0 0 0 0 S 0.0 0.0 0:04.22 kblockd/1
170 20 0 0 0 0 S 0.0 0.0 0:00.00 kacpid
171 20 0 0 0 0 S 0.0 0.0 0:00.00 kacpi_notify
172 20 0 0 0 0 S 0.0 0.0 0:00.00 kacpi_hotplug
319 20 0 0 0 0 S 0.0 0.0 0:00.00 ata/0
320 20 0 0 0 0 S 0.0 0.0 0:00.00 ata/1
321 20 0 0 0 0 S 0.0 0.0 0:00.00 ata_aux
322 20 0 0 0 0 S 0.0 0.0 0:00.00 ksuspend_usbd
327 20 0 0 0 0 S 0.0 0.0 0:00.01 khubd
330 20 0 0 0 0 S 0.0 0.0 0:00.01 kseriod
368 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/0
369 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/1
395 20 0 0 0 0 S 0.0 0.0 3:20.52 kswapd0
396 20 0 0 0 0 S 0.0 0.0 0:00.00 aio/0
397 20 0 0 0 0 S 0.0 0.0 0:00.00 aio/1
398 1 -19 0 0 0 S 0.0 0.0 0:00.00 nfsiod
547 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_0
549 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_1
551 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_2
553 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_3
555 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_4
557 20 0 0 0 0 S 0.0 0.0 0:00.00 scsi_eh_5
608 20 0 0 0 0 S 0.0 0.0 0:00.00 kpsmoused
618 20 0 0 0 0 S 0.0 0.0 0:00.00 edac-poller
633 20 0 0 0 0 S 0.0 0.0 4:05.82 kjournald
686 16 -4 2220 448 380 S 0.0 0.0 0:00.50 udevd
1307 20 0 0 0 0 S 0.0 0.0 0:00.00 kstriped
1342 20 0 0 0 0 S 0.0 0.0 1:08.68 kjournald
1343 20 0 0 0 0 S 0.0 0.0 0:00.01 kjournald
1407 20 0 0 0 0 S 0.0 0.0 8:54.85 flush-8:0
1582 20 0 2136 724 612 S 0.0 0.0 0:24.79 syslogd
1585 20 0 2084 440 388 S 0.0 0.0 0:00.02 klogd
1594 20 0 2124 340 240 S 0.0 0.0 31:17.93 irqbalance
1602 rpc 20 0 2224 488 460 S 0.0 0.0 0:00.01 portmap
1620 20 0 2248 636 632 S 0.0 0.0 0:00.01 rpc.statd
1685 20 0 2984 596 592 S 0.0 0.0 0:00.01 xinetd
1710 20 0 0 0 0 S 0.0 0.0 0:00.00 lockd
1711 1 -19 0 0 0 S 0.0 0.0 12:19.07 nfsd
1712 1 -19 0 0 0 S 0.0 0.0 12:27.25 nfsd
1713 1 -19 0 0 0 S 0.0 0.0 12:12.59 nfsd
1714 1 -19 0 0 0 S 0.0 0.0 12:06.74 nfsd
1715 1 -19 0 0 0 S 0.0 0.0 12:06.13 nfsd
1716 1 -19 0 0 0 S 0.0 0.0 12:12.76 nfsd
1717 1 -19 0 0 0 S 0.0 0.0 12:15.43 nfsd
1718 1 -19 0 0 0 S 0.0 0.0 12:05.01 nfsd
1721 20 0 3580 1988 592 S 0.0 0.1 0:14.82 rpc.mountd
1775 0 -20 50304 11m 3476 S 0.0 0.3 419:30.09 masterd_apps
1782 20 0 2216 500 496 S 0.0 0.0 0:00.01 agetty
1793 15 -5 21944 4256 2380 S 0.0 0.1 1030:16 sysd
1834 20 0 67412 12m 3984 S 0.0 0.3 38:18.22 dagger
1835 30 10 16500 6252 3148 S 0.0 0.2 210:23.34 python
1842 20 0 118m 5236 3240 S 0.0 0.1 76:30.84 sysdagent
1896 20 0 75504 4152 2840 S 0.0 0.1 44:58.72 chasd
1905 20 0 4184 2332 1328 S 0.0 0.1 0:00.03 tscat
1980 20 0 77532 6060 3700 S 0.0 0.2 48:41.36 cryptod
2088 20 0 6028 1572 1216 S 0.0 0.0 0:00.01 sshd
2112 20 0 6028 1644 1280 S 0.0 0.0 0:00.97 sshd
2118 20 0 0 0 0 S 0.0 0.0 0:42.12 kjournald
2182 20 0 232m 34m 3932 S 0.0 0.9 88:21.03 cord
2183 20 0 257m 146m 12m S 0.0 3.9 139:20.76 devsrvr
2286 20 0 40120 4080 2644 S 0.0 0.1 0:00.03 nginx
2514 nobody 20 0 55928 5504 3048 S 0.0 0.1 45:29.55 nginx
3911 20 0 1295m 71m 20m R 0.0 1.9 795:34.05 mongod
3931 20 0 69360 6212 3884 R 0.0 0.2 637:32.59 ikemgr
3933 20 0 92064 8404 4356 S 0.0 0.2 49:54.17 rasmgr
3934 20 0 88040 4496 3128 S 0.0 0.1 50:23.71 keymgr
3935 20 0 397m 30m 5420 S 0.0 0.8 81:03.60 varrcvr
3937 20 0 71736 4984 3132 S 0.0 0.1 108:07.96 l2ctrld
3939 17 -3 49360 5396 3336 S 0.0 0.1 81:44.50 ha_agent
3940 20 0 105m 13m 3984 S 0.0 0.4 59:38.30 satd
3941 20 0 166m 29m 3812 S 0.0 0.8 53:37.05 sslmgr
3942 20 0 67708 5108 3208 S 0.0 0.1 56:55.65 pan_dhcpd
3943 20 0 65572 6980 3388 S 0.0 0.2 50:19.59 dnsproxyd
3945 20 0 64488 5788 3236 S 0.0 0.2 47:51.57 pppoed
3946 17 -3 144m 14m 4540 S 0.0 0.4 64:59.51 routed
3947 20 0 2182m 24m 5508 S 0.0 0.7 87:14.52 authd
3994 20 0 43956 11m 3372 S 0.0 0.3 227:05.64 snmpd
4016 20 0 57052 6932 3380 S 0.0 0.2 35:30.12 nginx
5309 20 0 16948 1208 340 S 0.0 0.0 0:00.00 syslog-ng
5310 20 0 17384 2708 1392 S 0.0 0.1 0:11.58 syslog-ng
6233 nobody 20 0 139m 36m 8588 S 0.0 1.0 18:20.63 appweb3
6239 nobody 20 0 114m 18m 7424 S 0.0 0.5 19:51.38 appweb3
6638 20 0 13744 4872 2924 S 0.0 0.1 376:08.69 packet_path_pin
8973 20 0 8860 2680 2188 S 0.0 0.1 0:00.06 sshd
9005 ssokhair 20 0 8860 1500 1004 S 0.0 0.0 0:00.01 sshd
9006 ssokhair 20 0 59600 21m 10m S 0.0 0.6 0:00.32 cli
9141 20 0 8940 1588 1408 S 0.0 0.0 0:00.01 wmic
9146 20 0 8940 1588 1408 S 0.0 0.0 0:00.01 wmic
9156 20 0 8940 1584 1408 S 0.0 0.0 0:00.01 wmic
9167 20 0 8940 1588 1408 S 0.0 0.0 0:00.01 wmic
9295 20 0 8944 2048 1784 S 0.0 0.1 0:00.01 wmic
9296 20 0 8944 2052 1784 S 0.0 0.1 0:00.01 wmic
9299 20 0 8940 1636 1464 S 0.0 0.0 0:00.01 wmic
9301 ssokhair 20 0 5676 960 772 S 0.0 0.0 0:00.01 less
9303 20 0 13540 2404 1588 S 0.0 0.1 0:00.02 sh
9304 20 0 13176 2304 1324 R 0.0 0.1 0:00.01 top
9305 20 0 12900 1832 1108 S 0.0 0.0 0:00.01 sed
13500 20 0 39996 4064 2768 S 0.0 0.1 0:00.03 nginx
13525 nobody 20 0 56952 5628 2156 S 0.0 0.1 13:03.46 nginx
13526 nobody 20 0 56952 5628 2156 S 0.0 0.1 10:13.41 nginx
14534 20 0 890m 291m 8080 S 0.0 7.8 653:23.42 mgmtsrvr
15441 20 0 1478m 471m 11m S 0.0 12.5 35791:06 logrcvr
19604 20 0 3088 1124 584 S 0.0 0.0 0:00.42 crond
26021 20 0 7684 1968 1476 S 0.0 0.1 16:05.86 ntpd
32294 20 0 0 0 0 Z 0.0 0.0 0:00.03 mgmtsrvr <defunct>
06-08-2017 02:25 AM - edited 06-08-2017 02:29 AM
Hi,
I can see it averages around 33% in both of your outputs. What PAN-OS and hardware version do you have?
Can you run the command below and keep an eye on the Cpu(s) %us figure:
> show system resources follow
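(For reference, from your snapshot header the non-idle CPU works out to 100 - 65.7 id = 34.3%, i.e. 33.0 us + 1.0 sy + 0.1 ni + 0.2 si, which is where the ~33% average comes from.)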
P.S. The default FQDN refresh interval is 30 minutes.
06-08-2017 06:25 AM
Hi Javith,
Seeing as you have a zombie process in that output, it might be worth giving the management server a restart.
32294 20 0 0 0 0 Z 0.0 0.0 0:00.03 mgmtsrvr <defunct>
> debug software restart process management-server
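After the restart, you could also confirm the defunct entry is gone, for example with:
> show system resources | match defunct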
Let us know if this helps,
Ben
06-11-2017 12:08 AM
Hi Ben,
Thanks for the suggestion. I will do it and then update you.
However, I doubt the zombie process is holding 20-30% of the CPU.
If we add up the CPU% used by all the processes in the show system resources output, it comes to no more than 20%.
But the GUI shows the M-plane above 56% consistently. Is that due to the zombie process?
06-11-2017 05:30 AM - edited 06-11-2017 05:31 AM
Management plane utilization is still high even after the restart.
@Javith_Ali wrote: Hi Ben,
Thanks for the suggestion. I will do it and then update you.
However, I doubt the zombie process is holding 20-30% of the CPU.
If we add up the CPU% used by all the processes in the show system resources output, it comes to no more than 20%.
But the GUI shows the M-plane above 56% consistently. Is that due to the zombie process?
06-11-2017 12:05 PM - edited 06-11-2017 03:49 PM
Hi,
So when you run the > show system resources follow command, you are observing an average of about 30% MP utilisation, right?
What PAN-OS and hardware are you using?
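If it's quicker, the model and PAN-OS version should both be listed in the output of:
> show system info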
06-11-2017 04:05 PM - edited 06-11-2017 04:10 PM
It's a PA-3000 series running PAN-OS 7.1.4-h2.
Yes, it's showing 30% MP utilization in the CLI and above 60% in the GUI.
I found bug ID 61146, related to FQDN objects.
I think we either have to remove the FQDN objects / extend the FQDN refresh interval, or upgrade to 7.1.5.
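If I remember right, the refresh interval can be extended from configure mode with something along these lines (the exact node may differ on 7.1, so treat this as an assumption and verify on the box):
> configure
# set deviceconfig system fqdn-refresh-time 1800
# commit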
06-11-2017 04:19 PM
Still, I can't figure out how the secondary firewall, with the same configuration (including the FQDN objects), ran at around 10% MP utilization while it was active for months.
06-12-2017 02:21 AM - edited 06-12-2017 12:37 PM
Hi,
I always check the known issues in the current release as well as the bugs fixed in the next couple of releases. If you don't think you're hitting any already-discovered bugs, you could try a manual failover back to the passive device (if possible). Also, what happens if you refresh the widget manually?
06-17-2017 11:53 PM
Thanks for the suggestion.
As per TAC, it was due to a bug related to logrcvr that is fixed in 7.1.6. We have upgraded to the latest version and are monitoring the CPU usage.
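After the upgrade we will confirm the running version and keep watching the MP CPU, e.g.:
> show system info | match sw-version
> show system resources follow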