
Increase disk space on Panorama VM in Azure


L3 Networker

Hi All,

We have a Panorama VM in Azure running PAN-OS 10.1.10-h2.

The root partition is sitting at close to 100%. We have tried all the recommendations to clear space, but not much luck thus far.

The partition is 7.5 GB in size.

 

PAN-212530 addressed this issue in 10.1.10, but it is still happening.

 

Apart from the running PAN-OS itself, what else is normally kept on the root partition?

 

Thanks in advance.

3 Replies

L3 Networker

Update..

 

panadmin@panorama> show system disk-space

Filesystem     Size  Used Avail Use% Mounted on
/dev/root      7.9G  7.5G  9.4M 100% /
none            16G  128K   16G   1% /dev
/dev/sda5       24G   14G  8.5G  63% /opt/pancfg
/dev/sda6      5.8G  1.9G  3.7G  34% /opt/panrepo
tmpfs           16G  670M   16G   5% /dev/shm
cgroup_root     16G     0   16G   0% /cgroup
/dev/sda8       32G   20G   11G  65% /opt/panlogs
/dev/loop0     9.8G   23M  9.2G   1% /opt/logbuffer
/dev/sdc1      961G  745G  167G  82% /opt/panlogs/ld1
tmpfs           12M   44K   12M   1% /opt/pancfg/mgmt/ssl/private
tmpfs           32M     0   32M   0% /mnt/pantmp

panadmin@panorama>
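As an aside, if you export this kind of output to a Linux box, the Use% column can be checked programmatically. A minimal sketch, assuming a POSIX shell with `df -P` and `awk` (the 90% threshold is an arbitrary choice, not anything Panorama-specific):

```shell
#!/bin/sh
# Flag any filesystem above a usage threshold in POSIX `df -P` output.
# Panorama's `show system disk-space` wraps similar df-style output.
THRESHOLD=90
df -P | awk -v t="$THRESHOLD" 'NR > 1 {
    use = $5; sub(/%/, "", use)          # strip the % sign from Capacity/Use%
    if (use + 0 >= t) printf "%s is at %s%% (%s)\n", $1, use, $6
}'
```

The same awk body works on saved output piped in instead of a live `df -P`.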


panadmin@panorama> debug techsupport duts run

/var depth 1 ====================================================

3708M  /var

/var depth 2 ====================================================

3219M  /var/log
 336M  /var/appweb
 128M  /var/nodejs
  23M  /var/lib
1M or less: 19 files

...

 

 

Question: am I right that /var is located on my root mount? If so, the output above accounts for only about half of my used disk space.
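For anyone wanting to verify where /var actually lives, a quick sketch, assuming root shell access to the underlying Linux OS (which on Panorama normally means going through TAC); `du -x` keeps the scan on a single filesystem, so space on other mounts is not counted against root:

```shell
#!/bin/sh
# Show which device and mount point back /var (POSIX df output, line 2).
df -P /var | awk 'NR == 2 { print "/var is on " $1 " mounted at " $6 }'

# Largest entries under /var in MB, without crossing into other mounts.
du -xm /var 2>/dev/null | sort -rn | head -5
```

If the first command reports the same device as `/`, then /var usage counts directly against the root partition.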

Planning a reboot of Panorama next week to see if this clears the utilization issue.

Will update once done.

 

 

Update: reboot completed, still the same issue 😞

Logged a TAC case with Palo Alto.

Did you find a resolution for this issue?

L3 Networker

Hi,

Yes, resolved; we had to log a TAC case in the end.

The issue was related to the _pan_cluster logs causing the high root-partition usage reported on the 10.0 series

under PAN-156007, which is an internal issue and does not appear in the published software issues.

The TAC engineer had to connect to the root shell, delete the _pan_cluster logs, and then restart the elasticsearch process, which fixed the issue for us.
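For reference, the cleanup step might look roughly like the following from a root shell. The log directory here is a placeholder assumption, not a documented location, and the Elasticsearch restart syntax varies by PAN-OS version, so treat this as a sketch rather than the commands TAC actually ran:

```shell
#!/bin/sh
# Placeholder: where the _pan_cluster logs live is an assumption.
LOGDIR=${LOGDIR:-/var/log/pan}

# Remove the stale cluster log files that were filling the root partition,
# if the directory exists on this system.
[ -d "$LOGDIR" ] && find "$LOGDIR" -name '_pan_cluster*' -type f -delete

# Elasticsearch would then be restarted from the Panorama CLI; the exact
# command depends on the PAN-OS version, so it is not reproduced here.
```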

 

cheers

 

 

 
