Expedition root directory keeps growing


L3 Networker

I have already expanded the root directory twice and it keeps filling up, so now the Expedition GUI will not load.

 

So what is taking all the space?

 

expedition@pan-expedition:/hdd/PaloLogs$ df -h

Filesystem                       Size  Used Avail Use% Mounted on

udev                             2.9G     0  2.9G   0% /dev

tmpfs                            595M   24M  572M   4% /run

/dev/mapper/Expedition--vg-root  109G  106G     0 100% /

tmpfs                            3.0G     0  3.0G   0% /dev/shm

tmpfs                            5.0M     0  5.0M   0% /run/lock

tmpfs                            3.0G     0  3.0G   0% /sys/fs/cgroup

/dev/sdb                          99G   28G   67G  29% /hdd

/dev/sda1                        472M  109M  340M  25% /boot

tmpfs                            595M     0  595M   0% /run/user/1000

expedition@pan-expedition:/hdd/PaloLogs$

 

I think the "projects" live on the root filesystem, so they must be what is taking up that much room. What are some options here?

 

 

/dev/sdb is where the logs from the PAN are being exported.
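One quick way to answer "what is taking all the space" is to summarize each top-level directory with `du` and sort by size; this is just a sketch (run as root so protected directories such as /data and /root are readable):

```shell
# Summarize disk usage of each top-level directory, largest last.
# Errors for unreadable or transient entries (e.g. /proc) are discarded.
du -sh /* 2>/dev/null | sort -h
```

The largest entries at the bottom of the output are the first places to investigate.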


No, I am exporting the logs to the second HDD that I created; there is nothing in the PALogs directory. I can't get the database started either.

The DB likely cannot start due to the lack of storage left on the primary HDD.

 

If you are free for a Zoom conference Monday or Tuesday afternoon, feel free to send an email to fwmigrate (at) paloaltonetworks.com.

The HD is good on the root now.

expedition@pan-expedition:~$ systemctl status mariadb.service

mariadb.service - MariaDB 10.1.38 database server

   Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)

  Drop-In: /etc/systemd/system/mariadb.service.d

           └─migrated-from-my.cnf-settings.conf

   Active: failed (Result: exit-code) since Tue 2019-04-23 18:47:13 CDT; 4s ago

     Docs: man:mysqld(8)

           https://mariadb.com/kb/en/library/systemd/

  Process: 1549 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=1/FAILURE)

  Process: 1186 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= ||   VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ]   && systemctl set-environment _WSREP_START_POSITION=$VAR |

  Process: 1180 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)

  Process: 1159 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)

Main PID: 1549 (code=exited, status=1/FAILURE)

   Status: "MariaDB server is down"

 

Apr 23 18:47:09 pan-expedition mysqld[1549]: 2019-04-23 18:47:09 140206857144576 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.42-84.2 started; log sequence number 247797360

Apr 23 18:47:10 pan-expedition mysqld[1549]: 2019-04-23 18:47:10 140205893871360 [Note] InnoDB: Dumping buffer pool(s) not yet started

Apr 23 18:47:10 pan-expedition mysqld[1549]: 2019-04-23 18:47:10 140206857144576 [Note] Plugin 'FEEDBACK' is disabled.

Apr 23 18:47:10 pan-expedition mysqld[1549]: 2019-04-23 18:47:10 140206857144576 [Note] Recovering after a crash using tc.log

Apr 23 18:47:10 pan-expedition mysqld[1549]: 2019-04-23 18:47:10 140206857144576 [ERROR] Can't init tc log

Apr 23 18:47:10 pan-expedition mysqld[1549]: 2019-04-23 18:47:10 140206857144576 [ERROR] Aborting

Apr 23 18:47:13 pan-expedition systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE

Apr 23 18:47:13 pan-expedition systemd[1]: Failed to start MariaDB 10.1.38 database server.

Apr 23 18:47:13 pan-expedition systemd[1]: mariadb.service: Unit entered failed state.

Apr 23 18:47:13 pan-expedition systemd[1]: mariadb.service: Failed with result 'exit-code'.


expedition@pan-expedition:~$ df -h

Filesystem                       Size  Used Avail Use% Mounted on

udev                             2.9G     0  2.9G   0% /dev

tmpfs                            595M  8.3M  587M   2% /run

/dev/mapper/Expedition--vg-root  109G  4.3G  100G   5% /

tmpfs                            3.0G     0  3.0G   0% /dev/shm

tmpfs                            5.0M     0  5.0M   0% /run/lock

tmpfs                            3.0G     0  3.0G   0% /sys/fs/cgroup

/dev/sdb                          99G   94G     0 100% /hdd

/dev/sda1                        472M  109M  340M  25% /boot

tmpfs                            595M     0  595M   0% /run/user/1000

expedition@pan-expedition:~$

 

You have two issues here.

 

One is that you are running out of space on your system, and you should find out what is taking up that space. You have already started investigating this.

 

The other issue, which may have arisen due to the lack of space, is that MariaDB crashed and left the tc.log in an inconsistent state.

Check this thread

https://live.paloaltonetworks.com/t5/Expedition-Discussions/Incorrect-User-or-Password-from-GUI/td-p...

The last posts there tell you how to delete the tc.log file and restart the MariaDB server.

 

Anyhow, do not forget to keep at the first task, making sure that you won't run out of space again soon.
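In case that thread moves, here is a minimal sketch of the recovery steps, assuming the default MariaDB datadir of /var/lib/mysql (verify `datadir` in your my.cnf first, and free some disk space before starting the service):

```shell
# Stop MariaDB, remove the stale transaction coordinator log that
# blocks crash recovery ("Can't init tc log"), then start it again.
# The /var/lib/mysql path is an assumption -- check your datadir.
sudo systemctl stop mariadb
sudo rm /var/lib/mysql/tc.log
sudo systemctl start mariadb
sudo systemctl status mariadb --no-pager
```

MariaDB recreates tc.log on a clean start, so removing the corrupt copy is safe once the server is stopped.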

It looks like even though I have the logs exporting to another hard drive, and have the temporary data structure folder also set to that other hard drive, Expedition is still either moving or copying log files to the /data directory. Why is this?

 

root@pan-expedition:/# for i in `ls`; do du -hs $i; done

16M bin

106M boot

82G data

4.0K data2

12M datastore

0 dev

17M etc

52G hdd

255M home

0 initrd.img

0 initrd.img.old

710M lib

4.0K lib64

4.0K logs

16K lost+found

8.0K media

4.0K mnt

453M opt

16K PALogs

du: cannot access 'proc/71855/task/71855/fd/4': No such file or directory

du: cannot access 'proc/71855/task/71855/fdinfo/4': No such file or directory

du: cannot access 'proc/71855/fd/3': No such file or directory

du: cannot access 'proc/71855/fdinfo/3': No such file or directory

0 proc

43M root

31M run

14M sbin

8.0K snap

4.0K srv

0 sys

272K tmp

1.9G usr

958M var

0 vmlinuz

0 vmlinuz.old

root@pan-expedition:/# cd data

root@pan-expedition:/data# for i in `ls`; do du -hs $i; done

82G 10.20.63.85

root@pan-expedition:/data# cd 10.20.63.85

root@pan-expedition:/data/10.20.63.85# ls -la

total 85073012

drwx------ 2 root     root            4096 Apr 29 00:00 .

drwxrwxrwx 3 www-data www-data        4096 Apr  9 15:24 ..

-rw-r--r-- 1 root     root      2928389732 Apr 23 23:59 PA-5250-LOANER_traffic_2019_04_23_last_calendar_day.csv

-rw-r--r-- 1 root     root     16263451665 Apr 24 23:59 PA-5250-LOANER_traffic_2019_04_24_last_calendar_day.csv

-rw-r--r-- 1 root     root     16577735082 Apr 25 23:59 PA-5250-LOANER_traffic_2019_04_25_last_calendar_day.csv

-rw-r--r-- 1 root     root     15462734094 Apr 26 23:59 PA-5250-LOANER_traffic_2019_04_26_last_calendar_day.csv

-rw-r--r-- 1 root     root     19307974444 Apr 27 23:59 PA-5250-LOANER_traffic_2019_04_27_last_calendar_day.csv

-rw-r--r-- 1 root     root     12322880043 Apr 28 23:59 PA-5250-LOANER_traffic_2019_04_28_last_calendar_day.csv

-rw-r--r-- 1 root     root      4251551633 Apr 29 07:58 PA-5250-LOANER_traffic_2019_04_29_last_calendar_day.csv

root@pan-expedition:/data/10.20.63.85# ls -la

total 82256368

drwx------ 2 root     root            4096 Apr 29 08:02 .

drwxrwxrwx 3 www-data www-data        4096 Apr  9 15:24 ..

-rw-r--r-- 1 root     root     16263451665 Apr 24 23:59 PA-5250-LOANER_traffic_2019_04_24_last_calendar_day.csv

-rw-r--r-- 1 root     root     16577735082 Apr 25 23:59 PA-5250-LOANER_traffic_2019_04_25_last_calendar_day.csv

-rw-r--r-- 1 root     root     15462734094 Apr 26 23:59 PA-5250-LOANER_traffic_2019_04_26_last_calendar_day.csv

-rw-r--r-- 1 root     root     19307974444 Apr 27 23:59 PA-5250-LOANER_traffic_2019_04_27_last_calendar_day.csv

-rw-r--r-- 1 root     root     12322880043 Apr 28 23:59 PA-5250-LOANER_traffic_2019_04_28_last_calendar_day.csv

-rw-r--r-- 1 root     root      4295702848 Apr 29 08:02 PA-5250-LOANER_traffic_2019_04_29_last_calendar_day.csv

root@pan-expedition:/data/10.20.63.85# 

 

I would say that you are still running the syslog server in Expedition and have defined it to place the syslog entries in /data.

 

Here is an example of what we could have as an rsyslog config:

 

 

#####################################################
# Log everything to a per host daily logfile #
#####################################################

$ModLoad imtcp

### Listeners
$InputTCPServerRun 10514

# specify senders you permit to access
$AllowedSender TCP, 127.0.0.1, 10.11.29.0/24, 172.16.26.0/24, *.paloaltonetworks.com

$template DynaTrafficLog,"/data/%FROMHOST-IP%/%HOSTNAME%_traffic_%$YEAR%_%$MONTH%_%$DAY%_last_calendar_day.csv"
*.* -?DynaTrafficLog

If you are exporting the logs to a specific folder, I guess you do not need to be running the syslog service, and you do not need to ask the FW to use a log forwarding profile that sends the entries to Expedition.

 

Does it make sense? 

Yup, it sure does. I think I set up syslog hoping to use it but could never figure out the use or the how-to. How do you turn this service off? Or do I need to do it in the conf file?

expedition@Expedition:~/BUILD# sudo service rsyslog stop

 

Afterwards, modify the config file so it stops listening on the ports. That way, even if Expedition restarts the service, it won't capture the data.
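As a sketch of that config change, assuming the listener directives shown earlier live in /etc/rsyslog.conf (they may instead be in a file under /etc/rsyslog.d/), you could comment them out like this:

```shell
# Comment out the TCP listener directives so a restarted rsyslog
# no longer binds port 10514 and accepts firewall logs.
# The config file path is an assumption -- adjust to your install.
sudo sed -i -e 's/^\$ModLoad imtcp/#&/' \
            -e 's/^\$InputTCPServerRun 10514/#&/' /etc/rsyslog.conf
```

After this, rsyslog keeps handling local system logging but stops receiving remote entries.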

 

But best of all, and in addition, you should disable the log forwarding profile on the firewalls.
