Delete all logs from Panorama


L1 Bithead

Our Panorama M-600 is in a weird state with regard to logging. Pushing configs to devices is just fine, but ES health is red and has been for the last few days. We thought it was rebuilding, but it sure looks like it's totally broken.

 

We are thinking of wiping all data and starting from scratch (which is okay since we have logs on the firewalls to fall back to). Can you just delete the Managed Collector, remove all the disks, then recreate the collector and add the disks back in so things start from scratch?

 

We're not sure whether removing and re-adding the disk pairs in the managed collector will remove all the data or keep the data on the RAID (we want it removed).

 

Does anyone know how to do that without having to reconfigure Panorama from scratch?
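
Before doing anything destructive, we'll at least capture the current state from the CLI so we have something to compare against afterwards (read-only M-series operational commands, assuming I have the syntax right):

admin@UH-Panorama> show system raid detail
admin@UH-Panorama> show log-collector-es-cluster health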

4 REPLIES

Cyber Elite

Thank you for the post @czane

 

To be honest, I do not believe that a red Elasticsearch status on the log collector is, by itself, a valid reason for wiping the log collector. I would reach for this option only if it is unavoidable. I have come across the red Elasticsearch status issue a few times in the past; in some cases it was a bug that was resolved by a PAN-OS upgrade.

 

If possible, could you share which PAN-OS version you are running?

Could you also provide output from: show log-collector-es-cluster health as well as: show log-collector detail
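
Both are operational-mode commands, run straight from the Panorama CLI prompt, for example:

> show log-collector-es-cluster health
> show log-collector detail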

 

BTW, have you tried rebooting Panorama?

 

Kind Regards

Pavel

 

Help the community: Like helpful comments and mark solutions.

L1 Bithead

We're at 9.1.10, and we've rebooted once after we noticed no logs coming in. Maybe I'll try to upgrade to 9.1.13 and see what happens; it can't hurt.

 

admin@UH-Panorama> show log-collector-es-cluster health

{
"cluster_name" : "__pan_cluster__",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 2,
"active_primary_shards" : 772,
"active_shards" : 774,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 158,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 83.04721030042919
}
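
If I am reading that right, the percentage is just active shards divided by the total (active + relocating + initializing + unassigned), so the 158 unassigned shards are what is holding it down:

774 / (774 + 0 + 0 + 158) = 774 / 932 ≈ 83.05%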

 

admin@UH-Panorama> show log-collector all


Serial CID Hostname Connected Config Status SW Version IPv4 - IPv6
---------------------------------------------------------------------------------------------------------
[serialnum] 4 UH-Panorama yes In Sync 9.1.10 [ip address] - unknown

Redistribution status: none
Last commit-all: commit succeeded, current ring version 1
SearchEngine status: Unknown
md5sum 4f5f09b388c8b735caa1b0ab4d6c543c updated at ?

Certificate Status:
Certificate subject Name:
Certificate expiry at: none
Connected at: none
Custom certificate Used: no
Last masterkey push status: Unknown
Last masterkey push timestamp: none

Cyber Elite

Thank you for the reply, @czane

 

While I was running 9.1.10, I hit this bug: PAN-166557, after I added an M-600 as a new dedicated log collector. The symptom was the same: the Elasticsearch service status was red. You might be facing a different issue, but as a next step I would recommend upgrading to 9.1.13.
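
If you go the upgrade route, it is the usual sequence from the Panorama CLI (just a sketch; please verify the supported upgrade path for your deployment first):

> request system software check
> request system software download version 9.1.13
> request system software install version 9.1.13
> request restart system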

 

Kind Regards

Pavel

Help the community: Like helpful comments and mark solutions.

L1 Bithead

Thanks for the reply!

 

Rebooted - same

 

Upgraded to 9.1.13 - got a bit further, but after sitting overnight it's now stuck at 87.xxxx for active_shards_percent_as_number.

I did open a ticket with PA; they aren't sure either. We'll try to delete the logs and see if we can get the ES cluster back alive. It's been almost a week with no consolidated logs, so we're just hoping for a fix any way we can get it.

 
