
Panorama VM upgrade to PANOS 8.0.x & switch mode to Panorama Mode

L1 Bithead

Hi,

 

This is my feedback after two weeks of migration work and troubleshooting while moving to Panorama 8.0.7 in Panorama Mode.

 

Most of the process is covered in the Panorama 8.0 Administrator's Guide, but I noticed a lack of information for certain steps.

 

I hope this can help the next adventurers.

 

 

#Existing configuration
Panorama Virtual Appliance in Legacy Mode running PANOS 7.1.14
    [->Virtual System Disk (52GB)
    [->Virtual Logging Disk (2TB)

 

Objective 1: Migrate the Panorama Virtual Appliance from Legacy Mode version 7.1.14 to Panorama Mode version 8.0.7
Objective 2: Increase Logging storage from 2TB to 4TB - Existing logs must be preserved.

#Target configuration
Panorama Virtual Appliance in Panorama Mode running PANOS 8.0.7
    [->Virtual System Disk (81GB)
    [->Local Log Collector
        [->Virtual Logging Disk 01 (2TB)
        [->Virtual Logging Disk 02 (2TB)

 

 

Performed actions before upgrading to PANOS 8.0.7:
- Upgraded Panorama Virtual Appliance hardware settings to support Panorama Mode with 4TB of logging storage

=> 8 CPUs and 32GB memory

 

#Run the PANOS upgrade process...

(The upgrade process is documented by Palo Alto Networks, so there is no need to add comments.)

 

Required actions after upgrading the Panorama Virtual Appliance from PANOS 7.1.14 to 8.0.7:
#Panorama Virtual Appliance state: [Legacy Mode]
Step 1: Replace the existing virtual system disk with a new one of higher storage capacity.
* Power off Panorama Virtual Appliance.
* From the vSphere client console: Add and attach a new virtual system disk of 81GB (Thick Provision Lazy Zeroed disk format & SCSI Virtual Device Node).
* Power on Panorama Virtual Appliance

* Identify the target virtual system disk ID by running CLI 'show system disk details'

* Run CLI 'request system clone-system-disk target sdc' to migrate the current virtual system disk to the new one of 81GB.

  The copying process takes around 20 to 25 minutes, during which Panorama reboots. When the process finishes, the output tells you to shut down Panorama.
* Power off Panorama Virtual Appliance
* From the vSphere client console: Remove the original virtual system disk of 52GB
* Select the new virtual system disk of 81GB, set the Virtual Device Node to SCSI (0:0)
* Power on Panorama Virtual Appliance
  Wait for Panorama to reboot on the new virtual system disk (around 15 minutes).

 

Step 2: Add a virtual logging disk.
* Power off Panorama Virtual Appliance
* From the vSphere client console: Add and attach 2 new virtual disks of 2TB (Thick Provision Lazy Zeroed disk format & SCSI Virtual Device Node)
* Power on Panorama Virtual Appliance

 

Step 3: Switch from Legacy mode to Panorama mode.
* Run CLI 'request system system-mode panorama'
  Wait for the process to finish and for Panorama to reboot (around five minutes) before continuing.
* From the Panorama dashboard (General Information section), verify that the Mode is now panorama.
#Panorama Virtual Appliance state: [Panorama Mode]
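
If you manage Panorama through the XML API, the mode can also be checked from a script. The snippet below is a minimal sketch under a few assumptions: the hostname and API key are placeholders, it uses the standard XML API 'op' endpoint to run 'show system info', and it assumes the mode is reported in a <system-mode> element (check the raw response on your own appliance).

import requests
import xml.etree.ElementTree as ET

# Placeholders - replace with your Panorama hostname and API key.
PANORAMA = "panorama.example.com"
API_KEY = "YOUR-API-KEY"

# Call the XML API operational-command endpoint to run 'show system info'.
resp = requests.get(
    f"https://{PANORAMA}/api/",
    params={
        "type": "op",
        "cmd": "<show><system><info></info></system></show>",
        "key": API_KEY,
    },
    verify=False,  # lab only; use a proper CA bundle in production
)

# Assumption: the mode is reported in a <system-mode> element of the result.
root = ET.fromstring(resp.text)
print("Panorama system mode:", root.findtext(".//system-mode"))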

 

Step 4: Migrate existing logs to the new virtual logging disks.
* Run CLI 'request logdb migrate vm start'
  The process duration depends on the volume of log data being migrated.
  To check the status of the migration, run the following CLI: 'request logdb migrate vm status'
  When the migration finishes, the output displays: migration has been done
* Verify that the existing logs are available.

 

Step 4 didn't work for me; the logdb migration CLI returned the following error message: 'logdb migration failed to start: device is not part of a collector group'

 

I noticed that the default local log collector created by the system when it moved from Legacy mode to Panorama mode was marked 'not synchronised - ring version mismatch'.

 

This step is not documented, but a commit to Panorama is required to get the local log collector running and synchronised for the first time.

 

In addition, when I proceeded with the Panorama commit, the operation stopped and the system returned the following error:

 

Operation : Commit
Status    : Completed
Result    : Failed
Details   :
•    Validation Error:
•    panorama -> log-settings -> url unexpected here
•    panorama -> log-settings is invalid
Warnings  :

 

This error is caused by the Panorama running-config.

During the mode swap operation described in Step 3, a couple of 'non-playable' objects are added to the Panorama running configuration.

 

These 'non-playable' objects have to be manually removed from the Panorama running-config to 'restore' the commit function.

 

* From the Panorama GUI (Setup -> Operations), save and then export your Panorama running-config.

* Open the XML file, then look for the following entries:

 

 <panorama>
    <log-settings>
    ...
      <correlation>
        <match-list>
          <entry name="correlation-high">
            <filter>(severity eq high)</filter>
          </entry>
          <entry name="correlation-critical">
            <filter>(severity eq critical)</filter>
          </entry>
        </match-list>
      </correlation>
      <url>
        <match-list>
          <entry name="panorama-url-critical">
            <filter>(severity eq critical)</filter>
          </entry>
        </match-list>
      </url>
      <data>
        <match-list>
          <entry name="panorama-data-critical">
            <filter>(severity eq critical)</filter>
          </entry>
        </match-list>
      </data>
    </log-settings>

 

The parts shown in red (the <url> and <data> match-list blocks above, which the validation error flags) have to be removed from the Panorama running-config to make it committable by the system.
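
If you would rather script the cleanup than hand-edit the export, here is a minimal Python sketch. The file names are placeholders, and it assumes the nodes to drop are exactly the <url> and <data> blocks under panorama -> log-settings, as in my configuration; review the cleaned file before importing it.

import xml.etree.ElementTree as ET

# Placeholders - adjust to the names you use when exporting/importing.
SRC = "panorama-running-config.xml"
DST = "panorama-running-config-cleaned.xml"

tree = ET.parse(SRC)
root = tree.getroot()  # the exported running-config root is <config>

# Target only the top-level panorama -> log-settings node, not log-settings
# profiles that may exist elsewhere in the configuration.
log_settings = root.find("./panorama/log-settings")

removed = []
if log_settings is not None:
    for tag in ("url", "data"):
        node = log_settings.find(tag)
        if node is not None:
            log_settings.remove(node)
            removed.append(tag)

tree.write(DST, encoding="utf-8", xml_declaration=True)
print("Removed:", removed or "nothing", "- cleaned config written to", DST)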

 

* From the Panorama GUI (Setup -> Operations), import and then load your edited Panorama running-config.

 

These changes fixed my commit issue, and the first commit brought the Panorama local log collector into a synchronised state.

 

 

Still relevant to Step 4: I also noticed that the second new virtual logging disk was not visible or linked to the local log collector, so the logging storage was limited to 2TB instead of 4TB.

It seems that when additional virtual logging disks are added in Panorama mode, the system only considers the first available volume as a log-collector disk pair.

 


 

Optional step: Identify and add the second virtual logging disk

* Run CLI 'show system disk details'

 

Name   : sdd
State  : Present
Size   : 2097152 MB
Status : Unavailable
Reason : Admin disabled

Name   : sdb
State  : Present
Size   : 2097152 MB
Status : Ready for migration
Reason : Admin disabled

Name   : sdc
State  : Present
Size   : 2097152 MB
Status : Available
Reason : Admin enabled

 

sdb is my old Panorama 7.1.x logging disk (disabled by system)

sdc is my first new 2TB virtual logging disk (enabled by default)

sdd is my second new 2TB virtual logging disk (disabled by default)

 

* Run CLI 'request system disk add sdd' to enable the second virtual logging disk.

  The new virtual logging disk can now be assigned as a second disk pair in the Panorama log collector (Managed Collectors -> Panorama -> Disks -> Add Disk Pair B).

 

 

Let's run Step 4 again...

 

Step 4 bis: Migrate existing logs to the new virtual logging disks.
* Run CLI 'request logdb migrate vm start'
  The process duration depends on the volume of log data being migrated.
  To check the status of the migration, run the following CLI: 'request logdb migrate vm status'
  When the migration finishes, the output displays: migration has been done
* Verify that the existing logs are available.

 

This time the log migration started. It took around a whole week for my old 2TB logdb.
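
Since the migration can run for days, it can be convenient to poll its status from a script instead of keeping an SSH session open. The sketch below makes explicit assumptions: the hostname and API key are placeholders, and the XML 'cmd' is my assumed API mapping of the CLI 'request logdb migrate vm status' (you can confirm the exact mapping on your own appliance by running the CLI command with 'debug cli on').

import time
import requests
import xml.etree.ElementTree as ET

# Placeholders - replace with your Panorama hostname and API key.
PANORAMA = "panorama.example.com"
API_KEY = "YOUR-API-KEY"

# Assumed XML API mapping of 'request logdb migrate vm status';
# verify it with 'debug cli on' before relying on it.
CMD = "<request><logdb><migrate><vm><status></status></vm></migrate></logdb></request>"

while True:
    resp = requests.get(
        f"https://{PANORAMA}/api/",
        params={"type": "op", "cmd": CMD, "key": API_KEY},
        verify=False,  # lab only; use a proper CA bundle in production
    )
    result = ET.fromstring(resp.text).findtext(".//result") or resp.text
    print(time.strftime("%Y-%m-%d %H:%M:%S"), result.strip())
    if "migration has been done" in result:
        break
    time.sleep(3600)  # the migration can take days; one check per hour is plenty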

 

Step 5: Remove the old log disk

* Power off Panorama Virtual Appliance
* From the vSphere client console: Remove the old logging disk of 2TB
* Power on Panorama Virtual Appliance
  Wait for Panorama to finish rebooting (around 15 minutes).

 

Cheers.

13 Replies

L1 Bithead

I'm curious, was Panorama usable during your week-long log conversion? What was the response time like?

 

Thanks!

Hi Jedigeek5,

 

Yes, during the whole log migration process Panorama ran normally.

I was able to make and push configuration changes and manage firewalls (25 units) as usual.

 

Cheers.

L3 Networker

In order to add an additional disk, the customer migrated from Legacy to Panorama mode, then migrated all logs; 'show system disk details' shows the migration completed. The old drive was then removed, and the new one is in use now.

 

Josep@Corp-Panorama01> request logdb migrate vm start

migration had been done
logdb migration failed to start

 

 

Issue: The logs do not show up on Panorama. 

 

'show logging-status device <serial#>' shows that the logs are being received from the firewalls. Error in logd.log:

 

logd_send_hints_adver(logd.c:2276): Error sending hint adver for inb_logger_id:2 seg:233 gennum:34e5
2018-09-26 14:01:40.161 -0600 Error: logd_send_hints_adver(logd.c:2276): Error sending hint adver for inb_logger_id:2 seg:0 gennum:69c8

 

 

Questions:

1. A log collector is not configured on this Panorama. Is a log collector mandatory in Panorama mode?

 

2. If I configure a log collector, it would create a new ring file, so the new logs may appear on Panorama. But will the old logs (obtained through the migration) still be present on disk (and show up on the Monitor tab), or will they be lost?

 

 

Thanks

 

Hello,

 

1. In Panorama mode a log collector is mandatory. A default log collector is created when Panorama switches from Legacy mode to Panorama mode. If the log collector is not present or not running properly, the log migration process doesn't start.

 

2. Old logs are visible in Panorama mode if the log collector is running and if the log migration process (request logdb migrate vm start) has completed successfully. You can monitor the log migration status with the following CLI: request logdb migrate vm status

 

Regards.

Thank you for replying. I configured the log collector and the old logs show up as well.

 

Cheers!

 

L3 Networker

Hi,

 

The logs from the time of the migration (from the start until the log collector was configured) are missing. Is this expected?

 

Regards

Hello,

 

Yes, old logs are not available in Panorama mode until the log migration process is completed.

Log migration progress can be monitored with the following CLI: 'request logdb migrate vm status'

 

Cheers.

L0 Member

Great info, thanks.

 

Note that for version 9 I had to run the CLI command "commit-all log-collector-config log-collector-group default".

Hopefully this helps someone.

If we are using an NFS volume in Legacy mode, what will happen to the logs stored in the NFS volume when we migrate to Panorama mode, given that NFS is not supported in Panorama mode?

This just saved me so much time. I was getting this error, and general searches on the PA support site didn't help. I had a different collector group name, but this was the missing piece I needed.

L1 Bithead

Hello,

1. In Panorama mode a log collector is mandatory. A default log collector is created when Panorama switches from Legacy mode to Panorama mode. If the log collector is not present or not running properly, the log migration process doesn't start.

 

2. Old logs are visible in Panorama mode if the log collector is running and if the log migration process (request logdb migrate vm start) has completed successfully. You can monitor the log migration status with the following CLI:

> request logdb migrate vm status

But it didn't work for me; the logdb migration CLI returned the following error message: 'logdb migration failed to start: device is not part of a collector group'

 

Migrating the disk data to the new system disk is taking too long... it's been 2 hours since I ran the command, with no progress.

Hi GaneshanSelvaraj,

 

The CLI command 'request system clone-system-disk target sdc' (migrating the current virtual system disk to the new 81GB one) completed in a couple of minutes on my side.

You should contact support if the virtual system disk copy takes too long.

 

Good luck with your Panorama migration.
