Management Articles

Featured Article
Issue If the "scp export logdb" command is used on the CLI on a PA-7050, it will not export Traffic, Threat, Data Filtering and URL logs.  This command does not pickup any logdbs from the logcard, so it will not pick up Traffic, Threat, Data Filtering and URL logs.   This is expected behavior for the PA-7050 platform.   Command "scp export logdb" will only export system, config and alarm logdbs only.
View full article
gbogojevic ‎09-14-2018 12:38 PM
1,664 Views
0 Replies
Where does the space go? A log collector is deployed with four 1 TB disk pairs. The GUI reports 3.23 TB of total space that can be allocated via quotas, and various CLI commands show values that differ from the GUI. What is going on here? How much space do you actually have for logs?
View full article
cstancill ‎07-30-2018 12:14 PM
2,363 Views
1 Reply
3 Likes
Panorama Management and Logging Overview

The Panorama solution comprises two overall functions: device management and log collection/reporting. A brief overview of these two main functions follows.

Device Management: This includes activities such as configuration management and deployment, as well as deployment of PAN-OS and content updates.
Log Collection: This includes collecting logs from one or multiple firewalls, either to a single Panorama or to a distributed log collection infrastructure. In addition to collecting logs from deployed firewalls, reports can be generated from that log data whether it resides locally to the Panorama (e.g., a single M-Series or VM appliance) or on a distributed logging infrastructure.

The Panorama solution allows for flexibility in design by assigning these functions to different physical pieces of the management infrastructure. For example, device management may be performed from a VM Panorama while the firewalls forward their logs to co-located dedicated log collectors.

In the example above, the device management and reporting functions are performed on a VM Panorama appliance, and there are three log collector groups. Group A contains two log collectors and receives logs from three standalone firewalls. Group B consists of a single collector and receives logs from a pair of firewalls in an active/passive high availability (HA) configuration. Group C also contains two log collectors and receives logs from two HA pairs of firewalls. The number of log collectors in any given location depends on a number of factors; the design considerations are covered below. Note: any platform can be a dedicated manager, but only M-Series appliances can be dedicated log collectors.

Log Collection

Managed Devices

While all current Panorama platforms have an upper limit of 1,000 devices for management purposes, it is important for Panorama sizing to understand what the incoming log rate will be from all managed devices. To start, take an inventory of the firewall appliances that will be managed by Panorama and need to store logs:

MODEL | PAN-OS (Major Branch #) | Location | Measured Average Log Rate
Ex: 5060 | Ex: 6.1.0 | Ex: Main Data Center | Ex: 2,500 logs/s

Logging Requirements

This section covers the information needed to properly size and deploy Panorama logging infrastructure to support customer requirements. There are three main factors when determining the amount of total storage required and how to allocate that storage via distributed log collectors:

Log Ingestion Requirements: The total number of logs that will be sent per second to the Panorama infrastructure.
Log Storage Requirements: The timeframe for which the customer needs to retain logs on the management platform. Driving factors include both internal policy and regulatory compliance requirements.
Device Location: The physical location of the firewalls can drive the decision to place dedicated log collector (DLC) appliances at remote locations, based on WAN bandwidth and similar constraints.

Each of these factors is discussed in the sections below.

Log Ingestion Requirements

The aggregate log forwarding rate for managed devices needs to be understood in order to avoid a design where more logs are regularly sent to Panorama than it can receive, process, and write to disk.
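To turn the device inventory above into an aggregate ingestion figure, the short sketch below sums measured per-device rates and converts a one-day log count into logs per second. It is a minimal illustration only; the device names and rates are made-up examples.

# Minimal sizing sketch (illustrative only): aggregate measured per-device log
# rates and convert a one-day log count (e.g., from a SIEM) into logs per second.
SECONDS_PER_DAY = 86_400

# Hypothetical inventory: (model, location, measured average logs/second)
inventory = [
    ("PA-5060", "Main Data Center", 2500),
    ("PA-3020", "Branch A", 400),
    ("PA-820", "Branch B", 150),
]

def lps_from_daily_count(logs_per_day: int) -> float:
    """Convert a one-day log count into an average logs-per-second figure."""
    return logs_per_day / SECONDS_PER_DAY

aggregate_lps = sum(rate for _, _, rate in inventory)
print(f"Aggregate ingestion rate: {aggregate_lps} logs/s")
print(f"216,000,000 logs/day is roughly {lps_from_daily_count(216_000_000):.0f} logs/s")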
The table below outlines the maximum number of logs per second that each hardware platform can forward to Panorama. It can be used when designing a solution to calculate the maximum number of logs that could be forwarded to Panorama in the customer environment.

Device Log Forwarding
Platform | Supported Logs per Second (LPS)
PA-200 | 250
PA-220 | 1,200
PA-500 | 625
PA-820/850 | 10,000
PA-3000 series | 10,000
PA-3220 | 7,000
PA-3250 | 15,000
PA-3260 | 24,000
PA-5050/5060 | 10,000
PA-5220 | 30,000
PA-5250 | 55,000
PA-5260 | To Be Tested
PA-7050/7080 | 70,000
VM-50 | 1,250
VM-100/200 | 2,500
VM-300/1000-HV | 8,000
VM-500 | 8,000
VM-700 | 10,000

The log ingestion rate on Panorama is influenced by the platform and mode in use (mixed mode versus logger mode). The table below shows the ingestion rates for Panorama on the different available platforms and modes of operation. The numbers in parentheses next to VM denote the number of CPUs and gigabytes of RAM assigned to the VM.

Panorama Log Ingestion
Platform | Mixed | Dedicated
VM (8/16) | 10,000 | 18,000
M-200 | 10,000 | 28,000
M-500 | 15,000 | 30,000
M-600 | 25,000 | 50,000

The above numbers are all maximum values. In live deployments, the actual log rate is generally some fraction of the supported maximum. The actual log rate is heavily dependent on the customer's traffic mix and isn't necessarily tied to throughput. For example, a single offloaded SMB session will show high throughput but generate only one traffic log. Conversely, a smaller amount of throughput composed of thousands of UDP DNS queries will generate a separate traffic log for each query. For sizing, a rough correlation can be drawn between connections per second and logs per second.

Methods for Determining Log Rate

New customer:
Leverage information from existing customer sources. Many customers have a third-party logging solution in place such as Splunk, ArcSight, QRadar, etc. The number of logs sent from their existing firewall solution can be pulled from those systems. When using this method, get a log count from the third-party solution for a full day and divide by 86,400 (the number of seconds in a day). Do this for several days to get an average. Be sure to include both business and non-business days, as there is usually a large variance in log rate between the two.
Use data from an evaluation device. This information can provide a very useful starting point for sizing purposes and, with input from the customer, the data can be extrapolated for other sites in the same design. This method has the advantage of yielding an average over several days. A script (with instructions) to assist with this calculation is attached to this document. To use it, download the file named "ts_lps.zip", unpack the zip file, and reference the README.txt for instructions.
If no information is available, use the Device Log Forwarding table above as a reference point. This will be the least accurate method for any particular customer.

Existing customer:
For existing customers, we can leverage data gathered from their existing firewalls and log collectors.
To check the log rate of a single firewall, download the attached file named "Device.zip", unpack the zip file, and reference the README.txt file for instructions. This package will query a single firewall over a specified period of time (you can choose how many samples) and give an average number of logs per second for that period.
At minimum, this script should be run for 24 consecutive hours on a business day. Running the script for a full week will help capture the cyclical ebb and flow of the network. If the customer does not have a log collector, this process will need to be run against each firewall in the environment.
If the customer has one or more log collectors, download the attached file named "lc_lps.zip", unpack the zip file, and reference the README.txt file for instructions. This package will query the log collector MIB to take a sample of the incoming log rate over a specified period.

Log Storage Requirements

Factors Affecting Log Storage Requirements

Several factors drive log storage requirements. Most of these requirements are regulatory in nature; customers may need to meet compliance requirements for HIPAA, PCI, or Sarbanes-Oxley, for example:

PCI DSS requirement 10.7
Sarbanes-Oxley Act, Section 802
HIPAA - § 164.316(b)(2)(i)

Other governmental and industry standards may also need to be considered. Additionally, some companies have internal requirements, for example that a certain number of days' worth of logs be maintained on the original management platform. Ensure that all of these requirements are addressed with the customer when designing a log storage solution.

The focus is on the minimum number of days' worth of logs that needs to be stored. If there is a maximum number of days required (due to regulation or policy), you can set the maximum number of days to keep logs in the quota configuration.

Calculating Required Storage

Calculating required storage space based on a given customer's requirements is a fairly straightforward process, but it can become labor intensive when higher degrees of accuracy are needed. With PAN-OS 8.0, the aggregated size of all log types is 500 bytes per log. This number accounts for both the logs themselves and the associated indices. The Threat database is the data source for Threat logs as well as URL, WildFire Submissions, and Data Filtering logs.

Note that Panorama may not be the logging solution for long-term archival; in these cases, suggest syslog forwarding for archival purposes.

The equation to determine the storage requirement for a particular log type is:

    storage (bytes) = log rate (logs/s) x 86,400 (s/day) x retention (days) x 500 (bytes/log)

Example: a customer wants to keep 30 days' worth of traffic logs at a log rate of 1,500 logs per second:

    1,500 x 86,400 x 30 x 500 bytes ≈ 1.94 TB

The result of the above calculation accounts for detailed logs only. The default quota settings reserve 60% of the available storage for detailed logs, which means the calculated number represents 60% of the total storage that will need to be purchased. To calculate the total storage required, divide this number by 0.60:

    1.94 TB / 0.60 ≈ 3.24 TB

Default log quotas for Panorama 8.0 and later are as follows:

Log Type | % Storage
Detailed Firewall Logs | 60
Summary Firewall Logs | 30
Infrastructure and Audit Logs | 5
Palo Alto Networks Platform Logs | 0.1
3rd Party External Logs | 0.1

The attached worksheet takes the default Panorama quotas into account and provides a total amount of storage required. A worked version of this calculation appears under "Example Use Cases" at the end of this article.

Calculating Required Storage For Logging Service

There are three different cases for sizing log collection using the Logging Service. For in-depth sizing guidance, refer to Sizing Storage For The Logging Service.
Log collection for Palo Alto Networks next-generation firewalls
Log collection for GlobalProtect Cloud Service mobile users
Log collection for GlobalProtect Cloud Service remote offices

Log Collection for Palo Alto Networks Next-Generation Firewalls
The log sizing methodology for firewalls logging to the Logging Service is the same as when sizing for on-premises log collectors. The only difference is the size of the log on disk: in the Logging Service, both threat and traffic logs can be calculated using a size of 1,500 bytes.

Log Collection for GlobalProtect Cloud Service Mobile Users
Per-user log generation depends heavily on both the type of user and the workloads being executed in that environment. On average, 1 TB of storage in the Logging Service will provide 30 days of retention for 5,000 users. An advantage of the Logging Service is that adding storage is much simpler than in a traditional on-premises distributed collection environment. If your environment is significantly busier than average, it is a simple matter to add whatever storage is necessary to meet your retention requirements.

Log Collection for GlobalProtect Cloud Service Remote Office
GlobalProtect Cloud Service (GPCS) for remote offices is sold based on bandwidth. While log rate is largely driven by connection rate and traffic mix, in sample enterprise environments log generation occurs at a rate of approximately 1.5 logs per second per megabit of throughput. The attached sizing worksheet uses this rate and takes busy/off hours into account in order to provide an estimated average log rate.

LogDB Storage Quotas

Storage quotas were simplified starting in PAN-OS 8.0. Detail and summary logs each have their own quota, regardless of type (traffic/threat):

Log Type | Quota (%)
Detailed Firewall Logs | 60
Summary Firewall Logs | 30
Infrastructure and Audit Logs | 5
Palo Alto Networks Platform Logs | 0.1
3rd Party External Logs | 0.1
Total | 95.2

Device Location

The last design consideration for logging infrastructure is the location of the firewalls relative to the Panorama platform they log to. If a device is separated from Panorama by a low-speed network segment (e.g., T1/E1), it is recommended to place a Dedicated Log Collector (DLC) on site with the firewall. This confines log forwarding to the higher-speed LAN segment while allowing Panorama to query the log collector when needed. For reference, the following tables show bandwidth usage for log forwarding at different log rates; this includes both the logs sent to Panorama and the acknowledgements from Panorama to the firewall. Note that for both the 7000 series and 5200 series, logs are compressed during transmission. (A rough estimation sketch follows the Device Management questions below.)

Log Forwarding Bandwidth
Log Rate (LPS) | Bandwidth Used
1,300 | 8 Mbps
8,000 | 56 Mbps
10,000 | 64 Mbps
16,000 | 52.8 - 140.8 Mbps (96.8)

Log Forwarding Bandwidth - 7000 and 5200 Series
Log Rate (LPS) | Bandwidth Used
1,300 | 0.6 Mbps
8,000 | 4 Mbps
10,000 | 4.5 Mbps
16,000 | 5 - 10 Mbps

Device Management

There are several factors to consider when choosing a platform for a Panorama deployment. Initial factors include:

How many concurrent administrators need to be supported?
Does the customer have VMware virtualization infrastructure that the security team has access to?
Does the customer require dual power supplies?
What is the estimated configuration size?
Will the device handle log collection as well?
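As a rough illustration of the Device Location guidance above, the sketch below estimates the WAN bandwidth consumed at a given log forwarding rate. The Mbps-per-1,000-logs/s factors are approximations read off the Log Forwarding Bandwidth tables and should be treated as assumptions, not published figures.

# Rough WAN bandwidth estimate for firewall-to-Panorama log forwarding.
# The per-1,000-LPS factors below are approximations derived from the
# "Log Forwarding Bandwidth" tables above; treat them as assumptions.
MBPS_PER_1000_LPS_UNCOMPRESSED = 6.5   # most platforms
MBPS_PER_1000_LPS_COMPRESSED = 0.5     # PA-7000 and PA-5200 series (compressed)

def forwarding_bandwidth_mbps(lps, compressed=False):
    """Estimate the bandwidth (Mbps) consumed when forwarding lps logs/second."""
    factor = MBPS_PER_1000_LPS_COMPRESSED if compressed else MBPS_PER_1000_LPS_UNCOMPRESSED
    return lps / 1000 * factor

# Example: a branch firewall sending 1,000 logs/s across a T1 (1.544 Mbps).
needed = forwarding_bandwidth_mbps(1000)
print(f"Estimated bandwidth: {needed:.1f} Mbps "
      f"({'place a local DLC' if needed > 1.544 else 'forward across the WAN'})")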
Panorama Virtual Appliance

This platform operates as a virtual M-100 and shares the same log ingestion rate. Adding resources allows the virtual Panorama appliance to scale both its ingestion rate and its management capabilities. The minimum requirements for a Panorama virtual appliance running 8.0 are 8 vCPUs and 16 GB of vRAM.

When to choose the virtual appliance?
The customer has a large VMware infrastructure that the security team has access to.
The customer is using dedicated log collectors and is not running in mixed mode.
When not to choose the virtual appliance?
The server team and security team are separate and do not want to share infrastructure.
The customer has no virtual infrastructure.

M-100 Hardware Platform

This platform has dedicated hardware and can handle up to 15 concurrent administrators. When in mixed mode, it is capable of ingesting 10,000 - 15,000 logs per second.
When to choose the M-100?
The customer needs a dedicated platform but is very price sensitive.
The customer is using dedicated log collectors, is not in mixed mode, and does not have VM infrastructure.
When not to choose the M-100?
Dual power supplies are required.
Mixed mode with more than 10,000 logs/s, or more than 8 TB required for log retention.
More than 15 concurrent administrators.

M-500 Hardware Platform

This platform has the highest log ingestion rate, even when in mixed mode. The higher resource availability handles larger configurations and more concurrent administrators (15-30). It offers dual power supplies and has a strong growth roadmap.
When to choose the M-500?
The customer needs a dedicated platform and has a large or growing deployment.
The customer is using mixed mode with more than 10,000 logs/s.
The customer wants to future-proof their investment.
The customer needs a dedicated appliance but has more than 15 concurrent administrators.
Dual power supplies are required.
When not to choose the M-500?
The customer has a VM-first environment and does not need more than 48 TB of log storage.
The customer is very price sensitive.

High Availability

This section addresses design considerations when planning for a high availability deployment. Panorama high availability is Active/Passive only, and both appliances need to be fully licensed. There are two aspects to high availability when deploying the Panorama solution: device management and logging. The two aspects are closely related, but each has specific design and configuration requirements.

Device Management HA: The ability to retain device management capabilities upon the loss of a Panorama device (either an M-Series or virtual appliance).
Logging HA or Log Redundancy: The ability to retain firewall logs upon the loss of a Panorama device (M-Series only).

Device Management HA

When deploying the Panorama solution in a high availability design, many customers choose to place HA peers in separate physical locations. From a design perspective, there are two factors to consider when deploying a pair of Panorama appliances in a high availability configuration: network latency and throughput.

Network Latency

The latency of intervening network segments affects the control traffic between the HA members. HA-related timers can be adjusted to the needs of the customer deployment; the maximum recommended value is 1,000 ms.

Preemption Hold Time: If the Preemptive option is enabled, the Preemption Hold Time is the amount of time the passive device will wait before taking the active role. In this case, both devices are up, and the timer applies to the device with the "Primary" priority.
Promotion Hold Time: The promotion hold timer specifies the interval that the Secondary device will wait before assuming the active role. In this case, there has been a failure of the primary device and this timer applies to the Secondary device.
Hello Interval: This timer defines the number of milliseconds between Hello packets sent to the peer device. Hello packets are used to verify that the peer device is operational.
Heartbeat Interval: This timer defines the number of milliseconds between ICMP messages sent to the peer. Heartbeat packets are used to verify that the peer device is reachable.

Relation between network latency and heartbeat interval: Because the heartbeat is used to determine reachability of the HA peer, the Heartbeat Interval should be set higher than the latency of the link between the HA members.

HA Timer Presets

While customers can set their HA timers specifically to suit their environment, Panorama also has two sets of preconfigured timers that the customer can use. These presets cover the majority of customer deployments.

Recommended:
Timer | Setting
Preemption Hold Time | 1
Hello Interval | 8000
Heartbeat Interval | 2000
Monitor Fail Hold Up Time | 0
Additional Master Hold Up Time | 7000

Aggressive:
Timer | Setting
Preemption Hold Time | 500
Hello Interval | 8000
Heartbeat Interval | 1000
Monitor Fail Hold Up Time | 0
Additional Master Hold Up Time | 5000

Configuration Sync

HA Sync Process

The HA sync process occurs on Panorama when a change is made to the configuration on one of the members in the HA pair. When a change is made and committed on the Active-Primary, it sends a message to the Active-Secondary that the configuration needs to be synchronized. The Active-Secondary sends back an acknowledgement that it is ready, and the Active-Primary then sends the configuration to the Active-Secondary. The Active-Secondary merges the configuration sent by the Active-Primary and enqueues a job to commit the changes. This process must complete within three minutes of the HA-Sync message being sent from the Active-Primary Panorama. The main concerns are the size of the configuration being sent and the effective throughput of the network segment(s) that separate the HA members.

Log Availability

The other piece of the Panorama high availability solution is providing availability of logs in the event of a hardware failure. There are two methods for achieving this when using a log collector infrastructure (either dedicated or in mixed mode).

Log Redundancy

PAN-OS 7.0 and later include an explicit option to write each log to two log collectors in the log collector group. When this option is enabled, a device sends its logs to its primary log collector, which then replicates each log to another collector in the same group.

Log duplication ensures that there are two copies of any given log in the log collector group. This is a good option for customers who need to guarantee log availability at all times. Things to consider:

1. The replication only takes place within a log collector group.
2. The overall available storage space is halved (because each log is written twice).
3. The overall log ingestion rate will be reduced by up to 50%.

Log Buffering

Firewalls require an acknowledgement from the Panorama platform to which they forward logs.
This means that in the event that a firewall's primary log collector becomes unavailable, the logs are buffered and sent when the collector comes back online. There are two methods to buffer logs. The first method is to configure a separate log collector group for each log collector.

In this situation, if Log Collector 1 goes down, Firewall A and Firewall B will each store their logs on their own local log partition until the collector is brought back up. The local log partition sizes for current firewall models are:

Model | Log Partition Size (GB)
PA-200 | 2.4
PA-220 | 32
PA-800 Series | 172
PA-3000 Series | 90
PA-3200 Series | 125
PA-5000 Series | 88
PA-5200 Series | 1800

The second method is to place multiple log collectors into a group. In this scenario, the firewall can be configured with a priority list: if the primary log collector goes down, the second collector on the list buffers the logs until all of the collectors in the group know that the primary collector is down, at which time new logs stop being assigned to the down collector.

In the architecture shown below, Firewall A and Firewall B are configured to send their logs to Log Collector 1 primarily, with Log Collector 2 as a backup. If Log Collector 1 becomes unreachable, the devices send their logs to Log Collector 2. Collector 2 buffers the logs that are to be stored on Collector 1 until it can pull Collector 1 out of the rotation.

Considerations for Log Collector Group Design

There are three primary reasons for configuring log collectors in a group:

Greater log retention is required for a specific firewall (or set of firewalls) than can be provided by a single log collector (to scale retention).
Greater ingestion capacity is required for a specific firewall than can be provided by a single log collector (to scale ingestion).
Log redundancy is required.

When considering the use of log collector groups, there are a couple of considerations that need to be addressed at the design stage:

Spread ingestion across the available collectors: Multiple device forwarding preference lists can be created. This allows ingestion to be handled by multiple collectors in the collector group. For example, preference list 1 can contain half of the firewalls and list collector 1 as the primary and collector 2 as the secondary, while preference list 2 contains the remainder of the firewalls and lists collector 2 as the primary and collector 1 as the secondary.
Latency matters: Network latency between collectors in a log collector group is an important factor in performance. A general design guideline is to keep all collectors that are members of the same group close together. The following table provides an idea of what you can expect at different latency measurements with redundancy enabled and disabled. In this case, "Log Delay" is the undesired result of high latency: logs don't show up in the UI until well after they are sent to Panorama.

Inter-LC Latency (ms) | Log Rate | Redundancy Enabled | Log Delay
50 | 10K | No | No
100 | 5K | No | No
100 | 10K | No | Yes
50 | 5K | Yes | No
50 | 10K | Yes | Yes
100 | 5K | Yes | No
150 | 3K | Yes | No
150 | 5K | Yes | Yes

Using the Sizing Worksheet

The information that you will need includes the desired retention period and the average log rate.

Retention Period: Number of days that logs need to be kept.
Average Log Rate: The measured or estimated aggregate log rate.
Redundancy Required: Check this box if log redundancy is required.
Storage for Detailed Logs: The amount of storage (in gigabytes) required to meet the retention period for detailed logs.
Total Storage Required: The total storage (in gigabytes) to be purchased. This accounts for all log types at the default quota settings.

Example Use Cases
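As a worked version of the storage equation above, the sketch below computes the detailed-log storage and the total storage to purchase. It is a minimal illustration: the 1,500 logs/s rate and 30-day retention are example inputs, while the 500 bytes/log figure and the 60% detailed-log quota come from the sections above.

# Worked sizing example (illustrative): storage for detailed logs and total
# storage to purchase, using 500 bytes/log and the 60% detailed-log quota.
SECONDS_PER_DAY = 86_400
BYTES_PER_LOG = 500            # aggregated on-disk size per log (PAN-OS 8.0)
DETAILED_LOG_QUOTA = 0.60      # default share of storage for detailed logs

def required_storage_tb(log_rate_lps, retention_days, redundancy=False):
    """Return (detailed-log storage, total storage to purchase) in TB."""
    detailed_bytes = log_rate_lps * SECONDS_PER_DAY * retention_days * BYTES_PER_LOG
    if redundancy:
        detailed_bytes *= 2    # each log is written to two collectors
    total_bytes = detailed_bytes / DETAILED_LOG_QUOTA
    return detailed_bytes / 1e12, total_bytes / 1e12

detailed_tb, total_tb = required_storage_tb(log_rate_lps=1500, retention_days=30)
print(f"Detailed logs: {detailed_tb:.2f} TB, total storage to purchase: {total_tb:.2f} TB")
# Prints: Detailed logs: 1.94 TB, total storage to purchase: 3.24 TB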
View full article
cstancill ‎07-12-2018 03:14 PM
93,115 Views
9 Replies
10 Likes
For Panorama 7.0, refer to the Panorama Administrator's Guide for the procedures to Configure Log Forwarding, Add a Firewall as a Managed Device, and Analyze Log Data for the PA-7050 firewall and other firewall platforms.

Details
A PA-7000 Series firewall is configured as a Panorama managed device. Panorama will display logs (traffic logs) for the PA-7000 Series even if no "Log Forwarding Profile" is defined or configured on any security policy.

The following examples show traffic observed on Panorama even though there is no Log Forwarding Profile on the PA-7000 Series. Shown below is traffic observed on Panorama for rule "ANY" on the PA-7000 Series.

In the example below, changing context to the PA-7000 Series reveals that no forwarding profile is configured on the security policy "ANY".

As shown below, no Log Forwarding profile is configured on the PA-7000 Series.

What is observed in Panorama is a real-time query running from the management port on Panorama to the PA-7000 Series, which results in the logs being displayed.

Note: The logs physically reside only on the PA-7000 Series. This is because Panorama cannot handle the rate at which a PA-7000 Series can send its logs out of the box, so log offloading from this platform to Panorama is not supported.

However, the PA-7000 Series does support offloading its logs to syslog, email, and SNMP servers. The PA-7000 Series has a dedicated Log Processing Card (LPC), and any unused port on any of the NPCs can be configured with the interface type "Log Card". A data port configured as type Log Card performs log forwarding for all of the following:
Syslog
Email
SNMP
WildFire file forwarding

Only one port on the firewall can be configured as a Log Card interface, and a commit error is displayed if log forwarding is enabled and there is no interface configured with the interface type "Log Card". Make sure that the IP address assigned to the Log Card interface can reach the syslog, email, SNMP, and/or WildFire servers.

Special Note
This limitation was overcome with the release of PAN-OS 8.0. For more information, please refer to:

https://www.paloaltonetworks.com/documentation/80/pan-os/newfeaturesguide/management-features/pa-7000-series-firewall-log-forwarding-to-panorama

https://live.paloaltonetworks.com/t5/Featured-Articles/PAN-OS-8-0-Forwarding-PA-7000-Logs-to-Panorama/ta-p/132063
View full article
mivaldi ‎07-11-2018 09:58 AM
31,733 Views
6 Replies
2 Likes
How to collect logs from the different GlobalProtect clients (Windows and Mac).
View full article
sraghunandan ‎05-30-2018 03:39 PM
31,197 Views
5 Replies
1 Like
Details
Verify the logs are being written. Run the following commands from the CLI:

> show log traffic direction equal backward
> show log threat direction equal backward
> show log url direction equal backward
> show log system direction equal backward

If logs are being written to the Palo Alto Networks device, then the issue may be display related in the WebGUI. Run the following command from the CLI:

> debug software restart process management-server

Note: restarting the management-server process will reset your SSH connection.

owner: bryan
View full article
panagent ‎04-04-2018 01:06 AM
45,257 Views
3 Replies
Palo Alto Networks automation beginning with PAN-OS 8.0, using Dynamic Address Groups (DAG). This capability is an important tool for building a self-defending data center without having to apply policies manually.
View full article
MarceloRey ‎02-06-2018 12:49 AM
3,217 Views
0 Replies
2 Likes
This article discusses the change in behaviour from PAN-OS 7.0 and higher, where the 'deny' action in a security policy results in the application-specific 'deny' action.

From the PAN-OS 7.0 branch onwards, the 'deny' policy action is logged according to the default deny action for the application. For example, the default deny action for the application 'SSL' is 'drop-reset', which is listed in the traffic logs as 'reset-both'.

To check the default 'deny' action of an application, refer to Applipedia or Objects > Applications in the firewall GUI.

Below is an example showing the action 'Deny' for the application 'SSL'. Note that the 'Deny Action' for application SSL is 'drop-reset'.

In the previous PAN-OS version, 6.1, the action listed for a security policy with action 'deny' is seen as 'deny' itself.

NOTE: The above change in behaviour for the action 'deny' may result in logs and reports capturing results with the action 'reset-both'. This is expected behaviour.

For more details on the change in security policy actions and options, please refer to:

Granular Actions for Blocking Traffic in Security Policy
Configurable Deny Action

Applicable actions with all available options:

1. Action 'Deny'
2. Action 'Allow'
3. Action 'Drop'
4. Action 'Reset-client'
5. Action 'Reset-server'
6. Action 'Reset both client and server'
View full article
syadav ‎01-08-2018 06:53 AM
5,471 Views
0 Replies
Question
Why is it that when I use the command ">scp export log traffic query start-time equal <time stamp> end-time equal <time stamp> to <location>" on a firewall I can get a CSV file with more than 1 million lines, but when the command is run on Panorama I only get a maximum of 65,535 lines?

Answer
Because of the distributed nature of the Panorama and PA-7000 platforms, a log query may have to access several sources and potentially sift through terabytes of data to build the export. This can cause performance degradation, since the management plane is taxed, as well as network congestion in distributed collector environments. For this reason, the log export capability is limited to 65,535 lines by default on these platforms. The total number of exported lines can be increased to 1 million by setting the max-log-count parameter.

This limitation is not imposed on firewall platforms, as they store their logs on a single disk with limited storage capacity, making a large query less resource intensive. Log export on a firewall system is limited to 4 billion lines.

If logs need to be routinely exported from Panorama, consider Configure Log Forwarding from Panorama to External Destinations.
View full article
zsanem ‎12-14-2017 06:45 AM
2,306 Views
0 Replies
The following CLI command will allow you to export the logged data from Panorama:

> scp export logdb to username@host:path

Note that you need to include a filename in the destination path. An example:

> scp export logdb to user@10.192.0.32:/Users/user/filename.tar.gz
Password:*******
View full article
nrice ‎12-13-2017 01:35 AM
5,626 Views
4 Replies
After upgrading PAN-OS to 6.0.x or later in the Panorama appliance and the Log Collector, we’re now unable to see the system log sent from the Log Collector in the Panorama appliance. This scenario seems to occur if the Local Log Collector configuration is deleted before upgrading PAN-OS. Any help?
View full article
TShimizu ‎08-16-2017 03:07 AM
11,008 Views
1 Reply
1 Like
Overview
Panorama saves a backup of every committed configuration from each device it manages. In addition, Panorama saves copies of its own committed configurations. To facilitate off-box backup requirements, the system supports a method to regularly export these backups to an external data store. This document describes the steps to back up Panorama.

Steps

Managing device backups from the Web UI
To manage device backups on Panorama:
Go to Panorama > Managed Devices.
Click Manage in the Backups column for a device. This brings up a window showing the saved and committed configurations for the device.
Click Load to restore the selected configuration to the device.
To remove a saved configuration, click the corresponding delete icon.

Managing Panorama configuration backups from the GUI
Go to Panorama > Setup > Operations.
Click "Export named Panorama configuration snapshot" or "Export Panorama configuration version" under the Configuration Management section.
Select the configuration from the configuration drop-down list in the pop-up window.
Click OK.

Manual export and import of the Panorama configuration from the CLI
On the CLI, the commands below will export the Panorama configuration:

> tftp export configuration
with the following parameters:
  remote-port    tftp server port
  source-ip      Set source address to specified interface address
  from           from
  to             tftp host

> scp export configuration
with the following parameters:
  remote-port    SSH port number on remote host
  source-ip      Set source address to specified interface address
  from           from
  to             destination (username@host:path)

To import Panorama's configuration from the CLI, use the following command:
> tftp import configuration from 1.2.3.4 file c:/a/b/c

To export Panorama's entire log database, use the following command:
> scp export logdb to <value>   (Destination: username@host:path)

To import a Panorama log database, use the following command:
> scp import logdb from userabc@1.2.4.3:c/a/b/c

Note: For logdbs larger than 2 TB, the export will fail. The failure is caused by a lack of /tmp space for the tar/archive operation.

owner: swhyte
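For scheduled off-box backups, the same configuration export can be scripted against the PAN-OS XML API. The sketch below is a minimal illustration only: the hostname, API key, and output path are placeholders, and the export parameters should be verified against the XML API guide for your PAN-OS version.

# Minimal sketch: pull the running configuration from Panorama via the XML API
# for off-box backup. PANORAMA_HOST, API_KEY, and the output file are placeholders.
import requests

PANORAMA_HOST = "panorama.example.com"
API_KEY = "REPLACE_WITH_API_KEY"

def export_running_config(outfile: str) -> None:
    """Save the running configuration XML to a local file."""
    resp = requests.get(
        f"https://{PANORAMA_HOST}/api/",
        params={"type": "export", "category": "configuration", "key": API_KEY},
        verify=True,   # point this at a CA bundle if using a private CA
        timeout=60,
    )
    resp.raise_for_status()
    with open(outfile, "wb") as f:
        f.write(resp.content)

if __name__ == "__main__":
    export_running_config("panorama-running-config.xml")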
View full article
nrice ‎08-16-2017 02:53 AM
18,390 Views
5 Replies
1 Like
This procedure applies to PAN-OS versions 8.0 and below.

Scenario
A standalone M-500 Panorama in hybrid mode (Panorama device management and a local Log Collector configured) has a hardware issue that requires chassis replacement. The M-500 uses 8 disk pairs for storing the logs received from managed devices.

Naming convention
The faulty M-500 device to be replaced will be called "Old-M-500". The newly received replacement device will be called "New-M-500". You can use any names desired in your environment; these names are used for easier understanding of the operations in the procedure.

Requirements
In order to replace the faulty chassis Old-M-500, we need to have the configuration saved so that we can import it on the New-M-500. The configuration can be exported by following the procedure in this Live article: How to Back Up Panorama, or by following the administrator manual: Export Panorama and Firewall Configurations. The Old-M-500 has 8 disk pairs that will be moved to the New-M-500.

Procedure details
1) Power down the failed M-500 platform, Old-M-500 (see Shutdown Panorama).

2) Configure the New-M-500 in Panorama mode. Import the configuration exported from the faulty device: import the Old-M-500 exported configuration into the New-M-500, load the named imported configuration, and modify the hostname from Old-M-500 to New-M-500. Commit the configuration to Panorama.

3) Take the primary disks from Old-M-500 (A1, B1, C1, D1, E1, F1, G1, H1) and move them to the same primary positions in New-M-500 (A1, B1, C1, D1, E1, F1, G1, H1). Check the M-500 hardware documentation for correct identification of the disks: M-500 Hardware Guide. The picture below shows the physical positioning of the drives inside the M-500 devices.

On the New-M-500 we are going to add the primary log disks to RAID using CLI commands. We must use the "force" and "no-format" options: the "force" option associates a disk pair that was previously associated with another Log Collector, and the "no-format" option keeps the logs by not formatting the disk storage. In this step we are going to add the primary log disks only.

The secondary log disks will be added towards the end of the procedure. This is done because the secondary log disks serve as a data backup and we do not want to use them until the migration of logs is confirmed.

In our example we have 8 active RAID pairs (A, B, C, D, E, F, G, H).
The full list of commands to attach the 8 primary disks is:

admin@New-M-500> request system raid add A1 force no-format
admin@New-M-500> request system raid add B1 force no-format
admin@New-M-500> request system raid add C1 force no-format
admin@New-M-500> request system raid add D1 force no-format
admin@New-M-500> request system raid add E1 force no-format
admin@New-M-500> request system raid add F1 force no-format
admin@New-M-500> request system raid add G1 force no-format
admin@New-M-500> request system raid add H1 force no-format

4) Check the disk addition by verifying the RAID status:

> show system raid detail

Example output for the 8 primary disks after the add operation ends:

admin@New-M-500> show system raid detail
Disk Pair A                           Available
   Status                       clean, degraded
   Disk id A1                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : active sync
   Disk id A2                           Missing
Disk Pair B                           Available
   Status                       clean, degraded
   Disk id B1                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : active sync
   Disk id B2                           Missing
....
Disk Pair G                           Available
   Status                       clean, degraded
   Disk id G1                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : active sync
   Disk id G2                           Missing
Disk Pair H                           Available
   Status                       clean, degraded
   Disk id H1                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : active sync
   Disk id H2                           Missing

To follow the state of the addition, you can check the management plane raid.log debug log through the CLI:

> tail lines 120 mp-log raid.log

This command shows the last 120 lines, which contain all the logs necessary to check the disk operations.

Mar 20 00:01:37 DEBUG: raid_util: argv: ['GetArrayId', 'A1']
Mar 20 00:01:37 DEBUG: raid_util: argv: ['Add', 'A1', 'force', 'no-format', 'verify']
Mar 20 00:01:37 DEBUG: Verifying drive A1 to be added.
Mar 20 00:01:37 DEBUG: create_md 1, sdb
Mar 20 00:01:38 DEBUG: raid_util: argv: ['Add', 'A1', 'force', 'no-format']
Mar 20 00:01:38 INFO: Adding drive A1 (sdb)
Mar 20 00:01:38 DEBUG: create_md 1, sdb
Mar 20 00:01:38 DEBUG: create_md_paired_drive 1, sdb, no_format=True
Mar 20 00:01:38 DEBUG: Mounting Disk Pair A (/dev/md1)
Mar 20 00:01:38 DEBUG: set_drive_pairing_one 1
Mar 20 00:01:38 INFO: New Disk Pair A detected.
Mar 20 00:01:38 DEBUG: Created Disk Pair A (/dev/md1) from A1 (/dev/sdb1)
Mar 20 00:01:38 INFO: Done Adding drive A1
...
Mar 20 00:02:41 DEBUG: raid_util: argv: ['GetArrayId', 'H1']
Mar 20 00:02:41 DEBUG: raid_util: argv: ['Add', 'H1', 'force', 'no-format', 'verify']
Mar 20 00:02:41 DEBUG: Verifying drive H1 to be added.
Mar 20 00:02:41 DEBUG: create_md 8, sdp
Mar 20 00:02:41 DEBUG: raid_util: argv: ['Add', 'H1', 'force', 'no-format']
Mar 20 00:02:41 INFO: Adding drive H1 (sdp)
Mar 20 00:02:41 DEBUG: create_md 8, sdp
Mar 20 00:02:41 DEBUG: create_md_paired_drive 8, sdp, no_format=True
Mar 20 00:02:42 DEBUG: Mounting Disk Pair H (/dev/md8)
Mar 20 00:02:42 DEBUG: set_drive_pairing_one 8
Mar 20 00:02:42 INFO: New Disk Pair H detected.
Mar 20 00:02:42 DEBUG: Created Disk Pair H (/dev/md8) from H1 (/dev/sdp1)
Mar 20 00:02:42 INFO: Done Adding drive H1

5) The next step is to regenerate the log disks' metadata for each RAID disk slot. Note: this command can take a long time to finish, depending on the amount of data stored on the disks, because it rebuilds all of the log indexes.

> request metadata-regenerate slot 1
> request metadata-regenerate slot 2
> request metadata-regenerate slot 3
> request metadata-regenerate slot 4
> request metadata-regenerate slot 5
> request metadata-regenerate slot 6
> request metadata-regenerate slot 7
> request metadata-regenerate slot 8

Sample output:

Bringing down vld: vld-0-0
Process 'vld-0-0' executing STOP
Removing old metadata from /opt/pancfg/mgmt/vld/vld-0
Process 'vld-0-0' executing START
Done generating metadata for LD:1
....
admin@New-M-500> request metadata-regenerate slot 8
Bringing down vld: vld-7-0
Process 'vld-7-0' executing STOP
Removing old metadata from /opt/pancfg/mgmt/vld/vld-7
Process 'vld-7-0' executing START
Done generating metadata for LD:8

You can check the status of the metadata regeneration by opening a new CLI window and running the command to follow the vldmgr.log debug log file:

> tail lines 100 follow yes mp-log vldmgr.log

This command shows the last 100 lines and then follows the logfile vldmgr.log. Sample output:

2017-03-19 23:38:42.836 -0700 sysd send 'stop LD:1 became unavailable' to 'vld-0-0' vldmgr:vldmgr
2017-03-19 23:38:43.185 -0700 Error:  _process_fd_event(pan_vld_mgr.c:2113): connection failed on fd:13 for cs:vld-0-0
2017-03-19 23:38:43.185 -0700 Sending to MS new status for slot 0, vldid 1280: not online
2017-03-19 23:38:43.185 -0700 setting LD refcount in var:runtime.ld-refcount.LD1 to 0. create:false
2017-03-19 23:38:46.186 -0700 vldmgr vldmgr diskinfo cb from sysd
....
2017-03-20 00:20:56.792 -0700 setting LD refcount in var:runtime.ld-refcount.LD7 to 2. create:false
2017-03-20 00:20:56.792 -0700 Sending to MS new status for slot 6, vldid 1286: online
2017-03-20 00:20:56.905 -0700 connection failed for err 111 with vld-7-0. Will start retry 3 in 2000
2017-03-20 00:20:58.907 -0700 connection failed for err 111 with vld-7-0. Will start retry 4 in 2000
2017-03-20 00:21:00.908 -0700 Connection to vld-7-0 established
2017-03-20 00:21:00.908 -0700 connect(2) succeeded on fd:20 for cs:vld-7-0
2017-03-20 00:21:00.908 -0700 setting LD refcount in var:runtime.ld-refcount.LD8 to 2. create:false
2017-03-20 00:21:00.908 -0700 Sending to MS new status for slot 7, vldid 1287: online

6) On the New-M-500, add a new local Collector. Click Add under Panorama > Managed Collectors to add a new Collector. Under the General tab, enter the serial number of the New-M-500 device that we are moving the disks to. (A visual example can be found below.) We will add the disks to the New-M-500 Log Collector in a later step.

7) Check the status of the new Log Collector. Check for the following things in the output of the command:
a. The Connected status should display "yes"
b. The disk capacity should display the correct size
c. The disk pairs will display as "Disabled", but this is expected behavior at this stage of the RMA process.

> show log-collector serial-number <serial-number-of-New-M-500>

Sample output:

> show log-collector serial-number 007307000539
Serial           CID      Hostname           Connected    Config Status    SW Version         IPv4 - IPv6
---------------------------------------------------------------------------------------------------------
007307000539     0        M-500_LAB          yes          Out of Sync      7.1.7              10.193.81.241 - unknown
Redistribution status:       none
Last commit-all: commit succeeded, current ring version 0, md5sum updated at ?
Raid disks
DiskPair A: Disabled,  Status: Present/Available,  Capacity: 870 GB
DiskPair B: Disabled,  Status: Present/Available,  Capacity: 870 GB
DiskPair C: Disabled,  Status: Present/Available,  Capacity: 870 GB
DiskPair D: Disabled,  Status: Present/Available,  Capacity: 870 GB
DiskPair E: Disabled,  Status: Present/Available,  Capacity: 870 GB
DiskPair F: Disabled,  Status: Present/Available,  Capacity: 870 GB
DiskPair G: Disabled,  Status: Present/Available,  Capacity: 870 GB
DiskPair H: Disabled,  Status: Present/Available,  Capacity: 870 GB

8) Add the disks to the New-M-500 Log Collector configuration: go to Panorama > Managed Collectors, click on the name of the Log Collector (e.g., New-M-500), click on the Disks tab, and add all the disks that were moved to the New-M-500 device (e.g., A, B, C, D, E, F, G, H).

9) On the New-M-500, add the new local Log Collector that we have created to the existing Collector Group that the Old-M-500 was part of. In this example, the Old-M-500 log collector was part of the "default" Collector Group. Add the New-M-500 log collector where the Old-M-500 log collector was present.

10) Delete the failed Log Collector from the Collector Group. In the WebGUI, go to Panorama > Collector Groups and select the Collector Group where the New-M-500 is configured. In the Collector Group popup, select the "Device Log Forwarding" tab and delete all references to the serial number of the failed Old-M-500.

11) Issue a Panorama commit only.

12) Issue a commit to the Collector Group.

13) Check that the old logs are visible.

14) Add the spare disks to RAID in order to rebuild full RAID redundancy, now that the log migration has been confirmed. Physically move disks A2, B2, C2, D2, E2, F2, G2, H2 from the Old-M-500 to positions A2, B2, C2, D2, E2, F2, G2, H2 in the New-M-500.

Check that the disks are available to be added to RAID:

> show system raid detail

The newly added disks will be in the state "Present" with status "not in use".

admin@New-M-500> show system raid detail

Sample output:

Disk Pair A                           Available
   Status                       clean, degraded
   Disk id A1                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : active sync
   Disk id A2                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : not in use
....
Disk Pair H                           Available
   Status                       clean, degraded
   Disk id H1                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : active sync
   Disk id H2                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : not in use

15) Add the secondary disks (A2, B2, C2, D2, E2, F2, G2, H2) to RAID using the following commands:

> request system raid add A2 force
> request system raid add B2 force
> request system raid add C2 force
> request system raid add D2 force
> request system raid add E2 force
> request system raid add F2 force
> request system raid add G2 force
> request system raid add H2 force

Note: Each command prompts "Executing this command may delete all data on the drive being added. Do you want to continue? (y or n)". Press "y" to accept.

After these commands, the RAID pairs go into a "spare rebuild" operation. Please note that this may be a lengthy operation; it runs in the background until it ends. During this time, logging to the Collector Group is on hold. Once the operation is over, log forwarding to the New-M-500 resumes. You can check the status of the operation by running the command:

> show system raid detail

Sample output:

> show system raid detail
Disk Pair A                           Available
   Status     clean, degraded, recovering (2% complete)
   Disk id A1                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : active sync
   Disk id A2                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : spare rebuilding
....
Disk Pair H                           Available
   Status     clean, degraded, recovering (0% complete)
   Disk id H1                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : active sync
   Disk id H2                           Present
       model        : ST91000640NS
       size         : 953869 MB
       status       : spare rebuilding

16) Once the spare rebuild operation is finished, the New-M-500 is fully operational and the RMA process is done.
View full article
bbolovan ‎08-07-2017 01:02 AM
7,680 Views
2 Replies
2 Likes
Problem
URLs in the "URL" field of URL Filtering logs do not include the port number when the accessed URL uses a port other than 80 or 443.

The corresponding logs sent to the syslog server:

Jul 18 13:30:04 Lab130-35-PA-3060 1,2017/07/18 13:30:03,010401000897,THREAT,url,1,2017/07/18 13:30:03,192.168.35.110,10.128.128.207,10.128.128.35,10.128.128.207,Trust-to-Untrust,,,web-browsing,vsys1,L3-Trust,L3-Untrust,ethernet1/6,ethernet1/3,test,2017/07/18 13:30:03,20381,1,16871,8888,21504,8888,0x408000,tcp,alert,10.128.128.207/,(9999),test8888,informational,client-to-server,3628,0x0,192.168.0.0-192.168.255.255,10.0.0.0-10.255.255.255,0,text/html,0,,,1,,,,,,,,0
Jul 18 13:30:06 Lab130-35-PA-3060 1,2017/07/18 13:30:05,010401000897,THREAT,url,1,2017/07/18 13:30:05,192.168.35.110,10.128.128.207,10.128.128.35,10.128.128.207,Trust-to-Untrust,,,web-browsing,vsys1,L3-Trust,L3-Untrust,ethernet1/6,ethernet1/3,test,2017/07/18 13:30:05,20388,1,16872,80,44827,80,0x408000,tcp,alert,10.128.128.207/,(9999),test8888,informational,client-to-server,3629,0x0,192.168.0.0-192.168.255.255,10.0.0.0-10.255.255.255,0,text/html,0,,,1,,,,,,,,0

Resolution
After PAN-OS 7.0, this field's output changed: the port number is shown in the URL field when the accessed URL uses a port other than 80 or 443.

The corresponding logs sent to the syslog server also include the port number in the field:

Jul 18 13:50:37 Lab130-35-PA-3060 1,2017/07/18 13:50:36,010401000897,THREAT,url,1,2017/07/18 13:50:36,192.168.35.110,10.128.128.207,10.128.128.35,10.128.128.207,Trust-to-Untrust,,,web-browsing,vsys1,L3-Trust,L3-Untrust,ethernet1/6,ethernet1/3,test,2017/07/18 13:50:36,11,1,16968,80,46822,80,0x408000,tcp,alert,10.128.128.207/,(9999),test8888,informational,client-to-server,3631,0x0,192.168.0.0-192.168.255.255,10.0.0.0-10.255.255.255,0,text/html,0,,,1,,,,,,,,0,0,0,0,0,,Lab130-35-PA-3060,
Jul 18 13:50:39 Lab130-35-PA-3060 1,2017/07/18 13:50:38,010401000897,THREAT,url,1,2017/07/18 13:50:38,192.168.35.110,10.128.128.207,10.128.128.35,10.128.128.207,Trust-to-Untrust,,,web-browsing,vsys1,L3-Trust,L3-Untrust,ethernet1/6,ethernet1/3,test,2017/07/18 13:50:38,21,1,16969,8888,55926,8888,0x508000,tcp,alert,10.128.128.207:8888/,(9999),test8888,informational,client-to-server,3632,0x0,192.168.0.0-192.168.255.255,10.0.0.0-10.255.255.255,0,text/html,0,,,1,,,,,,,,0,0,0,0,0,,Lab130-35-PA-3060,

Note: URL filtering entries (Objects > Custom Objects > URL Category) do not need to include a port number.
View full article
dyamada ‎07-27-2017 05:04 AM
4,301 Views
0 Replies
Overview
For IP-to-user mappings, many networks have more than one monitored Active Directory domain controller for data redundancy. Troubleshooting user mapping issues can be harder if the source of a particular user mapping is unknown. This document shows how to use the "show log userid" command to obtain useful information about user mappings, including how a mapping was learned by the firewall.

Steps
As an example, one User-ID agent (Agent243) and one agentless User-ID source (Agentless243) are configured on the firewall.

Verify the configured sources from which you are learning user mappings.
For User-ID agents hosted on a Windows machine, use the command:
> show user user-id-agent statistics
For agentless User-ID configured on the firewall, use the following command:
> show user server-monitor statistics

Verify the user mappings that are currently learned on the firewall, using either of these commands.
For all known mappings on the firewall:
> show user ip-user-mapping all
For user mappings to a specific IP, for example 1.1.1.1:
> show user ip-user-mapping ip 1.1.1.1

Once you know enough about the configured data sources or users, you can use the "show log userid" command to derive more useful information about the user mappings.
Note: Debug mode should be enabled on the User-ID process for in-depth logging.
Enable debug mode:
> debug user-id log-ip-user-mapping yes
Disable debug mode after acquiring the desired logs:
> debug user-id log-ip-user-mapping no

Examples of using the show log userid command:

Determine the most recent addresses learned from the agentless User-ID source:
> show log userid datasourcename equal Agentless243 direction equal backward
Domain,Receive Time,Serial #,Type,Threat/Content Type,Config Version,Generate Time,Virtual System,ip,User,datasourcename,eventid,Repeat Count,timeout,beginport,endport,datasource,datasourcetype,seqno,actionflags
1,2013/10/17 17:31:05,0006C114479,USERID,login,4,2013/10/17 17:31:05,vsys1,10.66.22.60,plano2008r2\userid,Agentless243,0,1,2700,0,0,active-directory,unknown,4434,0x0
1,2013/10/17 17:29:58,0006C114479,USERID,login,4,2013/10/17 17:29:58,vsys1,10.66.22.85,plano2008r2\ldapsvc,Agentless243,0,1,2700,0,0,active-directory,unknown,4342,0x0

Determine the most recent mappings received for IP address 192.168.40.212:
> show log userid ip in 192.168.40.212 direction equal backward
Domain,Receive Time,Serial #,Type,Threat/Content Type,Config Version,Generate Time,Virtual System,ip,User,datasourcename,eventid,Repeat Count,timeout,beginport,endport,datasource,datasourcetype,seqno,actionflags
1,2013/10/17 17:09:33,0006C114479,USERID,login,3,2013/10/17 17:09:33,vsys1,192.168.40.212,plano2008r2\tasonibare,Agent243,0,1,3600,0,0,agent,unknown,18,0x0

Determine the mappings that were identified through Kerberos authentication:
> show log userid datasourcetype equal kerberos

Determine the earliest mappings received for user 'piano2008r2\userid':
> show log userid user equal 'piano2008r2\userid'
Domain,Receive Time,Serial #,Type,Threat/Content Type,Config Version,Generate Time,Virtual System,ip,User,datasourcename,eventid,Repeat Count,timeout,beginport,endport,datasource,datasourcetype,seqno,actionflags
1,2013/10/17 17:09:33,0006C114479,USERID,login,3,2013/10/17 17:09:33,vsys1,10.66.22.87,piano2008r2\userid,Agent243,0,1,3600,0,0,agent,unknown,8,0x0
1,2013/10/17 17:11:54,0006C114479,USERID,login,4,2013/10/17 17:11:54,vsys1,10.66.22.87,piano2008r2\userid,Agentless243,0,1,2700,0,0,active-directory,
unknown,21,0x0
Note: The command above includes the domain and the username in quotes, and the direction keyword was left out. This user has been learned from both the agentless and User-ID agent sources.

Show all logs related to User-ID:
> show log userid

owner: tasonibare
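If you capture the output of "show log userid" to a file, a few lines of scripting can summarize it. The sketch below is a minimal illustration that assumes the comma-separated field layout shown above (Domain, Receive Time, Serial #, Type, ..., ip, User, datasourcename, ...); verify the field positions against your own PAN-OS version before relying on it.

# Minimal sketch: summarize saved 'show log userid' output (CSV-style lines).
# Field indices follow the header shown above (ip=8, User=9, datasourcename=10);
# these are assumptions, so confirm them against your own output first.
import csv
from collections import Counter

def summarize_userid_log(path: str) -> None:
    per_source = Counter()
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 11 or row[3] != "USERID":
                continue  # skip headers and unrelated lines
            ip, user, source = row[8], row[9], row[10]
            per_source[source] += 1
            print(f"{ip:15} {user:30} learned from {source}")
    print("Mappings per data source:", dict(per_source))

if __name__ == "__main__":
    summarize_userid_log("userid_log.txt")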
View full article
tasonibare ‎07-26-2017 03:10 PM
22,169 Views
5 Replies
1 Like
The exported logdb is stored using proprietary compression algorithms; it cannot be decompressed and viewed directly.

See also
For information on how to export and import the logdb: CLI Commands to Export/Import Configuration and Log Files
The logs can also be retrieved using the API: PAN-OS® and Panorama™ 7.1 XML API Usage Guide, PAN-OS® and Panorama™ 8.0 XML API Usage Guide

owner: sraghunandan
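If you need the log data in a readable form, retrieving it through the XML API is usually easier than working with the exported logdb. The sketch below is a minimal illustration of the asynchronous log-retrieval flow (submit a query, then fetch the job result). The hostname, API key, and query are placeholders, and the exact parameters should be checked against the XML API Usage Guide for your release.

# Minimal sketch: retrieve traffic logs via the PAN-OS/Panorama XML API instead
# of the binary logdb export. Host, key, and query below are placeholders.
import time
import requests
import xml.etree.ElementTree as ET

HOST = "panorama.example.com"
API_KEY = "REPLACE_WITH_API_KEY"

def fetch_traffic_logs(query: str, nlogs: int = 100) -> str:
    base = f"https://{HOST}/api/"
    # Step 1: enqueue the log query; the API returns a job ID.
    r = requests.get(base, params={"type": "log", "log-type": "traffic",
                                   "query": query, "nlogs": nlogs,
                                   "key": API_KEY}, timeout=60)
    r.raise_for_status()
    job_id = ET.fromstring(r.text).findtext(".//job")
    # Step 2: poll until the job is finished, then return the log XML.
    while True:
        r = requests.get(base, params={"type": "log", "action": "get",
                                       "job-id": job_id, "key": API_KEY}, timeout=60)
        r.raise_for_status()
        if ET.fromstring(r.text).findtext(".//status") == "FIN":
            return r.text
        time.sleep(2)

if __name__ == "__main__":
    print(fetch_traffic_logs("(addr.src in 10.0.0.0/8)"))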
View full article
sraghunandan ‎05-31-2017 01:42 AM
7,788 Views
2 Replies
With agentless User-ID, the user mappings are obtained directly through queries made by the firewall itself against the domain controller.

The IP-to-user mapping logs can be viewed by performing the steps below.

Steps
Turn on logging for IP-user mapping:
> debug user-id log-ip-user-mapping yes

View the log:
> show log userid
1,2013/03/28 12:53:05,001701000225,USERID,login,12,2013/03/28 12:53:05,vsys1,172.17.128.92,plano2008r2\administrator,test,0,1,2700,0,0,active-directory,unknown,1,0x0
1,2013/03/28 12:53:05,001701000225,USERID,login,12,2013/03/28 12:53:05,vsys1,172.17.128.92,plano2008r2\administrator,test,0,1,2700,0,0,active-directory,unknown,2,0x0
1,2013/03/28 12:53:05,001701000225,USERID,login,12,2013/03/28 12:53:05,vsys1,172.17.128.92,plano2008r2\administrator,test,0,1,2700,0,0,active-directory,unknown,3,0x0
1,2013/03/28 12:53:05,001701000225,USERID,login,12,2013/03/28 12:53:05,vsys1,172.17.128.92,plano2008r2\administrator,test,0,1,2700,0,0,active-directory,unknown,4,0x0

Turn off logging:
> debug user-id log-ip-user-mapping no

See also
For more information on User-ID, please see the following link: User-ID resource list

owner: anatrajan
View full article
Chatri ‎04-24-2017 03:06 AM
8,851 Views
5 Replies
Symptoms
On a daily basis, the firewall reports the following error: opaque: Failed to check Content content upgrade info due to generic communication error
Connectivity to the update server has been verified and no issues were found.

Issue
This error message occurs if two or more content updates are scheduled at the same time (for example, Antivirus and Applications and Threats).

Resolution
From the Device > Dynamic Updates page, stagger the schedules so that no two content databases update at the same time; if two update times fall close together, add an offset of a few minutes between them.

owner: rvanderveken
View full article
npare ‎03-17-2017 03:20 AM
24,786 Views
9 Replies
Steps
Note: The following example shows how to export a Traffic log. The process is similar for the other types of logs.
Go to Monitor > Logs > Traffic.
Click the Export to CSV icon. An Exporting Logs popup window is displayed.
Click Download file. A CSV file is downloaded to the local desktop.
Open the CSV file in Excel.

Note: Logs can also be exported using filters. Refer to: How to Add, Save, Load, and Clear Log Filters. Once the filter is applied, follow the same steps listed above.

Export logs to an SCP or FTP server
Run the following commands to export log files:
SCP
> scp export log traffic start-time equal 2011/12/21@12:00:00 end-time equal 2011/12/26@12:00:00 to <value> Destination (username:password@host) or (username@host)
FTP
> ftp export log traffic start-time equal 2011/12/21@12:00:00 end-time equal 2011/12/26@12:00:00 to <value> Destination (username:password@host) or (username@host)
Note: 'start-time' and 'end-time' are mandatory values when exporting from the CLI.

owner: ppatel
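Once the CSV is on a workstation, it can be summarized with a few lines of scripting. A minimal Python sketch follows (not part of the original article); the file name and the "Application"/"Action" column labels are assumptions about the GUI export and should be adjusted to match the headers in your own CSV.

import csv
from collections import Counter

# Tally an exported traffic-log CSV by application and action.
counts = Counter()
with open("traffic_log.csv", newline="") as f:   # placeholder file name
    for row in csv.DictReader(f):
        counts[(row.get("Application"), row.get("Action"))] += 1

for (app, action), count in counts.most_common(10):
    print(f"{count:8d}  {app}  {action}")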
View full article
panagent ‎02-09-2017 01:05 AM
49,286 Views
12 Replies
1 Like
FAQ (Frequently Asked Questions):
Is there a global option on the device to send all traffic logs to Panorama?
Answer: There is no global option to send all traffic logs to Panorama. Each individual policy has to be set to send logs to Panorama.

Can users logging into Panorama see all devices?
Answer: Users logging into Panorama will only see the device context for which they have access.

Will both the device and Panorama have the traffic logs?
Answer: Yes, both Panorama and the device will have the traffic logs.

How do I commit the changes to all devices at the same time, since the devices are showing out of sync?
Answer: At the bottom of the device display, choose "display by device-group". The commit will then apply the changes to all the devices in the group.

owner: mrajdev
View full article
panagent ‎01-03-2017 06:53 AM
6,108 Views
0 Replies
Details
Since PAN-OS 6.0, the CLI uses a new method of storing and retrieving pcaps. Previously, threat pcaps were stored in the PAN-OS file system as files, with a directory for each day. This limited pcap storage to around 131K pcap files per day because of file system performance limitations.

Beginning in PAN-OS 6.0, pcaps are stored in a database. Rather than identifying pcaps by a timestamp or time range, a unique "pcap id" is given to each pcap. The pcap id is stored in the associated threat log and provides a cleaner way to reference the pcap for a specific threat log entry.

The previous method for downloading all pcaps for a given day was simply a mask for retrieving all files from a particular directory (day). Because the new method of storing pcaps uses a database, there is no equivalent way to download all pcaps for a particular day.

For example:
tftp
> tftp export threat-pcap search-time "2014/08/08 13:13:13" to 192.168.0.2 pcap-id 1199650572660113409
scp
> scp export threat-pcap search-time "2014/08/08 13:13:13" to <username@host:path>

owner: kwens
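Since there is no per-day bulk download, one workaround is to script one export command per pcap id taken from an exported threat log. The Python sketch below is an illustration only (not from the article): the CSV column labels ("Receive Time", "PCAP ID"), the file name, and the SCP destination are assumptions and placeholders, and the generated command form simply mirrors the tftp example above, so verify the exact syntax on your release before use.

import csv

DEST = "user@203.0.113.5:/pcaps/"    # placeholder SCP destination

with open("threat_log.csv", newline="") as f:   # placeholder exported threat log
    for row in csv.DictReader(f):
        pcap_id = (row.get("PCAP ID") or "").strip()
        if pcap_id and pcap_id != "0":
            # Emit one export command per threat entry that has a pcap id.
            print(f'scp export threat-pcap search-time "{row["Receive Time"]}" '
                  f"to {DEST} pcap-id {pcap_id}")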
View full article
‎12-07-2016 08:56 AM
2,354 Views
0 Replies
This article explains the meaning of the message 'Unable to save content info file' seen while installing a content version on the Palo Alto Networks firewall.

This message means that information about the content, such as Features, Size, and Release Date, could not be saved on the firewall. The message is harmless; it simply means that because this information could not be saved, those fields are listed as "Unknown" under Device > Dynamic Updates.

This message is observed when dynamic updates are installed manually by downloading the file from the support website. It is generally observed in the following scenarios:

A content version is installed on the firewall for the first time, and the installation is manual
A content version is already installed on the firewall, it was also installed manually, and the upgrade to a newer content version is also done manually

The message will be observed with each manual installation unless dynamic updates are performed directly from the firewall by connecting to the update server (that is, the firewall has an internet connection and performs dynamic updates itself).
View full article
poagrawal ‎12-07-2016 07:39 AM
1,751 Views
0 Replies
Symptoms
Consider a proxy server deployed between users on a network and the firewall. In this case, the firewall shows the proxy server's IP address as the source IP address in the traffic logs. As a result, restricting access based on the actual user, or determining the actual user from the traffic logs, is not possible.

This article focuses on providing a solution to this issue. Note: This is applicable to PAN-OS 7.0 and later.

Diagnosis
Prerequisites:
The proxy server should add an X-Forwarded-For (XFF) header containing the actual client IP when forwarding to the firewall
Configure User Identification on the firewall to gather ip-user-mappings
Enable XFF identification for User-ID. To learn more about this, please click here.

Solution
Setup:
Proxy server (192.168.30.103) ---- PA Firewall ----- Internet

Configure security policies on the firewall as shown, in order:

Details:
Allow DNS - Required to allow DNS queries before the actual connection.
Allow Handshake - Required to allow the TCP 3-way handshake, because the XFF header is carried in the HTTP GET packet, which follows the 3-way handshake. Hence, the user mapping can be determined only after the initial handshake. Following are traffic logs for the initial 3-way handshake:
Note: this policy has a URL Filtering profile applied so that only the initial 3-way handshake is allowed and no web-browsing. After the 3-way handshake, further action is determined by the user-specific policies:
XFF - Required for restricting user-based access (the application can be changed to specific web-browsing [since XFF is in HTTP] or combined with other user-based policies as required). A URL Filtering profile can also be applied for more restrictions on traffic.

After the HTTP GET packets arrive at the firewall from the proxy server, the firewall checks the ip-user-mapping table for the IP in the XFF header and applies policies based on the source user.

GET Packet:
User Mapping:
Policy Shift:

Additional notes:
- For HTTPS, the complete SSL handshake needs to be allowed (as in Allow Handshake, but with no URL filtering) and SSL decryption needs to be enabled so the firewall can read the XFF header and check the user mapping
- If there is no user mapping for the IP in the XFF header, Source User will be blank in the traffic logs and user-based policies will not take effect
- If you enable XFF for User-ID, URL Filtering logs will show the username in Source User instead of the XFF IP. To see how to enable XFF in URL Filtering logs, please click here
- XFF can be enabled for URL Filtering logs even if there is no URL Filtering license. For more details, please click here
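To confirm the setup in a lab, it helps to generate a request that looks exactly like what the proxy should send toward the firewall. The short Python sketch below is an illustration only (not from the article); the URL and client IP are placeholders, and the client IP should already have an ip-user-mapping on the firewall so the user-based policy can match.

import requests

# Reproduce the request the firewall is expected to see from the proxy:
# an HTTP GET carrying the original client IP in an X-Forwarded-For header.
resp = requests.get(
    "http://example.com/",                        # placeholder test URL
    headers={"X-Forwarded-For": "10.10.10.25"},   # placeholder client IP with a known mapping
)
print(resp.status_code)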
View full article
hagarwal ‎12-05-2016 03:55 PM
3,917 Views
0 Replies
1 Like
Overview
The VPN tunnel between two devices fails with the error "Unknown ikev2 peer," even though all the crypto profiles, pre-shared keys and proxy IDs match. This article explains the cause of this error message.

Issue
Generally, this error is seen when building a tunnel with Microsoft Azure. However, it is not limited to Microsoft Azure and can occur with any VPN peer. Shown below is how the error messages appear on the Palo Alto Networks firewall:

"Unknown ikev2 peer" means that there is an IKE version mismatch between the VPN peers: one peer is using IKEv1 and the other is using IKEv2. This can be verified through packet captures, as shown below.

Note: Microsoft Azure uses IKEv2 by default unless specified otherwise, which is a common cause of this error.

One peer sending an IKEv2 message:

The other peer sending an IKEv1 message:

Resolution
To fix this problem, the IKE versions should match on both peers.

Note: Prior to PAN-OS 7.0, the Palo Alto Networks firewall does not support IKEv2; hence, you need to change the IKE version on the VPN peer to v1. Starting from PAN-OS 7.0, you can control the IKE version from the Palo Alto Networks firewall itself.

For more information on how to change the IKE version on the Palo Alto Networks firewall, please click here
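When reviewing the packet captures, the IKE major version can be read directly from the ISAKMP header of the UDP/500 payload. The Python sketch below is an illustration (not from the article); it assumes you have already extracted the raw UDP payload bytes from the capture.

# Offsets follow the standard ISAKMP header: 8-byte initiator SPI,
# 8-byte responder SPI, 1-byte next payload, then the 1-byte version
# field (major version in the high nibble). For UDP/4500 (NAT-T)
# captures, strip the 4-byte non-ESP marker before calling this.

def ike_major_version(isakmp_payload: bytes) -> str:
    version_byte = isakmp_payload[17]
    return f"IKEv{version_byte >> 4}"

# A version byte of 0x20 means IKEv2, 0x10 means IKEv1.
sample = bytes(17) + bytes([0x20])
print(ike_major_version(sample))   # -> IKEv2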
View full article
hagarwal ‎10-18-2016 11:29 AM
3,250 Views
0 Replies
Data Filtering logs are part of the informational Threat logs.

1. Create 3 files with credit card information.

5376-4698-9386-4886
5564-8017-1758-1316
5464-9730-1302-5263
5257-2750-0534-2578
5564-9616-5310-6823
5483-3128-3984-7229
5352-9543-2663-9003
5130-0484-5710-3076
5210-3641-5712-1745
5559-4615-4452-4711
(1 text file with 10 credit card numbers)

5376-4698-9386-4886
5564-8017-1758-1316
(another text file with 2 credit card numbers)

5376-4698-9386-4886
5564-8017-1758-1316
5559-4615-4452-4711
(another text file with 3 credit card numbers)

I have set the credit card (CC) weight to 1, the alert threshold to 3, and the block threshold to 6.

To configure the Data Filtering profile, go to Objects > Security Profiles > Data Filtering.
To configure the Data Filtering pattern, go to Objects > Custom Objects > Data Patterns.

When I sent these files through FTP, I got the following results:

+ 1st file: a reset-both action in the Data Filtering logs (10 matches reach the block threshold).
+ 2nd file: no alert (2 matches stay below the alert threshold).
+ 3rd file: an alert in the Data Filtering logs (3 matches reach the alert threshold but not the block threshold).
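The results line up with the weight arithmetic in the profile. The tiny Python sketch below (an illustration, not part of the original article) reproduces the scoring as described above: each match adds the pattern weight, and the total is compared against the alert and block thresholds.

# Pattern weight and thresholds as configured in the profile above.
WEIGHT, ALERT, BLOCK = 1, 3, 6

for matches in (10, 2, 3):                 # the three test files above
    score = matches * WEIGHT
    if score >= BLOCK:
        verdict = "block (reset-both logged)"
    elif score >= ALERT:
        verdict = "alert logged"
    else:
        verdict = "no Data Filtering log"
    print(f"{matches} matches -> score {score}: {verdict}")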
View full article
michandras ‎10-03-2016 05:31 PM
3,647 Views
0 Replies
There may be cases where analysis or verification is required to determine whether traffic is being sent or received via the management interface. One example is during authentication testing, to verify whether requests are being sent from the device to the LDAP or RADIUS server. Another example is determining whether a device can be polled by or reached from an SNMP server. Starting with PAN-OS 5.0, it is possible to capture traffic to and from the management interface. The option is strictly CLI based, utilizing tcpdump.

Example below:
Because captures implicitly use the management interface, there is no need to manually specify an interface as with a traditional tcpdump. For example:
admin@myNGFW> tcpdump filter "host 10.16.0.106 and not port 22"
Press Ctrl-C to stop capturing
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

Note: Filters must be enclosed in quotes, as in: > tcpdump filter "host 10.16.0.106 and not port 22"

When a capture is complete, press Ctrl-C to stop capturing:
admin@myNGFW> tcpdump filter "host 10.16.0.106 and not port 22"
Press Ctrl-C to stop capturing
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
^C
6 packets captured
12 packets received by filter
0 packets dropped by kernel

To view the PCAP on the CLI, run the view-pcap command. For example:
admin@myNGFW> view-pcap mgmt-pcap mgmt.pcap
15:42:57.834414 IP 10.16.0.106.https > 10.192.1.0.61513: P 196197148:196197179(31) ack 2821691363 win 66 <nop,nop,timestamp 9463094 700166797>
15:42:57.834477 IP 10.16.0.106.https > 10.192.1.0.61513: F 31:31(0) ack 1 win 66 <nop,nop,timestamp 9463094 700166797>
15:42:57.834910 IP 10.192.1.0.61513 > 10.16.0.106.https: . ack 31 win 4095 <nop,nop,timestamp 700231236 9463094>
15:42:57.834933 IP 10.192.1.0.61513 > 10.16.0.106.https: . ack 32 win 4095 <nop,nop,timestamp 700231236 9463094>
15:42:58.142807 IP 10.192.1.0.61513 > 10.16.0.106.https: F 1:1(0) ack 32 win 4096 <nop,nop,timestamp 700231542 9463094>
15:42:58.142831 IP 10.16.0.106.https > 10.192.1.0.61513: . ack 2 win 66 <nop,nop,timestamp 9463125 700231542>

Following are a few filter examples (the filters are not limited solely to these options):
Filter by port: > tcpdump filter "port 80"
Filter by source IP: > tcpdump filter "src x.x.x.x"
Filter by destination IP: > tcpdump filter "dst x.x.x.x"
Filter by host (src & dst) IP: > tcpdump filter "host x.x.x.x"
Filter by host (src & dst) IP, excluding SSH traffic: > tcpdump filter "host x.x.x.x and not port 22"

Additionally, you can manually export the PCAP via SCP or TFTP, for example:
> scp export mgmt-pcap from mgmt.pcap to <value> Destination (username@host:path)
> tftp export mgmt-pcap from mgmt.pcap to <value> tftp host

Note: By default, there is a maximum limit of 68 bytes (snap length) per packet on the PA-200, PA-500 and PA-2000. For the PA-3000, PA-4000 and PA-5000, the default limit is 96 bytes per packet. To extend this limit, use the "snaplen" option.
admin@myNGFW> tcpdump snaplen <value> <0-65535> Snarf snaplen bytes of data from each packet. (0 means use the required length to catch whole packets)
admin@myNGFW> tcpdump snaplen 0
Press Ctrl-C to stop capturing
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes

See Also
Tcpdump Packet Capture Truncated

owner: bryan
View full article
bryan ‎09-21-2016 06:48 AM
78,313 Views
9 Replies
1 Like
Symptom
After manually setting the time back, the WebGUI stops showing the traffic log.

Troubleshooting Steps
Run the show log traffic direction equal backward command and see if the traffic log is displayed on the CLI. If so, it is a WebGUI issue.
Run the debug log-receiver statistics command and see if "Traffic logs written" counts up.
> debug log-receiver statistics
Logging statistics
-----------------------------------------
Log incoming rate: 0/sec
Log written rate: 0/sec
Corrupted packets: 0
Corrupted URL packets: 0
Logs discarded (queue full): 0
Traffic logs written: 1292
Run the debug log-receiver on debug command to enable the log-receiver debug log. Next, run tail follow yes mp-log logrcvr.log and look for the following messages:
> tail follow yes mp-log logrcvr.log
Feb 24 14:09:50 pan_logrcvr(pan_log_receiver.c:1806): real data
Feb 24 14:09:50 pan_logrcvr(pan_log_receiver.c:1764): try select
Feb 24 14:09:53 pan_logrcvr(pan_log_receiver.c:1796): pipe data
Feb 24 14:09:53 pan_logrcvr(pan_log_receiver.c:1764): try select

Cause
The request from the GUI to retrieve the logs carries a timestamp. When the time is manually set back, this creates a mismatch between the GUI timestamp and the logs, so the system does not retrieve the logs.

Workaround
Since this only happens in this specific scenario, the issue can be avoided by not manually setting the time back. If this scenario does occur, it can be recovered by running the following CLI command:
Pre PAN-OS 7.0
> debug software restart log-receiver
Starting from PAN-OS 7.0
> debug software restart process log-receiver

owner: ymiyashita
View full article
ymiyashita ‎09-14-2016 01:46 AM
33,010 Views
11 Replies
Overview
Even when both nodes in an HA pair are configured to fetch dynamic updates (threat or antivirus updates) at the same time, the firewall generates a version mismatch alert in the system logs. If email alerts are configured on the firewall, the system admin receives these alerts. This article explains the behavior of these alerts.

Details
Even though both members of the HA pair have the same update schedule, there is a brief period during which the two members have different versions of the dynamic updates. During this window, the HA checks generate a system log noting the mismatch in the dynamic update version. Prior to PAN-OS 7.1, these mismatch alerts were generated with HIGH severity in the system logs, as follows:
2016/08/02 10:18:05 high ha HA Group 2: Threat Content version does not match
2016/08/02 10:18:05 high ha HA Group 2: Application Content version does not match
If email alerts are configured to send HIGH alerts to the system admin, they receive a version mismatch alert for the firewalls. However, it is possible that by the time they check the firewall, there is no longer a version mismatch. The reason is that as soon as the versions match again after that brief period of difference, the firewall generates these alerts with INFO severity, as follows:
2016/08/03 10:18:27 info ha HA Group 2: Threat Content version now matches
2016/08/03 10:18:27 info ha HA Group 2: Application Content version now matches
Since email alerts were set for only HIGH severity, the system admin does not receive these follow-up alerts.

Starting from PAN-OS 7.1, there is a behavior change in how these alerts are generated.

The first time the HA check detects a mismatch in the dynamic update version between the firewalls, the alerts are generated with 'informational' severity.

If the mismatch persists for longer than one hour, the HA check generates alerts with 'high' severity.

Therefore, if email alerts are configured to send 'high' severity alerts, the system admin gets an alarm only when there is a genuine mismatch, and not when the mismatch lasts for just a brief period.
View full article
hagarwal ‎09-07-2016 01:14 AM
5,627 Views
0 Replies
1 Like
The following document covers various types of logs and log severities. These logs can be viewed under Monitor > Logs > System.

This document can be helpful when searching for logs for a particular event or creating an alert for an event.

HA
When HA firewalls change their state, we see the following logs in the system log.
If the PAN-OS versions of the HA firewalls do not match, we see the following logs in the system log.
When sessions are synchronized between HA firewalls, we see the following log:
If a firewall changes its role because of the preempt option, we see the following log:
In HA, if one of the monitored data ports comes up, we see the following log for link monitoring:
For the HA ports, we see the following logs:

URL Filtering
When the URL database of the firewall is upgraded, we see the following log.
When the firewall selects a cloud for its URL database, we see the following log.
When a connection to the URL cloud fails, we see the following logs.
When a URL database download fails, we see the following logs.

PBF
When PBF goes up or down, we see the following logs:
View full article
pankaku ‎07-27-2016 04:17 PM
2,243 Views
0 Replies