Panorama Logging with NFS

I'm currently developing a logging concept for our new Palo Alto firewall environment at our new internet perimeter, and I have a few questions about it.

Here is what we want to build:

- a two-stage firewall concept

- the outer firewall is a PA-5050 cluster with Threat Prevention and URL Filtering

- the inner firewall is a PA-5020 cluster doing plain firewalling only

- in between the two clusters is the DMZ with about 70 servers

- 150 Mbit/s of internet bandwidth

- 5,000 users who will browse the web through these two firewall clusters

Currently we have a single Check Point cluster, which will be replaced by the PA firewalls mentioned above.

We have about 12 million (12x10^6) sessions per day and log files of 1.2 GB each day. Every session is logged.

I analyzed the log DB of the PA-500 we already have and found that a PA needs about 6 times more log space per session than a Check Point firewall (correct me if I'm wrong).

So I expect about 6 GB of log data per day per firewall, or 12 GB with both firewall stages involved. Half a year of logs will need about 2.1 TB of storage, and with future growth even more.
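For clarity, here is the arithmetic behind the numbers above as a short sketch (the 6 GB/day figure is my rounded estimate from the 6x factor, and 183 retention days is my approximation of half a year):

```python
# Rough Panorama storage estimate; all inputs are estimates, not measurements.
GB_PER_DAY_PER_FW = 6     # ~6x the 1.2 GB/day Check Point volume, rounded down
FIREWALL_STAGES = 2       # outer PA-5050 cluster + inner PA-5020 cluster
RETENTION_DAYS = 183      # about half a year

gb_per_day_total = GB_PER_DAY_PER_FW * FIREWALL_STAGES   # 12 GB/day
tb_half_year = gb_per_day_total * RETENTION_DAYS / 1024  # ~2.1 TB

print(f"{gb_per_day_total} GB/day, {tb_half_year:.1f} TB per half year")
```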

My questions:

- are these calculations realistic?

- With a vdisk: can Panorama use more than 2 TB of disk space with the new VMFS5 on vSphere 5.x (which supports up to 64 TB with physical compatibility mode)?

- With NFS: what bandwidth is needed from the firewall to Panorama, and from Panorama to the NFSv3 storage, for 12 million sessions per day? What does the log flow look like?

- How will log queries perform on a Panorama log database of over 2 TB?

- What are your experiences with NFS log DB storage?

- What does your logging concept look like if you have a similar environment?

Thanks in advance for your answers

L4 Transporter

Re: Panorama Logging with NFS

You have made some assumptions in your calculation which may end up causing issues with your result.

When you said you "analyzed the log DB of our PA-500", was that device seeing the same traffic as the Check Point cluster you wish to replace? If the PA-500 was not seeing the same sessions, it is hard to say with confidence that the 6x log-space factor holds. Palo Alto Networks devices log on every app transition, so they could generate many more logs than the Check Point boxes.

You can use an average log size of 600 bytes for calculation purposes. There is more information in the "Panorama Logging Suggestions" doc (https://live.paloaltonetworks.com/docs/DOC-1921), which explains this calculation.
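Using that 600-byte average, the sizing for 12 million sessions per day works out as below. Note that since logging happens on every app transition, the log count can exceed the session count, so treat this as a lower bound (the retention period is an assumption taken from the original post):

```python
# Log storage sizing from the 600-byte average log-record size.
# Logs-per-day is a lower bound: an app transition within a session
# generates an additional log entry beyond the session-end log.
AVG_LOG_BYTES = 600
SESSIONS_PER_DAY = 12_000_000
RETENTION_DAYS = 183              # about half a year

bytes_per_day = AVG_LOG_BYTES * SESSIONS_PER_DAY
gb_per_day = bytes_per_day / 1024**3              # ~6.7 GB/day per firewall
tb_retained = gb_per_day * RETENTION_DAYS / 1024  # ~1.2 TB per firewall

print(f"{gb_per_day:.1f} GB/day, {tb_retained:.2f} TB over {RETENTION_DAYS} days")
```

This lands close to the ~6 GB/day the original post estimated from the 6x factor, which is a reasonable cross-check.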

We have not yet tested with VMFS5 to determine whether it works out of the box, so we cannot guarantee there will not be issues if you try that configuration with Panorama.

L6 Presenter

Re: Panorama Logging with NFS

I think it's not uncommon (especially with these huge amounts of logs) to use Panorama only for short-term digging and to set up a dedicated syslog server (either something syslog-ng based, or something more expensive such as Splunk or even ArcSight).

That way the syslog server does all the archiving (and you can set up filters for what you want to save and choose which compression to use, unlike Panorama, which saves everything each PAN unit throws at it), while Panorama gives you the full-blown, easy-to-click GUI with the ACC.

Panorama can then also produce daily or weekly PDF reports for management, while the syslog server is used to dig into logs older than what fits in Panorama (if you, say, retain logs in Panorama for the last 90 days or so).

On the other hand: how do the huge companies (often referred to by the marketing) set up their logging?

I have also seen (I think it was in this forum) that each PAN device can spit out roughly 7,000-9,000 log rows per second at peak. I don't know what happens when it gets too much to do (whether logs are queued up or simply thrown away) - does anyone know more about the internals of PAN logging?

Juniper, for example, has an "Oops - missed some" message when the internal log engine cannot keep up with all the log entries produced by the fabric.
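For what it's worth, 12 million sessions per day averages out far below that quoted peak rate (this assumes one log per session and ignores burstiness, both of which understate real peaks):

```python
# Average log rate implied by 12 million sessions/day, for comparison
# with the ~7,000-9,000 logs/sec peak quoted above.
# Assumes one log per session; real traffic is bursty, so actual peaks
# will be a multiple of this average.
SESSIONS_PER_DAY = 12_000_000
SECONDS_PER_DAY = 86_400

avg_logs_per_sec = SESSIONS_PER_DAY / SECONDS_PER_DAY
print(f"average: {avg_logs_per_sec:.0f} logs/sec")
```

Even with a generous burst multiplier, that average of roughly 139 logs/sec leaves a lot of headroom below the quoted peak.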


And a small tip regarding your design (which, on the other hand, is a matter of taste ;-)

You could set up an outer DMZ and an inner DMZ on dedicated interfaces of each firewall and put your clients in the core.

Like so:

outer net <-> outer firewall <-> core <-> inner firewall <-> servers (inner-dmz)

clients <-> core

outer-dmz <-> outer firewall

inner-dmz <-> inner firewall

This way, the path for a client that needs to reach the outer net will be:

client <-> core <-> outer firewall <-> outer net

while to reach an internal server it will be:

client <-> core <-> inner firewall <-> server (inner-dmz)

Like what you see?
