Panorama is dropping a lot of traffic to syslog Splunk

L3 Networker

I have an active/standby Panorama cluster, version 8.1.17, that manages about 40 firewalls.  The active Panorama also acts as a log collector (it is part of a Collector Group).

20 firewalls send traffic/threat/URL logs to the active Panorama, and the other 20 firewalls send theirs to the standby Panorama.  From there, Panorama is configured to forward these logs to syslog Splunk.  I had PAN TAC support look at the configuration and they confirmed the setup is good.

 

Here is the issue.  When I run "less mp-log syslog-ng.log", I can see the drop counter increment for forwarding from Panorama to syslog Splunk every 30 minutes or so; the counter is sampled every ten minutes.  On the syslog Splunk side, they confirmed in tcpdump that the traffic never arrived (syslog is clear text, so we can decode the missing logs). 

 

I've opened a ticket with PAN support and am waiting to hear back, but it is currently with first-tier TAC support, so not much hope so far.

 

Why would Panorama stop forwarding logs to the external syslog Splunk?  Has anyone seen this issue before?

 

TIA.


L7 Applicator

Have you tracked global counters and log-forwarder statistics? Maybe the log rate is too high for it to complete each forward.

Tom Piens
Like my answer? check out my book! https://bit.ly/MasteringPAN

@reaper:  What command do you recommend?  I am using "debug log-collector log-collection-stats show log-forwarding-stats | match syslog" and I am seeing this:

 

syslog enqueued count: 3260998077
syslog sent count: 3260769863
syslog dropped count: 422974378
syslog Queue depth: 0
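Those counters suggest real pressure on the forwarder. A quick back-of-the-envelope check (my own arithmetic, assuming the dropped counter accumulates separately from the enqueued counter, which is how the output reads):

```python
# Counters from the output above.
enqueued = 3_260_998_077
sent = 3_260_769_863
dropped = 422_974_378

# Logs enqueued but not yet confirmed sent (queue depth was reported as 0).
in_flight = enqueued - sent

# Rough drop rate, treating dropped as counted outside of enqueued.
drop_pct = dropped / (enqueued + dropped) * 100

print(f"enqueued but unsent: {in_flight:,}")       # 228,214
print(f"approximate drop rate: {drop_pct:.1f}%")   # 11.5%
```

Roughly one log in nine never makes it out, which matches the Splunk-side observation that whole chunks of traffic are simply missing.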

 

What do you mean by the log rate being too high?  My Panorama is running in AWS on the biggest EC2 instance available.

@dtran,

Just because you deploy an even bigger instance doesn't mean it'll handle the amount of logs being generated. You pretty quickly run into an area of diminishing/no returns on additional resources. 

@BPry:  My instance is 16 CPU with 64 GB RAM.  According to PAN, it is capable of handling 10,000 logs/sec.  The maximum in my situation is around 4,500 logs/sec, so I have plenty of resources on the Panorama to handle the logs.  Waiting for the next move from TAC support.

@dtran 

 

Please keep us posted what TAC says about this.

 

Regards

MP

Just a quick update on this issue.

 

It turned out that my log collectors spiked to 15K logs/sec incoming, and the instance we have is rated for only 10K logs/sec incoming.  I am going to increase the size (CPU and memory) of the AWS EC2 instance.  Hopefully the drops will go away.
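For anyone who wants to flag the same condition in their own collected samples, a minimal sketch (the 10K logs/sec limit below is this instance's rating; substitute whatever your own instance is rated for):

```python
RATED_LOGS_PER_SEC = 10_000  # rated ingest capacity of this instance size

def over_capacity(samples, limit=RATED_LOGS_PER_SEC):
    """Return the (timestamp, rate) samples where the incoming log rate
    exceeds the rated limit -- the spikes that correlate with drops."""
    return [(ts, rate) for ts, rate in samples if rate > limit]

# Example: a quiet sample and a spike like the one described above.
print(over_capacity([("07:40:04", 4500.0), ("08:10:04", 15000.0)]))
# -> [('08:10:04', 15000.0)]
```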

 

Hey @dtran ,

I am curious, how did you find the spikes?

 

Thanks!

I use a Python script to log into Panorama every 5 seconds, run these three commands, and pipe the output to an ASCII file:

 

> show clock | match GMT

Sat Mar 20 07:40:04 GMT 2021
> debug log-collector log-collection-stats show log-forwarding-stats | match "syslog dropped count"
syslog dropped count: 23026208
> debug log-collector log-collection-stats show incoming-logs | match "Incoming"
Incoming log rate = 1389.45

 

Then I use grep and awk to compute the delta in the dropped count between samples and correlate it with the incoming log rate based on the timestamps. 
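That grep/awk step can equally be sketched in Python; a minimal version that pairs each GMT timestamp with the growth in the dropped counter since the previous sample (the function name and parsing here are my own, based on the output shown above):

```python
import re

def dropped_deltas(lines):
    """Walk the captured output and return (timestamp, delta) pairs:
    how much 'syslog dropped count' grew between consecutive samples."""
    samples = []
    ts = None
    for line in lines:
        if "GMT" in line:
            ts = line.strip()
        elif m := re.search(r"syslog dropped count:\s*(\d+)", line):
            samples.append((ts, int(m.group(1))))
    return [(t2, c2 - c1)
            for (_, c1), (t2, c2) in zip(samples, samples[1:])]
```

Any non-zero delta marks a sample where Panorama dropped logs, which you can then line up against the incoming log rate reported at the same timestamp.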

 

PAN TAC support also has something similar, but they run it in Tera Term for Windows.  Real engineers use Linux.

I had to give you a like for the Linux comment @dtran, because you're right ;-)
