Thank you for your reply, @Deepak25.
1.) Personally, I would advise going with this option: "Create new log forwarding preference list for some firewalls from the list and keep new LC as a primary and others as secondary, tertiary." The log forwarding preference list only defines which log collector ingests the logs; the actual log location among the log collectors in the same collector group is determined by a hash algorithm, so the logs do not necessarily reside on the first log collector in the preference list. Full details are provided in this article: https://docs.paloaltonetworks.com/panorama/9-1/panorama-admin/panorama-overview/centralized-logging-and-reporting/caveats-for-a-collector-group-with-multiple-log-collectors.html
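To illustrate the idea (not the actual PAN-OS internals, which are not public), here is a minimal sketch of how a hash-based scheme can spread logs across the members of one collector group regardless of the forwarding-preference order. The collector names and the modulo hash are made-up assumptions for illustration only.

```python
import hashlib

COLLECTORS = ["LC-1", "LC-2", "LC-3"]  # hypothetical collector group

def owning_collector(log_id: str, collectors=COLLECTORS) -> str:
    """Pick the group member that ends up storing a given log record.

    A firewall *sends* to the first reachable collector in its preference
    list, but the group then places the log on the member chosen by a
    hash of the record -- here a simple SHA-256 modulo for illustration.
    """
    digest = hashlib.sha256(log_id.encode()).hexdigest()
    return collectors[int(digest, 16) % len(collectors)]

# The same record always maps to the same collector, but different
# records land on different members of the group:
placement = {log: owning_collector(log) for log in ("log-001", "log-002", "log-003")}
```

The point of the sketch: because placement is a function of the log record, not of the preference list, keeping the new LC as primary for some firewalls does not concentrate their logs on that one box.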
2.) You do not have to modify this setting. On the firewall side, you configure only the Panorama IP address (primary and secondary in the case of HA), so you can keep this setting as it is after you bring the 3rd log collector online. The firewall will learn the log forwarding preference list after it registers to Panorama.
3.) The answer to this question is provided in the link I shared in point no. 1. Basically, all log collectors in the same collector group work as one logical storage pool, and all of them store logs. When it comes to enabling log redundancy across log collectors, there is one caveat: log redundancy is available only if each log collector has the same number of logging disks.
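Conceptually, redundancy means each log record has a primary owner plus one replica on another group member. The sketch below is a hypothetical model of that idea (the real PAN-OS replica-placement logic is internal); the collector names and the "next member" replica rule are assumptions for illustration.

```python
import hashlib

def replica_set(log_id: str, collectors: list) -> tuple:
    """Return a hypothetical (primary, replica) pair for one log record.

    Primary is hash-chosen; the replica goes to the next collector in
    the group, so every record exists on two distinct members.
    """
    i = int(hashlib.sha256(log_id.encode()).hexdigest(), 16) % len(collectors)
    return collectors[i], collectors[(i + 1) % len(collectors)]

group = ["LC-1", "LC-2", "LC-3"]
primary, replica = replica_set("traffic-log-42", group)
```

This also makes the disk caveat intuitive: if one member had fewer logging disks, the replica copies it must hold could not be balanced against the other members, which is why redundancy requires the same disk count everywhere.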
4.) I cannot answer this question confidently without knowing your environment. However, the disk quota is a configuration setting that will not change by adding a new log collector. As for actual utilization: by adding the 3rd log collector you gain more storage, but if you generate a lot of logs you will eventually reach 100% utilization again; you will simply have a longer retention period.
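The retention effect is simple arithmetic; here is a back-of-the-envelope sketch. All numbers (8 TB per collector, 400 GB of logs per day) are made-up example values, not figures from this thread.

```python
def retention_days(total_storage_gb: float, daily_log_gb: float) -> float:
    """Days of logs the group can hold before the oldest are purged."""
    return total_storage_gb / daily_log_gb

# Hypothetical example: 8 TB usable per collector, 400 GB of logs/day.
two_lcs = retention_days(2 * 8000, 400)    # -> 40.0 days
three_lcs = retention_days(3 * 8000, 400)  # -> 60.0 days
```

Utilization still climbs to 100% at the same daily rate; only the window of logs kept online grows with the added collector.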
There is one more point I forgot to mention in my earlier post: for the log collector you will need this license: https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000PNu0CAG&lang=en_US
Kind Regards
Pavel