01-22-2021 06:01 AM
Hi folks, I'm looking for advice on an alert that is bugging me.
Our Panorama instance regularly (twice per hour) reports:
Panorama - SYSTEM ALERT : high : User Group count of 12080 exceeds threshold of 10000
Now this is correct - as it comes from two (soon to be four) firewalls plumbed into LDAP/AD:
admin@Panorama> show user group list device-group FW1a | match Total
Total: 6040
admin@Panorama> show user group list device-group FW1b | match Total
Total: 6040
The two firewalls form an active-active HA pair, so both genuinely need to know the groups.
We are about to deploy a second active-active HA pair, which would have the same number of groups (4 x 6040).
So I believe the solution is to have Panorama import the groups from one firewall (or have Panorama import them and share them out to the firewalls).
I am unsure how to do this in a resilient manner - i.e. if Panorama or the designated firewall is off the air for any period of time, can the remaining firewalls keep the group list populated and up to date?
Any pointers would be appreciated.
01-24-2021 08:30 PM
So the groups would be cached: if something happened, the firewalls would keep working off the cached information, but it would not update while the source was unavailable. The part of your post that is confusing is that Panorama appears to be pulling the same information from both firewalls, since the group counts are identical. In that sense you shouldn't actually be exceeding your group count: if the membership and domain information is the same, the deduplicated total would just be 6040 anyway.
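The arithmetic behind that point can be sketched with a toy Python model (the group names and threshold handling here are illustrative assumptions, not PAN-OS internals):

```python
# Toy model: both A-A peers learn the same groups from LDAP/AD, so a
# deduplicated (single-master) view stays under the alert threshold.
groups_fw1a = {f"cn=group{i},dc=example,dc=com" for i in range(6040)}
groups_fw1b = set(groups_fw1a)  # the HA peer pulls the same LDAP data

THRESHOLD = 10000

naive_total = len(groups_fw1a) + len(groups_fw1b)  # what the alert counts
deduped_total = len(groups_fw1a | groups_fw1b)     # single-master view

print(naive_total, naive_total > THRESHOLD)        # 12080 True
print(deduped_total, deduped_total > THRESHOLD)    # 6040 False
```

With one master per pair, the union is 6040, well below the 10000 threshold; counting each peer separately produces the 12080 the alert reports.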
01-25-2021 12:16 AM
Did you set a master device in the Device Group?
A device group has a master device from which the User-ID information is collected.
If you set it to None, Panorama might be grabbing the info from all members; or, if you've set the firewalls in different device groups, then they're all polled for their group's information.
01-25-2021 02:29 AM
Hi folks,
@reaper , thanks for the reply.
Yes master device is set, however this may be the root of the issue.
We use a device group structure like so
Prod_Firewalls
-> ActiveFW1_group
FW1a
->ActiveFW2_group
FW1b
In ActiveFW1_group - FW1a is the master
In ActiveFW2_group - FW1b is master
In Prod_Firewalls, None is selected, as there is no option to select the lower groups or firewalls.
So I suppose Panorama sees the two as masters.
We have 2 groups like this such that 2 sets of admins can operate the permissions model.
Do you have any advice on how to overcome the dual mastership for identity info?
01-25-2021 03:03 AM
hi @GN_ROS
yes, put both members of the cluster in the same device group 🙂 that's how it's supposed to be done (this is what the 'master' is for etc)
could you elaborate on why you are splitting up your admins over 2 different devices in the same cluster?
01-28-2021 06:01 AM
@reaper
Sorry for the late response.
The initial reasons are historic... and probably not valid now if designed fresh.
While it is an active-active pair, each has its own virtual router and needed specific rules per side.
I think most of what we wish to achieve can be accomplished with targets.
I will accept your solution as the best.
Thanks for your help.
01-28-2021 09:30 AM
Allow me to add more solution 😉
Templates and device groups are two different sections of configuration that live independently of each other.
You can accomplish the separate config through templates: assign each firewall its own template stack, create a shared template of device config that both firewalls receive, and then create a unique template per firewall containing the settings that apply only to that firewall (routing, HA config, dynamic updates, hostname, ...).
Meanwhile, keep both firewalls in the same device group so they get all the same policies and objects.
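As a rough mental model of that layering (pure illustration; the setting names and the merge rule here are assumptions, not the PAN-OS schema):

```python
# Illustrative sketch of a template stack: a shared template plus a
# per-firewall template, merged so the device-specific values win.
shared = {
    "dns-server": "10.0.0.53",        # hypothetical shared device config
    "ntp-server": "pool.example.org",
}
fw1a_only = {
    "hostname": "FW1a",
    "virtual-router": "vr-red",       # the per-side routing config
}
fw1b_only = {
    "hostname": "FW1b",
    "virtual-router": "vr-blue",
}

def resolve_stack(shared_tpl, specific_tpl):
    """Merge templates; the device-specific template overrides the shared one."""
    merged = dict(shared_tpl)
    merged.update(specific_tpl)
    return merged

fw1a_cfg = resolve_stack(shared, fw1a_only)
fw1b_cfg = resolve_stack(shared, fw1b_only)
# Both firewalls get the shared settings; each keeps its own hostname
# and virtual router, while the device group stays identical for both.
```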
02-02-2021 03:47 AM
@reaper - Thanks for the info (and sorry for the late reply - i didn't get a notification)
Allow me to clarify the position we have :
We essentially have 3 rulebases :
RED - controlled by Admin group 1
BLUE - controlled by Admin group 2
PURPLE - shared control by both admin groups.
The firewalls are active-active for the PURPLE ruleset and then there is a distinct virtual router (with unique routing) on each firewall to suit the RED ruleset (on HA device 0) and the BLUE ruleset (on HA device 1)
The device tree at the moment is
PURPLE
-> RED
     FW0
-> BLUE
     FW1
In the diagram above, FW0 is the master for the RED device group and FW1 is the master for BLUE; there is no option to set a PURPLE master device (the option is None).
I am happy with using template stacks to achieve the differences needed for Network and Device settings.
So while I can move rules from RED and BLUE into PURPLE (with appropriate rule targets), with the goal of removing RED and BLUE, this creates one flat rulebase under shared control, and we lose the RBAC separation.
Am I missing any other way to organise things?
As an aside: how does this scale? With four tiers available for device groups, there could be a significant number of devices all receiving user groups at the leaves, but if those groups are not merged up each branch towards the core (i.e. Panorama), there will always be a large number of duplicate entries. Why does Panorama not have the ability to set a master at each branch of the device tree? (Since no devices are attached at the branch level, the only option is None rather than any child device names.)
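The scaling concern can be modelled with a toy aggregation (an assumption about counting behaviour, not Panorama internals): four firewalls all learning the same 6040 groups, counted per leaf, merged per branch, and merged at the root.

```python
# Toy model of the device tree: every firewall learns the same 6040
# groups from AD, but Panorama counts them per leaf rather than merged.
all_groups = {f"group-{i}" for i in range(6040)}
tree = {
    "RED":  {"FW0": set(all_groups), "FW0-peer": set(all_groups)},
    "BLUE": {"FW1": set(all_groups), "FW1-peer": set(all_groups)},
}

# Counted per leaf (no masters anywhere): 4 x 6040
naive = sum(len(g) for branch in tree.values() for g in branch.values())

# One master per branch collapses duplicates within RED and within BLUE
per_branch = sum(len(set.union(*b.values())) for b in tree.values())

# Merging again at the root (PURPLE) collapses everything to one list
at_root = len(set.union(*(g for b in tree.values() for g in b.values())))

print(naive, per_branch, at_root)  # 24160 12080 6040
```

If masters could be set per branch and merged again at the root, the count would collapse from 24160 through 12080 down to 6040, which is the heart of the aside.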
Thanks
02-02-2021 04:11 AM
I think the disconnect happens in the way that this is set up
The usual approach in your scenario would be to set up vsys on the chassis so each admin team has its own virtual firewall in the device groups. You would then have two device groups with two members each (vsys1FWA + vsys1FWB and vsys2FWA + vsys2FWB), and set member A as master in both device groups.
It is highly unusual to have one admin manage only FWA and the other only FWB while still running a cluster (because Panorama will push only to the one device, causing massive config discrepancy across the cluster). A cluster is supposed to have fully synced config.
To your aside:
By design, a single member in a device group becomes the master for that device group. If you have multiple devices you can choose which one is master for that group as all the devices in that group are 'the same' (so cluster members should always be in the same device group)
Vsys are employed to make them 'different'