Hoping someone can clear up some confusion I have with the processing speed of the 7050 firewall. The literature states that each NPC adds 20 Gbps of processing capacity to the chassis. You can scale out your deployment and speed by adding NPCs; the first packet processor will do the job of distributing the load across the NPCs.
Picture this scenario:
I have 1 NPC and three 10 Gbps in/out connections set up (six 10 Gbps ports used). This would theoretically require a total throughput of 30 Gbps through the firewall. That is greater than 20 Gbps, so I go out and buy an extra NPC but don't plug anything into it.
Would the 7050 be able to distribute half the sessions to the extra NPC (the one with no cables in it), and thus allow me to go over the 20 Gbps limit of a single NPC?
For this to work, the physical ports on the NPC would need direct access to the switch fabric. My assumption is that the whole card is stuck at 20 Gbps no matter what, and you have to play the game of distributing your connections across multiple cards.
I have not seen this listed in any of the public documentation, but I did have this very discussion with a sales engineer earlier this year.
The answer was that you do add capacity by adding the card, and you do NOT need to manage where the physical connections land. The chassis is a system that can share the processing load using a PA-created algorithm.
I agree with pulukas: the aim of the 7050 is to be able to add extra processing power with no change to your config.
Just add a new NPC card and the backplane will load balance processing across all NPCs.
Just to add to this thread: yes, you should be able to achieve 30 Gbps max firewall performance by simply adding an NPC, even if you do not use the ports on that NPC. However, there is a change you may need to make at the system level. By default the session distribution policy is set to ingress-slot, which means sessions will only be handled by NPCs that have connected ports. To load balance across all NPCs, you would have to change the session distribution policy to session-load, random, or round-robin. The differences between each setting are documented here.
Note that if you are running in HA, the session distribution policy needs to be configured the same on both HA peers separately, as this setting is not synced with the configuration.
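For reference, here is a sketch of the change described above, using what I understand to be the PA-7000 series operational CLI syntax (verify the exact command names against the documentation for your PAN-OS version):

```
> show session distribution policy

> set session distribution-policy session-load
```

Since this is an operational command rather than part of the candidate configuration, it takes effect without a commit, and (per the HA note above) you would run it on each HA peer individually.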
We presently have two PA-7050s, with two 20GQXM NPCs installed in each chassis. We are using split uplinks between the two cards in each chassis on the 40G interfaces, with the session distribution policy set to "session-load". We are still seeing a MAX of 20 Gbps throughput. I have a TAC case open, but at present we are seeing the opposite of the claimed throughput gain from additional cards.