Introduction
Palo Alto Networks VM-Series Next-Generation Firewall for Google Cloud is the industry-leading virtualized firewall for protecting applications and data with next-generation security features that deliver superior visibility, precise control, and threat prevention at the application level. Google Cloud internal TCP/UDP Load Balancing enables end users to run and scale VM-Series deployments behind a single internal IP address. The VM-Series can be deployed in zonally dispersed managed instance groups that scale up and down based on PAN-OS metrics delivered to Google Cloud Stackdriver. Traffic from the VPC network can be routed to the load balanced firewalls by creating a custom static route that uses the internal TCP/UDP load balancer as the next hop.
Here are some of the benefits of using internal TCP/UDP load balancing as the next hop in your VPC network:
- Load balance traffic across VM-Series firewalls to protect applications and data with next-generation security features at scale.
- Leverage Google Cloud managed instance groups to horizontally scale VM-Series firewalls across regions and zones.
- Enable symmetric hashing to maintain the original client IP address.
- Provide high availability with reliable failover through cloud load balancer health checks.
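For reference, such a custom static route can be created programmatically. The sketch below uses the google-cloud-compute Python client; the project ID, network name, and forwarding rule name are placeholders, not values from this guide.

```python
# Minimal sketch: create a custom default route whose next hop is the
# forwarding rule of an internal TCP/UDP load balancer (placeholder names).
from google.cloud import compute_v1

PROJECT = "my-project"          # hypothetical project ID
REGION = "us-east1"             # region of the internal load balancer
NETWORK = "hub-vpc"             # VPC network that holds the route
FWD_RULE = "vmseries-ilb-rule"  # hypothetical forwarding rule name

route = compute_v1.Route(
    name="default-via-vmseries",
    network=f"projects/{PROJECT}/global/networks/{NETWORK}",
    dest_range="0.0.0.0/0",     # custom default route
    priority=1000,
    # The next hop is the internal TCP/UDP load balancer's forwarding rule.
    next_hop_ilb=f"projects/{PROJECT}/regions/{REGION}/forwardingRules/{FWD_RULE}",
)

routes_client = compute_v1.RoutesClient()
routes_client.insert(project=PROJECT, route_resource=route).result()
print(f"Created route {route.name} with next hop {FWD_RULE}")
```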
Symmetric Hashing
Overview
Historically, source network address translation (SNAT) had to be applied when routing east-west traffic through load balanced VM-Series firewalls. SNAT can be avoided by either removing the load balancer or by configuring the load balancer’s backend service to use a primary and failover instance group. However, implementing either of these workarounds limits your ability to effectively scale firewalls for your east-west traffic flows.
Google Cloud announced general availability of an internal TCP/UDP load balancer feature called symmetric hashing. Symmetric hashing maps a given traffic flow to a VM-Series firewall while ignoring the directionality of the IP addresses and ports. This capability ensures server response traffic traverses through the same VM-Series firewall that received the initial client request without proxying or obscuring the initial client IP address.
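The effect of symmetric hashing can be illustrated with a short, purely conceptual sketch; this is not Google's actual hashing algorithm, only a toy model. Ordering a flow's endpoints before hashing makes the hash direction-agnostic, so the request and its response select the same backend firewall.

```python
# Toy illustration of symmetric hashing (not Google's actual algorithm).
# Sorting the flow's endpoints before hashing removes directionality, so the
# request and the response of the same flow map to the same firewall.
import hashlib

def symmetric_hash(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    endpoints = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{endpoints}|{protocol}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

firewalls = ["vmseries-1", "vmseries-2", "vmseries-3"]

# Client request: SPOKE1-VM -> SPOKE2-VM
request_fw = symmetric_hash("10.1.0.2", 49152, "10.2.0.2", 443, "TCP", firewalls)
# Server response: SPOKE2-VM -> SPOKE1-VM (addresses and ports reversed)
response_fw = symmetric_hash("10.2.0.2", 443, "10.1.0.2", 49152, "TCP", firewalls)

assert request_fw == response_fw  # both directions land on the same firewall
print(request_fw)
```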
Symmetric hashing can be enabled on your internal TCP/UDP load balancer as long as the following statements are true:
- The forwarding rule for the internal TCP/UDP load balancer was created on or after June 22, 2021.
- The custom static route referencing the forwarding rule was created on or after June 22, 2021.
- The internal TCP/UDP load balancer’s backend service does not use the NONE session affinity setting (see the configuration sketch after this list).
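The third condition is a property of the load balancer's backend service. Below is a minimal sketch, using the google-cloud-compute Python client with placeholder resource names, that checks the setting and switches it to a 5-tuple affinity if it is still NONE.

```python
# Minimal sketch: make sure the internal TCP/UDP load balancer's backend
# service does not use the NONE session affinity setting (placeholder names).
from google.cloud import compute_v1

PROJECT = "my-project"
REGION = "us-east1"
BACKEND_SERVICE = "vmseries-backend-service"  # hypothetical name

client = compute_v1.RegionBackendServicesClient()
backend_service = client.get(
    project=PROJECT, region=REGION, backend_service=BACKEND_SERVICE
)

if backend_service.session_affinity == "NONE":
    # Switch to a 5-tuple session affinity so symmetric hashing can apply.
    patch = compute_v1.BackendService(session_affinity="CLIENT_IP_PORT_PROTO")
    client.patch(
        project=PROJECT,
        region=REGION,
        backend_service=BACKEND_SERVICE,
        backend_service_resource=patch,
    ).result()
    print("Session affinity updated to CLIENT_IP_PORT_PROTO")
else:
    print(f"Session affinity is already {backend_service.session_affinity}")
```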
East-West Security with Network Peering
In this model, you can deploy compute resources in the same project as the VM-Series firewall, or distribute compute resources across multiple projects with unique VPC networks and administrative domains. Google Cloud VPC Network Peering enables you to centralize security by connecting spoke networks to a hub network that contains load balanced VM-Series firewalls. Custom routes defined in the hub network use the VM-Series internal TCP/UDP load balancer as the next hop. The routes can then be propagated to the spoke networks’ route tables via Network Peering’s import/export custom routes functionality. If symmetric hashing is enabled on the internal TCP/UDP load balancer, the VM-Series can be deployed into managed instance groups to automatically scale based on utilization metrics.
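As an illustration, the sketch below configures both sides of such a peering with the google-cloud-compute Python client, exporting custom routes from the hub and importing them on the spoke. Project and network names are placeholders, and the code assumes the client library's NetworkPeering fields shown here.

```python
# Minimal sketch: peer a hub VPC with a spoke VPC and exchange custom routes,
# so the hub's default route to the internal load balancer reaches the spoke.
from google.cloud import compute_v1

HUB_PROJECT = "hub-project"      # hypothetical project IDs
SPOKE_PROJECT = "spoke1-project"

client = compute_v1.NetworksClient()

# Hub side: export custom routes (e.g., the ILB default route) to the spoke.
hub_request = compute_v1.NetworksAddPeeringRequest(
    network_peering=compute_v1.NetworkPeering(
        name="hub-to-spoke1",
        network=f"projects/{SPOKE_PROJECT}/global/networks/spoke1-vpc",
        exchange_subnet_routes=True,
        export_custom_routes=True,
    )
)
client.add_peering(
    project=HUB_PROJECT,
    network="hub-vpc",
    networks_add_peering_request_resource=hub_request,
).result()

# Spoke side: import the custom routes advertised by the hub.
spoke_request = compute_v1.NetworksAddPeeringRequest(
    network_peering=compute_v1.NetworkPeering(
        name="spoke1-to-hub",
        network=f"projects/{HUB_PROJECT}/global/networks/hub-vpc",
        exchange_subnet_routes=True,
        import_custom_routes=True,
    )
)
client.add_peering(
    project=SPOKE_PROJECT,
    network="spoke1-vpc",
    networks_add_peering_request_resource=spoke_request,
).result()
```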
Example 1.
The hub network contains VM-Series firewalls deployed in a managed instance group that serves as the backend of an internal TCP/UDP load balancer with symmetric hashing enabled. The hub network has a custom default route that uses the TCP/UDP load balancer as the next hop. This route is propagated over the VPC peering connections (via import/export custom routes) to the spoke1 and spoke2 VPC networks’ route tables. Below is an example of how a request is routed and handled by the internal load balancer with symmetric hashing enabled.
Client Request
- A TCP request from SPOKE1-VM (10.1.0.2) to SPOKE2-VM (10.2.0.2) is routed over the peering connection to the hub VPC network via the custom default route imported from the hub.
- The custom default route in the hub network routes the request to the TCP/UDP load balancer forwarding rule.
- The load balancer sends the request to one of the available VM-Series firewalls for inspection (availability is determined by the load balancer health checks).
- After inspection, the VM-Series sends the request into the hub network where it is then routed to the spoke2 peering connection.
Hub-and-Spoke with Symmetric Hashing - Client Request Traffic
Server Response
- The TCP response from SPOKE2-VM (10.2.0.2) to SPOKE1-VM (10.1.0.2) is routed over the peering connection to the hub VPC network via the custom default route imported from the hub.
- The custom default route in the hub network routes the response to the TCP/UDP load balancer forwarding rule.
- Symmetric hashing on the load balancer sends the response to the same VM-Series firewall that received the initial client request (without proxying or using source network address translation).
- The VM-Series sends the response into the hub network, where it is routed to the spoke1 peering connection and the session is established.
Hub-and-Spoke with Symmetric Hashing - Server Response Traffic
Example 2.
In this example, the VM-Series firewalls secure traffic between the production, development, and QA VPC networks. Within each VPC, the VM-Series dataplane interfaces serve as the backend of a separate internal TCP/UDP load balancer, each of which has symmetric hashing enabled. A custom default route is defined in the production VPC to use the production load balancer as the next hop. Likewise, a custom default route is defined in the development VPC network to use the development load balancer as the next hop.
Even though the request and response traffic are sent through separate TCP/UDP load balancers, the VM-Series does not need to apply SNAT to maintain symmetry. Symmetric hashing ensures packets that are a part of the same flow use the same firewall for a given session. If desired, you can still apply SNAT, but it is not required.
Scale Security for Outbound Traffic for Global VPC Networks
VPC networks, including their associated routes and firewall rules, are global resources. They are not associated with a particular region or zone. The VPC network’s subnets determine regionality. This provides organizations with the unique ability to have cloud resources deployed globally, while maintaining a small VPC network footprint with a centralized route domain and firewall rule set.
The VM-Series firewall with Google Cloud’s internal load balancer can handle egress requests from regions other than its own. For example, if the VM-Series and internal load balancer are deployed in region1, requests from region2 and region3 can be routed to the region1 firewalls. However, this may be undesirable due to cross-region data transfer costs, latency considerations, and cross-region resiliency requirements.
You could instead deploy load balanced VM-Series firewalls in regions that reflect your workloads’ locations. However, this design previously presented an issue: if two or more custom static routes with the same destination used different internal TCP/UDP load balancers as the next hop, traffic could not be distributed among the load balancers using ECMP.
This limitation can be overcome by leveraging Google Cloud’s network tags. Network tags make routes applicable only to instances that carry the corresponding tag and can be used for a variety of use cases, including:
- Prevention of cross-region traffic flows.
- Isolation of egress traffic between development and production environments.
- Creation of “swim lanes” to distribute traffic to different sets of load balanced firewalls.
Example 3.
The diagram below is an example of how to use network tags to prevent cross-region traffic flows for outbound internet requests. The trust VPC route table has two default routes: default-east and default-west. Each route has a unique network tag applied to it: vmseries-east and vmseries-west, respectively. Although both routes belong to the VPC route table, each route applies only to compute resources that carry its network tag. For example, the compute resources in the us-east subnets have the vmseries-east tag applied, which ensures that resources residing in us-east only use the us-east VM-Series firewall set. Likewise, compute resources in us-west have the vmseries-west network tag applied to force us-west traffic through the us-west VM-Series firewall set.
Network Tags for Internal Load Balancer as Next Hop
The end result with network tags is that you do not need to segregate client instances into separate VPC networks, each pointing to its preferred internal TCP/UDP load balancer fronting a set of VM-Series firewalls. Below is an example architecture that uses network tags to isolate regional subnet traffic flow through a set of firewalls that share the same region.
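As a companion to Example 3, the sketch below creates the two tagged default routes with the google-cloud-compute Python client. The route names and network tags come from the example; the project ID, regions, trust VPC name, and forwarding rule names are placeholders.

```python
# Minimal sketch of Example 3's tagged default routes: each route applies only
# to instances carrying its network tag and points to its region's internal
# TCP/UDP load balancer (placeholder project, regions, and forwarding rules).
from google.cloud import compute_v1

PROJECT = "my-project"
NETWORK = f"projects/{PROJECT}/global/networks/trust-vpc"

tagged_routes = [
    # (route name, network tag, assumed region, hypothetical forwarding rule)
    ("default-east", "vmseries-east", "us-east1", "vmseries-ilb-east"),
    ("default-west", "vmseries-west", "us-west1", "vmseries-ilb-west"),
]

routes_client = compute_v1.RoutesClient()
for name, tag, region, fwd_rule in tagged_routes:
    route = compute_v1.Route(
        name=name,
        network=NETWORK,
        dest_range="0.0.0.0/0",
        priority=1000,
        tags=[tag],  # the route applies only to instances with this tag
        next_hop_ilb=f"projects/{PROJECT}/regions/{region}/forwardingRules/{fwd_rule}",
    )
    routes_client.insert(project=PROJECT, route_resource=route).result()
    print(f"Created {name}: tag={tag} -> {fwd_rule}")
```

Applying the matching network tag to the compute instances in each region then keeps us-east egress traffic on the us-east firewall set and us-west egress traffic on the us-west firewall set.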
More Information