Has anyone else had a play with the GWLB on AWS?
I know it requires PAN-OS 10.0.2 or higher, and I have tested with multiple instances.
As a bump-in-the-wire it works fine — until you apply NAT, at which point it doesn't work at all for any traffic that is NAT'd.
I have an open TAC case for this; they are replicating the fault to work it out, but surely this was all tested before it went public.
I also found that overlay routing breaks traffic flow. It's not documented anywhere that I could find, but what I observed is that with overlay routing enabled the GENEVE traffic is processed in the virtual router, whereas without it the flow is just an in/return, non-routed flow.
If you've tinkered with it and actually got inbound/outbound NAT and/or overlay routing to function, please let me know what you did.
Sadly, the documentation just doesn't provide any decent clarity for this feature.
I'm also extremely disappointed they haven't integrated this into version 9.1.
I'm hopeful they will add it in 9.1.7 in a functional state, as I am not planning to move my clients to 10.0 until the list of known issues is about a quarter of its current size.
Thanks for your feedback.
How are you deploying the GWLB with VM-Series? Are you using any of the templates provided on our Github repo?
If you are using a CFT with an autoscale template, then it will create a NAT GW along with other components. The template also takes care of the automatic route population for any new APP VPCs.
We are using a NAT gateway. Outbound works just fine until you apply NAT of any sort — applying any NAT that changes the direction of the traffic breaks it.
For example, if I put in a NAT rule so that traffic to 22.214.171.124 gets destination-NAT'd to 126.96.36.199, the traffic never exits the firewall.
The key use case for this was inbound traffic: redirecting inbound traffic to the correct ALB that lives in another VPC. The traffic seems to get dropped at the firewall — even though pcaps show the firewall *thinks* it is being forwarded on, it isn't.
Inbound requires ingress routing to use the GWLB without SNAT. You can do that within the application VPC using a public-facing LB in front of the application.
Or, if you want a dedicated inbound VPC, you use the same design as above but move your pool members across the TGW.
If you prefer the traditional load-balancer-sandwich design, where the firewalls are pool members of the front-door LB and you are going to SNAT/DNAT to the application, you would either use a dedicated set of firewalls or add new Untrust and Trust interfaces to the firewall as eth1/3 and eth1/4 and use those for ingress outside of the GWLB. This is necessary because GWLB traffic must hairpin inside the GENEVE tunnel; you cannot insert flows into the tunnel from another interface.
Neither of those suits the application I'm building.
We have multiple inbound services that sit behind the firewall.
The current layout is the trust/untrust sandwich, but we would prefer to move away from the NLB design, as NLBs have a capacity limitation with regard to autoscale groups.
The design plan is that we have "anchor" network addresses for each inbound service. Traffic comes in via the IGW and is steered through the GENEVE tunnel to the Palo, at which point we apply a destination NAT to the actual application load balancer for that service (which is in a different VPC). The traffic is still supposed to egress the GENEVE tunnel, just with a different destination address, but the Palo seems to drop it — even though the Palo pcap believes it is forwarded on (it's seen in the transmit stage with the correct destination IP).
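For reference, the kind of rule that fails for us looks roughly like the sketch below. Rule name, zones, and all addresses are purely illustrative (not our real config), and the syntax is from memory — check against the PAN-OS CLI reference before using it:

```
set rulebase nat rules anchor-svc1-dnat from untrust
set rulebase nat rules anchor-svc1-dnat to untrust
set rulebase nat rules anchor-svc1-dnat source any
set rulebase nat rules anchor-svc1-dnat destination 203.0.113.10
set rulebase nat rules anchor-svc1-dnat service service-https
set rulebase nat rules anchor-svc1-dnat destination-translation translated-address 10.20.0.50
```

Here 203.0.113.10 stands in for the anchor address and 10.20.0.50 for the ALB in the application VPC. The security policy matches and the session table shows the translation, yet the packet never makes it back out of the GENEVE tunnel.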
On your comment re: overlay routing — why did they put the feature command in there if they haven't got it working yet? /logic 😕
For those playing at home:
In further discussions with @jmeurer, it turns out AWS applies a 5-tuple hash to the traffic to ensure a symmetric return path, so applying NAT breaks the flow correlation and AWS drops the traffic.
Overlay routing is not yet functional; hopefully it will be in 10.0.4 or 10.0.5, and then I can test whether I can get the NAT to work in that mode. I am curious whether the return traffic would exit via the same GENEVE tunnel, given there are no routes through it in that situation — only time will tell.
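The 5-tuple behaviour above can be sketched in a few lines. This is purely illustrative — the function name and hashing are stand-ins, not AWS's actual implementation — but it shows why a DNAT'd packet re-entering the GENEVE tunnel no longer matches the flow the GWLB recorded on ingress:

```python
# Illustrative sketch of GWLB flow affinity (not AWS's real code).
# GWLB pins each flow by its 5-tuple; the hypothetical function
# below stands in for that flow-tracking lookup key.

def five_tuple_hash(src_ip: str, dst_ip: str, src_port: int,
                    dst_port: int, proto: str) -> int:
    """Flow key built from the 5-tuple, as GWLB conceptually does."""
    return hash((src_ip, dst_ip, src_port, dst_port, proto))

# Flow as the GWLB recorded it on ingress (example addresses):
ingress = five_tuple_hash("198.51.100.10", "203.0.113.5", 40000, 443, "tcp")

# Same packet re-entering the GENEVE tunnel after the firewall's DNAT:
post_dnat = five_tuple_hash("198.51.100.10", "10.20.0.50", 40000, 443, "tcp")

# The tuples no longer match, so GWLB can't correlate the flow and
# drops the NAT'd traffic -- the behaviour described above.
print("flow still matches after DNAT?", ingress == post_dnat)
```

Since the rewritten destination changes the tuple, the return-path pinning fails, which matches the drops we're seeing.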
That depends on how you set up your interfaces. If you have only one interface, then your traffic is of course recognized as intrazone traffic, Outside -> Outside. If you want to split it, you have to create subinterfaces, map them to another zone, and adapt your firewall's VR routing accordingly.
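A minimal sketch of that split is below. Unit numbers and zone names are assumptions, the syntax is from memory, and on PAN-OS 10.0 the GWLB subinterfaces also need to be associated with the matching endpoints via the VM-Series plugin — treat this as a starting point and verify against the CLI reference:

```
set network interface ethernet ethernet1/1 layer3 units ethernet1/1.1
set network interface ethernet ethernet1/1 layer3 units ethernet1/1.2
set zone Outside network layer3 ethernet1/1.1
set zone Inside network layer3 ethernet1/1.2
set network virtual-router default interface ethernet1/1.1
set network virtual-router default interface ethernet1/1.2
```

Once each direction lands on its own subinterface and zone, the Outside -> Inside traffic should match an interzone rule instead of the intrazone Outside -> Outside one.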
My design is as below — let me know if you see any issue.
Server-1 (Outside) ==> TGW ==> Security VPC ==> GWLBe ==> Endpoint Service ==> GWLB ==> Palo Alto Outside interface (eth1/1) ==> PA processing ==> Palo Alto Inside interface (eth1/2) ==> Server-2 (Inside).
I am not using GlobalProtect; the traffic is just ping/SSH. Whenever I send traffic from Outside to Inside, the traffic logs show it as Outside to Outside, so it doesn't match the correct policy and isn't processed.