Load balance a clientless application


L3 Networker

Hello everybody,

 

Some of our customers make heavy use of the Clientless VPN feature of the GlobalProtect Portal, and one of the most heavily used applications at the moment is the Apache Guacamole remote desktop gateway (https://guacamole.apache.org/). The GP Portal is configured to show the icon of this app, and by clicking on it the remote users reach the web server (NGINX in reverse proxy mode) through which the application is served.

 

As you know, when using the clientless feature, all the requests made by the remote clients are proxied by the firewall, so from the application server's point of view all the connections originate from the same IP address (the IP of the firewall interface facing the application).

 

This is working fine for us, but with the ever-growing number of remote users a single Guacamole server is no longer enough to handle all the necessary concurrent connections. So we need to add more Guacamole servers and put them behind a load balancer (e.g. HAProxy or NGINX itself). In this setup, the clientless app on the GP Portal is configured to point to the load balancer address instead of the address of a single Guacamole server. The problem is: how can we load balance and ensure stickiness of the connections if all the user requests come from the firewall's IP address? I already asked customer support whether the firewall can inject an X-Forwarded-For header into the HTTP requests, but it's not possible. Any ideas?

Linus does not push the flush toilet button. He simply says: make clean!
5 REPLIES

L6 Presenter

Hi @grenzi ,

 

As per my understanding, this can easily be managed at the load balancer level. You can look into a Persistence Profile; the type that would help here is Destination Affinity.

 

Hope it helps!

M

Check out my YouTube channel - https://www.youtube.com/@NetworkTalks

Hi @SutareMayur ,

 

I'll try to load balance with HAProxy at layer 4, so that a single TCP stream is always forwarded to the same backend server. I still have to figure out whether the entire user session is always carried over a single TCP connection, but I'm not convinced it is.
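To make this concrete, here is a minimal layer 4 sketch of what I'd test with HAProxy (global/defaults sections and timeouts omitted; backend names and addresses are just placeholders, not our real setup):

frontend guacamole_fe
    mode tcp
    bind :443
    default_backend guacamole_be

backend guacamole_be
    mode tcp
    balance leastconn
    # Each new TCP connection is balanced independently; nothing here ties
    # a user's later connections back to the same backend server.
    server guac1 192.0.2.11:443 check
    server guac2 192.0.2.12:443 check

As the comment says, every new TCP connection is balanced on its own, which is exactly why I'm not sure this is enough if a user session is spread over several connections.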

 

Thank you,

G.

Linus does not push the flush toilet button. He simply says: make clean!

@grenzi,

Because Guacamole is just a standard website from an access perspective, you should be able to use Cookie Insert (Barracuda) or Sticky Cookie (NGINX) on the traffic. This allows each session to be load-balanced like you would expect, but it inserts a cookie so that the session persists on a single server. That's how I would deal with this.
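As a rough illustration, here is a minimal sketch of the sticky-cookie idea in NGINX (upstream name and backend addresses are placeholders; TLS certificates and the WebSocket proxy headers Guacamole needs are omitted):

upstream guacamole {
    server 192.0.2.11:8080;
    server 192.0.2.12:8080;
    # The load balancer sets the "srv_id" cookie on the first response and
    # uses it to pin later requests from the same browser to the same backend.
    sticky cookie srv_id expires=1h path=/;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://guacamole;
    }
}

Because persistence is keyed on the cookie rather than the client IP, it doesn't matter that every request arrives from the firewall's address.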

 

Now if you run into something that isn't capable of being load-balanced for any reason, that's when you have to get kind of tricky with clientless access until additional features come online to pass XFF and such. Effectively you just manually load-balance by creating multiple different apps and feeding those apps to different user groups. It's a much more manual process, but for things that really can't load-balance well, that's effectively the only "solution".

Hi @BPry, thank you!

 

I'll do some tests with NGINX (I need to learn more about Sticky Cookie).

 

Regards,

G.

Linus does not push the flush toilet button. He simply says: make clean!

Just found out that Sticky Cookie session persistence is only available in NGINX Plus. It seems that HAProxy can be configured in a similar way, so I'll try HAProxy.
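For the record, the HAProxy equivalent looks roughly like this minimal sketch (cookie name, server names and addresses are placeholders): HAProxy inserts its own persistence cookie and routes each browser back to the server recorded in it.

backend guacamole_be
    mode http
    balance roundrobin
    # Insert a SRVID cookie in responses that don't carry one; later requests
    # presenting the cookie are sent to the matching server below.
    cookie SRVID insert indirect nocache
    server guac1 192.0.2.11:8080 check cookie g1
    server guac2 192.0.2.12:8080 check cookie g2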

Linus does not push the flush toilet button. He simply says: make clean!