Frequently changing IP for a FQDN


L0 Member

FQDN: "dc.applicationinsights.azure.com"

The IP of the above Azure FQDN changes rapidly, sometimes even within a second. I asked my Palo Alto firewall team to add this FQDN to the allow policies so that my deployed application can communicate with Azure App Insights and send logs. The observation is that even after adding the FQDN to the allow policy, traffic is denied. The reason the network team gave was that the FQDN is unstable on Azure's side and needs to be reported, since its IP changes so often. Also, the "Minimum FQDN Refresh Time (sec)" setting is set to 30, and the PA firewall team does not want to change it because that is the default. Can someone shed light on how this can be handled on the PA firewall?

1 accepted solution

Accepted Solutions

L6 Presenter

In short... it can't. At slightly more length: there are some ugly workarounds.

 

The FQDN "dc.applicationinsights.azure.com" is what is known as a fast flux DNS name. The DNS response carries a very short TTL before it is no longer valid, often less than 30 sec, while 30 sec is the minimum DNS caching time of virtually all DNS servers. Fast flux has long been a technique used by scammers and hackers to hide their compromised assets by constantly flipping destination IPs; unfortunately, it is now being used more and more by legitimate sites.

 

dc.applicationinsights.azure.com is a CNAME for global.in.ai.monitor.azure.com with a 300sec TTL

global.in.ai.monitor.azure.com is a CNAME for global.in.ai.privatelink.monitor.azure.com with a 25sec TTL*

global.in.ai.privatelink.monitor.azure.com is a CNAME for dc.trafficmanager.net with a 25sec TTL*

dc.trafficmanager.net is a CNAME for wus204-breeziest-in.cloudapp.net** with a 10sec TTL*

wus204-breeziest-in.cloudapp.net is an A record for the destination IP 40.78.253.203** with a 10sec TTL*

*fast flux DNS

**constantly changing
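The point of the chain above can be sketched in a few lines: the usable lifetime of any cached answer is bounded by the *shortest* TTL anywhere along the CNAME chain. The TTL values below are the ones observed in this thread; they will differ between queries and vantage points.

```python
# Each hop in the observed resolution chain: (name, record type, TTL in sec).
# These are snapshot values from one query, not stable facts about Azure.
chain = [
    ("dc.applicationinsights.azure.com", "CNAME", 300),
    ("global.in.ai.monitor.azure.com", "CNAME", 25),
    ("global.in.ai.privatelink.monitor.azure.com", "CNAME", 25),
    ("dc.trafficmanager.net", "CNAME", 10),
    ("wus204-breeziest-in.cloudapp.net", "A", 10),
]

# A resolver can only trust the full answer for as long as its shortest TTL.
effective_ttl = min(ttl for _name, _rtype, ttl in chain)
print(f"Effective answer lifetime: {effective_ttl} sec")  # 10 sec
```

With a 10 sec effective lifetime and a 30 sec minimum FQDN refresh on the firewall, the firewall's cached IP is stale for roughly two thirds of the time even in the best case.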

 

So the endpoint IP is changing every 10 sec (or faster, depending on how diverse your DNS servers are). When the Palo Alto queries the FQDN address object to learn its IP it gets one answer, while the client querying the same name a few seconds later gets a completely different IP. The Palo Alto has a minimum 30 sec cache timer (which I believe cannot be decreased) to prevent it from thrashing by constantly re-querying invalid/broken address objects.

 

In short, you can't use an address object to track these FQDNs, because the DNS answer is constantly changing (and no single response gives the full set of possible answers). Depending on how you are trying to firewall/filter sites on the Palo Alto, there are a couple of options.

1) Instead of using an FQDN address object, you can query the DNS over a long period of time and learn all the possible responses. You can then create an IP address object for each, add all the IP objects to an address group, and use that address group in your Palo Alto policy. From a quick look I see at least 5 different intermediate CNAMEs and 5 final destination IPs, though I suspect the total pool is far larger depending on where you query from.

This has the advantage that you get an address object usable in all firewall Security Policies, but the disadvantage that the DNS answers are no longer tracked: if azure.com changes in the future you will have to re-query all the answers and rebuild the address object list.
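The "query over a long period" step of option 1 can be sketched as a small script that repeatedly resolves the name and accumulates every distinct A record it sees. This is a minimal sketch, not a turnkey tool; the sample count and interval are arbitrary choices, and the real pool of answers depends heavily on which resolvers you query from.

```python
import socket
import time

def collect_ips(fqdn, samples=10, interval=15, resolve=None):
    """Repeatedly resolve `fqdn` and accumulate every distinct A record seen.

    `resolve` is injectable (e.g. for testing or to use a non-system
    resolver); by default it uses the OS resolver via the socket module.
    """
    if resolve is None:
        resolve = lambda name: socket.gethostbyname_ex(name)[2]
    seen = set()
    for i in range(samples):
        seen.update(resolve(fqdn))
        if i < samples - 1:
            time.sleep(interval)
    return sorted(seen)

# Each resulting IP becomes one address object; group them into an address
# group on the firewall. Run this from several vantage points / resolvers,
# since the answer pool differs by location.
```

Note that `socket.gethostbyname_ex` returns only what the local resolver hands back at that moment, which is exactly why many samples over time (and from multiple locations) are needed to approximate the full pool.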

 

2) If you are only accessing the site via web protocols (HTTP/HTTPS), you can create a URL filter for the DNS name and use it in a Security Policy to match traffic. This has the advantage that the destination IP no longer matters, but the disadvantages that it won't apply to other protocols (e.g. FTP, SQL/DB, certain API interfaces) and that, if you are not doing SSL decryption, other DNS names under the same server certificate may slip through the filtering.


4 Replies
L0 Member

For trusted SaaS providers such as Azure/M365 and the like, I suppose the risk of allowing all IP addresses instead of using an FQDN is limited from a security standpoint. But it's not a bulletproof solution, since they may add more addresses, or shift them between services/FQDNs at some point, which could still be a problem for service reachability. It is somewhat beyond me why these providers have to change DNS results so frequently; 10 seconds seems illogical, as I cannot imagine that congestion/RTT/other CDN metrics change that fast. But here we are 😞. So is there really no more structural solution to this issue? (I'd like to imagine a trusted API endpoint that keeps the IP lists up to date, but that requires all SaaS providers to go along with it, and that seems unlikely to happen.) On the other hand, we cannot decrypt all TLS handshakes either (M365 has a number of so-called performance-optimized FQDNs that are discouraged from being decrypted/re-encrypted) to let us apply URL filtering beyond the FQDN level. Not ideal, all in all.

L0 Member

I'd like to add that for M365 the "issue" is recognized, and their answer is the "Office 365 URLs and IP address ranges - Microsoft 365 Enterprise | Microsoft Learn" page: a JSON feed that is published well ahead of changes. The question is how we can integrate this directly into Panorama.
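For what it's worth, Microsoft also exposes that data programmatically via the Office 365 IP Address and URL web service. The sketch below assumes the worldwide-instance endpoint and `clientrequestid` parameter described in Microsoft's docs (verify against the current documentation before relying on it); the parsing helper just flattens the `ips` arrays out of the returned endpoint sets.

```python
import json
import urllib.request
import uuid

# Assumed endpoint from Microsoft's Office 365 IP/URL web service docs;
# the service expects a client-supplied GUID as clientrequestid.
ENDPOINT = ("https://endpoints.office.com/endpoints/worldwide"
            f"?clientrequestid={uuid.uuid4()}")

def fetch_endpoint_sets(url=ENDPOINT):
    """Download the JSON list of endpoint sets from the web service."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def extract_ips(endpoint_sets):
    """Flatten the 'ips' arrays (CIDR strings) out of the endpoint sets.
    Not every entry has an 'ips' key, so missing ones are skipped."""
    cidrs = set()
    for entry in endpoint_sets:
        cidrs.update(entry.get("ips", []))
    return sorted(cidrs)

# Usage (requires network access):
#   cidrs = extract_ips(fetch_endpoint_sets())
```

A script like this could regenerate an external dynamic list or a set of address objects on a schedule, which is about as close to "direct integration" as you can get without native Panorama support.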

For other SaaS providers, one might hope they follow suit in publishing a dynamic list, or stay away from fast fluxing altogether.

Thanks

Rik

Palo Alto does provide an EDL hosting service for some SaaS services, including M365/Azure. It's automatically updated daily, though it doesn't give you quite the same granularity as manually building the lists. Unfortunately, there is no way to pull EDLs into GlobalProtect client configs yet, but it works on the firewall. You can find the available options here:

 

https://docs.paloaltonetworks.com/resources/edl-hosting-service
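As a quick sanity check before pointing a firewall at one of those feeds, it can help to pull the list down and inspect it yourself. The sketch below assumes the common plain-text EDL format (one entry per line, `#` starting a comment); take the actual feed URL from the EDL hosting service page linked above.

```python
import urllib.request

def parse_edl(text):
    """Return the non-empty, non-comment entries of a plain-text EDL."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            entries.append(line)
    return entries

def fetch_edl(url):
    """Download an EDL feed and return its parsed entries.
    (URL must come from the EDL hosting service page; none is hard-coded
    here because the feed paths are product-specific.)"""
    with urllib.request.urlopen(url) as resp:
        return parse_edl(resp.read().decode("utf-8"))
```

Diffing two downloads taken a day apart also gives a feel for how much churn the daily updates actually carry.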
