Traceroute traversing ION are never visible


L0 Member

When running a traceroute that traverses a Prisma SD-WAN ION, every hop beyond the ION returns "Request timed out." Is this expected behavior that we cannot fix? We have a number of tools that rely on traceroute for path visualization, and this has been a recurring issue we have found with the IONs.


4 REPLIES

L2 Linker

Unfortunately, due to the way the WAN interfaces are hardened, the TTL-expired (ICMP Time Exceeded) response from each hop of the traceroute is dropped. There's currently no way to override this behavior.
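To make the symptom concrete: traceroute sends probes with an incrementing TTL and prints each router that answers with an ICMP Time Exceeded message, so if an edge device drops those replies, every hop beyond it shows as "* * *" while the final destination (which answers the probe itself) still appears. The following is a toy simulation of that effect, not Prisma SD-WAN code; all names are illustrative:

```python
# Toy model of why a hardened WAN edge hides traceroute hops.
# Real traceroute sends probes with incrementing TTL and relies on
# ICMP Time Exceeded replies from each intermediate router.

def simulate_traceroute(path, drops_ttl_expired_after):
    """path: ordered list of hop IPs; drops_ttl_expired_after: index of
    the hardened edge device. Time Exceeded replies from hops beyond it
    are dropped, so those hops display as '*'."""
    results = []
    for ttl, hop in enumerate(path, start=1):
        if ttl == len(path):
            # The destination answers the probe directly (port unreachable
            # or echo reply), so it is visible even when intermediate
            # Time Exceeded replies are lost.
            results.append(hop)
        elif ttl > drops_ttl_expired_after:
            results.append("*")  # ICMP Time Exceeded dropped at the edge
        else:
            results.append(hop)
    return results

path = ["10.150.42.17", "192.168.180.1", "99.178.168.1", "8.8.8.8"]
print(simulate_traceroute(path, drops_ttl_expired_after=1))
# hops beyond the edge (index 1) appear as '*', but the destination answers
```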

L2 Linker

Actually, I have a workaround for you: add "Direct on Any Public" to your Enterprise-Default rule, and this will allow it to work.

 

Without policy change: 

admin@PA-440> traceroute source 10.150.42.18 host 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 10.150.42.17 (10.150.42.17) 1.162 ms 1.133 ms 1.141 ms
3 * * *
4 * * *
5 * * *
6 * * *
7 * * *
8 * * *
9 * * *
10 * * *
11 * * *
12 * * *
13 * * *
14 dns.google (8.8.8.8) 14.616 ms 14.630 ms 18.613 ms
admin@PA-440>

With Policy change:

admin@PA-440> traceroute source 10.150.42.18 host 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 10.150.42.17 (10.150.42.17) 6.675 ms 6.685 ms 6.679 ms
3 * * *
4 192.168.180.1 (192.168.180.1) 11.586 ms 11.584 ms 11.578 ms
5 * * *
6 99-178-168-1.lightspeed.rlghnc.sbcglobal.net (99.178.168.1) 17.506 ms 6.041 ms 5.971 ms
7 99.173.76.21 (99.173.76.21) 5.958 ms 5.951 ms 8.910 ms
8 32.130.25.178 (32.130.25.178) 19.895 ms * *
9 32.130.20.54 (32.130.20.54) 16.903 ms 16.908 ms 16.888 ms
10 32.130.104.145 (32.130.104.145) 17.129 ms 16.958 ms 16.956 ms
11 12.255.10.64 (12.255.10.64) 16.868 ms 16.847 ms 17.571 ms
12 172.253.71.63 (172.253.71.63) 15.974 ms 15.976 ms 15.955 ms
13 142.251.241.187 (142.251.241.187) 15.925 ms 16.955 ms 15.831 ms
14 dns.google (8.8.8.8) 15.816 ms 15.793 ms 15.718 ms
admin@PA-440>  

It may seem like this policy would allow private traffic to be sent out the direct path, but there are underlying rules that stop that from happening; this change only allows the response to be handled correctly.

 

[Attached screenshot: rgallagher_0-1749676062042.png]

 

Can you explain what is happening when it's configured this way? I added this to our lab network and we are now seeing hops that we were not before. I'm just confused about what enabling the L3 failure path on just the Enterprise-Default rule is doing, and what impact, if any, it has on any other rules with the Path Stack/Set.

L2 Linker

The short version is that the inbound NAT happens before path policy for the return packets. The return traffic then matches the Enterprise-Default rule post-NAT, and the permitted paths are evaluated: it came in via DIA, which is not a permitted path for that rule, so it gets dropped. Adding the L3 failure path means this traffic now passes the path evaluation... and bingo, your traceroute works!
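The evaluation order described above can be sketched as a toy model (the names and steps are illustrative, not the actual Prisma SD-WAN internals):

```python
# Hypothetical model of the return-packet handling described above:
# 1) inbound NAT happens first, 2) the post-NAT flow matches the
# Enterprise-Default rule, 3) the arrival path is checked against that
# rule's permitted paths, and the packet is dropped on a mismatch.

def handle_return_packet(arrival_path, permitted_paths):
    # Step 1: inbound NAT (modeled as a no-op; it just means the flow is
    # matched with its post-NAT addresses).
    # Step 2: rule match -> Enterprise-Default.
    # Step 3: path evaluation.
    if arrival_path in permitted_paths:
        return "forwarded"
    return "dropped"

# Without "Direct on Any Public": the reply arrived via DIA ("direct"),
# which is not in the rule's permitted paths, so it is dropped.
print(handle_return_packet("direct", {"vpn"}))            # dropped
# With the workaround, "direct" is a permitted (L3 failure) path.
print(handle_return_packet("direct", {"vpn", "direct"}))  # forwarded
```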

 

From a risk perspective, there is no additional risk: there are underlying rules that block any RFC 1918-destined traffic from ever being sent DIA.
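For illustration, the kind of guard described above amounts to a membership check against the three RFC 1918 private ranges. This is a sketch of that check using Python's standard `ipaddress` module, not the actual underlying rule implementation:

```python
# Illustrative guard: RFC 1918 (private) destinations must never be sent
# out the direct internet access (DIA) path.
import ipaddress

RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def allowed_dia(destination: str) -> bool:
    """Return False for RFC 1918 destinations (blocked from DIA)."""
    addr = ipaddress.ip_address(destination)
    return not any(addr in net for net in RFC1918_NETS)

print(allowed_dia("8.8.8.8"))       # True  -> may go direct
print(allowed_dia("10.150.42.18"))  # False -> never sent DIA
```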
