06-17-2010 05:36 AM
Hi,
I am trying to get an aggregation link up between a Cisco switch and a PA-4050 (v3.1.2). I have two links in the group and have configured L3 subinterfaces for separate VLANs. I am able to send traffic across these links, but they are clearly not functioning as aggregated interfaces: I lose packets when failing one of the two links (they behave more like grouped ports, and they remain gray on the display; picture attached). The Cisco side is not happy that there is a trunk either.
Can anyone send me the correct configuration for the PA and Cisco sides to make this work? The nearest description I found says the PA does not support LACP and requires a static configuration, but I am not clear on what that would look like. Thanks!
06-17-2010 09:41 AM
Hi Andrew,
I think your cisco config should look something like this:
port-channel load-balance dst-ip
interface Port-channel5
switchport access vlan 10
switchport mode access
interface GigabitEthernet0/1
switchport access vlan 10
switchport mode access
channel-group 5 mode passive
interface GigabitEthernet0/2
switchport access vlan 10
switchport mode access
channel-group 5 mode passive
12/21/2011 - I'm editing this post to reduce the confusion. The config above is incorrect and I have struck it through. The config below is correct, as confirmed by the other participants:
port-channel load-balance dst-ip
interface Port-channel5
switchport access vlan 10
switchport mode access
interface GigabitEthernet0/1
switchport access vlan 10
switchport mode access
channel-group 5 mode on
interface GigabitEthernet0/2
switchport access vlan 10
switchport mode access
channel-group 5 mode on
This will keep LACP from attempting to negotiate.
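Once the channel is up, the bundle can be confirmed from the Cisco side with the standard show command (a sketch; group number 5 matches the config above):

```
! Both members should show flag (P) bundled under Po5,
! and the Protocol column should read "-" (no LACP/PAgP)
show etherchannel 5 summary
```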
Cheers,
Kelly
06-17-2010 12:32 PM
I have this working between PAN-4050 and Cisco Nexus 7000
Here is the Cisco config. Passive mode didn't work; I had to use active. Also, in the GUI the Ethernet links show green, while the ae link shows gray, as shown in the attached pics.
interface Ethernet2/1
switchport
switchport mode trunk
switchport trunk native vlan 600
switchport trunk allowed vlan 6,600-602
logging event port link-status
logging event port trunk-status
channel-group 1 mode active
no shutdown
And the PAN config
network {
interface {
ethernet {
ethernet1/1 {
link-state auto;
aggregate-group ae1;
link-duplex full;
link-speed 1000;
}
ethernet1/2 {
link-state auto;
aggregate-group ae1;
link-duplex full;
link-speed 1000;
}
ethernet1/3 {
link-state auto;
aggregate-group ae1;
link-duplex full;
link-speed 1000;
}
ethernet1/4 {
link-state auto;
aggregate-group ae1;
link-duplex full;
link-speed 1000;
}
....
}
aggregate-ethernet {
ae1 {
layer3 {
interface-management-profile MGMT;
ip {
x.x.x.x/x { } <----replace with your IP
}
}
}
}
}
}
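Since the original question involved L3 subinterfaces per VLAN, a matching subinterface under ae1 would look roughly like this. This is a sketch: the `units` node name, the `ae1.600` unit, and the tag are assumptions matching the trunk's native VLAN in the Cisco config above, and the exact node names may vary by PAN-OS version:

```
aggregate-ethernet {
ae1 {
layer3 {
units {
ae1.600 {
tag 600;
ip {
x.x.x.x/x;
}
}
}
}
}
}
```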
06-17-2010 02:13 PM
Kelly,
Thanks for your response. I noticed your comment on the article https://live.paloaltonetworks.com/docs/DOC-1098 that the PAN can now support proper link aggregation using a passive configuration. I am pretty sure I tried this on the Cisco, but I will try again when I am back in the lab on Monday (I was seeing the same symptoms as sreynolds' reply to my post). Could I ask if you know whether the ae port status stays grey or should turn green? If you look at my original post you can see the attachment shows them as grey.
The problem I am seeing is that I lose alternate packets for about 10 pings (5 pass, 5 fail) when a trunk port is brought up or down. If this were a Cisco-to-Cisco setup I would lose one ping at most. The ports on the PAN are L3. This may just be how it works; I want to make sure.
Thanks Again,
Andy..
06-17-2010 02:54 PM
I have to admit that I have limited experience on the PAN with AE interfaces, though I have done it a couple of times. I'm speaking from experience with other vendors, since our implementation at PAN is nearly identical to how some other security devices implement link aggregation.
We do link aggregation but we do not fully implement 802.3ad, which means we only support a static configuration with no LACP. Other than LACP, which is a control protocol on top of link aggregation, link-agg configuration is local and should be interoperable with almost any implementation. For instance, there is no particular requirement on how load balancing is performed on either end (one side can do per-session, the other per-packet, and it should still "work").
This is why I suggested using "passive" above, so LACP would not try to negotiate and the agg group would be forced up. It looks like doing the opposite has worked for someone else, so now I'm confused. I don't remember whether the interface will turn green; I suspect it should. You might check with Support to get a definitive answer here.
Cheers,
Kelly
05-02-2011 03:32 AM
Hello,
I tried out what works with PAN-OS and Cisco IOS. Here is my description of what you have to do to create an aggregate interface between a Cisco switch and a PA device:
With best regards
Ronald Jaeckel
09-26-2011 01:56 AM
Could anyone from PAN Support give a CORRECT answer to the question, please?
How do I have to configure a Cisco switch to properly handle PAN link aggregation in L3 mode (subinterfaces)?
Do I have to configure LACP? Do I have to set it to Active or Passive mode?
I can't try this in a lab!
Thanks in advance
09-26-2011 02:20 AM
You should use neither active mode nor passive mode.
LACP is not supported at all by PAN, and even if it were supported, it would not be of any help for L3 ports.
Below two examples of what we use.
On a pair of Nexus switches, a 2 x 10Gb vPC channel to a PA-5050
interface port-channel11
switchport mode trunk
vpc 11
switchport trunk native vlan xx
switchport trunk allowed vlan yy-zz
spanning-tree port type edge trunk
interface Ethernet1/11
switchport mode trunk
switchport trunk native vlan xx
switchport trunk allowed vlan yy-zz
channel-group 11
On a Catalyst 6xxx, a 4 x 1Gb channel to a PA-5020
interface Port-channel21
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan yy-zz
switchport mode trunk
logging event link-status
logging event trunk-status
logging event bundle-status
logging event subif-link-status
spanning-tree portfast edge trunk
interface GigabitEthernet8/30
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan yy-zz
switchport mode trunk
logging event bundle-status
speed 1000
duplex full
channel-group 21 mode on
end
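Either bundle can be verified with the platform's summary command (a sketch; the group numbers match the examples above):

```
! Nexus: Po11 should show (SU) and each member (P)
show port-channel summary
! Catalyst: the Protocol column for Po21 should read "-" for a static bundle
show etherchannel 21 summary
```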
09-26-2011 02:36 AM
Thanks Bart.
So, I need to use a "classic" EtherChannel configuration on the switch side? No dynamic configuration (LACP-like)?
Many many thanks!
09-26-2011 02:40 AM
Yes indeed, the classic configuration works fine with PAN:
"channel-group <n> mode on"
05-09-2013 01:08 AM
Hey guys, I am having an issue with a 2 x 10Gb (vPC) port-channel between two Nexus 5548s and a PAN 5050. I can get the port channel to come up on both switches; however, I get around 25% packet loss when pinging over it. I have changed the port-channel load balancing to src-dst-ip and have tried a couple of different configurations on the Cisco side with no resolution. As soon as I shut one side of the vPC down it cleans up.
PAN: 5.1.5
Nexus: 6.0.2
Both sides of the vPC are configured the same and show up.
description PAN5050
switchport mode trunk
switchport trunk allowed vlan 128,135,159
channel-group 201
description PAN5050
switchport mode trunk
switchport trunk allowed vlan 128,135,159
vpc 201
SW-1: 201 Po201(SU) Eth NONE Eth1/1(P)
SW-2: 201 Po201(SU) Eth NONE Eth1/1(P)
05-21-2013 03:07 PM
Why hasn't anyone from Palo Alto replied to this?
06-24-2020 09:30 AM
Thank you for the configuration suggestion. Can you please tell me why we need to load-balance the port-channel with only "dst-ip" instead of "src-dst-ip"?
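For context: "dst-ip" hashes only the destination address, so all traffic toward a single host always picks the same member link, while "src-dst-ip" folds both addresses into the hash and generally spreads flows across more members. Either method works with a static bundle, since the hashing is purely a local decision on each end. A sketch (keyword availability varies by platform and IOS release):

```
! Hash on both source and destination IP addresses
port-channel load-balance src-dst-ip
```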