Well, a lot of the docs I've seen regarding teaming list device stacking as an explicit feature[1], so it was the first thing that came to mind for an activebackup on top of two 3-link LACP teams. Given that stacking is buggy and not really tested, I'll drop it and go with the suggested LACP-only config. The only thing missing there is the 90-second delay_up I had configured on the stacked activebackup, though for real-world use cases putting that delay on all of the enoX ports should be mostly equivalent.
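For reference, the LACP-only config I have in mind would be roughly the following (just a sketch; the eno1..eno6 names are placeholders for my actual NICs and I haven't run this exact file yet):

{
  "device": "team0",
  "runner": {
    "name": "lacp",
    "active": true,
    "fast_rate": true,
    "tx_hash": ["eth", "ipv4", "ipv6"]
  },
  "link_watch": {
    "name": "ethtool",
    "delay_up": 90000
  },
  "ports": {
    "eno1": {},
    "eno2": {},
    "eno3": {},
    "eno4": {},
    "eno5": {},
    "eno6": {}
  }
}

delay_up is in milliseconds, so 90000 should match the 90 seconds I had before, and a top-level link_watch should apply to every port, which covers the "delay on all enoX ports" part.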

Regarding the kernel locking issue, I can provide some more debug info; I just need to know how to obtain it.
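In case it's the usual suspects, I can already grab things like this once it locks up (just guessing at what's useful; this assumes sysrq is enabled and <pid> is the stuck teamd process):

# dump stack traces of all blocked (D-state) tasks to the kernel log
echo w > /proc/sysrq-trigger
dmesg

# kernel stack of the stuck process
cat /proc/<pid>/stack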

[1] Case in point: http://libteam.org/files/teamdev.pp.pdf page 6

2016-08-16 11:09 GMT+02:00 Jiri Pirko <jiri@resnulli.us>:
Tue, Aug 16, 2016 at 11:01:43AM CEST, jamie.bainbridge@gmail.com wrote:
>On 16 August 2016 at 18:52, Jiri Pirko <jiri@resnulli.us> wrote:
>> Tue, Aug 16, 2016 at 10:15:53AM CEST, mariusz.g.mazur@gmail.com wrote:
>>>I'm not on red hat and don't use their networking scripts. For what I use,
>>>I do have a workaround to guarantee that vlans are removed before teamd
>>>tries to remove a stacked interface.
>>>
>>>The bug report was for figuring out where the kernel lock is and maybe
>>>fixing it.
>>
>>
>> Got it. Looks like a mutex deadlock to me, so a kernel issue. Would be good
>> if you can debug it some more. The stacked teams use case is quite
>> unusual.
>>
>> btw why do you stack teams? why don't you just have one team with 6
>> devices?
>
>To elaborate, it is not required to stack teams in this way to achieve
>failover with LACP.
>
>Only one Aggregator is used at a time. If you put all 6 interfaces in
>the one Team, three ports will negotiate one Aggregator ID, and the
>other three ports will negotiate another Aggregator ID. If one
>Aggregator goes down then the other Aggregator is used. In this way, LACP
>has active-backup failover built into the protocol.
>
>I've never tried this on Team but it works well with bonding. Actually
>this is the only way to do it with bonding, because you cannot stack
>bonds.

Actually you can:
test1:~/net-next$ sudo ip link add bondx1 type bond
test1:~/net-next$ sudo ip link add bondx2 type bond
test1:~/net-next$ sudo ip link set bondx1 master bondx2
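
And the same should work for team devices created directly via ip (a sketch, untested here):
test1:~/net-next$ sudo ip link add teamx1 type team
test1:~/net-next$ sudo ip link add teamx2 type team
test1:~/net-next$ sudo ip link set teamx1 master teamx2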

I was thinking about disabling stacking for team in the past. Anyway, it is
not really needed.

>
>Bonding offers aggregator selection based on bandwidth and number of
>slaves with the "ad_select" parameter. Just skimming the wiki, I could
>not see an equivalent with Team, but check the source.

Sure, you can do that and more with team. Just see the teamd.conf man page:

       runner.agg_select_policy (string)
              This selects the policy of how the aggregators will be selected. The following are available:

              lacp_prio — Aggregator with highest priority according to LACP standard will be selected. Aggregator priority is affected by per-port option lacp_prio.

              lacp_prio_stable — Same as previous one, except do not replace selected aggregator if it is still usable.

              bandwidth — Select aggregator with highest total bandwidth.

              count — Select aggregator with highest number of ports.

              port_options — Aggregator with highest priority according to per-port options prio and sticky will be selected. This means that the aggregator containing the port with the highest priority will be selected unless at least one of the ports in the currently selected aggregator is sticky.

              Default: lacp_prio
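So to get the rough equivalent of bonding's ad_select, you set it in the runner section of the team config, e.g. (a minimal sketch, device name assumed):

{
  "device": "team0",
  "runner": {
    "name": "lacp",
    "agg_select_policy": "count"
  }
}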



>
>Jamie
>
>
>>>2016-08-14 10:24 GMT+02:00 Jiri Pirko <jiri@resnulli.us>:
>>>
>>>> Tue, Aug 09, 2016 at 11:04:48AM CEST, mariusz.g.mazur@gmail.com wrote:
>>>> >I have three team interfaces:
>>>> >- sw1 and sw2, both are three-NIC lacps
>>>> >- and an activebackup called team0 (I'm open to better naming suggestions
>>>> >:) on top of sw1+sw2.
>>>> >
>>>> >Here is what vlan interface teardown looks like using the unstacked sw1
>>>> >interface:
>>>> >
>>>> >[root@vc1n3 ~]# systemctl start teamd@sw1
>>>> >[root@vc1n3 ~]# ip l s sw1 up
>>>> >[root@vc1n3 ~]# ip link add link sw1 name lan.246 type vlan id 246
>>>> >[root@vc1n3 ~]# ip l s lan.246 up
>>>> >[root@vc1n3 ~]# systemctl stop teamd@sw1
>>>> >[root@vc1n3 ~]#
>>>> >
>>>> >Everything went fine, both sw1 and lan.246@sw1 are now gone.
>>>> >
>>>> >Now let's try this with stacked team interfaces:
>>>> >
>>>> >[root@vc1n3 ~]# systemctl start teamd@sw1 teamd@sw2
>>>> >[root@vc1n3 ~]# systemctl start teamd@team0
>>>> >[root@vc1n3 ~]# ip l s team0 up
>>>> >[root@vc1n3 ~]# ip link add link team0 name lan.246 type vlan id 246
>>>> >[root@vc1n3 ~]# ip l s lan.246 up
>>>> >[root@vc1n3 ~]# systemctl stop teamd@team0
>>>> >
>>>> >… and it freezes. I can't tell you what the state of the network
>>>> >interfaces
>>>> >is, because running 'ip link show' also freezes. The teamd process becomes
>>>> >unkillable. The only thing left to do now is to hard reset the system, because I
>>>> >can't do a proper shutdown in any shape or form.
>>>>
>>>>
>>>> Looks like this might be fixed with Xin Long's patch I just applied. Try
>>>> the current git please.
>>>>
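(For completeness, "trying the current git" means building teamd from the libteam tree, roughly like this; a sketch, assuming the usual autotools dependencies such as libnl3, libdaemon and jansson are installed:)

git clone https://github.com/jpirko/libteam.git
cd libteam
./autogen.sh
./configure
make
sudo make install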