Hi,
Nowadays, when vdsm receives the setupNetwork verb, it mangles /etc/sysconfig/network-scripts/ifcfg-* files and restarts the network service, so that they are read by the responsible SysV service.
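For illustration, the kind of files being mangled, in an example bridged setup (names and values here are only illustrative, not vdsm's exact output):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BRIDGE=ovirtmgmt
    ONBOOT=yes
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
    DEVICE=ovirtmgmt
    TYPE=Bridge
    BOOTPROTO=dhcp
    ONBOOT=yes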
This is very much Fedora-oriented, and not in line with the newer trends in Linux network configuration. Since we want oVirt and Vdsm to be distribution agnostic, and to support new features, we have to change.
setupNetwork is responsible for two different things: (1) configure the host networking interfaces, and (2) create virtual networks for guests and connect them to the world over (1).
Functionality (2) is provided by building Linux software bridges and vlan devices. I'd like to explore moving it to Open vSwitch, which would enable a host of functionalities that we currently lack (e.g. tunneling). One thing that worries me is the need to reimplement our config snapshot/recovery on top of ovs's database.
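For a taste of what that move could look like, here is a minimal sketch of functionality (2) on top of ovs-vsctl (bridge/nic names are invented; this is an illustration, not a design proposal):

    import subprocess

    def run(*args):
        # Thin wrapper; failures surface as CalledProcessError.
        subprocess.check_call(args)

    def define_guest_network(bridge, nic, vlan_tag=None):
        # The bridge that guest vNICs plug into.
        run('ovs-vsctl', 'add-br', bridge)
        # Connect it to the world over the host nic; with ovs a VLAN
        # is just a tag on the port, no separate vlan device is needed.
        if vlan_tag is None:
            run('ovs-vsctl', 'add-port', bridge, nic)
        else:
            run('ovs-vsctl', 'add-port', bridge, nic, 'tag=%d' % vlan_tag)

    define_guest_network('ovirtmgmt', 'eth0', vlan_tag=100)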
As far as I know, ovs is unable to maintain host-level parameters of interfaces (e.g. eth0's IPv4 address), so we need another tool for functionality (1): either speak to NetworkManager directly, or use NetCF via its libvirt virInterface* wrapper.
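For reference, driving functionality (1) through that wrapper would look roughly like this (the device name and address are placeholders; a sketch, not vdsm code):

    import libvirt

    # netcf-style interface XML; 'eth0' and the address are examples.
    IFACE_XML = """
    <interface type='ethernet' name='eth0'>
      <start mode='onboot'/>
      <protocol family='ipv4'>
        <ip address='192.0.2.10' prefix='24'/>
      </protocol>
    </interface>
    """

    conn = libvirt.open(None)  # connect to the local libvirt daemon
    iface = conn.interfaceDefineXML(IFACE_XML, 0)  # persist the definition
    iface.create(0)                                # and bring it up now
    conn.close()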
I have minor worries about NetCF's breadth of testing and usage; I know it is intended to be cross-platform, but unlike ovs, I am not aware of wide Debian usage of it. On the other hand, its API has been ready for vdsm's use for quite a while.
NetworkManager has become ubiquitous, and we should integrate with it more gracefully than with our current setting of NM_CONTROLLED=no. But as DPB tells us, https://lists.fedorahosted.org/pipermail/vdsm-devel/2012-November/001677.htm... we'd better offload integration with NM to libvirt.
We would like to take network configuration in VDSM to the next level and make it distribution agnostic, in addition to setting up the infrastructure for more advanced features going forward. The path we are thinking of taking is to integrate with OVS and, for feature completeness, to use NetCF via its libvirt virInterface* wrapper. Any comments or feedback on this proposal are welcome.
Thanks to the oVirt net team members whose input helped in writing this email.
Dan.
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: vdsm-devel@fedorahosted.org Sent: Sunday, November 11, 2012 4:07:30 PM Subject: [vdsm] Future of Vdsm network configuration
Hi,
As I see it, NetworkManager is a monster: a huge dependency to take on just to create bridges or configure network interfaces... It is true that on a host where NetworkManager lives it would be impolite to define network resources other than via its interface; however, I don't like that we would force NetworkManager on users.
libvirt has long since ceased to be just a virtualization library and has become a system management agent; I am not sure it is the system agent I would have chosen.
I think that all the terms and building blocks got blurred over time... and the resulting integration became more and more complex.
Stabilizing such a multi-layered component environment is much harder than stabilizing a monolithic one.
I would really like to see vdsm as a monolithic component with full control over its resources; I believe this is the only way vdsm can be stable enough to be production grade.
The hypervisor should be a total slave of the manager (or cluster), so I have no problem with bypassing/disabling any distribution-specific tool in favour of atoms (brctl, iproute), in non-persistent mode.
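For illustration, such a non-persistent sequence boils down to something like this (names and addresses are examples):

    import subprocess

    def run(*args):
        subprocess.check_call(args)

    # Plain atoms, nothing written under /etc: a reboot returns the
    # host to its pristine state and the manager reconfigures it.
    run('brctl', 'addbr', 'ovirtmgmt')
    run('brctl', 'addif', 'ovirtmgmt', 'eth0')
    run('ip', 'link', 'set', 'dev', 'ovirtmgmt', 'up')
    run('ip', 'addr', 'add', '192.0.2.10/24', 'dev', 'ovirtmgmt')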
I know this entails some more work, but I don't think it is that complex to implement and maintain.
Just my 2 cents...
Regards, Alon
----- Original Message -----
From: "Alon Bar-Lev" alonbl@redhat.com To: "Dan Kenigsberg" danken@redhat.com Cc: vdsm-devel@fedorahosted.org Sent: Sunday, November 11, 2012 3:46:43 PM Subject: Re: [vdsm] Future of Vdsm network configuration
> The hypervisor should be a total slave of the manager (or cluster), so I have no problem with bypassing/disabling any distribution-specific tool in favour of atoms (brctl, iproute), in non-persistent mode.
So you propose that we would keep the network configuration database ourselves (something like sqlite, maybe), disable network.service and NetworkManager.service, and bring the interfaces we need up and down via brctl/iproute, sysfs and other netlink-speaking interfaces, right?
I won't deny that for hypervisor nodes it sounds really good. For installations on machines that may serve other purposes as well, it could be slightly problematic. Not the part of managing the network, but the part of disabling NetworkManager and network.service.
Since what you said was to bypass NM and network.service, maybe it would be better instead to leave whichever is default enabled, let the user define which interfaces we should manage, and make those unavailable to NM and network.service. There are four cases here:
- NM enabled, network.service disabled: simply create ifcfg-* files for the interfaces that we want to manage, including NM_CONTROLLED=no and the MAC address of the interface.
- NM disabled, network.service enabled: just make sure that the interfaces we are to manage do not have an ifcfg-* file.
- NM disabled, network.service disabled: no special requirements to make it work.
- NM enabled, network.service enabled: make sure that there are no ifcfg-* files for the interfaces we manage, and create an NM keyfile marking the interface as unmanaged.
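To make the first and last cases concrete, the bits involved would look roughly like this (the MAC address is a placeholder):

    # Case 1 - /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    HWADDR=52:54:00:12:34:56
    NM_CONTROLLED=no
    ONBOOT=yes

    # Case 4 - /etc/NetworkManager/NetworkManager.conf
    [keyfile]
    unmanaged-devices=mac:52:54:00:12:34:56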
Alon, just correct me if I am wrong in my interpretation of what you said; I wanted to expand on it to make sure I understood it well.
Best, Toni
----- Original Message -----
From: "Antoni Segura Puimedon" asegurap@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: vdsm-devel@fedorahosted.org, "Dan Kenigsberg" danken@redhat.com Sent: Sunday, November 11, 2012 5:47:54 PM Subject: Re: [vdsm] Future of Vdsm network configuration
Hello Toni,
To demonstrate what I think, let's take this to the extreme...
A hypervisor should be stable and rock solid, so I would use the minimum required dependencies, tightly integrated. For this purpose I would use kernel + busybox + host-manager, where host-manager uses ioctls/netlink to perform the network and storage management and, as we only use qemu/kvm, links directly against qemu. We may add some OPTIONAL infrastructure component like openvswitch for extra functionality.
I, personally, don't see the value in running the hypervisor on generic hosts, meaning running VMs on a host that performs other tasks as well, such as a database server or application server.
But let's say there is some value in that; then we have to ask:
1. What is the stability factor we expect from these hosts?
2. How much do we need to integrate with distribution-specific features?
If the answer to (1) is the same as for a hypervisor, then we take the same software and compromise on the integration.
Otherwise we do the minimum we can for such integration, such as removing the network interfaces from NetworkManager's control.
The reasoning behind my opinion is that components such as dbus, systemd and NetworkManager were designed to solve the problems of the END USER, not to be used as MISSION CRITICAL infrastructure components. This was part of the effort to make the Linux desktop more friendly, but it then leaked into the MISSION CRITICAL core.
The stability of the hypervisor should be the same as or higher than that of the guests it runs, so it cannot rely on non-mission-critical components to achieve that.
The solution could be to write the whole network functionality as plugins, for example: a bridge plugin, a vlan plugin, a bond plugin, etc... Then have implementations of these plugins using NetworkManager, openvswitch or ioctl/netlink, and use the appropriate plugin based on desired functionality and desired stability.
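As a rough sketch of that plugin split (all names invented; two possible backends for a bridge plugin, assuming a per-tier choice of backend):

    import subprocess

    def run(*args):
        subprocess.check_call(args)

    class BridgePlugin(object):
        def configure(self, name, ports):
            raise NotImplementedError

    class IprouteBridge(BridgePlugin):
        """Mission-critical tier: drive the kernel directly."""
        def configure(self, name, ports):
            run('brctl', 'addbr', name)
            for port in ports:
                run('brctl', 'addif', name, port)

    class OvsBridge(BridgePlugin):
        """Feature tier: delegate to Open vSwitch."""
        def configure(self, name, ports):
            run('ovs-vsctl', 'add-br', name)
            for port in ports:
                run('ovs-vsctl', 'add-port', name, port)

    # Chosen per desired functionality and desired stability:
    want_tunneling = False
    plugin = OvsBridge() if want_tunneling else IprouteBridge()
    plugin.configure('ovirtmgmt', ['eth0'])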
I would really like to see a rock-solid monolithic host manager / cluster manager.
I hope I have clarified things a little...
Regards, Alon
Hi Alon,
Alon Bar-Lev wrote on Sun, 11 Nov 2012 at 13:28 -0500:
> As I see it, NetworkManager is a monster: a huge dependency to take on just to create bridges or configure network interfaces...
NM is the default way of configuring the network from F17 on, and it's available on all platforms. It isn't exactly small, but AFAICT it wouldn't pull in any new dependencies, because all its dependencies are already in the Fedora initramfs...
> The reasoning behind my opinion is that components such as dbus, systemd and NetworkManager were designed to solve the problems of the END USER, not to be used as MISSION CRITICAL infrastructure components. This was part of the effort to make the Linux desktop more friendly, but it then leaked into the MISSION CRITICAL core.
This is surely not true for systemd, and as far as I know about NetworkManager, its recent development is moving it towards mission-critical-grade software.
> The solution could be to write the whole network functionality as plugins, for example: a bridge plugin, a vlan plugin, a bond plugin, etc...
Putting this together with other facts (the inability of the current kernel + scripts to handle full IPv6 functionality), you effectively propose writing Yet Another Network Daemon, This Time Done Right.
If you can spend an hour of your time on some networking-related talks, please have a look at these two: https://www.youtube.com/watch?v=lzCLkjjrg1Q (by Pavel Šimerda, one of the NetworkManager developers) and https://www.youtube.com/watch?v=XUgmFyBe_9w (by the SUSE guys developing Wicked).
> I would really like to see a rock-solid monolithic host manager / cluster manager.
Systemd is well on its way to becoming such a monolithic beast that does everything, given its efforts to absorb functionality unrelated to init (syslog, anacron) into its monolithic design.
David
----- Original Message -----
From: "David Jaša" djasa@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: vdsm-devel@fedorahosted.org Sent: Monday, November 12, 2012 7:13:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration
David,
I am more than well aware of the complexity of networking.
Question:
Had you designed an embedded mission-critical component (a car controller computer, for example, or an avionics-grade computer), would you have used systemd and NetworkManager, or even based it on Fedora or RHEL?
I wouldn't... and as far as I am concerned, there is no difference between a hypervisor and these kinds of mission-critical components. I hope we return to this discussion in the future, if and when vdsm gains greater market share.
Regards, Alon.
Hi Alon,
Alon Bar-Lev wrote on Mon, 12 Nov 2012 at 12:22 -0500:
> Had you designed an embedded mission-critical component (a car controller computer, for example, or an avionics-grade computer), would you have used systemd and NetworkManager, or even based it on Fedora or RHEL?
No, I don't have that kind of experience.
> I wouldn't... and as far as I am concerned, there is no difference between a hypervisor and these kinds of mission-critical components. I hope we return to this discussion in the future, if and when vdsm gains greater market share.
My observation goes like this: complex things need complex solutions, otherwise the solutions are incomplete and hacky. Or, alternatively, previously lightweight solutions grow in size and complexity as they mature, ultimately matching the size and complexity of the things they were supposed to be a lighter alternative to.
I'd argue that there are three tiers of mission criticality in the RHEV/oVirt architecture:
* most critical: things that can make VMs crash (the kernel in general, kvm, qemu + spice-server + ...; libvirt and vdsm can be here if they issue stop/destroy commands by error)
* highly critical: things that disrupt normal operation of a VM but that the VM can recover from (network failures, non-corrupting storage failures, ...; most of vdsm's and libvirt's functionality is like this, as are engine failures when a user needs to connect to a VM console)
* critical: most of the engine functionality is at this level; if a VM works, it will continue working, but you can't manage it
From this perspective, NM is well on its way to being stable enough for the second tier once its bugs 683173 and 682872 (@bugzilla.gnome.org) are resolved.
David
On 11/11/2012 10:46 PM, Alon Bar-Lev wrote:
> The hypervisor should be a total slave of the manager (or cluster), so I have no problem with bypassing/disabling any distribution-specific tool in favour of atoms (brctl, iproute), in non-persistent mode.
Do you mean just using the utilities (brctl, iproute) on demand and not keeping any network configuration on the vdsm host? Then the manager would need to reconfigure the network on every host reboot. Actually, I like this way. It could be more flexible than libvirt's virInterface (netcf or NM), with fine-grained control to handle some tough cases. Moreover, it's cleaner than the current mangling of network configuration files.
----- Original Message -----
From: "Mark Wu" wudxw@linux.vnet.ibm.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Dan Kenigsberg" danken@redhat.com, vdsm-devel@fedorahosted.org Sent: Tuesday, November 13, 2012 5:39:12 AM Subject: Re: [vdsm] Future of Vdsm network configuration
> Do you mean just using the utilities (brctl, iproute) on demand and not keeping any network configuration on the vdsm host?
Yes, exactly.
> The hypervisor should be a total slave of the manager (or cluster), so I have no problem with bypassing/disabling any distribution-specific tool in favour of atoms (brctl, iproute), in non-persistent mode.
> Do you mean just using the utilities (brctl, iproute) on demand and not keeping any network configuration on the vdsm host? Then the manager would need to reconfigure the network on every host reboot.
+1, I've raised this in the past: I don't think the network configuration done by the engine should be persisted. The admin sets up the node persistently only to the point that it always succeeds to boot and has a route to the engine; on node activation, the engine updates the network, connects to storage, etc.
Now that setupNetworks can do it in one atomic operation, this is the way to go - very simple. It also eases moving a node from cluster to cluster. With the current concept, after you move the host you need to modify the node's networks to fit the new cluster topology. With non-persistent configuration, placing a host into maintenance would also revert it to its original networking after boot, the same way it disconnects the storage. Then you can easily move the node from cluster to cluster, or even to a different DC; as soon as you activate it, it is configured with the new DC/cluster pair's requirements.
And it's a valuable step in the direction of -> Go go dynamic host allocation :)
So I am not the only insane one here! Good to know!
Alon
Hold your horses - all I've said is that I strongly agree that networks should be dynamically set from the engine; I did not comment on how.
If there is a cross-distribution utility out there that can do this in a *reliable* manner and actually *offloads* logic from VDSM - meaning it's simpler than direct usage of mkdev, ip, brctl, etc. to configure the host's networking - it should be considered.
What's important is the goal, not the way there. One of my goals is returning to the stateless node concept that oVirt node started with so many years (5?) ago. It just makes sense.
BTW, I've always been insane; they just haven't caught up with me yet.
Alon
I couldn't disagree more. What you are suggesting requires that we reimplement every single networking feature in oVirt by ourselves. If we want to support the (absolutely critical) goal of being distro agnostic, then we need to implement the same functionality across multiple distros too. This is more work than we will ever be able to keep up with. If you think it's hard to stabilize the integration of an external networking library, imagine how hard it will be to stabilize our own rewritten and buggy version. This is not how open source is supposed to work. We should be assembling distinct, modular, pre-existing components when they are available. If NetworkManager has integration problems, let's work upstream to fix them. If its dependencies are too great, let's modularize it so we don't need to ship the parts we don't need.
I agree with Adam on this one; reimplementing the network management layer ourselves using only atoms seems like duplication of work that was already done and is available for our use in both NM and libvirt.
Yes, it is not perfect (far from it, actually), but I think we had better focus our efforts on adding new functionality to VDSM and on improving the robustness of the current code (we have issues regardless of any external component we're using).
For the sake of being distribution agnostic, I support the original plan proposed by danken: using OVS combined with the libvirt virInterface* wrapper.
Livnat
The addition of OVS is nice and refreshing (it is the new black). The issue with OVS is that a controller is required; there are a number of proprietary ones, and there are open source solutions. Something/someone needs to configure and manage the OVS. Just adding the libvirt support is not enough - in a nutshell, that is just a matter of setting the network type for the vnic and passing a few additional parameters. Managing and assigning physical NICs to OVS is interesting and challenging. Do you guys have any thoughts about how you want to go about this?
Thanks Gary
Can we just start by running ovs in standalone mode at first? It would provide the basic forwarding function based on MAC learning, plus bond/vlan/tunnel functions by specifying the related options when adding a new port. We could connect each physical nic serving a vm network to an ovs bridge, and then the VMs get external network access. I agree that without adding a controller we can't get a unified control plane, but I think standalone mode could fit the current oVirt network model well. Gary, please correct me if I am wrong - or any suggestions from you?
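(For concreteness, a rough sketch of that wiring via ovs-vsctl; the bridge/port names and the vlan tag are invented for illustration:)

import subprocess

def vsctl(*args):
    subprocess.check_call(('ovs-vsctl',) + args)

# One ovs bridge per vm network; with no controller configured the
# bridge stays in standalone mode and does plain MAC learning.
vsctl('add-br', 'ovsbr0')
vsctl('add-port', 'ovsbr0', 'eth0')              # physical uplink
vsctl('add-port', 'ovsbr0', 'vnet0', 'tag=100')  # vm tap on vlan 100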
Thanks Mark.
Yes, most certainly.
You are correct. This is certainly one way of achieving a first step for integrating with the OVS. My concerns are as follows (maybe some of them do not exist :)):
1. The boot process, with binding of the physical NICs to the OVS.
2. The OVS maintains a database, which may need to be cleaned of tap devices when the appliance reboots. Let's take an edge case into account: say the appliance has a number of VMs running, so there are tap devices for these VMs registered with the OVS. If there is an exception or a power failure, the appliance resets, and those devices will still be registered when it comes back up. Who cleans them, and when?
3. What about the traditional bridged networks - will these be migrated to OVS?
The idea of moving to OVS is great. I just think that all of the flows should be mapped out and listed on a wiki; that will give a nice picture of how the integration can be achieved.
Thanks Gary
+1 for a standalone ovs as a first step.
Regarding binding the physical NICs to the OVS at boot: both the ifup/down scripts shipped with upstream ovs and the bridge-compatibility mode work well in my tests.
Regarding the database: yes, I would also prefer the ovs database to be clean every time it starts. It should know nothing about the configuration when starting, except for what is essential for the host to connect to the management end. In my mind, vdsm/ovs should configure the machine only when requested by the management end; the configuration, however, stays centralized.
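(A rough sketch of such a startup cleanup; it assumes stale ports can be recognized by their vnet* naming, which is only a guess at the real policy:)

import subprocess

def vsctl_list(*args):
    out = subprocess.check_output(('ovs-vsctl',) + args)
    return out.decode().split()

# Drop every vm tap device left in the ovs database by a previous
# run, so the host comes up knowing nothing but its bridges.
for bridge in vsctl_list('list-br'):
    for port in vsctl_list('list-ports', bridge):
        if port.startswith('vnet'):
            subprocess.check_call(('ovs-vsctl', 'del-port', bridge, port))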
As for the traditional bridged network, I don't think we are going to drop support for it. Isn't providing another choice better than having only one? Could we implement a generic layer providing consistent APIs for management, calling different low-level libs/tools in environments whose requirements vary from one to another?
I think it's better to continue this discussion after we get the first draft of the ovs integration page done, as Gary suggested.
As for a generic layer with consistent APIs, I have submitted a patch for it: http://gerrit.ovirt.org/#/c/7915/ It would be appreciated if you could review it.
Thanks Mark.
Hello,
Now that the discussion has calmed down, I want to ask a question once again.
Why isn't this discussion focusing on the interface vdsm will use to access a "network provider"? Why should vdsm core care which "network technology" it actually uses?
With a proper design of such an interface, and the ability to select its implementation using configuration, vdsm will be able to work with various technologies without a change.
The technology can be NetworkManager, ovs, libvirt or basic tools. What is popular now can be unpopular in the future; what is considered stable enough now may not be stable enough for future uses; what is maintained now may be unmaintained in the future.
Developing tightly coupled software is something I would avoid if not absolutely required.
People may vote on which implementation they would like to have now and we can implement that one, while in time we may see other implementations as contributions. This will also allow us to move from one technology to another at a decent effort/cost if required for any reason.
Best Regards, Alon Bar-Lev.
Quantum?
1. That's still a specific implementation. 2. Last I checked, it is far from covering the API vdsm needs for provisioning network configurations, as opposed to just consuming them (i.e., I don't remember Quantum ever intending to provide an API to bond physical interfaces, etc.).
I tend to disagree; Quantum is an interface enabling one to manage virtual networks. If I understand correctly, this is similar to what Alon is suggesting. At the end of the day VDSM will need to interface with linuxbridge, openvswitch, NICs that provide SR-IOV, etc. This may be done either by VDSM or by Quantum agents (in some cases there may be no Quantum agents - for example, if an NVP controller is used). Quantum enables VDSM and oVirt to consume external technologies that are not supported today. For example, if one wants to use Open vSwitch, there is an open source OVS integration managed by Quantum, in which a Quantum agent builds and manages all the flows. Do you want VDSM to do this?
Quantum agents may do this. Yes, it will entail some hooks in VDSM, but it will provide a large majority of the work you guys are talking about. The added bonus is that it works with a number of technologies that are not supported by VDSM. I have yet to understand why VDSM has to reinvent the wheel.
At the moment there is a lot of work being done in Quantum to expose additional services - for example, LBaaS. It would be interesting to know whether the current networking plans address this. It should be something on the radar and, in my opinion, is essential to any networking infrastructure.
Thanks Gary
With proper design of such interface, and the ability to select interface implementation using configuration, vdsm will be able to work with various of technologies without a change.
Technologies can be either network manager, ovs, libvirt or basic. What popular now can be unpopular in future, what is considered stable enough for now, may be not stable enough for future uses, what is maintained now may be unmaintained in future.
Developing tightly coupled software is something I would avoid if not absolutely required.
People may vote which interface they like to have now and we can implement one, while in time we may see other implementations as contributions. This will also allow us to move from one technology to another with decent effort/costs if required for any reason.
Best Regards, Alon Bar-Lev. _______________________________________________ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
I didn't see anything in Quantum leading me to believe it plans to expose a stable API for configuring/provisioning itself.
I do not understand your comment. Via provider networks, Quantum enables one to connect a specific network interface to a virtual network. At the end of the day this "connection" is done by configuring the agent. If the community ever decides to adopt Quantum, which I would consider a healthy and forward-moving decision, then this is something that would need to be managed by VDSM (my understanding is that the only free lunch is at a youth hostel in the outback in Australia - one still needs to buy one's drink). This is why I am in favor of what Dan and Mark have suggested regarding the OVS integration. At the end of the day, someone needs to do the wiring.
Hello,
Quantum is just like any other network management technology VDSM can use.
What I would like to see is an interface without any 3rd-party dependency tying VDSM to a particular network management technology - a network management provider, so to speak.
This interface should specify all the services VDSM expects from a network management technology: defining bonds, bridges and vlans, interface configuration, enumeration, events.
Then, if we decide to implement a provider using Quantum, it will be possible - exactly as it will be possible to implement a provider based on NetworkManager or on low-level tools.
Of course there are advantages and features VDSM may gain by using a provider of a specific technology, but the interface itself should not have any affiliation with a 3rd-party component, so as to allow us the flexibility of choice in the future.
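To illustrate (every name here is invented; nothing like this exists in vdsm today), the provider interface could be as small as:

class NetworkProvider(object):
    """Services vdsm core expects from whatever backs it; a concrete
    provider (ifcfg, ovs, NetworkManager, Quantum, ...) is selected by
    configuration, and core never imports one directly."""

    def add_bridge(self, name):
        raise NotImplementedError

    def add_bond(self, name, nics, options=None):
        raise NotImplementedError

    def add_vlan(self, device, tag):
        raise NotImplementedError

    def configure_iface(self, device, ipaddr=None, prefixlen=None):
        raise NotImplementedError

    def list_devices(self):  # enumeration
        raise NotImplementedError

    def subscribe(self, callback):  # events, e.g. link state changes
        raise NotImplementedError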
Regards, Alon Bar-Lev.
On 11/15/2012 05:54 PM, Gary Kotton wrote:
On 11/14/2012 05:42 PM, Mark Wu wrote:
On 11/14/2012 07:53 PM, Gary Kotton wrote:
On 11/14/2012 11:53 AM, Livnat Peer wrote:
On 14/11/12 00:28, Adam Litke wrote:
On Sun, Nov 11, 2012 at 09:46:43AM -0500, Alon Bar-Lev wrote:
----- Original Message ----- > From: "Dan Kenigsberg"danken@redhat.com > To: vdsm-devel@fedorahosted.org > Sent: Sunday, November 11, 2012 4:07:30 PM > Subject: [vdsm] Future of Vdsm network configuration > > Hi, > > Nowadays, when vdsm receives the setupNetowrk verb, it mangles > /etc/sysconfig/network-scripts/ifcfg-* files and restarts the > network > service, so they are read by the responsible SysV service. > > This is very much Fedora-oriented, and not up with the new themes > in Linux network configuration. Since we want oVirt and Vdsm to be > distribution agnostic, and support new features, we have to change. > > setupNetwork is responsible for two different things: > (1) configure the host networking interfaces, and > (2) create virtual networks for guests and connect the to the world > over (1). > > Functionality (2) is provided by building Linux software > bridges, and > vlan devices. I'd like to explore moving it to Open vSwitch, which > would > enable a host of functionalities that we currently lack (e.g. > tunneling). One thing that worries me is the need to reimplement > our > config snapshot/recovery on ovs's database. > > As far as I know, ovs is unable to maintain host level > parameters of > interfaces (e.g. eth0's IPv4 address), so we need another > tool for functionality (1): either speak to NetworkManager > directly, > or > to use NetCF, via its libvirt virInterface* wrapper. > > I have minor worries about NetCF's breadth of testing and usage; I > know > it is intended to be cross-platform, but unlike ovs, I am not aware > of a > wide Debian usage thereof. On the other hand, its API is ready for > vdsm's > usage for quite a while. > > NetworkManager has become ubiquitous, and we'd better integrate > with > it > better than our current setting of NM_CONTROLLED=no. But as DPB > tells > us, > https://lists.fedorahosted.org/pipermail/vdsm-devel/2012-November/001677.htm... > > we'd better offload integration with NM to libvirt. > > We would like to take Network configuration in VDSM to the next > level > and make it distribution agnostic in addition for setting the > infrastructure for more advanced features to be used going forward. > The path we think of taking is to integrate with OVS and for > feature > completeness use NetCF, via its libvirt virInterface* wrapper. Any > comments or feedback on this proposal is welcomed. > > Thanks to the oVirt net team members who's input has helped writing > this > email. Hi,
As far as I see it, NetworkManager is a monster: a huge dependency to take on just to create bridges or configure network interfaces... It is true that on a host where NetworkManager lives it would be impolite to define network resources other than via its interface; however, I don't like that we would force NetworkManager on everyone.
libvirt has long since stopped being just a virtualization library and become a system management agent; I am not sure it is the system agent I would have chosen.
I think all the terms and building blocks got lost over time... and the resulting integration became more and more complex.
Stabilizing such a multi-layered component environment is much harder than stabilizing a monolithic one.
I would really like to see vdsm as a monolithic component with full control over its resources; I believe this is the only way vdsm can be stable enough to be production grade.
The hypervisor should be a total slave of the manager (or cluster), so I have no problem bypassing/disabling any distribution-specific tool in favour of atoms (brctl, iproute), in non-persistent mode.
I know this entails some more work, but I don't think it is that complex to implement and maintain.
Just my 2 cents...
I couldn't disagree more. What you are suggesting requires that we reimplement every single networking feature in oVirt by ourselves. If we want to support the (absolutely critical) goal of being distro agnostic, then we need to implement the same functionality across multiple distros too. This is more work than we will ever be able to keep up with. If you think it's hard to stabilize the integration of an external networking library, imagine how hard it will be to stabilize our own rewritten and buggy version. This is not how open source is supposed to work. We should be assembling distinct, modular, pre-existing components when they are available. If NetworkManager has integration problems, let's work upstream to fix them. If its dependencies are too great, let's modularize it so we don't need to ship the parts we don't need.
I agree with Adam on this one; reimplementing the network management layer ourselves using only atoms seems like duplicating work that was already done and is available for our use from both NM and libvirt.
Yes, it is not perfect (far from it, actually), but I think we'd better focus our efforts on adding new functionality to VDSM and improving the robustness of the current code (we have issues regardless of any external component we're using).
For the sake of being distribution agnostic, I support the original plan proposed by danken: using OVS combined with the libvirt virInterface* wrapper.
The addition of OVS is nice and refreshing (it is the new black). The issue with OVS is that a controller is required; there are a number of proprietary ones and there are open-source solutions. Something, or someone, needs to configure and manage the OVS. Just adding the libvirt support is not enough - in a nutshell, that is just a matter of setting the network type for the vnic and passing a few additional parameters. Managing and assigning physical NICs to the OVS is interesting and challenging. Do you guys have any thoughts about how you want to go about this?
Can we just start with running ovs in standalone mode at first?
Yes, most certainly.
It could provide the basic forwarding function based on MAC learning, plus bond/vlan/tunnel functions by specifying the related options when adding a new port. We could connect each physical NIC used for a VM network to an ovs bridge, and then the VMs can get external network access. I agree that without adding a controller we can't get a unified control plane, but I think standalone mode could fit the current oVirt network model well (see the sketch below). Gary, please correct me if I am wrong - any suggestions from you?
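For illustration, here is roughly what that standalone-mode wiring could look like, driven from Python with stock ovs-vsctl commands; the bridge and device names are made up:

import subprocess

def vsctl(*args):
    # Each ovs-vsctl invocation is applied atomically to ovsdb.
    subprocess.check_call(('ovs-vsctl',) + args)

vsctl('--may-exist', 'add-br', 'ovsbr0')         # MAC-learning bridge
vsctl('add-port', 'ovsbr0', 'eth1')              # uplink to the world
vsctl('add-bond', 'ovsbr0', 'bond0', 'eth2', 'eth3',
      '--', 'set', 'port', 'bond0', 'bond_mode=active-backup')
vsctl('add-port', 'ovsbr0', 'vnet0', 'tag=100')  # VM tap on vlan 100
# Tunneling, which we currently lack, is just one more port:
vsctl('add-port', 'ovsbr0', 'gre0',
      '--', 'set', 'interface', 'gre0', 'type=gre',
      'options:remote_ip=192.0.2.1')

No controller is involved; the bridge floods and learns like a plain Linux bridge.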
You are correct. This is certainly one way of achieving a first step of integration with the OVS. My concerns are as follows (maybe some of them do not exist :)):
1. The boot process, with binding of physical NICs to the OVS.
2. The OVS maintains a database. This may need to be cleaned of tap devices when the appliance reboots. Let's take an edge case into account: say the appliance has a number of VMs running, so there are tap devices for these VMs registered with the OVS. If there is an exception or a power failure, the appliance will reset, and those devices will still be registered when it reboots. Who cleans them, and when? (A cleanup sketch follows below.)
3. What about the traditional bridged networks - will these be migrated to OVS?
The idea of moving to OVS is great. I just think that all of the flows should be mapped out and listed on a wiki. This will give a nice picture of how the integration can be achieved.
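A sketch of the cleanup in concern 2, assuming vdsm would reconcile the OVS database against the kernel's device list on boot; stale_ports and clean_bridge are hypothetical names:

import os
import subprocess

def stale_ports(bridge, expected=()):
    # list-ports reports what ovsdb still remembers from before the crash.
    out = subprocess.check_output(['ovs-vsctl', 'list-ports', bridge])
    for port in out.decode().split():
        # A tap device registered in ovsdb but absent from the kernel
        # (no /sys/class/net entry) is a leftover of a dead VM.
        if port not in expected and \
                not os.path.exists('/sys/class/net/%s' % port):
            yield port

def clean_bridge(bridge, expected=()):
    for port in stale_ports(bridge, expected):
        subprocess.check_call(['ovs-vsctl', 'del-port', bridge, port])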
Yes, exactly! I will try to work out a draft and post it here for review.
Thanks Gary
Thanks Mark.
Thanks Gary
Livnat
I'm late to the party as usual...
I'm all for dynamic set-up of hosts; I think it's the only way to go. I don't understand how it can work any other way.
That being said, if everything is set up dynamically, it doesn't matter what backend we use to set it up, as long as we can query the state. We can even mix and match. Or am I missing something?
On Wed, Nov 14, 2012 at 10:54:34AM -0500, Saggi Mizrahi wrote:
I'm late to the party as usual...
I'm all for dynamic set-up of hosts; I think it's the only way to go. I don't understand how it can work any other way.
I did not expect the thread to go this way, but I agree that network setup is an exception: for storage and virtual machines, we do not persist anything on the node.
For networking we need to persist only the management connection. Everything else can be volatile, created by the client after the node boots.
That being said, if everything is set up dynamically, it doesn't matter what backend we use to set it up, as long as we can query the state. We can even mix and match. Or am I missing something?
Choosing a backend is important, as implementation decisions always are. We have the setupNetwork API; we could change its semantics to mean "do not persist". Now is the time to consider implementations, too.
Dan.
On Wed, Nov 14, 2012 at 11:53:06AM +0200, Livnat Peer wrote:
For the sake of being distribution agnostic I support the original plan proposed by danken, using OVS combined with libvirt virInterface* wrapper.
ACK.
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: vdsm-devel@fedorahosted.org Sent: Wednesday, November 14, 2012 11:53:06 AM Subject: Re: [vdsm] Future of Vdsm network configuration
For the sake of being distribution agnostic I support the original plan proposed by danken, using OVS combined with libvirt virInterface* wrapper.
Agree, +1
Livnat
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: vdsm-devel@fedorahosted.org Sent: Sunday, November 11, 2012 4:07:30 PM Subject: [vdsm] Future of Vdsm network configuration
NetworkManager has become ubiquitous, and we'd better integrate with it better than our current setting of NM_CONTROLLED=no. But as DPB tells us, https://lists.fedorahosted.org/pipermail/vdsm-devel/2012-November/001677.htm... we'd better offload integration with NM to libvirt.
For NetworkManager, bridge support is still in its introductory stages: https://bugzilla.gnome.org/show_bug.cgi?id=546197
We would like to take Network configuration in VDSM to the next level and make it distribution agnostic in addition for setting the infrastructure for more advanced features to be used going forward. The path we think of taking is to integrate with OVS and for feature completeness use NetCF, via its libvirt virInterface* wrapper. Any comments or feedback on this proposal is welcomed.
Also, from the netCF site: "How can I help? netcf is in its very early stages, and can be improved in any number of ways. The most pressing needs right now are implementing backends for other distributions (Debian, Ubuntu, etc.) and operating systems (Solaris), and testing and using netcf."
In the short term I think it will only add overhead and instability if you depend on them. Long term, I don't see why not.
On 11/11/2012 10:07 PM, Dan Kenigsberg wrote:
Functionality (2) is provided by building Linux software bridges, and vlan devices. I'd like to explore moving it to Open vSwitch, which would
As far as I know, ovs also supports bonding. Even though it doesn't have as many modes as Linux bonding, it supports load balancing and fail-over at least. For vlans, we could make the NIC port a trunk port and use libvirt's 'portgroup' feature to create an access or trunk port for the vif; then it could replace the Linux vlan configuration (see the sketch below).
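To illustrate the portgroup idea, a sketch using libvirt's documented network XML for Open vSwitch; the network and bridge names are made up:

import libvirt

NET_XML = """
<network>
  <name>vmnet</name>
  <forward mode='bridge'/>
  <bridge name='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-100'>
    <vlan><tag id='100'/></vlan>
  </portgroup>
  <portgroup name='trunk'>
    <vlan trunk='yes'><tag id='100'/><tag id='200'/></vlan>
  </portgroup>
</network>
"""

conn = libvirt.open('qemu:///system')
net = conn.networkDefineXML(NET_XML)  # persistent network definition
net.create()
# A vnic then picks its vlan by referencing a portgroup:
#   <source network='vmnet' portgroup='vlan-100'/>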
enable a host of functionalities that we currently lack (e.g. tunneling). One thing that worries me is the need to reimplement our config snapshot/recovery on ovs's database.
Undoing the ovs commands could restore the ovs database to the state before the new configuration call, which could help in some cases (a rollback sketch follows below). If we can separate the VM network from the management network, it should not be a big problem.
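One possible shape for that undo, without reimplementing snapshots inside ovsdb: record the inverse of each applied command and replay the inverses in reverse order on failure. A hypothetical sketch:

import subprocess

class OvsTransaction(object):
    def __init__(self):
        self._undo = []

    def run(self, do_args, undo_args):
        subprocess.check_call(['ovs-vsctl'] + do_args)
        self._undo.append(undo_args)  # recorded only after success

    def rollback(self):
        # Walk the inverses backwards, back to the pre-call state.
        for args in reversed(self._undo):
            subprocess.check_call(['ovs-vsctl'] + args)

txn = OvsTransaction()
try:
    txn.run(['add-br', 'ovsbr1'], ['del-br', 'ovsbr1'])
    txn.run(['add-port', 'ovsbr1', 'eth1'], ['del-port', 'ovsbr1', 'eth1'])
except subprocess.CalledProcessError:
    txn.rollback()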
As far as I know, ovs is unable to maintain host level parameters of interfaces (e.g. eth0's IPv4 address), so we need another tool for functionality (1): either speak to NetworkManager directly, or to use NetCF, via its libvirt virInterface* wrapper.
IMHO, an IP address is not useful for a pure VM network, and we can leave the network address configuration of the management network to deployment; so ovs can cover all typical network configurations. I don't think we need to wait until NetworkManager is ready to be consumed via libvirt virInterface*. We could have multiple network management drivers in vdsm, and part of the Open vSwitch support code would also be useful for integrating Quantum into oVirt.
On Sun, Nov 11, 2012 at 04:07:30PM +0200, Dan Kenigsberg wrote:
NetworkManager has become ubiquitous, and we'd better integrate with it better than our current setting of NM_CONTROLLED=no. But as DPB tells us, https://lists.fedorahosted.org/pipermail/vdsm-devel/2012-November/001677.htm... we'd better offload integration with NM to libvirt.
NM is not entirely ubiquitous, and I think you'll find that even with NM having proper bridging/bonding/etc. support, there will be many sysadmins and distros who will not be prepared to mandate its use. This is where I see libvirt's value. The virInterface drivers will be able to take care of providing a consistent API for configuration regardless of whether the host is using legacy initscripts network config, NetworkManager, or even connman. In other words, if you don't use libvirt for this, I think you'll find yourself re-inventing libvirt's functionality in the end.
Daniel
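For reference, the consistent API Daniel describes is already reachable from Python through the libvirt bindings; the interface XML follows the netcf/libvirt schema, and the device name and address here are examples:

import libvirt

IFACE_XML = """
<interface type='ethernet' name='eth0'>
  <start mode='onboot'/>
  <protocol family='ipv4'>
    <ip address='192.0.2.10' prefix='24'/>
  </protocol>
</interface>
"""

conn = libvirt.open('qemu:///system')
iface = conn.interfaceDefineXML(IFACE_XML, 0)  # functionality (1)
iface.create(0)                 # bring the interface up
print(conn.listInterfaces())    # enumerate active interfaces
print(iface.XMLDesc(0))         # read the config back, backend-agnostic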
On Sun, 11 Nov 2012 22:07:30 +0800, Dan Kenigsberg danken@redhat.com wrote:
Functionality (2) is provided by building Linux software bridges, and vlan devices. I'd like to explore moving it to Open vSwitch, which would enable a host of functionalities that we currently lack (e.g. tunneling). One thing that worries me is the need to reimplement our config snapshot/recovery on ovs's database.
I have tried replacing the Linux bridge with ovs in ovirt-node, using an earlier version (1.3 or so) with bridge-compatibility support. Then I had nothing extra to do other than loading ovs's brcompat module instead of the Linux bridge module. Newer versions of ovs also have network-scripts support for RHEL-based distros. So the problem is more about the tool and the way we achieve the configuration work.
We would like to take Network configuration in VDSM to the next level and make it distribution agnostic in addition for setting the infrastructure for more advanced features to be used going forward. The path we think of taking is to integrate with OVS and for feature completeness use NetCF, via its libvirt virInterface* wrapper. Any comments or feedback on this proposal is welcomed.
If we use NetCF, are we limited to using only NM or only ovs? If not, then as long as the virInterface driver can provide a set of consistent and workable APIs, vdsm shouldn't care much about how functionalities (1) and (2) are actually achieved. For different purposes it is nice to maintain variety in how the functionalities are achieved, maybe with plugins for those several tools and approaches.