[vdsm] Future of Vdsm network configuration

Gary Kotton gkotton at redhat.com
Thu Nov 15 09:54:42 UTC 2012


On 11/14/2012 05:42 PM, Mark Wu wrote:
> On 11/14/2012 07:53 PM, Gary Kotton wrote:
>> On 11/14/2012 11:53 AM, Livnat Peer wrote:
>>> On 14/11/12 00:28, Adam Litke wrote:
>>>> On Sun, Nov 11, 2012 at 09:46:43AM -0500, Alon Bar-Lev wrote:
>>>>>
>>>>> ----- Original Message -----
>>>>>> From: "Dan Kenigsberg"<danken at redhat.com>
>>>>>> To: vdsm-devel at fedorahosted.org
>>>>>> Sent: Sunday, November 11, 2012 4:07:30 PM
>>>>>> Subject: [vdsm] Future of Vdsm network configuration
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Nowadays, when vdsm receives the setupNetwork verb, it mangles
>>>>>> /etc/sysconfig/network-scripts/ifcfg-* files and restarts the
>>>>>> network service, so that they are read by the responsible SysV
>>>>>> service.
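>>>>>>
>>>>>> (For illustration only, a simplified sketch of the kind of mangling
>>>>>> involved - not the actual vdsm code; the file name and keys are made
>>>>>> up for the example:)
>>>>>>
>>>>>>     import subprocess
>>>>>>
>>>>>>     # Persist a bridge definition where the SysV network service
>>>>>>     # expects it, then restart the service to apply it.
>>>>>>     ifcfg = "/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt"
>>>>>>     with open(ifcfg, "w") as f:
>>>>>>         f.write("DEVICE=ovirtmgmt\n"
>>>>>>                 "TYPE=Bridge\n"
>>>>>>                 "ONBOOT=yes\n"
>>>>>>                 "BOOTPROTO=dhcp\n"
>>>>>>                 "NM_CONTROLLED=no\n")
>>>>>>     subprocess.check_call(["service", "network", "restart"])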
>>>>>>
>>>>>> This is very much Fedora-oriented, and out of step with the newer
>>>>>> themes in Linux network configuration. Since we want oVirt and Vdsm
>>>>>> to be distribution agnostic, and to support new features, we have to
>>>>>> change.
>>>>>>
>>>>>> setupNetwork is responsible for two different things:
>>>>>> (1) configure the host networking interfaces, and
>>>>>> (2) create virtual networks for guests and connect them to the world
>>>>>> over (1).
>>>>>>
>>>>>> Functionality (2) is provided by building Linux software bridges and
>>>>>> vlan devices. I'd like to explore moving it to Open vSwitch, which
>>>>>> would enable a host of functionalities that we currently lack (e.g.
>>>>>> tunneling). One thing that worries me is the need to reimplement our
>>>>>> config snapshot/recovery on ovs's database.
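>>>>>>
>>>>>> (A very rough sketch of what (2) might look like over ovs - bridge
>>>>>> and nic names are made up, and this is not a worked-out design:)
>>>>>>
>>>>>>     import subprocess
>>>>>>
>>>>>>     def ovs(*args):
>>>>>>         subprocess.check_call(("ovs-vsctl",) + args)
>>>>>>
>>>>>>     # One ovs bridge uplinked by the physical nic; a "fake bridge"
>>>>>>     # on vlan 100 stands in for today's eth0.100 + bridge pair.
>>>>>>     ovs("--may-exist", "add-br", "ovsbr0")
>>>>>>     ovs("--may-exist", "add-port", "ovsbr0", "eth0")
>>>>>>     ovs("--may-exist", "add-br", "vlan100", "ovsbr0", "100")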
>>>>>>
>>>>>> As far as I know, ovs is unable to maintain host-level parameters of
>>>>>> interfaces (e.g. eth0's IPv4 address), so we need another tool for
>>>>>> functionality (1): either speak to NetworkManager directly, or use
>>>>>> NetCF, via its libvirt virInterface* wrapper.
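>>>>>>
>>>>>> (Again just a sketch of how (1) could look via libvirt's virInterface
>>>>>> API - the XML below and the device name are illustrative:)
>>>>>>
>>>>>>     import libvirt
>>>>>>
>>>>>>     IFACE_XML = """
>>>>>>     <interface type='ethernet' name='eth0'>
>>>>>>       <start mode='onboot'/>
>>>>>>       <protocol family='ipv4'>
>>>>>>         <dhcp/>
>>>>>>       </protocol>
>>>>>>     </interface>
>>>>>>     """
>>>>>>
>>>>>>     conn = libvirt.open("qemu:///system")
>>>>>>     # Persist the host-level config (backed by netcf) and apply it.
>>>>>>     iface = conn.interfaceDefineXML(IFACE_XML, 0)
>>>>>>     iface.create(0)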
>>>>>>
>>>>>> I have minor worries about NetCF's breadth of testing and usage; I
>>>>>> know it is intended to be cross-platform, but unlike ovs, I am not
>>>>>> aware of wide Debian usage thereof. On the other hand, its API has
>>>>>> been ready for vdsm's usage for quite a while.
>>>>>>
>>>>>> NetworkManager has become ubiquitous, and we had better integrate
>>>>>> with it more gracefully than our current setting of NM_CONTROLLED=no.
>>>>>> But as DPB tells us in
>>>>>> https://lists.fedorahosted.org/pipermail/vdsm-devel/2012-November/001677.html
>>>>>> we had better offload integration with NM to libvirt.
>>>>>>
>>>>>> We would like to take network configuration in VDSM to the next
>>>>>> level and make it distribution agnostic, in addition to laying the
>>>>>> infrastructure for more advanced features going forward. The path we
>>>>>> are thinking of taking is to integrate with OVS and, for feature
>>>>>> completeness, to use NetCF via its libvirt virInterface* wrapper. Any
>>>>>> comments or feedback on this proposal are welcome.
>>>>>>
>>>>>> Thanks to the oVirt net team members whose input has helped in
>>>>>> writing this email.
>>>>> Hi,
>>>>>
>>>>> As far as I see this, NetworkManager is a monster of a dependency to
>>>>> have just to create bridges or configure network interfaces... It is
>>>>> true that on a host where NetworkManager lives it would be impolite to
>>>>> define network resources other than via its interface; however, I
>>>>> don't like that we force NetworkManager.
>>>>>
>>>>> libvirt has long since stopped being just a virtualization library
>>>>> and has become a system management agent; I am not sure this is the
>>>>> best system agent I would have chosen.
>>>>>
>>>>> I think that all the terms and building blocks got lost over time...
>>>>> and as a result the integration became more and more complex.
>>>>>
>>>>> Stabilizing such a multi-layered component environment is much harder
>>>>> than stabilizing a monolithic one.
>>>>>
>>>>> I would really like to see vdsm as a monolithic component with full
>>>>> control over its resources; I believe this is the only way vdsm can be
>>>>> stable enough to be production grade.
>>>>>
>>>>> The hypervisor should be a total slave of the manager (or cluster),
>>>>> so I have no problem with bypassing/disabling any
>>>>> distribution-specific tool in favour of atoms (brctl, iproute) in
>>>>> non-persistent mode.
>>>>>
>>>>> I know this entails some more work, but I don't think it is that
>>>>> complex to implement and maintain.
>>>>>
>>>>> Just my 2 cents...
>>>> I couldn't disagree more.  What you are suggesting requires that we
>>>> reimplement every single networking feature in oVirt by ourselves.  If
>>>> we want to support the (absolutely critical) goal of being distro
>>>> agnostic, then we need to implement the same functionality across
>>>> multiple distros too.  This is more work than we will ever be able to
>>>> keep up with.  If you think it's hard to stabilize the integration of
>>>> an external networking library, imagine how hard it will be to
>>>> stabilize our own rewritten and buggy version.  This is not how open
>>>> source is supposed to work.  We should be assembling distinct, modular,
>>>> pre-existing components together when they are available.  If
>>>> NetworkManager has integration problems, let's work upstream to fix
>>>> them.  If its dependencies are too great, let's modularize it so we
>>>> don't need to ship the parts that we don't need.
>>>>
>>> I agree with Adam on this one; reimplementing the network management
>>> layer by ourselves using only atoms seems like duplication of work that
>>> has already been done and is available for our use both in NM and in
>>> libvirt.
>>>
>>> Yes, it is not perfect (far from it, actually), but I think we had
>>> better focus our efforts on adding new functionality to VDSM and on
>>> improving the robustness of the current code (we have issues regardless
>>> of any external component we're using).
>>>
>>> For the sake of being distribution agnostic I support the original plan
>>> proposed by danken: using OVS combined with the libvirt virInterface*
>>> wrapper.
>>
>> The addition of OVS is nice and refreshing (it is the new black). The
>> issue with OVS is that a controller is required; there are a number of
>> proprietary ones and there are open source solutions. Something/someone
>> needs to configure and manage the OVS. Just adding the libvirt support
>> is not enough - in a nutshell that is just a matter of setting the
>> network type for the vnic and passing a few additional parameters.
>> Managing and assigning physical NICs to OVS is interesting and
>> challenging. Do you guys have any thoughts about how you want to go
>> about this?
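>>
>> (To make that concrete: the per-vnic part is roughly a libvirt device
>> definition along these lines - the bridge name is only illustrative:)
>>
>>     # Sketch of the vnic XML handed to libvirt for an OVS-backed port.
>>     VNIC_XML = """
>>     <interface type='bridge'>
>>       <source bridge='ovsbr0'/>
>>       <virtualport type='openvswitch'/>
>>       <model type='virtio'/>
>>     </interface>
>>     """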
>>
> Can we just start with running ovs in standalone mode at first? 

Yes, most certainly.
> It could provide the basic forwarding function based on MAC learning,
> and bond/vlan/tunnel functionality by specifying the related options
> when adding a new port. We could connect each physical nic used for a
> VM network to an ovs bridge, and then the VMs can get external network
> access.  I agree that without adding a controller we can't get a
> unified control plane. But I think standalone mode could fit the
> current oVirt network model well.
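>
> (Something along these lines, just as a sketch - all the names are made
> up:)
>
>     import subprocess
>
>     def vsctl(*args):
>         subprocess.check_call(["ovs-vsctl"] + list(args))
>
>     vsctl("--may-exist", "add-br", "ovsbr0")            # standalone bridge
>     vsctl("--may-exist", "add-port", "ovsbr0", "eth0")  # uplink nic
>     # a vlan for a VM port is just an option on that port:
>     vsctl("--may-exist", "add-port", "ovsbr0", "vnet0", "tag=100")
>     # and a tunnel is a port whose interface type/options are set:
>     vsctl("add-port", "ovsbr0", "gre0", "--", "set", "interface",
>           "gre0", "type=gre", "options:remote_ip=192.0.2.10")
>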
> Gary, please correct me if I am wrong, or share any suggestions you have.

You are correct. This is certainly one way of achieving a first step
towards integrating with OVS. My concerns are as follows (maybe some of
them do not exist :)):
1. The boot process and binding physical NICs to the OVS.
2. The OVS maintains a database. This may need to be cleaned of tap
devices when the appliance reboots. Let's take an edge case into
account: say the appliance has a number of VMs running, so there will be
tap devices for these VMs registered with the OVS. If there is an
exception or a power failure, the appliance will reset, and these
devices will still be registered when it comes back up. Who cleans them,
and when? (A rough cleanup sketch follows this list.)
3. What about the traditional bridged networks - will these be migrated
to OVS?
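
Regarding (2), here is a rough sketch of the kind of cleanup I have in
mind; the bridge name and the "vnet" tap naming are just assumptions:

    import os
    import subprocess

    BRIDGE = "ovsbr0"  # illustrative
    ports = subprocess.check_output(
        ["ovs-vsctl", "list-ports", BRIDGE]).decode().split()
    # After an unclean reboot, drop ports whose tap device no longer
    # exists on the host, i.e. stale vnetN entries left in the ovs db.
    for port in ports:
        if port.startswith("vnet") and not os.path.exists(
                "/sys/class/net/" + port):
            subprocess.check_call(
                ["ovs-vsctl", "--if-exists", "del-port", BRIDGE, port])
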
The idea of moving to OVS is great. I just think that all of the flows
should be mapped out and listed on a wiki. This will give a nice picture
of how the integration can be achieved.
Thanks
Gary
>
> Thanks
> Mark.
>> Thanks
>> Gary
>>
>>
>>>
>>>
>>> Livnat
>>>
>>> _______________________________________________
>>> vdsm-devel mailing list
>>> vdsm-devel at lists.fedorahosted.org
>>> https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
>>
>> _______________________________________________
>> vdsm-devel mailing list
>> vdsm-devel at lists.fedorahosted.org
>> https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
>


