Hi All, We have been discussing $subject for a while and I'd like to summarize what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
1. Getting the VDSM networking stack to be distribution agnostic.
- We are all in agreement that the VDSM API should be generic enough to incorporate multiple implementations (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf, etc.).
- We would like to maintain at least one implementation as the working/up-to-date implementation for our users; this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that, with the agreement of this community, we can choose to change our focus from time to time from one implementation to another as we see fit (today it can be OVS+netcf, and in a few months we'll use the Quantum-based implementation if we agree it is better).
2. The second discussion is about persisting the network configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if we go with the dynamic approach, the host should persist the management network configuration.
Obviously the second discussion influences the API modeling. Since I think it would be challenging to add support for a generic API and change the current implementation to match the dynamic configuration approach simultaneously, I suggest we focus our efforts on one change at a time.
I suggest we have a discussion on the pros and cons of dynamic configuration, and after we reach a consensus on that we can start modeling the generic API.
thoughts? comments?
Livnat
Livnat,
Thanks for your summary. I have some comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough to
incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network configuration
on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamically retrieving the configuration from a centralized location: when will the retrieval start? In the very early stage of host boot, before the network is functional? Or after the host has started up and is in its normal running state? Before retrieving the configuration, how does the host connect to the engine? I think we need a basic, well-known network between the hosts and the engine first. Then, after the retrieval, hosts should reconfigure the network for later management. However, the timing of the retrieval and reconfiguration is challenging.
Obviously the second discussion influences the API modeling. Since I think it would be challenging to add support for generic API and change the current implementation to match the dynamic configuration approach simultaneously I suggest we'll focus our efforts on one change at a time.
I suggest to have a discussion on the pro's and con's of dynamic configuration and after we get to a consensus around that we can start modeling the generic API.
thoughts? comments?
Livnat
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough to
incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network configuration
on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in detail on the list so far, and I think this is a good opportunity to start this discussion...
From what was discussed previously, I can say that the need for a well-known network was raised by danken; it was referred to as the management network. This network would be used for pulling the full host network configuration from the centralized location, which at this point is the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host has a communication channel to the engine and the engine detects a mismatch in the host configuration, the engine initiates an 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain, and that would reduce code complexity and bugs - that's quoting Alon Bar-Lev (Alon, I hope I did not twist your words/idea).
On the other hand, the above approach makes local tweaks on the host (done manually by the administrator) much harder.
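Roughly sketched, in illustrative Python, the 'keep it synchronized at all times' idea amounts to an engine-side loop like the one below. All of the callables here are placeholders, not existing engine or VDSM verbs:

import time

def keep_host_synchronized(get_desired_config, get_reported_config,
                           apply_network_configuration, poll_interval=60):
    # Engine-side sketch: periodically compare the configuration the engine
    # wants with what the host reports, and push a full
    # 'apply network configuration' whenever they differ.
    while True:
        desired = get_desired_config()
        reported = get_reported_config()          # e.g. what the host reports in its capabilities
        if desired != reported:
            apply_network_configuration(desired)  # host re-applies its config to match the engine
        time.sleep(poll_interval)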
Any other approaches ?
I'd like to add a more general question to the discussion: what are the advantages of taking the dynamic approach? So far I have collected two reasons:
- It is a 'cleaner' design: it removes complexity from the VDSM code, is easier to maintain going forward, and is less bug prone (I agree with that one, as long as we keep the configuration-retrieval mechanism/algorithm simple).
- It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated.
Any other advantages?
We should also discuss the benefits of having the persisted configuration.
Livnat
On Mon, Nov 26, 2012 at 02:57:19PM +0200, Livnat Peer wrote:
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough to
incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network configuration
on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
I worry a lot about the above if we take the dynamic approach. It seems we'd need to introduce before/after 'apply network configuration' hooks where the admin could add custom config commands that aren't yet modeled by the engine.
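To make the idea concrete, such hook points could follow vdsm's existing convention of running every executable found under a per-hook directory. The hook names and the JSON hand-off below are only an illustration, not an existing interface:

import json
import os
import subprocess

HOOK_ROOT = "/usr/libexec/vdsm/hooks"     # existing vdsm hook root; the hook names below are hypothetical

def _run_hooks(hook_name, config):
    # Run every executable in the hook directory, feeding it the configuration as JSON on stdin.
    hook_dir = os.path.join(HOOK_ROOT, hook_name)
    if not os.path.isdir(hook_dir):
        return
    payload = json.dumps(config).encode()
    for script in sorted(os.listdir(hook_dir)):
        path = os.path.join(hook_dir, script)
        if os.access(path, os.X_OK):
            proc = subprocess.Popen([path], stdin=subprocess.PIPE)
            proc.communicate(payload)

def apply_with_hooks(config, apply_func):
    _run_hooks("before_network_setup", config)   # admin's custom commands not yet modeled by the engine
    apply_func(config)
    _run_hooks("after_network_setup", config)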
Any other approaches ?
Static configuration has the advantage of allowing a host to bring itself back online independent of the engine. This is also useful for anyone who may want to deploy a vdsm node in standalone mode.
I think it would be possible to easily support a quasi-static configuration mode simply by extending the design of the dynamic approach slightly. In dynamic mode, the network configuration is passed down as a well-defined data structure. When a particular configuration has been committed, vdsm could write a copy of that configuration data structure to /var/run/vdsm/network-config.json. During a subsequent boot, if the engine cannot be contacted after activating the management network, the cached configuration can be applied using the same code as for dynamic mode. We'd have to flesh out the circumstances under which this would happen.
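For example, a minimal sketch of the fall-back (the apply function, the engine-reachability check, and the atomic-write details are assumptions, not a worked-out design):

import json
import os

CACHE_PATH = "/var/run/vdsm/network-config.json"

def commit_config(config, apply_func):
    # Apply a configuration pushed by the engine and cache it on success.
    apply_func(config)
    tmp = CACHE_PATH + ".tmp"
    with open(tmp, "w") as f:
        json.dump(config, f)
    os.rename(tmp, CACHE_PATH)            # atomic replace: a crash never leaves a torn cache file

def boot_fallback(engine_reachable, apply_func):
    # On boot, fall back to the cached configuration only if the engine cannot be contacted.
    if engine_reachable() or not os.path.exists(CACHE_PATH):
        return False                      # normal dynamic mode, or nothing cached yet
    with open(CACHE_PATH) as f:
        apply_func(json.load(f))          # same code path as dynamic mode
    return True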
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
As I mentioned above, the main benefit I see of having some sort of persistent configuration is:
- To allow the host to operate independently of the engine in either a failure scenario or in a standalone configuration.
* Adam Litke agl@us.ibm.com [2012-11-26 09:03]:
On Mon, Nov 26, 2012 at 02:57:19PM +0200, Livnat Peer wrote:
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough to
incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network configuration
on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
I worry a lot about the above if we take the dynamic approach. It seems we'd need to introduce before/after 'apply network configuration' hooks where the admin could add custom config commands that aren't yet modeled by engine.
Any other approaches ?
Static configuration has the advantage of allowing a host to bring itself back online independent of the engine. This is also useful for anyone who may want to deploy a vdsm node in standalone mode.
I think it would be possible to easily support a quasi-static configuration mode simply by extending the design of the dynamic approach slightly. In dynamic mode, the network configuration is passed down as a well-defined data structure. When a particular configuration has been committed, vdsm could write a copy of that configuration data structure to /var/run/vdsm/network-config.json. During a subsequent boot, if the engine cannot be contacted after activating the management network, the cached configuration can be applied using the same code as for dynamic mode. We'd have to flesh out the circumstances under which this would happen.
I like this a lot; it's a fall-back mode. dhclient does this when it cannot contact a DHCP server and it has an existing lease file.
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
As I mentioned above, the main benefit I see of having some sort of persistent configuration is:
- To allow the host to operate independently of the engine in either a failure scenario or in a standalone configuration.
-- Adam Litke agl@us.ibm.com IBM Linux Technology Center
Adam Litke:
On Mon, Nov 26, 2012 at 02:57:19PM +0200, Livnat Peer wrote:
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough to
incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network configuration
on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
I worry a lot about the above if we take the dynamic approach. It seems we'd need to introduce before/after 'apply network configuration' hooks where the admin could add custom config commands that aren't yet modeled by engine.
Any other approaches ?
Static configuration has the advantage of allowing a host to bring itself back online independent of the engine. This is also useful for anyone who may want to deploy a vdsm node in standalone mode.
I think it would be possible to easily support a quasi-static configuration mode simply by extending the design of the dynamic approach slightly. In dynamic mode, the network configuration is passed down as a well-defined data structure. When a particular configuration has been committed, vdsm could write a copy of that configuration data structure to /var/run/vdsm/network-config.json. During a subsequent boot, if the engine cannot be contacted after activating the management network, the cached configuration can be applied using the same code as for dynamic mode. We'd have to flesh out the circumstances under which this would happen.
Multiple physical network devices are quite popular in x86 servers now. Can we add some smart algorithm to leverage multiple physical network devices? Say, one specific device for the management network and the others for the vdsm host networks. By isolating the management and host networks, the vdsm host can maintain a permanent management network, which is much cleaner and less bug prone. If the host doesn't have multiple network devices, we should fall back to the traditional way.
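A trivial sketch of that device split (the selection policy is made up; it only illustrates dedicating one NIC to management and falling back when a host has a single device):

def split_nics(nics, preferred_mgmt=None):
    # Reserve one NIC for the management network when more than one is available,
    # otherwise fall back to sharing the single device with the host networks.
    if not nics:
        raise ValueError("no network devices found")
    if preferred_mgmt in nics:
        return preferred_mgmt, [n for n in nics if n != preferred_mgmt]
    if len(nics) > 1:
        return nics[0], nics[1:]
    return nics[0], nics[:]               # single device: management and host networks share it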
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
As I mentioned above, the main benefit I see of having some sort of persistent configuration is:
- To allow the host to operate independently of the engine in either a failure scenario or in a standalone configuration.
On 26/11/12 16:59, Adam Litke wrote:
On Mon, Nov 26, 2012 at 02:57:19PM +0200, Livnat Peer wrote:
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough to
incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network configuration
on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
I worry a lot about the above if we take the dynamic approach. It seems we'd need to introduce before/after 'apply network configuration' hooks where the admin could add custom config commands that aren't yet modeled by engine.
Yes, and I'm not sure the administrators would like the fact that we are 'forcing' them to write everything in a script and get familiar with the VDSM hooking mechanism (which in some cases requires the use of custom properties on the engine level) instead of running a simple command line.
Any other approaches ?
Static configuration has the advantage of allowing a host to bring itself back online independent of the engine. This is also useful for anyone who may want to deploy a vdsm node in standalone mode.
I think it would be possible to easily support a quasi-static configuration mode simply by extending the design of the dynamic approach slightly. In dynamic mode, the network configuration is passed down as a well-defined data structure. When a particular configuration has been committed, vdsm could write a copy of that configuration data structure to /var/run/vdsm/network-config.json. During a subsequent boot, if the engine cannot be contacted after activating the management network, the cached configuration can be applied using the same code as for dynamic mode. We'd have to flesh out the circumstances under which this would happen.
I like this approach a lot, but we need to consider that network configuration is an accumulated state. For example:
1. The engine sends a setup-network command with the full host network configuration.
2. The user configures a new network on the host; the engine sends a new setup-network request to VDSM which includes only the delta requested by the user (adding the required network).
3. VDSM adds the new network.
This can go on and on. For dealing with this issue:
We can either hold a network-config.json per setup-network command, and then to recover the network configuration state we need to execute the chain of setup-network commands.
Or we can move the logic of calculating the delta from the engine to VDSM, and on each setup-network have the engine pass the full configuration. The problem with that approach is that the delta-analysis logic has to be done on the engine anyway, to give quick feedback to the user on the validity of his action. Maintaining this logic/code twice is not something we want (it's bad enough to do it once...).
A third option is to extend the current setup-network API to include the full configuration in addition to the delta that is sent today. The full configuration would be used for creating network-config.json and for that alone; VDSM would change the network configuration according to the delta, as it does today. The problem with that approach is that I'm sure someone on the list would say it is a contamination of the API, and that we should 'never' pass 'duplicate' information. Personally, I find this option the easiest way to deal with the above issue.
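In illustrative code, the third option amounts to something like the following. The extra argument and the cache file are assumptions layered on top of today's verb, not a proposal for the final API shape:

import json

CACHE_PATH = "/var/run/vdsm/network-config.json"   # the cache file from Adam's proposal above

def setup_networks(delta, full_config, apply_delta):
    # Behave exactly as today for the actual changes...
    apply_delta(delta)
    # ...and use the duplicated full configuration only to refresh the cached copy,
    # so an engine-less boot can replay a single, complete picture.
    with open(CACHE_PATH, "w") as f:
        json.dump(full_config, f)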
Livnat
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
As I mentioned above, the main benefit I see of having some sort of persistent configuration is:
- To allow the host to operate independently of the engine in either a failure scenario or in a standalone configuration.
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Adam Litke" agl@us.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 10:42:00 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 16:59, Adam Litke wrote:
On Mon, Nov 26, 2012 at 02:57:19PM +0200, Livnat Peer wrote:
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough
to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
I worry a lot about the above if we take the dynamic approach. It seems we'd need to introduce before/after 'apply network configuration' hooks where the admin could add custom config commands that aren't yet modeled by engine.
yes, and I'm not sure the administrators would like the fact that we are 'forcing' them to write everything in a script and getting familiar with VDSM hooking mechanism (which in some cases require the use of custom properties on the engine level) instead of running a simple command line.
In which case will we force them? Please be more specific. If we can pass most of the iproute2, brctl, and bond parameters as key/value pairs via the API, what, in your view, common or even seldom used, would still be left out? This hook mechanism is only a fallback, provided to calm people down.
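For concreteness, the kind of key/value description this points at could look like the following. The attribute names are illustrative, not an agreed schema:

# Networks described purely as key/value pairs pushed by the engine,
# covering the usual iproute2/brctl/bonding knobs plus an escape hatch.
example_networks = {
    "ovirtmgmt": {
        "nic": "em1",
        "bridged": True,
        "bootproto": "dhcp",
        "mtu": 1500,
        "stp": False,                                  # brctl-style bridge option
        "custom": {"ethtool_opts": "-K em1 gro off"},  # rarer settings, still key/value
    },
    "storage": {
        "bonding": "bond0",
        "bond_options": "mode=4 miimon=100",
        "vlan": 100,
        "bridged": False,
        "ipaddr": "192.0.2.10",
        "netmask": "255.255.255.0",
    },
}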
Any other approaches ?
Static configuration has the advantage of allowing a host to bring itself back online independent of the engine. This is also useful for anyone who may want to deploy a vdsm node in standalone mode.
I think it would be possible to easily support a quasi-static configuration mode simply by extending the design of the dynamic approach slightly. In dynamic mode, the network configuration is passed down as a well-defined data structure. When a particular configuration has been committed, vdsm could write a copy of that configuration data structure to /var/run/vdsm/network-config.json. During a subsequent boot, if the engine cannot be contacted after activating the management network, the cached configuration can be applied using the same code as for dynamic mode. We'd have to flesh out the circumstances under which this would happen.
I like this approach a lot but we need to consider that network configuration is an accumulated state, for example -
1. The engine sends a setup-network command with the full host network configuration.
2. The user configures new network on the host, the engine sends a new setup-network request to VDSM which includes only the delta requested by the user (adding the required network).
3. VDSM adds the new network.
THIS IS COMPLEX!!!!!!! Almost AI, as you need to complete the network settings from what you already know.
and this can go on and on, for dealing with this issue:
We can either hold network-config.json per setup-network command and then for recovering the network configuration state we need to execute chain of set-up networks commands.
Or we can move the logic of calculating the delta from engine to VDSM and on each setup network have the engine pass the full configuration. The problem with that approach is that the analysis logic of the delta has to be done on the engine anyway to give a quick feedback to the user on the validity of his action. Maintaining this logic/code twice is not something we want (it's bad enough to do it once....)
I don't understand how the two algorithms are the same... The UI is much more/less verbose in different aspects, while taking the full configuration and converting it to the actual settings is a completely different sequence. What is the feedback to the user? As far as I understand, the user is only interested in the end result... building his own network and expecting it to be applied.
A third option is to extend the current API of setup network to include the full configuration in addition to the delta that is sent today. The full configuration would be used for creating network-config.json and for that alone, VDSM would change network configuration according to the delta sent as it does today.
Always pass the full configuration - why deal with two cases?
The problem with that approach is that I'm sure someone on the list would say it is a contamination to the API, and we should 'never' pass 'duplicate' information. Personally I find this option the easiest one to deal with the above issue.
Livnat, I don't see here any argument of persistence vs. non-persistence, as the above is common to any approach taken.
Only this "manual configuration" argument keeps popping up, which, as I wrote, is irrelevant at large scale - and we do want to go to large scale.
Alon
Livnat
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
As I mentioned above, the main benefit I see of having some sort of persistent configuration is:
- To allow the host to operate independently of the engine in
either a failure scenario or in a standalone configuration.
On 27/11/12 10:53, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Adam Litke" agl@us.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 10:42:00 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 16:59, Adam Litke wrote:
On Mon, Nov 26, 2012 at 02:57:19PM +0200, Livnat Peer wrote:
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough
to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
I worry a lot about the above if we take the dynamic approach. It seems we'd need to introduce before/after 'apply network configuration' hooks where the admin could add custom config commands that aren't yet modeled by engine.
yes, and I'm not sure the administrators would like the fact that we are 'forcing' them to write everything in a script and getting familiar with VDSM hooking mechanism (which in some cases require the use of custom properties on the engine level) instead of running a simple command line.
In which case will we force? Please be more specific. If we can pass most of the iproute2, brctl, bond parameters via key/value pairs via the API, what in your view that is common or even seldom should be used? This hook mechanism is only as fallback, provided to calm people down.
I understand; I'm saying it can irritate the administrators that need to use it, and it does not help them that we are calmed down ;)
Just to make it clear, I'm not against the stateless approach; I'm trying to understand it better and make sure we are all aware of the drawbacks this approach has. Complicating local tweaks for the admin is one of them.
I'll reply to your original mail with the questions I have about your proposal.
Any other approaches ?
Static configuration has the advantage of allowing a host to bring itself back online independent of the engine. This is also useful for anyone who may want to deploy a vdsm node in standalone mode.
I think it would be possible to easily support a quasi-static configuration mode simply by extending the design of the dynamic approach slightly. In dynamic mode, the network configuration is passed down as a well-defined data structure. When a particular configuration has been committed, vdsm could write a copy of that configuration data structure to /var/run/vdsm/network-config.json. During a subsequent boot, if the engine cannot be contacted after activating the management network, the cached configuration can be applied using the same code as for dynamic mode. We'd have to flesh out the circumstances under which this would happen.
I like this approach a lot but we need to consider that network configuration is an accumulated state, for example -
1. The engine sends a setup-network command with the full host network configuration.
2. The user configures new network on the host, the engine sends a new setup-network request to VDSM which includes only the delta requested by the user (adding the required network).
3. VDSM adds the new network.
THIS IS COMPLEX!!!!!!! Almost AI. As you need to complete the network setting with what you know.
I think we should clear this up first - we have a running hypervisor with running VMs on it, and we would like to configure an additional network on the host. You don't want to apply the network configuration from scratch and mess with the running VMs' networks, the storage network, or anything else that is running and was not changed by the administrator ==> you need to calculate the delta of the changes, to perform as unintrusive an operation as possible.
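The delta calculation itself is small; a sketch, assuming both sides are plain dicts keyed by network name (the data shape is an assumption, not the current VDSM representation):

def network_delta(current, desired):
    # Split the difference between the running configuration and the desired one
    # into the least intrusive change set: networks to add, remove, or edit.
    to_add = {name: cfg for name, cfg in desired.items() if name not in current}
    to_remove = {name: current[name] for name in current if name not in desired}
    to_edit = {name: cfg for name, cfg in desired.items()
               if name in current and current[name] != cfg}
    return to_add, to_remove, to_edit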
and this can go on and on, for dealing with this issue:
We can either hold network-config.json per setup-network command and then for recovering the network configuration state we need to execute chain of set-up networks commands.
Or we can move the logic of calculating the delta from engine to VDSM and on each setup network have the engine pass the full configuration. The problem with that approach is that the analysis logic of the delta has to be done on the engine anyway to give a quick feedback to the user on the validity of his action. Maintaining this logic/code twice is not something we want (it's bad enough to do it once....)
I don't understand how the two algorithm are the same... UI is much more/less verbose at different aspects, while taking the full configuration and convert to actual setting is a completely different sequence. What the feedback of the user? as far as I understand the user is only interested in the end-result... building his own network and expect it to be applied.
A third option is to extend the current API of setup network to include the full configuration in addition to the delta that is sent today. The full configuration would be used for creating network-config.json and for that alone, VDSM would change network configuration according to the delta sent as it does today.
Always pass full configuration, why deal with two cases?
The problem with that approach is that I'm sure someone on the list would say it is a contamination to the API, and we should 'never' pass 'duplicate' information. Personally I find this option the easiest one to deal with the above issue.
Livnat, I don't see any argument of persistence vs non persistence as the above is common to any approach taken.
Only this "manual configuration" argument keeps poping, which as I wrote is irrelevant in large scale and we do want to go into large scale.
Alon
Livnat
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
As I mentioned above, the main benefit I see of having some sort of persistent configuration is:
- To allow the host to operate independently of the engine in
either a failure scenario or in a standalone configuration.
On Tue, Nov 27, 2012 at 11:56:54AM +0200, Livnat Peer wrote:
On 27/11/12 10:53, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Adam Litke" agl@us.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 10:42:00 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 16:59, Adam Litke wrote:
On Mon, Nov 26, 2012 at 02:57:19PM +0200, Livnat Peer wrote:
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
1. Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
2. The second discussion is about persisting the network configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
I worry a lot about the above if we take the dynamic approach. It seems we'd need to introduce before/after 'apply network configuration' hooks where the admin could add custom config commands that aren't yet modeled by engine.
yes, and I'm not sure the administrators would like the fact that we are 'forcing' them to write everything in a script and getting familiar with VDSM hooking mechanism (which in some cases require the use of custom properties on the engine level) instead of running a simple command line.
In which case will we force? Please be more specific. If we can pass most of the iproute2, brctl, bond parameters via key/value pairs via the API, what in your view that is common or even seldom should be used? This hook mechanism is only as fallback, provided to calm people down.
I understand, I'm saying it can irritate the administrators that needs to use it, it does not help that we are calmed down ;)
Just to make it clear I'm not against the stateless approach, I'm trying to understand it better and make sure we are all aware of the drawbacks this approach has. Complicating local tweaks to the admin is one of them.
I'll reply on your original mail with the questions I have on your proposal.
Any other approaches ?
Static configuration has the advantage of allowing a host to bring itself back online independent of the engine. This is also useful for anyone who may want to deploy a vdsm node in standalone mode.
I think it would be possible to easily support a quasi-static configuration mode simply by extending the design of the dynamic approach slightly. In dynamic mode, the network configuration is passed down as a well-defined data structure. When a particular configuration has been committed, vdsm could write a copy of that configuration data structure to /var/run/vdsm/network-config.json. During a subsequent boot, if the engine cannot be contacted after activating the management network, the cached configuration can be applied using the same code as for dynamic mode. We'd have to flesh out the circumstances under which this would happen.
I like this approach a lot but we need to consider that network configuration is an accumulated state, for example -
1. The engine sends a setup-network command with the full host network configuration.
2. The user configures new network on the host, the engine sends a new setup-network request to VDSM which includes only the delta requested by the user (adding the required network).
3. VDSM adds the new network.
THIS IS COMPLEX!!!!!!! Almost AI. As you need to complete the network setting with what you know.
I think we should clear this first - We have a running hypervisor with running VMs on it, we would like to configure an additional network on the host. You don't want to apply the network-configuration from scratch and mess with the running VMs networks or the storage network or anything else that is running and was not change by the administrator ==> you need to calculate the delta of the changes to perform as less intrusive operation as possible.
and this can go on and on, for dealing with this issue:
We can either hold network-config.json per setup-network command and then for recovering the network configuration state we need to execute chain of set-up networks commands.
Or we can move the logic of calculating the delta from engine to VDSM and on each setup network have the engine pass the full configuration. The problem with that approach is that the analysis logic of the delta has to be done on the engine anyway to give a quick feedback to the user on the validity of his action. Maintaining this logic/code twice is not something we want (it's bad enough to do it once....)
I don't understand how the two algorithm are the same... UI is much more/less verbose at different aspects, while taking the full configuration and convert to actual setting is a completely different sequence. What the feedback of the user? as far as I understand the user is only interested in the end-result... building his own network and expect it to be applied.
A third option is to extend the current API of setup network to include the full configuration in addition to the delta that is sent today. The full configuration would be used for creating network-config.json and for that alone, VDSM would change network configuration according to the delta sent as it does today.
Always pass full configuration, why deal with two cases?
The problem with that approach is that I'm sure someone on the list would say it is a contamination to the API, and we should 'never' pass 'duplicate' information. Personally I find this option the easiest one to deal with the above issue.
The current setupNetwork API allows passing the complete image. The only problem is with vdsm's brutal implementation when it sees a network that it already knows about: it tears the network down completely and rebuilds it according to the current request.
Also, Vdsm needs an explicit request to remove a network - if a network is not mentioned in setupNetwork, it is left unchanged.
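To illustrate the semantics described above, here is a rough sketch of such a request; the field names are from memory and may not match the real verb exactly, so treat this as an assumption rather than a reference:

    networks = {
        'ovirtmgmt': {'nic': 'eth0', 'bootproto': 'dhcp', 'bridged': True},
        'storage': {'nic': 'eth1', 'vlan': '100'},  # an already-known network is torn down and rebuilt
        'old_net': {'remove': True},                # removal must be requested explicitly
        # any network not mentioned at all is left unchanged
    }
    bondings = {}
    options = {'connectivityCheck': True}
    # hypothetical client call: vdsm_api.setupNetworks(networks, bondings, options)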
Livnat, I don't see here any argument for persistence vs. non-persistence, as the above is common to any approach taken.
Only this "manual configuration" argument keeps popping up, which, as I wrote, is irrelevant at large scale - and we do want to go to large scale.
Well, we call it "manual configuration", but it applies just as well to "puppet-based configuration".
Dan.
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Livnat Peer" lpeer@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 4:22:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
<snip>
Livnat, I don't see here any argument for persistence vs. non-persistence, as the above is common to any approach taken.
Only this "manual configuration" argument keeps popping up, which, as I wrote, is irrelevant at large scale - and we do want to go to large scale.
Well, we call it "manual configuration", but it applies just as well to "puppet-based configuration".
Dan.
There can be only one (manager to each host).
Alon.
On Tue, Nov 27, 2012 at 10:42:00AM +0200, Livnat Peer wrote:
On 26/11/12 16:59, Adam Litke wrote:
On Mon, Nov 26, 2012 at 02:57:19PM +0200, Livnat Peer wrote:
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough to
incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network configuration
on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
I worry a lot about the above if we take the dynamic approach. It seems we'd need to introduce before/after 'apply network configuration' hooks where the admin could add custom config commands that aren't yet modeled by the engine.
Yes, and I'm not sure administrators would like the fact that we are 'forcing' them to write everything in a script and get familiar with the VDSM hooking mechanism (which in some cases requires the use of custom properties at the engine level), instead of running a simple command line.
Any other approaches ?
Static configuration has the advantage of allowing a host to bring itself back online independent of the engine. This is also useful for anyone who may want to deploy a vdsm node in standalone mode.
I think it would be possible to easily support a quasi-static configuration mode simply by extending the design of the dynamic approach slightly. In dynamic mode, the network configuration is passed down as a well-defined data structure. When a particular configuration has been committed, vdsm could write a copy of that configuration data structure to /var/run/vdsm/network-config.json. During a subsequent boot, if the engine cannot be contacted after activating the management network, the cached configuration can be applied using the same code as for dynamic mode. We'd have to flesh out the circumstances under which this would happen.
I like this approach a lot but we need to consider that network configuration is an accumulated state, for example -
1. The engine sends a setup-network command with the full host network configuration.
2. The user configures a new network on the host; the engine sends a new setup-network request to VDSM which includes only the delta requested by the user (adding the required network).
3. VDSM adds the new network.
and this can go on and on, for dealing with this issue:
We can either hold network-config.json per setup-network command and then for recovering the network configuration state we need to execute chain of set-up networks commands.
Or we can move the logic of calculating the delta from engine to VDSM and on each setup network have the engine pass the full configuration. The problem with that approach is that the analysis logic of the delta has to be done on the engine anyway to give a quick feedback to the user on the validity of his action. Maintaining this logic/code twice is not something we want (it's bad enough to do it once....)
A third option is to extend the current API of setup network to include the full configuration in addition to the delta that is sent today. The full configuration would be used for creating network-config.json and for that alone, VDSM would change network configuration according to the delta sent as it does today. The problem with that approach is that I'm sure someone on the list would say it is a contamination to the API, and we should 'never' pass 'duplicate' information. Personally I find this option the easiest one to deal with the above issue.
Why don't we just always rebuild the network configuration from scratch each time a change is made? I realize this is a heavy operation, but how often will it really be done? If we do it this way, then there will never be deltas to worry about. The engine always sends the full updated config and vdsm builds that config from scratch.
Livnat
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
As I mentioned above, the main benefit I see of having some sort of persistent configuration is:
- To allow the host to operate independently of the engine in either a failure scenario or in a standalone configuration.
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough
to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot, a host running vdsm is able to receive communication from the engine. This means that the host has a legitimate layer 2 and layer 3 configuration for the interface used to communicate with the engine.
MISSION
Reduce the complexity of the implementation, so that only one algorithm is used in order to reach an operative state as far as networking is concerned.
(Storage is extremely similar; I could s/network/storage/ and this would still be relevant.)
DESIGN FOCAL POINT
A host running vdsm is a complete slave of its master, be it ovirt-engine or another engine.
Having a complete slave eases the implementation:
1. The master always applies the settings as-is.
2. No need to consider the slave state.
3. No need to implement AI to reach from an unknown state X to a known state Y + delta.
4. After reboot (or fence) the host is always in a known state.
ALGORITHM
A. Given communication to vdsm, construct the required vlan, bonding and bridge setup on the machine.
B. Reboot/Fence - the host is reset; apply A.
C. Network configuration is changed at the engine: (1) Drop all resources that are not used by active VMs. (2) Apply A.
D. Host in maintenance - network configuration can be changed; it will be applied when the host goes active: apply C (no resources are used by VMs, so all resources are dropped).
E. Critical network is down (host not operative) - network configuration is not changed.
F. Host unreachable (non-responsive) - network configuration cannot be changed.
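As an illustration of steps A-C, a minimal sketch of the apply loop follows; all helpers (list_configured_networks, networks_used_by_running_vms, remove_network, create_network) and the special-casing of the management network are assumptions made for this example only:

    MGMT_NET = 'ovirtmgmt'  # assumed name of the management network

    def apply_from_engine(desired_networks):
        # the engine pushes the full desired configuration; the host applies it as-is
        in_use = networks_used_by_running_vms()
        # C(1): drop every resource not needed by active VMs (keeping the management network)
        for net in list_configured_networks():
            if net not in in_use and net != MGMT_NET:
                remove_network(net)
        # C(2)/A: construct the required vlan/bonding/bridge setup
        for name, attrs in desired_networks.items():
            if name not in list_configured_networks():
                create_network(name, attrs)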
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence: the host is always reachable, and a previous network configuration that may be malformed is not in effect.
Easy to integrate with various network management solutions, be it a primitive iproute/brctl implementation, NetworkManager, OVS or any other mechanism: as Linux is Linux is Linux, the way to interact with the kernel is the same everywhere, while persisting the configuration requires interacting with the specific distribution.
Moreover, a stateless implementation may be integrated with a larger set of network management tools, as no assumption of persistence is added to the requirements; so if OVS is non-persistent, we use it as-is.
We should aspire to reach a state in which ovirt-node or any similar solution is totally stateless; adding a new node to a cluster should be just some blade rebooting from PXE. With each persistence layer we drop, we get closer to managing a large data center built on a huge number of machines that go up/down as required, joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforce policy even if ovirt-engine is unreachable; in this mode we would like a primitive manager to be able to enforce policy, including networking, while allowing nodes to be added/removed without performing any local configuration.
IMPLICATIONS
The system administrator will not be allowed to modify any of the network settings 'by hand' (except for basic engine reachability).
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn uses the network management interface to push them; this method should be generic enough to allow pushing most of the allowed configuration settings (key=value). This approach will also help with replacing/adding nodes in a cluster and/or mass deployment.
Edge conditions can be handled by executing a script on the host machine, allowing the administrator to override the network configuration upon a network configuration event, as sketched below.
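For example, such an override script - executed around a network configuration event - could look like the following; the hook point and its environment are assumptions for illustration, not an existing vdsm mechanism:

    #!/usr/bin/python
    # hypothetical hook: re-apply a local tweak that the engine does not model yet
    import subprocess

    def main():
        # e.g. relax reverse-path filtering on the management bridge
        subprocess.check_call(['sysctl', '-w', 'net.ipv4.conf.ovirtmgmt.rp_filter=2'])

    if __name__ == '__main__':
        main()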
SUMMARY
Treating the host running vdsm as a complete, stateless slave will enable us to provide better control over that host in both the short and the long run.
Manual intervention on hosts serving as hypervisors has flexibility as its argument. However, in mass deployments, large data centers or dynamic environments, this flexibility becomes a liability.
Thank you, Alon Bar-Lev
Nice writeup! I like where this is going but see my comments inline below.
On Mon, Nov 26, 2012 at 03:18:22PM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough
to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
I do not agree with this direction. It reinforces the single point of failure of the centralized manager. Also, I am actively working to make vdsm a self contained component that is independently useful. This proposal will effectively cripple that effort.
I would prefer a statement that a node _CAN_ be a slave to the engine but can also re-apply a previous configuration in the absence of a management server. See my other post for how this can be achieved without adding much complexity to the design.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known state Y + delta.
These would be properties of any intelligent design, regardless of whether the engine is responsible for triggering the configuration changes or vdsm does it autonomously. In either case you need to write an algorithm that is capable of deleting all networking config (except for the management interface). Without this, you would be unable to apply incremental configuration changes from the engine reliably.
- After reboot (or fence) host is always in known state.
Once you have a method to strip networking config down to only the management interface, you can always get back to a known state. I suggest having a vdsm API that can do this for you.
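Something along these lines, where clearNetworks and its helpers are purely hypothetical names for the 'network reset' verb being suggested:

    def clearNetworks(keep=('ovirtmgmt',)):
        # strip the host back to a known baseline: only the networks in `keep` survive
        for net in list_configured_networks():   # hypothetical helper
            if net not in keep:
                remove_network(net)               # hypothetical helper
        for bond in list_configured_bondings():  # hypothetical helper
            if not bond_in_use(bond):             # hypothetical helper
                remove_bonding(bond)              # hypothetical helper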
ALGORITHM
A. Given communication to vdsm, construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs.
This is the 'network reset' operation I am referring to.
(2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence, host always reachable, previous network configuration that may be malformed is not in effect.
Do you plan to keep the transactional nature of the current API (ie. setSafeNetworkingConfig must be called after setupNetworks in order to persist it)?
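For reference, the transactional pattern in question looks roughly like this, using a hypothetical client wrapper; the verb names are quoted from memory and may not be exact:

    def change_networks(client, networks, bondings):
        client.setupNetworks(networks, bondings, {'connectivityCheck': True})
        # persist only after the change is verified; otherwise a reboot
        # (or the connectivity-check timeout) rolls back to the last safe config
        client.setSafeNetworkConfig()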
Easy to integrate with various network management solution, can it be primitive iproute, brctl implementation, NetworkManager, OVS or any other configuration, as Linux is Linux is Linux, the ability to interact with the kernel is single, while in order to persist implementation requires to interact with the distribution.
Moreover, a stateless implementation may be integrated with larger set of network management tools, as no assumption of persistence is added to the requirements, so if OVS is non-persistent, we use it as-is.
We should aspire to reach to a state in which ovirt-node or any other similar solution is totally stateless, adding a new node to a cluster should be some blade rebooting from PXE, each persistence layer we drop, the closer we reach to managing a large data center built on huge number of machines go up/down as required joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforces policy even if ovirt-engine is unreachable, in this mode we would like a primitive manager to be able to enforce policy including networking, while allowing adding/removing nodes without performing any local configuration.
This could also be done without requiring another redundant management entity by storing a fallback config to apply when engine is unreachable. Yes, it's stateful, but that's not always a problem.
IMPLICATIONS
System administrator will not be allowed to modify 'by hand' any of the network settings (except of this basic engine reachability).
Good luck making that requirement stick in the face of real customers :) You'll need (at the very least) a hooking mechanism for admins to override some configuration that hasn't yet been modeled by oVirt.
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn use the network management interface in order to push them, this method should be generic enough to allow pushing most of the configuration setting allowed (key=value). This approach will also help replacing/adding nodes in cluster and/or mass deployment.
Edge conditions can be handled by executing some script on host machine, allowing administrator to override network configuration upon network configuration event.
SUMMARY
Assuming the host running vdsm as a complete slave and stateless will enable us to provide better control over that host in the short and long run.
Manual intervention on hosts serving as hypervisors has the flexibility argument. However at mass deployment, large data-center or dynamic environment this flexibility argument becomes liability.
Today oVirt plays in the small data center realm so I do think it's important to give appropriate weight to the flexibility argument. It should be possible to build different environments based on the needs of the deployment.
Hello,
----- Original Message -----
From: "Adam Litke" agl@us.ibm.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Livnat Peer" lpeer@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 12:51:36 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
Nice writeup! I like where this is going but see my comments inline below.
On Mon, Nov 26, 2012 at 03:18:22PM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic
enough to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
I do not agree with this direction. It reinforces the single point of failure of the centralized manager. Also, I am actively working to make vdsm a self contained component that is independently useful. This proposal will effectively cripple that effort.
I would prefer a statement that a node _CAN_ be a slave to engine but can also re-apply a previous configuration in the absense of a management server. See my other post for how this can be achieved without adding much complexity to the design.
I strongly disagree. I think you are going to mix two separate components into one. This is a fundamental issue, so I won't answer all the points you raise, because they all derive from this one.
vdsm is a slave, and should move towards being a stateless slave in order to keep it simple and stupid, which is actually smart. A management component can be installed on the same host or on a different host. This management component can be: a) a custom management component that is not part of the oVirt architecture; b) a component that manages a cluster on behalf of the ovirt-engine.
There is no problem in implementing this component as: a) a vdsm protocol proxy - a component that sits between vdsm and the ovirt-engine or whatever the northbound connection is; b) an entirely different entity which communicates with vdsm and speaks a different protocol northbound.
If we follow this design, we have simple building blocks that together can build complex solutions. As each building block is simple, the cost of maintaining each one is lower.
Alon.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known
state Y + delta.
These would be properties of any intelligent design regardless if engine is responsible for triggering the configuration changes or if vdsm does it autonomously. In either case you need to write an algorithm that is capable of deleting all networking config (except for the management interface). Without this, you would be unable to apply incremental configuration changes from engine reliably.
- After reboot (or fence) host is always in known state.
Once you have a method to strip networking config down to only the management interface, you can always get back to a known state. I suggest having a vdsm API that can do this for you.
ALGORITHM
A. Given communication to vdsm, construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs.
This is the 'network reset' operation I am referring to.
(2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence, host always reachable, previous network configuration that may be malformed is not in effect.
Do you plan to keep the transactional nature of the current API (ie. setSafeNetworkingConfig must be called after setupNetworks in order to persist it)?
Easy to integrate with various network management solution, can it be primitive iproute, brctl implementation, NetworkManager, OVS or any other configuration, as Linux is Linux is Linux, the ability to interact with the kernel is single, while in order to persist implementation requires to interact with the distribution.
Moreover, a stateless implementation may be integrated with larger set of network management tools, as no assumption of persistence is added to the requirements, so if OVS is non-persistent, we use it as-is.
We should aspire to reach to a state in which ovirt-node or any other similar solution is totally stateless, adding a new node to a cluster should be some blade rebooting from PXE, each persistence layer we drop, the closer we reach to managing a large data center built on huge number of machines go up/down as required joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforces policy even if ovirt-engine is unreachable, in this mode we would like a primitive manager to be able to enforce policy including networking, while allowing adding/removing nodes without performing any local configuration.
This could also be done without requiring another redundant management entity by storing a fallback config to apply when engine is unreachable. Yes, it's stateful, but that's not always a problem.
IMPLICATIONS
System administrator will not be allowed to modify 'by hand' any of the network settings (except of this basic engine reachability).
Good luck making that requirement stick in the face of real customers :) You'll need (at the very least) a hooking mechanism for admins to override some configuration that hasn't yet been modeled by oVirt.
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn use the network management interface in order to push them, this method should be generic enough to allow pushing most of the configuration setting allowed (key=value). This approach will also help replacing/adding nodes in cluster and/or mass deployment.
Edge conditions can be handled by executing some script on host machine, allowing administrator to override network configuration upon network configuration event.
SUMMARY
Assuming the host running vdsm as a complete slave and stateless will enable us to provide better control over that host in the short and long run.
Manual intervention on hosts serving as hypervisors has the flexibility argument. However at mass deployment, large data-center or dynamic environment this flexibility argument becomes liability.
Today oVirt plays in the small data center realm so I do think it's important to give appropriate weight to the flexibility argument. It should be possible to build different environments based on the needs of the deployment.
-- Adam Litke agl@us.ibm.com IBM Linux Technology Center
On Mon, Nov 26, 2012 at 06:13:01PM -0500, Alon Bar-Lev wrote:
Hello,
----- Original Message -----
From: "Adam Litke" agl@us.ibm.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Livnat Peer" lpeer@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 12:51:36 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
Nice writeup! I like where this is going but see my comments inline below.
On Mon, Nov 26, 2012 at 03:18:22PM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic. - We
are all in agreement that VDSM API should be generic enough to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
I do not agree with this direction. It reinforces the single point of failure of the centralized manager. Also, I am actively working to make vdsm a self contained component that is independently useful. This proposal will effectively cripple that effort.
I would prefer a statement that a node _CAN_ be a slave to engine but can also re-apply a previous configuration in the absense of a management server. See my other post for how this can be achieved without adding much complexity to the design.
I strongly disagree. I think you going to mix two separate components into one. This is a fundamental issue, so I won't answer all the point you raise because all derives from this one.
vdsm is a slave, and should move to a stateless slave in order to keep it simple and stupid, which is actually smart. A management component can be installed on the same host or at different host. This management component can be a) a custom management component that is not part of the ovirt architecture. b) a component that manages a cluster on behalf of the ovirt-engine.
There is no problem in implementing this component at: a) vdsm protocol proxy, a component that sits between vdsm and the ovirt-engine or whatever north connection. b) entirely different entity which communicate with vdsm and different protocol to north.
If we follow this design, we have simple building blocks, that together can build complex solutions. As each building block is simple the cost of maintenance of each is lower.
I am not opposed to implementing the 'static' mode as a separate service that depends on vdsm. My only concern with having this functionality outside of vdsm is that it might break more easily as the code changes. Hopefully, the stable node-level API will mostly prevent problems in this area but we will need to be more careful.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known state Y + delta.
These would be properties of any intelligent design regardless if engine is responsible for triggering the configuration changes or if vdsm does it autonomously. In either case you need to write an algorithm that is capable of deleting all networking config (except for the management interface). Without this, you would be unable to apply incremental configuration changes from engine reliably.
- After reboot (or fence) host is always in known state.
Once you have a method to strip networking config down to only the management interface, you can always get back to a known state. I suggest having a vdsm API that can do this for you.
ALGORITHM
A. Given communication to vdsm, construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs.
This is the 'network reset' operation I am referring to.
(2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence, host always reachable, previous network configuration that may be malformed is not in effect.
Do you plan to keep the transactional nature of the current API (ie. setSafeNetworkingConfig must be called after setupNetworks in order to persist it)?
Easy to integrate with various network management solution, can it be primitive iproute, brctl implementation, NetworkManager, OVS or any other configuration, as Linux is Linux is Linux, the ability to interact with the kernel is single, while in order to persist implementation requires to interact with the distribution.
Moreover, a stateless implementation may be integrated with larger set of network management tools, as no assumption of persistence is added to the requirements, so if OVS is non-persistent, we use it as-is.
We should aspire to reach to a state in which ovirt-node or any other similar solution is totally stateless, adding a new node to a cluster should be some blade rebooting from PXE, each persistence layer we drop, the closer we reach to managing a large data center built on huge number of machines go up/down as required joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforces policy even if ovirt-engine is unreachable, in this mode we would like a primitive manager to be able to enforce policy including networking, while allowing adding/removing nodes without performing any local configuration.
This could also be done without requiring another redundant management entity by storing a fallback config to apply when engine is unreachable. Yes, it's stateful, but that's not always a problem.
IMPLICATIONS
System administrator will not be allowed to modify 'by hand' any of the network settings (except of this basic engine reachability).
Good luck making that requirement stick in the face of real customers :) You'll need (at the very least) a hooking mechanism for admins to override some configuration that hasn't yet been modeled by oVirt.
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn use the network management interface in order to push them, this method should be generic enough to allow pushing most of the configuration setting allowed (key=value). This approach will also help replacing/adding nodes in cluster and/or mass deployment.
Edge conditions can be handled by executing some script on host machine, allowing administrator to override network configuration upon network configuration event.
SUMMARY
Assuming the host running vdsm as a complete slave and stateless will enable us to provide better control over that host in the short and long run.
Manual intervention on hosts serving as hypervisors has the flexibility argument. However at mass deployment, large data-center or dynamic environment this flexibility argument becomes liability.
Today oVirt plays in the small data center realm so I do think it's important to give appropriate weight to the flexibility argument. It should be possible to build different environments based on the needs of the deployment.
On 26/11/12 22:18, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough
to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
To reach the mission above we can also use the approach suggested by Adam: start from a clean configuration and execute setup-network to set the host networking configuration. In Adam's proposal VDSM itself issues the setupNetwork, and in your approach the engine does.
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known state Y + delta.
- After reboot (or fence) host is always in known state.
ALGORITHM
A. Given communication to vdsm, construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs.
I'm not sure what you mean by the above, drop all resources *not* used by VMs?
(2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
What happens if we have a host that is added to the engine (or used to be non-operational and now returns to up) and reports a network configuration different than what is configured in the engine?
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence, host always reachable, previous network configuration that may be malformed is not in effect.
Easy to integrate with various network management solution, can it be primitive iproute, brctl implementation, NetworkManager, OVS or any other configuration, as Linux is Linux is Linux, the ability to interact with the kernel is single, while in order to persist implementation requires to interact with the distribution.
Moreover, a stateless implementation may be integrated with larger set of network management tools, as no assumption of persistence is added to the requirements, so if OVS is non-persistent, we use it as-is.
We should aspire to reach to a state in which ovirt-node or any other similar solution is totally stateless, adding a new node to a cluster should be some blade rebooting from PXE, each persistence layer we drop, the closer we reach to managing a large data center built on huge number of machines go up/down as required joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforces policy even if ovirt-engine is unreachable, in this mode we would like a primitive manager to be able to enforce policy including networking, while allowing adding/removing nodes without performing any local configuration.
IMPLICATIONS
The system administrator will not be allowed to modify any of the network settings 'by hand' (except for basic engine reachability).
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn uses the network management interface to push them; this method should be generic enough to allow pushing most of the allowed configuration settings (key=value). This approach will also help when replacing or adding nodes in a cluster and in mass deployments.
Edge conditions can be handled by executing a script on the host machine, allowing the administrator to override the network configuration upon a network configuration event, as in the sketch below.
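For illustration, a hypothetical host-side override script of the kind mentioned above. The idea is the one stated here, but the trigger mechanism, the NETWORK environment variable and the route values are made-up examples, not an existing vdsm hook.

#!/usr/bin/python
# Hypothetical override hook, run on the host after the network
# configuration has been (re)applied.  NETWORK and the route values are
# illustrative only.
import os
import subprocess

def main():
    if os.environ.get('NETWORK') == 'ovirtmgmt':
        # Example local tweak: re-add a static route the administrator
        # needs, so it survives every centrally-driven re-apply.
        subprocess.check_call(['ip', 'route', 'replace',
                               '10.0.0.0/8', 'via', '192.168.1.254'])

if __name__ == '__main__':
    main()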
SUMMARY
Treating the host running vdsm as a complete, stateless slave will enable us to provide better control over that host in both the short and the long run.
Manual intervention on hosts serving as hypervisors has flexibility in its favor. However, in mass deployments, large data centers or dynamic environments this flexibility becomes a liability.
Thank you, Alon Bar-Lev
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Shu Ming" shuming@linux.vnet.ibm.com, "Saggi Mizrahi" smizrahi@redhat.com, "Dan Kenigsberg" danken@redhat.com Sent: Tuesday, November 27, 2012 12:18:31 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 22:18, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough
to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
For reaching the mission above we can also use the approach suggested by Adam. start from a clean configuration and execute setup network to set the host networking configuration. In Adam's proposal VDSM itself is issuing the setupNetwork and in your approach the engine does.
Right. We can do this in 100+ ways; the question is which implementation will be the simplest.
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known state Y + delta.
- After reboot (or fence) host is always in known state.
ALGORITHM
A. Given communication to vdsm, construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs.
I'm not sure what you mean by the above, drop all resources *not* used by VMs?
Let's say we have a running VM using bridge bridge1. We cannot modify bridge1 as long as the VM is operative, so we drop all network configuration except bridge1 to allow the VM to survive the change (see the sketch below for how the in-use bridges could be collected).
I was tempted to write something else but I did not want to alarm people... But when the network configuration is changed on a host with running VMs, first move the VMs to a different host, then recycle the configuration (simplest: reboot).
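For what it is worth, a sketch (using libvirt-python, which vdsm already depends on) of how the set of bridges that must survive could be collected; everything outside that set is then safe to drop before re-applying the engine's configuration. The teardown step itself is not shown here.

import xml.etree.ElementTree as ET
import libvirt

def bridges_in_use():
    # Return the names of bridges attached to currently running domains,
    # e.g. {'bridge1'} in the example above.
    conn = libvirt.open('qemu:///system')
    used = set()
    try:
        for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
            tree = ET.fromstring(dom.XMLDesc(0))
            for iface in tree.findall("./devices/interface[@type='bridge']"):
                source = iface.find('source')
                if source is not None and source.get('bridge'):
                    used.add(source.get('bridge'))
    finally:
        conn.close()
    return used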
(2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
What happens if we have a host that is added to the engine (or used to be non-operational and now returns to up) and reports a network configuration different than what is configured in the engine?
This is a sign of a totally malicious node! A trigger for fencing and active rebooting. Can you please describe a valid sequence in which this can happen?
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence, host always reachable, previous network configuration that may be malformed is not in effect.
Easy to integrate with various network management solution, can it be primitive iproute, brctl implementation, NetworkManager, OVS or any other configuration, as Linux is Linux is Linux, the ability to interact with the kernel is single, while in order to persist implementation requires to interact with the distribution.
Moreover, a stateless implementation may be integrated with larger set of network management tools, as no assumption of persistence is added to the requirements, so if OVS is non-persistent, we use it as-is.
We should aspire to reach to a state in which ovirt-node or any other similar solution is totally stateless, adding a new node to a cluster should be some blade rebooting from PXE, each persistence layer we drop, the closer we reach to managing a large data center built on huge number of machines go up/down as required joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforces policy even if ovirt-engine is unreachable, in this mode we would like a primitive manager to be able to enforce policy including networking, while allowing adding/removing nodes without performing any local configuration.
IMPLICATIONS
System administrator will not be allowed to modify 'by hand' any of the network settings (except of this basic engine reachability).
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn use the network management interface in order to push them, this method should be generic enough to allow pushing most of the configuration setting allowed (key=value). This approach will also help replacing/adding nodes in cluster and/or mass deployment.
Edge conditions can be handled by executing some script on host machine, allowing administrator to override network configuration upon network configuration event.
SUMMARY
Assuming the host running vdsm as a complete slave and stateless will enable us to provide better control over that host in the short and long run.
Manual intervention on hosts serving as hypervisors has the flexibility argument. However at mass deployment, large data-center or dynamic environment this flexibility argument becomes liability.
Thank you, Alon Bar-Lev
On Tue, Nov 27, 2012 at 05:38:25AM -0500, Alon Bar-Lev wrote:
<snip>
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
For reaching the mission above we can also use the approach suggested by Adam. start from a clean configuration and execute setup network to set the host networking configuration. In Adam's proposal VDSM itself is issuing the setupNetwork and in your approach the engine does.
Right. we can do this 100+ ways, question is which implementation will be the simplest.
My problem with Adam's idea is http://xkcd.com/927/ : it amounts to an (n+1)th way of persisting network configuration on disk. We may have to go that way, but as with VM definitions and storage connections, I would like to keep it in a smaller service on top of vdsm.
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known state Y + delta.
- After reboot (or fence) host is always in known state.
ALGORITHM
A. Given communication to vdsm,
I think we should not brush this premise aside. The current Vdsm API lets Engine tweak the means of communication for the next boot. We had customers that wanted to add a bond, or change the vlan, or fix the IP address of the management interface. They could have used Engine for this, and declared the new configuration as safe (setSafeNetConfig). In many cases, the latter step has to be done out of band, but there are cases where this can be done completely remotely (a rough sketch of that flow appears at the end of this message).
It seems that you suggest taking this crucial configuration completely out-of-band.
construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs.
I'm not sure what you mean by the above, drop all resources *not* used by VMs?
Let's say we have running VM using bridge bridge1. We cannot modify this bridge1 as long as VM is operative. So we drop all network configuration except of bridge1 to allow VM to survive the upgrade.
I was tempted to write something else but I did not want to alarm people.... But... when network configuration is changed on a host with running VMs, first move the VMs to a different host, then recycle configuration (simplest: reboot).
We've been doing that until v3.1...
(2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
What happens if we have a host that is added to the engine (or used to be non-operational and now returns to up) and reports a network configuration different than what is configured in the engine?
This is a sign of totally malicious node! A trigger to fencing, active rebooting. Can you please describe a valid sequence in which it can happen?
I'm not sure this example flies at all, but how about a sysadmin who wants to replace our bonding definition with teaming?
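As a rough sketch of the remote flow described above (push the new management setup, verify connectivity, only then persist it), using the setupNetworks/setSafeNetworkConfig verbs. The endpoint, the certificate handling and the exact attribute names are simplified here and may differ between vdsm versions.

import xmlrpclib

# Hypothetical, simplified connection to the host's vdsm (certificate
# handling omitted).
server = xmlrpclib.ServerProxy('https://myhost:54321')

# Move the management network onto a vlan over a new bond.
server.setupNetworks(
    {'ovirtmgmt': {'bonding': 'bond0', 'vlan': '100',
                   'bootproto': 'dhcp', 'bridged': 'true'}},
    {'bond0': {'nics': ['eth0', 'eth1'], 'options': 'mode=4 miimon=100'}},
    {'connectivityCheck': 'true', 'connectivityTimeout': 60})

# Only once the engine can still reach the host is the configuration
# declared safe, i.e. persisted across the next boot.
server.setSafeNetworkConfig()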
On 11/26/2012 03:18 PM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough
to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known state Y + delta.
- After reboot (or fence) host is always in known state.
ALGORITHM
A. Given communication to vdsm, construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs. (2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence, host always reachable, previous network configuration that may be malformed is not in effect.
Easy to integrate with various network management solution, can it be primitive iproute, brctl implementation, NetworkManager, OVS or any other configuration, as Linux is Linux is Linux, the ability to interact with the kernel is single, while in order to persist implementation requires to interact with the distribution.
Moreover, a stateless implementation may be integrated with larger set of network management tools, as no assumption of persistence is added to the requirements, so if OVS is non-persistent, we use it as-is.
We should aspire to reach to a state in which ovirt-node or any other similar solution is totally stateless, adding a new node to a cluster should be some blade rebooting from PXE, each persistence layer we drop, the closer we reach to managing a large data center built on huge number of machines go up/down as required joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforces policy even if ovirt-engine is unreachable, in this mode we would like a primitive manager to be able to enforce policy including networking, while allowing adding/removing nodes without performing any local configuration.
IMPLICATIONS
System administrator will not be allowed to modify 'by hand' any of the network settings (except of this basic engine reachability).
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn use the network management interface in order to push them, this method should be generic enough to allow pushing most of the configuration setting allowed (key=value). This approach will also help replacing/adding nodes in cluster and/or mass deployment.
Edge conditions can be handled by executing some script on host machine, allowing administrator to override network configuration upon network configuration event.
SUMMARY
Assuming the host running vdsm as a complete slave and stateless will enable us to provide better control over that host in the short and long run.
Manual intervention on hosts serving as hypervisors has the flexibility argument. However at mass deployment, large data-center or dynamic environment this flexibility argument becomes liability.
Thank you, Alon Bar-Lev
Several questions. On the management interface:
1. Bonding configuration must match the switch - I'm not sure you'll even get layer two without persisting the bonding configuration.
2. Are you assuming DHCP for the host to get its initial configuration? Some deployments are not using it, so the management network needs to be persisted.
On the use case: we don't have good support for this today, but there is a notion of a "hybrid mode" - installing vdsm on a node that does other things, to allow it to run some guests at a lower priority. I'm not sure we can assume total automatic control by oVirt in this use case. To date, we assumed "do no harm" to networks we were not directly asked to configure.
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Livnat Peer" lpeer@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 10:08:34 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/26/2012 03:18 PM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough
to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known state Y + delta.
- After reboot (or fence) host is always in known state.
ALGORITHM
A. Given communication to vdsm, construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs. (2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence, host always reachable, previous network configuration that may be malformed is not in effect.
Easy to integrate with various network management solution, can it be primitive iproute, brctl implementation, NetworkManager, OVS or any other configuration, as Linux is Linux is Linux, the ability to interact with the kernel is single, while in order to persist implementation requires to interact with the distribution.
Moreover, a stateless implementation may be integrated with larger set of network management tools, as no assumption of persistence is added to the requirements, so if OVS is non-persistent, we use it as-is.
We should aspire to reach to a state in which ovirt-node or any other similar solution is totally stateless, adding a new node to a cluster should be some blade rebooting from PXE, each persistence layer we drop, the closer we reach to managing a large data center built on huge number of machines go up/down as required joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforces policy even if ovirt-engine is unreachable, in this mode we would like a primitive manager to be able to enforce policy including networking, while allowing adding/removing nodes without performing any local configuration.
IMPLICATIONS
System administrator will not be allowed to modify 'by hand' any of the network settings (except of this basic engine reachability).
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn use the network management interface in order to push them, this method should be generic enough to allow pushing most of the configuration setting allowed (key=value). This approach will also help replacing/adding nodes in cluster and/or mass deployment.
Edge conditions can be handled by executing some script on host machine, allowing administrator to override network configuration upon network configuration event.
SUMMARY
Assuming the host running vdsm as a complete slave and stateless will enable us to provide better control over that host in the short and long run.
Manual intervention on hosts serving as hypervisors has the flexibility argument. However at mass deployment, large data-center or dynamic environment this flexibility argument becomes liability.
Thank you, Alon Bar-Lev
several questions: on management interface:
- bonding configuration must match switch - I'm not sure you'll even
get layer two without persisting the bonding configuration.
Management interface configuration is a separate issue. If we perform changes to this interface only when the host is in maintenance, we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one that is up during boot and one that is down, there is no problem bonding them after boot without persisting the configuration.
- are you assuming dhcp for the host to get initial configuration?
some are not using it so management network needs to be persisted.
See the assumption section: the assumption is that you have engine->vdsm connectivity during boot. Of course you need to persist <something>. The discussion is about persisting the dynamic network configuration made for hosting VMs.
on use case: we don't have good support for this today, but there is a notion of a "hybrid mode" - installing vdsm on a node doing other things to allow it to run some guests in a lower priority. I'm not sure we can assume total automatic control by ovirt in this use case. to date, we assumed "do no harm" to networks we were not directly asked to configure.
This is a product decision; if you enforce this you introduce a whole new world of complexity.
If I were to address this issue, I would run a VM with nested virtualization and reduce the problem to that nested host, as I can manage it as a complete slave.
Regards, Alon.
On 11/27/2012 03:17 PM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Livnat Peer" lpeer@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 10:08:34 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/26/2012 03:18 PM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
- Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough
to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the
working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
- The second discussion is about persisting the network
configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known state Y + delta.
- After reboot (or fence) host is always in known state.
ALGORITHM
A. Given communication to vdsm, construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs. (2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence, host always reachable, previous network configuration that may be malformed is not in effect.
Easy to integrate with various network management solution, can it be primitive iproute, brctl implementation, NetworkManager, OVS or any other configuration, as Linux is Linux is Linux, the ability to interact with the kernel is single, while in order to persist implementation requires to interact with the distribution.
Moreover, a stateless implementation may be integrated with larger set of network management tools, as no assumption of persistence is added to the requirements, so if OVS is non-persistent, we use it as-is.
We should aspire to reach to a state in which ovirt-node or any other similar solution is totally stateless, adding a new node to a cluster should be some blade rebooting from PXE, each persistence layer we drop, the closer we reach to managing a large data center built on huge number of machines go up/down as required joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforces policy even if ovirt-engine is unreachable, in this mode we would like a primitive manager to be able to enforce policy including networking, while allowing adding/removing nodes without performing any local configuration.
IMPLICATIONS
System administrator will not be allowed to modify 'by hand' any of the network settings (except of this basic engine reachability).
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn use the network management interface in order to push them, this method should be generic enough to allow pushing most of the configuration setting allowed (key=value). This approach will also help replacing/adding nodes in cluster and/or mass deployment.
Edge conditions can be handled by executing some script on host machine, allowing administrator to override network configuration upon network configuration event.
SUMMARY
Assuming the host running vdsm as a complete slave and stateless will enable us to provide better control over that host in the short and long run.
Manual intervention on hosts serving as hypervisors has the flexibility argument. However at mass deployment, large data-center or dynamic environment this flexibility argument becomes liability.
Thank you, Alon Bar-Lev
several questions: on management interface:
- bonding configuration must match switch - I'm not sure you'll even
get layer two without persisting the bonding configuration.
Management interface configuration is a separate issue. If we perform changes of this interface when host is in maintenance we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration.
how would you know which bond mode to use? which MTU?
- are you assuming dhcp for the host to get initial configuration?
some are not using it so management network needs to be persisted.
See assumption section, assumption is that you have connectivity engine->vdsm during boot. Of course you need to persist <something> The discussion is the persistence of the dynamic network configuration made by hosting VMs.
on use case: we don't have good support for this today, but there is a notion of a "hybrid mode" - installing vdsm on a node doing other things to allow it to run some guests in a lower priority. I'm not sure we can assume total automatic control by ovirt in this use case. to date, we assumed "do no harm" to networks we were not directly asked to configure.
This is a product decision, if you enforce this you enforce newly world of complexity.
If I am to attend to this issue, I would have ran VM with nested virtualization and reduce the problem into this nested host, as I can manage this host as a complete slave.
nested virt is still not relevant for production use cases in most distros.
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Livnat Peer" lpeer@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 10:19:51 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/27/2012 03:17 PM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Livnat Peer" lpeer@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 10:08:34 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/26/2012 03:18 PM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Livnat Peer" lpeer@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: "Alon Bar-Lev" abarlev@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 26/11/12 03:15, Shu Ming wrote:
Livnat,
Thanks for your summary. I got comments below.
2012-11-25 18:53, Livnat Peer:
Hi All, We have been discussing $subject for a while and I'd like to summarized what we agreed and disagreed on thus far.
The way I see it there are two related discussions:
1. Getting VDSM networking stack to be distribution agnostic.
- We are all in agreement that VDSM API should be generic enough to incorporate multiple implementation. (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf etc.)
- We would like to maintain at least one implementation as the working/up-to-date implementation for our users, this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the quantum based implementation if we agree it is better)
2. The second discussion is about persisting the network configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even if going with the dynamic approach the host should persist the management network configuration.
About dynamical retrieving from a centralized location, when will the retrieving start? Just in the very early stage of host booting before network functions? Or after the host startup and in the normal running state of the host? Before retrieving the configuration, how does the host network connecting to the engine? I think we need a basic well known network between hosts and the engine first. Then after the retrieving, hosts should reconfigure the network for later management. However, the timing to retrieve and reconfigure are challenging.
We did not discuss the dynamic approach in details on the list so far and I think this is a good opportunity to start this discussion...
From what was discussed previously I can say that the need for a well known network was raised by danken, it was referred to as the management network, this network would be used for pulling the full host network configuration from the centralized location, at this point the engine.
About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host have communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates 'apply network configuration' action on the host.
Using this approach we'll have a single path of code to maintain and that would reduce code complexity and bugs - That's quoting Alon Bar Lev (Alon I hope I did not twisted your words/idea).
On the other hand the above approach makes local tweaks on the host (done manually by the administrator) much harder.
Any other approaches ?
I'd like to add a more general question to the discussion what are the advantages of taking the dynamic approach? So far I collected two reasons:
-It is a 'cleaner' design, removes complexity on VDSM code, easier to maintain going forward, and less bug prone (I agree with that one, as long as we keep the retrieving configuration mechanism/algorithm simple).
-It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated
Any other advantages?
discussing the benefits of having the persisted
Livnat
Sorry for the delay. Some more expansion.
ASSUMPTION
After boot a host running vdsm is able to receive communication from engine. This means that host has legitimate layer 2 configuration and layer 3 configuration for the interface used to communicate to engine.
MISSION
Reduce complexity of implementation, so that only one algorithm is used in order to reach to operative state as far as networking is concerned.
(Storage is extremely similar I can s/network/storage/ and still be relevant).
DESIGN FOCAL POINT
Host running vdsm is a complete slave of its master, will it be ovirt-engine or other engine.
Having a complete slave ease implementation:
- Master always apply the setting as-is.
- No need to consider slave state.
- No need to implement AI to reach from unknown state X to known state Y + delta.
- After reboot (or fence) host is always in known state.
ALGORITHM
A. Given communication to vdsm, construct required vlan, bonding, bridge setup on machine.
B. Reboot/Fence - host is reset, apply A.
C. Network configuration is changed at engine: (1) Drop all resources that are not used by active VMs. (2) Apply A.
D. Host in maintenance - network configuration can be changed, will be applied when host go into active, apply C (no resources are used by VMs, all resources are dropped).
E. Critical network is down (Host not operative) - network configuration is not changed.
F. Host unreachable (None responsive) - network configuration cannot be changed.
BENEFITS
Single deterministic algorithm to apply network configuration.
Pre-defined state after host reboot/fence, host always reachable, previous network configuration that may be malformed is not in effect.
Easy to integrate with various network management solution, can it be primitive iproute, brctl implementation, NetworkManager, OVS or any other configuration, as Linux is Linux is Linux, the ability to interact with the kernel is single, while in order to persist implementation requires to interact with the distribution.
Moreover, a stateless implementation may be integrated with larger set of network management tools, as no assumption of persistence is added to the requirements, so if OVS is non-persistent, we use it as-is.
We should aspire to reach to a state in which ovirt-node or any other similar solution is totally stateless, adding a new node to a cluster should be some blade rebooting from PXE, each persistence layer we drop, the closer we reach to managing a large data center built on huge number of machines go up/down as required joining different clusters.
While discussing clusters, we should also consider autonomic clusters that enforces policy even if ovirt-engine is unreachable, in this mode we would like a primitive manager to be able to enforce policy including networking, while allowing adding/removing nodes without performing any local configuration.
IMPLICATIONS
System administrator will not be allowed to modify 'by hand' any of the network settings (except of this basic engine reachability).
Special settings can be set in the master, which will apply them via the master->vdsm protocol, which in turn use the network management interface in order to push them, this method should be generic enough to allow pushing most of the configuration setting allowed (key=value). This approach will also help replacing/adding nodes in cluster and/or mass deployment.
Edge conditions can be handled by executing some script on host machine, allowing administrator to override network configuration upon network configuration event.
SUMMARY
Assuming the host running vdsm as a complete slave and stateless will enable us to provide better control over that host in the short and long run.
Manual intervention on hosts serving as hypervisors has the flexibility argument. However at mass deployment, large data-center or dynamic environment this flexibility argument becomes liability.
Thank you, Alon Bar-Lev
several questions: on management interface:
- bonding configuration must match switch - I'm not sure you'll
even get layer two without persisting the bonding configuration.
Management interface configuration is a separate issue. If we perform changes of this interface when host is in maintenance we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration.
how would you know which bond mode to use? which MTU?
I don't understand the question.
- are you assuming dhcp for the host to get initial
configuration? some are not using it so management network needs to be persisted.
See assumption section, assumption is that you have connectivity engine->vdsm during boot. Of course you need to persist <something> The discussion is the persistence of the dynamic network configuration made by hosting VMs.
on use case: we don't have good support for this today, but there is a notion of a "hybrid mode" - installing vdsm on a node doing other things to allow it to run some guests in a lower priority. I'm not sure we can assume total automatic control by ovirt in this use case. to date, we assumed "do no harm" to networks we were not directly asked to configure.
This is a product decision, if you enforce this you enforce newly world of complexity.
If I am to attend to this issue, I would have ran VM with nested virtualization and reduce the problem into this nested host, as I can manage this host as a complete slave.
nested virt is still not relevant for production use cases in most distros.
I don't really care... if you want to host low-priority VMs/VDSM on a regular generic host, it is simpler to use nested virtualization than to implement any other solution.
If this is going to be a factor in the decision then we bring unneeded complexity into the product for setups that are not mainline; in my humble opinion it will lead to never being able to properly stabilize the host component.
So better to state this clearly at this point.
Alon.
On Tue, Nov 27, 2012 at 03:24:58PM -0500, Alon Bar-Lev wrote:
Management interface configuration is a separate issue.
But it is an important issue that has to be discussed.
If we perform changes of this interface when host is in maintenance we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration.
how would you know which bond mode to use? which MTU?
I don't understand the question.
I think I do: Alon suggests that on boot, the management interface would have no bonding at all and would use a single nic. The switch would have to assume that the other nics in the bond are dead, and would use the only one that is alive to transfer packets.
There is no doubt that we have to persist the vlan tag of the management interface, and its MTU, in the extremely rare case where the network does not allow Linux's default of 1500 (a minimal sketch of such a persisted stub follows below).
Dan.
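A minimal sketch of persisting only what is listed above - the management nic, its vlan tag and MTU - in a tiny local file that is re-applied early at boot with iproute2. The file path and format are made up for illustration; they are not an existing vdsm convention.

import json
import subprocess

def apply_mgmt_net(path='/etc/vdsm/mgmt-net.json'):
    # Hypothetical file, e.g. {"nic": "eth0", "vlan": 100, "mtu": 9000}
    with open(path) as f:
        cfg = json.load(f)

    nic = cfg['nic']
    mtu = cfg.get('mtu', 1500)
    subprocess.check_call(['ip', 'link', 'set', nic, 'mtu', str(mtu), 'up'])

    vlan = cfg.get('vlan')
    if vlan is not None:
        vlan_dev = '%s.%d' % (nic, vlan)
        subprocess.check_call(['ip', 'link', 'add', 'link', nic, 'name', vlan_dev,
                               'type', 'vlan', 'id', str(vlan)])
        subprocess.check_call(['ip', 'link', 'set', vlan_dev, 'mtu', str(mtu), 'up'])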
On 11/28/2012 03:53 AM, Dan Kenigsberg wrote:
On Tue, Nov 27, 2012 at 03:24:58PM -0500, Alon Bar-Lev wrote:
Management interface configuration is a separate issue.
But it is an important issue that has to be discussed..
If we perform changes of this interface when host is in maintenance we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration.
how would you know which bond mode to use? which MTU?
I don't understand the question.
I think I do: Alon suggests that on boot, the management interface would not have bonding at all, and use a single nic. The switch would have to assume that other nics in the bond are dead, and will use the only one which is alive to transfer packets.
There is no doubt that we have to persist the vlan tag of the management interface, and its MTU, in the extremely rare case where the network would not alow Linux's default of 1500.
I was thinking the manager may be using jumbo frames to talk to the host, and the host will have an issue with them since it is set to 1500 instead of 8k. Jumbo frames aren't a rare case.
As for the bond, are you sure you can use a nic in non-bonded mode for all bond modes?
Next, what if we're using Open vSwitch and you need some flow definitions for the management interface?
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Dan Kenigsberg" danken@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Yaniv Kaul" ykaul@redhat.com Sent: Wednesday, November 28, 2012 11:01:35 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/28/2012 03:53 AM, Dan Kenigsberg wrote:
On Tue, Nov 27, 2012 at 03:24:58PM -0500, Alon Bar-Lev wrote:
Management interface configuration is a separate issue.
But it is an important issue that has to be discussed..
If we perform changes of this interface when host is in maintenance we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration.
how would you know which bond mode to use? which MTU?
I don't understand the question.
I think I do: Alon suggests that on boot, the management interface would not have bonding at all, and use a single nic. The switch would have to assume that other nics in the bond are dead, and will use the only one which is alive to transfer packets.
There is no doubt that we have to persist the vlan tag of the management interface, and its MTU, in the extremely rare case where the network would not alow Linux's default of 1500.
i was thinking manager may be using jumbo frames to talk to host, and host will have an issue with them since it is set to 1500 instead of 8k. jumbo frames isn't a rare case.
as for bond, are you sure you can use a nic in a non bonded mode for all bond modes?
Changing the master interface MTU for either a vlan or a bond is required for the management interface and non-management interfaces alike.
So the logic would probably be to set max(mtu(slaves)), regardless of whether it is the management interface or not.
I discussed this with Livnat: if there are applications that access the master interface directly we may break them, as the destination may not support a non-standard MTU.
This is true in the current implementation and in any future implementation.
It is bad practice to use the master interface directly (mixed tagged/untagged); it is better to define in the switch that untagged communication belongs to vlanX, and then use this explicit vlanX at the host.
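To illustrate the max(mtu(slaves)) idea, here is a minimal sketch of how such a calculation could look. The helper names and the sysfs read are assumptions for illustration only, not existing vdsm code:

    import os

    def current_mtu(dev):
        # Read the live MTU of a device from sysfs (Linux sysfs layout assumed).
        with open(os.path.join('/sys/class/net', dev, 'mtu')) as f:
            return int(f.read().strip())

    def required_master_mtu(requested_mtus, floor=1500):
        # The underlying (master) device must carry the largest MTU requested
        # by any vlan/bridge/network stacked on top of it, never below the floor.
        return max([floor] + list(requested_mtus))

    # Hypothetical usage: two networks on the same nic, one asking for jumbo frames.
    # required_master_mtu([1500, 9000])  ->  9000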
next, what if we're using openvswitch, and you need some flow definitions for the management interface?
I cannot answer that as I don't know openvswitch very well and don't know what "flow definitions" are, however, I do guess that it has non persistent mode that can effect any interface on its control. If you like I can research this one.
Alon
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Dan Kenigsberg" danken@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Yaniv Kaul" ykaul@redhat.com Sent: Wednesday, November 28, 2012 11:01:35 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/28/2012 03:53 AM, Dan Kenigsberg wrote:
On Tue, Nov 27, 2012 at 03:24:58PM -0500, Alon Bar-Lev wrote:
Management interface configuration is a separate issue.
But it is an important issue that has to be discussed..
If we perform changes of this interface when host is in maintenance we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration.
how would you know which bond mode to use? which MTU?
I don't understand the question.
I think I do: Alon suggests that on boot, the management interface would not have bonding at all, and use a single nic. The switch would have to assume that other nics in the bond are dead, and will use the only one which is alive to transfer packets.
There is no doubt that we have to persist the vlan tag of the management interface, and its MTU, in the extremely rare case where the network would not alow Linux's default of 1500.
i was thinking manager may be using jumbo frames to talk to host, and host will have an issue with them since it is set to 1500 instead of 8k. jumbo frames isn't a rare case.
as for bond, are you sure you can use a nic in a non bonded mode for all bond modes?
All bond modes have to cope with a situation where only a single nic is active and the rest are down, so one can boot with a single active nic and only later activate the rest and promote to the desired bond mode upon getting the full network configuration from the manager.
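Just to make the suggestion concrete, a rough sketch of "boot with one active nic, then promote to a bond once the configuration arrives" could look like the following. It assumes the iproute2 'ip' tool and the kernel bonding driver; the helper names and the exact flow are illustrative only, not an existing vdsm flow:

    import subprocess

    def run(*cmd):
        # Thin wrapper; a real implementation would add error handling and rollback.
        subprocess.check_call(cmd)

    def promote_to_bond(nics, bond='bond0', mode='active-backup'):
        # Called only after boot, once the full configuration has been received
        # from the manager. Note: enslaving the nic that currently carries the
        # management connection briefly interrupts it, and its IP address would
        # have to be moved to the bond (omitted here).
        run('ip', 'link', 'add', bond, 'type', 'bond', 'mode', mode)
        for nic in nics:
            run('ip', 'link', 'set', nic, 'down')        # slaves must be down to enslave
            run('ip', 'link', 'set', nic, 'master', bond)
        run('ip', 'link', 'set', bond, 'up')

    # Hypothetical usage once the manager's answer arrives:
    # promote_to_bond(['eth0', 'eth1'], mode='802.3ad')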
Changing the master interface mtu for either vlan or bond is required for management interface and non management interface.
So the logic would probably be set max(mtu(slaves)) regardless it is management interface or not.
I discussed this with Livnat, if there are applications that access the master interface directly we may break them, as the destination may not support non standard mtu.
This is true in current implementation and any future implementation.
It is bad practice to use the master interface directly (mixed tagged/untagged), better to define in switch that untagged communication belongs to vlanX, then use this explicit vlanX at host.
next, what if we're using openvswitch, and you need some flow definitions for the management interface?
I cannot answer that as I don't know openvswitch very well and don't know what "flow definitions" are, however, I do guess that it has non persistent mode that can effect any interface on its control. If you like I can research this one.
You mainly need OVS for provisioning VM networks, so here too you can completely bypass OVS during boot and only configure it in a transactional manner upon getting the full network configuration from the manager.
A general question: why would you need to configure VM networks on the host (assuming a persistent cached configuration) upon boot if it cannot talk to the manager? After all, in this case no resources would be scheduled to run on this host until the connection to the manager is restored and an up-to-date network configuration is applied.
thanks, Roni
On 11/28/2012 05:34 AM, Roni Luxenberg wrote:
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Dan Kenigsberg" danken@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Yaniv Kaul" ykaul@redhat.com Sent: Wednesday, November 28, 2012 11:01:35 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/28/2012 03:53 AM, Dan Kenigsberg wrote:
On Tue, Nov 27, 2012 at 03:24:58PM -0500, Alon Bar-Lev wrote:
Management interface configuration is a separate issue.
But it is an important issue that has to be discussed..
If we perform changes of this interface when host is in maintenance we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration.
how would you know which bond mode to use? which MTU?
I don't understand the question.
I think I do: Alon suggests that on boot, the management interface would not have bonding at all, and use a single nic. The switch would have to assume that other nics in the bond are dead, and will use the only one which is alive to transfer packets.
There is no doubt that we have to persist the vlan tag of the management interface, and its MTU, in the extremely rare case where the network would not alow Linux's default of 1500.
i was thinking manager may be using jumbo frames to talk to host, and host will have an issue with them since it is set to 1500 instead of 8k. jumbo frames isn't a rare case.
as for bond, are you sure you can use a nic in a non bonded mode for all bond modes?
all bond modes have to cope with a situation where only a single nic is active and the rest are down, so one can boot with a single active nic and only activate the rest and promote to the desired bond mode upon getting the full network configuration from the manager.
Of course they need to handle a single active nic, but IIRC the host must be configured with a bond matching the switch. I.e., you can't configure the switch to be in a bond and then boot the host with a "single active nic" in a non-bonded config.
Changing the master interface mtu for either vlan or bond is required for management interface and non management interface.
So the logic would probably be set max(mtu(slaves)) regardless it is management interface or not.
I discussed this with Livnat, if there are applications that access the master interface directly we may break them, as the destination may not support non standard mtu.
This is true in current implementation and any future implementation.
It is bad practice to use the master interface directly (mixed tagged/untagged), better to define in switch that untagged communication belongs to vlanX, then use this explicit vlanX at host.
next, what if we're using openvswitch, and you need some flow definitions for the management interface?
I cannot answer that as I don't know openvswitch very well and don't know what "flow definitions" are, however, I do guess that it has non persistent mode that can effect any interface on its control. If you like I can research this one.
you mainly need OVS for provisioning VM networks so here too you can completely bypass OVS during boot and only configure it in a transactional manner upon getting the full network configuration from the manager.
a general question, why would you need to configure VM networks on the host (assuming a persistent cached configuration) upon boot if it cannot talk to the manager? after-all, in this case no resources would be scheduled to run on this host until connection to the manager is restored and up-to-date network configuration is applied.
thanks, Roni
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Roni Luxenberg" rluxenbe@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 2:01:45 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/28/2012 05:34 AM, Roni Luxenberg wrote:
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Dan Kenigsberg" danken@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Yaniv Kaul" ykaul@redhat.com Sent: Wednesday, November 28, 2012 11:01:35 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/28/2012 03:53 AM, Dan Kenigsberg wrote:
On Tue, Nov 27, 2012 at 03:24:58PM -0500, Alon Bar-Lev wrote:
Management interface configuration is a separate issue.
But it is an important issue that has to be discussed..
If we perform changes of this interface when host is in maintenance we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration.
how would you know which bond mode to use? which MTU?
I don't understand the question.
I think I do: Alon suggests that on boot, the management interface would not have bonding at all, and use a single nic. The switch would have to assume that other nics in the bond are dead, and will use the only one which is alive to transfer packets.
There is no doubt that we have to persist the vlan tag of the management interface, and its MTU, in the extremely rare case where the network would not alow Linux's default of 1500.
i was thinking manager may be using jumbo frames to talk to host, and host will have an issue with them since it is set to 1500 instead of 8k. jumbo frames isn't a rare case.
as for bond, are you sure you can use a nic in a non bonded mode for all bond modes?
all bond modes have to cope with a situation where only a single nic is active and the rest are down, so one can boot with a single active nic and only activate the rest and promote to the desired bond mode upon getting the full network configuration from the manager.
of course they need to handle single active nic, but iirc, the host must be configured for a matching bond as the switch. i.e., you can't configure the switch to be in bond, then boot the host with a "single active nic" in a non bonded config
As far as I know, as long as the 2nd nic is down there is no problem; it is as if the cord is out.
Changing the master interface mtu for either vlan or bond is required for management interface and non management interface.
So the logic would probably be set max(mtu(slaves)) regardless it is management interface or not.
I discussed this with Livnat, if there are applications that access the master interface directly we may break them, as the destination may not support non standard mtu.
This is true in current implementation and any future implementation.
It is bad practice to use the master interface directly (mixed tagged/untagged), better to define in switch that untagged communication belongs to vlanX, then use this explicit vlanX at host.
next, what if we're using openvswitch, and you need some flow definitions for the management interface?
I cannot answer that as I don't know openvswitch very well and don't know what "flow definitions" are, however, I do guess that it has non persistent mode that can effect any interface on its control. If you like I can research this one.
you mainly need OVS for provisioning VM networks so here too you can completely bypass OVS during boot and only configure it in a transactional manner upon getting the full network configuration from the manager.
a general question, why would you need to configure VM networks on the host (assuming a persistent cached configuration) upon boot if it cannot talk to the manager? after-all, in this case no resources would be scheduled to run on this host until connection to the manager is restored and up-to-date network configuration is applied.
thanks, Roni
Hi,
I am working on one of the vdsm bugs that we have and I found that the initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fit our needs, so I would like to raise this issue on the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with an ifcfg file for the interface without the MTU keyword at all, and the interface (let's say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 into ifcfg-eth0 and ran ifdown/ifup, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted the MTU keyword from ifcfg-eth0. But after ifup/ifdown the actual MTU of eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is to set the MTU explicitly in the ifcfg file. According to Bill Nottingham this is intentional behaviour. If so, we have a problem in vdsm, because we never set an MTU value unless the user asks for it explicitly. It means that if an interface has MTU=9000 on it just because a bridge with that MTU was once attached to it, and now we want to attach a regular bridge with the *default* MTU=1500, we have a problem. The only thing we can do to avoid this is to set MTU=1500 explicitly in the interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
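For reference, the flow above can be reproduced with a few lines; the sketch below just rewrites the MTU= line in ifcfg-eth0 and bounces the interface, and is illustrative only (real vdsm code goes through its own ifcfg writer and handles many more keys):

    import subprocess

    IFCFG = '/etc/sysconfig/network-scripts/ifcfg-eth0'

    def set_mtu_in_ifcfg(mtu):
        # Rewrite the MTU= line (or drop it when mtu is None) and ifdown/ifup eth0.
        with open(IFCFG) as f:
            lines = [line for line in f if not line.startswith('MTU=')]
        if mtu is not None:
            lines.append('MTU=%d\n' % mtu)
        with open(IFCFG, 'w') as f:
            f.writelines(lines)
        subprocess.check_call(['ifdown', 'eth0'])
        subprocess.check_call(['ifup', 'eth0'])

    # set_mtu_in_ifcfg(9000)   # step 2: eth0 comes up with MTU 9000
    # set_mtu_in_ifcfg(None)   # step 3: keyword removed, but eth0 stays at 9000
    # set_mtu_in_ifcfg(1500)   # only an explicit value brings it back to 1500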
As usual comments more than welcome...
Regards, Igor Lvovsky
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour.
Right. The network scripts should not push what was not configured.
If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly. It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
For the long run we should have total control over our resources and push whatever configuration we require, skipping distribution-specific behaviour.
Regards, Alon.
On 11/28/2012 02:58 PM, Igor Lvovsky wrote:
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour. If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly. It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
Regards, Igor Lvovsky
To me it doesn't sound ugly at all: as part of the interface configuration file we want to add our default MTU value. Doesn't that sound reasonable? If we take over and create a bridge and modify eth0 to work with that bridge, why can't we change or add other interface parameters?
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour. If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly.
Actually, you are.
You were asked for MTU 9000 on the network. As an implementation specific you had to apply this all the way down the chain. Now it's only reasonable that when you cancel the 9000 request you'll do what is necessary to roll back the changes. It's a pity that ifcfg files don't have an option to set MTU='default', but since you can read this default before you change it, please keep it somewhere and revert to that.
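A tiny sketch of this "remember and revert" idea, for the sake of the discussion (the file location and helper names are made up, and whether vdsm should persist anything at all is exactly what is debated below):

    import json
    import os

    SAVED = '/var/run/vdsm/saved-mtu.json'   # hypothetical location

    def remember_mtu(nic):
        # Record the MTU a nic had before we first touched it.
        saved = json.load(open(SAVED)) if os.path.exists(SAVED) else {}
        if nic not in saved:
            with open('/sys/class/net/%s/mtu' % nic) as f:
                saved[nic] = int(f.read())
            json.dump(saved, open(SAVED, 'w'))

    def original_mtu(nic, default=1500):
        # Value to revert to when the last custom-MTU network is removed.
        saved = json.load(open(SAVED)) if os.path.exists(SAVED) else {}
        return saved.get(nic, default)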
It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
Regards, Igor Lvovsky
I suggest we don't have a default. If you don't specify an MTU, it will use whatever is already configured. There is no way to "go back to the defaults", only to set a new value. The engine can assume 1500 (in the case of ethernet devices) is the "recommended value".
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 9:53:48 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour. If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly.
Actually you are,
You where asked for MTU 9000 on the network, As implementation specif you had to do this all the way down the chain Now it's only reasonable that when you cancel the 9000 request then you'll do what is necessary to rollback the changes. It's pity that ifcfg-files don't have the option to set MTU='default', but as you can read this default before you change, then please keep it somewhere and revert to that.
It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
Regards, Igor Lvovsky
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 5:30:17 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I suggest we don't have a default. If you don't specify an MTU it will use whatever is already configured. There is no way to "go back to the defaults" only to set a new value. The engine can assume 1500 (in case of ethernet devices) is the "recommended value".
I understand; this is why I've suggested keeping the old value and reverting to that.
Igor, alternatively you may always calculate based on the hierarchy leaves, meaning the 'trunk' interface always needs to be set to the maximal MTU required by any of the logical networks, and it needs to be recalculated every time you change something in the hierarchy.
The problem is what happens if all are removed and then another is configured with the MTU set to not override; here you may need to use the saved one.
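Something along these lines would then be recomputed on every add/remove of a logical network on the trunk (the names are illustrative only, not existing vdsm code):

    def trunk_mtu(logical_networks, saved_default=1500):
        # logical_networks: iterable of (name, mtu_or_None) currently defined on
        # top of the trunk interface; None means 'do not override'.
        explicit = [mtu for _, mtu in logical_networks if mtu is not None]
        if not explicit:
            # All networks with an explicit MTU were removed: fall back to the
            # value saved before the interface was first touched.
            return saved_default
        return max(explicit)

    # trunk_mtu([('storage', 9000), ('vmnet', None)])  ->  9000
    # trunk_mtu([('vmnet', None)], saved_default=1500) ->  1500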
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 9:53:48 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour. If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly.
Actually you are,
You where asked for MTU 9000 on the network, As implementation specif you had to do this all the way down the chain Now it's only reasonable that when you cancel the 9000 request then you'll do what is necessary to rollback the changes. It's pity that ifcfg-files don't have the option to set MTU='default', but as you can read this default before you change, then please keep it somewhere and revert to that.
It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
Regards, Igor Lvovsky
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 5:30:17 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I suggest we don't have a default. If you don't specify an MTU it will use whatever is already configured. There is no way to "go back to the defaults" only to set a new value. The engine can assume 1500 (in case of ethernet devices) is the "recommended value".
This is not related to the engine. You are right that the actual MTU will be the last configured one, but this is exactly the problem. As I already mentioned, if you add another bridge without a custom MTU, its users (VMs) may assume that the MTU is 1500.
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 9:53:48 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour. If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly.
Actually you are,
You where asked for MTU 9000 on the network, As implementation specif you had to do this all the way down the chain Now it's only reasonable that when you cancel the 9000 request then you'll do what is necessary to rollback the changes. It's pity that ifcfg-files don't have the option to set MTU='default', but as you can read this default before you change, then please keep it somewhere and revert to that.
It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
Regards, Igor Lvovsky
I don't want to keep the last configured MTU. It's problematic. Having a stack is even worse. VDSM should try not to persist anything if possible.
Also, reverting to the last MTU is racy and has weird corner cases. Best to just assume the default is 1500 (like all major OSes do). But since it's not really a default I would call it a recommended setting.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 11:10:27 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 5:30:17 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I suggest we don't have a default. If you don't specify an MTU it will use whatever is already configured. There is no way to "go back to the defaults" only to set a new value. The engine can assume 1500 (in case of ethernet devices) is the "recommended value".
This is not related to engine. You are right that the actually MTU will the last configured one, but this is exactly a problem. As I already mentioned, if you will add another bridge without custom MTU its users (VMs) can assume that the MTU is 1500
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 9:53:48 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour. If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly.
Actually you are,
You where asked for MTU 9000 on the network, As implementation specif you had to do this all the way down the chain Now it's only reasonable that when you cancel the 9000 request then you'll do what is necessary to rollback the changes. It's pity that ifcfg-files don't have the option to set MTU='default', but as you can read this default before you change, then please keep it somewhere and revert to that.
It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
Regards, Igor Lvovsky
OK, I think I need to explain myself better, MTU sizes under 1500 are not interesting as they are only really valid for slow networks which will not be able to support virt workloads anyway. 1500 is "internet MTU" and is the recommended size when communicating with the outside world.
MTU is just a size that has to be agreed upon by all participants in the chain. There is no inherent default MTU, but the default is technically 1500.
Reverting to previous value makes no sense unless you are just testing something out. For that case the engine can remember the current MTU and set it back.
To sum up, I suggest ignoring any previously set value like we would ignore it if VDSM had set it. It makes no sense to keep it because the semantic of setting the MTU is to override the current configuration.
As a side note, having a verb to test the max MTU for a path might be a good idea, to give the engine/user a way to recommend a value.
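Such a verb could, for example, probe the path with do-not-fragment pings of decreasing size. The sketch below assumes the Linux iputils ping and Python 3, and is only an illustration of the idea, not a proposed API:

    import subprocess

    def probe_path_mtu(host, candidates=(9000, 4000, 1500)):
        # Return the largest candidate MTU for which a do-not-fragment ping to
        # `host` succeeds, or None. Payload = MTU - 28 (IPv4 + ICMP headers).
        for mtu in candidates:
            cmd = ['ping', '-M', 'do', '-c', '1', '-s', str(mtu - 28), host]
            if subprocess.call(cmd, stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL) == 0:
                return mtu
        return None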
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 11:23:52 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I don't want to keep the last configured MTU. It's problematic. Having a stack is even worse. VDSM should try not to persist anything if possible.
Also, reverting to the last MTU is raceful and has weird corner cases. Best to just assume default it 1500 (Like all major OSs do). But since it's not really a default I would call it a recommended setting.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 11:10:27 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 5:30:17 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I suggest we don't have a default. If you don't specify an MTU it will use whatever is already configured. There is no way to "go back to the defaults" only to set a new value. The engine can assume 1500 (in case of ethernet devices) is the "recommended value".
This is not related to engine. You are right that the actually MTU will the last configured one, but this is exactly a problem. As I already mentioned, if you will add another bridge without custom MTU its users (VMs) can assume that the MTU is 1500
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 9:53:48 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour. If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly.
Actually you are,
You where asked for MTU 9000 on the network, As implementation specif you had to do this all the way down the chain Now it's only reasonable that when you cancel the 9000 request then you'll do what is necessary to rollback the changes. It's pity that ifcfg-files don't have the option to set MTU='default', but as you can read this default before you change, then please keep it somewhere and revert to that.
It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
Regards, Igor Lvovsky
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com, "Barak Azulay" bazulay@redhat.com Sent: Wednesday, November 28, 2012 6:49:22 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
OK, I think I need to explain myself better, MTU sizes under 1500 are not interesting as they are only really valid for slow networks which will not be able to support virt workloads anyway. 1500 is "internet MTU" and is the recommended size when communicating with the outside world.
MTU is just a size that has to be agreed upon by all participants in the chain. There is no inherent default MTU but default is technically 1500.
Reverting to previous value makes no sense unless you are just testing something out.
Yes it does. There are networks out there that do use MTU > 1500, as weird as it sounds. Usually the admin does the initial settings on the management network, and then, as long as we are set to not touch it, all works well. An example is when you have storage and management on the same network.
Now consider the scenario where for some VMs the user wants to limit to the 'normal/recommended defaults', so in this case he will have to set the logical network property to MTU=1500. When VDSM sets up this chain it supposedly won't touch the interface MTU since it's already bigger (if it does, it's a bug). Now the user has one more logical network of VMs with MTU 9000, since he also has VMs using shared storage on this network.
All works well till now.
But what about when removing the 9000 network? Will VDSM 'remember' that it did not touch the interface MTU in the first place, or will it try to set it to this recommended MTU?
I have no idea :)
For that case the engine can remember the current MTU and set it back.
To sum up, I suggest ignoring any previously set value like we would ignore it if VDSM had set it. It makes no sense to keep it because the semantic of setting the MTU is to override the current configuration.
As a side note, having verb to test max MTU for a path might be a good idea to give the engine\user a way to recommend a value to the user.
That is better but not perfect :)
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 11:23:52 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I don't want to keep the last configured MTU. It's problematic. Having a stack is even worse. VDSM should try not to persist anything if possible.
Also, reverting to the last MTU is raceful and has weird corner cases. Best to just assume default it 1500 (Like all major OSs do). But since it's not really a default I would call it a recommended setting.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 11:10:27 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 5:30:17 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I suggest we don't have a default. If you don't specify an MTU it will use whatever is already configured. There is no way to "go back to the defaults" only to set a new value. The engine can assume 1500 (in case of ethernet devices) is the "recommended value".
This is not related to engine. You are right that the actually MTU will the last configured one, but this is exactly a problem. As I already mentioned, if you will add another bridge without custom MTU its users (VMs) can assume that the MTU is 1500
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 9:53:48 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour. If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly.
Actually you are,
You where asked for MTU 9000 on the network, As implementation specif you had to do this all the way down the chain Now it's only reasonable that when you cancel the 9000 request then you'll do what is necessary to rollback the changes. It's pity that ifcfg-files don't have the option to set MTU='default', but as you can read this default before you change, then please keep it somewhere and revert to that.
It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
Regards, Igor Lvovsky
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Barak Azulay" bazulay@redhat.com, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 12:03:03 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com, "Barak Azulay" bazulay@redhat.com Sent: Wednesday, November 28, 2012 6:49:22 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
OK, I think I need to explain myself better, MTU sizes under 1500 are not interesting as they are only really valid for slow networks which will not be able to support virt workloads anyway. 1500 is "internet MTU" and is the recommended size when communicating with the outside world.
MTU is just a size that has to be agreed upon by all participants in the chain. There is no inherent default MTU but default is technically 1500.
Reverting to previous value makes no sense unless you are just testing something out.
Yes it does, There are networks out there that do use MTU > 1500 as weird as it sounds,
It's not weird at all; this is why MTU settings exist. Setting a low MTU will not break the network, but it will just cause some performance degradation.
this usually the admin does initial settings on the management network and then when you set don't touch all works well. An example is when you have storage and management on the same network.
Now consider the scenario that for some VMs the user wants to limit to the 'normal/recommended defaults' so in this case he will have to set in the logical network property to MTU=1500. when VDSM sets this chain it supposedly won't touch the interface MTU since it's already bigger (if it does it's a bug). Now the user has one more logical network of VMs with 9000 since he also have VMs using shared storage on this network.
All works well till now.
But what about when removing the 9000 network? Will VDSM 'remember' that it did not touch the interface MTU in the first place, or will it try to set it to this recommended MTU?.
It's a question of ownership. Because it's simpler, I suggest we assume ownership and always set the maximum needed (also lowering it if too high). The engine can query the MTU and make decisions accordingly, like setting the current value as the default or as a saved value or whatever. This flow obviously needs user input, so VDSM is not the place to put the decision making.
I have no idea :)
For that case the engine can remember the current MTU and set it back.
To sum up, I suggest ignoring any previously set value like we would ignore it if VDSM had set it. It makes no sense to keep it because the semantic of setting the MTU is to override the current configuration.
As a side note, having verb to test max MTU for a path might be a good idea to give the engine\user a way to recommend a value to the user.
That is better but not perfect :)
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 11:23:52 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I don't want to keep the last configured MTU. It's problematic. Having a stack is even worse. VDSM should try not to persist anything if possible.
Also, reverting to the last MTU is raceful and has weird corner cases. Best to just assume default it 1500 (Like all major OSs do). But since it's not really a default I would call it a recommended setting.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 11:10:27 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 5:30:17 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I suggest we don't have a default. If you don't specify an MTU it will use whatever is already configured. There is no way to "go back to the defaults" only to set a new value. The engine can assume 1500 (in case of ethernet devices) is the "recommended value".
This is not related to engine. You are right that the actually MTU will the last configured one, but this is exactly a problem. As I already mentioned, if you will add another bridge without custom MTU its users (VMs) can assume that the MTU is 1500
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 9:53:48 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fits our needs. So, I would like to raise this issue in the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with ifcfg file for the interface without MTU keyword at all and the proper interface (let say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup it, eth0 got the proper MTU.
3. Now, I removed the bridge and deleted MTU keyword from the ifcfg-eth0. But after ifup/ifdown the actual MTU of the eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is explicitly set MTU in ifcfg file. According to Bill Nottingham it is intentional behaviour. If so, we have a problem in vdsm, because we never set MTU value until user ask it explicitly.
Actually you are,
You where asked for MTU 9000 on the network, As implementation specif you had to do this all the way down the chain Now it's only reasonable that when you cancel the 9000 request then you'll do what is necessary to rollback the changes. It's pity that ifcfg-files don't have the option to set MTU='default', but as you can read this default before you change, then please keep it somewhere and revert to that.
It means that if we have interface with MTU=9000 on it just because once there was a bridge with such MTU attached to it and now we want to attach regular bridge with *default* MTU=1500 we have a problem. The only thing we can do to avoid this it's set explicitly MTU=1500 in interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual comments more than welcome...
Regards, Igor Lvovsky
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 7:15:35 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Barak Azulay" bazulay@redhat.com, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 12:03:03 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com, "Barak Azulay" bazulay@redhat.com Sent: Wednesday, November 28, 2012 6:49:22 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
OK, I think I need to explain myself better, MTU sizes under 1500 are not interesting as they are only really valid for slow networks which will not be able to support virt workloads anyway. 1500 is "internet MTU" and is the recommended size when communicating with the outside world.
MTU is just a size that has to be agreed upon by all participants in the chain. There is no inherent default MTU but default is technically 1500.
Reverting to previous value makes no sense unless you are just testing something out.
Yes it does. There are networks out there that do use MTU > 1500, as weird as it sounds.

It's not weird at all; this is why MTU settings exist. And setting a low MTU will not break the network, it will just cause some performance degradation.
Usually the admin does the initial settings on the management network, and then, as long as you don't touch it, everything works well. An example is when you have storage and management on the same network.

Now consider the scenario where for some VMs the user wants to limit them to the 'normal/recommended defaults', so he has to set the logical network property to MTU=1500. When VDSM sets up this chain it supposedly won't touch the interface MTU, since it's already bigger (if it does, it's a bug). The user also has one more logical network of VMs with MTU=9000, since he has VMs using shared storage on this network as well.

All works well till now.

But what about when removing the 9000 network? Will VDSM 'remember' that it did not touch the interface MTU in the first place, or will it try to set it to this recommended MTU?
It's a question of ownership. Because it's simpler, I suggest we assume ownership and always set the maximum needed (also lowering it if it is too high). The engine can query the MTU and make decisions accordingly, like treating the current value as the default, or as a saved value, or whatever. This flow obviously needs user input, so VDSM is not the place to put the decision making.
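As a rough illustration of the "always set the maximum needed" rule (a hypothetical helper, not VDSM's actual implementation): recompute an interface's MTU from whatever logical networks remain attached to it, where each network either requests an MTU or has no preference.

    DEFAULT_MTU = 1500  # assumption: treat 1500 as the recommended baseline

    def required_mtu(requested_mtus):
        """requested_mtus: iterable of ints, or None for 'no preference'."""
        explicit = [mtu for mtu in requested_mtus if mtu is not None]
        return max(explicit) if explicit else DEFAULT_MTU

    # usage sketch: a NIC carrying a 9000-byte storage network plus a default VM network
    # required_mtu([9000, None]) -> 9000
    # after the 9000 network is removed:
    # required_mtu([None]) -> 1500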
I tend to agree, it's an ownership thing
The engine should not allow a mixed configuration of 'default vs. override' on the same interface. If a user wishes to start playing with MTUs, he needs to do it carefully and across the board.
VDSM should not bother with the issue at all, certainly not playing a guessing game.
Livnat, your 0.02$?
I have no idea :)
For that case the engine can remember the current MTU and set it back.
To sum up, I suggest ignoring any previously set value, just as we would ignore it if VDSM had set it. It makes no sense to keep it, because the semantics of setting the MTU are to override the current configuration.

As a side note, having a verb to test the maximum MTU for a path might be a good idea, to give the engine a way to recommend a value to the user.
That is better but not perfect :)
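A sketch of what such a "probe max MTU" verb could look like, purely illustrative: it shells out to ping with the don't-fragment flag and binary-searches the largest payload that gets through; a real verb, if added, might well use a different mechanism.

    import os
    import subprocess

    def probe_path_mtu(dest, lo=1280, hi=9216):
        """Binary-search the largest ICMP payload that passes with DF set.
        Returns an MTU estimate (payload size + 28 bytes of IPv4/ICMP headers)."""
        devnull = open(os.devnull, 'w')
        best = lo
        while lo <= hi:
            size = (lo + hi) // 2
            rc = subprocess.call(['ping', '-c', '1', '-W', '1', '-M', 'do',
                                  '-s', str(size), dest],
                                 stdout=devnull, stderr=devnull)
            if rc == 0:
                best, lo = size, size + 1
            else:
                hi = size - 1
        devnull.close()
        return best + 28

    # usage sketch:
    # probe_path_mtu('192.168.0.1')  -> e.g. 1500, or ~9000 on a jumbo-frame path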
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 11:23:52 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I don't want to keep the last configured MTU. It's problematic. Having a stack is even worse. VDSM should try not to persist anything if possible.
Also, reverting to the last MTU is racy and has weird corner cases. Best to just assume the default is 1500 (like all major OSes do). But since it's not really a default, I would call it a recommended setting.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 11:10:27 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 5:30:17 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I suggest we don't have a default. If you don't specify an MTU, it will use whatever is already configured. There is no way to "go back to the defaults", only to set a new value. The engine can assume 1500 (in the case of Ethernet devices) is the "recommended value".

This is not related to the engine. You are right that the actual MTU will be the last configured one, but this is exactly the problem. As I already mentioned, if you add another bridge without a custom MTU, its users (the VMs) may assume that the MTU is 1500.
----- Original Message ----- > From: "Simon Grinberg" simon@redhat.com > To: "Igor Lvovsky" ilvovsky@redhat.com > Cc: "VDSM Project Development" > vdsm-devel@lists.fedorahosted.org > Sent: Wednesday, November 28, 2012 9:53:48 AM > Subject: Re: [vdsm] MTU setting according to ifcfg files. > > > > ----- Original Message ----- > > From: "Igor Lvovsky" ilvovsky@redhat.com > > To: "VDSM Project Development" > > vdsm-devel@lists.fedorahosted.org > > Cc: "Simon Grinberg" simon@redhat.com > > Sent: Wednesday, November 28, 2012 2:58:52 PM > > Subject: [vdsm] MTU setting according to ifcfg files. > > > > Hi, > > > > I am working on one of the vdsm bugs that we have and I > > found > > that > > initscripts (initscripts-9.03.34-1.el6.x86_64) > > behaviour doesn't fits our needs. > > So, I would like to raise this issue in the list. > > > > The issue is MTU setting according to ifcfg files. > > I'll try to describe the flow below. > > > > 1. I started with ifcfg file for the interface without > > MTU > > keyword > > at > > all > > and the proper interface (let say eth0) had the > > *default* > > MTU=1500 > > (according to /sys/class/net/eth0/mtu). > > 2. I created a bridge with MTU=9000 on top of this > > interface. > > Everything went OK. > > After I wrote MTU=9000 on ifcfg-eth0 and ifdown/ifup > > it, > > eth0 > > got > > the proper MTU. > > 3. Now, I removed the bridge and deleted MTU keyword > > from > > the > > ifcfg-eth0. > > But after ifup/ifdown the actual MTU of the eth0 > > stayed > > 9000. > > > > The only way to change it back to 1500 (or something > > else) > > is > > explicitly set MTU in ifcfg file. > > According to Bill Nottingham it is intentional > > behaviour. > > If so, we have a problem in vdsm, because we never set > > MTU > > value > > until user ask it explicitly. > > Actually you are, > > You where asked for MTU 9000 on the network, > As implementation specif you had to do this all the way > down > the > chain > Now it's only reasonable that when you cancel the 9000 > request > then > you'll do what is necessary to rollback the changes. > It's pity that ifcfg-files don't have the option to set > MTU='default', but as you can read this default before > you > change, > then please keep it somewhere and revert to that. > > > > It means that if we have interface with MTU=9000 on it > > just > > because > > once there was a bridge with such MTU > > attached to it and now we want to attach regular bridge > > with > > *default* MTU=1500 we have a problem. > > The only thing we can do to avoid this it's set > > explicitly > > MTU=1500 > > in interface's ifcfg file. > > IMHO it's a bit ugly, but it looks like we have no > > choice. > > > > As usual comments more than welcome... > > > > Regards, > > Igor Lvovsky > > _______________________________________________ > > vdsm-devel mailing list > > vdsm-devel@lists.fedorahosted.org > > https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel > > > _______________________________________________ > vdsm-devel mailing list > vdsm-devel@lists.fedorahosted.org > https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel >
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com, "lpeer >> Livnat Peer" lpeer@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 7:37:48 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
The engine should not allow a mixed configuration of 'default vs. override' on the same interface. If a user wishes to start playing with MTUs, he needs to do it carefully and across the board.

VDSM should not bother with the issue at all, certainly not playing a guessing game.

Livnat, your 0.02$?
This is exactly the reason why we should either define a completely stateless slave host and apply the configuration, including what you call 'defaults',

or store the configuration before we perform any change so that we can revert.

Allowing manual changes and distro-specific persistence makes the problem NP-complete, as we do not know what was changed, when, and how to revert it.

Itamar threw a bomb that we should co-exist on a generic host; this is something I do not know how to handle. I am still waiting for a response on where this requirement came from and whether it is mandatory.
Alon
----- Original Message -----
From: "Alon Bar-Lev" alonbl@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Saggi Mizrahi" smizrahi@redhat.com, "lpeer >> Livnat Peer" lpeer@redhat.com Sent: Wednesday, November 28, 2012 12:49:10 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
This is exactly the reason why we should either define a completely stateless slave host and apply the configuration, including what you call 'defaults'.
Completely stateless is problematic because if the engine is down or unavailable and VDSM happens to restart you can't use any of your resources.
The way forward is currently to get rid of most of the configuration in vdsm.conf and only keep things that are necessary for communication with the engine (e.g. core dump on/off, management interface/port, SSL on/off). Other VDSM configuration should have an API introduced to set it; the values will be persisted but only configurable by the management (e.g. reserved host memory, guest RAM overhead, migration timeouts). There should also be a place where VDSM saves the configuration of owned resources (e.g. managed storage connections, managed interfaces). This will be used by VDSM to make sure that those resources are configured properly after restarts/downtime without needing the engine.

To reiterate, the general logic for system resources should be that resources are either owned or used by VDSM; you never share ownership. Never assume ownership unless it is expressly given. VDSM has complete control over owned resources. VDSM has NO control over unowned resources; it can use them but never configure them.

Every other hybrid scheme is just asking for trouble.
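A minimal sketch of what such a local store of VDSM-owned resources could look like; the file path, format and helper names are purely illustrative assumptions, not a description of VDSM's actual persistence:

    import json
    import os
    import tempfile

    OWNED_STATE = '/var/lib/vdsm/owned-resources.json'   # hypothetical location

    def save_owned(resources):
        """Atomically persist the resources VDSM owns (e.g. managed networks),
        so they can be reapplied after a restart without asking the engine."""
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(OWNED_STATE))
        with os.fdopen(fd, 'w') as f:
            json.dump(resources, f, indent=2)
        os.rename(tmp, OWNED_STATE)

    def load_owned():
        """Return the persisted owned-resource configuration, or {} if none exists."""
        if not os.path.exists(OWNED_STATE):
            return {}
        with open(OWNED_STATE) as f:
            return json.load(f)

    # usage sketch:
    # save_owned({'networks': {'storage': {'nic': 'eth0', 'mtu': 9000}}})
    # on restart, reapply load_owned() without contacting the engine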
Or store the configuration before we perform any change so that we can revert.

Allowing manual changes and distro-specific persistence makes the problem NP-complete, as we do not know what was changed, when, and how to revert it.

Itamar threw a bomb that we should co-exist on a generic host; this is something I do not know how to handle. I am still waiting for a response on where this requirement came from and whether it is mandatory.
It's all about resource provisioning and ownership delegation.
On 11/28/2012 01:20 PM, Saggi Mizrahi wrote: ...
This is exactly the reason why we should either define a completely stateless slave host and apply the configuration, including what you call 'defaults'.
Completely stateless is problematic because if the engine is down or unavailable and VDSM happens to restart you can't use any of your resources.
that's actually a very good point. going forward we would like hosts to be able to continue working when the engine is down, even post reboot. the engine passing the policy to the hosts, and the hosts assuming that policy is still relevant post boot, would allow that (though relying on central network services like quantum will also cause an issue for this architecture).
Itamar threw a bomb that we should co-exist on a generic host; this is something I do not know how to handle. I am still waiting for a response on where this requirement came from and whether it is mandatory.
It's all about resource provisioning and ownership delegation.
hybrid mode is something brought up several times as a use case we should consider. so far our main concern was that SLA enforcement on the host would be needed (cgroups, for example) between the native and guest workloads, as well as making sure hybrid nodes will not contend for critical resources, to reduce the risk of needing to fence them.
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com Sent: Thursday, November 29, 2012 1:06:29 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
hybrid mode is something brought up several times as a use case we should consider. so far our main concern was that SLA enforcement on the host would be needed (cgroups, for example) between the native and guest workloads, as well as making sure hybrid nodes will not contend for critical resources, to reduce the risk of needing to fence them.
Brought up - OK. Should it be supported - that is the question. There is no problem wrapping the original server within a VM and solving the problem in current terms.
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com Sent: Thursday, November 29, 2012 1:06:29 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
that's actually a very good point. going forward we would like hosts to be able to continue working when the engine is down, even post reboot.
How? Will you really fire up VMs without central management control? This implies you'll have to go into host-based clustering, where you'll hit scale limits like any other such solution.

If you do not intend to do the above, then why not stateless? A host that remembers an old configuration on wakeup may at best not work, but at worst may conflict with the existing configuration and do unpredictable things to your environment. You also lose the benefit of recovering a badly configured host simply by fencing it.
Itamar threw a bomb that we should co-exist on a generic host; this is something I do not know how to handle. I am still waiting for a response on where this requirement came from and whether it is mandatory.
Just a few reasons:
- One of the key attractions of KVM is that with it you are able to run processes/applications alongside virtual machines. Look at every KVM presentation out there.
- Licensing and support: some applications (do I hear Oracle?) are not licensed/supported on KVM, but you would still want to use the free cycles for virtual machines (especially on modern servers).
- 3rd party monitoring and audit tools
- custom drivers
- custom SLA policies
- etc., etc., etc.

You don't want to say: "Ha, if you use VDSM to manage the node you can't do all of the above."

Stateless, by the way, in the sense that after reboot the node goes back to the original configuration, works very well with the requirement above. This means that the admin sets everything required for the non-virtualized hardware, VDSM configures on top, and after reboot all is reverted to the original, so everything else continues to work after reboot.
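As an illustration of configuring "on top" without persisting anything (a sketch only, not VDSM code; it assumes standard iproute2 tooling): changes made this way live only in the running kernel and disappear on reboot, leaving the admin's ifcfg files untouched.

    import subprocess

    def run(cmd):
        """Run a command and raise on failure (sketch only, no rollback logic)."""
        subprocess.check_call(cmd)

    def add_runtime_network(bridge, nic, mtu):
        """Set up a bridge on top of a NIC in the running kernel only:
        nothing is written to ifcfg files, so a reboot reverts it all."""
        run(['ip', 'link', 'add', bridge, 'type', 'bridge'])
        run(['ip', 'link', 'set', nic, 'mtu', str(mtu), 'master', bridge])
        run(['ip', 'link', 'set', bridge, 'mtu', str(mtu), 'up'])

    # usage sketch:
    # add_runtime_network('br-storage', 'eth0', 9000)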
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Itamar Heim" iheim@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com, "Saggi Mizrahi" smizrahi@redhat.com Sent: Thursday, November 29, 2012 2:12:09 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com Sent: Thursday, November 29, 2012 1:06:29 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
<snip>
You don't want to say: "Ha, if you use VDSM to manage the node you can't do all of the above."
Actually, I am. I claim that we will never be able to stabilize a product if we go this way. There is a very good reason why other virtualization solutions out there put similar restrictions in place.

When and if we finish with a rock-solid solution using a pure, completely managed slave and have good market share, then we can start thinking about these non-deterministic approaches. Or... maybe this is the marketing advantage we would like, and then we should FOCUS on this approach - but then we are aiming at a low-scale, manually managed solution, and the "other" open source project will probably consume the higher scale.

As I wrote, there are two solutions using CURRENT technology for that:
1. Move the original host into a virtual machine and manage the host as a whole.
2. Execute a virtual machine with nested virtualization and manage this VM as if it was our host; in this mode we have no conflict.
Stateless, by the way, in the sense that after reboot the node goes back to the original configuration, works very well with the requirement above. This means that the admin sets everything required for the non-virtualized hardware, VDSM configures on top, and after reboot all is reverted to the original, so everything else continues to work after reboot.
This is not the way to go in this case; Oracle will not live within a stateless world, nor will 1000 other solutions.
Regards, Alon.
----- Original Message -----
From: "Alon Bar-Lev" alonbl@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Thursday, November 29, 2012 2:25:03 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Itamar Heim" iheim@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com, "Saggi Mizrahi" smizrahi@redhat.com Sent: Thursday, November 29, 2012 2:12:09 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com Sent: Thursday, November 29, 2012 1:06:29 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
<snip>
When and if we finish with a rock-solid solution using a pure, completely managed slave and have good market share, then we can start thinking about these non-deterministic approaches.
Actually, it's the other way around. Since you are far from there, many (if not most) users today actually use a full-blown host to complement features or add required functionality like monitoring, a private firewall, central logging, customization for third-party devices, etc.
This is not the way to go in this case; Oracle will not live within a stateless world, nor will 1000 other solutions.
You missed what I said: the admin statefully configures everything required for the 'native' applications, and VDSM may configure statelessly on top. After reboot, the host goes back to the original configuration, which is enough to run the 'native' applications that are not managed by VDSM.
Regards, Alon.
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Thursday, November 29, 2012 2:35:46 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Alon Bar-Lev" alonbl@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Thursday, November 29, 2012 2:25:03 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Itamar Heim" iheim@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com, "Saggi Mizrahi" smizrahi@redhat.com Sent: Thursday, November 29, 2012 2:12:09 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com Sent: Thursday, November 29, 2012 1:06:29 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
<snip>
Actually, it's the other way around. Since you are far from there, many (if not most) users today actually use a full-blown host to complement features or add required functionality like monitoring, a private firewall, central logging, customization for third-party devices, etc.
And again, I disagree. This may be enough for an entry-level solution. An enterprise solution will probably prefer rhev-h or a similar self-managed solution - this, of course, if we provide decent management support.

Customization for third-party devices has no management/state impact. Central logging - we have the log collector for that. Monitoring - if we are going to provide SLA we are going to perform monitoring as well. Private firewall - this will totally conflict with whatever the engine enforces.
You missed what I said: the admin statefully configures everything required for the 'native' applications, and VDSM may configure statelessly on top. After reboot, the host goes back to the original configuration, which is enough to run the 'native' applications that are not managed by VDSM.
No, I did not. Let's say we introduce watchdog support into vdsm - what will be the impact on Oracle? Let's say we modify the block scheduler - will it conflict? Let's say Oracle tunes the scheduler (IO or CPU) - what will be the impact? Now let's assume we attach iSCSI and then communication is lost - what impact will a hanging mount point have on other processes? I can think of many other complex scenarios without a valid solution. We will not be able to stabilize a solution this way... but we can sure die trying :)
Alon
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Thursday, November 29, 2012 2:35:46 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Alon Bar-Lev" alonbl@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Thursday, November 29, 2012 2:25:03 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Itamar Heim" iheim@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com, "Saggi Mizrahi" smizrahi@redhat.com Sent: Thursday, November 29, 2012 2:12:09 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com Sent: Thursday, November 29, 2012 1:06:29 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
<snip>
Customization for third-party devices has no management/state impact. Central logging - we have the log collector for that. Monitoring - if we are going to provide SLA we are going to perform monitoring as well. Private firewall - this will totally conflict with whatever the engine enforces.
engine & vdsm should provide a framework/api to offload network services like FW, IPS, DLP, WAAS, etc. (as well as other types of services like backup/DR) to external virtual appliances by seamlessly routing/redirecting traffic to/from these appliances. this will potentially reduce conflicts & dependencies and accelerate feature velocity.
Or... maybe this is the marketing advantage we would like, and then we should FOCUS on this approach, but then we are aiming to low scale, manual managed solution, and the "other" open source project will probably consume the higher scale.
As I wrote there are two solution using CURRENT technology for that:
- Move the original host into virtual machine and manage the
host as a whole. 2. Execute virtual machine with nested virtualization and manage this VM as if it was our host, in this mode we have no conflict.
Stateless by the way, in a sense that after reboot the node goes back to the original configuration, works very well with the requirement above. This means that the admin sets everything required for the non virtualized hardware, VDSM configures on top, but after reboot all is reverted to the original thus everything else continues to work after reboot.
This is not the way to go in this case, Oracle will not live within stateless world, nor 1000 other solutions.
You missed what I've said: the admin configures statefully everything required for the 'native' application, and VDSM may configure statelessly on top. After reboot, the host goes back to the original configuration, which is enough to run the 'native' applications not managed by VDSM.
No I did not. Let's say we introduce watchdog support into vdsm, what will be the impact on Oracle? Let's say we modify the block scheduler, will it conflict? Let's say Oracle tunes the scheduler (io or cpu), what will be the impact? Now, let's assume we attach iscsi and then communication is lost, what impact will this have on other processes when the mount point hangs processes? I can think of many other complex scenarios without a valid solution. We will not be able to stabilize a solution this way... but we can sure die trying :)
Alon
----- Original Message -----
From: "Roni Luxenberg" rluxenbe@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Thursday, November 29, 2012 5:13:04 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Thursday, November 29, 2012 2:35:46 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Alon Bar-Lev" alonbl@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Thursday, November 29, 2012 2:25:03 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Itamar Heim" iheim@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com, "Saggi Mizrahi" smizrahi@redhat.com Sent: Thursday, November 29, 2012 2:12:09 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Andrew Cathrow" acathrow@redhat.com Sent: Thursday, November 29, 2012 1:06:29 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
<snip>
>> Assuming manual changes and distro-specific persistence makes the problem complex to the point of NP-completeness, as we do not know what was changed, when, and how to revert.
>> Itamar threw a bomb that we should co-exist on a generic host; this is something I do not know how to compute. I am still waiting for a response on where this requirement came from and whether it is mandatory.
Just few reasons:
- One of the key attraction with KVM is that with it, you are
capable to run process/application along side virtual machines. Look at every KVM presentation out there.
- Licencing and support, some application (do I hear Oracle?)
are not licensed/supported on KVM, but you would still want to use free cycles for virtual machines (especially on modern servers)
- 3rd party monitoring and audit tools
- custom drivers
- custom SLA policies
- etc,
- etc,
- etc,
You don't want to say, ha if you use VDSM to manage the node you can't do all of the above.
Actually, I am. I claim that we will never be able to stabilize a product if we go this way. There is a very good reason why other virtualization solutions out there put similar restriction.
When and if we finish with rock solid solution using a pure completely managed slave and have good market share then we can start thinking about these non deterministic approaches.
Actually it's the other way around. Since you are far from there, then many (if not most) users today actually use a full blown host to complement features or required functionality like: Monitoring, Private firewall, central logging, customization for third party devices etc.
And again, I disagree. This may be enough for an entry level solution. Enterprise solution will probably prefer rhev-h or similar self managed solution, this of course, if we provide decent management support.
Customization for third party devices has no management/state impact. Central logging - we have the log collector for that. Monitoring - if we going to provide SLA we are going to perform monitoring as well. Private Firewall - this will totally conflict with whatever engine enforces.
engine & vdsm should provide a framework/api to offload network services like FW, IPS, DLP, WAAS, etc. (as well as other types of services like backup/DR) to external virtual appliances by seamlessly routing/redirecting traffic to/from these appliances. this will potentially reduce conflicts & dependencies and accelerate feature velocity.
There is another thread on that (I don't recall its title, but it discusses the API), and there too we are divided into the stateless vs. stateful parties.
Or... maybe this is the marketing advantage we would like, and then we should FOCUS on this approach, but then we are aiming to low scale, manual managed solution, and the "other" open source project will probably consume the higher scale.
As I wrote there are two solution using CURRENT technology for that:
- Move the original host into virtual machine and manage the
host as a whole. 2. Execute virtual machine with nested virtualization and manage this VM as if it was our host, in this mode we have no conflict.
Stateless by the way, in a sense that after reboot the node goes back to the original configuration, works very well with the requirement above. This means that the admin sets everything required for the non virtualized hardware, VDSM configures on top, but after reboot all is reverted to the original thus everything else continues to work after reboot.
This is not the way to go in this case, Oracle will not live within stateless world, nor 1000 other solutions.
You missed what I've said: Admin configures state-fully everything required for the 'native' application, VDSM may configure starless on top. After reboot, host goes back to the original configuration that is enough to run the 'native' non managed by VDSM applications.
No I did not. Let's say we introduce watchdog support into vdsm, what will be the impact on Oracle? Let's say we modify block scheduler, will it conflict? Let's say Oracle tune the scheduler (io or cpu), what will be the impact? Now, let's assume we attach iscsi, then communication is lost, what impact will this have on other processes when mount point hangs process? I can think of many other complex scenarios without a valid solution. We will not be able to stabilize a solution this way... but we can sure die trying :)
Alon
On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
Itamar threw a bomb that we should co-exist on a generic host; this is something I do not know how to compute. I am still waiting for a response on where this requirement came from and whether it is mandatory.
This bomb has been ticking forever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` at the least convenient moment. The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-edge-ness we all cherish.
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 10:20:11 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
Itamar though a bomb that we should co-exist on generic host, this is something I do not know to compute. I still waiting for a response of where this requirement came from and if that mandatory.
This bomb has been ticking since ever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` in the least convenient moment. The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-endgeness we all cherish.
There is a difference between having a generic OS and having a generic setup, running your email server, file server and LDAP on a node that is running VMs.
I have no problem with having a generic OS (as opposed to ovirt-node), but we must have full control over it.
Alon.
On 11/29/2012 04:24 AM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 10:20:11 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
Itamar though a bomb that we should co-exist on generic host, this is something I do not know to compute. I still waiting for a response of where this requirement came from and if that mandatory.
This bomb has been ticking since ever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` in the least convenient moment. The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-endgeness we all cherish.
There is a different between having generic OS and having generic setup, running your email server, file server and LDAP on a node that running VMs.
I have no problem in having generic OS (opposed of ovirt-node) but have full control over that.
Alon.
Can I say we have agreement that oVirt should cover two kinds of hypervisors? A stateless slave is good for pure, normal virtualization workloads, while a generic host keeps the flexibility of customization. In my opinion it's good for the oVirt community to provide choices for users: they can customize it in production, in the build, and even in the source code, according to their requirements and skills. So, moving back to the discussion of network configuration, I would like to suggest we adopt both solutions.
Dynamic way (as Alon suggested in his previous mail) -- for oVirt node. It takes a step towards real statelessness. Actually it's also helpful for offloading the transaction management from vdsm in the static way. We're going to build vdsm's network setup module on top of a generic host network manager, like libvirt's virInterface. But to persist the network configuration on oVirt node, vdsm has to care about the lower-level details; if we only run the static way on the generic host, then the host network manager could perform the rollback on behalf of vdsm. I only have two comments on the dynamic way: 1. Do we really need to care about the management interface? How about just leaving it to installation and disallowing configuring it at runtime? 2. How about putting the retrieval of the network configuration from the engine into vdsm-reg?
Static way -- for generic host. We didn't follow up much on this topic in the thread, so I would like to share my understanding to continue the discussion. As Dan said in the first message of this thread, openvswitch can't keep layer 3 configuration, so it's not appropriate to use it by itself to cover the base network configuration. Then we have two choices: netcf and NetworkManager. It seems netcf is not as widely used as NM. Currently it supports fedora/rhel, debian/ubuntu and suse; to support a new distribution you need to add a converter that translates the interface's XML definition into the native configuration, because netcf only covers the static configuration part and relies on the system network service to make the configuration take effect. Compared with netcf, NM makes it easier to support a new distribution, because it has its own daemon that parses its own key-value files and calls the netlink library to perform live changes. Besides that, NM can also monitor the physical interface's link status and can run callbacks on certain events. Daniel mentioned that libvirt would support NM via the virInterface API. That's good for vdsm, but I found it doesn't fit vdsm's requirements very well: 1. It doesn't allow defining a bridge on top of an existing interface; the schema requires you to define the bridge's port interface together with the bridge, while the vdsm setupNetwork verb allows creating a bridge given only the name of an existing bonding. To work around it, vdsm has to get the bonding definition from libvirt or collect the information from /sys, and then put the bonding definition into the bridge's definition. 2. It also removes the bridge's port interface (for example the bonding device) when removing the bridge, which is not what vdsm expects when the option 'implicitBonding' is unset; to work around it, vdsm has to re-create the bonding as in 1. 3. The MTU setting is propagated to the nic or bond when an MTU is set on a bridge. This could break MTU support in oVirt when adding a bridge with a smaller MTU on a vlan whose slave nic is also used by a network with a bigger MTU. 4. Some less-used options are not allowed by the schema, like some bonding modes and options.
Some of these could change in the NM backend, but it's better to state vdsm's requirements while libvirt is moving to NM. Would it be better if libvirt allowed vdsm to manipulate sub-elements of a network? Probably Igor and Antoni have more comments on it.
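To make the workaround in point 1 above concrete, here is a rough Python sketch of what a caller has to do with the libvirt-python interface API today: read the existing bond's definition and splice it into the bridge definition before defining the bridge. The names (ovirtmgmt, bond0), the connection URI and the exact XML layout are illustrative assumptions, not actual vdsm code:

    import xml.etree.ElementTree as ET
    import libvirt

    def define_bridge_over_existing_bond(conn, bridge_name, bond_name):
        # The interface XML schema cannot reference an existing bond by name
        # from inside a <bridge>, so fetch the bond's full definition first.
        bond = conn.interfaceLookupByName(bond_name)
        bond_xml = ET.fromstring(bond.XMLDesc(0))
        # A nested interface should not carry its own <start> element.
        start = bond_xml.find('start')
        if start is not None:
            bond_xml.remove(start)
        # Build the bridge definition and embed the bond definition inside it.
        iface = ET.Element('interface', type='bridge', name=bridge_name)
        ET.SubElement(iface, 'start', mode='onboot')
        bridge = ET.SubElement(iface, 'bridge', stp='off', delay='0')
        bridge.append(bond_xml)
        # Define and start the bridge; the bond is now "owned" by the bridge
        # definition, which is why undefining the bridge also removes the bond
        # (point 2 above).
        new_iface = conn.interfaceDefineXML(ET.tostring(iface).decode(), 0)
        new_iface.create()
        return new_iface

    # Assumed usage:
    # conn = libvirt.open('qemu:///system')
    # define_bridge_over_existing_bond(conn, 'ovirtmgmt', 'bond0')

The point of the sketch is only to show the coupling: the bridge definition has to swallow the bond definition, so the two can no longer be managed independently.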
----- Original Message -----
From: "Mark Wu" wudxw@linux.vnet.ibm.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Alon Bar-Lev" alonbl@redhat.com, "Dan Kenigsberg" danken@redhat.com, "Simon Grinberg" simon@redhat.com, "Antoni Segura Puimedon" asegurap@redhat.com, "Igor Lvovsky" ilvovsky@redhat.com, "Daniel P. Berrange" berrange@redhat.com Sent: Monday, December 3, 2012 7:39:49 AM Subject: Re: [vdsm] Back to future of vdsm network configuration
On 11/29/2012 04:24 AM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 10:20:11 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
Itamar though a bomb that we should co-exist on generic host, this is something I do not know to compute. I still waiting for a response of where this requirement came from and if that mandatory.
This bomb has been ticking since ever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` in the least convenient moment. The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-endgeness we all cherish.
There is a different between having generic OS and having generic setup, running your email server, file server and LDAP on a node that running VMs.
I have no problem in having generic OS (opposed of ovirt-node) but have full control over that.
Alon.
Can I say we have got agreement on oVirt should cover two kinds of hypervisors? Stateless slave is good for pure and normal virtualization workload, while generic host can keep the flexibility of customization. In my opinion, it's good for the oVirt community to provide choices for users. They could customize it in production, building and even source code according to their requirements and skills.
I also think it would be good to support both modes! It would also be good if we could rule the world! :)
Now seriously... :)
If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach a stable milestone.
Having a good, clean interface for vdsm networking within the stateless mode will allow a persistent implementation to exist even if the whole implementation of the master and vdsm assumes statelessness. This kind of implementation would get a new state from the master, compare it to whatever exists on the host, and sync.
I, of course, will be against investing resources in such network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.
Having said that, let's come back to your original claim: """while generic host can keep the flexibility of customization."""
NOBODY, and I repeat my answer to Dan, NOBODY claims we should not support a generic host. But the term 'generic' seems to confuse everyone... a generic host does not mean the administrator can do whatever he likes; it is just a host that is installed using the standard distribution installation procedure.
Using 'generic host' can be done with either stateful or stateless modes.
However, what customization can be done, and how, to a resource that is managed by VDSM (e.g. storage, network) is a completely different question.
There cannot be two managers of the same resource; it is a rule of thumb. Any other approach is non-deterministic and may lead to a huge resource investment with almost no benefit, as it will never be stable.
So moving back to the discussion network configuration, I would like to suggest we could adopt both of the two solutions.
dynamic way (as Alon suggested in his previous mail.) -- for oVirt node. It will take a step towards real stateless. Actually it's also helpful to offload the transaction management from vdsm for the static way.
It will also provide the framework needed in order to provide network on demand, as Livnat plans: define network resources (and I guess storage) when a VM is moved to the host. In this mode there is no other way to go!
We're going to build vdsm network setup module on top of generic host network manager, like libvirt virInterface. But to persist the network configuration on oVirt node, vdsm has to care about the details of lower level. If we only run the static way on the generic host, then host network manager could perform the rollback stuff on behalf of vdsm. I only have two comments on dynamic way:
1. Do we really need to care about the management interface? How about just leaving it to installation and disallowing configuring it at runtime?
2. How about putting the retrieval of the network configuration from the engine into vdsm-reg?
vdsm-reg is going to be killed soon, just like the vdsm-bootstrap. I was tempted to do this for 3.2, but I was taken...
static way -- for generic host. We didn't follow much on this topic in the thread. So I would like to talk about my understanding to continue this discussion. As Dan said in the first message of this thread, openvswitch couldn't keep 3rd level configurations, so it's not appropriate to use itself to cover the base network configurations. Then we have two choices: netcf and NetworkManager. It seems netcf is not used as widely as NM. Currently, it supports fedora/rhel, debian/ubuntu and suse. To support a new distribution, you need add a converter to translate the interface's XML definition into native configuration, because netcf just covers the part of static configuration, and relies on the system network service to make configurations take effect. Compared with netcf, it's easier to support new distribution because it has its own daemon to parse the self-defining key-value file and call netlink library to perform the live change. Besides that, NM can also monitor the physical interface's link status, and has the ability run callback on some events. Daniel mentioned that libvirt would support NM by the virInterface API. That's good for vdsm. But I found it didn't fit vdsm's requirements very well.
- It doesn't allow to define a bridge on top of an existing
interface. That means the schema requires you define the bridge port interface together with the bridge. The vdsm setupNetwork verb allows creating a bridge with a given name of existing bonding. To work around it, vdsm has to get the bonding definition from libvirt or collect information from /sys. And then put the bonding definition into bridge's definition. 2. It also removes its port interface, like bonding device together when remove bridge. It's not expected by vdsm when the option 'implicitBonding' is unset. To work around it, vdsm has to re-create the bonding as said in 1. 3. mtu setting is propagated to nic or bond when setting a mtu to bridge. It could break mtu support in oVirt when adding a bridge with smaller mtu to a vlan whose slave nic is also used by a bigger mtu network. 4. Some less used options are not allowed by the the schema, like some bonding modes and options.
Some of them could change in the backend NM. But it's better to claim vdsm's requirements while libvirt is moving to NM. Is it better that if libvirt allow vdsm manipulate sub-element of a network? Probably Igor and Antoni have more comments on it.
I don't like using libvirt for networking. We should interact directly with the host network manager or whatever alternative we choose.
Alon
On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Mark Wu" wudxw@linux.vnet.ibm.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Alon Bar-Lev" alonbl@redhat.com, "Dan Kenigsberg" danken@redhat.com, "Simon Grinberg" simon@redhat.com, "Antoni Segura Puimedon" asegurap@redhat.com, "Igor Lvovsky" ilvovsky@redhat.com, "Daniel P. Berrange" berrange@redhat.com Sent: Monday, December 3, 2012 7:39:49 AM Subject: Re: [vdsm] Back to future of vdsm network configuration
On 11/29/2012 04:24 AM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 10:20:11 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
Itamar though a bomb that we should co-exist on generic host, this is something I do not know to compute. I still waiting for a response of where this requirement came from and if that mandatory.
This bomb has been ticking since ever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` in the least convenient moment. The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-endgeness we all cherish.
There is a different between having generic OS and having generic setup, running your email server, file server and LDAP on a node that running VMs.
I have no problem in having generic OS (opposed of ovirt-node) but have full control over that.
Alon.
Can I say we have got agreement on oVirt should cover two kinds of hypervisors? Stateless slave is good for pure and normal virtualization workload, while generic host can keep the flexibility of customization. In my opinion, it's good for the oVirt community to provide choices for users. They could customize it in production, building and even source code according to their requirements and skills.
I also think it will be good to support both modes! It will also good if we can rule the world! :)
Now seriously... :)
If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach to stable milestone.
Having a good clean interface for vdsm network within the stateless mode, will allow a persistent implementation to exists even if the whole implementation of master and vdsm assume stateless. This kind of implementation will get a new state from master, compare to whatever exists on the host and sync.
I, of course, will be against investing resources in such network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.
I cannot say that I do not fail to parse English sentences with double or triple negations...
I'd like to see an API that lets us define a persistent initial management interface, and create volatile network devices during runtime. I'd love to see a define/create distinction, as libvirt has.
How about keeping our current setupNetwork API, with a minor change to its semantics: it would not persist anything. A new persistNetwork API would be added, intended to persist the management network after it has been tested.
On boot, only the management definitions would show up, and Engine (or a small local service on top of vdsm) would push the complete configuration.
setSafeNetConfig, and the rollback-on-boot mess would be scrapped.
The only little problem would be to implement setupNetwork without playing with persisted ifcfg* files.
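For what it's worth, a minimal Python sketch of the shape such a split could take follows; the verb names beyond setupNetwork, the JSON file location and the use of iproute2 commands are all assumptions for illustration, not the existing vdsm API:

    import json
    import subprocess

    PERSISTED_MGMT = '/var/lib/vdsm/persisted-mgmt.json'   # hypothetical path

    def setupNetworks(networks):
        # Apply bridge definitions to the running system only; nothing is
        # written to ifcfg* files here.
        for name, attrs in networks.items():
            subprocess.check_call(['ip', 'link', 'add', name, 'type', 'bridge'])
            subprocess.check_call(['ip', 'link', 'set', attrs['nic'], 'master', name])
            if 'ipaddr' in attrs:
                subprocess.check_call(['ip', 'addr', 'add', attrs['ipaddr'], 'dev', name])
            subprocess.check_call(['ip', 'link', 'set', name, 'up'])

    def persistNetwork(name, attrs):
        # Persist only the (already tested) management network for the next boot.
        with open(PERSISTED_MGMT, 'w') as f:
            json.dump({name: attrs}, f)

    def restoreManagementOnBoot():
        # On boot, bring up just the management network; Engine (or a small
        # local service) pushes the complete configuration afterwards.
        with open(PERSISTED_MGMT) as f:
            setupNetworks(json.load(f))

With this split, setSafeNetConfig-style rollback becomes unnecessary: a bad volatile configuration simply does not survive a reboot.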
Having said that, let's come back to your original claim: """while generic host can keep the flexibility of customization."""
NOBODY, and I repeat my answer to Dan, NOBODY claim we should not support generic host. But the term 'generic' seems to confuse everyone... generic is a a host does not mean administrator can do whatever he likes, it just a host that is installed using standard distribution installation procedure.
Using 'generic host' can be done with either stateful or stateless modes.
However what and how customization can be done to a resource that is managed by VDSM (eg: storage, network) is a complete different question.
There cannot be two managers to the same resource, it is a rule of thumb, any other approach is non-deterministic and may lead to huge resource investment with almost no benefit, as it will never be stable.
So moving back to the discussion network configuration, I would like to suggest we could adopt both of the two solutions.
dynamic way (as Alon suggested in his previous mail.) -- for oVirt node. It will take a step towards real stateless. Actually it's also helpful to offload the transaction management from vdsm for the static way.
It will also provide the framework needed in order to provide network on demand as Livnat plan. Define network resources (and I guess storage) when VM is moved to the host. In this mode there is no other way to go!
We're going to build vdsm network setup module on top of generic host network manager, like libvirt virInterface. But to persist the network configuration on oVirt node, vdsm has to care about the details of lower level. If we only run the static way on the generic host, then host network manager could perform the rollback stuff on behalf of vdsm. I only have two comments on dynamic way:
- Do we really need to care about the management interface? How
about just leaving it to installation and disallow to configure it at runtime. 2. How about putting the retrievement network configuration from engine into vdsm-reg?
vdsm-reg is going to be killed soon, just like the vdsm-bootstrap. I was tempted to do this for 3.2, but I was taken...
I, too, see no benefit in vdsm pulling its setup from Engine, over Engine pushing it once it is aware of the new host (and knows that the host is needed, and that its network config has changed, etc).
static way -- for generic host. We didn't follow much on this topic in the thread. So I would like to talk about my understanding to continue this discussion. As Dan said in the first message of this thread, openvswitch couldn't keep 3rd level configurations, so it's not appropriate to use itself to cover the base network configurations. Then we have two choices: netcf and NetworkManager. It seems netcf is not used as widely as NM. Currently, it supports fedora/rhel, debian/ubuntu and suse. To support a new distribution, you need add a converter to translate the interface's XML definition into native configuration, because netcf just covers the part of static configuration, and relies on the system network service to make configurations take effect.
This may be a good opportunity to show
$ git grep NETCF_TRANSACTION
...
src/drv_suse.c:#define NETCF_TRANSACTION "/bin/false"
...
i.e., a considerable effort has to take place in order to get distribution-neutrality out of netcf.
Beyond that, netcf is all about cf: configuration. It takes less care about the current state of network devices. So to perform on-line changes to them, the user is responsible for taking them down, changing the config, and taking them up again - just like what we do with ifcfg* files.
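As a small illustration of that down/change/up cycle (assuming the Fedora/RHEL ifcfg layout and the ifup/ifdown scripts; this is a sketch, not the actual vdsm code), changing an MTU looks roughly like:

    import subprocess

    def set_mtu_via_ifcfg(device, mtu, ifcfg_dir='/etc/sysconfig/network-scripts'):
        path = '%s/ifcfg-%s' % (ifcfg_dir, device)
        # 1. take the device down
        subprocess.check_call(['ifdown', device])
        # 2. change the persisted configuration
        with open(path) as f:
            lines = [line for line in f if not line.startswith('MTU=')]
        lines.append('MTU=%d\n' % mtu)
        with open(path, 'w') as f:
            f.writelines(lines)
        # 3. bring the device up again with the new configuration
        subprocess.check_call(['ifup', device])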
Compared with netcf, it's easier to support new distribution because it has its own daemon to parse the self-defining key-value file and call netlink library to perform the live change. Besides that, NM can also monitor the physical interface's link status, and has the ability run callback on some events. Daniel mentioned that libvirt would support NM by the virInterface API. That's good for vdsm. But I found it didn't fit vdsm's requirements very well.
- It doesn't allow to define a bridge on top of an existing
interface. That means the schema requires you define the bridge port interface together with the bridge. The vdsm setupNetwork verb allows creating a bridge with a given name of existing bonding. To work around it, vdsm has to get the bonding definition from libvirt or collect information from /sys. And then put the bonding definition into bridge's definition. 2. It also removes its port interface, like bonding device together when remove bridge. It's not expected by vdsm when the option 'implicitBonding' is unset. To work around it, vdsm has to re-create the bonding as said in 1. 3. mtu setting is propagated to nic or bond when setting a mtu to bridge. It could break mtu support in oVirt when adding a bridge with smaller mtu to a vlan whose slave nic is also used by a bigger mtu network.
This smells like a plain bug in NM. I believe they'd want to support managing vlans with different MTUs on top of the same underlying device.
- Some less used options are not allowed by the the schema, like
some bonding modes and options.
Some of them could change in the backend NM. But it's better to claim vdsm's requirements while libvirt is moving to NM. Is it better that if libvirt allow vdsm manipulate sub-element of a network? Probably Igor and Antoni have more comments on it.
I don't like using libvirt for networking. We should interact directly with the host network manager or whatever alternative we choose.
I do like using libvirt's abstraction - when it has non-/bin/false substance behind it. I think it has the potential to help projects outside oVirt.
Dan.
On 12/03/2012 04:25 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Mark Wu" wudxw@linux.vnet.ibm.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Alon Bar-Lev" alonbl@redhat.com, "Dan Kenigsberg" danken@redhat.com, "Simon Grinberg" simon@redhat.com, "Antoni Segura Puimedon" asegurap@redhat.com, "Igor Lvovsky" ilvovsky@redhat.com, "Daniel P. Berrange" berrange@redhat.com Sent: Monday, December 3, 2012 7:39:49 AM Subject: Re: [vdsm] Back to future of vdsm network configuration
On 11/29/2012 04:24 AM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 10:20:11 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
Itamar though a bomb that we should co-exist on generic host, this is something I do not know to compute. I still waiting for a response of where this requirement came from and if that mandatory.
This bomb has been ticking since ever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` in the least convenient moment. The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-endgeness we all cherish.
There is a different between having generic OS and having generic setup, running your email server, file server and LDAP on a node that running VMs.
I have no problem in having generic OS (opposed of ovirt-node) but have full control over that.
Alon.
Can I say we have got agreement on oVirt should cover two kinds of hypervisors? Stateless slave is good for pure and normal virtualization workload, while generic host can keep the flexibility of customization. In my opinion, it's good for the oVirt community to provide choices for users. They could customize it in production, building and even source code according to their requirements and skills.
I also think it will be good to support both modes! It will also good if we can rule the world! :)
Now seriously... :)
If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach to stable milestone.
Having a good clean interface for vdsm network within the stateless mode, will allow a persistent implementation to exists even if the whole implementation of master and vdsm assume stateless. This kind of implementation will get a new state from master, compare to whatever exists on the host and sync.
I, of course, will be against investing resources in such network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.
I cannot say that I do not fail to parse English sentences with double or triple negations...
I'd like to see an API that lets us define a persistent initial management interface, and create volatile network devices during runtime. I'd love to see a define/create distiction, as libvirt has.
How about keeping our current setupNetwork API, with a minor change to its sematics - it would not persist anything. A new persistNetwork API would be added, intending to persist the management network after it has been tested.
On boot, only the management defitions would show up, and Engine (or a small local sevice on top of vdsm) would push the complete configuration.
How does this benefit over loading the last config, and then having the engine refresh it (always, or if needed)?
setSafeNetConfig, and the rollback-on-boot mess would be scrapped.
The only little problem would be to implement setupNetwork without playing with persisted ifcfg* files.
Having said that, let's come back to your original claim: """while generic host can keep the flexibility of customization."""
NOBODY, and I repeat my answer to Dan, NOBODY claim we should not support generic host. But the term 'generic' seems to confuse everyone... generic is a a host does not mean administrator can do whatever he likes, it just a host that is installed using standard distribution installation procedure.
Using 'generic host' can be done with either stateful or stateless modes.
However what and how customization can be done to a resource that is managed by VDSM (eg: storage, network) is a complete different question.
There cannot be two managers to the same resource, it is a rule of thumb, any other approach is non-deterministic and may lead to huge resource investment with almost no benefit, as it will never be stable.
So moving back to the discussion network configuration, I would like to suggest we could adopt both of the two solutions.
dynamic way (as Alon suggested in his previous mail.) -- for oVirt node. It will take a step towards real stateless. Actually it's also helpful to offload the transaction management from vdsm for the static way.
It will also provide the framework needed in order to provide network on demand as Livnat plan. Define network resources (and I guess storage) when VM is moved to the host. In this mode there is no other way to go!
We're going to build vdsm network setup module on top of generic host network manager, like libvirt virInterface. But to persist the network configuration on oVirt node, vdsm has to care about the details of lower level. If we only run the static way on the generic host, then host network manager could perform the rollback stuff on behalf of vdsm. I only have two comments on dynamic way:
- Do we really need to care about the management interface? How
about just leaving it to installation and disallow to configure it at runtime. 2. How about putting the retrievement network configuration from engine into vdsm-reg?
vdsm-reg is going to be killed soon, just like the vdsm-bootstrap. I was tempted to do this for 3.2, but I was taken...
I, too, see no benefit in vdsm pulling its setup from Engine, over Engine pushing it once it is aware of the new host (and knows that the host is needed, and that its network config has changed, etc).
static way -- for generic host. We didn't follow much on this topic in the thread. So I would like to talk about my understanding to continue this discussion. As Dan said in the first message of this thread, openvswitch couldn't keep 3rd level configurations, so it's not appropriate to use itself to cover the base network configurations. Then we have two choices: netcf and NetworkManager. It seems netcf is not used as widely as NM. Currently, it supports fedora/rhel, debian/ubuntu and suse. To support a new distribution, you need add a converter to translate the interface's XML definition into native configuration, because netcf just covers the part of static configuration, and relies on the system network service to make configurations take effect.
This may be a good opportunity to show
$ git grep NETCF_TRANSACTION
...
src/drv_suse.c:#define NETCF_TRANSACTION "/bin/false"
...
i.e., a considerable efferot has to take place in order to get distribution-neutrality out of netcf.
Beyond that, netcf is all about cf: configuration. It take lesser care about the current state of network devices. So to perform on-line changes to them, the user is responsible to taking them down, chagine the config, and taking them up again - just like what we do with ifcfg* files.
Compared with netcf, it's easier to support new distribution because it has its own daemon to parse the self-defining key-value file and call netlink library to perform the live change. Besides that, NM can also monitor the physical interface's link status, and has the ability run callback on some events. Daniel mentioned that libvirt would support NM by the virInterface API. That's good for vdsm. But I found it didn't fit vdsm's requirements very well.
- It doesn't allow to define a bridge on top of an existing
interface. That means the schema requires you define the bridge port interface together with the bridge. The vdsm setupNetwork verb allows creating a bridge with a given name of existing bonding. To work around it, vdsm has to get the bonding definition from libvirt or collect information from /sys. And then put the bonding definition into bridge's definition. 2. It also removes its port interface, like bonding device together when remove bridge. It's not expected by vdsm when the option 'implicitBonding' is unset. To work around it, vdsm has to re-create the bonding as said in 1. 3. mtu setting is propagated to nic or bond when setting a mtu to bridge. It could break mtu support in oVirt when adding a bridge with smaller mtu to a vlan whose slave nic is also used by a bigger mtu network.
this smells like a plain bug in NM. I believe they'd like to support managing vlans with different MTUs based on the same strata.
- Some less used options are not allowed by the the schema, like
some bonding modes and options.
Some of them could change in the backend NM. But it's better to claim vdsm's requirements while libvirt is moving to NM. Is it better that if libvirt allow vdsm manipulate sub-element of a network? Probably Igor and Antoni have more comments on it.
I don't like using libvirt for networking. We should interact directly with the host network manager or whatever alternative we choose.
I do like using libvirt's abstraction - when it has non-/bin/false substance behind it. I think that it has a potential to help projects outside oVirt.
Dan.
On Mon, Dec 03, 2012 at 04:28:16PM +0200, Itamar Heim wrote:
On 12/03/2012 04:25 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Mark Wu" wudxw@linux.vnet.ibm.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Alon Bar-Lev" alonbl@redhat.com, "Dan Kenigsberg" danken@redhat.com, "Simon Grinberg" simon@redhat.com, "Antoni Segura Puimedon" asegurap@redhat.com, "Igor Lvovsky" ilvovsky@redhat.com, "Daniel P. Berrange" berrange@redhat.com Sent: Monday, December 3, 2012 7:39:49 AM Subject: Re: [vdsm] Back to future of vdsm network configuration
On 11/29/2012 04:24 AM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 10:20:11 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
> Itamar threw a bomb that we should co-exist on a generic host; this is something I do not know how to compute. I am still waiting for a response on where this requirement came from and whether it is mandatory.
This bomb has been ticking forever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` at the least convenient moment. The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-edge-ness we all cherish.
There is a difference between having a generic OS and having a generic setup, running your email server, file server and LDAP on a node that is running VMs.
I have no problem with having a generic OS (as opposed to ovirt-node), but we must have full control over it.
Alon.
Can I say we have got agreement on oVirt should cover two kinds of hypervisors? Stateless slave is good for pure and normal virtualization workload, while generic host can keep the flexibility of customization. In my opinion, it's good for the oVirt community to provide choices for users. They could customize it in production, building and even source code according to their requirements and skills.
I also think it will be good to support both modes! It will also good if we can rule the world! :)
Now seriously... :)
If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach to stable milestone.
Having a good clean interface for vdsm network within the stateless mode, will allow a persistent implementation to exists even if the whole implementation of master and vdsm assume stateless. This kind of implementation will get a new state from master, compare to whatever exists on the host and sync.
I, of course, will be against investing resources in such network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.
I cannot say that I do not fail to parse English sentences with double or triple negations...
I'd like to see an API that lets us define a persistent initial management interface, and create volatile network devices during runtime. I'd love to see a define/create distiction, as libvirt has.
How about keeping our current setupNetwork API, with a minor change to its sematics - it would not persist anything. A new persistNetwork API would be added, intending to persist the management network after it has been tested.
On boot, only the management defitions would show up, and Engine (or a small local sevice on top of vdsm) would push the complete configuration.
how does this benefit over loading the last config, and then have engine refresh (always/if needed)?
It's clearer for the local admin: if it's on the file system, it will be there after boot; he can do his worst to the files, and we'd try to manage.
Also, it is easier to recover from utterly-horrible remote commands, which had rendered our host incommunicado: the management interface used to send these commands -- and only it -- would show up after boot. This increases the probability that after fencing, we'd see the host again.
setSafeNetConfig, and the rollback-on-boot mess would be scrapped.
The only little problem would be to implement setupNetwork without playing with persisted ifcfg* files.
On 12/03/2012 06:54 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:28:16PM +0200, Itamar Heim wrote:
On 12/03/2012 04:25 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Mark Wu" wudxw@linux.vnet.ibm.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Alon Bar-Lev" alonbl@redhat.com, "Dan Kenigsberg" danken@redhat.com, "Simon Grinberg" simon@redhat.com, "Antoni Segura Puimedon" asegurap@redhat.com, "Igor Lvovsky" ilvovsky@redhat.com, "Daniel P. Berrange" berrange@redhat.com Sent: Monday, December 3, 2012 7:39:49 AM Subject: Re: [vdsm] Back to future of vdsm network configuration
On 11/29/2012 04:24 AM, Alon Bar-Lev wrote:
----- Original Message ----- > From: "Dan Kenigsberg" danken@redhat.com > To: "Alon Bar-Lev" alonbl@redhat.com > Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project > Development" vdsm-devel@lists.fedorahosted.org > Sent: Wednesday, November 28, 2012 10:20:11 PM > Subject: Re: [vdsm] MTU setting according to ifcfg files. > > On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote: >> Itamar though a bomb that we should co-exist on generic host, >> this >> is >> something I do not know to compute. I still waiting for a >> response >> of >> where this requirement came from and if that mandatory. >> > This bomb has been ticking since ever. We have ovirt-node images > for > pure hypervisor nodes, but we support plain Linux nodes, where > local > admins are free to `yum upgrade` in the least convenient moment. > The > latter mode can be the stuff that nightmares are made of, but it > also > allows the flexibility and bleeding-endgeness we all cherish. > There is a different between having generic OS and having generic setup, running your email server, file server and LDAP on a node that running VMs.
I have no problem with having a generic OS (as opposed to ovirt-node), but we must have full control over it.
Alon.
Can I say we have got agreement on oVirt should cover two kinds of hypervisors? Stateless slave is good for pure and normal virtualization workload, while generic host can keep the flexibility of customization. In my opinion, it's good for the oVirt community to provide choices for users. They could customize it in production, building and even source code according to their requirements and skills.
I also think it will be good to support both modes! It will also good if we can rule the world! :)
Now seriously... :)
If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach to stable milestone.
Having a good clean interface for vdsm network within the stateless mode, will allow a persistent implementation to exists even if the whole implementation of master and vdsm assume stateless. This kind of implementation will get a new state from master, compare to whatever exists on the host and sync.
I, of course, will be against investing resources in such network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.
I cannot say that I do not fail to parse English sentences with double or triple negations...
I'd like to see an API that lets us define a persistent initial management interface, and create volatile network devices during runtime. I'd love to see a define/create distiction, as libvirt has.
How about keeping our current setupNetwork API, with a minor change to its sematics - it would not persist anything. A new persistNetwork API would be added, intending to persist the management network after it has been tested.
On boot, only the management defitions would show up, and Engine (or a small local sevice on top of vdsm) would push the complete configuration.
how does this benefit over loading the last config, and then have engine refresh (always/if needed)?
It's clearer for the local admin: if it's on the file system, it would be there after boot; he can do his worst to them, and we'd try to manage.
Also, it is easier to recover from utterly-horrible remote commands, which had rendered our host incommunicado: the management interface used to send these commands -- and only it -- would show up after boot. This increases the probability that after fencing, we'd see the host again.
I think we mentioned this before, but this will kill any way to have hosts come back to life, and also to have a policy on connecting to storage, even if the engine is still down (one of these use cases is for the engine itself to be hosted on the hosts as well).
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Dan Kenigsberg" danken@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com, "Andrew Cathrow" acathrow@redhat.com Sent: Monday, December 3, 2012 10:56:53 PM Subject: Re: [vdsm] Back to future of vdsm network configuration
On 12/03/2012 06:54 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:28:16PM +0200, Itamar Heim wrote:
On 12/03/2012 04:25 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Mark Wu" wudxw@linux.vnet.ibm.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Alon Bar-Lev" alonbl@redhat.com, "Dan Kenigsberg" danken@redhat.com, "Simon Grinberg" simon@redhat.com, "Antoni Segura Puimedon" asegurap@redhat.com, "Igor Lvovsky" ilvovsky@redhat.com, "Daniel P. Berrange" berrange@redhat.com Sent: Monday, December 3, 2012 7:39:49 AM Subject: Re: [vdsm] Back to future of vdsm network configuration
<snip>
I also think it will be good to support both modes! It will also good if we can rule the world! :)
Now seriously... :)
If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach to stable milestone.
Having a good clean interface for vdsm network within the stateless mode, will allow a persistent implementation to exists even if the whole implementation of master and vdsm assume stateless. This kind of implementation will get a new state from master, compare to whatever exists on the host and sync.
I, of course, will be against investing resources in such network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.
I cannot say that I do not fail to parse English sentences with double or triple negations...
I'd like to see an API that lets us define a persistent initial management interface, and create volatile network devices during runtime. I'd love to see a define/create distiction, as libvirt has.
How about keeping our current setupNetwork API, with a minor change to its sematics - it would not persist anything. A new persistNetwork API would be added, intending to persist the management network after it has been tested.
On boot, only the management defitions would show up, and Engine (or a small local sevice on top of vdsm) would push the complete configuration.
how does this benefit over loading the last config, and then have engine refresh (always/if needed)?
It's clearer for the local admin: if it's on the file system, it would be there after boot; he can do his worst to them, and we'd try to manage.
Also, it is easier to recover from utterly-horrible remote commands, which had rendered our host incommunicado: the management interface used to send these commands -- and only it -- would show up after boot. This increases the probability that after fencing, we'd see the host again.
i think we mentioned this before, but this will kill any way to have hosts come back to life, also have a policy on connecting to storage, even if engine is still down. (one of these use cases is for the engine itself to be hosted on the hosts as well)
For this use case you'll need much more - you'll need host-based clustering (assuming your engine is on a self-hosted VM); this is a totally different ball game.
But note that the approach does not contradict that:
1. You always have your admin's stateful configuration on host boot - this is the idea behind getting back to the original host network configuration (not necessarily just the management interface) that I've raised in one of the numerous threads on this subject. I don't remember if it was on this one or another.
2. The stateless configuration coming from the engine is always on top.
Anything required to re-run the engine on a host reboot must be part of the stateful section and not part of the engine config section.
On 12/04/2012 07:49 PM, Simon Grinberg wrote:
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Dan Kenigsberg" danken@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Simon Grinberg" simon@redhat.com, "Andrew Cathrow" acathrow@redhat.com Sent: Monday, December 3, 2012 10:56:53 PM Subject: Re: [vdsm] Back to future of vdsm network configuration
On 12/03/2012 06:54 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:28:16PM +0200, Itamar Heim wrote:
On 12/03/2012 04:25 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote:
----- Original Message ----- > From: "Mark Wu" wudxw@linux.vnet.ibm.com > To: "VDSM Project Development" > vdsm-devel@lists.fedorahosted.org > Cc: "Alon Bar-Lev" alonbl@redhat.com, "Dan Kenigsberg" > danken@redhat.com, "Simon Grinberg" simon@redhat.com, > "Antoni Segura Puimedon" asegurap@redhat.com, "Igor Lvovsky" > ilvovsky@redhat.com, "Daniel P. Berrange" > berrange@redhat.com > Sent: Monday, December 3, 2012 7:39:49 AM > Subject: Re: [vdsm] Back to future of vdsm network > configuration > > On 11/29/2012 04:24 AM, Alon Bar-Lev wrote: >> >> ----- Original Message ----- >>> From: "Dan Kenigsberg" danken@redhat.com >>> To: "Alon Bar-Lev" alonbl@redhat.com >>> Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project >>> Development" vdsm-devel@lists.fedorahosted.org >>> Sent: Wednesday, November 28, 2012 10:20:11 PM >>> Subject: Re: [vdsm] MTU setting according to ifcfg files. >>> >>> On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote: >>>> Itamar though a bomb that we should co-exist on generic >>>> host, >>>> this >>>> is >>>> something I do not know to compute. I still waiting for a >>>> response >>>> of >>>> where this requirement came from and if that mandatory. >>>> >>> This bomb has been ticking since ever. We have ovirt-node >>> images >>> for >>> pure hypervisor nodes, but we support plain Linux nodes, >>> where >>> local >>> admins are free to `yum upgrade` in the least convenient >>> moment. >>> The >>> latter mode can be the stuff that nightmares are made of, but >>> it >>> also >>> allows the flexibility and bleeding-endgeness we all cherish. >>> >> There is a different between having generic OS and having >> generic >> setup, running your email server, file server and LDAP on a >> node >> that running VMs. >> >> I have no problem in having generic OS (opposed of ovirt-node) >> but >> have full control over that. >> >> Alon. > Can I say we have got agreement on oVirt should cover two kinds > of > hypervisors? Stateless slave is good for pure and normal > virtualization > workload, while generic host can keep the flexibility of > customization. > In my opinion, it's good for the oVirt community to provide > choices > for > users. They could customize it in production, building and > even > source > code according to their requirements and skills.
I also think it would be good to support both modes! It would also be good if we could rule the world! :)
Now seriously... :)
If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach a stable milestone.
Having a good clean interface for vdsm networking within the stateless mode will allow a persistent implementation to exist even if the whole implementation of master and vdsm assumes stateless. This kind of implementation will get a new state from the master, compare it to whatever exists on the host, and sync.
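For concreteness, here is a minimal sketch of that compare-and-sync loop. Every name in it is hypothetical and is not part of any existing vdsm interface; it only illustrates the shape of the idea.

# Hypothetical sketch of the "fetch desired state, diff, sync" idea above.
# None of these names exist in vdsm; they are placeholders for illustration.

def sync_networks(get_desired_networks, get_running_networks,
                  add_network, remove_network):
    """Bring the host in line with the state reported by the master."""
    desired = get_desired_networks()   # e.g. {'ovirtmgmt': {...}, 'vmnet1': {...}}
    running = get_running_networks()

    # Remove networks that exist on the host but the master no longer wants.
    for name in set(running) - set(desired):
        remove_network(name)

    # Add or rebuild networks that are missing or whose attributes differ.
    for name, attrs in desired.items():
        if running.get(name) != attrs:
            if name in running:
                remove_network(name)
            add_network(name, attrs)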
I, of course, will be against investing resources in such a network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.
I cannot say that I do not fail to parse English sentences with double or triple negations...
I'd like to see an API that lets us define a persistent initial management interface, and create volatile network devices during runtime. I'd love to see a define/create distinction, as libvirt has.
How about keeping our current setupNetwork API, with a minor change to its semantics - it would not persist anything. A new persistNetwork API would be added, intended to persist the management network after it has been tested.
On boot, only the management definitions would show up, and Engine (or a small local service on top of vdsm) would push the complete configuration.
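To make the proposal a bit more concrete, something like the sketch below is what I have in mind from the caller's side. The client object and the setupNetworks/persistNetwork/ping calls are assumptions for illustration, not the current vdsm API.

# Minimal sketch of the proposed split between applying and persisting
# network configuration. The client and the verbs it exposes are assumed
# for illustration only.

def reconfigure_management(client, net_name, nic, ip_conf):
    attrs = dict(ip_conf, nic=nic)

    # Apply the configuration; with the changed semantics nothing is
    # written to disk yet.
    client.setupNetworks({net_name: attrs}, {}, {})

    # Persist only after connectivity over the new configuration has been
    # verified, so a bad change is undone by a simple reboot.
    if client.ping():
        client.persistNetwork(net_name)
    else:
        raise RuntimeError('lost connectivity; not persisting %s' % net_name)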
How does this benefit over loading the last config, and then having the engine refresh it (always/if needed)?
It's clearer for the local admin: if it's on the file system, it will be there after boot; he can do his worst to it, and we'd try to manage.
Also, it is easier to recover from utterly-horrible remote commands, which had rendered our host incommunicado: the management interface used to send these commands -- and only it -- would show up after boot. This increases the probability that after fencing, we'd see the host again.
I think we mentioned this before, but this will kill any way to have hosts come back to life and have a policy on connecting to storage, even if the engine is still down. (One of these use cases is for the engine itself to be hosted on the hosts as well.)
For this use case you'll need much more - you'll need host-based clustering (assuming your engine is on a self-hosted VM), and that is a totally different ball game.
But note that the approach does not contradict that:
- You always have your admin's stateful configuration on host boot - this is the idea behind getting back to the original host network configuration (not necessarily just the management interface) that I raised in one of the numerous threads on this subject. I don't remember if it was this one or another.
you are assuming the engine image didn't come from the "storage network"
- The stateless configuration coming from the engine is always on top
Anything required to re-run the engine on a host reboot must be part of the stateful section and not part of the engine config section.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "Saggi Mizrahi" smizrahi@redhat.com Cc: "Simon Grinberg" simon@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 6:10:27 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Simon Grinberg" simon@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Igor Lvovsky" ilvovsky@redhat.com Sent: Wednesday, November 28, 2012 5:30:17 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.
I suggest we don't have a default. If you don't specify an MTU it will use whatever is already configured. There is no way to "go back to the defaults", only to set a new value. The engine can assume 1500 (in the case of ethernet devices) is the "recommended value".
This is not related to the engine. You are right that the actual MTU will be the last configured one, but this is exactly the problem. As I already mentioned, if you add another bridge without a custom MTU, its users (VMs) may assume that the MTU is 1500.
Assumption is the mother of all ____.
What needs to be done is reverting to the old value. Can be done easily by inserting a comment in the ifcfg-file with the MTU prior to the change.
When we (hopefully) go into a stateless configuration controlled by the engine/any other manager, it should be determined solely by the manager, and reverted to the user-defined value on reboot.
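Just to make the "insert a comment with the prior MTU" suggestion concrete, here is a rough sketch. The marker string, helper names and the reliance on the conventional ifcfg path are all made up for illustration; this is not vdsm code.

# Rough sketch of stashing the previous MTU as a comment inside the ifcfg
# file and reverting to it later. Marker and helper names are illustrative.

IFCFG = '/etc/sysconfig/network-scripts/ifcfg-%s'
MARKER = '# VDSM_ORIGINAL_MTU='

def set_mtu(iface, mtu):
    path = IFCFG % iface
    with open(path) as f:
        lines = f.readlines()
    if not any(l.startswith(MARKER) for l in lines):
        # Remember what was configured before the first change (1500 if unset).
        old = next((l.split('=', 1)[1].strip()
                    for l in lines if l.startswith('MTU=')), '1500')
        lines.append('%s%s\n' % (MARKER, old))
    lines = [l for l in lines if not l.startswith('MTU=')]
    lines.append('MTU=%s\n' % mtu)
    with open(path, 'w') as f:
        f.writelines(lines)

def revert_mtu(iface):
    path = IFCFG % iface
    with open(path) as f:
        lines = f.readlines()
    old = next((l[len(MARKER):].strip()
                for l in lines if l.startswith(MARKER)), '1500')
    lines = [l for l in lines if not l.startswith(('MTU=', MARKER))]
    # ifcfg has no MTU='default', so write the remembered value explicitly.
    lines.append('MTU=%s\n' % old)
    with open(path, 'w') as f:
        f.writelines(lines)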
----- Original Message -----
From: "Simon Grinberg" simon@redhat.com To: "Igor Lvovsky" ilvovsky@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 9:53:48 AM Subject: Re: [vdsm] MTU setting according to ifcfg files.
----- Original Message -----
From: "Igor Lvovsky" ilvovsky@redhat.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Cc: "Simon Grinberg" simon@redhat.com Sent: Wednesday, November 28, 2012 2:58:52 PM Subject: [vdsm] MTU setting according to ifcfg files.
Hi,
I am working on one of the vdsm bugs that we have and I found that the initscripts (initscripts-9.03.34-1.el6.x86_64) behaviour doesn't fit our needs. So I would like to raise this issue on the list.
The issue is MTU setting according to ifcfg files. I'll try to describe the flow below.
1. I started with an ifcfg file for the interface without the MTU keyword at all, and the proper interface (let's say eth0) had the *default* MTU=1500 (according to /sys/class/net/eth0/mtu).
2. I created a bridge with MTU=9000 on top of this interface. Everything went OK. After I wrote MTU=9000 into ifcfg-eth0 and ifdown/ifup'ed it, eth0 got the proper MTU.
3. Now I removed the bridge and deleted the MTU keyword from ifcfg-eth0. But after ifdown/ifup the actual MTU of eth0 stayed 9000.
The only way to change it back to 1500 (or something else) is to explicitly set the MTU in the ifcfg file. According to Bill Nottingham this is intentional behaviour. If so, we have a problem in vdsm, because we never set an MTU value unless the user asks for it explicitly.
Actually you are,
You were asked for MTU 9000 on the network. As an implementation-specific step you had to apply this all the way down the chain. Now it's only reasonable that when you cancel the 9000 request you'll do whatever is necessary to roll back the changes. It's a pity that ifcfg files don't have the option to set MTU='default', but since you can read this default before you change it, please keep it somewhere and revert to that.
It means that if we have an interface with MTU=9000 on it just because there was once a bridge with such an MTU attached to it, and now we want to attach a regular bridge with the *default* MTU=1500, we have a problem. The only thing we can do to avoid this is to explicitly set MTU=1500 in the interface's ifcfg file. IMHO it's a bit ugly, but it looks like we have no choice.
As usual, comments are more than welcome...
Regards, Igor Lvovsky
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Roni Luxenberg" rluxenbe@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 2:01:45 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/28/2012 05:34 AM, Roni Luxenberg wrote:
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Dan Kenigsberg" danken@redhat.com Cc: "Alon Bar-Lev" alonbl@redhat.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Yaniv Kaul" ykaul@redhat.com Sent: Wednesday, November 28, 2012 11:01:35 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/28/2012 03:53 AM, Dan Kenigsberg wrote:
On Tue, Nov 27, 2012 at 03:24:58PM -0500, Alon Bar-Lev wrote:
Management interface configuration is a separate issue.
But it is an important issue that has to be discussed.
If we perform changes to this interface when the host is in maintenance we reduce the complexity of the problem.
For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration.
How would you know which bond mode to use? Which MTU?
I don't understand the question.
I think I do: Alon suggests that on boot, the management interface would not have bonding at all and would use a single nic. The switch would have to assume that the other nics in the bond are dead, and would use the only one which is alive to transfer packets.
There is no doubt that we have to persist the vlan tag of the management interface, and its MTU, in the extremely rare case where the network would not allow Linux's default of 1500.
I was thinking the manager may be using jumbo frames to talk to the host, and the host will have an issue with them since it is set to 1500 instead of 8k. Jumbo frames aren't a rare case.
As for the bond, are you sure you can use a nic in non-bonded mode for all bond modes?
all bond modes have to cope with a situation where only a single nic is active and the rest are down, so one can boot with a single active nic and only activate the rest and promote to the desired bond mode upon getting the full network configuration from the manager.
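To make the "boot on a single nic, promote later" idea concrete, here is a rough sketch of the promotion step using iproute2. The bond name, mode and nic names are just examples, and moving the management IP address onto the bond is deliberately omitted.

# Rough sketch of promoting a host that booted on a single nic into the
# desired bond once the manager's configuration arrives. Names and mode
# are illustrative only.
import subprocess

def run(*cmd):
    subprocess.check_call(cmd)

def promote_to_bond(active_nic, other_nics, bond='bond0', mode='active-backup'):
    run('ip', 'link', 'add', bond, 'type', 'bond', 'mode', mode)
    for nic in [active_nic] + other_nics:
        # A nic must be down before it can be enslaved to the bond.
        run('ip', 'link', 'set', nic, 'down')
        run('ip', 'link', 'set', nic, 'master', bond)
    run('ip', 'link', 'set', bond, 'up')

# e.g. promote_to_bond('eth0', ['eth1'], mode='802.3ad')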
Of course they need to handle a single active nic, but IIRC the host must be configured with a bond matching the switch; i.e., you can't configure the switch for a bond and then boot the host with a "single active nic" in a non-bonded config.
As far as I know, as long as the 2nd nic is down there is no problem; it is as if the cord is unplugged.
Regular port grouping or trunking on the adjacent switch does not mandate bond configuration on the host as long as only a single nic is active. OTOH, 802.3ad port grouping might require host configuration, as link aggregation control packets are exchanged on the wire. If this is the case, you'd need to persist your L2 bonding config.
Changing the master interface MTU for either a vlan or a bond is required for the management interface and non-management interfaces alike.
So the logic would probably be to set max(mtu(slaves)), regardless of whether it is the management interface or not.
I discussed this with Livnat: if there are applications that access the master interface directly we may break them, as the destination may not support a non-standard MTU.
This is true in the current implementation and in any future implementation.
It is bad practice to use the master interface directly (mixed tagged/untagged); better to define on the switch that untagged communication belongs to vlanX, then use this explicit vlanX on the host.
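As a tiny illustration of the max(mtu(slaves)) rule above; the function name and the fallback value are hypothetical, not anything from vdsm.

# Minimal sketch of the "master MTU = max of the MTUs requested on top of it"
# rule mentioned above.

DEFAULT_MTU = 1500

def master_mtu(requested_mtus):
    """MTU to apply to the underlying nic/bond, given the MTUs requested
    by the networks and vlans configured on top of it."""
    return max(requested_mtus) if requested_mtus else DEFAULT_MTU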
next, what if we're using openvswitch, and you need some flow definitions for the management interface?
I cannot answer that as I don't know openvswitch very well and don't know what "flow definitions" are; however, I do guess that it has a non-persistent mode that can affect any interface under its control. If you like I can research this one.
you mainly need OVS for provisioning VM networks so here too you can completely bypass OVS during boot and only configure it in a transactional manner upon getting the full network configuration from the manager.
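For illustration, "configuring it in a transactional manner" could look roughly like the sketch below: ovs-vsctl lets several commands be chained with '--' and applied together. The network names and attribute layout here are made up; this is not how vdsm drives OVS today.

# Sketch of applying VM-network configuration to OVS only after the
# manager's configuration is known, as one ovs-vsctl invocation.
import subprocess

def apply_vm_networks(networks):
    """networks: e.g. {'vmnet1': {'nic': 'eth1', 'vlan': 100}}"""
    cmd = ['ovs-vsctl']
    for name, attrs in networks.items():
        cmd += ['--', '--may-exist', 'add-br', name]
        if 'nic' in attrs:
            port = ['--', '--may-exist', 'add-port', name, attrs['nic']]
            if 'vlan' in attrs:
                port.append('tag=%d' % attrs['vlan'])
            cmd += port
    subprocess.check_call(cmd)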
A general question: why would you need to configure VM networks on the host (assuming a persistent cached configuration) upon boot if it cannot talk to the manager? After all, in this case no resources would be scheduled to run on this host until the connection to the manager is restored and up-to-date network configuration is applied.
thanks, Roni