We should present administrators, with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, and for which a set of hardware profile matches can be made etc..
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentage are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
Using a probability range and randomly selecting within that range might seem counter-intuitive: Having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the "best" provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear - If one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances, without the randomness.
Whilst the various policies should be stackable, one of the two following policies should be the initial basis for the calculation:
*Round robin, with optional weighting: *
With this policy, Conductor would use each of the available provider accounts equally, by assigning the same probability to each of them. Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes. e.g. Three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider accounts wouldn't be selected in strict rotation, though the overall result is the same.
*Least used, with optional weighting: *
This policy would make most sense in scenarios where Conductor is the sole means by which instances are launched on private cloud providers. Conductor would seek to ensure that the usage of the providers was balanced, by giving a higher probability to whichever provider accounts are currently least used. As with round robin, the weightings could be adjusted to reflect differing capacities between providers.
Having used on of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
*Assigned priority: *
The probability assigned to each provider account would be adjusted according to the provider accounts' priority, by increasing the probability ranking percentage of the higher priority provider accounts at the expense of the lower priority ones
*Punishing failure: *
Once the audit history records past failures, for each occurrence of a launch failure within a configurable period (6 hours feels reasonable), a provider account would be fined 5% from its probability ranking. This would serve to reduce the attempts to launch on a provider which is running out of capacity, or experiencing hardware issues etc.
*Cost *
There are three principle cloud uses which can incur costs: consumption of network bandwidth, consumption of storage and running a VM.
Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And, if it is known because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwith" pool.
As long as we're not supporting deployments which include the allocation of additional storage, the costs of storage consumption are an issue to consider at build & push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the probability rankings, all we need is a cost per realm, per hour, for each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds, given that, for example, it will often be the case that costs will not vary across realms, so the UI can help by pre-filling.
Clearly, for private clouds, no alternative means for getting pricing data into Conductor exists. For public providers, it would be beneficial if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using Aeolus are likely to be paying list price
- Organizations may wish to store and export the adjusted costs that they'll be assigning to users, rather than the basic costs appearing on the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be a relatively simple matter of increasing the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
Having completed the stack of modules' calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy, tuning options and seeing an immediate change in, for example, a pie chart, which showed the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to implement their own selection modules. They might choose, for example, to vary the selection probability percentages according to time and date, to increase usage of private cloud at times when they would otherwise be relatively idle.
On Thu, Apr 05, 2012 at 05:44:20PM +0100, Angus Thomas wrote:
We should present administrators, with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, and for which a set of hardware profile matches can be made etc..
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentage are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
Using a probability range and randomly selecting within that range might seem counter-intuitive: Having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the "best" provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear - If one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances, without the randomness.
I've been struggling with this for a bit. I see the reasoning, but it also feels wrong to randomly pick anything. I suspect I just need a little more time to digest this, though, as the latter thought is more of a knee-jerk reaction.
To be clear, this is a weighted random selection, yes? In other words, if a policy plug-in gave a weight of 90 to Cloud A, and 10 to Cloud B, there is a 90% chance it would launch on Cloud A? In that case, I think this is fairly sensible.
Whilst the various policies should be stackable, one of the two following policies should be the initial basis for the calculation:
*Round robin, with optional weighting: *
With this policy, Conductor would use each of the available provider accounts equally, by assigning the same probability to each of them. Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes. e.g. Three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider accounts wouldn't be selected in strict rotation, though the overall result is the same.
This is perhaps an implementation detail, but how should all of this be configured? Do we need a web interface so you can dynamically tune this, or should it be stored in a config file somewhere?
*Least used, with optional weighting: *
This policy would make most sense in scenarios where Conductor is the sole means by which instances are launched on private cloud providers. Conductor would seek to ensure that the usage of the providers was balanced, by giving a higher probability to whichever provider accounts are currently least used. As with round robin, the weightings could be adjusted to reflect differing capacities between providers.
Having used on of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
*Assigned priority: *
The probability assigned to each provider account would be adjusted according to the provider accounts' priority, by increasing the probability ranking percentage of the higher priority provider accounts at the expense of the lower priority ones
This almost sounds like it could be the default, since it's all data we have today.
*Punishing failure: *
Once the audit history records past failures, for each occurrence of a launch failure within a configurable period (6 hours feels reasonable), a provider account would be fined 5% from its probability ranking. This would serve to reduce the attempts to launch on a provider which is running out of capacity, or experiencing hardware issues etc.
I definitely like the stackable aspect. Each plugin can return its scores summing to 100, and Conductor will then aggregate them and proceed accordingly.
The question (in my mind, at least) is _how_ to combine them, especially where a plugin gives a weight of zero. Suppose I have two plugins, one of which gives a weight of 50/50 between the two, and the other does 100/0. Should we just add them to get 150 and 50, or should we take a score of zero as meaning, "Absolutely do not use this provider" and drop it from consideration? Should this be configurable?
*Cost *
There are three principle cloud uses which can incur costs: consumption of network bandwidth, consumption of storage and running a VM.
Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And, if it is known because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwith" pool.
As long as we're not supporting deployments which include the allocation of additional storage, the costs of storage consumption are an issue to consider at build & push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the probability rankings, all we need is a cost per realm, per hour, for each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds, given that, for example, it will often be the case that costs will not vary across realms, so the UI can help by pre-filling.
I had previously given some thought to the various metrics we might care about -- where is bandwidth cheapest? Where is the cheapest place to run a compute-intensive workload? Where can I launch a memory hog for the least cost? But you raise a really good point that these aren't things Conductor can know to optimize for.
But I do think administrators may want to optimize for some of these things on their own. If I'm building an application that's going to push massive bandwidth, I might still want to build it for multiple providers for flexibility, but heavily weight it to whatever provider is cheapest on high-bandwidth deployables. I don't think that's something we can/should provide out of the box, but I think that savvy administrators may want to write their own plugins for specialized use cases.
Clearly, for private clouds, no alternative means for getting pricing data into Conductor exists. For public providers, it would be beneficial if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using
Aeolus are likely to be paying list price
Really just an off-topic nit, but I like to think that even small organizations that don't have this sort of muscle could benefit from Aeolus. I'm not disputing your point that pricing isn't standard, I just don't think Aeolus has to be exclusively for enormous deployments.
- Organizations may wish to store and export the adjusted costs that
they'll be assigning to users, rather than the basic costs appearing on the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be a relatively simple matter of increasing the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
You know, I wonder how complex people want to make this. If you're just weighting based on instance cost, does it make sense to just use the provider's existing weight field? Can we assign different weights depending on the hardware profile that would be matched? In other words: on average Provider X is cheaper, but for this specific deployment, it would get matched onto a HWP which is considerably more expensive than what it would match on Provider Y. Can we catch that case and do something marginally intelligent, as opposed to having flat weighting per-provider?
Having completed the stack of modules' calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
I wonder how this will work with Jan's work around robust instance launching, especially around failing over to the next provider if something goes wrong. Do we "roll the dice" a second time, with the failed provider removed from the set?
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy, tuning options and seeing an immediate change in, for example, a pie chart, which showed the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to implement their own selection modules. They might choose, for example, to vary the selection probability percentages according to time and date, to increase usage of private cloud at times when they would otherwise be relatively idle.
I'm not sure if this needs to be a web UI (maybe it does), but I think we ought to allow administrators to create and load their own plugins. It could just be a matter of copying the plugin into the vendor/plugins directory (or wherever these end up going).
-- Matt
Hi Matt,
Thanks for the feedback.
Comments inline below.
Angus
On 04/09/2012 04:30 PM, Matt Wagner wrote:
On Thu, Apr 05, 2012 at 05:44:20PM +0100, Angus Thomas wrote:
We should present administrators, with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, and for which a set of hardware profile matches can be made etc..
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentage are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
Using a probability range and randomly selecting within that range might seem counter-intuitive: Having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the "best" provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear - If one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances, without the randomness.
I've been struggling with this for a bit. I see the reasoning, but it also feels wrong to randomly pick anything. I suspect I just need a little more time to digest this, though, as the latter thought is more of a knee-jerk reaction.
To be clear, this is a weighted random selection, yes? In other words, if a policy plug-in gave a weight of 90 to Cloud A, and 10 to Cloud B, there is a 90% chance it would launch on Cloud A? In that case, I think this is fairly sensible.
Yes. That's exactly it. In that scenario, there's a 90% probability of selecting Cloud A to attempt to deploy onto first. So, across multiple deployments, the actual usage of Cloud A will be very close to 90%.
One of the benefits of this approach is that we get a macro-level result; the proportional usage of each provider account by Conductor, but the launch-time decision of which provider account to use is pretty light, and, with the exception of the least-used algorithm which requires a current usage count for each provider account, doesn't depend on considering the current state of deployments, across an arbitrarily large array.
On the point about it seeming wrong to randomly select anything.. it's only random within some significant bounds: the scope is limited to selecting one of the enabled provider accounts in the current pool. So, there's no real scope for something truly "random" to happen.
The intelligible, intended result for the user, is in the overall distribution of deployments. If they are concerned instead about the placement of each individual deployment/instance (a concern that won't scale beyond a certain point), they shouldn't be launching into a pool which contains multiple enabled provider accounts in the first place.
Whilst the various policies should be stackable, one of the two following policies should be the initial basis for the calculation:
*Round robin, with optional weighting: *
With this policy, Conductor would use each of the available provider accounts equally, by assigning the same probability to each of them. Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes. e.g. Three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider accounts wouldn't be selected in strict rotation, though the overall result is the same.
This is perhaps an implementation detail, but how should all of this be configured? Do we need a web interface so you can dynamically tune this, or should it be stored in a config file somewhere?
I was envisaging that we'd have a web UI to allow admins to enable/disable specific modules and to adjust whatever parameters each module might take. it could also show the effect of the current settings, for a specific deployable, in the form of a bar chart/pie chart.
*Least used, with optional weighting: *
This policy would make most sense in scenarios where Conductor is the sole means by which instances are launched on private cloud providers. Conductor would seek to ensure that the usage of the providers was balanced, by giving a higher probability to whichever provider accounts are currently least used. As with round robin, the weightings could be adjusted to reflect differing capacities between providers.
Having used on of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
*Assigned priority: *
The probability assigned to each provider account would be adjusted according to the provider accounts' priority, by increasing the probability ranking percentage of the higher priority provider accounts at the expense of the lower priority ones
This almost sounds like it could be the default, since it's all data we have today.
*Punishing failure: *
Once the audit history records past failures, for each occurrence of a launch failure within a configurable period (6 hours feels reasonable), a provider account would be fined 5% from its probability ranking. This would serve to reduce the attempts to launch on a provider which is running out of capacity, or experiencing hardware issues etc.
I definitely like the stackable aspect. Each plugin can return its scores summing to 100, and Conductor will then aggregate them and proceed accordingly.
The question (in my mind, at least) is _how_ to combine them, especially where a plugin gives a weight of zero. Suppose I have two plugins, one of which gives a weight of 50/50 between the two, and the other does 100/0. Should we just add them to get 150 and 50, or should we take a score of zero as meaning, "Absolutely do not use this provider" and drop it from consideration? Should this be configurable?
The initial percentages could be derived from either the round robin or the least-used modules, and then could be affected within a bounded range by whatever modules were added to the stack. So, for example, the weighted round robin module might result in A(50%) B(25%) C(25%) - The priority module could then have the ability to shift each of those numbers by no more than 10% in either direction.
I don't think we'd need the ability to specify that a provider account must not be used at all through these modules. That could be achieved by disabling the provider account in the pool.
*Cost *
There are three principle cloud uses which can incur costs: consumption of network bandwidth, consumption of storage and running a VM.
Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And, if it is known because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwith" pool.
As long as we're not supporting deployments which include the allocation of additional storage, the costs of storage consumption are an issue to consider at build& push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the probability rankings, all we need is a cost per realm, per hour, for each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds, given that, for example, it will often be the case that costs will not vary across realms, so the UI can help by pre-filling.
I had previously given some thought to the various metrics we might care about -- where is bandwidth cheapest? Where is the cheapest place to run a compute-intensive workload? Where can I launch a memory hog for the least cost? But you raise a really good point that these aren't things Conductor can know to optimize for.
But I do think administrators may want to optimize for some of these things on their own. If I'm building an application that's going to push massive bandwidth, I might still want to build it for multiple providers for flexibility, but heavily weight it to whatever provider is cheapest on high-bandwidth deployables. I don't think that's something we can/should provide out of the box, but I think that savvy administrators may want to write their own plugins for specialized use cases.
Clearly, for private clouds, no alternative means for getting pricing data into Conductor exists. For public providers, it would be beneficial if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using
Aeolus are likely to be paying list price
Really just an off-topic nit, but I like to think that even small organizations that don't have this sort of muscle could benefit from Aeolus. I'm not disputing your point that pricing isn't standard, I just don't think Aeolus has to be exclusively for enormous deployments.
- Organizations may wish to store and export the adjusted costs that
they'll be assigning to users, rather than the basic costs appearing on the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be a relatively simple matter of increasing the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
You know, I wonder how complex people want to make this. If you're just weighting based on instance cost, does it make sense to just use the provider's existing weight field? Can we assign different weights depending on the hardware profile that would be matched? In other words: on average Provider X is cheaper, but for this specific deployment, it would get matched onto a HWP which is considerably more expensive than what it would match on Provider Y. Can we catch that case and do something marginally intelligent, as opposed to having flat weighting per-provider?
We'd definiately need a cost per provider's HWP, rather than being able to represent cost as a weighting on the provider.
Having completed the stack of modules' calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
I wonder how this will work with Jan's work around robust instance launching, especially around failing over to the next provider if something goes wrong. Do we "roll the dice" a second time, with the failed provider removed from the set?
The result from the calculations can be treated as an ordered list of provider accounts to attempt to launch on. If the first fails, Jan's code could switch to the second on the list, without needing to recalculate probabilities.
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy, tuning options and seeing an immediate change in, for example, a pie chart, which showed the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to implement their own selection modules. They might choose, for example, to vary the selection probability percentages according to time and date, to increase usage of private cloud at times when they would otherwise be relatively idle.
I'm not sure if this needs to be a web UI (maybe it does), but I think we ought to allow administrators to create and load their own plugins. It could just be a matter of copying the plugin into the vendor/plugins directory (or wherever these end up going).
Agreed. We wouldn't need to provide a web UI to allow the management of custom modules.
-- Matt
On 04/12/2012 11:19 AM, Angus Thomas wrote:
Hi Matt,
Thanks for the feedback.
Comments inline below.
Angus
On 04/09/2012 04:30 PM, Matt Wagner wrote:
On Thu, Apr 05, 2012 at 05:44:20PM +0100, Angus Thomas wrote:
We should present administrators, with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, and for which a set of hardware profile matches can be made etc..
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentage are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
Using a probability range and randomly selecting within that range might seem counter-intuitive: Having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the "best" provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear - If one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances, without the randomness.
I've been struggling with this for a bit. I see the reasoning, but it also feels wrong to randomly pick anything. I suspect I just need a little more time to digest this, though, as the latter thought is more of a knee-jerk reaction.
To be clear, this is a weighted random selection, yes? In other words, if a policy plug-in gave a weight of 90 to Cloud A, and 10 to Cloud B, there is a 90% chance it would launch on Cloud A? In that case, I think this is fairly sensible.
Yes. That's exactly it. In that scenario, there's a 90% probability of selecting Cloud A to attempt to deploy onto first. So, across multiple deployments, the actual usage of Cloud A will be very close to 90%.
One of the benefits of this approach is that we get a macro-level result; the proportional usage of each provider account by Conductor, but the launch-time decision of which provider account to use is pretty light, and, with the exception of the least-used algorithm which requires a current usage count for each provider account, doesn't depend on considering the current state of deployments, across an arbitrarily large array.
On the point about it seeming wrong to randomly select anything.. it's only random within some significant bounds: the scope is limited to selecting one of the enabled provider accounts in the current pool. So, there's no real scope for something truly "random" to happen.
The intelligible, intended result for the user, is in the overall distribution of deployments. If they are concerned instead about the placement of each individual deployment/instance (a concern that won't scale beyond a certain point), they shouldn't be launching into a pool which contains multiple enabled provider accounts in the first place.
Whilst the various policies should be stackable, one of the two following policies should be the initial basis for the calculation:
*Round robin, with optional weighting: *
With this policy, Conductor would use each of the available provider accounts equally, by assigning the same probability to each of them. Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes. e.g. Three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider accounts wouldn't be selected in strict rotation, though the overall result is the same.
This is perhaps an implementation detail, but how should all of this be configured? Do we need a web interface so you can dynamically tune this, or should it be stored in a config file somewhere?
I was envisaging that we'd have a web UI to allow admins to enable/disable specific modules and to adjust whatever parameters each module might take. it could also show the effect of the current settings, for a specific deployable, in the form of a bar chart/pie chart.
*Least used, with optional weighting: *
This policy would make most sense in scenarios where Conductor is the sole means by which instances are launched on private cloud providers. Conductor would seek to ensure that the usage of the providers was balanced, by giving a higher probability to whichever provider accounts are currently least used. As with round robin, the weightings could be adjusted to reflect differing capacities between providers.
Having used on of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
*Assigned priority: *
The probability assigned to each provider account would be adjusted according to the provider accounts' priority, by increasing the probability ranking percentage of the higher priority provider accounts at the expense of the lower priority ones
This almost sounds like it could be the default, since it's all data we have today.
*Punishing failure: *
Once the audit history records past failures, for each occurrence of a launch failure within a configurable period (6 hours feels reasonable), a provider account would be fined 5% from its probability ranking. This would serve to reduce the attempts to launch on a provider which is running out of capacity, or experiencing hardware issues etc.
I definitely like the stackable aspect. Each plugin can return its scores summing to 100, and Conductor will then aggregate them and proceed accordingly.
The question (in my mind, at least) is _how_ to combine them, especially where a plugin gives a weight of zero. Suppose I have two plugins, one of which gives a weight of 50/50 between the two, and the other does 100/0. Should we just add them to get 150 and 50, or should we take a score of zero as meaning, "Absolutely do not use this provider" and drop it from consideration? Should this be configurable?
The initial percentages could be derived from either the round robin or the least-used modules, and then could be affected within a bounded range by whatever modules were added to the stack. So, for example, the weighted round robin module might result in A(50%) B(25%) C(25%) - The priority module could then have the ability to shift each of those numbers by no more than 10% in either direction.
I don't think we'd need the ability to specify that a provider account must not be used at all through these modules. That could be achieved by disabling the provider account in the pool.
Hm, I think it would be nice to allow admins to control choosing policy absolutely if they want. Then you can cover examples like: "if a deployment is launched by QA, use a primary machine unless there are more than 10 running instances" W/o allowing a module to fully change possibility range, it will not be possible to eliminate "random" factor in choose policy.
*Cost *
There are three principle cloud uses which can incur costs: consumption of network bandwidth, consumption of storage and running a VM.
Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And, if it is known because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwith" pool.
As long as we're not supporting deployments which include the allocation of additional storage, the costs of storage consumption are an issue to consider at build& push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the probability rankings, all we need is a cost per realm, per hour, for each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds, given that, for example, it will often be the case that costs will not vary across realms, so the UI can help by pre-filling.
I had previously given some thought to the various metrics we might care about -- where is bandwidth cheapest? Where is the cheapest place to run a compute-intensive workload? Where can I launch a memory hog for the least cost? But you raise a really good point that these aren't things Conductor can know to optimize for.
But I do think administrators may want to optimize for some of these things on their own. If I'm building an application that's going to push massive bandwidth, I might still want to build it for multiple providers for flexibility, but heavily weight it to whatever provider is cheapest on high-bandwidth deployables. I don't think that's something we can/should provide out of the box, but I think that savvy administrators may want to write their own plugins for specialized use cases.
Clearly, for private clouds, no alternative means for getting pricing data into Conductor exists. For public providers, it would be beneficial if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using
Aeolus are likely to be paying list price
Really just an off-topic nit, but I like to think that even small organizations that don't have this sort of muscle could benefit from Aeolus. I'm not disputing your point that pricing isn't standard, I just don't think Aeolus has to be exclusively for enormous deployments.
- Organizations may wish to store and export the adjusted costs that
they'll be assigning to users, rather than the basic costs appearing on the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be a relatively simple matter of increasing the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
You know, I wonder how complex people want to make this. If you're just weighting based on instance cost, does it make sense to just use the provider's existing weight field? Can we assign different weights depending on the hardware profile that would be matched? In other words: on average Provider X is cheaper, but for this specific deployment, it would get matched onto a HWP which is considerably more expensive than what it would match on Provider Y. Can we catch that case and do something marginally intelligent, as opposed to having flat weighting per-provider?
We'd definiately need a cost per provider's HWP, rather than being able to represent cost as a weighting on the provider.
Having completed the stack of modules' calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
I wonder how this will work with Jan's work around robust instance launching, especially around failing over to the next provider if something goes wrong. Do we "roll the dice" a second time, with the failed provider removed from the set?
The result from the calculations can be treated as an ordered list of provider accounts to attempt to launch on. If the first fails, Jan's code could switch to the second on the list, without needing to recalculate probabilities.
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy, tuning options and seeing an immediate change in, for example, a pie chart, which showed the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to implement their own selection modules. They might choose, for example, to vary the selection probability percentages according to time and date, to increase usage of private cloud at times when they would otherwise be relatively idle.
I'm not sure if this needs to be a web UI (maybe it does), but I think we ought to allow administrators to create and load their own plugins. It could just be a matter of copying the plugin into the vendor/plugins directory (or wherever these end up going).
Agreed. We wouldn't need to provide a web UI to allow the management of custom modules.
-- Matt
On 04/05/2012 06:44 PM, Angus Thomas wrote:
We should present administrators, with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, and for which a set of hardware profile matches can be made etc..
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentage are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
Using a probability range and randomly selecting within that range might seem counter-intuitive: Having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the “best” provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear - If one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances, without the randomness.
Whilst the various policies should be stackable, one of the two following policies should be the initial basis for the calculation:
*Round robin, with optional weighting: *
With this policy, Conductor would use each of the available provider accounts equally, by assigning the same probability to each of them. Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes. e.g. Three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider accounts wouldn't be selected in strict rotation, though the overall result is the same.
*Least used, with optional weighting: *
This policy would make most sense in scenarios where Conductor is the sole means by which instances are launched on private cloud providers. Conductor would seek to ensure that the usage of the providers was balanced, by giving a higher probability to whichever provider accounts are currently least used. As with round robin, the weightings could be adjusted to reflect differing capacities between providers.
Having used on of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
I wonder which policy is better. It might be quite difficult to tune usable penalty/bonus points for "Least used" policy. If an account fails repeatedly, it gets some penalty points for each fail, but also some bonus points because it's used least. So "Punishing failure" described below will not have big effect?
*Assigned priority: *
The probability assigned to each provider account would be adjusted according to the provider accounts' priority, by increasing the probability ranking percentage of the higher priority provider accounts at the expense of the lower priority ones
*Punishing failure: *
Once the audit history records past failures, for each occurrence of a launch failure within a configurable period (6 hours feels reasonable), a provider account would be fined 5% from its probability ranking. This would serve to reduce the attempts to launch on a provider which is running out of capacity, or experiencing hardware issues etc.
*Cost *
There are three principle cloud uses which can incur costs: consumption of network bandwidth, consumption of storage and running a VM.
Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And, if it is known because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwith" pool.
As long as we're not supporting deployments which include the allocation of additional storage, the costs of storage consumption are an issue to consider at build & push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the probability rankings, all we need is a cost per realm, per hour, for each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds, given that, for example, it will often be the case that costs will not vary across realms, so the UI can help by pre-filling.
Clearly, for private clouds, no alternative means for getting pricing data into Conductor exists. For public providers, it would be beneficial if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using
Aeolus are likely to be paying list price
- Organizations may wish to store and export the adjusted costs that
they'll be assigning to users, rather than the basic costs appearing on the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be a relatively simple matter of increasing the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
Having completed the stack of modules' calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy, tuning options and seeing an immediate change in, for example, a pie chart, which showed the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to implement their own selection modules. They might choose, for example, to vary the selection probability percentages according to time and date, to increase usage of private cloud at times when they would otherwise be relatively idle.
On 04/12/2012 01:23 PM, Jan Provaznik wrote:
On 04/05/2012 06:44 PM, Angus Thomas wrote:
We should present administrators, with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, and for which a set of hardware profile matches can be made etc..
Should we not do this selection at the Hardware Profile level since this is the lowest level object from which we can make a decision. i.e. even after selecting a provider we might still have a list of Hardware Profiles to decide from. If we're to make the decision on the hardware profile level we can get all the information such as provider and account then make a suitable decision.
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentage are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
I don't get this? This is adding some randomness into the decision; surely a user would want this to be explicit.
Using a probability range and randomly selecting within that range might seem counter-intuitive: Having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the “best” provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear - If one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances, without the randomness.
True, though using another method like below sounds better to me. The thing is that a user would have to have 1000s of deployments in order for the distribution to be equivilent to the weighting. Why not use the information that is in conductor to decide when to use the cloud. I.e. if we have 2 deployments in provider 1 then deploy to provider 2?
Whilst the various policies should be stackable, one of the two following policies should be the initial basis for the calculation:
*Round robin, with optional weighting: *
With this policy, Conductor would use each of the available provider accounts equally, by assigning the same probability to each of them. Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes. e.g. Three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider accounts wouldn't be selected in strict rotation, though the overall result is the same.
I guess this is similar to what I described earlier.
*Least used, with optional weighting: *
This policy would make most sense in scenarios where Conductor is the sole means by which instances are launched on private cloud providers. Conductor would seek to ensure that the usage of the providers was balanced, by giving a higher probability to whichever provider accounts are currently least used. As with round robin, the weightings could be adjusted to reflect differing capacities between providers.
Having used on of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
I wonder which policy is better. It might be quite difficult to tune usable penalty/bonus points for "Least used" policy. If an account fails repeatedly, it gets some penalty points for each fail, but also some bonus points because it's used least. So "Punishing failure" described below will not have big effect?
Rather than trying to invent our own why don't we look at some of the standard policies offered by cloud providers. If we really want to ship with a few policies then we're more likely to get an idea of what people actually use.
*Assigned priority: *
The probability assigned to each provider account would be adjusted according to the provider accounts' priority, by increasing the probability ranking percentage of the higher priority provider accounts at the expense of the lower priority ones
*Punishing failure: *
Once the audit history records past failures, for each occurrence of a launch failure within a configurable period (6 hours feels reasonable), a provider account would be fined 5% from its probability ranking. This would serve to reduce the attempts to launch on a provider which is running out of capacity, or experiencing hardware issues etc.
*Cost *
There are three principle cloud uses which can incur costs: consumption of network bandwidth, consumption of storage and running a VM.
Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And, if it is known because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwith" pool.
As long as we're not supporting deployments which include the allocation of additional storage, the costs of storage consumption are an issue to consider at build & push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the probability rankings, all we need is a cost per realm, per hour, for each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds, given that, for example, it will often be the case that costs will not vary across realms, so the UI can help by pre-filling.
Clearly, for private clouds, no alternative means for getting pricing data into Conductor exists. For public providers, it would be beneficial if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using
Aeolus are likely to be paying list price
- Organizations may wish to store and export the adjusted costs that
they'll be assigning to users, rather than the basic costs appearing on the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be a relatively simple matter of increasing the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
Having completed the stack of modules' calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy, tuning options and seeing an immediate change in, for example, a pie chart, which showed the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to implement their own selection modules. They might choose, for example, to vary the selection probability percentages according to time and date, to increase usage of private cloud at times when they would otherwise be relatively idle.
This is all good information for hwp/provider selection, but we will never know everything an end user might want to do. Why don't we design the policy to be pluggable from the off? We can ship with some defaults, plus some documentation on how to write policy components, using the stock ones we provide as examples. We could then later add a console to help with this.
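For instance, a policy component could be anything that takes the current probabilities plus some context and returns an adjusted set, so stacking is just composition. A sketch of that contract (all names hypothetical):

    class PolicyStack
      def initialize(policy_modules)
        @policy_modules = policy_modules
      end

      # Start from an even split across the viable accounts, then let each
      # module adjust the percentages in turn
      def probabilities(accounts, context = {})
        initial = {}
        accounts.each { |a| initial[a] = 100.0 / accounts.size }
        @policy_modules.inject(initial) { |probs, mod| mod.call(probs, context) }
      end
    end

    # stack = PolicyStack.new([weighting, punish_failures, cost_bias])
    # stack.probabilities(viable_accounts)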
Cheers
Martyn
On Thu, Apr 12, 2012 at 02:08:53PM +0200, Martyn Taylor wrote:
On 04/12/2012 01:23 PM, Jan Provaznik wrote:
On 04/05/2012 06:44 PM, Angus Thomas wrote:
We should present administrators with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, and for which a set of hardware profile matches can be made, etc.
Should we not do this selection at the Hardware Profile level, since this is the lowest-level object from which we can make a decision? i.e. even after selecting a provider, we might still have a list of Hardware Profiles to decide from. If we're to make the decision at the hardware profile level, we can get all the information, such as provider and account, and then make a suitable decision.
Yes, I'm thinking the same thing here -- we should include both provider and HWP in the equation. What I'm not sure about, and perhaps it's not something we have to decide just yet, is at what stage to do this. Should we pick a provider and then pick the best HWP within that provider? Or should we group them from the start and operate based on a combination of HWP and Provider as one object? (The latter sounds more complicated, but also more optimal -- the best provider overall might have a non-ideal HWP. I'm not positive how to make this work, though.)
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentages are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
I don't get this. This is adding some randomness into the decision; surely a user would want this to be explicit.
This took me a while to come around to agreeing with. And for extremely small setups, the injection of randomness might not be the best idea.
The problem this is solving (preempting?) is that you would otherwise always send *all* instances to whichever provider had the best score, so the next-best would never get anything.
Using a probability range and randomly selecting within that range might seem counter-intuitive: Having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the "best" provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear: if one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances, without the randomness.
True, though using another method like the one below sounds better to me. The thing is that a user would have to have 1000s of deployments in order for the distribution to be equivalent to the weighting. Why not use the information that is in Conductor to decide when to use the cloud? I.e. if we have 2 deployments in provider 1, then deploy to provider 2.
I'd be interested in seeing how practical this is. I'm not sure we need truly thousands of deployments before it makes sense, but I agree that for, say, 5 instances, it might be erratic. (Though it's only the subset of providers that are usable, so it's not like we'll ever match you onto an awful provider.)
If it ends up being fairly inexpensive to look at a bit more of our data, I'm all for not invoking randomness. But if it becomes complex (either to implement, or just computationally-expensive), I think a weighted-random approach is a clever workaround.
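For what it's worth, the weighted-random draw itself is cheap -- something like this sketch (illustrative names only), where capacity weights such as the 2:1:1 vSphere example normalize to 50/25/25:

    # weights: account => relative weight (e.g. cluster capacities 2:1:1,
    # which normalizes to 50/25/25)
    def pick_weighted(weights)
      total = weights.values.inject(0.0) { |sum, w| sum + w }
      draw  = rand * total  # equivalent to picking 1..100 on a percentage scale
      weights.each do |account, w|
        return account if (draw -= w) <= 0
      end
      weights.keys.last  # float-rounding fallback
    end

    pick_weighted('big-cluster' => 2, 'cluster-a' => 1, 'cluster-b' => 1)
    # => 'big-cluster' about half the time, the others a quarter each

A single draw per launch is linear in the number of accounts, so any cost concern is really about computing the weights, not the draw.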
Whilst the various policies should be stackable, one of the two following policies should be the initial basis for the calculation:
*Round robin, with optional weighting: *
With this policy, Conductor would use each of the available provider accounts equally, by assigning the same probability to each of them. Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes. e.g. Three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider accounts wouldn't be selected in strict rotation, though the overall result is the same.
I guess this is similar to what I described earlier.
*Least used, with optional weighting: *
This policy would make most sense in scenarios where Conductor is the sole means by which instances are launched on private cloud providers. Conductor would seek to ensure that the usage of the providers was balanced, by giving a higher probability to whichever provider accounts are currently least used. As with round robin, the weightings could be adjusted to reflect differing capacities between providers.
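One simple way to express that (illustrative only): score each account by its capacity weight divided by its current usage, then normalize to percentages:

    # weights: account => capacity weight; usage: account => running instances
    def least_used_probabilities(weights, usage)
      scores = {}
      weights.each do |account, w|
        scores[account] = w.to_f / (usage.fetch(account, 0) + 1)  # +1 avoids /0
      end
      total = scores.values.inject(0.0) { |sum, s| sum + s }
      scores.each { |account, s| scores[account] = s * 100.0 / total }
      scores
    end

    least_used_probabilities({ 'rhev1' => 1, 'rhev2' => 1 }, { 'rhev1' => 3 })
    # => { 'rhev1' => 20.0, 'rhev2' => 80.0 }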
Having used one of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
I wonder which policy is better. It might be quite difficult to tune usable penalty/bonus points for the "Least used" policy. If an account fails repeatedly, it gets some penalty points for each failure, but also some bonus points because it's used least. So "Punishing failure" described below will not have a big effect?
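To put rough numbers on that: two failures in the window would cost an account 10 points under the 5%-per-failure fine, but if being least used had already granted it, say, a 30-point advantage over the busier accounts, it would still come out 20 points ahead and keep attracting launches despite failing.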
Rather than trying to invent our own, why don't we look at some of the standard policies offered by cloud providers? If we really want to ship with a few policies, then we're more likely to get an idea of what people actually use.
I'm not quite sure what you're referring to about policies offered by cloud providers. Do they publish something similar to what we're trying to do?
Jan's point about policies cancelling each other out is interesting, and it feels like something that could come up in other places.
*Assigned priority: *
The probability assigned to each provider account would be adjusted according to the provider account's priority, by increasing the probability ranking percentage of the higher-priority provider accounts at the expense of the lower-priority ones.
*Punishing failure: *
Once the audit history records past failures, then for each occurrence of a launch failure within a configurable period (6 hours feels reasonable), a provider account would be fined 5% from its probability ranking. This would serve to reduce attempts to launch on a provider which is running out of capacity, experiencing hardware issues, etc.
*Cost *
There are three principal cloud uses which can incur costs: consumption of network bandwidth, consumption of storage, and running a VM.
Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch on: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And, if it is known because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwidth" pool.
As long as we're not supporting deployments which include the allocation of additional storage, the costs of storage consumption are an issue to consider at build & push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the probability rankings, all we need is a cost per realm, per hour, for each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds, given that, for example, it will often be the case that costs will not vary across realms, so the UI can help by pre-filling.
Clearly, for private clouds, no alternative means for getting pricing data into Conductor exists. For public providers, it would be beneficial if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using Aeolus are likely to be paying list price.
- Organizations may wish to store and export the adjusted costs that they'll be assigning to users, rather than the basic costs appearing on the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be a relatively simple matter of increasing the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
Having completed the stack of modules' calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy, tuning options and seeing an immediate change in, for example, a pie chart, which showed the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to implement their own selection modules. They might choose, for example, to vary the selection probability percentages according to time and date, to increase usage of private cloud at times when they would otherwise be relatively idle.
This is all good information for hwp/provider selection, but we will never know everything an end user might want to do. Why don't we design the policy to be pluggable from the off? We can ship with some defaults, plus some documentation on how to write policy components, using the stock ones we provide as examples. We could then later add a console to help with this.
Yes, I agree with this part -- I think it's very important that administrators be able to write their own from the start. I think we can provide a decently-usable set from the outset, but I think the ability to write application-specific plugins is important.
On 04/12/2012 03:44 PM, Matt Wagner wrote:
On Thu, Apr 12, 2012 at 02:08:53PM +0200, Martyn Taylor wrote:
On 04/12/2012 01:23 PM, Jan Provaznik wrote:
On 04/05/2012 06:44 PM, Angus Thomas wrote:
We should present administrators with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, and for which a set of hardware profile matches can be made, etc.
Should we not do this selection at the Hardware Profile level, since this is the lowest-level object from which we can make a decision? i.e. even after selecting a provider, we might still have a list of Hardware Profiles to decide from. If we're to make the decision at the hardware profile level, we can get all the information, such as provider and account, and then make a suitable decision.
Yes, I'm thinking the same thing here -- we should include both provider and HWP in the equation. What I'm not sure about, and perhaps it's not something we have to decide just yet, is at what stage to do this. Should we pick a provider and then pick the best HWP within that provider? Or should we group them from the start and operate based on a combination of HWP and Provider as one object? (The latter sounds more complicated, but also more optimal -- the best provider overall might have a non-ideal HWP. I'm not positive how to make this work, though.)
The latter is more like what we're doing right now (with "match" objects listing the provider account, realm, provider, image, etc). The one difference is that, for now, we choose the "best" HWP in isolation, only returning one HWP match per provider account. If we expand this again and list all HWPs in separate matches (like we do with Realms), then we'll have all valid combinations represented. Then we can weigh HWP/provider account/Realm/etc combinations however we want to.
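A sketch of what "all valid combinations" might look like, using an illustrative Struct rather than Conductor's real match objects (matching_hardware_profiles and matching_realms are assumed helpers, not existing methods):

    Match = Struct.new(:provider_account, :hardware_profile, :realm)

    # One match per valid (account, HWP, realm) combination, instead of
    # pre-picking a single "best" HWP per account
    def all_matches(accounts)
      accounts.flat_map do |account|
        account.matching_hardware_profiles.flat_map do |hwp|
          account.matching_realms.map { |realm| Match.new(account, hwp, realm) }
        end
      end
    end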
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentages are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
I don't get this. This is adding some randomness into the decision; surely a user would want this to be explicit.
This took me a while to come around to agreeing with. And for extremely small setups, the injection of randomness might not be the best idea.
The problem this is solving (preempting?) is that you would otherwise always send *all* instances to whichever provider had the best score, so the next-best would never get anything.
Perhaps whether to use a probability distribution should be one of the configurable policy parameters. The probability element essentially gives us round-robin-like behavior, except it also takes into account non-evenly-weighted choices. One example of where you'd want the probability/round-robin approach (rather than "always use best until it's full, then second-best") would be network routing. If you have two ISP connections -- one at 10 Mbps, one at 5 Mbps -- you wouldn't want to send all traffic over the faster link until it's saturated; you'd want to use the two links in a roughly 2:1 ratio. So in some cases you may want to use your providers the same way, splitting your usage between RHEV and ec2 roughly along some sort of weighted ranking.
On the other hand, if you're scheduling instances strictly on a cost basis, you would _not_ want the round-robin behavior. You'd want to use the lower-cost option until you maxed it out, then the more expensive one. An example where you would want this behavior would be a primarily RHEV environment with an ec2 spillover option. Presumably scheduling an instance on ec2 costs you more (in the short term) than using excess RHEV capacity would. In this case we would _only_ schedule on ec2 once RHEV is fully allocated.
Realistically, though, you may need a hybrid. In a real-world situation, you'll probably have more than one RHEV account/provider, and possibly multiple ec2 accounts. You may have a valid match of (RHEVaccount1, RHEVaccount2, RHEVaccount3, us-east-account1, us-east-account2, us-west-account1), with probability-distribution/round-robin scheduling among the RHEV matches until they hit quota, then scheduling among ec2 accounts, etc.
You could accomplish this with the scoring algorithm if you first did a ranking which took into account different types of preferences -- i.e. the "only schedule here if higher stuff is totally unavailable" spillover sort of preference, as well as the "X is 30% better than Y, but really we want a blending of use on both" sort. For example, if the RHEV accounts had scores of 11, 15, and 19, and the ec2 accounts scores of 44, 50, and 55, we'd do a probability distribution over the 11, 15, 19 RHEV accounts, ignoring ec2. If we only had one RHEV match, we'd schedule there until it was exhausted, and if we got just ec2 accounts back, we'd do the probability distribution over those.
The only thing is -- this adds a ton of complexity. But if we always assume a probability distribution among _all_ valid matches, we can't implement spillover-like behavior. It seems to me that we need to handle both "X is better than Y, so schedule X more often than Y" and "X is better than Y, but only use Y if X is no longer available".
I wonder if we really have 3 levels of matching here:
1) Eliminate totally-unavailable matches (i.e. no matching HWP, quota exceeded, etc.)
2) "Category" ranking of matches -- where we fully exhaust one tier before going to tier 2
3) "Weighted" ranking of matches within a category -- where we schedule on all matches in a tier according to some sort of weighted, probability-based measure; if they are all equally weighted, then it amounts to an effective round robin within the category, on average at least
In the above example, category 1 (highest priority) would include the RHEV account matches, and category 2 would include the ec2 matches. Within a category we'd do the probability-ranking as proposed.
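Sketching those levels with the scores above (lower is better here, and the tier break is a made-up "next score is more than twice the previous" rule; level 1 filtering is assumed to have happened already):

    # scored: { match => score }, lower is better (e.g. 11, 15, 19, 44, 50, 55)
    def pick_tiered(scored)
      sorted = scored.sort_by { |_, score| score }
      tier = [sorted.first]
      sorted.each_cons(2) do |(_, prev), pair|
        break if pair[1] > prev * 2  # large gap: everything after is spillover
        tier << pair
      end
      # Weighted-random within the tier, weighting inversely to score
      weights = tier.map { |match, score| [match, 1.0 / score] }
      total = weights.inject(0.0) { |sum, (_, w)| sum + w }
      draw = rand * total
      weights.each { |match, w| return match if (draw -= w) <= 0 }
      weights.last[0]  # float-rounding fallback
    end

The gap rule here is purely illustrative; the category boundaries could equally come from explicit priority tiers.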
Using a probability range and randomly selecting within that range might seem counter-intuitive: Having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the "best" provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear: if one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances, without the randomness.
True, though using another method like the one below sounds better to me. The thing is that a user would have to have 1000s of deployments in order for the distribution to be equivalent to the weighting. Why not use the information that is in Conductor to decide when to use the cloud? I.e. if we have 2 deployments in provider 1, then deploy to provider 2.
This raises as many questions as it answers. Do we do this "2 here, so now go there" per-user? per-pool? per-environment? It also doesn't take into account whether the other instances were matched on the same criteria, HWP, etc.
I'd be interested in seeing how practical this is. I'm not sure we need truly thousands of deployments before it makes sense, but I agree that for, say, 5 instances, it might be erratic. (Though it's only the subset of providers that are usable, so it's not like we'll ever match you onto an awful provider.)
If it ends up being fairly inexpensive to look at a bit more of our data, I'm all for not invoking randomness. But if it becomes complex (either to implement, or just computationally-expensive), I think a weighted-random approach is a clever workaround.
Whilst the various policies should be stackable, one of the two following policies should be the initial basis for the calculation:
*Round robin, with optional weighting: *
With this policy, Conductor would use each of the available provider accounts equally, by assigning the same probability to each of them. Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes. e.g. Three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider accounts wouldn't be selected in strict rotation, though the overall result is the same.
I guess this is similar to what I described earlier.
*Least used, with optional weighting: *
This policy would make most sense in scenarios where Conductor is the sole means by which instances are launched on private cloud providers. Conductor would seek to ensure that the usage of the providers was balanced, by giving a higher probability to whichever provider accounts are currently least used. As with round robin, the weightings could be adjusted to reflect differing capacities between providers.
Having used one of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
I wonder which policy is better. It might be quite difficult to tune usable penalty/bonus points for the "Least used" policy. If an account fails repeatedly, it gets some penalty points for each failure, but also some bonus points because it's used least. So "Punishing failure" described below will not have a big effect?
Rather than trying to invent our own, why don't we look at some of the standard policies offered by cloud providers? If we really want to ship with a few policies, then we're more likely to get an idea of what people actually use.
I'm not quite sure what you're referring to about policies offered by cloud providers. Do they publish something similar to what we're trying to do?
Jan's point about policies cancelling each other out is interesting, and it feels like something that could come up in other places.
*Assigned priority: *
The probability assigned to each provider account would be adjusted according to the provider account's priority, by increasing the probability ranking percentage of the higher-priority provider accounts at the expense of the lower-priority ones.
*Punishing failure: *
Once the audit history records past failures, then for each occurrence of a launch failure within a configurable period (6 hours feels reasonable), a provider account would be fined 5% from its probability ranking. This would serve to reduce attempts to launch on a provider which is running out of capacity, experiencing hardware issues, etc.
*Cost *
There are three principal cloud uses which can incur costs: consumption of network bandwidth, consumption of storage, and running a VM.
Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch on: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And, if it is known because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwidth" pool.
As long as we're not supporting deployments which include the allocation of additional storage, the costs of storage consumption are an issue to consider at build & push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the probability rankings, all we need is a cost per realm, per hour, for each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds, given that, for example, it will often be the case that costs will not vary across realms, so the UI can help by pre-filling.
Clearly, for private clouds, no alternative means for getting pricing data into Conductor exists. For public providers, it would be beneficial if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using Aeolus are likely to be paying list price.
- Organizations may wish to store and export the adjusted costs that they'll be assigning to users, rather than the basic costs appearing on the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be a relatively simple matter of increasing the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
Having completed the stack of modules' calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy, tuning options and seeing an immediate change in, for example, a pie chart, which showed the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to implement their own selection modules. They might choose, for example, to vary the selection probability percentages according to time and date, to increase usage of private cloud at times when they would otherwise be relatively idle.
This is all good information for hwp/provider selection, but we will never know everything an end user might want to do. Why don't we design the policy to be pluggable from the off? We can ship with some defaults, plus some documentation on how to write policy components, using the stock ones we provide as examples. We could then later add a console to help with this.
Yes, I agree with this part -- I think it's very important that administrators be able to write their own from the start. I think we can provide a decently-usable set from the outset, but I think the ability to write application-specific plugins is important.
On 04/12/2012 09:44 PM, Matt Wagner wrote:
On Thu, Apr 12, 2012 at 02:08:53PM +0200, Martyn Taylor wrote:
On 04/12/2012 01:23 PM, Jan Provaznik wrote:
On 04/05/2012 06:44 PM, Angus Thomas wrote:
We should present administrators with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, and for which a set of hardware profile matches can be made, etc.
Should we not do this selection at the Hardware Profile level, since this is the lowest-level object from which we can make a decision? i.e. even after selecting a provider, we might still have a list of Hardware Profiles to decide from. If we're to make the decision at the hardware profile level, we can get all the information, such as provider and account, and then make a suitable decision.
Yes, I'm thinking the same thing here -- we should include both provider and HWP in the equation. What I'm not sure about, and perhaps it's not something we have to decide just yet, is at what stage to do this. Should we pick a provider and then pick the best HWP within that provider?
So this is just about how the policy is implemented; I think you can do either. At the moment there are two stages to the matching algorithm.
1) Find a list of matches.
2) Select the best choice from those matches.
1) is based on things like what pool you want to deploy to, which accounts this pool is associated with, the minimum hwp requirements specified in the deployment request, etc...
2) is the bit we're talking about here. We have already narrowed down the search space at this point, and it's just a case of preference. One implementation might make the decision based on initial provider selection; another might be based on the best hwp match. I think, though, we should be able to do either.
Or should we group them from the start and operate based on a combination of HWP and Provider as one object? (The latter sounds more complicated, but also more optimal -- the best provider overall might have a non-ideal HWP. I'm not positive how to make this work, though.)
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentages are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
I don't get this. This is adding some randomness into the decision; surely a user would want this to be explicit.
This took me a while to come around to agreeing with. And for extremely small setups, the injection of randomness might not be the best idea.
The problem this is solving (preempting?) is that you would otherwise always send *all* instances to whichever provider had the best score, so the next-best would never get anything.
Using a probability range and randomly selecting within that range might seem counter-intuitive: Having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the "best" provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear: if one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances, without the randomness.
True, though using another method like the one below sounds better to me. The thing is that a user would have to have 1000s of deployments in order for the distribution to be equivalent to the weighting. Why not use the information that is in Conductor to decide when to use the cloud? I.e. if we have 2 deployments in provider 1, then deploy to provider 2.
I'd be interested in seeing how practical this is. I'm not sure we need truly thousands of deployments before it makes sense, but I agree that for, say, 5 instances, it might be erratic. (Though it's only the subset of providers that are usable, so it's not like we'll ever match you onto an awful provider.)
I still think this only works on large numbers of instances (to get a true reflection of the weighting). It's just the same as flipping a coin: how many times do you have to do it before you get a 50/50 distribution? I guess my point here is that the problem this is trying to solve can be handled in a more deterministic way. The policy can just look to see how many deployments of X are in Provider A, then make a decision on where to deploy the next deployment based on that. e.g.
    provider1_weight = 30.0
    provider2_weight = 70.0

    # Floats avoid integer-division truncation (30 / 70 would be 0 in Ruby);
    # the provider objects and deploy_to are placeholders
    if (provider1.deployments.to_f / provider2.deployments) >
       (provider1_weight / provider2_weight)
      deploy_to(provider2)
    else
      deploy_to(provider1)
    end
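The same idea generalizes to any number of providers (a sketch, assuming plain hashes of target weights and current deployment counts): always deploy wherever the current share falls furthest below its weighted target.

    # weights: provider => target weight; deployments: provider => current count
    def next_provider(weights, deployments)
      total_weight  = weights.values.inject(0.0) { |sum, w| sum + w }
      total_deploys = deployments.values.inject(0) { |sum, d| sum + d }.to_f
      weights.min_by do |provider, w|
        share = total_deploys.zero? ? 0.0 : deployments.fetch(provider, 0) / total_deploys
        share - (w / total_weight)  # most under-served provider wins
      end.first
    end

    next_provider({ 'p1' => 30, 'p2' => 70 }, { 'p1' => 2, 'p2' => 2 })
    # => 'p2' (it should carry 70% of the load but only has 50%)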
If it ends up being fairly inexpensive to look at a bit more of our data, I'm all for not invoking randomness. But if it becomes complex (either to implement, or just computationally-expensive), I think a weighted-random approach is a clever workaround.
I can't imagine that deployments are going to be started and stopped very frequently. I think it's better to get it right rather than fudge it to get better performance. If we are concerned about performance, then we should probably use a separate engine to do this.
Whilst the various policies should be stackable, one of the two following policies should be the initial basis for the calculation:
*Round robin, with optional weighting: *
With this policy, Conductor would use each of the available provider accounts equally, by assigning the same probability to each of them. Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes. e.g. Three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider accounts wouldn't be selected in strict rotation, though the overall result is the same.
I guess this is similar to what I described earlier.
*Least used, with optional weighting: *
This policy would make most sense in scenarios where Conductor is the sole means by which instances are launched on private cloud providers. Conductor would seek to ensure that the usage of the providers was balanced, by giving a higher probability to whichever provider accounts are currently least used. As with round robin, the weightings could be adjusted to reflect differing capacities between providers.
Having used one of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
I wonder which policy is better. It might be quite difficult to tune usable penalty/bonus points for the "Least used" policy. If an account fails repeatedly, it gets some penalty points for each failure, but also some bonus points because it's used least. So "Punishing failure" described below will not have a big effect?
Rather than trying to invent our own, why don't we look at some of the standard policies offered by cloud providers? If we really want to ship with a few policies, then we're more likely to get an idea of what people actually use.
I'm not quite sure what you're referring to about policies offered by cloud providers. Do they publish something similar to what we're trying to do?
I just meant that there's no point reinventing the wheel. It seems we are trying to come up with our own algorithms here. The scaling stuff in EC2 and RightScale, for example, is probably more likely to be what actual users care about, since they have thousands of users, and there are probably plenty of places in which this problem has been addressed outside of the cloud.
Jan's point about policies cancelling each other out is interesting, and it feels like something that could come up in other places.
Particularly if it's non-deterministic -- another argument for leaving out randomness. (Though if a user wants to add randomness in their own implementation, then that's up to them; I just don't think it's a good idea, so I don't feel that we should have it as a default.)
*Assigned priority: *
The probability assigned to each provider account would be adjusted according to the provider account's priority, by increasing the probability ranking percentage of the higher-priority provider accounts at the expense of the lower-priority ones.
*Punishing failure: *
Once the audit history records past failures, then for each occurrence of a launch failure within a configurable period (6 hours feels reasonable), a provider account would be fined 5% from its probability ranking. This would serve to reduce attempts to launch on a provider which is running out of capacity, experiencing hardware issues, etc.
*Cost *
There are three principal cloud uses which can incur costs: consumption of network bandwidth, consumption of storage, and running a VM.
Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch on: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And, if it is known because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwidth" pool.
As long as we're not supporting deployments which include the allocation of additional storage, the costs of storage consumption are an issue to consider at build & push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the probability rankings, all we need is a cost per realm, per hour, for each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds, given that, for example, it will often be the case that costs will not vary across realms, so the UI can help by pre-filling.
Clearly, for private clouds, no alternative means for getting pricing data into Conductor exists. For public providers, it would be beneficial if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using Aeolus are likely to be paying list price.
- Organizations may wish to store and export the adjusted costs that they'll be assigning to users, rather than the basic costs appearing on the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be a relatively simple matter of increasing the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
Having completed the stack of modules' calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy, tuning options and seeing an immediate change in, for example, a pie chart, which showed the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to implement their own selection modules. They might choose, for example, to vary the selection probability percentages according to time and date, to increase usage of private cloud at times when they would otherwise be relatively idle.
This is all good information for hwp/provider selection, but we will never know everything an end user might want to do. Why don't we design the policy to be pluggable from the off? We can ship with some defaults, plus some documentation on how to write policy components, using the stock ones we provide as examples. We could then later add a console to help with this.
Yes, I agree with this part -- I think it's very important that administrators be able to write their own from the start. I think we can provide a decently-usable set from the outset, but I think the ability to write application-specific plugins is important.