dc-rhev-image
by Gabriele Paciucci
Hi,
I don't know if this is the right mailing list to write to about a PUSHING problem.
My setup is:
- RHEL 6.2 x86_64
- unstable aeolus installation:
aeolus-conductor-0.9.0-0.20120403164322git3901824.el6.noarch
aeolus-all-0.9.0-0.20120403164322git3901824.el6.noarch
rubygem-aeolus-image-0.4.0-0.20120403114406gitbc44a5a.el6.noarch
aeolus-configure-2.6.0-0.20120403114342git9de4f42.el6.noarch
aeolus-conductor-daemons-0.9.0-0.20120403164322git3901824.el6.noarch
rubygem-rack-mount-0.7.1-3.aeolus.el6.noarch
aeolus-conductor-doc-0.9.0-0.20120403164322git3901824.el6.noarch
rubygem-arel-2.0.10-0.aeolus.el6.noarch
rubygem-aeolus-cli-0.4.0-0.20120403114355git6cbf986.el6.noarch
oz-0.8.0-4.el6.noarch
iwhd-1.2-3.el6.x86_64
When I try to push an image to RHEV-M version 3 via an NFS export, I receive these errors:
2012-04-17 17:05:32,439 DEBUG
imgfac.builders.BaseBuilder.RHEL6_rhevm_Builder pid(2259) Message: Image
file
/var/lib/imagefactory/images/rhevm-image-7cc80648-505c-4720-9625-e6c65beb9b4f.dsk
already present - skipping warehouse download
2012-04-17 17:05:32,440 DEBUG
imgfac.builders.BaseBuilder.RHEL6_rhevm_Builder pid(2259) Message:
Produced provider json:
{
"apipass": "REDACTED",
"apiurl": "https://rhevm.jnet2000.lab:8443/api",
"apiuser": "rhevadmin(a)JNET2000.LAB",
"cluster": "_any_",
"image": "/tmp/1212a62a-028b-4fca-a638-aae1a28abf45",
"name": "rhevm-default",
"nfsdir": "/mnt/TEMPLATE",
"nfshost": "192.168.2.90",
"nfspath": "/mnt/vg0/export/TEMPLATE",
"target": "rhevm",
"timeout": 1800
}
2012-04-17 17:05:32,442 DEBUG
imgfac.builders.BaseBuilder.RHEL6_rhevm_Builder pid(2259) Message:
Executing external RHEV-M push command (['/usr/bin/dc-rhev-image',
'/tmp/tmpGKa7Rl'])
2012-04-17 17:05:32,495 DEBUG paste.httpserver.ThreadPool pid(2259)
Message: Added task (0 tasks queued)
2012-04-17 17:06:43,621 DEBUG
imgfac.builders.BaseBuilder.RHEL6_rhevm_Builder pid(2259) Message:
Exception caught in ImageFactory
2012-04-17 17:06:43,627 DEBUG
imgfac.builders.BaseBuilder.RHEL6_rhevm_Builder pid(2259) Message:
Traceback (most recent call last):
File
"/usr/lib/python2.6/site-packages/imgfac/builders/Fedora_rhevm_Builder.py",
line 200, in push_image
self.rhevm_push_image_upload(target_image_id, provider, credentials)
File
"/usr/lib/python2.6/site-packages/imgfac/builders/Fedora_rhevm_Builder.py",
line 278, in rhevm_push_image_upload
(stdout, stderr, retcode) = subprocess_check_output(rhevm_push_command)
File
"/usr/lib/python2.6/site-packages/imgfac/builders/Fedora_rhevm_Builder.py",
line 46, in subprocess_check_output
raise ImageFactoryException("'%s' failed(%d): %s\nstdout: %s" %
(cmd, retcode, stderr, stdout))
ImageFactoryException: '/usr/bin/dc-rhev-image /tmp/tmpGKa7Rl'
failed(1): None
stdout: ERROR import `failed
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = low level Image copy failed'
import cluster 77181ec8-7973-11e1-9296-002481355ab5
import target 1020b1bd-c68c-48d2-95e2-4b183704e293
import href
/api/storagedomains/24ff2849-1d2f-4c2a-8eec-93a30b0e5923/templates/3065df24-5e68-4f73-b0ea-c9ddf2ff98b1/import/7e29cb09-67a6-4564-9b59-c6146c4047e0
2012-04-17 17:06:43,628 DEBUG imgfac.BuildJob.BuildJob pid(2259)
Message: Builder (1212a62a-028b-4fca-a638-aae1a28abf45) changed status
from PUSHING to FAILED
2012-04-17 17:06:43,628 DEBUG imgfac.BuildJob.BuildJob pid(2259)
Message: 1212a62a-028b-4fca-a638-aae1a28abf45 for rhevm about to exit
None queue...
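For what it's worth, the wrapper that raises this exception appears to work roughly like the following sketch. I reconstructed it from the traceback and the exception text; it is not the actual Fedora_rhevm_Builder.py source.

import subprocess

class ImageFactoryException(Exception):
    pass

def subprocess_check_output(cmd):
    # stderr is merged into stdout, so communicate() returns (output, None);
    # that None is what shows up as "failed(1): None" in the message above,
    # with the real error text appearing under "stdout:".
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    stdout, stderr = proc.communicate()
    retcode = proc.poll()
    if retcode:
        raise ImageFactoryException("'%s' failed(%d): %s\nstdout: %s" %
                                    (" ".join(cmd), retcode, stderr, stdout))
    return (stdout, stderr, retcode)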
Does anyone have any ideas?
Thanks in advance for the support.
[PATCH aeolus-website 0/7] Update Getting Started Guide for 0.9.0
by Matt Wagner
Hi all,
This patch updates the Getting Started Guide for the 0.9.0 release. A good deal of the information on our site was outdated.
Note that, as I went, I killed off some screenshots where I thought text would be more valuable, since text is searchable, translatable, and readable by people using screen readers. There are still plenty of helpful screenshots, though.
Speaking of which, we need to update some screenshots on the start_image and stop_image pages, since they reflect an older version of the UI. I'm going to send that out later, under separate cover, though; these updates have already taken me too long, and I don't want to hold things up further by waiting for those.
-- Matt
generating a TDL from a running cloud instance
by Mo Morsi
I finally had some cycles to dedicate to the Snap [1] project recently and have added the capability to generate an imagefactory / Oz TDL from a running instance on the cloud.
As you may recall, Snap is a system snapshot and restoration utility
which uses the underlying tooling provided by the operating system and
residing services to take and restore system backups.
By adding a generic output formatter and allowing the user to select which output format to write, I was able to generate TDLs (in addition to the current snapshot format) that can be imported into Aeolus via the existing mechanisms.
Thus a user can create an image in Aeolus based off an instance (Fedora, RHEL, Ubuntu, Windows, or other) already running on any cloud provider.
All the files, repos, packages, and service configurations from the
original instance will appear in the image and can be replicated on any
cloud provider from there.
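To give an idea of the output, a generated TDL might contain something like the following sketch, built here with the stdlib ElementTree for illustration. The element names follow the Oz TDL schema, but every value below is made up:

import xml.etree.ElementTree as ET

# Build a minimal TDL: name, OS/install source, packages, and repos.
tmpl = ET.Element("template")
ET.SubElement(tmpl, "name").text = "snapshotted-instance"
os_el = ET.SubElement(tmpl, "os")
ET.SubElement(os_el, "name").text = "Fedora"
ET.SubElement(os_el, "version").text = "16"
ET.SubElement(os_el, "arch").text = "x86_64"
install = ET.SubElement(os_el, "install", type="url")
ET.SubElement(install, "url").text = \
    "http://download.fedoraproject.org/pub/fedora/linux/releases/16/Fedora/x86_64/os/"
pkgs = ET.SubElement(tmpl, "packages")
ET.SubElement(pkgs, "package", name="httpd")
repos = ET.SubElement(tmpl, "repositories")
repo = ET.SubElement(repos, "repository", name="custom")
ET.SubElement(repo, "url").text = "http://repo.example.com/custom/"

print(ET.tostring(tmpl).decode())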
I've pushed the patches adding this feature (as well as various patches I've received from the community) to the Snap git repo; I will update the package in Fedora [2] soon.
Stay tuned for more updates to Snap in the near future.
-Mo
[1] http://github.com/movitto/snap
[2] https://admin.fedoraproject.org/pkgdb/acls/name/snap
Aeolus GSoC
by Mo Morsi
Fedora was accepted as one of the Google Summer of Code (GSoC) projects, and I submitted a proposal to extend Aeolus support on Fedora [1] a few weeks back. We've gotten some interest in the project, and I've been helping various students install and get acquainted with the framework so that they can submit proposals of their own to work on some aspect of the project.
Here is one of those [2], which Samridh (cc'd; a student attending the P.E.S. Institute of Technology in Bangalore, India) is working on, and I'm sending it around for comments and thoughts. I will reply to this thread with updates and more proposals as time goes on.
If anyone wants to expand upon my proposal with more specific items to work on, and/or co-mentor it with me, just shout out and we can make the necessary arrangements.
-Mo
[1]
http://fedoraproject.org/wiki/Summer_coding_ideas_for_2012#Bringing_the_C...
[2]
https://fedoraproject.org/wiki/GSOC_2012/Student_Application_Samridh90/Br...
[PATCH audrey 2/2] Only use --org for more recent versions of subscription-manager
by James Laska
On systems with subscription-manager-0.95.* (e.g. RHEL 5.7), the
'register --org' option does not exist. This patch changes the
provided example registration to use the "--org" option conditionally
during registration.
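For reference, the conditional depends on the subscription_manager_version() helper that already exists in the example. A rough sketch of what such a helper can look like (hypothetical; the real one lives in katello-register.xml):

import subprocess

def subscription_manager_version():
    # rpm reports e.g. "0.95.5" on RHEL 5.7; newer systems report 0.96+
    out = subprocess.check_output(
        ["rpm", "-q", "--qf", "%{VERSION}", "subscription-manager"])
    major, minor = out.decode().split(".")[:2]
    return int(major), int(minor)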
---
configserver/examples/katello-register.xml | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/configserver/examples/katello-register.xml b/configserver/examples/katello-register.xml
index cc49e5a..8d94960 100644
--- a/configserver/examples/katello-register.xml
+++ b/configserver/examples/katello-register.xml
@@ -222,7 +222,6 @@ def register_katello_command():
 
     sm_ver_maj, sm_ver_min = subscription_manager_version()
     cmd = "subscription-manager register --force"
-    cmd += " --org=%s" % org
 
     if sm_ver_maj <= 0:
         if sm_ver_min < 96:
@@ -234,6 +233,7 @@ def register_katello_command():
         else:
             pass # determine and print error condition to stdout
     else:
+        cmd += " --org=%s" % org
         if auto_subscribe:
             cmd += " --username=%s --password=%s" % (username, password)
             cmd += " --env=%s" % kat_env
--
1.7.7.6
RFC: Pluggable modules to allow administrators to select their preferred provider selection algorithm
by Angus Thomas
We should present administrators with the ability to configure the launch-time provider account selection policy for a specific pool, and to set a global default policy to apply in pools where no custom policy is defined.
The policy would be applied after a set of viable provider accounts has been identified. Those will be the provider accounts to which the relevant images have been pushed, for which a set of hardware profile matches can be made, and so on.
The selection policy should work by defining a probability distribution, stating how likely each provider account is to be selected to host the new deployment, expressed as a percentage. Once those percentages are calculated, Conductor should pick a random number between 1 and 100 and attempt to launch on the lucky provider account.
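In code, that draw could look something like this sketch (account names and percentages are purely illustrative):

import random

def pick_account(weights):
    """weights maps provider account name to a percentage; sums to 100."""
    roll = random.uniform(0, 100)
    cumulative = 0.0
    for account, pct in weights.items():
        cumulative += pct
        if roll < cumulative:
            return account
    return account  # guard against floating-point rounding shortfall

print(pick_account({"ec2-east": 50, "vsphere-a": 30, "rhevm-lab": 20}))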
Using a probability range and randomly selecting within that range might seem counter-intuitive: having done the maths and assigned a numerical probability to each provider account, based on its suitability to host the deployment, why not just launch on the "best" provider? The issue is one of scale. When considering a single launch, selecting the account which gathered the highest score makes sense, but once Conductor is managing a large volume of deployments, the downside of that approach becomes clear: if one provider account gathered more than 50% of the probability ranking, it would get 100% of the instances without the randomness.
Whilst the various policies should be stackable, one of the two
following policies should be the initial basis for the calculation:
*Round robin, with optional weighting:*
With this policy, Conductor would use each of the available provider
accounts equally, by assigning the same probability to each of them.
Varying the probabilities, to assign a weighting, would be useful in instances where the private cloud providers associated with each provider account are of differing sizes, e.g. three vSphere clusters, one of which has double the capacity of the other two. In that circumstance, the Administrator could adjust the weighting ratios to more closely reflect the actual capacities of each cluster.
It is worth noting that this isn't strictly round robin. The provider
accounts wouldn't be selected in strict rotation, though the overall
result is the same.
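Deriving the weights from capacity is straightforward. For the three-cluster example above (a sketch; the cluster names and capacity units are invented):

# The cluster with double the capacity gets half of the launches.
capacities = {"vsphere-big": 2.0, "vsphere-a": 1.0, "vsphere-b": 1.0}
total = sum(capacities.values())
weights = dict((name, 100.0 * cap / total)
               for name, cap in capacities.items())
# weights == {'vsphere-big': 50.0, 'vsphere-a': 25.0, 'vsphere-b': 25.0}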
*Least used, with optional weighting:*
This policy would make most sense in scenarios where Conductor is the
sole means by which instances are launched on private cloud providers.
Conductor would seek to ensure that the usage of the providers was
balanced, by giving a higher probability to whichever provider accounts
are currently least used. As with round robin, the weightings could be
adjusted to reflect differing capacities between providers.
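A sketch of that calculation, assuming usage is measured as a count of running instances per account (the numbers are invented):

# Give each account a share proportional to its headroom relative to
# the busiest account; the least-used account gets the largest share.
usage = {"cluster-a": 30, "cluster-b": 15, "cluster-c": 5}
headroom = dict((name, max(usage.values()) + 1 - used)
                for name, used in usage.items())
total = sum(headroom.values())
weights = dict((name, 100.0 * h / total) for name, h in headroom.items())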
Having used one of those two policies to acquire an initial set of probabilities, administrators could then elect to apply additional policies, including:
*Assigned priority:*
The probability assigned to each provider account would be adjusted according to the provider accounts' priority, by increasing the probability ranking percentage of the higher priority provider accounts at the expense of the lower priority ones.
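One possible reading of that adjustment, as a sketch (the priority values and the scale-then-renormalize rule are invented for illustration):

# Scale each weight by its account's priority, then renormalize so the
# percentages sum to 100 again.
def apply_priority(weights, priorities):
    scaled = dict((a, w * priorities.get(a, 1.0))
                  for a, w in weights.items())
    total = sum(scaled.values())
    return dict((a, 100.0 * w / total) for a, w in scaled.items())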
*Punishing failure:*
Once the audit history records past failures, for each occurrence of a
launch failure within a configurable period (6 hours feels reasonable),
a provider account would be fined 5% from its probability ranking. This
would serve to reduce the attempts to launch on a provider which is
running out of capacity, or experiencing hardware issues etc.
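As a sketch (the 5% fine is from the paragraph above; the renormalization step is an assumption):

# Dock 5 percentage points per recent launch failure, floor at zero,
# then renormalize back to a 100% total.
def punish_failures(weights, recent_failures, fine=5.0):
    adjusted = dict((a, max(w - fine * recent_failures.get(a, 0), 0.0))
                    for a, w in weights.items())
    total = sum(adjusted.values())
    return dict((a, 100.0 * w / total) for a, w in adjusted.items())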
*Cost:*
There are three principal cloud uses which can incur costs: consumption of network bandwidth, consumption of storage, and running a VM. Happily, only one of these needs to be a factor when Conductor is selecting a provider account to launch on: the cost of running the VM.
The amount of network bandwidth that a deployment will consume is pretty much unknowable at launch time. And if it is known, because, for example, a deployment is for a streaming media server, then Administrators can minimize costs by only launching that deployment in a "Low cost bandwidth" pool.
As long as we're not supporting deployments which include the allocation
of additional storage, the costs of storage consumption are an issue to
consider at build & push time, rather than at launch time.
So, in order to allow cost to be another factor which affects the
probability rankings, all we need is a cost per realm, per hour, for
each provider hardware profile, for each provider account.
Admins are going to have to enter that data themselves. That's not as onerous as it sounds given that, for example, costs will often not vary across realms, so the UI can help by pre-filling.
Clearly, for private clouds, no alternative means for getting pricing
data into Conductor exists. For public providers, it would be beneficial
if their APIs exported list pricing, however:
- Few organizations which operate on a scale which justifies using
Aeolus are likely to be paying list price
- Organizations may wish to store and export the adjusted costs that
they'll be assigning to users, rather than the basic costs appearing on
the provider's monthly invoice.
Once the cost data is in Conductor, adjusting the probabilities of each provider account to favour whichever provider could more cheaply host the specific range of hardware profiles would be relatively simple: increase the selection probability percentages of cheaper provider accounts, by a configurable amount, at the expense of the more costly provider accounts.
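As a sketch, one simple version of that adjustment scales each account's weight by the inverse of its hourly cost for the matched hardware profile and renormalizes (the account names and costs are invented):

# Cheaper accounts gain share in proportion to how much cheaper they are.
def apply_cost(weights, hourly_cost):
    scaled = dict((a, w / hourly_cost[a]) for a, w in weights.items())
    total = sum(scaled.values())
    return dict((a, 100.0 * w / total) for a, w in scaled.items())

weights = apply_cost({"ec2-east": 50.0, "rhevm-lab": 50.0},
                     {"ec2-east": 0.32, "rhevm-lab": 0.08})
# rhevm-lab, at a quarter of the cost, ends up with 80% of the share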
Once the stack of modules has completed its calculations, the result is a final set of probabilities. At this point, Conductor would roll the loaded dice and attempt to launch on the winner.
The UI to allow Administrators to enable modules, and to tune the parameters associated with them, could give a real-time representation of the effect of the current settings for a specific deployable. A certain type of Administrator would be very happy tuning options and seeing an immediate change in, for example, a pie chart showing the resultant probability ranking percentages.
In future, we could provide Administrators with an interface to
implement their own selection modules. They might choose, for example,
to vary the selection probability percentages according to time and
date, to increase usage of private cloud at times when they would
otherwise be relatively idle.