New UI for launch from catalog
by Tomas Hrcka
- there are some CSS glitches in this patch
- the button is only on one page for now, but will be added to other pages later
- you need to recompile the scss files
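For example, with the stock Ruby sass CLI (the stylesheet paths here are an assumption and may differ in your checkout):
# sass --update src/app/stylesheets:src/public/stylesheets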
Aeolus "preview" repository?
by James Laska
Greetings,
Once all of the aeolus* components are included in Fedora 16, how do folks
feel about re-purposing the existing fedorapeople repos under the theme
"aeolus-preview"? The frequency and methods of update wouldn't change,
but I suspect how the repo is used will shift once aeolus* is fully
included in Fedora. The preview repo would house, as the name suggests,
preview packages. Preview packages are mid-sprint updates (built as bugs and
stories are addressed). Once a sprint finishes, packages could move
to the standard Fedora 'updates' repository, or just stay in
'preview' (whatever the team feels is appropriate).
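For illustration, enabling such a repo would just be a small yum config; a sketch (the repo id and baseurl here are hypothetical):
# cat /etc/yum.repos.d/aeolus-preview.repo
[aeolus-preview]
name=Aeolus preview packages (mid-sprint updates)
baseurl=https://repos.fedorapeople.org/repos/aeolus/preview/fedora-$releasever/$basearch/
enabled=1
gpgcheck=0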
The virt folks use a similar mechanism for posting pre-release packages
for testing. From their wiki [1], these are the expected consumers of
the preview repo ...
1. Users who want things to stay stable and who aren't necessarily
expecting new features until they update to the next release of
Fedora - these are people with just the updates repo enabled
2. Same as (1) but who are willing to help out testing updates for
the whole distro in order to catch things before they hit the
people in category (1) - these people have the updates and
updates-testing repos enabled
3. Mostly the same as (1) or (2), but have a specific interest in
testing new [aeolus] features and are willing to deal with
[aeolus] regressions - these people enable the updates,
updates-testing and preview repos
4. People who are interested in helping with Fedora <next>
development in general, not just [aeolus] - these people run
rawhide
Thoughts/comments?
Thanks,
James
[1] https://fedoraproject.org/wiki/Virtualization_Preview_Repository
aeolus-image push successful but not seen in RHEV export domain
by James Labocki
I attempted to build/push/launch an image with a RHEV 3.0 provider. The logs indicate the push succeeded, but I do not see the image in the export domain in the RHEV-M 3.0 interface, and when I attempt to deploy the image it is stuck in a pending state. The commands, configs, and logs are below. Any tips on where to begin troubleshooting are appreciated.
# aeolus-image build --target rhevm --template template.tpl
# cat ~/template.tpl
<template>
  <name>tmpl1</name>
  <description>Fedora 14 Template</description>
  <os>
    <name>Fedora</name>
    <arch>x86_64</arch>
    <version>14</version>
    <install type="url">
      <url>http://10.0.3.41/fedora/</url>
    </install>
  </os>
</template>
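As an aside, template XML like this can be sanity-checked with xmllint from libxml2 before building:
# xmllint --noout ~/template.tpl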
# cat /var/www/html/deployable4.xml
<deployable name="Example">
  <description>This is an Example Deployment</description>
  <assemblies>
    <assembly name="frontend" hwp="hwp1">
      <image id="b42c5144-d62f-4a71-99e4-4b11925f9e2c" build="09026658-71aa-4f23-b8b5-dd134141263e">
      </image>
    </assembly>
  </assemblies>
</deployable>
# aeolus-image --images list
IMAGE ID LASTEST PUSHED BUILD NAME OS OS VERSION ARCH DESCRIPTION
b42c5144-d62f-4a71-99e4-4b11925f9e2c 09026658-71aa-4f23-b8b5-dd134141263e
# ls -lRath /mnt/rhevm-nfs/
/mnt/rhevm-nfs/:
total 36K
drwxr-x---. 2 nobody nobody 4.0K Oct 18 09:16 provider_images
drwxr-xr-x. 9 nobody nobody 4.0K Oct 18 09:16 .
drwxr-xr-x. 5 nobody nobody 4.0K Oct 18 09:13 053b897b-cb68-4084-ba8e-835a23b43bad
drwxr-x---. 2 nobody nobody 4.0K Oct 18 09:05 target_images
drwxr-x---. 2 nobody nobody 4.0K Oct 18 09:05 icicles
drwxr-x---. 2 nobody nobody 4.0K Oct 18 09:05 templates
drwxr-x---. 2 nobody nobody 4.0K Oct 18 08:59 builds
drwxr-x---. 2 nobody nobody 4.0K Oct 18 08:59 images
drwxr-xr-x. 5 root root 4.0K Jul 27 05:12 ..
/mnt/rhevm-nfs/provider_images:
total 12K
-rw-r--r--. 1 nobody nobody 104 Oct 18 09:16 4b0bd044-57c6-4331-b0a2-1f9bf083f081
drwxr-x---. 2 nobody nobody 4.0K Oct 18 09:16 .
drwxr-xr-x. 9 nobody nobody 4.0K Oct 18 09:16 ..
/mnt/rhevm-nfs/053b897b-cb68-4084-ba8e-835a23b43bad:
total 20K
drwxr-xr-x. 9 nobody nobody 4.0K Oct 18 09:16 ..
drwxr-xr-x. 2 nobody nobody 4.0K Oct 18 09:16 images
drwxr-xr-x. 5 nobody nobody 4.0K Oct 18 09:13 .
drwxr-xr-x. 4 nobody nobody 4.0K Oct 18 05:04 master
drwxr-xr-x. 2 nobody nobody 4.0K Oct 18 05:02 dom_md
/mnt/rhevm-nfs/053b897b-cb68-4084-ba8e-835a23b43bad/images:
total 8.0K
drwxr-xr-x. 2 nobody nobody 4.0K Oct 18 09:16 .
drwxr-xr-x. 5 nobody nobody 4.0K Oct 18 09:13 ..
/mnt/rhevm-nfs/053b897b-cb68-4084-ba8e-835a23b43bad/master:
total 16K
drwxr-xr-x. 2 nobody nobody 4.0K Oct 18 09:16 vms
drwxr-xr-x. 5 nobody nobody 4.0K Oct 18 09:13 ..
drwxr-xr-x. 4 nobody nobody 4.0K Oct 18 05:04 .
drwxr-xr-x. 2 nobody nobody 4.0K Oct 18 05:04 tasks
/mnt/rhevm-nfs/053b897b-cb68-4084-ba8e-835a23b43bad/master/vms:
total 8.0K
drwxr-xr-x. 2 nobody nobody 4.0K Oct 18 09:16 .
drwxr-xr-x. 4 nobody nobody 4.0K Oct 18 05:04 ..
/mnt/rhevm-nfs/053b897b-cb68-4084-ba8e-835a23b43bad/master/tasks:
total 8.0K
drwxr-xr-x. 2 nobody nobody 4.0K Oct 18 05:04 .
drwxr-xr-x. 4 nobody nobody 4.0K Oct 18 05:04 ..
/mnt/rhevm-nfs/053b897b-cb68-4084-ba8e-835a23b43bad/dom_md:
total 16K
drwxr-xr-x. 5 nobody nobody 4.0K Oct 18 09:13 ..
-rw-r--r--. 1 nobody nobody 2.0K Oct 18 05:02 leases
drwxr-xr-x. 2 nobody nobody 4.0K Oct 18 05:02 .
-rw-r--r--. 1 nobody nobody 344 Oct 18 05:02 metadata
-rw-r--r--. 1 nobody nobody 16 Oct 18 05:02 outbox
-rw-r--r--. 1 nobody nobody 16 Oct 18 05:02 inbox
-rw-r--r--. 1 nobody nobody 8 Oct 18 05:02 ids
/mnt/rhevm-nfs/target_images:
total 11G
drwxr-xr-x. 9 nobody nobody 4.0K Oct 18 09:16 ..
-rw-r--r--. 1 nobody nobody 10G Oct 18 09:07 e144fab2-96dc-43d6-93e1-610ca067465a
drwxr-x---. 2 nobody nobody 4.0K Oct 18 09:05 .
/mnt/rhevm-nfs/icicles:
total 32K
drwxr-xr-x. 9 nobody nobody 4.0K Oct 18 09:16 ..
-rw-r--r--. 1 nobody nobody 22K Oct 18 09:05 1562b87a-b6d7-48da-8839-8ec27b850f0b
drwxr-x---. 2 nobody nobody 4.0K Oct 18 09:05 .
/mnt/rhevm-nfs/templates:
total 12K
drwxr-xr-x. 9 nobody nobody 4.0K Oct 18 09:16 ..
-rw-r--r--. 1 nobody nobody 229 Oct 18 09:05 4f0ebeba-80ef-4086-879d-e6412368f6a1
drwxr-x---. 2 nobody nobody 4.0K Oct 18 09:05 .
/mnt/rhevm-nfs/builds:
total 8.0K
drwxr-xr-x. 9 nobody nobody 4.0K Oct 18 09:16 ..
-rw-r--r--. 1 nobody nobody 0 Oct 18 08:59 09026658-71aa-4f23-b8b5-dd134141263e
drwxr-x---. 2 nobody nobody 4.0K Oct 18 08:59 .
-rw-r--r--. 1 nobody nobody 0 Oct 18 08:46 25df1ee4-43c6-4ff7-bb76-89db004580b8
-rw-r--r--. 1 nobody nobody 0 Oct 18 05:08 22dccdb4-fdf6-47ce-b687-446cf9ad3651
/mnt/rhevm-nfs/images:
total 20K
drwxr-xr-x. 9 nobody nobody 4.0K Oct 18 09:16 ..
-rw-r--r--. 1 nobody nobody 33 Oct 18 08:59 b42c5144-d62f-4a71-99e4-4b11925f9e2c
drwxr-x---. 2 nobody nobody 4.0K Oct 18 08:59 .
-rw-r--r--. 1 nobody nobody 33 Oct 18 08:46 ee3c3614-ecd1-40dd-957b-7ef01f64ca52
-rw-r--r--. 1 nobody nobody 33 Oct 18 05:08 361e9e19-6953-4a39-968f-0441f6c83fbf
# cat /etc/aeolus-configure/nodes/default_configure
#Default setup configuration.
#Set everything up on this box.
#You can override the default behavior
#by creating <fqdn>_configure with the
#class membership and parameters as
#desired, and it will take precedence over this.
#NOTE: Although this suggests the components
#can be deployed on individual boxes (and this
#will likely become common practice), be advised
#that currently, apart from https on the web server
#for conductor, you should consider intermachine
#communications insecure. Securing intermachine
#service calls is on the roadmap.
---
parameters:
  enable_https: true
  enable_security: false
  # Uncomment this section and provide reasonable values to
  # enable RHEV values to be populated.
  rhevm_nfs_server: 10.0.3.41
  rhevm_nfs_export: /exportdomain
  rhevm_nfs_mount_point: /mnt/rhevm-nfs
  rhevm_deltacloud_port: 3005
  # rhevm_deltacloud_username: rhevadmin@cloud.redhat.com
  rhevm_deltacloud_username: admin@rhev3.cloud.redhat.com
  rhevm_deltacloud_password: ***
  rhevm_deltacloud_powershell_url: https://rhev3.cloud.redhat.com:8443/api
  #
  # Uncomment this section and provide appropriate values to configure vmware
  # vmware_api_endpoint: vsphere.server.com
  # vmware_username: username
  # vmware_password: password
  # vmware_datastore: datastore
  # vmware_network_name: network_name
  # vmware_deltacloud_port: 3006
classes:
  - aeolus::conductor
  - aeolus::image-factory
  - aeolus::iwhd
  - aeolus::conductor::seed_data
  # Uncomment this section to include rhev setup
  - aeolus::rhevm
  #
  # Uncomment this section to include vmware setup
  #- aeolus::vmware
# /var/log/imagefactory.log
2011-10-18 09:00:04,465 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Generated XML:
<?xml version="1.0"?>
<domain type="kvm">
  <name>tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a</name>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <uuid>c2984bd1-eff6-4fa4-90bc-5a4785ac55fc</uuid>
  <clock offset="utc"/>
  <vcpu>1</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <console device="pty"/>
    <graphics port="-1" type="vnc"/>
    <interface type="bridge">
      <source bridge="virbr0"/>
      <mac address="52:54:00:eb:23:37"/>
      <model type="virtio"/>
    </interface>
    <input bus="ps2" type="mouse"/>
    <console type="pty">
      <target port="0"/>
    </console>
    <disk device="disk" type="file">
      <target dev="vda" bus="virtio"/>
      <source file="/var/tmp/base-image-e144fab2-96dc-43d6-93e1-610ca067465a.dsk"/>
    </disk>
  </devices>
</domain>
2011-10-18 09:00:04,465 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Base install complete - Doing customization
2011-10-18 09:00:04,465 DEBUG imagefactory.BuildJob.BuildAdaptor pid(3586) Message: Raising event with agent handler (<ImageFactoryAgent(Thread-1, initial)>), changed percent complete from 10 to 30
2011-10-18 09:00:04,466 INFO oz.Guest.FedoraGuest pid(3586) Message: Customizing image
2011-10-18 09:00:04,466 INFO oz.Guest.FedoraGuest pid(3586) Message: No additional packages, files or commands to install, skipping customization
2011-10-18 09:00:04,466 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Customization complete
2011-10-18 09:00:04,466 DEBUG imagefactory.BuildJob.BuildAdaptor pid(3586) Message: Raising event with agent handler (<ImageFactoryAgent(Thread-1, initial)>), changed percent complete from 30 to 50
2011-10-18 09:00:04,466 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Generating ICICLE
2011-10-18 09:00:04,466 INFO oz.Guest.FedoraGuest pid(3586) Message: Generating ICICLE
2011-10-18 09:00:04,466 INFO oz.Guest.FedoraGuest pid(3586) Message: Collection Setup
2011-10-18 09:00:04,468 DEBUG oz.Guest.FedoraGuest pid(3586) Message: DomID: 1
2011-10-18 09:00:04,474 DEBUG oz.Guest.FedoraGuest pid(3586) Message: DomID: 2
2011-10-18 09:00:04,480 INFO oz.Guest.FedoraGuest pid(3586) Message: Setting up guestfs handle for tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a
2011-10-18 09:00:04,480 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Adding disk image /var/tmp/base-image-e144fab2-96dc-43d6-93e1-610ca067465a.dsk
2011-10-18 09:00:04,480 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Launching guestfs
2011-10-18 09:00:06,814 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Inspecting guest OS
2011-10-18 09:00:07,864 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Getting mountpoints
2011-10-18 09:00:07,864 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Root device: /dev/VolGroup00/LogVol00
2011-10-18 09:00:08,034 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Step 1: Uploading ssh keys
2011-10-18 09:00:08,037 INFO oz.Guest.FedoraGuest pid(3586) Message: Generating new openssh key
2011-10-18 09:00:08,037 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Step 2: setup sshd
2011-10-18 09:00:08,061 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Step 3: Open up the firewall
2011-10-18 09:00:08,062 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Step 4: Guest announcement
2011-10-18 09:00:08,072 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Step 5: Set SELinux to permissive mode
2011-10-18 09:00:08,077 INFO oz.Guest.FedoraGuest pid(3586) Message: Cleaning up guestfs handle for tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a
2011-10-18 09:00:08,077 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Syncing
2011-10-18 09:00:08,105 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Unmounting all
2011-10-18 09:00:08,116 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Killing guestfs subprocess
2011-10-18 09:00:08,667 INFO oz.Guest.FedoraGuest pid(3586) Message: Waiting for guest tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a to boot
2011-10-18 09:00:08,677 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Waiting for guest tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a to boot, 300/300
2011-10-18 09:00:18,704 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Waiting for guest tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a to boot, 290/300
2011-10-18 09:00:28,725 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Waiting for guest tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a to boot, 280/300
2011-10-18 09:00:38,747 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Waiting for guest tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a to boot, 270/300
2011-10-18 09:00:48,768 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Waiting for guest tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a to boot, 260/300
2011-10-18 09:00:58,792 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Waiting for guest tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a to boot, 250/300
2011-10-18 09:01:03,431 DEBUG oz.Guest.FedoraGuest pid(3586) Message: IP address of guest is 192.168.122.60
2011-10-18 09:03:04,093 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Waiting for tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a to shutdown, 60/60
2011-10-18 09:03:11,115 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Libvirt Domain Info Failed:
2011-10-18 09:03:11,116 DEBUG oz.Guest.FedoraGuest pid(3586) Message: code is 42
2011-10-18 09:03:11,116 DEBUG oz.Guest.FedoraGuest pid(3586) Message: domain is 10
2011-10-18 09:03:11,116 DEBUG oz.Guest.FedoraGuest pid(3586) Message: message is Domain not found: no domain with matching uuid 'c2984bd1-eff6-4fa4-90bc-5a4785ac55fc'
2011-10-18 09:03:11,117 DEBUG oz.Guest.FedoraGuest pid(3586) Message: level is 2
2011-10-18 09:03:11,117 DEBUG oz.Guest.FedoraGuest pid(3586) Message: str1 is Domain not found: %s
2011-10-18 09:03:11,117 DEBUG oz.Guest.FedoraGuest pid(3586) Message: str2 is no domain with matching uuid 'c2984bd1-eff6-4fa4-90bc-5a4785ac55fc'
2011-10-18 09:03:11,117 DEBUG oz.Guest.FedoraGuest pid(3586) Message: str3 is None
2011-10-18 09:03:11,117 DEBUG oz.Guest.FedoraGuest pid(3586) Message: int1 is -1
2011-10-18 09:03:11,117 DEBUG oz.Guest.FedoraGuest pid(3586) Message: int2 is -1
2011-10-18 09:03:11,117 INFO oz.Guest.FedoraGuest pid(3586) Message: Collection Teardown
2011-10-18 09:03:11,118 DEBUG oz.Guest.FedoraGuest pid(3586) Message: DomID: 1
2011-10-18 09:03:11,120 DEBUG oz.Guest.FedoraGuest pid(3586) Message: DomID: 2
2011-10-18 09:03:11,122 INFO oz.Guest.FedoraGuest pid(3586) Message: Setting up guestfs handle for tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a
2011-10-18 09:03:11,122 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Adding disk image /var/tmp/base-image-e144fab2-96dc-43d6-93e1-610ca067465a.dsk
2011-10-18 09:03:11,122 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Launching guestfs
2011-10-18 09:03:13,476 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Inspecting guest OS
2011-10-18 09:03:14,477 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Getting mountpoints
2011-10-18 09:03:14,477 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Root device: /dev/VolGroup00/LogVol00
2011-10-18 09:03:14,642 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Teardown step 1
2011-10-18 09:03:14,642 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Resetting authorized_keys
2011-10-18 09:03:14,653 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Teardown step 2
2011-10-18 09:03:14,653 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Resetting sshd_config
2011-10-18 09:03:14,659 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Resetting sshd service
2011-10-18 09:03:14,673 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Teardown step 3
2011-10-18 09:03:14,673 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Resetting iptables rules
2011-10-18 09:03:14,673 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Teardown step 4
2011-10-18 09:03:14,673 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Resetting announcement to host
2011-10-18 09:03:14,674 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Removing icicle-nc binary
2011-10-18 09:03:14,674 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Resetting crond service
2011-10-18 09:03:14,681 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Teardown step 5
2011-10-18 09:03:14,684 INFO oz.Guest.FedoraGuest pid(3586) Message: Cleaning up guestfs handle for tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a
2011-10-18 09:03:14,684 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Syncing
2011-10-18 09:03:14,702 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Unmounting all
2011-10-18 09:03:14,713 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Killing guestfs subprocess
2011-10-18 09:03:14,810 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: ICICLE generation complete
2011-10-18 09:03:14,810 INFO oz.Guest.FedoraGuest pid(3586) Message: Cleaning up after install
2011-10-18 09:03:14,810 DEBUG oz.Guest.FedoraGuest pid(3586) Message: Removing modified ISO
2011-10-18 09:03:14,854 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Generated disk image (/var/tmp/base-image-e144fab2-96dc-43d6-93e1-610ca067465a.dsk)
2011-10-18 09:03:14,854 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Doing further Factory specific modification of Oz image
2011-10-18 09:03:14,854 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: init guestfs
2011-10-18 09:03:14,854 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: add input image
2011-10-18 09:03:14,854 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: launch guestfs
2011-10-18 09:03:17,325 INFO imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Creating cloud-info file indicating target (rhevm)
2011-10-18 09:03:17,326 INFO imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Updating rc.local with Audrey conditional
2011-10-18 09:03:18,334 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Removed HWADDR from image's /etc/sysconfig/network-scripts/ifcfg-eth0
2011-10-18 09:03:18,505 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Storing Fedora image at http://localhost:9090/...
2011-10-18 09:03:18,512 INFO imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Creating a bucket returned status 500.
2011-10-18 09:03:18,555 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Setting metadata ({'object_type': 'template', 'uuid': '4f0ebeba-80ef-4086-879d-e6412368f6a1'}) for http://localhost:9090/templates/4f0ebeba-80ef-4086-879d-e6412368f6a1
2011-10-18 09:03:18,634 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Setting metadata ({'object_type': 'icicle', 'uuid': '1562b87a-b6d7-48da-8839-8ec27b850f0b'}) for http://localhost:9090/icicles/1562b87a-b6d7-48da-8839-8ec27b850f0b
2011-10-18 09:05:17,961 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Setting metadata ({'icicle': '1562b87a-b6d7-48da-8839-8ec27b850f0b', 'uuid': 'e144fab2-96dc-43d6-93e1-610ca067465a', 'template': '4f0ebeba-80ef-4086-879d-e6412368f6a1', 'target_parameters': '<?xml version="1.0"?>\n<domain type="kvm">\n <name>tmpl1-e144fab2-96dc-43d6-93e1-610ca067465a</name>\n <memory>1048576</memory>\n <currentMemory>1048576</currentMemory>\n <uuid>c2984bd1-eff6-4fa4-90bc-5a4785ac55fc</uuid>\n <clock offset="utc"/>\n <vcpu>1</vcpu>\n <features>\n <acpi/>\n <apic/>\n <pae/>\n </features>\n <os>\n <type>hvm</type>\n <boot dev="hd"/>\n </os>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>destroy</on_reboot>\n <on_crash>destroy</on_crash>\n <devices>\n <console device="pty"/>\n <graphics port="-1" type="vnc"/>\n <interface type="bridge">\n <source bridge="virbr0"/>\n <mac address="52:54:00:eb:23:37"/>\n <model type="virtio"/>\n </interface>\n <input bus="ps2" type="mouse"/>\n <console type="pty">\n <target port="0"/>\n </console>\n <disk device="disk" type="file">\n <target dev="vda" bus="virtio"/>\n <source file="/var/tmp/base-image-e144fab2-96dc-43d6-93e1-610ca067465a.dsk"/>\n </disk>\n </devices>\n</domain>\n', 'object_type': 'target_image', 'target': 'rhevm', 'build': '09026658-71aa-4f23-b8b5-dd134141263e'}) for http://localhost:9090/target_images/e144fab2-96dc-43d6-93e1-610ca067465a
2011-10-18 09:05:18,210 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Image warehouse storage complete
2011-10-18 09:05:18,210 DEBUG imagefactory.BuildJob.BuildAdaptor pid(3586) Message: Raising event with agent handler (<ImageFactoryAgent(Thread-1, initial)>), changed percent complete from 50 to 100
2011-10-18 09:05:18,211 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Getting metadata (['latest_unpushed']) from http://localhost:9090/images/b42c5144-d62f-4a71-99e4-4b11925f9e2c
2011-10-18 09:05:18,211 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Getting metadata (['latest_build']) from http://localhost:9090/images/b42c5144-d62f-4a71-99e4-4b11925f9e2c
2011-10-18 09:05:18,213 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Setting metadata ({'latest_unpushed': '09026658-71aa-4f23-b8b5-dd134141263e'}) for http://localhost:9090/images/b42c5144-d62f-4a71-99e4-4b11925f9e2c
2011-10-18 09:05:18,214 DEBUG imagefactory.BuildJob.BuildAdaptor pid(3586) Message: Raising event with agent handler (<ImageFactoryAgent(Thread-1, initial)>), changed status from BUILDING to COMPLETED
2011-10-18 09:07:16,106 DEBUG imagefactory.qmfagent.ImageFactoryAgent.ImageFactoryAgent pid(3586) Message: Method called: name = push_image
args = {'credentials': '*** REDACTED ***', 'image': 'b42c5144-d62f-4a71-99e4-4b11925f9e2c', 'build': '', 'providers': ['rhevm']}
handle = <cqmf2.AgentEvent; proxy of <Swig Object of type 'qmf::AgentEvent *' at 0x2b699c0> >
addr = redhat.com:imagefactory:81e453bd-892c-4487-83f3-8108e826a974:image_factory
subtypes = {}
userId = anonymous
2011-10-18 09:07:16,107 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Getting metadata (['latest_unpushed']) from http://localhost:9090/images/b42c5144-d62f-4a71-99e4-4b11925f9e2c
2011-10-18 09:07:16,109 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Querying (http://localhost:9090/target_images/_query) with expression ($build == "09026658-71aa-4f23-b8b5-dd134141263e" && $target == "rhevm")
2011-10-18 09:07:16,111 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Getting metadata (['template']) from http://localhost:9090/target_images/e144fab2-96dc-43d6-93e1-610ca067465a
2011-10-18 09:07:16,112 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Created Image Warehouse instance http://localhost:9090 - buckets(target_images, templates, icicles, provider_images)
2011-10-18 09:07:16,151 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Created Image Warehouse instance http://localhost:9090 - buckets(target_images, templates, icicles, provider_images)
2011-10-18 09:07:16,154 DEBUG imagefactory.BuildJob.BuildAdaptor pid(3586) Message: Raising event with agent handler (<ImageFactoryAgent(Thread-1, initial)>), changed status from NEW to PUSHING
2011-10-18 09:07:16,154 DEBUG imagefactory.BuildJob.BuildAdaptor pid(3586) Message: Raising event with agent handler (<ImageFactoryAgent(Thread-1, initial)>), changed percent complete from 0 to 0
2011-10-18 09:14:23,203 DEBUG imagefactory.builders.BaseBuilder.FedoraBuilder pid(3586) Message: Extracted RHEVM UUID: 97c827ed-d80f-4cdc-b41d-5a376d3d9afb
2011-10-18 09:14:23,369 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Setting metadata ({'target_image': 'e144fab2-96dc-43d6-93e1-610ca067465a', 'uuid': '4b0bd044-57c6-4331-b0a2-1f9bf083f081', 'icicle': 'none', 'target_identifier': '97c827ed-d80f-4cdc-b41d-5a376d3d9afb', 'object_type': 'provider_image', 'provider': 'rhevm'}) for http://localhost:9090/provider_images/4b0bd044-57c6-4331-b0a2-1f9bf083f081
2011-10-18 09:14:23,576 DEBUG imagefactory.BuildJob.BuildAdaptor pid(3586) Message: Raising event with agent handler (<ImageFactoryAgent(Thread-1, initial)>), changed percent complete from 0 to 100
2011-10-18 09:14:23,576 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Setting metadata ({'latest_build': '09026658-71aa-4f23-b8b5-dd134141263e'}) for http://localhost:9090/images/b42c5144-d62f-4a71-99e4-4b11925f9e2c
2011-10-18 09:14:23,578 DEBUG imagefactory.ImageWarehouse.ImageWarehouse pid(3586) Message: Setting metadata ({'latest_unpushed': None}) for http://localhost:9090/images/b42c5144-d62f-4a71-99e4-4b11925f9e2c
2011-10-18 09:14:23,619 DEBUG imagefactory.BuildJob.BuildAdaptor pid(3586) Message: Raising event with agent handler (<ImageFactoryAgent(Thread-1, initial)>), changed status from PUSHING to COMPLETED
# /var/log/condor/GridmanagerLog.aeolus
10/18/11 12:33:23 [3066] BaseResource::UpdateResource:
Machine = "cldmgr01.cloud.redhat.com"
NumJobs = 1
IdleJobs = 1
RunningJobs = 0
SubmitsAllowed = 1
DeltacloudUserName = "admin@rhev3.cloud.redhat.com"
Name = "deltacloud http://localhost:3005/api"
CondorVersion = "$CondorVersion: 7.6.0 Jun 30 2011 $"
ScheddIpAddr = "<10.0.4.21:54364>"
JobLimit = 1000
HashName = "http://localhost:3005/api#admin@rhev3.cloud.redhat.com#/var/lib/aeolus-co..."
MyAddress = "<10.0.4.21:48310>"
GridResourceUnavailableTime = 1318955304
MyCurrentTime = 1318955603
SubmitsWanted = 0
CondorPlatform = "$CondorPlatform: X86_64-RedHat_6.1 $"
CurrentTime = time()
MyType = "Grid"
ScheddName = "cldmgr01.cloud.redhat.com"
Owner = "aeolus"
10/18/11 12:33:23 [3066] Trying to update collector <127.0.0.1:9618>
10/18/11 12:33:23 [3066] Attempting to send update via UDP to collector localhost.localdomain <127.0.0.1:9618>
10/18/11 12:33:24 [3066] GAHP[3167] <- 'DELTACLOUD_VM_STATUS_ALL 186 http://localhost:3005/api admin@rhev3.cloud.redhat.com /var/lib/aeolus-conductor/jobs/job_myDeployment100_frontend_1'
10/18/11 12:33:24 [3066] GAHP[3167] -> 'S'
10/18/11 12:33:24 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:33:24 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:33:24 [3066] GAHP[3167] <- 'RESULTS'
10/18/11 12:33:24 [3066] GAHP[3167] -> 'R'
10/18/11 12:33:24 [3066] GAHP[3167] -> 'S' '1'
10/18/11 12:33:24 [3066] GAHP[3167] -> '186' 'Instance_Fetch_Failure: undefined method `downcase' for nil:NilClass'
10/18/11 12:33:24 [3066] resource http://localhost:3005/api is still down
10/18/11 12:33:35 [3066] Checking proxies
10/18/11 12:33:36 [3066] Evaluating periodic job policy expressions.
10/18/11 12:33:37 [3066] BaseResource::UpdateResource:
Machine = "cldmgr01.cloud.redhat.com"
NumJobs = 1
IdleJobs = 1
RunningJobs = 0
SubmitsAllowed = 1
DeltacloudUserName = "admin@rhev3.cloud.redhat.com"
Name = "deltacloud http://localhost:3005/api"
CondorVersion = "$CondorVersion: 7.6.0 Jun 30 2011 $"
ScheddIpAddr = "<10.0.4.21:54364>"
JobLimit = 1000
HashName = "http://localhost:3005/api#admin@rhev3.cloud.redhat.com#/var/lib/aeolus-co..."
MyAddress = "<10.0.4.21:48310>"
GridResourceUnavailableTime = 1318928622
MyCurrentTime = 1318955617
SubmitsWanted = 0
CondorPlatform = "$CondorPlatform: X86_64-RedHat_6.1 $"
CurrentTime = time()
MyType = "Grid"
ScheddName = "cldmgr01.cloud.redhat.com"
Owner = "aeolus"
10/18/11 12:33:37 [3066] Trying to update collector <127.0.0.1:9618>
10/18/11 12:33:37 [3066] Attempting to send update via UDP to collector localhost.localdomain <127.0.0.1:9618>
10/18/11 12:33:37 [3066] BaseResource::UpdateResource:
Machine = "cldmgr01.cloud.redhat.com"
NumJobs = 1
IdleJobs = 1
RunningJobs = 0
SubmitsAllowed = 1
DeltacloudUserName = "admin@rhev3.cloud.redhat.com"
Name = "deltacloud http://localhost:3005/api"
CondorVersion = "$CondorVersion: 7.6.0 Jun 30 2011 $"
ScheddIpAddr = "<10.0.4.21:54364>"
JobLimit = 1000
HashName = "http://localhost:3005/api#admin@rhev3.cloud.redhat.com#/var/lib/aeolus-co..."
MyAddress = "<10.0.4.21:48310>"
GridResourceUnavailableTime = 1318928617
MyCurrentTime = 1318955617
SubmitsWanted = 0
CondorPlatform = "$CondorPlatform: X86_64-RedHat_6.1 $"
CurrentTime = time()
MyType = "Grid"
ScheddName = "cldmgr01.cloud.redhat.com"
Owner = "aeolus"
10/18/11 12:33:37 [3066] Trying to update collector <127.0.0.1:9618>
10/18/11 12:33:37 [3066] Attempting to send update via UDP to collector localhost.localdomain <127.0.0.1:9618>
10/18/11 12:33:37 [3066] GAHP[3167] <- 'DELTACLOUD_VM_STATUS_ALL 187 http://localhost:3005/api admin@rhev3.cloud.redhat.com /var/lib/aeolus-conductor/jobs/job_testz_frontend_2'
10/18/11 12:33:37 [3066] GAHP[3167] -> 'S'
10/18/11 12:33:37 [3066] GAHP[3167] <- 'RESULTS'
10/18/11 12:33:37 [3066] GAHP[3167] -> 'R'
10/18/11 12:33:37 [3066] GAHP[3167] -> 'S' '1'
10/18/11 12:33:37 [3066] GAHP[3167] -> '187' 'Invalid_Password_File'
10/18/11 12:33:37 [3066] resource http://localhost:3005/api is still down
10/18/11 12:33:40 [3066] Received CHECK_LEASES signal
10/18/11 12:33:40 [3066] GAHP[3167] <- 'RESULTS'
10/18/11 12:33:40 [3066] GAHP[3167] -> 'S' '0'
10/18/11 12:33:40 [3066] Evaluating staleness of remote job statuses.
10/18/11 12:33:40 [3066] in doContactSchedd()
10/18/11 12:33:40 [3066] querying for renewed leases
10/18/11 12:33:40 [3066] querying for removed/held jobs
10/18/11 12:33:40 [3066] Using constraint ((Owner=?="aeolus"&&JobUniverse==9)) && ((Managed =!= "ScheddDone")) && (JobStatus == 3 || JobStatus == 4 || (JobStatus == 5 && Managed =?= "External"))
10/18/11 12:33:40 [3066] Fetched 0 job ads from schedd
10/18/11 12:33:40 [3066] leaving doContactSchedd()
10/18/11 12:33:42 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:33:42 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:33:43 [3066] GAHP[3167] <- 'DELTACLOUD_VM_STATUS_ALL 188 http://localhost:3005/api admin@rhev3.cloud.redhat.com /var/lib/aeolus-conductor/jobs/job_TestX_frontend_1'
10/18/11 12:33:43 [3066] GAHP[3167] -> 'S'
10/18/11 12:33:43 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:33:43 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:33:43 [3066] GAHP[3167] <- 'RESULTS'
10/18/11 12:33:43 [3066] GAHP[3167] -> 'R'
10/18/11 12:33:43 [3066] GAHP[3167] -> 'S' '1'
10/18/11 12:33:43 [3066] GAHP[3167] -> '188' 'Invalid_Password_File'
10/18/11 12:33:43 [3066] resource http://localhost:3005/api is still down
10/18/11 12:33:54 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:33:54 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:34:12 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:34:12 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:34:13 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:34:13 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:34:24 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:34:24 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:34:40 [3066] Received CHECK_LEASES signal
10/18/11 12:34:40 [3066] GAHP[3167] <- 'RESULTS'
10/18/11 12:34:40 [3066] GAHP[3167] -> 'S' '0'
10/18/11 12:34:40 [3066] Evaluating staleness of remote job statuses.
10/18/11 12:34:40 [3066] in doContactSchedd()
10/18/11 12:34:40 [3066] querying for renewed leases
10/18/11 12:34:40 [3066] querying for removed/held jobs
10/18/11 12:34:40 [3066] Using constraint ((Owner=?="aeolus"&&JobUniverse==9)) && ((Managed =!= "ScheddDone")) && (JobStatus == 3 || JobStatus == 4 || (JobStatus == 5 && Managed =?= "External"))
10/18/11 12:34:40 [3066] Fetched 0 job ads from schedd
10/18/11 12:34:40 [3066] leaving doContactSchedd()
10/18/11 12:34:42 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:34:42 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:34:43 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:34:43 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:34:54 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:34:54 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:35:12 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:35:12 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:35:13 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:35:13 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:35:24 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:35:24 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:35:40 [3066] Received CHECK_LEASES signal
10/18/11 12:35:40 [3066] GAHP[3167] <- 'RESULTS'
10/18/11 12:35:40 [3066] GAHP[3167] -> 'S' '0'
10/18/11 12:35:40 [3066] Evaluating staleness of remote job statuses.
10/18/11 12:35:40 [3066] in doContactSchedd()
10/18/11 12:35:40 [3066] querying for renewed leases
10/18/11 12:35:40 [3066] querying for removed/held jobs
10/18/11 12:35:40 [3066] Using constraint ((Owner=?="aeolus"&&JobUniverse==9)) && ((Managed =!= "ScheddDone")) && (JobStatus == 3 || JobStatus == 4 || (JobStatus == 5 && Managed =?= "External"))
10/18/11 12:35:40 [3066] Fetched 0 job ads from schedd
10/18/11 12:35:40 [3066] leaving doContactSchedd()
10/18/11 12:35:42 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:35:42 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
10/18/11 12:35:43 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api.
10/18/11 12:35:43 [3066] BaseResource::DoBatchStatus for http://localhost:3005/api skipped for 30 seconds because the resource is down.
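One way to cross-check what actually landed in the warehouse is to read the provider_image record back out of iwhd directly; a minimal sketch, using the object UUID and port reported in the imagefactory log above:
# curl http://localhost:9090/provider_images/4b0bd044-57c6-4331-b0a2-1f9bf083f081/target_identifier
# curl http://localhost:9090/provider_images/4b0bd044-57c6-4331-b0a2-1f9bf083f081/target_image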
[PATCH] Use a thread to do instance creates V4.
by Ian Main
From: Chris Lalancette <clalance@redhat.com>
I modified Chris' patch to use a thread instead of a fork. This pushes
the communication with deltacloud into a thread, which can take as long
as it needs to complete the start.
In the future we may want to put a retry in here too.
You will notice that as of V2 we now set allow_concurrency = true in the
postgres config. I'm a little worried that this means it may not work
with other DB providers.
V4: Fix whitespace errors
Signed-off-by: Ian Main <imain@redhat.com>
---
src/app/util/taskomatic.rb | 81 ++++++++++++++++++++++++++++++-------------
src/config/database.pg | 3 ++
2 files changed, 59 insertions(+), 25 deletions(-)
diff --git a/src/app/util/taskomatic.rb b/src/app/util/taskomatic.rb
index 84dabd0..2f08daa 100644
--- a/src/app/util/taskomatic.rb
+++ b/src/app/util/taskomatic.rb
@@ -26,27 +26,15 @@ module Taskomatic
task.state = Task::STATE_PENDING
task.save!
- task.instance.provider_account = match.provider_account
- task.instance.create_auth_key unless task.instance.instance_key
-
- dcloud_instance = create_dcloud_instance(task.instance, match)
-
- handle_dcloud_error(dcloud_instance)
+ create_dcloud_instance(task.instance, match)
task.state = Task::STATE_RUNNING
- task.save!
-
- Rails.logger.info "Task instance create completed with key #{dcloud_instance.id} and state #{dcloud_instance.state}"
- task.instance.external_key = dcloud_instance.id
- task.instance.state = dcloud_to_instance_state(dcloud_instance.state)
- task.instance.save!
rescue HttpException => ex
task.failure_code = Task::FAILURE_PROVIDER_CONTACT_FAILED
handle_create_instance_error(task, ex)
rescue Exception => ex
handle_create_instance_error(task, ex)
ensure
- task.instance.save!
task.save!
end
end
@@ -146,18 +134,61 @@ module Taskomatic
end
def self.create_dcloud_instance(instance, match)
- client = match.provider_account.connect
-
- overrides = HardwareProfile.generate_override_property_values(instance.hardware_profile, match.hwp)
-
- client.create_instance(:image_id => match.provider_image.target_identifier,
- :name => instance.name.tr("/", "-"),
- :hwp_id => match.hwp.external_key,
- :hwp_memory => overrides[:memory],
- :hwp_cpu => overrides[:cpu],
- :hwp_storage => overrides[:storage],
- :realm_id => (match.realm.external_key rescue nil),
- :keyname => (instance.instance_key.name))
+ # because creating an instance can take a potentially long time (and
+ # creating multiple of them just prolongs this), we start a new thread
+ # that does this work. The new thread continues to run in the background
+ # and communicates the status back to the main UI via the database
+
+ # This cleans up old DB connections. AR creates a new connection for each
+ # thread. This can leak FDs if you don't clean them up. Basically this
+ # ensures the previously used FD is cleaned up.
+ ActiveRecord::Base.connection_pool.clear_stale_cached_connections!
+
+ Thread.new do
+ begin
+ # These all need to be reloaded because when you create a new thread
+ # AR creates a new connection and these objects become stale.
+ #
+ # Everything used here must be reloaded.
+ instance.reload
+ match.provider_account.reload
+ match.hwp.reload
+ match.realm.reload if match.realm
+
+ instance.provider_account = match.provider_account
+ instance.create_auth_key unless instance.instance_key
+
+ client = match.provider_account.connect
+
+ overrides = HardwareProfile.generate_override_property_values(instance.hardware_profile, match.hwp)
+
+ dcloud_instance = client.create_instance(:image_id => match.provider_image.target_identifier,
+ :name => instance.name.tr("/", "-"),
+ :hwp_id => match.hwp.external_key,
+ :hwp_memory => overrides[:memory],
+ :hwp_cpu => overrides[:cpu],
+ :hwp_storage => overrides[:storage],
+ :realm_id => (match.realm.external_key rescue nil),
+ :keyname => (instance.instance_key.name))
+
+ handle_dcloud_error(dcloud_instance)
+
+ Rails.logger.info "Task instance create completed with key #{dcloud_instance.id} and state #{dcloud_instance.state}"
+ instance.external_key = dcloud_instance.id
+ instance.state = dcloud_to_instance_state(dcloud_instance.state)
+ rescue Exception => ex
+ # any sort of exception causes us to put the instance in CREATE_FAILED
+ # FIXME: if the exception is raised *after* the create_instance, this
+ # isn't true and will result in a rogue instance
+ Rails.logger.error ex.message
+ Rails.logger.error ex.backtrace.join("\n")
+ instance.state = Instance::STATE_CREATE_FAILED
+ raise ex
+ ensure
+ instance.save!
+ Thread.exit
+ end
+ end
end
def self.matches(instance)
diff --git a/src/config/database.pg b/src/config/database.pg
index 19fe4e6..72a3895 100644
--- a/src/config/database.pg
+++ b/src/config/database.pg
@@ -45,6 +45,7 @@ development:
username: aeolus
password: v23zj59an
host: localhost
+ allow_concurrency: true
min_messages: warning
# Warning: The database defined as 'test' will be erased and
@@ -56,6 +57,7 @@ test: &TEST
username: aeolus
password: v23zj59an
host: localhost
+ allow_concurrency: true
min_messages: warning
production:
@@ -63,6 +65,7 @@ production:
database: conductor
username: aeolus
password: v23zj59an
+ allow_concurrency: true
host: localhost
cucumber:
--
1.7.6.2
[PATCH] Fixed up failing api rspec tests.
by Chris Alfonso
Removed image icicles, since they're not used.
The API for target images returns a single provider (may need to revisit).
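To re-run just these specs, something along these lines should work from the conductor source tree (the exact rspec entry point here is an assumption and may differ):
# cd src && bundle exec rspec spec/controllers/api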
---
src/spec/controllers/api/builds_controller_spec.rb | 3 +
src/spec/controllers/api/images_controller_spec.rb | 4 +
.../api/provider_images_controller_spec.rb | 6 +-
.../api/target_images_controller_spec.rb | 16 ++--
src/spec/vcr/cassettes/iwhd_connection.yml | 104 ++++++++++++++++++-
5 files changed, 114 insertions(+), 19 deletions(-)
diff --git a/src/spec/controllers/api/builds_controller_spec.rb b/src/spec/controllers/api/builds_controller_spec.rb
index d5d35fd..feafaf2 100644
--- a/src/spec/controllers/api/builds_controller_spec.rb
+++ b/src/spec/controllers/api/builds_controller_spec.rb
@@ -124,6 +124,7 @@ describe Api::BuildsController do
context "when there is NOT wanted build" do
before(:each) do
+ send_and_accept_xml
Aeolus::Image::Warehouse::ImageBuild.stub(:find).and_return(nil)
get :show, :id => '10'
end
@@ -143,6 +144,7 @@ describe Api::BuildsController do
describe "#index" do
before(:each) do
+ send_and_accept_xml
get :index
end
@@ -155,6 +157,7 @@ describe Api::BuildsController do
describe "#show" do
before(:each) do
+ send_and_accept_xml
get :show, :id => '5'
end
diff --git a/src/spec/controllers/api/images_controller_spec.rb b/src/spec/controllers/api/images_controller_spec.rb
index c2c8495..180fc2d 100644
--- a/src/spec/controllers/api/images_controller_spec.rb
+++ b/src/spec/controllers/api/images_controller_spec.rb
@@ -101,6 +101,7 @@ describe Api::ImagesController do
context "when there is no image" do
before(:each) do
+ send_and_accept_xml
Aeolus::Image::Warehouse::Image.stub(:all).and_return([])
get :index
end
@@ -141,6 +142,7 @@ describe Api::ImagesController do
context "when there is NOT wanted image" do
before(:each) do
+ send_and_accept_xml
Aeolus::Image::Warehouse::Image.stub(:find).and_return(nil)
get :show, :id => '5'
end
@@ -160,6 +162,7 @@ describe Api::ImagesController do
describe "#index" do
before(:each) do
+ send_and_accept_xml
get :index
end
@@ -172,6 +175,7 @@ describe Api::ImagesController do
describe "#show" do
before(:each) do
+ send_and_accept_xml
get :show, :id => '5'
end
diff --git a/src/spec/controllers/api/provider_images_controller_spec.rb b/src/spec/controllers/api/provider_images_controller_spec.rb
index 2661c8f..b2f49ac 100644
--- a/src/spec/controllers/api/provider_images_controller_spec.rb
+++ b/src/spec/controllers/api/provider_images_controller_spec.rb
@@ -30,7 +30,7 @@ describe Api::ProviderImagesController do
:id => '300')
@pimage = mock(Aeolus::Image::Warehouse::ProviderImage,
:id => '17',
- :icicle => '30',
+ :provider_name => 'provider_name',
:object_type => 'provider_image',
:target_identifier => '80',
:target_image => @timage)
@@ -65,7 +65,6 @@ describe Api::ProviderImagesController do
resp = Hash.from_xml(response.body)
@provider_image_collection.each_with_index do |pimage, index|
resp['provider_images']['provider_image'][index]['id'].should == pimage.id
- resp['provider_images']['provider_image'][index]['icicle'].should == pimage.icicle
resp['provider_images']['provider_image'][index]['object_type'].should == pimage.object_type
resp['provider_images']['provider_image'][index]['target_identifier'].should == pimage.target_identifier
resp['provider_images']['provider_image'][index]['target_image']['id'].should == pimage.target_image.id
@@ -85,7 +84,6 @@ describe Api::ProviderImagesController do
it "should have a provider image with corrent attributes" do
resp = Hash.from_xml(response.body)
resp['provider_images']['provider_image']['id'].should == @pimage.id
- resp['provider_images']['provider_image']['icicle'].should == @pimage.icicle
resp['provider_images']['provider_image']['object_type'].should == @pimage.object_type
resp['provider_images']['provider_image']['target_identifier'].should == @pimage.target_identifier
resp['provider_images']['provider_image']['target_image']['id'].should == @pimage.target_image.id
@@ -123,7 +121,7 @@ describe Api::ProviderImagesController do
it "should have a provider image with correct attributes" do
resp = Hash.from_xml(response.body)
resp['provider_image']['id'].should == @pimage.id
- resp['provider_image']['icicle'].should == @pimage.icicle
+ resp['provider_image']['provider'].should == @pimage.provider_name
resp['provider_image']['object_type'].should == @pimage.object_type
resp['provider_image']['target_identifier'].should == @pimage.target_identifier
resp['provider_image']['target_image']['id'].should == @pimage.target_image.id
diff --git a/src/spec/controllers/api/target_images_controller_spec.rb b/src/spec/controllers/api/target_images_controller_spec.rb
index 5943111..56d1f4e 100644
--- a/src/spec/controllers/api/target_images_controller_spec.rb
+++ b/src/spec/controllers/api/target_images_controller_spec.rb
@@ -32,7 +32,6 @@ describe Api::TargetImagesController do
:id => '543')
@timage = mock(Aeolus::Image::Warehouse::TargetImage,
:id => '100',
- :icicle => '321',
:object_type => 'target_image',
:template => '12',
:build => @build,
@@ -50,6 +49,7 @@ describe Api::TargetImagesController do
describe "#index" do
context "when there are 3 target images" do
before(:each) do
+ send_and_accept_xml
@timage_collection = [@timage, @timage, @timage]
Aeolus::Image::Warehouse::TargetImage.stub(:all).and_return(@timage_collection)
get :index
@@ -65,11 +65,10 @@ describe Api::TargetImagesController do
resp = Hash.from_xml(response.body)
@timage_collection.each_with_index do |timage, index|
resp['target_images']['target_image'][index]['id'].should == timage.id
- resp['target_images']['target_image'][index]['icicle'].should == timage.icicle
resp['target_images']['target_image'][index]['object_type'].should == timage.object_type
resp['target_images']['target_image'][index]['template'].should == timage.template
resp['target_images']['target_image'][index]['build']['id'].should == timage.build.id
- pimgs = resp['target_images']['target_image'][index]['provider_images']
+ pimgs = resp['target_images']['target_image'][index]['provider_image']
pimgs['provider_image']['id'].should == @pimage.id
end
end
@@ -77,6 +76,7 @@ describe Api::TargetImagesController do
context "when there is only 1 target images" do
before(:each) do
+ send_and_accept_xml
Aeolus::Image::Warehouse::TargetImage.stub(:all).and_return([@timage])
get :index
end
@@ -86,11 +86,10 @@ describe Api::TargetImagesController do
it "should have a target image with correct attributes" do
resp = Hash.from_xml(response.body)
resp['target_images']['target_image']['id'].should == @timage.id
- resp['target_images']['target_image']['icicle'].should == @timage.icicle
resp['target_images']['target_image']['object_type'].should == @timage.object_type
resp['target_images']['target_image']['template'].should == @timage.template
resp['target_images']['target_image']['build']['id'].should == @timage.build.id
- pimgs = resp['target_images']['target_image']['provider_images']
+ pimgs = resp['target_images']['target_image']['provider_image']
pimgs['provider_image']['id'].should == @pimage.id
end
end
@@ -113,7 +112,6 @@ describe Api::TargetImagesController do
describe "#show" do
context "when there is wanted target image in warehouse" do
before(:each) do
-
Aeolus::Image::Warehouse::TargetImage.stub(:find).and_return(@timage)
get :show, :id => '100'
end
@@ -123,18 +121,17 @@ describe Api::TargetImagesController do
it "should have a target image with correct attributes" do
resp = Hash.from_xml(response.body)
resp['target_image']['id'].should == @timage.id
- resp['target_image']['icicle'].should == @timage.icicle
resp['target_image']['object_type'].should == @timage.object_type
resp['target_image']['template'].should == @timage.template
resp['target_image']['build']['id'].should == @timage.build.id
- pimgs = resp['target_image']['provider_images']
+ pimgs = resp['target_image']['provider_image']
pimgs['provider_image']['id'].should == @pimage.id
end
end
context "when there is NOT wanted target image in warehouse" do
before(:each) do
-
+ send_and_accept_xml
Aeolus::Image::Warehouse::TargetImage.stub(:find).and_return(nil)
end
@@ -178,6 +175,7 @@ describe Api::TargetImagesController do
describe "#index" do
before(:each) do
+ send_and_accept_xml
get :index
end
diff --git a/src/spec/vcr/cassettes/iwhd_connection.yml b/src/spec/vcr/cassettes/iwhd_connection.yml
index 91fba47..0b7d7c2 100644
--- a/src/spec/vcr/cassettes/iwhd_connection.yml
+++ b/src/spec/vcr/cassettes/iwhd_connection.yml
@@ -378,7 +378,7 @@
<object>
<object_body path="http://localhost:9090/images/34c87aa0-3405-42f8-820e-309054029295"/>
<object_attr_list path="http://localhost:9090/images/34c87aa0-3405-42f8-820e-309054029295/_attrs"/>
- <object_attr name="icicle" path="http://localhost:9090/images/34c87aa0-3405-42f8-820e-309054029295/icicle"/>
+ <object_attr name="provider" path="http://localhost:9090/images/34c87aa0-3405-42f8-820e-309054029295/provider"/>
<object_attr name="object_type" path="http://localhost:9090/images/34c87aa0-3405-42f8-820e-309054029295/object_..."/>
<object_attr name="target" path="http://localhost:9090/images/34c87aa0-3405-42f8-820e-309054029295/target"/>
<object_attr name="target_parameters" path="http://localhost:9090/images/34c87aa0-3405-42f8-820e-309054029295/target_..."/>
@@ -550,7 +550,7 @@
<object>
<object_body path="http://localhost:9090/images/ef6fd2bb-50d4-4f53-ae92-a68b01f82cf1"/>
<object_attr_list path="http://localhost:9090/images/ef6fd2bb-50d4-4f53-ae92-a68b01f82cf1/_attrs"/>
- <object_attr name="icicle" path="http://localhost:9090/images/ef6fd2bb-50d4-4f53-ae92-a68b01f82cf1/icicle"/>
+ <object_attr name="provider" path="http://localhost:9090/images/ef6fd2bb-50d4-4f53-ae92-a68b01f82cf1/provider"/>
<object_attr name="object_type" path="http://localhost:9090/images/ef6fd2bb-50d4-4f53-ae92-a68b01f82cf1/object_..."/>
<object_attr name="target" path="http://localhost:9090/images/ef6fd2bb-50d4-4f53-ae92-a68b01f82cf1/target"/>
<object_attr name="target_parameters" path="http://localhost:9090/images/ef6fd2bb-50d4-4f53-ae92-a68b01f82cf1/target_..."/>
@@ -937,7 +937,7 @@
<object_body path="http://localhost:9090/target_images/1a955a06-ca92-4546-9121-6c35e162f67b"/>
<object_attr_list path="http://localhost:9090/target_images/1a955a06-ca92-4546-9121-6c35e162f67b/..."/>
<object_attr name="build" path="http://localhost:9090/target_images/1a955a06-ca92-4546-9121-6c35e162f67b/..."/>
- <object_attr name="icicle" path="http://localhost:9090/target_images/1a955a06-ca92-4546-9121-6c35e162f67b/..."/>
+ <object_attr name="provider" path="http://localhost:9090/target_images/1a955a06-ca92-4546-9121-6c35e162f67b/..."/>
<object_attr name="object_type" path="http://localhost:9090/target_images/1a955a06-ca92-4546-9121-6c35e162f67b/..."/>
<object_attr name="target" path="http://localhost:9090/target_images/1a955a06-ca92-4546-9121-6c35e162f67b/..."/>
<object_attr name="target_parameters" path="http://localhost:9090/target_images/1a955a06-ca92-4546-9121-6c35e162f67b/..."/>
@@ -1133,7 +1133,7 @@
<object_body path="http://localhost:9090/target_images/ce1c6d32-5e26-4feb-9b86-83435a39b1f0"/>
<object_attr_list path="http://localhost:9090/target_images/ce1c6d32-5e26-4feb-9b86-83435a39b1f0/..."/>
<object_attr name="build" path="http://localhost:9090/target_images/ce1c6d32-5e26-4feb-9b86-83435a39b1f0/..."/>
- <object_attr name="icicle" path="http://localhost:9090/target_images/ce1c6d32-5e26-4feb-9b86-83435a39b1f0/..."/>
+ <object_attr name="provider" path="http://localhost:9090/target_images/ce1c6d32-5e26-4feb-9b86-83435a39b1f0/..."/>
<object_attr name="object_type" path="http://localhost:9090/target_images/ce1c6d32-5e26-4feb-9b86-83435a39b1f0/..."/>
<object_attr name="target" path="http://localhost:9090/target_images/ce1c6d32-5e26-4feb-9b86-83435a39b1f0/..."/>
<object_attr name="target_parameters" path="http://localhost:9090/target_images/ce1c6d32-5e26-4feb-9b86-83435a39b1f0/..."/>
@@ -1495,7 +1495,7 @@
<object>
<object_body path="http://localhost:9090/provider_images/3cdd9f26-b211-454b-89ff-655b0ebbff03"/>
<object_attr_list path="http://localhost:9090/provider_images/3cdd9f26-b211-454b-89ff-655b0ebbff0..."/>
- <object_attr name="icicle" path="http://localhost:9090/provider_images/3cdd9f26-b211-454b-89ff-655b0ebbff0..."/>
+ <object_attr name="provider" path="http://localhost:9090/provider_images/3cdd9f26-b211-454b-89ff-655b0ebbff0..."/>
<object_attr name="object_type" path="http://localhost:9090/provider_images/3cdd9f26-b211-454b-89ff-655b0ebbff0..."/>
<object_attr name="provider" path="http://localhost:9090/provider_images/3cdd9f26-b211-454b-89ff-655b0ebbff0..."/>
<object_attr name="target_identifier" path="http://localhost:9090/provider_images/3cdd9f26-b211-454b-89ff-655b0ebbff0..."/>
@@ -1667,7 +1667,7 @@
<object>
<object_body path="http://localhost:9090/provider_images/aa57b1ab-f565-4a7a-b054-58ef5af1a053"/>
<object_attr_list path="http://localhost:9090/provider_images/aa57b1ab-f565-4a7a-b054-58ef5af1a05..."/>
- <object_attr name="icicle" path="http://localhost:9090/provider_images/aa57b1ab-f565-4a7a-b054-58ef5af1a05..."/>
+ <object_attr name="provider" path="http://localhost:9090/provider_images/aa57b1ab-f565-4a7a-b054-58ef5af1a05..."/>
<object_attr name="object_type" path="http://localhost:9090/provider_images/aa57b1ab-f565-4a7a-b054-58ef5af1a05..."/>
<object_attr name="provider" path="http://localhost:9090/provider_images/aa57b1ab-f565-4a7a-b054-58ef5af1a05..."/>
<object_attr name="target_identifier" path="http://localhost:9090/provider_images/aa57b1ab-f565-4a7a-b054-58ef5af1a05..."/>
@@ -2010,3 +2010,95 @@
- "0"
body:
http_version: "1.1"
+- !ruby/struct:VCR::HTTPInteraction
+ request: !ruby/struct:VCR::Request
+ method: :get
+ uri: http://localhost:9090/target_images/1a955a06-ca92-4546-9121-6c35e162f67b/...
+ body:
+ headers:
+ accept:
+ - "*/*; q=0.5, application/xml"
+ accept-encoding:
+ - gzip, deflate
+ content-length:
+ - "0"
+ response: !ruby/struct:VCR::Response
+ status: !ruby/struct:VCR::ResponseStatus
+ code: 404
+ message: Not Found
+ headers:
+ date:
+ - Tue, 18 Oct 2011 23:24:03 GMT
+ content-length:
+ - "0"
+ body:
+ http_version: "1.1"
+- !ruby/struct:VCR::HTTPInteraction
+ request: !ruby/struct:VCR::Request
+ method: :get
+ uri: http://localhost:9090/target_images/ce1c6d32-5e26-4feb-9b86-83435a39b1f0/...
+ body:
+ headers:
+ accept:
+ - "*/*; q=0.5, application/xml"
+ accept-encoding:
+ - gzip, deflate
+ content-length:
+ - "0"
+ response: !ruby/struct:VCR::Response
+ status: !ruby/struct:VCR::ResponseStatus
+ code: 404
+ message: Not Found
+ headers:
+ date:
+ - Tue, 18 Oct 2011 23:24:07 GMT
+ content-length:
+ - "0"
+ body:
+ http_version: "1.1"
+- !ruby/struct:VCR::HTTPInteraction
+ request: !ruby/struct:VCR::Request
+ method: :get
+ uri: http://localhost:9090/images/34c87aa0-3405-42f8-820e-309054029295/provider
+ body:
+ headers:
+ accept:
+ - "*/*; q=0.5, application/xml"
+ accept-encoding:
+ - gzip, deflate
+ content-length:
+ - "0"
+ response: !ruby/struct:VCR::Response
+ status: !ruby/struct:VCR::ResponseStatus
+ code: 404
+ message: Not Found
+ headers:
+ date:
+ - Tue, 18 Oct 2011 23:32:29 GMT
+ content-length:
+ - "0"
+ body:
+ http_version: "1.1"
+- !ruby/struct:VCR::HTTPInteraction
+ request: !ruby/struct:VCR::Request
+ method: :get
+ uri: http://localhost:9090/images/ef6fd2bb-50d4-4f53-ae92-a68b01f82cf1/provider
+ body:
+ headers:
+ accept:
+ - "*/*; q=0.5, application/xml"
+ accept-encoding:
+ - gzip, deflate
+ content-length:
+ - "0"
+ response: !ruby/struct:VCR::Response
+ status: !ruby/struct:VCR::ResponseStatus
+ code: 404
+ message: Not Found
+ headers:
+ date:
+ - Tue, 18 Oct 2011 23:35:08 GMT
+ content-length:
+ - "0"
+ body:
+ http_version: "1.1"
--
1.7.6.4
[PATCH 1/3] Document undocumented hard-coded 300 second timeout and default timeout value.
by Steven Dake
Signed-off-by: Steven Dake <sdake@redhat.com>
---
man/oz-install.1 | 11 ++++++++---
1 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/man/oz-install.1 b/man/oz-install.1
index b497813..7cf0077 100644
--- a/man/oz-install.1
+++ b/man/oz-install.1
@@ -70,9 +70,14 @@ will undefine the libvirt guest with the same name or UUID and delete
the diskimage, so it should be used with caution.
.TP
.B "\-t"
-Use a timeout value of \fBtimeout\fR for installation, rather than the
-oz default. This can be useful if you know you have slower storage
-and want to wait longer for the installation to timeout.
+Terminate the installation of the guest image in \fBtimeout\fR seconds
+rather than the default of 1200 seconds. This value can be increased in the
+case of slow storage or multiple oz-install operations on the same machine
+consuming the disk bandwidth.
+
+Please note there is a separate termination action that occurs if 300 seconds
+elapse before any data is written to the VM image. This timer value is not
+configurable.
.TP
.B "\-u"
Customize the image after installation. This generally installs
--
1.7.4.4
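For example, to allow an hour for a slow install instead of the 1200-second
default (the TDL file name below is only illustrative):

# oz-install -t 3600 fedora14.tdl

The separate 300 second timer that fires when no data has been written to the
VM image applies regardless of the -t value.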
12 years, 6 months
[PATCH] configuration for iwhd/imagefactory/conductor oauth authentication
by Mo Morsi
---
bin/aeolus-configure | 7 +-
recipes/aeolus/manifests/conductor.pp | 7 +-
recipes/aeolus/manifests/image-factory.pp | 10 ++-
recipes/aeolus/manifests/iwhd.pp | 11 ++-
recipes/aeolus/templates/conductor-settings.yml | 23 ++++
recipes/aeolus/templates/imagefactory.conf | 15 +++
recipes/aeolus/templates/iwhd.init | 146 +++++++++++++++++++++++
7 files changed, 213 insertions(+), 6 deletions(-)
mode change 100644 => 100755 bin/aeolus-configure
create mode 100644 recipes/aeolus/templates/conductor-settings.yml
create mode 100644 recipes/aeolus/templates/imagefactory.conf
create mode 100755 recipes/aeolus/templates/iwhd.init
diff --git a/bin/aeolus-configure b/bin/aeolus-configure
old mode 100644
new mode 100755
index 8f65d37..7123226
--- a/bin/aeolus-configure
+++ b/bin/aeolus-configure
@@ -68,6 +68,11 @@ echo "Launching aeolus configuration recipe..."
export FACTER_AEOLUS_ENABLE_HTTPS=true
export FACTER_AEOLUS_ENABLE_SECURITY=false
+export FACTER_IWHD_OAUTH_USER=`uuidgen`
+export FACTER_IWHD_OAUTH_PASSWORD=`uuidgen`
+export FACTER_IMAGEFACTORY_OAUTH_USER=`uuidgen`
+export FACTER_IMAGEFACTORY_OAUTH_PASSWORD=`uuidgen`
+
NODE_ARRAY=(`echo $PUPPET_NODE | tr "," "\n"`)
for x in "${NODE_ARRAY[@]}"
do
@@ -77,4 +82,4 @@ do
--logdest=/var/log/aeolus-configure/aeolus-configure.log \
--logdest=console \
$LOGLEVEL
-done
\ No newline at end of file
+done
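A note on the FACTER_* exports above: Facter picks up any environment
variable prefixed with FACTER_ as a fact (lowercased), which is how the
uuidgen values become visible to the ERB templates further down. A minimal
sketch:

# export FACTER_IWHD_OAUTH_USER=$(uuidgen)
# facter iwhd_oauth_user

The second command prints the uuid just generated, and
<%= iwhd_oauth_user %> in the templates below renders to the same value the
puppet run was launched with.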
diff --git a/recipes/aeolus/manifests/conductor.pp b/recipes/aeolus/manifests/conductor.pp
index e5c63f5..08ed142 100644
--- a/recipes/aeolus/manifests/conductor.pp
+++ b/recipes/aeolus/manifests/conductor.pp
@@ -23,6 +23,10 @@ class aeolus::conductor inherits aeolus {
ensure => 'installed',
provider => $package_provider }
+ file{"/usr/share/aeolus-conductor/config/settings.yml":
+ content => template("aeolus/conductor-settings.yml"),
+ require => Package['aeolus-conductor']}
+
file {"/var/lib/aeolus-conductor":
ensure => directory,
owner => 'aeolus',
@@ -48,7 +52,8 @@ class aeolus::conductor inherits aeolus {
require => [Package['aeolus-conductor-daemons'],
Rails::Migrate::Db[migrate_aeolus_database],
Service['httpd'],
- Apache::Site[aeolus-conductor], Exec[reload-apache]] }
+ Apache::Site[aeolus-conductor], Exec[reload-apache],
+ File['/usr/share/aeolus-conductor/config/settings.yml']] }
### Initialize and start the aeolus database
# Right now we configure and start postgres, at some point I want
diff --git a/recipes/aeolus/manifests/image-factory.pp b/recipes/aeolus/manifests/image-factory.pp
index 65f710d..c386c1c 100644
--- a/recipes/aeolus/manifests/image-factory.pp
+++ b/recipes/aeolus/manifests/image-factory.pp
@@ -59,8 +59,16 @@ class aeolus::image-factory inherits aeolus {
enable => true,
hasstatus => true,
require => Package['libvirt']}
+
+
+ file {"/etc/imagefactory/imagefactory.conf":
+ content => template("aeolus/imagefactory.conf"),
+ mode => 755,
+ require => Package['imagefactory'] }
+
$requires = [Package['imagefactory'],
- File['/var/tmp/imagefactory-mock'],
+ File['/var/tmp/imagefactory-mock',
+ '/etc/imagefactory/imagefactory.conf'],
Service[qpidd], Service[libvirtd],
Rails::Seed::Db[seed_aeolus_database]]
service { 'imagefactory':
diff --git a/recipes/aeolus/manifests/iwhd.pp b/recipes/aeolus/manifests/iwhd.pp
index 75b9de3..de0209b 100644
--- a/recipes/aeolus/manifests/iwhd.pp
+++ b/recipes/aeolus/manifests/iwhd.pp
@@ -30,6 +30,11 @@ class aeolus::iwhd inherits aeolus {
file { "/etc/iwhd": ensure => 'directory'}
file { "/var/lib/iwhd": ensure => 'directory' }
+ file {"/etc/init.d/iwhd":
+ content => template("aeolus/iwhd.init"),
+ mode => 755,
+ require => Package['iwhd'] }
+
service { 'mongod':
ensure => 'running',
enable => true,
@@ -39,9 +44,9 @@ class aeolus::iwhd inherits aeolus {
ensure => 'running',
enable => true,
hasstatus => true,
- require => [Package['iwhd'],
- Service[mongod],
- File['/var/lib/iwhd']]}
+ require => [Service[mongod],
+ File['/var/lib/iwhd',
+ '/etc/init.d/iwhd']]}
# XXX ugly hack but iwhd might take some time to come up
exec{"iwhd_startup_pause":
diff --git a/recipes/aeolus/templates/conductor-settings.yml b/recipes/aeolus/templates/conductor-settings.yml
new file mode 100644
index 0000000..dec51d3
--- /dev/null
+++ b/recipes/aeolus/templates/conductor-settings.yml
@@ -0,0 +1,23 @@
+:default_deltacloud_url: http://localhost:3002/api
+
+:auth:
+ # supported strategies: database, ldap
+ :strategy: database
+ :ldap:
+ :host: localhost
+ # '%s' expression in username_dn string will be replaced
+ # by user's login
+ # username_dn: "deltacloud\%s"
+ :username_dn: uid=%s,ou=People,dc=my-domain,dc=com
+ # :port: 389
+:iwhd:
+ :url: http://localhost:9090
+ :oauth:
+ :consumer_key: <%= iwhd_oauth_user %>
+ :consumer_secret: <%= iwhd_oauth_password %>
+
+:imagefactory:
+ :url: https://localhost:8075/imagefactory
+ :oauth:
+ :consumer_key: <%= imagefactory_oauth_user %>
+ :consumer_secret: <%= imagefactory_oauth_password %>
diff --git a/recipes/aeolus/templates/imagefactory.conf b/recipes/aeolus/templates/imagefactory.conf
new file mode 100644
index 0000000..7969719
--- /dev/null
+++ b/recipes/aeolus/templates/imagefactory.conf
@@ -0,0 +1,15 @@
+{
+ "warehouse": "http://localhost:9090/",
+ "image_bucket": "images",
+ "build_bucket": "builds",
+ "target_bucket": "target_images",
+ "template_bucket": "templates",
+ "icicle_bucket": "icicles",
+ "provider_bucket": "provider_images",
+ "imgdir": "/var/lib/imagefactory/images",
+ "ec2_build_style": "snapshot",
+ "ec2_ami_type": "s3",
+ "clients": {
+ "<%= imagefactory_oauth_user %>": "<%= imagefactory_oauth_password %>"
+ }
+}
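Once the ERB is rendered this file is plain JSON, so a cheap post-configure
syntax check (assuming python is available on the host) is:

# python -m json.tool /etc/imagefactory/imagefactory.conf

which also pretty-prints the clients map holding the OAuth pair.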
diff --git a/recipes/aeolus/templates/iwhd.init b/recipes/aeolus/templates/iwhd.init
new file mode 100755
index 0000000..12617a0
--- /dev/null
+++ b/recipes/aeolus/templates/iwhd.init
@@ -0,0 +1,146 @@
+#!/bin/sh
+
+# The following is the LSB init header. See
+# http://www.linux-foundation.org/spec/booksets/LSB-Core-generic/LSB-Core-g...
+#
+### BEGIN INIT INFO
+# Provides: iwhd
+# Default-Start: 3 4 5
+# Short-Description: image warehouse daemon
+# Description: This is the primary server process for the image warehouse
+### END INIT INFO
+
+# the following is chkconfig init header
+#
+# iwhd: image warehouse daemon
+#
+# chkconfig: - 40 60
+# Description: This is the primary server process for the image warehouse
+#
+# processname: iwhd
+# pidfile: /var/run/iwhd.pid
+
+. /etc/rc.d/init.d/functions
+
+SERVICE=iwhd
+PROCESS=iwhd
+PIDFILE=/var/run/$SERVICE.pid
+CONFIG_JS=/etc/iwhd/conf.js
+MONGOD_SERVER_SPEC=localhost:27017
+
+# How many seconds to wait for mongod to become usable before giving up.
+MONGOD_N_SECONDS=2
+
+# Tell iwhd to use /var/cache/iwhd, not /tmp for a small S3-related
+# temporary file. This avoids conflict with SELinux policy that discourages
+# writing in /tmp.
+export TMPDIR=/var/cache/iwhd
+
+IWHD_ARGS="-d $MONGOD_SERVER_SPEC -l /var/log/iwhd.log"
+
+test -r /etc/sysconfig/iwhd && . /etc/sysconfig/iwhd
+
+RETVAL=0
+
+wait_for()
+{
+ local sleep_seconds=$1
+ local max_n_sleeps=$2
+ local cmd=$3
+ case $max_n_sleeps in
+ [0-9]*);; *) echo invalid max_n_sleeps $max_n_sleeps 1>&2; exit 1;;
+ esac
+ case $sleep_seconds in
+ [0-9]*|.[0-9]*);; *)
+ echo invalid sleep interval $sleep_seconds 1>&2; exit 1;;
+ esac
+ local i=0
+ while :; do
+ eval "$cmd" && return 0
+ sleep $sleep_seconds
+ i=$(expr $i + 1)
+ test $i = $max_n_sleeps && return 1
+ done
+}
+
+wait_for_mongod() {
+ # Wait for up to $1 seconds for mongod to begin listening.
+ wait_for .1 $(($1 * 10)) 'mongo $MONGOD_SERVER_SPEC \
+ < /dev/null >/dev/null 2>&1'
+}
+
+start() {
+ # This is a bit kludgey. We'll use the standard daemon
+ # framework once iwhd knows how to daemonize itself.
+ test -f $PIDFILE && kill -0 $(cat $PIDFILE) 2>/dev/null \
+ && { printf %s $"$PROCESS appears to already be running"
+ echo_failure; echo; return 1; }
+ mkdir -p /var/cache/iwhd
+ rm -rf /var/cache/iwhd/*
+ printf %s $"waiting for mongod to listen on $MONGOD_SERVER_SPEC"
+ wait_for_mongod $MONGOD_N_SECONDS && echo_success \
+ || { echo_failure; echo; return 1; }
+ echo
+
+ printf %s $"Starting $SERVICE daemon: "
+ $PROCESS -c "$CONFIG_JS" $IWHD_ARGS -o -U <%= iwhd_oauth_user %>:<%= iwhd_oauth_password %>&
+ pid=$!
+ RETVAL=$?
+ if test $RETVAL = 0; then
+ echo $pid > $PIDFILE
+ touch /var/lock/subsys/$SERVICE
+ success
+ else
+ failure
+ fi
+ echo
+ return $RETVAL
+}
+
+stop() {
+ action $"Stopping $SERVICE daemon: " killproc -p $PIDFILE $PROCESS
+ RETVAL=$?
+ if test $RETVAL = 0; then
+ rm -f /var/lock/subsys/$SERVICE
+ rm -f $PIDFILE
+ rm -rf /var/cache/iwhd/*
+ fi
+ return $RETVAL
+}
+
+restart() {
+ stop
+ start
+}
+
+reload() {
+ printf %s $"Reloading $SERVICE configuration: "
+
+ killproc -p $PIDFILE $PROCESS -HUP
+ RETVAL=$?
+ echo
+ return $RETVAL
+}
+
+# See how we were called.
+case "$1" in
+ start|stop|restart|reload)
+ $1
+ ;;
+ status)
+ status -p $PIDFILE $PROCESS
+ ;;
+ force-reload)
+ reload
+ ;;
+ condrestart|try-restart)
+ test -f /var/lock/subsys/$SERVICE && restart || :
+ ;;
+ *)
+ echo $"Usage: $0 {start|stop|status|restart|condrestart|reload|force-reload|try-restart}"
+ exit 2
+ ;;
+esac
+
+# Exit with the result of the "case" statement.
+exit $?
--
1.7.6.4
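One way to sanity-check the oauth wiring after aeolus-configure has run
(paths taken from the templates above; just a sketch, not part of the patch):

# grep -A 2 ':oauth:' /usr/share/aeolus-conductor/config/settings.yml
# ps ax | grep '[i]whd'

The consumer_key/consumer_secret pairs from the first command should match
the -U user:password argument visible on the iwhd command line. Keep in mind
aeolus-configure generates fresh uuids on every run, so compare only after
the services have been restarted by the latest run.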
12 years, 6 months