[PATCH 1/2] Avoid adding a non-null column with a null default value
by Mark McLoughlin
I was getting this error when running 'rake db:migrate' with sqlite:
SQLite3::SQLException: Cannot add a NOT NULL column with default value NULL: ALTER TABLE "deployables" ADD "uuid" varchar(255) NOT NULL
To avoid the error, we can add the columns as nullable and then change
them to non-null afterwards.
Presumably this would fail if we ran the migration on a DB containing
any data in the deployables table, but this is unlikely. To avoid it,
we could do e.g.
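# (inside self.up, between the change_table block and the change_column calls)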
Deployable.reset_column_information
Deployable.all.each do |deployable|
deployable.uuid = "" if deployable.uuid.nil?
deployable.xml = "" if deployable.xml.nil?
end
but that seems like overkill given the situation, especially since the
default values don't make any sense.
---
.../migrate/20110207100800_update_deployables.rb | 7 +++++--
1 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/src/db/migrate/20110207100800_update_deployables.rb b/src/db/migrate/20110207100800_update_deployables.rb
index 4414560..ff87bed 100644
--- a/src/db/migrate/20110207100800_update_deployables.rb
+++ b/src/db/migrate/20110207100800_update_deployables.rb
@@ -2,13 +2,16 @@ class UpdateDeployables < ActiveRecord::Migration
def self.up
change_table :deployables do |t|
t.integer :lock_version, :default => 0
- t.string :uuid, :null => false
- t.binary :xml, :null => false
+ t.string :uuid
+ t.binary :xml
t.string :uri
t.text :summary
t.boolean :uploaded, :default => false
end
+ change_column :deployables, :uuid, :string, :null => false
+ change_column :deployables, :xml, :binary, :null => false
+
create_table :assemblies_deployables, :id => false do |t|
t.integer :assembly_id, :null => false
t.integer :deployable_id, :null => false
--
1.7.4.4
#671 - BZ 691562 fix
by Matt Wagner
Hi all,
I spent a bit of time tracking down the remnants of #671 / BZ 651562, in
which Apache throws proxy timeout errors because the backend does not
respond in time as a result of API calls timing out.
Most of these errors were fixed a while ago in deltacloud-client, but
the versions that yum and gem know about don't include that fix, so
the issue persisted. I spent a bit of time getting deltacloud 0.3.0
running, and was unable to reproduce any Apache timeout errors after
simulating unreachable providers and other cases where a timeout might
occur. The timeouts in deltacloud-client are slightly shorter than
Apache's timeout value.
I'm attaching a patch that cleans up error handling in Conductor
slightly. There were a few spots (e.g., adding a new ProviderAccount
when the provider was unreachable) where the timeout exception was
reaching the user unhandled. This exception is now caught and shown as
a flash[:error] message, rather than surfacing as a hard error.
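The pattern is roughly the following (a minimal sketch -- the
controller, action and exception class are illustrative, not
necessarily the ones the patch actually touches):

class ProviderAccountsController < ApplicationController
  def create
    @provider_account = ProviderAccount.new(params[:provider_account])
    @provider_account.save!
    redirect_to provider_account_path(@provider_account)
  rescue Timeout::Error => e
    # The provider was unreachable and the API call timed out; show a
    # flash message instead of letting the exception escape as a 500.
    flash[:error] = "Provider request timed out: #{e.message}"
    render :action => 'new'
  end
end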
You will need the 0.3.0 version of deltacloud to test this. (Some older
versions may work, but best not to chance it.) You can build this from
their source, although I suspect some of you already have the 0.3.0 RC
installed.
-- Matt
Image building - cli, factory, conductor and iwhd plan
by Mark McLoughlin
Hey,
Apologies for another long email, but here's my proposal for how to
proceed with the image building CLI etc.
Cheers,
Mark.
= Terminology =
- Provider - a cloud instance, e.g. a RHEV-M installation or an EC2
region.
- Target/Provider Type - the cloud type, irrespective of the specific
instance e.g. RHEV-M vs EC2.
- Provider Account - a set of user credentials for a specific
provider.
- Provider Image - these are images which have been pushed to, or
imported from, a provider and are available for use by instances.
- Target Image - these are images which have been built for a
specific target and are available to be pushed to multiple
providers.
- Image Build - these are objects which track all target and
provider images built from a template at a certain time, all of
which should be equivalent to each other.
- Image - these are metadata objects which describe the purpose of
the image, the parameters it takes, the set of available builds of
the image and the latest build.
= Overview =
As described previously[1], the short/medium term plan is that Aeolus
will:
- Have well documented deployable and template description formats
with plenty of examples. Deployable authors will create these
descriptions manually or with the help of simple command line
tools and store them as they see fit (e.g. in their own git repo)
- Support launching deployables in conductor simply by allowing the
user to supply a URL to a deployable description, or choosing a
URL from a list populated by an admin.
- Include a set of command line tools for building images from a
template description. These tools will also allow deployable
authors to list the images available to be referenced in
their deployables.
- Store images and their related metadata in IWHD. Conductor will
use IWHD queries to resolve deployables' image references and
allow users to launch individual images directly.
- Use image factory to build and import images, including the
updating of IWHD metadata about the images.
= IWHD Metadata =
The following IWHD buckets will be used, each one for a different
object type:
images:
- object_type == "image"
- object_body: XML document with image description and parameters,
for deployable authors and simple instance launch from UI
- uuid: UUID
- latest_build: build UUID
builds:
- object_type == "build"
- object_body: empty
- uuid: UUID
- image: UUID of the image object
- parent: UUID of previous build
target_images:
- object_type == "target_image"
- object_body: target specific image, for upload builds; empty for
snapshot builds
- uuid: UUID
- build: UUID of the build object
- icicle: UUID of icicle object
- template: UUID of template object
- target: ec2, rhev-m, etc.
- target_parameters: target specific data stashed here for the push
stage
provider_images:
- object_type == "provider_image"
- object_body: empty
- uuid: UUID
- target_image: UUID of image
- provider: e.g. ec2-us-east-1, rhev-m site
- icicle == "none"
- target_identifier: target specific ID for the image
templates:
- object_type == "template"
- object_body: <template/> document supplied for image build
- uuid: UUID
icicles:
- object_type == "icicle"
- object_body: <icicle/> document
- uuid: UUID
This fairly closely reflects the metadata currently stored by image
factory in IWHD, with the following changes:
- the current "image" type is renamed to "target_type", along with
the bucket and image reference on provider images[2]
- new image and build object types and buckets are introduced
- a build reference is added to the target image type
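To make the relationships concrete, here's a rough sketch in plain
Ruby (the Structs are purely illustrative, not real IWHD client
classes; the attribute names mirror the bucket descriptions above):

# Illustrative data model only.
Image         = Struct.new(:uuid, :latest_build)
Build         = Struct.new(:uuid, :image, :parent)
TargetImage   = Struct.new(:uuid, :build, :template, :icicle,
                           :target, :target_parameters)
ProviderImage = Struct.new(:uuid, :target_image, :provider,
                           :target_identifier)

# One image with two builds; the latest build produced an EC2 target
# image which was pushed to two providers.
image   = Image.new("image-uuid", "build-2-uuid")
build1  = Build.new("build-1-uuid", image.uuid, nil)
build2  = Build.new("build-2-uuid", image.uuid, build1.uuid)
ti      = TargetImage.new("ti-uuid", build2.uuid, "template-uuid",
                          "icicle-uuid", "ec2", nil)
pi_east = ProviderImage.new("pi-east-uuid", ti.uuid, "ec2-us-east-1",
                            "ami-11111")
pi_west = ProviderImage.new("pi-west-uuid", ti.uuid, "ec2-us-west-1",
                            "ami-22222")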
= Image Factory APIs =
Image factory currently supports building target images and pushing
provider images, along with the appropriate updating of image
warehouse metadata.
In order to allow the CLI to fire-and-forget, we will need to add new
build and push APIs which can handle multiple images at once.
These would look like:
- image(image_id, template, targets[])
+ builds an image for the supplied targets
+ image_id should be omitted if a new image is required
+ template and targets may be omitted if a previous build
exists and the previous template and targets will be used
- push(image_id, providers, credentials)
+ push the image to the supplied providers
+ assumes factory can figure out which target image is
appropriate for each provider
+ credentials argument is a <provider_credentials/> document
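As a very rough sketch of the call shapes (the stub below is not image
factory's real client API -- it only illustrates the proposed
signatures and how the CLI might drive them; the transport is
deliberately ignored):

# Illustrative stub only.
class FakeFactory
  def image(image_id, template, targets)
    # returns the id of the image that was (re)built
    image_id || "new-image-uuid"
  end

  def push(image_id, providers, credentials)
    providers.map { |p| "provider-image-on-#{p}" }
  end
end

factory  = FakeFactory.new
template = "<template>...</template>"

image_id = factory.image(nil, template, ["ec2", "rhev-m"]) # first build of a new image
factory.image(image_id, nil, nil)                          # rebuild, reusing template/targets
factory.push(image_id, ["ec2-us-east-1"],
             "<provider_credentials/>")                    # push with credentials from conductor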
Also, we'll need to rename image factory's current concept of an image
to a target image. Some initial work on that is here[3].
= Conductor APIs =
Conductor needs APIs to support the following:
1) List the available provider types in an XML doc
2) List the available providers in an XML doc
3) Dump a <provider_credentials/> document encapsulating all of the
available provider accounts
The last of these APIs would be subject to authentication and access control.
= Conductor IWHD Queries =
Conductor has basically two use cases for querying IWHD:
1) As a user, I want to launch a deployable:
2) As a user, I want to launch an instance of an image
For (1), conductor needs to resolve each deployable image reference to
a set of provider images. This can be done with something like:
$> build_id=$(curl http://iwhd/$image_id/latest_build)
$> curl -d '$build=="'$build_id'"' http://iwhd/target_images/_query
$> for target_image_id in $target_image_ids; do curl -d '$target_image=="'$target_image_id'"' http://iwhd/provider_images/_query; done
For (2), conductor needs to list all images available for launching a
standalone instance and, when the user launches the image, it needs to
list the parameters for the image. Both are easily achieved by
querying the images bucket.
= Image Building CLI =
The image building CLI is used by Aeolus users to build and upload
images from templates. It is also used by deployable authors to list
the available images.
The use cases are:
1) As an image builder or deployable author, I want to list all
images
2) As an image builder or deployable author, I want to list all
builds of an image
3) As an image builder or deployable author, I want to list all the
targets and providers an image has been built for
4) As an image builder, I want to build an image
5) As an image builder, I want to push an image to a provider
6) As an image builder, I want to import an image
7) As an image builder, I want to delete an image
8) As an image builder, I want to delete old versions of an image
9) As an image builder, I want to delete a provider image
10) As an image builder, I want to delete a target image
The tool needs to interact with IWHD for listing, image factory for
building and conductor for provider/account details.
It might look like:
$> aeolus-image images # list available images
$> aeolus-image builds $image_id # list the builds of an image
$> aeolus-image target-images $build_id # list the target images from a build
$> aeolus-image provider-images $target_image_id # list the provider images from a target image
$> aeolus-image build --target ec2 --template my.tmpl # build a new image for ec2 from the template
$> aeolus-image build --image $image_id # rebuild the image using the template and targets from the latest build
$> aeolus-image build --target ec2 --target rackspace \ #
--image $image_id \ # rebuild the image with a new template and set of targets
--template my.tmpl #
$> aeolus-image push --provider ec2-us-east-1 $target_image_id # push the target image to the specified provider
$> aeolus-image push $build_id # push all target images for a build, to same providers as previously
$> aeolus-image push --account $provider_account $build_id # ditto, using a specific provider account
$> aeolus-image push $image_id # push all the target images for the latest build
$> aeolus-image import --provider ec2-us-east-1 $ami_id # import an AMI from the specified provider
$> aeolus-image delete --image $image_id # deletes all builds, target images and provider images
$> aeolus-image delete --build $build_id # deletes a build, updating latest/parent references as appropriate
$> aeolus-image delete --target-image $target_image # deletes a target image and its provider images
$> aeolus-image delete --provider-image $provider_image # deletes a provider image
$> aeolus-image targets # list the values available for the --target parameter
$> aeolus-image providers # list the values available for the --provider parameter
$> aeolus-image accounts # list the values available for the --account parameter
Some other notes:
- The tool will need to authenticate against conductor at the very
least, so --user/--pass arguments will be needed. We may also want to
support supplying these via environment variables and/or a dotfile
(see the sketch after these notes).
- The build/push/import commands are long running, so we should
support displaying the progress of the builds and have a
--background parameter for fire-and-forget.
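On the first note, a minimal sketch of how the credential lookup could
be layered (the dotfile name and environment variable names are just
placeholders):

# Illustrative only: command line flags win, then environment
# variables, then a dotfile.
require 'yaml'

def conductor_credentials(options = {})
  dotfile = File.expand_path("~/.aeolus-image")
  stored  = (File.exist?(dotfile) && YAML.load_file(dotfile)) || {}
  {
    :user => options[:user] || ENV['AEOLUS_USER'] || stored['user'],
    :pass => options[:pass] || ENV['AEOLUS_PASS'] || stored['pass'],
  }
end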
= Footnotes =
[1] - https://fedorahosted.org/pipermail/aeolus-devel/2011-May/001640.html
[2] - to migrate existing data, you can do:
$> cat > update.js <<EOF
db.main.update({ _bucket : "images", object_type : "image" }, { \$set: { _bucket : "target_images", object_type : "target_image" } }, false, true)
db.main.find({ _bucket : "provider_images", object_type : "provider_image" }).forEach(function upd(pi) { pi.target_image=pi.image; db.main.save(pi); })
db.main.update({ _bucket : "provider_images", object_type : "provider_image" }, { \$unset: { image : 1 } }, false, true)
EOF
$> mongo --port 27018 repo update.js
$> mv _fs/images _fs/provider_images
This clearly only works for IWHD with the fs_autostart, but
should be easy to adjust.
[3] - https://github.com/markmc/image_factory/commits/master
Image permissions
by Mark McLoughlin
Hey,
We had a chat earlier about medium term plans for image permissions. John is
going to write up some more detailed design thoughts, but I thought I'd
write down my understanding of the basic requirements before I forget:
1) Access control
We need users to be able to restrict access to images they create
or own - e.g. if you've got sensitive data in an image, or you
just want to prevent others from being able to delete your images
(This sounds to me like POSIX filesystem style permissions on
IWHD objects)
2) Quotas
When an administrator adds a provider account in conductor, she
needs to be able to set a per-user quota for that account - e.g.
Mary can only use 20GB of S3 storage on this EC2 account
(This sounds to me like a policy stored in Conductor, enforced
either by conductor or image factory. If the latter, the quota
could be passed to image factory via the credentials XML)
3) Environment/pool family policies
Based on the environment in which a user is launching an instance, a
different set of images should be available to the user.
(This sounds to me like a policy managed by the image tools and
enforced by Conductor)
4) Entitlements/slots
This I'm less clear on. Take RHEL entitlements. When a RHEL
instance is started, it should automatically consume an
entitlement. However, does an image consume an entitlement? If so,
how do we make that happen?
(I think I got this one totally wrong)
Cheers,
Mark.
#1451/1452 - RESTful controllers
by Matt Wagner
Hi all,
I'm attaching a patch that implements feature #1451 - RESTful
controllers. The scope here was just to update PoolsController,
InstancesController, and DeploymentsController.
There were some oddities along the way, since some of what we do
doesn't map neatly onto REST ideals. So please note (and feel free to
propose alternatives for) the following:
* I implemented destroy methods, but nothing calls them yet. The destroy
  methods will accept params[:id] or an array in params[:ids], so we
  could replace the multi_destroy methods with destroy (a sketch of what
  I mean follows this list). I have not done so yet because Rails won't
  generate an instance URL this way, so it would require some
  client-side wrangling. multi_destroy is a collection method in
  routes.rb, but destroy is expected to be an instance method and
  requires that the ID be passed in. In our case, the id is an array of
  whatever the user selects. For now I've just left multi_destroy, but
  this feels like needless duplication.
* I didn't link to the edit page for Instances or Deployments. Right
  now, the user is only permitted to edit the name attribute for these,
  since changing any other attributes would be harmful (or, at best,
  ineffective). I have what feels like a crude setup here to strip out
  non-permitted attributes. It seems like attr_protected would help
  here, but it's really going to get much more complicated -- other
  components of the app can use this method to, for example, mark the
  status of an instance as having changed, but a user shouldn't be able
  to. I'm all ears if someone sees a cleaner way to do this.
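For reference, the kind of destroy I mean would look roughly like this
(a sketch only, using Pool as an example -- this is not the code in the
patch):

# Accepts either a single id or an array of ids, so it could replace
# multi_destroy if we sort out the routing/URL-generation issue.
def destroy
  ids = Array(params[:ids] || params[:id])
  destroyed = Pool.find(ids).select { |pool| pool.destroy }
  flash[:notice] = "Destroyed #{destroyed.size} pool(s)"
  redirect_to pools_url
end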
The short version is that there are currently no user-facing changes in
this patch, but it does afford us a RESTful interface for Pools,
Instances, and Deployments.
-- Matt
[PATCH conductor 1/4] add extended dbomatic logging
by Mo Morsi
---
src/dbomatic/dbomatic | 57 ++++++++++++++++++++++++++++++++++++++----------
1 files changed, 45 insertions(+), 12 deletions(-)
diff --git a/src/dbomatic/dbomatic b/src/dbomatic/dbomatic
index e7f29b0..ff5b7fb 100755
--- a/src/dbomatic/dbomatic
+++ b/src/dbomatic/dbomatic
@@ -74,27 +74,34 @@ CONDOR_EVENT_LOG_FILE = "#{condor_event_log_dir}/EventLog"
CONDOR_EVENT_LOG_FILE_OLD = "#{condor_event_log_dir}/EventLog.old"
EVENT_LOG_POS_FILE = "#{dbomatic_run_dir}/event_log_position"
if dbomatic_log_dir == '-'
- DBOMATIC_LOG_FILE = STDOUT
+ DBOMATIC_LOG_FILE = STDOUT
+ DBOMATIC_PARSER_LOG_FILE = STDOUT
else
- DBOMATIC_LOG_FILE = "#{dbomatic_log_dir}/dbomatic.log"
+ DBOMATIC_LOG_FILE = "#{dbomatic_log_dir}/dbomatic.log"
+ DBOMATIC_PARSER_LOG_FILE = "#{dbomatic_log_dir}/dbomatic-parser.log"
end
-logger = Logger.new(DBOMATIC_LOG_FILE)
-logger.level = Logger::DEBUG
-logger.info "DBOmatic starting up"
-
# daemonize
if daemon
# note that this requires 'active_support', which we get for free from dutils
Process.daemon
end
+# Custom Log Format
+class DBomaticLogger < Logger
+ def format_message(severity, timestamp, progname, msg)
+ "#{timestamp.to_formatted_s(:db)} #{severity} #{msg}\n"
+ end
+end
+
# Handle the event log's xml
class CondorEventLog < Nokogiri::XML::SAX::Document
attr_accessor :tag, :event_type, :event_cmd, :event_time, :trigger_type, :grid_resource, :execute_host, :username, :hold_reason, :private_addresses, :public_addresses
- def initialize(logger)
- @logger = logger
+ def initialize
+ @logger = DBomaticLogger.new(DBOMATIC_PARSER_LOG_FILE)
+ @logger.level = Logger::DEBUG
+ @logger.info "DBOmatic parser starting up"
end
# Store the name of the event log attribute we're looking at
@@ -130,6 +137,8 @@ class CondorEventLog < Nokogiri::XML::SAX::Document
end
def update_instance_state_event(inst)
+ @logger.info "update_instance_state_event for #{inst}"
+
if @trigger_type == "ULOG_GRID_SUBMIT"
inst.state = Instance::STATE_PENDING
elsif @trigger_type == "ULOG_JOB_ABORTED" or @trigger_type == "ULOG_JOB_TERMINATED"
@@ -160,12 +169,14 @@ class CondorEventLog < Nokogiri::XML::SAX::Document
inst.last_error = @hold_reason
inst.state = Instance::STATE_ERROR
else
- @logger.info "Unexpected trigger type #{@trigger_type}, not updating instance state"
+ @logger.warn "Unexpected trigger type #{@trigger_type}, not updating instance state"
return
end
begin
+ @logger.info "update_instance_state_event saving instance #{inst}"
inst.save!
+ @logger.debug "updated_instance_state_event saved instance #{inst}, creating event for state #{inst.state}@#{@event_time}"
inst.events.create!(:status_code => inst.state,
:event_time => @event_time)
rescue => e
@@ -174,9 +185,13 @@ class CondorEventLog < Nokogiri::XML::SAX::Document
@logger.error "\tfrom #{step}"
end
end
+
+ @logger.info "update_instance_state_event completed fo #{inst}"
end
def update_instance_cloud_id(inst)
+ @logger.info "update_instance_cloud_id for #{inst}"
+
# The GridResource/ExecuteHost string looks like this:
# dcloud http://localhost:3001/api
@@ -185,7 +200,7 @@ class CondorEventLog < Nokogiri::XML::SAX::Document
elsif !@execute_host.nil?
resource = @execute_host
else
- @logger.info "Unexpected nil GridResource/ExecuteHost field, skipping cloud id update"
+ @logger.warn "Unexpected nil GridResource/ExecuteHost field, skipping cloud id update"
return
end
@@ -213,15 +228,22 @@ class CondorEventLog < Nokogiri::XML::SAX::Document
return
end
+ @logger.info "update_instance_cloud_id updating instance #{inst} to cloud provider #{provider}"
inst.provider_account_id = provider_account.id
inst.save!
+ @logger.info "update_instance_cloud_id completed for #{inst}"
end
def update_instance_addresses(inst)
+ @logger.info "update_instance_addresses for #{inst}, \
+ setting public addresses: #{@public_addresses} \
+ --- and private addresses #{private_addresses}"
+
inst.public_addresses = @public_addresses
inst.private_addresses = @private_addresses
inst.save!
+ @logger.info "update_instance_addresses completed for #{inst}"
end
# Create a new entry for events which we have all the necessary data for
@@ -231,11 +253,13 @@ class CondorEventLog < Nokogiri::XML::SAX::Document
inst = Instance.find(:first, :conditions => ['condor_job_id = ?', @event_cmd])
if inst.nil?
- @logger.info "Unexpected nil instance, skipping..."
+ @logger.warn "Unexpected nil instance, skipping..."
else
+ @logger.info "Instance #{inst} found, running update events"
update_instance_state_event(inst)
update_instance_cloud_id(inst)
update_instance_addresses(inst)
+ @logger.info "Instance #{inst} update events completed"
end
@tag = @event_type = @event_cmd = @event_time = @trigger_type = @grid_resource = @execute_host = @hold_reason = @public_addresses = @private_addresses = nil
end
@@ -282,13 +306,18 @@ def parse_log_file(parser)
File.open(EVENT_LOG_POS_FILE, 'w') { |f| f.write log_file.pos.to_s }
end
+logger = DBomaticLogger.new(DBOMATIC_LOG_FILE)
+logger.level = Logger::DEBUG
+logger.datetime_format = "%Y-%m-%d %H:%M " # simplify time output
+logger.info "DBOmatic starting up"
+
begin
DBOMATIC_PID_FILE = "#{dbomatic_pid_dir}/dbomatic.pid"
FileUtils.mkdir_p File.dirname(DBOMATIC_PID_FILE)
open(DBOMATIC_PID_FILE, "w") {|f| f.write(Process.pid) }
File.chmod(0644, DBOMATIC_PID_FILE)
- parser = Nokogiri::XML::SAX::PushParser.new(CondorEventLog.new(logger))
+ parser = Nokogiri::XML::SAX::PushParser.new CondorEventLog.new
# XXX hack, condor event log doesn't seem to have a top level element
# enclosing everything else in the doc (as standards conforming xml must).
@@ -298,6 +327,7 @@ begin
notifier = INotify::Notifier.new
parse_log_file(parser) if File.exists? CONDOR_EVENT_LOG_FILE
+ logger.info "Parsed existing event log file - current postition: #{get_log_file_pos}"
# Setup inotify watch for condor event log changes
notifier.watch(condor_event_log_dir, :all_events){ |event|
@@ -306,6 +336,7 @@ begin
end
}
+ logger.info "Beginning main event loop"
while true
begin
notifier.run
@@ -314,8 +345,10 @@ begin
e.backtrace.each do |step|
logger.error "\tfrom #{step}"
end
+ logger.info "EventLog modification event trigger completed, parsing finished - current position #{get_log_file_pos}"
end
end
+ logger.info "Main event loop completed"
parser << "</events>"
parser.finish
--
1.7.2.3
how to dry controller index actions
by Jan Provazník
Hi Ken,
the new UI concept (spec/progressive_enhancement.md) requires access to
the same data through multiple controllers, depending on whether JS is
on or off.
Example: a user is on pools index page:
- if JS is off, the pools/deployments/instances lists should be loaded
from the pools#index action (including all pagination, searching and
permission logic for all 3 tabs)
- if JS is on, the pools#index action renders only the tab panel with
tabs for pools/deployments/instances. When the user clicks on the
deployments tab, the deployment list is loaded through
deployments#index (the same pagination, searching and permission logic
must live there)
This differs from our current concept, where both JS and non-JS
requests use the same controller#action. So I wonder what the most
reasonable way is to avoid duplicating code in multiple places.
One solution could be this:
For each tab (pools, instances, deployments), move the 'list logic'
into a separate module which is then included in both the pools
controller and the corresponding 'tab' controller.
Example:
class PoolsController < ApplicationController
  include DeploymentsListing
  include PoolsListing
  include InstancesListing

  def index
    @pools = list_pools
    @instances = list_instances
    @deployments = list_deployments
  end
end

class DeploymentsController < ApplicationController
  include DeploymentsListing

  def index
    @deployments = list_deployments
  end
end

module DeploymentsListing
  def list_deployments
    # index action logic shared by both controllers: permission checks,
    # pagination, search
  end
end
What do you think? Does anyone have an idea how to improve this?
Jan
[PATCH conductor] Updated to touch thin.log instead of mongrel.log.
by Justin Clift
---
aeolus-conductor.spec.in | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/aeolus-conductor.spec.in b/aeolus-conductor.spec.in
index f983f8d..266ee1e 100644
--- a/aeolus-conductor.spec.in
+++ b/aeolus-conductor.spec.in
@@ -214,7 +214,7 @@ done
# by aeolus:aeolus. This is a temporary workaround while we've still
# got root-owned daemon processes. Once we resolve that issue
# these files will no longer be added explicitly here.
-touch %{buildroot}%{_localstatedir}/log/%{name}/mongrel.log
+touch %{buildroot}%{_localstatedir}/log/%{name}/thin.log
touch %{buildroot}%{_localstatedir}/log/%{name}/rails.log
touch %{buildroot}%{_localstatedir}/log/%{name}/dbomatic.log
touch %{buildroot}%{_localstatedir}/run/%{name}/event_log_position
--
1.7.4.4