[PATCH 1/3] Document undocumented hard-coded 300-second timeout and default timeout value.
by Steven Dake
Signed-off-by: Steven Dake <sdake(a)redhat.com>
---
man/oz-install.1 | 11 ++++++++---
1 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/man/oz-install.1 b/man/oz-install.1
index b497813..7cf0077 100644
--- a/man/oz-install.1
+++ b/man/oz-install.1
@@ -70,9 +70,14 @@ will undefine the libvirt guest with the same name or UUID and delete
the diskimage, so it should be used with caution.
.TP
.B "\-t"
-Use a timeout value of \fBtimeout\fR for installation, rather than the
-oz default. This can be useful if you know you have slower storage
-and want to wait longer for the installation to timeout.
+Terminate the installation of the guest image in \fBtimeout\fR seconds
+rather than the default of 1200 seconds. This value can be increased in the
+case of slow storage or multiple oz-install operations on the same machine
+consuming the disk bandwidth.
+
+Please note there is a separate termination action that occurs if 300 seconds
+elapses before any data is written to the VM image. This timer value is not
+configurable.
.TP
.B "\-u"
Customize the image after installation. This generally installs
--
1.7.4.4
aeolus configure features
by Mo Morsi
The following patchset implements several new features
for aeolus-configure including
- configuration profiles or grouping of aeolus components
and seed data which can be configured separately from each
other
- interactive installer allowing the user to select which
components are configured locally and the parameters
to seed the application with
- expanded seed data interface including support for provider
accounts and images
- expanded spec suite, verifying all the above
- a few bug fixes / improvements to smooth out operations
Everything has been tested; all existing functionality still
fully works, as do the new additions.
[PATCH configure 1/2] Added script for restarting aeolus components
by Maros Zatko
From: Maros Zatko <mzatko(a)redhat.com>
---
bin/aeolus-restart-services | 48 +++++++++++++++++++++++++++++++++++++++++++
1 files changed, 48 insertions(+), 0 deletions(-)
create mode 100755 bin/aeolus-restart-services
diff --git a/bin/aeolus-restart-services b/bin/aeolus-restart-services
new file mode 100755
index 0000000..5e9eb73
--- /dev/null
+++ b/bin/aeolus-restart-services
@@ -0,0 +1,48 @@
+#!/usr/bin/ruby
+
+# ordered as in rc.d
+services = %w(mongod messagebus iwhd postgresql httpd qpidd deltacloud-ec2-us-east-1 deltacloud-ec2-us-west-1 deltacloud-mock libvirtd condor aeolus-conductor conductor-dbomatic imagefactory)
+
+def perform(action, svcs)
+ action = action.to_s
+ svcs.map do |script|
+ puts "\n#{action.capitalize}ing #{script} ..."
+ cmd = "/etc/init.d/#{script} #{action}"
+ out = `#{cmd}`
+ if $?.exitstatus == 0
+ puts " \e[1;32mSuccess:\e[0m #{out.strip}"
+ else
+ puts " \e[1;31mFAILURE:\e[0m #{out.strip}"
+ end
+ $?.exitstatus
+ end
+end
+
+perform :stop, services.reverse
+perform :start, services
+
+## Other checks
+commands = [
+ {:name => 'condor_q', :command => 'condor_q'},
+ {:name => 'condor_status', :command => 'condor_status'}
+]
+commands.each do |cmd|
+ puts "\nChecking #{cmd[:name]} ..."
+ cmd = "#{cmd[:command]}"
+ out = `#{cmd}`
+ if $?.exitstatus == 0
+ puts " \e[1;32mSuccess:\e[0m #{out.strip}"
+ else
+ puts " \e[1;31mFAILURE:\e[0m #{out.strip}"
+ end
+end
+
+if perform(:status, ['mongod']) == [1]
+ lockfile = '/var/lib/mongodb/mongod.lock'
+ if File.exists?(lockfile)
+ puts " \e[1;33mremoving\e[0m leftover #{lockfile}"
+ File.delete(lockfile)
+ perform :restart, %w(mongod iwhd)
+ end
+end
+
--
1.7.6
[PATCH 1/8] removed compass/960 text stylesheet
by Scott Seago
1) it was causing compilation errors
2) we don't use it anymore
---
src/app/stylesheets/_base.scss | 2 --
src/app/stylesheets/text.scss | 30 ------------------------------
2 files changed, 0 insertions(+), 32 deletions(-)
delete mode 100644 src/app/stylesheets/text.scss
diff --git a/src/app/stylesheets/_base.scss b/src/app/stylesheets/_base.scss
index 3ac43c6..096ec46 100644
--- a/src/app/stylesheets/_base.scss
+++ b/src/app/stylesheets/_base.scss
@@ -26,8 +26,6 @@ $goodcl: #bfcc29;
$okcl: #f6a20a;
$badcl: #cb292b;
-@import "text";
-
@mixin border-radius($radius) {
border-radius: $radius;
-moz-border-radius: $radius;
diff --git a/src/app/stylesheets/text.scss b/src/app/stylesheets/text.scss
deleted file mode 100644
index b6819c3..0000000
--- a/src/app/stylesheets/text.scss
+++ /dev/null
@@ -1,30 +0,0 @@
-/* 960 Grid System ~ Text CSS.
- * Learn more ~ http://960.gs/
- * *
- * Licensed under GPL and MIT. */
-
-@import "960/text";
-
-@include text;
-
-/* Need to solve licensing first */
-@font-face {
- font-family: 'Roadgeek E';
- src: local('Roadgeek E'), local('RoadgeekE'),
- url(../../fonts/RoadgeekE.otf);
-}
-
-/* Headline Font based on Highway Gothic/FHWA to preserve RH identity on the web unachievable with Interstate */
-@font-face {
- font-family: 'FreeWay Bold';
- src: local('FreeWay Bold'), local('FreeWayBold'),
- url(../../fonts/FreeWay-Bold.ttf);
-}
-
-body {
- font: 12px/1.5 $screenfont;
-}
-
-h1,h2,h3,h4,h5 {
- font-family: $headlinefont;
-}
--
1.7.6
[RFC PATCH 0/3]: Remove condor from the conductor
by Chris Lalancette
All,
This is an RFC patch series to remove condor from the conductor. In short,
condor presents problems for our project because it is an external project, it
is written in C++ (while most of our developers write Ruby), and it is too
complex for our current needs.
The new way we do scheduling is described pretty well in patch 1, so I
won't delve into it here. This is an RFC series because there are 2 known
problems and it has only been lightly tested.
The first known problem is that since we are doing the deltacloud create
calls inline in the conductor, this can cause the UI itself to time out. This is
going to be a problem when using the VMware backend, as we know that the
create call there can take a long time. Possible solutions are to use a
different thread or process for the call, but there may be others.
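As a rough illustration only (the deltacloud client call and the instance
attributes below are hypothetical, not the conductor's actual API), the
thread-based workaround could take roughly this shape:

# Hedged sketch: push the potentially slow deltacloud create call off the
# request thread so the UI can respond immediately. All names are illustrative.
def launch_instance_async(client, instance)
  Thread.new do
    begin
      # The slow part (e.g. the VMware backend) now runs outside the request.
      dc_instance = client.create_instance(instance.image_uuid,
                                           :hwp_id   => instance.hardware_profile,
                                           :realm_id => instance.realm_id)
      instance.update_attributes(:external_key => dc_instance.id,
                                 :state        => dc_instance.state)
    rescue => e
      instance.update_attributes(:state      => 'create_failed',
                                 :last_error => e.message)
    end
  end
end

A separate worker process (or dbomatic itself) would be the sturdier variant of
the same idea, since a bare thread dies with the Rails process.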
The second known problem is that for reasons I don't really understand,
updating the instance row in the database (using instance.save!) has some
surprising results. One example of this is public_addresses; the
public_addresses field I get from the deltacloud backend looks correct
(ec2-50-7-27-214.compute-1.amazonaws.com, or whatever), but when it is saved
into the database it looks odd (---\n- ec2-50-7-27-214.compute-1.amazonaws.com\n).
Since dbomatic is not doing this manipulation, I can only presume that some
observer is screwing it up, but I can't see how. A second example of this
problem is that the public key that gets created when the instance is launched
disappears from the UI as soon as dbomatic runs.
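For what it's worth, the odd string above is exactly what a YAML-serialized
Ruby array looks like, which is the kind of thing an ActiveRecord serialize
declaration (or any stray to_yaml call) would write. That is only a guess at
the cause, but it is easy to check in irb:

require 'yaml'
['ec2-50-7-27-214.compute-1.amazonaws.com'].to_yaml
# => "---\n- ec2-50-7-27-214.compute-1.amazonaws.com\n"

If some part of the model layer serializes public_addresses while other code
assigns it a plain string, that mismatch alone would account for the formatting.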
I have only tested it so far using the EC2 backend. Except for the two
bugs above, things work pretty well there; I can start and stop deployments
from the UI, and the state gets updated along the way. To really put this
into the repository we would need to test on the other backends, particularly
RHEV-M and VMware.
Comments and questions about the patchset and what we are trying to
accomplish here are welcome.
Chris Lalancette
Aeolus Management Model API
by Steven Dake
As has been previously discussed on this mailing list and irc, there are
gaps between what deltacloud provides and what higher level management
systems (such as Aeolus) need. Deltacloud provides a great
low-level interface to the various cloud providers' objects, but doesn't
enforce any particular policy or organization on these low-level objects.
In the Aeolus project we are interested in enforcing organization and
policy. The question becomes whether we want to create a coherent set of
APIs that model the Aeolus system. My answer to this is yes, we do, and
to kick that process off, I'll propose a few APIs that we need exposed
from aeolus to support the pacemaker-cloud[1] project.
What pacemaker cloud needs today:
1 notifications when a third party modifies deployable state
1.1 A deployable was detected faulty by a third party
1.2 An assembly was detected faulty by a third party
1.3 A deployable was started by a third party
1.4 A deployable was stopped by a third party
2 third party notifications from deployable state monitoring
2.1 Active monitoring has determined that an assembly has failed
2.2 Active monitoring has determined that a deployable has failed
A proposed API model follows:
Exported from the management system
-----------------------------------
POST api/monitors/add
Add a third party monitor to a list of monitors maintained by the
management system
inputs: internet protocol address, port, and version of a third party
software component implementing the monitoring API
outputs: monitor identifier and success or failure
POST api/monitors/remove
Remove a third party monitor from the list of monitors maintained by the
management system
inputs: internet protocol address, port, version, and identifier of a
third party software component implementing the monitoring API
outputs: monitor identifier and success or failure
GET api/monitors/list
Retrieve a list of third party monitors from the management system
inputs: version
outputs: list of monitors including internet protocol address, port, and
version
POST api/monitors/deployable/id
inputs: version and START DETECTED or FAIL DETECTED
outputs: success or fail
POST api/monitors/deployable/assembly/id
inputs: version and START DETECTED or FAIL DETECTED
outputs: success or fail
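To make the shape of these calls concrete, here is a rough sketch of a monitor
registering itself via POST api/monitors/add; the JSON encoding, payload field
names, and URL are assumptions for illustration, not a settled wire format:

#!/usr/bin/ruby
# Hedged sketch only: payload field names, JSON encoding and the URL are
# assumptions, not a settled wire format for the proposed API.
require 'net/http'
require 'uri'
require 'json'

uri = URI.parse('http://conductor.example.com/api/monitors/add')
request = Net::HTTP::Post.new(uri.path)
request['Content-Type'] = 'application/json'
request.body = {
  :address => '192.168.122.10',  # where the third party monitor listens
  :port    => 8553,
  :version => '1.0'
}.to_json

response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
# Per the proposal, the response should carry a monitor identifier plus
# success or failure.
puts response.code, response.body

The same pattern, aimed at api/monitors/deployable/id with a state of START
DETECTED or FAIL DETECTED in the body, would cover the notification calls.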
Exported by the third party monitor API
---------------------------------------
POST api/deployable/id
inputs: version, deployment ID, and state (START DETECTED, FAIL
DETECTED, STOP DETECTED), deployable metadata
outputs: success or fail
POST api/deployable/assembly/id
inputs: version, assembly ID, deployable metadata, and state (START
DETECTED, STOP DETECTED, FAIL DETECTED)
outputs: success or fail
Regards
-steve
[1] http://pacemaker-cloud.org
Rework oz.git unittests
by James Laska
Greetings,
I've been looking into integrating existing oz unittests with jenkins. It
wasn't hard to have them run in jenkins as is, but I was looking to give jenkins a
better view of pass/fail data over time. One way to do this was to convert
the tests to use py.test (or unittest -- but that's a little crusty). Py.test
can emit junit XML test output, which is jenkins friendly, and has much less
stock test class/method bloat. I've modified the existing oz unittests to be
py.test friendly.
Another annoyance with running tests in jenkins is that the nodes need to be
manually setup with appropriate dependencies ahead of time. While this isn't
hard, it's extra maintenance and one more thing to forget or get wrong when testing. The new
test driver (runtests.sh) will setup a python virtualenv and install required
dependencies there. This is intended to handle deps installation during
unittest execution.
> .gitignore | 6 +
> Makefile | 6 +-
> tests/dependencies.txt | 2 +
> tests/factory/run.sh | 5 -
> tests/factory/test_factory.py | 311 +++++++++++++++++++++--------------------
> tests/runtests.sh | 90 ++++++++++++
> tests/tdl/run.sh | 128 -----------------
> tests/tdl/test.cfg | 135 ++++++++++++++++++
> tests/tdl/test_tdl.py | 131 +++++++++++++++++-
> 9 files changed, 524 insertions(+), 290 deletions(-)
Comments appreciated.
Thanks,
James
Warden auth
by Jan Provazník
https://www.aeolusproject.org/redmine/issues/2118
This patchset replaces authlogic with warden, which is also used
in katello. This is the first step, which allows us to
add more auth strategies (ldap, oauth) or use devise on top
of warden; it also synchronizes us with katello's auth.
What's not so good in this solution is the lack of password
handling/validations, which authlogic provided, so now we have
to handle this ourselves. This can be solved in a later
step by using devise, which is based on warden.
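Until then, the password check lives in a warden strategy we write ourselves.
A minimal sketch of what that looks like (User.authenticate is a stand-in for
whatever credential check conductor ends up with, not existing code):

# Minimal warden password strategy, sketched for illustration only.
# User.authenticate is a placeholder for the real credential check.
Warden::Strategies.add(:conductor_password) do
  def valid?
    # Only run when both credentials were submitted.
    params['username'] && params['password']
  end

  def authenticate!
    user = User.authenticate(params['username'], params['password'])
    user ? success!(user) : fail!('Invalid username or password')
  end
end

The strategy name is then listed in the Warden::Manager config
(default_strategies :conductor_password), which is also where an ldap or
oauth strategy would slot in later.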
Password reset (or db recreation) is required with this patch
because the encrypted password has a different format.