Hi all,
Can everyone who makes RPMs for Aeolus Project components please push
updated ones to the upstream testing repo (for both RHEL6 and F16)?
Matt Wagner and I are writing up docs for the upcoming 0.9.0 release, so
we need the latest upstream code in packages to do screenshots and
such. :)
Regards and best wishes,
Justin Clift
--
Aeolus Community Manager
http://www.aeolusproject.org
On 04/04/2012 09:42 AM, Jan Provaznik wrote:
> On 04/03/2012 05:10 PM, Scott Seago wrote:
>> On 04/03/2012 08:08 AM, Jan Provaznik wrote:
>>> On 03/30/2012 03:52 PM, Scott Seago wrote:
>>>> Rather than paste in the entire text, I'm just linking to the wiki
>>>> writeup. Please read the whole thing first and if you want to
>>>> comment on anything specific, quote it and comment on this thread.
>>>>
>>>> https://www.aeolusproject.org/redmine/projects/aeolus/wiki/Adding_Permissio…
>>>>
>>>> Scott
>>>
>>>> In "LDAP Groups" mode we won't pull in all user groups from the LDAP
>>>> server, as there may well be many groups that aren't related to
>>>> Conductor use, so we'll have an "Import LDAP User Group" action,
>>>> analogous to "Create User Group" action for standalone. This will
>>>> bring up a list of LDAP groups exposed by the LDAP server, allowing
>>>> the admin to choose one (or more? -- not sure if we'll allow
>>>> multiple groups at once yet) to import. The UserGroup will be created
>>>> in conductor, with the memberships pulled into Conductor at the same
>>>> time. Any group members that are not yet imported into the Conductor
>>>> database as Conductor users will be pulled in at the same time.
>>>>
>>>
>>> "Import LDAP User Group" action implies that an admin has to manually
>>> add a new group in conductor anytime the group is added on LDAP server?
>>> It would be nice to have both user and group lists configurable just by
>>> setting ldap query (in config file).
>>>
>> This is correct. I don't think we want to automatically pull in all LDAP
>> groups, since most of them will probably not be relevant for Conductor
>> purposes. We _could_ add a global "automatically import _every_ group
>> from LDAP" flag at some future point -- it wouldn't really affect much
>> of the design here, but we don't want that to be mandatory, so we still
>> need the 'import' feature.
>>
>
> Well, centralized user and group management is a major benefit of using
> LDAP. By forcing an admin to add groups both in LDAP and in Aeolus,
> this benefit is lost.
>
> I didn't mean that we import all groups - an LDAP query is a powerful
> tool, comparable to SQL, that allows you to filter only the groups
> which meet some restriction (e.g. "give me all groups which have the
> attribute 'webapp' set to '1'"). Alternatively, an LDAP subtree can be
> used to specify the group subset.
>
> So the advantage would be that an admin sets the proper LDAP query in
> the Aeolus config only once; then all user/group management can be done
> outside Aeolus with a preferred tool.
>
> All of the above are just brainstorming ideas; I don't have a strong
> opinion about this.
>
So it sounds like we potentially have 3 different ways to create/manage
groups:
1) local groups: standard CRUD within Rails, as described in the doc
2) explicitly-added LDAP groups: admin picks a specific group to add
from LDAP, as described in the wiki doc
3) LDAP query-managed group list: Admin defines an LDAP query that
returns a list of groups, all of which should be "imported" into
Conductor and maintained automatically.
So 3) would be something new (not yet defined in my wiki doc). It would
be implemented as follows:
a) admin enters a query that returns a list of groups -- should this be
done via the UI or via a config file?
b) groups have a "type" field: local, imported, and query-backed. For
local groups, add/delete group _and_ add/remove members are supported.
For imported groups, add/remove group is supported, but membership is
maintained via LDAP sync. For query-backed groups, add/remove group
isn't supported either, as the group itself is also maintained via LDAP
sync.
c) the LDAP re-sync code should be augmented to re-sync not just
explicitly-imported groups but also query-imported groups. Essentially,
re-syncing becomes a two-step process (a rough sketch follows):
1) re-sync the list of query-backed groups
2) re-sync membership for query-backed and imported groups
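A minimal Ruby sketch of that two-step re-sync, assuming a Group model
with a group_type column and hypothetical helpers
(run_configured_ldap_group_query, sync_members_from_ldap) that don't
exist in Conductor today:

  def resync_ldap_groups
    # Step 1: re-sync the list of query-backed groups against the
    # admin-configured LDAP query (hypothetical helper).
    ldap_group_names = run_configured_ldap_group_query

    Group.where(group_type: 'query_backed').each do |group|
      group.destroy unless ldap_group_names.include?(group.name)
    end
    ldap_group_names.each do |name|
      Group.where(name: name, group_type: 'query_backed').first_or_create
    end

    # Step 2: re-sync membership for query-backed and imported groups
    # (hypothetical per-group helper).
    Group.where(group_type: %w(imported query_backed)).each do |group|
      group.sync_members_from_ldap
    end
  end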
I'm leaving users out of this for the moment, since this feature is only
for managing groups, but if we decide to implement query-backed groups
we could do something similar with users at some point as well.
So from a feature planning point of view, we now have 3
requirements-related questions:
1) Should we allow LDAP-backed and local groups to exist concurrently?
If we do this we'll enable them via separate config vars, so the
sysadmin can still restrict to LDAP-only or local-only -- but we'd have
the option of doing both
2) Do we need LDAP query-backed groups in addition to explicit
"imported" groups? This would be a third category to enable via the
config file
3) If we do 2) -- do we want a UI to add/remove group queries, or would
doing this via a config file be sufficient? In the latter case, would we
need to allow multiple queries, or would one be fine?
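On the config-file option in 3), a group query could be as simple as an
LDAP filter string. A minimal sketch using the net-ldap gem (the
'webapp' attribute, base DN, and host are made-up examples, not an
existing Conductor feature):

  require 'net/ldap'

  ldap = Net::LDAP.new(host: 'ldap.example.com', port: 389)
  filter = Net::LDAP::Filter.construct('(&(objectClass=groupOfNames)(webapp=1))')

  ldap.search(base: 'ou=groups,dc=example,dc=com', filter: filter) do |entry|
    # entry[:cn] would become the Conductor group name;
    # entry[:member] lists the member DNs whose users we would sync.
    puts "#{entry[:cn].first}: #{entry[:member].inspect}"
  end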
>> You bring up another point -- we could possibly use a custom query
>> to generate the group list. Again, this would probably need to be in
>> addition to the manual import, and possibly not for the first version
>> here (unless there's a compelling need for it)
>>>> For sync-on-use, the idea is we would store a "last sync time" every
>>>> time we re-sync group memberships. Any time we make a permissions
>>>> check, the checking code would call a method on Group to refresh if
>>>> necessary (i.e. refresh if last refresh was longer ago than
>>>> $refresh-interval, otherwise return and do nothing).
>>>> Group.refresh_from_ldap_if_necessary would block while checking last
>>>> sync time and (if necessary) while performing the update using an
>>>> ActiveRecord pessimistic lock on the "last refresh time" data element
>>>> in order to prevent more than one web UI front end process (or
>>>> thread) from attempting to re-sync at the same time.
>>>>
>>>
>>> -1 for this (+1 for doing this in the background); I have experienced
>>> many situations where a customer's LDAP server was responding too
>>> slowly. A typical real-life situation is: LDAP is used for SMTP auth,
>>> the SMTP server is overloaded -> LDAP responses are slow. IOW, slow
>>> LDAP is much more common than you would expect.
>>>
>>> We have other background tasks anyway, so there is not much additional
>>> work to do this in the background.
>>>
>>> Jan
>> OK, yeah, performance was my main concern with sync-on-use. Jay was
>> more firmly of the opinion that the polling solution was less
>> desirable, but if slow LDAP server response is going to be a common
>> occurrence, I'd prefer the background polling as well.
>>
>> Scott
>>
>
This patchset adds translations for credential definition labels,
instance and deployment statuses, user status, and unlimited quota.
Applying the credentials translations patch requires reseeding the
application.
Hi, sending a proposal for the "robust instance launching" scenario. Any
thoughts or improvement ideas are welcome.
Cut&paste from
https://www.aeolusproject.org/redmine/projects/aeolus/wiki/Robust_instance_…
Summary
This page describes the multi-instance deployment launch process.
Owner
Jan Provaznik (jprovazn(a)redhat.com)
Current status
Targeted release:
Last update:
When launching a deployment, a deployment object is created and saved,
then the 'launch' method is called on this deployment, which creates the
required instances in the Conductor DB and associates them with the
deployment object. It then tries to find a suitable 'match' (combination
of hwp, provider account, realm) where all instances of this deployment
can be launched. If a match is found, launch params are computed for all
instances. Finally, we iterate through all instances and try to launch
them. If any instance launch fails, we set the create_failed state on
that instance and continue with the next.
None of the above steps are in a transaction; IOW, if a match is not
found, the launch params upload fails, or an instance launch fails, the
deployment and instances stay created. There is no retry or fallback
plan if an error occurs (for example, the provider of the chosen match
is not accessible).
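Illustrative only - the current flow described above, compressed into
schematic Ruby to show where partial state is left behind (names here
are schematic, not actual Conductor API):

  def launch_deployment_currently(params)
    deployment = Deployment.create!(params)    # saved immediately
    instances  = deployment.create_instances!  # saved immediately
    match = find_match(instances)              # failure leaves records behind
    upload_launch_params(instances, match)     # failure leaves records behind
    instances.each do |instance|
      begin
        launch(instance, match)
      rescue
        # no rollback: mark the instance failed and continue with the next
        instance.update_attribute(:state, 'create_failed')
      end
    end
  end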
Screencast Demo
1) Successful deployment launch
All instances should be launched in the proper order.
2) Launch on the first provider account fails, succeeds on the second
provider account
Two of the deployment's instances launch; the third instance fails to
launch.
The launch on the first account should be rolled back - the two launched
instances should be stopped.
The launch should then be done on the second account and should be
successful.
3) Launch fails on both providers
Two of the deployment's instances launch; the third instance fails to
launch.
The launch on the first account should be rolled back - the two launched
instances should be stopped.
Same for the second account.
The deployment should be destroyed.
A record of this failed launch should be created in a log.
Implementation tasks
Tasks which are already in Redmine cover the whole deployment launch
process, though they may be broken into smaller tasks soon:
#3060 - Refactor the launch process to include better error reporting,
retries, switching to alternate providers etc.
#3061 - Ensure that the UI doesn't contain unlaunched instances
#3062 - Ensure that multi-instance deployments always launch fully or
not at all. Conductor should automatically clean up partial deployments
Detailed description
The whole deployment launch process can be split into 3 phases:
1) pre-launch: we prepare the deployment and instance objects (in the
Conductor DB), prepare launch params, and compute dependencies between
instances in this phase - if anything goes wrong, we just roll back;
nothing is saved and the user stays on the launch page
2) launch of non-blocked instances: send a dc-api create instance
request for each instance which is not blocked. This step is done in the
foreground, together with phase 1, when a user presses the "launch"
button (note: it's possible to do this call from dbomatic too, if we
decide it's better).
3) launch instances on state change: instances which have not been
launched in phase 2 because they depend on instanceX are launched when
instanceX is running. This can be done from the instance after_update
callback - when an instance's state changes to 'running', get the list
of instances which become unblocked and launch them (see the sketch
below). Phase 3 will usually be executed in the background, because
instance states are usually updated by dbomatic (though not always - in
some cases an instance's state is updated directly on the dc-api request
call).
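A minimal sketch of the phase-3 trigger, assuming an Instance model
with a state column; the unblocked? and launch! helpers are
hypothetical:

  class Instance < ActiveRecord::Base
    belongs_to :deployment
    after_update :launch_unblocked_instances

    private

    def launch_unblocked_instances
      return unless state_changed? && state == 'running'
      # launch every sibling whose dependencies are now satisfied
      deployment.instances.select(&:unblocked?).each do |instance|
        instance.launch!  # hypothetical: sends the dc-api create request
      end
    end
  end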
If an instance launch fails for some reason, we try to deploy somewhere
else: stop all instances which have already been launched, then find
another match (skipping all matches which failed) and reset the state to
NEW for all instances (or drop and recreate them).
Launch progress page (TBD)
Angus suggested that there could be something like a "launch progress
page" where details of what's being done with the deployment would be
shown. So if the user checks the "show me details" checkbox before
clicking the "launch" button, he is redirected to this progress page,
where info about which step is being done is displayed:
"Selecting provider account... account_name"
"Making launch request for instance... x"
This could probably just be a display of all events associated with
this deployment.
Showing this page would be optional; alternatively, it could be part of
the deployment's show page, where a user could be redirected after
launch.
High-level implementation details
Add a 'state' attribute to the Deployment model; states can be:
new - the deployment is created in the Conductor DB, but no instance has
been launched yet
pending - at least one instance launch has been requested
failed - final state, deployment launch/shutdown failed
rollback_in_progress - an error occurred while launching an instance,
and there are already some launched instances which have to be stopped
rollback_failed - stopping of the already launched instances failed
rollback_complete - the already launched instances were stopped; now the
deployment can be launched somewhere else
running - all instances were successfully launched and are in the
running state
shutting_down - shutdown was initiated
stopped - all instances are stopped
Allowed state transitions:
new -> pending
pending -> running|rollback_in_progress|failed
rollback_in_progress -> rollback_complete|rollback_failed
rollback_complete -> pending|failed
running -> shutting_down
shutting_down -> stopped
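A minimal sketch of enforcing these transitions; the state names come
from the list above, the rest (transition_to!) is hypothetical:

  class Deployment < ActiveRecord::Base
    TRANSITIONS = {
      'new'                  => %w(pending),
      'pending'              => %w(running rollback_in_progress failed),
      'rollback_in_progress' => %w(rollback_complete rollback_failed),
      'rollback_complete'    => %w(pending failed),
      'running'              => %w(shutting_down),
      'shutting_down'        => %w(stopped),
    }

    def transition_to!(new_state)
      unless TRANSITIONS.fetch(state, []).include?(new_state)
        raise "illegal deployment state transition: #{state} -> #{new_state}"
      end
      update_attribute(:state, new_state)
    end
  end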
The deployment state will be used to track the deployment's history and
to decide what to do on a change - for example, if the deployment's last
instance is stopped, a deployment relaunch is done only if the
deployment was in the rollback_in_progress state; otherwise the
deployment stays stopped.
The state will also be used in the UI for displaying the deployment's
state - currently we use only 3 states (pending, running and failed),
and these are computed "per request" by checking the state of all
instances in the deployment.
deployment_launch:
in transaction do
  create deployment
  create deployment's instances
  compute instance dependencies (covered by task 3054)
  find a match where all instances can be launched (covered by task 3064)
  invoke instances_launch
on error:
  deployment and instances are not created in Conductor's DB
  user stays on the deployment launch page
  a proper error with the reason why the launch was not successful is
  displayed
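In Rails terms, a rough controller-action shape for deployment_launch;
the helper names (create_instances!, compute_dependencies, find_match,
instances_launch) are assumptions from the pseudocode, not existing
Conductor API:

  class LaunchError < StandardError; end

  def deployment_launch(params)
    Deployment.transaction do
      deployment = Deployment.create!(params)
      instances  = deployment.create_instances!  # hypothetical helper
      compute_dependencies(instances)            # covered by task 3054
      match = find_match(instances)              # covered by task 3064
      raise LaunchError, "no suitable match found" unless match
      instances_launch(deployment, match)
    end
  rescue LaunchError, ActiveRecord::RecordInvalid => e
    # the transaction rolled back, so nothing was saved; the user stays
    # on the launch page and sees why the launch failed
    flash[:warning] = e.message
    render :launch
  end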
instances_launch:
for each deployment's instance which is not blocked do
  check quota
  send dc-api launch request
on error:
  initiate deployment rollback
instance's after_update callback:
if the instance is in running state, invoke instances_launch
elsif the instance is in failed state, invoke deployment_rollback
deployment_rollback:
if all instances are stopped/failed, invoke deployment_relaunch
else send a stop request to any instances in pending or running state
deployment_relaunch:
find a new match where all instances can be launched (skipping matches
which we tried before)
if a match is found, invoke instances_launch
elsif no new match is found, retry all matches -> use the first match
which failed before
if no match is found, create a log entry about the failed launch in some
history log (covered by scenario 3037) and destroy this deployment
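And a matching sketch for the rollback/relaunch pair, again with
hypothetical helpers (stop!, failed_matches, log_failed_launch):

  def deployment_rollback(deployment)
    if deployment.instances.all? { |i| %w(stopped create_failed).include?(i.state) }
      deployment_relaunch(deployment)
    else
      deployment.instances.select { |i| %w(pending running).include?(i.state) }
                .each(&:stop!)  # hypothetical stop request
    end
  end

  def deployment_relaunch(deployment)
    match = find_match(deployment.instances, skip: deployment.failed_matches)
    if match
      instances_launch(deployment, match)
    else
      log_failed_launch(deployment)  # covered by scenario 3037
      deployment.destroy
    end
  end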
Instance launch timeout
If, on deployment launch, an instance stays in the pending state for X
minutes, the launch is terminated and a deployment rollback is
initiated.
This timeout should be configurable; the default could be 15 minutes?
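A hypothetical periodic check (run from dbomatic or a cron-style job)
for this timeout; the time_last_pending column and the constant are
assumptions for illustration:

  LAUNCH_TIMEOUT = 15 * 60  # seconds; should come from config

  def terminate_timed_out_launches
    Instance.where(state: 'pending')
            .where('time_last_pending < ?', Time.now - LAUNCH_TIMEOUT)
            .each do |instance|
      instance.update_attribute(:state, 'create_failed')
      deployment_rollback(instance.deployment)
    end
  end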
Future plan
The above is a short/mid-term solution for improving instance launching;
it doesn't add any new dependency/tool. The long-term solution is to
integrate Heat (https://github.com/heat-api), which is expected to do
everything we need (take care of dependencies between instances, launch
instances in the proper order, roll back failed launches,
monitoring...).
We don't care about dependencies between instances when stopping a
deployment.