This patch adds a separate IPA authentication target which glues together
Kerberos and LDAP authentication to support IPA password migration.
To test this patch the following two uncommitted patches are needed on
the server side:
- "Allow adding entries with pre-hashed passwords, but don't generate
keys for them."
- "Add BIND pre-op for DS->IPA password migration to ipa-pwd-extop DS
There is a TODO left, namely to read from the DS whether password migration
is enabled or not. Is there already a place where this information can be
found?
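As an illustration of what that lookup could look like, here is a minimal
sketch using the OpenLDAP client API; the attribute name (ipaMigrationEnabled)
and the idea of reading it from a config entry are assumptions on my side,
not something the patch already does:

#include <stddef.h>
#include <strings.h>
#include <ldap.h>

/* Sketch only: check a boolean "migration enabled" flag in the DS.
 * The attribute name and the config entry DN are assumptions. */
static int migration_enabled(LDAP *ld, const char *config_dn)
{
    char *attrs[] = { "ipaMigrationEnabled", NULL };
    LDAPMessage *res = NULL;
    LDAPMessage *entry;
    struct berval **vals;
    int enabled = 0;

    if (ldap_search_ext_s(ld, config_dn, LDAP_SCOPE_BASE,
                          "(objectClass=*)", attrs, 0,
                          NULL, NULL, NULL, 0, &res) != LDAP_SUCCESS) {
        goto done;
    }

    entry = ldap_first_entry(ld, res);
    if (entry == NULL) goto done;

    vals = ldap_get_values_len(ld, entry, "ipaMigrationEnabled");
    if (vals != NULL && vals[0] != NULL) {
        enabled = (vals[0]->bv_len == 4 &&
                   strncasecmp(vals[0]->bv_val, "TRUE", 4) == 0);
        ldap_value_free_len(vals);
    }

done:
    if (res != NULL) ldap_msgfree(res);
    return enabled;
}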
I am pretty close to having ELAPI based on the async processing working.
This means that I finally embraced the logic of async programming and
managed to understand what should be done where and how.
Big progress, I should say. So...
The events now can be created and logged asynchronously. This means that
the function that does the logging will create a memory object that
will travel through async callbacks and that all the processing will be
done within these callbacks. The function itself returns right away
without any blocking. The memory object will travel through
different states and finally end up being logged or not. Caller-provided
callbacks will be invoked at different points of the journey to
communicate the state back to the caller. The sync API wraps the async API.
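To give the above a bit more shape, here is a sketch of the caller-facing
pieces; none of these names are the actual ELAPI interface, they are just my
shorthand for the description:

/* Opaque memory object that travels through the async callbacks. */
struct elapi_event;

/* States the event can be in while it travels toward its sinks. */
enum elapi_event_state {
    ELAPI_EVENT_QUEUED,
    ELAPI_EVENT_SINK_FAILED,
    ELAPI_EVENT_LOGGED,
    ELAPI_EVENT_DROPPED
};

/* Caller-provided callback, invoked at different points of the journey
 * to communicate the state back to the caller. */
typedef void (*elapi_progress_cb)(struct elapi_event *event,
                                  enum elapi_event_state state,
                                  void *caller_data);

/* Returns right away without blocking; all further processing happens
 * inside the async callbacks driven by the event loop. */
int elapi_log_async(struct elapi_event *event,
                    elapi_progress_cb progress,
                    void *caller_data);

/* The sync API wraps the async one: it submits the event and then runs
 * the loop until the progress callback reports a final state. */
int elapi_log_sync(struct elapi_event *event);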
I was planning to describe it in more details on wiki so this is just a
bit of context. I also presented it to Steven and he agreed that this
seems to work the right way.
However, IMO there is a fundamental problem with the whole async approach,
and this is what I just realized and wanted to talk about.
Keep in mind that the purpose of ELAPI is to provide pluggable,
configurable, reliable delivery of the log data to the destination,
preserving the original format and the order of event creation. Everything
works well in the async model except the "order".
I do not see a reasonable way in which the "order" can be preserved in what I
have designed and nearly implemented.
The problem looks like this:
a) The application creates two events in a row, A and B. A has an earlier
time stamp than B.
b) The application logs these events right away, one after another, into two
different targets T1 and T2.
c) T1 consists of sinks S1 and S2, T2 consists of sinks S2 and S3.
d) Sink S1 is slow and flaky because of a remote network connection.
e) Event A starts writing to sink S1 and fails. Because of the
failure the event then moves on to sink S2.
f) Meanwhile B goes to sink S2 right away.
g) Since all these events are processed asynchronously, event A will
arrive at sink S2 after event B.
h) The result is that events A and B will be logged in B-then-A
order, so the time stamp order will be violated.
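Just to make the race concrete, here is a tiny stand-alone illustration of
steps e)-h); it has nothing to do with the real ELAPI code, it only shows
the interleaving:

#include <stdio.h>

struct event { const char *name; int created; };

static void sink_write(const char *sink, const struct event *e)
{
    printf("%s <- event %s (created at t=%d)\n", sink, e->name, e->created);
}

int main(void)
{
    struct event a = { "A", 1 };
    struct event b = { "B", 2 };

    /* e) A is submitted first, but its attempt on S1 fails,
     *    so it is re-queued for S2 ... */

    /* f) ... meanwhile B goes to sink S2 right away, */
    sink_write("S2", &b);

    /* g)+h) ... and the retried A arrives at S2 only afterwards,
     *    so S2 records B before A: the time stamp order is violated. */
    sink_write("S2", &a);
    return 0;
}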
I see only two ways to try to resolve this problem:
a) Resolve the time stamp at the sink, not at the beginning when the event
is just created and passed to ELAPI.
This will solve the problem of order but will create another obstacle.
The issue would then be with one event sent to multiple different
targets at the same time. If the time stamp is resolved at the sink and not
at the beginning of the journey, the instances of the same event sent to
different targets (and thus potentially to different sinks) will have
different time stamps. This will create a huge problem when someone
tries to correlate the events between different logs. It would be
a nightmare to determine whether it was one event or multiple different
events. So I do not like this approach.
b) Hold the events in a queue inside the ELAPI dispatcher and let them go
through the whole sink chain only one at a time. In this case there will
be only one event traveling through the callbacks at any moment (a sketch
follows below). This approach, though it avoids blocking the execution of
the application and guarantees the order, might lead to the following
problems:
1) The events might start to pile up in the queue due to one sink being
faulty. This is bad.
2) If the caller wants to use the sync interface to log an event, the queue
must be emptied first if the sync call uses the same loop. Steven was
suggesting using a different (internal) loop for the sync functions.
This can cause even more problems with the order of the events. Imagine
two parallel event loops trying to pump events into the same set of
sinks... The events will be logged out of sequence for sure.
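For clarity, this is roughly what I mean by option b): a FIFO in the
dispatcher with at most one event walking the sink chain at a time (names
and structure are illustrative only, not actual ELAPI code):

#include <stdbool.h>
#include <stddef.h>

struct queue_item {
    struct queue_item *next;
    void *event;
};

struct dispatcher_queue {
    struct queue_item *head;
    struct queue_item *tail;
    bool in_flight;      /* true while one event walks the whole sink chain */
};

/* Called when a new event is logged and when a sink chain finishes.
 * Order is preserved, but events pile up behind a slow or faulty sink
 * (problem 1 above). */
static struct queue_item *next_event(struct dispatcher_queue *q)
{
    struct queue_item *item;

    if (q->in_flight || q->head == NULL) {
        return NULL;
    }

    item = q->head;
    q->head = item->next;
    if (q->head == NULL) {
        q->tail = NULL;
    }
    q->in_flight = true;   /* cleared again when the sink chain completes */
    return item;
}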
In my original approach the whole dispatcher was either sync or async.
I have now redesigned it so it is always async internally; there is no
difference between the sync and async implementations internally. The only
difference is which loop the API is using: the internal one or the one that
the application uses. And this is determined when the instance of the ELAPI
dispatcher is initialized.
I think: if the application wants to generally use async processing but
from time to time log an event synchronously and wait for its delivery
because it is important, then it should instantiate two different ELAPI
dispatchers and configure them in such a way that sinks from one
dispatcher do not overlap with sinks from the other dispatcher. If they do,
it might lead to events logged to the same file out of sequence.
Is this recommendation acceptable? I think it is.
If we say: "Use two dispatchers with two different configuration files
if you want to do sync and async logging of the events from one
application and better not use same file as destination for file sink in
these two configurations otherwise the order of the events is
unpredictable. Generally do not use one ELAPI dispatcher for async and
sync logging since it may cause events delivered out of sequence to
some sinks." then we offset part of the problem to the developer and
administrator who develop application and configure sinks.
So what does that leave us with?
It will be up to the application developer to decide whether he wants to use
the same async loop for the second ELAPI dispatcher or use an internal one
provided by ELAPI. We will document the pros and cons of each and leave
it to the developer. Fine - nothing specific is needed here then.
However, it leaves us with the problem of order inside one dispatcher in
the case of async logging.
Should we pile all the events in one queue, resolve the time stamp at the
sink, or is there some other approach I do not see? Maybe have two time
stamps? One the "created" time stamp and another the "recorded" time stamp?
Any ideas will be appreciated.
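For concreteness, the two time stamp variant could be as simple as carrying
both values in the event (field names here are illustrative only):

#include <time.h>

struct elapi_event_times {
    struct timespec created;    /* stamped once when the application creates
                                   the event; identical in every copy */
    struct timespec recorded;   /* stamped per sink, at the moment the sink
                                   actually writes the event */
};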
Engineering Manager IPA project,
Red Hat Inc.
This patch improves the handling of ccache files. It addresses two
issues already discussed on the list.
When randomized ccache files are used (or the client process id is used
in the name of the ccache file), each authentication of the user creates
a new ccache file. This patch saves the name of the ccache in sysdb and
reuses the saved file name if the user has running processes on the
system. So a single user only has one active ccache file.
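The reuse logic is roughly the following (the helper names in this sketch
are hypothetical stand-ins, not the actual sysdb calls used by the patch):

#include <stddef.h>

extern const char *sysdb_get_saved_ccname(const char *user);
extern void sysdb_save_ccname(const char *user, const char *ccname);
extern int user_has_running_processes(const char *user);
extern const char *create_randomized_ccname(const char *user);

static const char *choose_ccache_name(const char *user)
{
    const char *saved = sysdb_get_saved_ccname(user);
    const char *fresh;

    /* Reuse the stored name while the user still has processes on the
     * system, so a single user keeps a single active ccache file. */
    if (saved != NULL && user_has_running_processes(user)) {
        return saved;
    }

    /* Otherwise pick a new randomized name and remember it in sysdb. */
    fresh = create_randomized_ccname(user);
    sysdb_save_ccname(user, fresh);
    return fresh;
}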
If the authentication happened when the system was offline, the Kerberos
related environment variables were not sent to the client. If a later
authentication happens online, the old session still cannot see the
ccache file with the valid credentials. This patch sends the environment
variables back to the client even when offline.
[PATCH 1/2] Add Simo's ipachangeconf
This patch adds the ipachangeconf class from FreeIPA and packages it in
the makefile and with Python setuptools.
[PATCH 2/2] Change the upgrade script to use ipachangeconf
With this patch, the upgrade script we use for changing the config files
is able to keep ordering and comments.
This patch adds the possibility to validate the credentials obtained from
a Kerberos server with a local keytab. The boolean option krb5_validate
switches the validation on and off. It is disabled by default in the
Kerberos provider and enabled by default in the IPA provider.
Typically root privileges are needed to read a keytab. As a consequence,
if validation is enabled the privileges cannot be dropped before starting
krb5_child, but only after reading the keytab.
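For a plain Kerberos domain the option would therefore have to be switched
on explicitly, e.g. with something like this in sssd.conf (illustrative
fragment; realm and domain names are made up):

[domain/EXAMPLE]
auth_provider = krb5
krb5_realm = EXAMPLE.COM
krb5_validate = true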