[SSSD] Patch to fix LDAP ID backend GSSAPI credential expired messages

Eugene Indenbom eindenbom at gmail.com
Thu Apr 1 11:05:03 UTC 2010


Hi Simo,

On 03/31/2010 11:08 PM, Simo Sorce wrote:
> Also given the complexity of the patch can you use the --patience
> switch to git format-patch and resubmit it, preferably rebased on top
> of master, but if it is too much work just tell me on top of which
> commit you created it so I can simply rebase my test branch and apply
> it w/o issues.
> It should make things much more readable as unfortunately it seem the
> patch does not apply on top of master now, so reviewing it properly is
> a bit hard.
>    
The git version of the patch was based on parent commit 
80c8a4f94d54b23bce206fdd75ff2648977ce271. The original version was based 
on the 1.1.0 release. I have rebased the patch onto the latest version 
(7acaaa6c6563cf3b8ab20bf6431898d20d735842) and attached it. The format 
is git, at least my git thinks so.

NB: I am not getting impatient. Things seem to move quite fast. My 
English betrays me here and there, leaving a lot of room for 
misunderstanding.
>> The patch can be theoretically split into 3 parts:
>> 1. Changes to ldap_child related to returned ticket expiration date;
>>      
> This should be a separate patch.
>    
Then it should be applied first. The changes to return the ticket 
expiration date are in:

    src/providers/ldap/ldap_child.c
    src/providers/ldap/sdap.h
    src/providers/ldap/sdap_async.h (first hunk)
    src/providers/ldap/sdap_async_connection.c (all but last hunks)
    src/providers/ldap/sdap_async_private.h
    src/providers/ldap/sdap_child_helpers.c

I will create a separate patch for them next time.
>> 2. Changes to failover subsystem needed to return number of servers
>> registered for failover;
>>      
> I am trying to understand what's the reason for this.
> Why the retries should depend on server availability ?
> I would expect to retry once per server until there are servers
> available. Can you explain why you think you should try to calculate
> servers availability in advance ?
>
> Just fyi, the problem is that we might not know in advance, in future
> we plan to add support for reading SRV records to determine the list of
> servers, so the number of available servers may change w/o notice.
>    
That's exactly the reason I have added methods to get the number of 
servers. The number of connection retries for each operation is 
calculated as twice the number of servers:

    - one retry per server for a broken cached connection
    - one retry per server for a failed connection attempt

This can be illustrated by a typical failure scenario:

    - the operation reuses a cached connection that is already broken
    because the LDAP service (not the computer) is down
    - so the first operation attempt detects that the cached connection
    is broken
    - the second operation attempt determines that the LDAP port is down
    on this server (failover mechanics will retry the last used server)
    - on the third attempt the operation connects to the next failover
    server
    - if this server goes down, the scenario repeats.

Please note that if the failover mechanics report that there are no more 
servers to try (see the last hunk in the operation attempt), the 
operation is stopped altogether and the backend is put offline.
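
To make the retry budget concrete, here is a minimal sketch. It assumes 
a hypothetical be_fo_get_server_count() accessor; the actual function 
name in the patch may differ:

    /* Illustrative only: derive the per-operation retry budget from
     * the number of servers registered for failover. */
    static int sdap_id_max_retries(struct be_ctx *be_ctx,
                                   const char *service_name)
    {
        /* hypothetical accessor added by the failover changes */
        int server_count = be_fo_get_server_count(be_ctx, service_name);

        /* one retry per server for a broken cached connection, plus
         * one retry per server for a failed connect attempt */
        return 2 * server_count;
    }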

The changes to get the number of failover servers are in:

    src/providers/data_provider_fo.c
    src/providers/dp_backend.h
    src/providers/fail_over.c
    src/providers/fail_over.h

>> 3. Changes to LDAP ID backend connection and retry logic.
>>
>> As you can see, the first two items are really small and absolutely
>> pointless without the last.
>>
>> The reason why the changes to LDAP ID backend connection and retry
>> logic must go together are very simple: old logic relies on gsh member
>> of sdap_id_ctx, while in new logic there is no such a member.
>>
>> The reason why gsh needs to go away is as follows:
>> 1. gsh enforces that there will be one and only one connection to DS;
>>      
> Yes, this is completely intentional. Hence the questions.
>
>    
>> 2. When connection is about to expire we can not use it for new
>> request as it will expire halfway;
>>      
> ok
>
>    
>> 3. But at the same time connection could be yet busy with previous
>> request;
>>      
> I am not sure why this would be relevant though.
>
>    
>> 4. Therefore we have to make a new connection and close old
>> one as soon as requests using it are finished.
>>      
> Yes, but this can be easily done with a destructor attached to the
> queue code within the sdap_handle, that's why the ops list is in there.
>
>    
The destructor is called whenever talloc_zfree(ctx->gsh) is called, 
which is currently done in at least 15 places. And as there is no 
reference counting for requests using the connection, all other 
operations are immediately aborted.

The main feature of my patch is reference counting of sdap_id_connection 
by sdap_id_op, to ensure that the connection is closed as soon as all 
consumers are done and the connection is no longer cached.
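
The release rule itself is simple. A minimal sketch of it (needs 
<stdbool.h> and <talloc.h>; the field names are illustrative, the 
patch's struct layout may differ):

    /* Illustrative model of sdap_id_connection release. */
    struct sdap_id_connection {
        int ops_in_use;    /* sdap_id_op consumers holding a reference */
        bool is_cached;    /* still published as ctx->cached_connection */
        int notify_lock;   /* non-zero while in the connect notify loop */
    };

    static void sdap_id_release_connection(struct sdap_id_connection *conn)
    {
        if (conn->ops_in_use == 0 && !conn->is_cached
                && conn->notify_lock == 0) {
            /* frees the sdap_handle too, via the talloc hierarchy */
            talloc_free(conn);
        }
    }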
>>> The goal of SSSD is to never use more than one connection at a time
>>> for account information. So your patch is kind of changing our
>>> fundamental goal by allowing multiple connections. We need to
>>> carefully evaluate that part.
>>>        
>> As I have explained above, the only time we have more then one
>> connection to DS is when old connection is about to expire and we need
>> to open new one. So when ticket lifetime is long enough (as it is in
>> normal Kerberos configuration) there will be no more then 2
>> connections open.
>>      
> I still don't see why we should remove gsh from sdap_ctx. gsh is "the
> connection you should currently use". We probably need to be able to
> tell if a re-connection is happening already and delay new requests
> until that is done.
>
> I think this would be something similar to the pooling code we have in
> the nss daemon where we check if we are already performing a specific
> request and use a queue to wait for a reply.
>    

The gsh member of sdap_id_ctx is replaced with the cached_connection 
member, which has the same semantics: it is the connection that will be 
used by the next operation requesting a connection. Unlike the gsh 
member, cached_connection is not accessed and modified directly by 3 
source files, but rather managed in an opaque way by the 
sdap_id_op_connect and sdap_id_op_done methods.

>>> I see you started passing around sdap_id_op. The memory hierarchy
>>> around sdap_is_op is very delicate and required a lot of very
>>> careful handling to avoid having it disappear under our feet at the
>>> wrong time. It is meant to represent a single ldap operation tied
>>> to a specific ldap context, any changes to its use should be in a
>>> separate patch that I want to review carefully. But ideally
>>> sdap_id_op is opaque to most of the code and is internal to the
>>> processing of replies from the openldap libraries. It should never
>>> be used out of this context.
>>>        
> I have to caution here that I confused sdap_id_op and sdap_op when I
> wrote this remark. It looks like sdap_id_op is a queue for requests,
> and that goes in the right direction, but it looks a bit overly complex.
> Yet I have not fully analyzed the patch because I can't apply it.
>
>    
>> I agree that both sdap_id_op and sdap_id_connection are opaque types.
>> You can move the definitions to ldap_common.c from headers. More over
>> even declaration of sdap_id_connection can be visible only to
>> ldap_common.c.
>>      
> The point I am trying to make here is that I don't like
> sdap_id_op_handle(), as that means you are probably using an old
> connection. Newer calls should always use new connections, so what is
> the point of trying to fetch and old handler ?
>    
There are two primary methods in the new connection logic: 
sdap_id_op_connect and sdap_id_op_done. They define the scope of a 
single operation, during which the LDAP connection is in use. The 
operation may consist of a single query to the server, as it does now, 
or it may span several interrelated queries that must run on the same 
server, so if any of the queries fails we have to redo the complete 
thing on another server.

sdap_id_op represents business logic one level higher than sdap_op: 
sdap_op is a single request to a single LDAP server, while sdap_id_op 
represents a retry-capable, possibly multistage operation targeted at a 
failover cluster of LDAP servers.

I agree with your concern about possible repeated use of a stale 
connection. I will change the code to release the connection in 
sdap_id_op_done, so that the next time sdap_id_op_connect is called 
either a cached or a new connection will be returned.
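
To illustrate the calling pattern, here is a self-contained toy model 
of the connect/done retry loop. These are plain-C stand-ins, not the 
patch code; the real functions are asynchronous and tevent-driven:

    #include <stdio.h>

    enum dp_error { DP_ERR_OK, DP_ERR_FATAL };

    struct sdap_id_op { int retries_left; };

    /* stand-in for the LDAP queries; returns 0 on success,
     * non-zero when the connection broke mid-operation */
    static int run_queries(struct sdap_id_op *op) { (void)op; return 1; }

    /* models sdap_id_op_done: decides whether a retry is suggested */
    static int op_done(struct sdap_id_op *op, int result,
                       enum dp_error *dp_error)
    {
        if (result == 0 || op->retries_left-- <= 0) {
            *dp_error = (result == 0) ? DP_ERR_OK : DP_ERR_FATAL;
        } else {
            *dp_error = DP_ERR_OK;   /* retry suggested: reconnect */
        }
        return result;
    }

    int main(void)
    {
        struct sdap_id_op op = { .retries_left = 2 }; /* 2 * servers */
        enum dp_error dpe;
        int ret;

        do {
            /* sdap_id_op_connect would (re)acquire a connection here */
            ret = op_done(&op, run_queries(&op), &dpe);
        } while (ret != 0 && dpe == DP_ERR_OK);

        printf("ret=%d dp_error=%d\n", ret, dpe);
        return 0;
    }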

> Btw can you avoid comments as /* sdap handle */ when defining a
> sdap_handle in a structure? it look pointless to me. Comments are
> good but only when they tell you something that is not clear from
> the code at hand already :)
>    
The reason I have put a comment on this member is that all the other 
members needed a comment, and I prefer to have comments on all members 
rather than on all but one. Anyway, that's a matter of coding style. If 
you find the comment superfluous, let's remove it.

>> I were not really sure what coding style is used in project. There are
>> files coded quite differently from each other.
>>
>> On the other hand I do not see why you find handling of these
>> structures delicate:
>>
>> 1. sdap_id_op is owned by operation state (e.g. by global_enum_state).
>> So it will be automatically destroyed as operation (tevent request) is
>> completed
>>      
> Why sdap_id_op_connect() is not a tevent request ?
> Passing around callbacks is usually frowned upon, as tevent requests is
> the way we want to handle any async event if at all possible.
>
> Unless there is a *very* good reason why it is not a tevent_req then
> this is one of the things that needs to change before the patch can be
> accepted.
>
> I know that at first it seem it doesn't matter, but trust me, there is
> an almost 4 years of attempts in the samba community to get up with the
> tevent_req style for a number of subtle and painful reasons. We have
> gone through at least 4 different ways to do continuations, and
> tevent_req is the one that finally makes thing bearable. The style is
> important both formally and substantially for too many reasons to
> explain in this mail (you can jump on IRC in the #freeipa channel if you
> want to discuss it).
>    
OK, that will be on my TODO list. I'll change that as soon as you finish 
the first review pass.
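
For reference, my understanding of the requested shape, as a minimal 
sketch (needs <talloc.h> and <tevent.h>; the eventual patch may differ 
in detail):

    struct sdap_id_op;  /* the patch's opaque operation type */

    struct sdap_id_op_connect_state {
        struct sdap_id_op *op;
    };

    struct tevent_req *sdap_id_op_connect_send(TALLOC_CTX *memctx,
                                               struct tevent_context *ev,
                                               struct sdap_id_op *op)
    {
        struct tevent_req *req;
        struct sdap_id_op_connect_state *state;

        req = tevent_req_create(memctx, &state,
                                struct sdap_id_op_connect_state);
        if (req == NULL) return NULL;
        state->op = op;

        if (0 /* a usable cached connection exists */) {
            /* complete immediately, but deliver via the event loop */
            tevent_req_done(req);
            return tevent_req_post(req, ev);
        }

        /* otherwise queue the op on the in-progress connect; its
         * completion handler calls tevent_req_done/error later */
        return req;
    }

    int sdap_id_op_connect_recv(struct tevent_req *req)
    {
        enum tevent_req_state tstate;
        uint64_t err;

        if (tevent_req_is_error(req, &tstate, &err)) {
            return (int)err;
        }
        return 0; /* EOK */
    }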
>> 2. sdap_id_connection is owned by sdap_id_ctx and logic of its
>> life-cycle boils down to single method - sdap_id_release_connection.
>> Connection is released when:
>>     a) There is no operation using it
>>     b) It is not cached
>>     c) It is not in connection notify loop (notify_lock == 0)
>>
>> I hope I have explained why changes were made the way they have been
>> done.
>>      
> What I don't understand here is what you added sdap_id_op_connection at
> all, sdap_handle is meant to represent a connection, why adding yet
> another structure here ?
>    
sdap_id_connection represents a connection attempt and, later, an 
established connection at the sdap_id_ctx level. It corresponds to 
sdap_handle the same way struct be_svc_data in data_provider_fo.c 
corresponds to struct fo_service. It simply holds the data required by 
sdap_id_ctx to keep track of the connection. And as this structure is 
completely opaque and hidden from the outside world, this should not be 
a problem. Probably a better name for it would be sdap_id_handle_data.
> Also I *really* don't like the fact that sdap_id_op_connection has
> members like: connect_req, expire_timer
>    
sdap_id_connection owns both connect_req and expire_timer: it is their 
TALLOC_CTX, so there is no way they can be deleted without 
sdap_id_connection knowing it. And although the talloc library removes 
the need to keep every pointer we have allocated, I'd rather keep them 
in order to be able to cancel the timer and the request if needed.
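
A tiny self-contained talloc example of why this is safe; nothing 
SSSD-specific, just the ownership rule (build with -ltalloc):

    #include <stdio.h>
    #include <talloc.h>

    struct fake_req { int unused; };

    static int fake_req_destructor(struct fake_req *r)
    {
        (void)r;
        printf("pending request freed with its owning connection\n");
        return 0;
    }

    int main(void)
    {
        /* stands in for sdap_id_connection */
        TALLOC_CTX *conn = talloc_new(NULL);
        /* stands in for connect_req: a talloc child of the connection */
        struct fake_req *req = talloc(conn, struct fake_req);
        talloc_set_destructor(req, fake_req_destructor);

        /* freeing the connection frees the child request as well, so
         * the stored pointer can never dangle past the connection */
        talloc_free(conn);
        return 0;
    }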
> The only way to access a request should always be through the
> tevent_req_callback_data() function, because that guarantee the request
> is around (it's the parent of the subreq you are in).
> If you store it in a structure, you can generally end up trying to
> access freed memory during clean up operations.
>
> Storing these pointers usually means the code is wrong, and you may
> have been so careful and took it in account, but in general I prefer to
> avoid it, because even if you were careful enough, almost certainly the
> next person that will touch the code will screw it up.
> I am still reading through it, but these are normally signs that the
> architecture needs to be heavily adjusted.
>
>    
See above. The ownership goes the other way around: connect_req __is__ a 
sub-request of the connect operation, not vice versa.
>> I really do not see a way to split the patch and would appreciate very
>> much if you give me some advice on how to make it more readable and
>> easier to understand.
>>      
> Each major architectural change should be in a separate patch even
> though it doesn't adding anything useful until the next patch comes
> in. The only rule is that the code compiles and works.
>    
OK, let's discuss it again later. Currently we have agreed on splitting 
out the expire_time and failover server count code.
>> If you have any ideas on how to split the patch, I am ready to discuss
>> them and implement if needed.
>>      
> I'd really like to see an explanation of the re-connection
> code, if you can provide a new patch as requested above I will be able
> to better evaluate the re-connection logic.
>    
The reconnect logic can be outlined as follows:

1. The operation starts with sdap_id_op_connect to connect to the server:

    - if a cached connection is available, it is used
    - if no connection attempt is in progress, a new connection is
    started
    - otherwise the sdap_id_op is put on the queue waiting for the
    connect to complete

2. When the connection to the LDAP server completes (see the sketch
after this outline):

    - if the connection is successful, all sdap_id_op waiting for the
    connect are notified
    - if the connection failed and there are no more servers to try, the
    backend is put offline and all sdap_id_op are notified of the failure
    - otherwise a reconnect retry is attempted for every sdap_id_op that
    has not exceeded its retry limit
    - all other sdap_id_op are notified of the connection failure

3. When the operation is complete, sdap_id_op_done is called:

    - the connection is released from the operation
    - if the operation succeeded - all done
    - if the operation failed and the retry limit is not exceeded,
    sdap_id_op_done suggests a retry and the operation returns to step 1
    - if the operation failed and the retry limit is exceeded, an error
    is reported
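
The notification in step 2 is the only place where the notify_lock 
mentioned earlier matters. A sketch with illustrative names, extending 
the connection structure sketched earlier with a waiting_ops queue 
(retry_connect() is a hypothetical helper, not the patch's API):

    struct sdap_id_op {
        struct sdap_id_op *next;                  /* wait-queue link */
        int retries;
        int max_retries;                          /* 2 * server count */
        void (*connect_cb)(void *pvt, int error);
        void *pvt;
    };

    struct sdap_id_connection {
        struct sdap_id_op *waiting_ops;  /* ops queued on this connect */
        int notify_lock;
        /* ...plus the ops_in_use/is_cached fields sketched earlier */
    };

    void retry_connect(struct sdap_id_op *op);              /* step 1 */
    void sdap_id_release_connection(struct sdap_id_connection *conn);

    void sdap_id_connection_notify(struct sdap_id_connection *conn,
                                   int error)
    {
        struct sdap_id_op *op;

        conn->notify_lock++;        /* block release during callbacks */
        while ((op = conn->waiting_ops) != NULL) {
            conn->waiting_ops = op->next;         /* pop the queue */
            if (error != 0 && op->retries++ < op->max_retries) {
                retry_connect(op);   /* hypothetical: back to step 1 */
            } else {
                op->connect_cb(op->pvt, error);
            }
        }
        conn->notify_lock--;
        sdap_id_release_connection(conn); /* free if no longer in use */
    }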

The common reconnect logic implementation is in:

    src/providers/ldap/ldap_common.c
    src/providers/ldap/ldap_common.h

while its usage is in:

    src/providers/ipa/ipa_access.c
    src/providers/ldap/ldap_id.c
    src/providers/ldap/ldap_id_enum.c


> Simo.
>    

Regards, Eugene
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: sssd-git20090401-gss.patch
URL: <https://lists.fedorahosted.org/pipermail/sssd-devel/attachments/20100401/4a160cf1/attachment.ksh>

