[SSSD] Patch to fix LDAP ID backend GSSAPI credential expired messages

Eugene Indenbom eindenbom at gmail.com
Tue Apr 6 15:23:35 UTC 2010


Dear Simo and Stephen,

I want to let you know that I haven't dropped the work on the patch. I 
had other business to attend to last week, so I returned to the patch 
only on Monday.

I have already implemented all the changes we discussed and am 
currently testing them. Unfortunately, I have found a severe problem in 
the sdap_handle destruction sequence. It can be reproduced as follows:

1. There is an active sdap_op;
2. The connection breaks midway;
3. sdap_process_result is called and releases the broken connection 
using sdap_handle_release;
4. sdap_handle_release starts calling the callbacks of all active 
sdap_ops;
5. Inside its callback, an sdap_op destroys the sdap_handle;
6. When the callback returns control to sdap_handle_release, the 
sdap_handle is already deallocated, so further actions either assert or 
segfault the backend (see the sketch below).
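
In essence the release path does something like the following (a 
simplified sketch from memory, not a verbatim excerpt of the sssd 
source):

    static void sdap_handle_release(struct sdap_handle *sh)
    {
        while (sh->ops != NULL) {       /* each callback unlinks its op */
            struct sdap_op *op = sh->ops;

            /* The callback may run arbitrary request code, including
             * talloc_free(sh); after that, the loop condition above
             * dereferences freed memory. */
            op->callback(op, NULL, EIO, op->data);
        }
    }

The direction I am considering is a "releasing" guard flag on 
sdap_handle, so that a destruction requested from inside a callback is 
deferred until the callback loop has unwound.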

The problem exists in sssd-1.1.1.1 as well as in my patched version.
I am currently working on a solution, which will hopefully be 
available tomorrow.

Eugene

On 04/01/2010 05:20 PM, Simo Sorce wrote:
> On Thu, 01 Apr 2010 15:05:03 +0400
> Eugene Indenbom <eindenbom at gmail.com> wrote:
>
>    
>> Hi Simo,
>>
>> On 03/31/2010 11:08 PM, Simo Sorce wrote:
>>      
>>> Also given the complexity of the patch can you use the --patience
>>> switch to git format-patch and resubmit it, preferably rebased on
>>> top of master, but if it is too much work just tell me on top of
>>> which commit you created it so I can simply rebase my test branch
>>> and apply it w/o issues.
>>> It should make things much more readable; unfortunately it seems
>>> the patch does not apply on top of master now, so reviewing it
>>> properly is a bit hard.
>>>
>>>        
>> The git version of the patch was based on the
>> 80c8a4f94d54b23bce206fdd75ff2648977ce271 parent. The original version
>> was based on the 1.1.0 release. I have rebased the patch onto the
>> latest version (7acaaa6c6563cf3b8ab20bf6431898d20d735842) and
>> attached it. The format is git, at least my git thinks so.
>>      
> Please use this command:
>
> git format-patch -1 --patience <commit id>
>
> git diff is a horrible format as it can't be used correctly with git am
>
>    
>> NB: I am not getting impatient. Things seem to move quite fast.
>> My English betrays me here and there, leaving a lot of room for
>> misunderstanding.
>>      
>>>> The patch can be theoretically split into 3 parts:
>>>> 1. Changes to ldap_child related to returned ticket expiration
>>>> date;
>>>>          
>>> This should be a separate patch.
>>>
>>>        
>> Then it should be applied first. The changes to return the ticket
>> expiration date are in:
>>
>>      src/providers/ldap/ldap_child.c
>>      src/providers/ldap/sdap.h
>>      src/providers/ldap/sdap_async.h (first hunk)
>>      src/providers/ldap/sdap_async_connection.c (all but last hunks)
>>      src/providers/ldap/sdap_async_private.h
>>      src/providers/ldap/sdap_child_helpers.c
>>
>> I will create a separate patch for them next time.
>>      
>>>> 2. Changes to the failover subsystem needed to return the number
>>>> of servers registered for failover;
>>>>
>>>>          
>>> I am trying to understand the reason for this.
>>> Why should the retries depend on server availability?
>>> I would expect to retry once per server for as long as there are
>>> servers available. Can you explain why you think you should
>>> calculate server availability in advance?
>>>
>>> Just FYI, the problem is that we might not know in advance: in the
>>> future we plan to add support for reading SRV records to determine
>>> the list of servers, so the number of available servers may change
>>> without notice.
>>>        
>> That's exactly the reason I have added methods to get the number of
>> servers. The number of connection retries for each operation is
>> calculated as two times the number of servers:
>>
>>      - one retry per server for a broken connection
>>      - one retry per server for a failed connection attempt
>>
>> This can be illustrated by a typical failure scenario:
>>
>>      - the operation reuses a cached connection that is already broken
>> because the LDAP service (not the computer) is down
>>      - so the first operation attempt detects that the cached
>> connection is broken
>>      - the second operation attempt determines that the LDAP port is
>>      down on this server (the failover mechanics will retry the last
>>      used server)
>>      - on the third attempt the operation connects to the next
>>      failover server
>>      - if this server also goes down, the scenario repeats.
>>
>> Please note that if the failover mechanics report that there are no
>> more servers to try (see the last hunk in the operation attempt), the
>> operation is stopped altogether and the backend is put offline.
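>>
>> A sketch of the calculation (the accessor name here is made up; the
>> real one is among the methods added in src/providers/fail_over.c):
>>
>>      /* one retry per server for a broken cached connection, plus one
>>       * retry per server for a failed connect attempt */
>>      max_retries = 2 * be_fo_get_server_count(ctx->be, "ldap");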
>>      
> Sorry, but I still fail to see why you need the count.
> You just need to stop when the failover code tells you there are no
> more servers anyway. Why make it more complicated by calculating a
> number we don't really need?
>
>    
>> The changes to get the number of failover servers are in:
>>
>>      src/providers/data_provider_fo.c
>>      src/providers/dp_backend.h
>>      src/providers/fail_over.c
>>      src/providers/fail_over.h
>>
>>      
>>>> 3. Changes to LDAP ID backend connection and retry logic.
>>>>
>>>> As you can see, the first two items are really small and absolutely
>>>> pointless without the last.
>>>>
>>>> The reason why the changes to the LDAP ID backend connection and
>>>> retry logic must go together is very simple: the old logic relies
>>>> on the gsh member of sdap_id_ctx, while in the new logic there is
>>>> no such member.
>>>>
>>>> The reason why gsh needs to go away is as follows:
>>>> 1. gsh enforces that there will be one and only one connection to
>>>> DS;
>>>>          
>>> Yes, this is completely intentional. Hence the questions.
>>>
>>>
>>>        
>>>> 2. When a connection is about to expire, we cannot use it for a new
>>>> request, as it would expire halfway through;
>>>>
>>>>          
>>> ok
>>>
>>>
>>>        
>>>> 3. But at the same time the connection could still be busy with a
>>>> previous request;
>>>>
>>>>          
>>> I am not sure why this would be relevant though.
>>>
>>>
>>>        
>>>> 4. Therefore we have to make a new connection and close the old
>>>> one as soon as the requests using it are finished.
>>>>
>>>>          
>>> Yes, but this can be easily done with a destructor attached to the
>>> queue code within the sdap_handle; that's why the ops list is in
>>> there.
>>>
>>>
>>>        
>> The destructor is called whenever talloc_zfree(ctx->gsh) is called.
>> This is currently done in at least 15 places. And as there is no
>> reference counting for requests using the connection, all other
>> operations are immediately aborted.
>>      
> I know; my proposal was to remove the talloc_zfree of ctx->gsh.
> Instead, add a function that marks the connection as "free when queue is
> empty", setting a boolean gsh->free_empty, and then change the code
> that handles the queue to free gsh when the last call is done.
> At the same time, simply "unlink" gsh by setting ctx->gsh = NULL.
>
> This way existing operations will continue to use the context (we have
> a copy of it in sdap_op) while any new request will use the new server.
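>
> Roughly, as an untested sketch (the function name is only for
> illustration):
>
>     static void sdap_handle_retire(struct sdap_id_ctx *ctx)
>     {
>         struct sdap_handle *old = ctx->gsh;
>
>         ctx->gsh = NULL;            /* new requests get a new connection */
>         if (old == NULL) return;
>
>         if (old->ops == NULL) {
>             talloc_free(old);       /* queue already empty, free now */
>         } else {
>             old->free_empty = true; /* freed when the last op completes */
>         }
>     }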
>
>    
>> The main feature of my patch is reference counting of
>> sdap_id_connection by sdap_id_op, to ensure that a connection is
>> closed as soon as all its consumers are done and the connection is no
>> longer cached.
>>      
> Yes, I am just disagreeing on the way it is done :)
>
>    
>>>>> The goal of SSSD is to never use more than one connection at a
>>>>> time for account information. So your patch is kind of changing
>>>>> our fundamental goal by allowing multiple connections. We need to
>>>>> carefully evaluate that part.
>>>>>
>>>>>            
>>>> As I have explained above, the only time we have more than one
>>>> connection to the DS is when the old connection is about to expire
>>>> and we need to open a new one. So when the ticket lifetime is long
>>>> enough (as it is in a normal Kerberos configuration) there will be
>>>> no more than 2 connections open.
>>>>
>>>>          
>>> I still don't see why we should remove gsh from sdap_ctx. gsh is
>>> "the connection you should currently use". We probably need to be
>>> able to tell if a re-connection is happening already and delay new
>>> requests until that is done.
>>>
>>> I think this would be something similar to the pooling code we have
>>> in the nss daemon where we check if we are already performing a
>>> specific request and use a queue to wait for a reply.
>>>
>>>        
>> The gsh member of sdap_id_ctx is replaced with a cached_connection
>> member having the same semantics: the connection that will be used by
>> the next operation requesting a connection. Unlike the gsh member,
>> cached_connection is not accessed and modified directly by 3 source
>> files, but rather managed in an opaque way by the sdap_id_op_connect
>> and sdap_id_op_done methods.
>>      
> The point is I don't think we want to keep "cached connections"
> around; that's why it looks like overkill to me.
>
>    
>>>>> I see you started passing around sdap_id_op. The memory hierarchy
>>>>> around sdap_id_op is very delicate and required a lot of very
>>>>> careful handling to avoid having it disappear under our feet at
>>>>> the wrong time. It is meant to represent a single ldap operation
>>>>> tied to a specific ldap context, any changes to its use should be
>>>>> in a separate patch that I want to review carefully. But ideally
>>>>> sdap_id_op is opaque to most of the code and is internal to the
>>>>> processing of replies from the openldap libraries. It should never
>>>>> be used out of this context.
>>>>>
>>>>>            
>>> I have to caution here that I confused sdap_id_op and sdap_op when I
>>> wrote this remark. It looks like sdap_id_op is a queue for requests,
>>> and that goes in the right direction, but it looks a bit overly
>>> complex. Yet I have not fully analyzed the patch because I can't
>>> apply it.
>>>
>>>
>>>        
>>>> I agree that both sdap_id_op and sdap_id_connection are opaque
>>>> types. You can move the definitions to ldap_common.c from headers.
>>>> Moreover, even the declaration of sdap_id_connection can be visible
>>>> only to ldap_common.c.
>>>>
>>>>          
>>> The point I am trying to make here is that I don't like
>>> sdap_id_op_handle(), as that means you are probably using an old
>>> connection. Newer calls should always use new connections, so what
>>> is the point of trying to fetch an old handle?
>>>
>>>        
>> There are two primary methods in the new connection logic:
>> sdap_id_op_connect and sdap_id_op_done. They define the scope of a
>> single operation, during which the LDAP connection is in use. The
>> operation may consist of a single query to the server, as it does now,
>> or it may span several interrelated queries that must run on the same
>> server. So if any of the queries fails, we have to redo the complete
>> operation on another server.
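>>
>> The intended usage pattern is roughly the following (hand-written
>> illustration; the exact signatures in the patch differ):
>>
>>      ret = sdap_id_op_connect(state->op, ...);  /* acquire a connection */
>>      /* ... run one or more queries over the same connection ... */
>>      ret = sdap_id_op_done(state->op, ret, &retry);
>>      if (retry) {
>>          /* redo the complete multi-query operation, possibly
>>           * against another failover server */
>>      }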
>>      
> No, I don't think it really makes sense to reason along these lines.
> We have three cases here: 1) the gss ctx is about to expire and we
> reconnect to the same server; 2) the connection already expired and
> was killed; and 3) the server has really gone down and we need to
> connect to a new one.
>
> In case 1) we have no need to perform the following operations on the
> same connection; we can simply perform them on the new one, as they
> are against the same server anyway.
>
> In case 2) the former connection is dead; there is no reason to try
> against it, operations will simply fail.
>
> In case 3) the server is down, so the previous connection is dead and
> we are in the same situation as in point 2.
>
> So in all cases trying to reuse the previous connection is an
> unnecessary complication. We can simply always use the new one.
> Even if acquiring the new one slows down an operation, this happens
> rarely, and I am willing to accept a small delay once in a
> while (waiting for the reconnection to complete) in order to keep the
> code much simpler.
>
>    
>> sdap_id_op represents a higher level of business logic than
>> sdap_op: sdap_op is a single request to a single LDAP server, while
>> sdap_id_op represents a retry-capable, possibly multistage operation
>> targeted at a failover cluster of LDAP servers.
>>      
> Please rename sdap_id_op to something like sdap_connection; all
> it does is represent a connection operation.
>
>    
>> I agree with your concern about possible repeated use of a stale
>> connection. I will change the code to release the connection in
>> sdap_id_op_done. So the next time sdap_id_op_connect is called, either
>> a cached or a new connection will be returned.
>>      
> We should never have a "cached connection", I think. See the above
> arguments. Without a pool of cached connections the code should become
> much simpler.
>
>    
>>> Btw, can you avoid comments such as /* sdap handle */ when defining a
>>> sdap_handle in a structure? It looks pointless to me. Comments are
>>> good, but only when they tell you something that is not already clear
>>> from the code at hand :)
>>>
>>>        
>> The reason I have put the comment on this member is that all other
>> members needed a comment, and I prefer to have comments on all members
>> rather than on all but one. Anyway, that's a matter of coding style.
>> If you find the comment superfluous, let's remove it.
>>
>>      
>>>> I was not really sure what coding style is used in the project.
>>>> There are files coded quite differently from each other.
>>>>
>>>> On the other hand, I do not see why you find the handling of these
>>>> structures delicate:
>>>>
>>>> 1. sdap_id_op is owned by the operation state (e.g. by
>>>> global_enum_state), so it will be automatically destroyed as the
>>>> operation (tevent request) is completed
>>>>
>>>>          
>>> Why is sdap_id_op_connect() not a tevent request?
>>> Passing around callbacks is usually frowned upon, as tevent
>>> requests are the way we want to handle any async event if at all
>>> possible.
>>>
>>> Unless there is a *very* good reason why it is not a tevent_req,
>>> this is one of the things that needs to change before the patch can
>>> be accepted.
>>>
>>> I know that at first it seems like it doesn't matter, but trust me,
>>> there are almost 4 years of attempts in the samba community to come
>>> up with the tevent_req style, for a number of subtle and painful
>>> reasons. We have gone through at least 4 different ways to do
>>> continuations, and tevent_req is the one that finally makes things
>>> bearable. The style is important both formally and substantially,
>>> for too many reasons to explain in this mail (you can jump on IRC
>>> in the #freeipa channel if you want to discuss it).
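>>>
>>> For reference, the canonical shape is (generic tevent_req
>>> boilerplate, not code from the patch; the dp_error out parameter is
>>> just an example):
>>>
>>>      struct tevent_req *sdap_id_op_connect_send(TALLOC_CTX *memctx,
>>>                                                 struct tevent_context *ev,
>>>                                                 struct sdap_id_op *op);
>>>      /* inside _send(): tevent_req_create(), start a subrequest,
>>>       * tevent_req_set_callback(subreq, sdap_id_op_connect_done, req) */
>>>      int sdap_id_op_connect_recv(struct tevent_req *req, int *dp_error);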
>>>
>>>        
>> OK, that will be on my TODO list. I'll change that as soon as you
>> finish the first review pass.
>>      
>>>> 2. sdap_id_connection is owned by sdap_id_ctx, and the logic of its
>>>> life-cycle boils down to a single method,
>>>> sdap_id_release_connection. The connection is released when:
>>>>      a) there is no operation using it;
>>>>      b) it is not cached;
>>>>      c) it is not in the connection notify loop (notify_lock == 0).
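>>>>
>>>> In code this boils down to something like (my paraphrase; apart
>>>> from notify_lock, the member names are guesses):
>>>>
>>>>      static void sdap_id_release_connection(struct sdap_id_connection *conn)
>>>>      {
>>>>          if (conn->ops == NULL && !conn->cached &&
>>>>              conn->notify_lock == 0) {
>>>>              talloc_free(conn);
>>>>          }
>>>>      }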
>>>>
>>>> I hope I have explained why the changes were made the way they
>>>> were.
>>>>
>>>>          
>>> What I don't understand here is why you added
>>> sdap_id_connection at all; sdap_handle is meant to represent a
>>> connection, so why add yet another structure here?
>>>
>>>        
>> sdap_id_connection represents a connection attempt and, later, an
>> established connection at the sdap_id_ctx level. It corresponds to
>> sdap_handle the same way as struct be_svc_data in
>> data_provider_fo.c corresponds to struct fo_service.
>>      
> Please don't take be_svc as an example of good code style. That
> interface is what it is for historical reasons (it used to be a
> synchronous interface that we used before standardizing on tevent_req,
> and was influenced by the fact we needed to interface with dbus, which
> has a different style), but it is in no way something to emulate.
>
>    
>> It simply
>> holds the data required by sdap_id_ctx to keep track of the
>> connection. And as this structure is completely opaque and hidden
>> from the outside world, this should not be a problem. Probably a
>> better name for it would be sdap_id_handle_data.
>>      
> sdap_connection_data perhaps, but I still think you should have all you
> need in sdap_handle, and make that more opaque if necessary.
>
>    
>>> Also I *really* don't like the fact that sdap_id_connection has
>>> members like: connect_req, expire_timer
>>>
>>>        
>> sdap_id_connection owns both (connect_req and expire_timer).
>>      
> It's not a matter of hierarchies; it is just a sign something is not
> good. It's very rare that saving a req pointer is a good thing.
>
>    
>> It is their TALLOC_CTX, so there is no way they can be deleted
>> without sdap_id_connection knowing it. Also, while the talloc library
>> removes the need to keep all the pointers we have allocated, I'd
>> rather keep them in order to be able to cancel the timer and the
>> request if needed.
>>      
> The timer should *always* be allocated on the request, so that when
> the request is finished it is freed, and the timer with it.
>
> If your request does not complete in a short time, then there is
> definitely something *very* wrong here. Requests must not be kept
> around; they are not meant to keep state. They are meant to carry out
> a very specific operation and to be released when the operation is
> completed. If state needs to survive the request, it must be returned
> by the _recv() function and stolen onto an appropriate memory context.
>
> This is why requests are not kept around; there isn't a case where you
> may want to free a request. Either a timeout (eventually in a parent
> request) kicks in and an error is returned, causing the whole hierarchy
> of requests to be ultimately freed, or the request completes, returns
> data in _recv() and is freed.
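>
> In code terms it is just the standard tevent idiom (handler name made
> up):
>
>     /* 'req' is the talloc parent: when the request is freed, the
>      * timer event goes away with it, no stored pointer needed */
>     te = tevent_add_timer(ev, req, tv, connect_timeout_handler, req);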
>
>    
>>> The only way to access a request should always be through the
>>> tevent_req_callback_data() function, because that guarantees the
>>> request is around (it's the parent of the subreq you are in).
>>> If you store it in a structure, you can generally end up trying to
>>> access freed memory during cleanup operations.
>>>
>>> Storing these pointers usually means the code is wrong. You may have
>>> been careful and taken it into account, but in general I prefer to
>>> avoid it, because even if you were careful enough, almost certainly
>>> the next person to touch the code will screw it up.
>>> I am still reading through it, but these are normally signs that the
>>> architecture needs to be heavily adjusted.
>>>
>>>
>>>        
>> See above. The ownership goes the other way around: connect_req
>> __is__ a sub-request of the connect operation, not vice versa.
>>      
> This is not the point.
>
>    
>>>> I really do not see a way to split the patch, and I would very much
>>>> appreciate some advice on how to make it more readable and easier
>>>> to understand.
>>>>
>>>>          
>>> Each major architectural change should be in a separate patch, even
>>> though it doesn't add anything useful until the next patch comes
>>> in. The only rule is that the code compiles and works.
>>>
>>>        
>> OK, let's discuss it again later. Currently we have agreed on
>> splitting out the expire_time and failover server count code.
>>      
> While I think the failover modifications must be split out, I also
> think they are unnecessary. I am still not convinced they are useful,
> at least not in the context of the patch.
>
>    
>>>> If you have any ideas on how to split the patch, I am ready to
>>>> discuss them and implement if needed.
>>>>
>>>>          
>>> I'd really like to see an explanation of the re-connection
>>> code. If you can provide a new patch as requested above, I will be
>>> able to better evaluate the re-connection logic.
>>>
>>>        
>> The reconnect logic can be outlined as follows:
>>
>> 1. The operation starts with sdap_id_op_connect to connect to the server:
>>
>>      - if a cached connection is available, it is used
>>      - if no connection attempt is in progress, a new one is started
>>      - otherwise the sdap_id_op is put on the queue, waiting for the
>>      connect to complete
>>      
> ack, although s/cached/existing/
>
>    
>> 2. When the connection to the LDAP server completes:
>>
>>      - if the connection is successful, all sdap_id_ops waiting for the
>> connect are notified
>>      - if the connection failed and there are no more servers to try,
>>      the backend is put offline and all sdap_id_ops are notified of
>>      the failure
>>      - otherwise a reconnect retry is attempted on all sdap_id_ops that
>>      have not exceeded the retry limit
>>      - all other sdap_id_ops are notified of the connection failure
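>>
>> In outline (hypothetical helper names; the real code is in
>> ldap_common.c):
>>
>>      if (connect_ret == EOK) {
>>          notify_waiting_ops(conn, EOK);   /* wake all queued sdap_id_ops */
>>      } else if (no_more_servers) {
>>          be_mark_offline(ctx->be);        /* backend goes offline */
>>          notify_waiting_ops(conn, EIO);
>>      } else {
>>          retry_ops_within_limit(conn);    /* the rest are notified of
>>                                            * the failure */
>>      }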
>>      
> Ack, except for the retry limit; the only 2 factors that should matter
> here are:
>
> 1) the operation timeout kicked in, so the operation is aborted.
> This timeout should be a timed event attached to the connection
> request, and cleared when the request is terminated and the memory
> released.
>
> 2) the failover code reported that there are no more servers to try.
>
> There is no other retry limit that really makes sense to me.
>
> I was also thinking that we should probably have a task that
> refreshes/idles a connection, so that reconnections can happen
> completely outside of normal call timeouts, which will hopefully
> reduce the latency of calls. The exception is the case when we decide
> to terminate the connection because we are idle; in that case the
> first connection after the idle disconnect will suffer connection
> establishment latency.
> But this should be a follow-up, separate patch.
>
>    
>> 3. When the operation is complete, sdap_id_op_done is called:
>>
>>      - the connection is released from the operation
>>      - if the operation succeeds - all done
>>      - if the operation succeeds and the retry limit is not exceeded,
>>      sdap_id_op_done suggests a retry and the operation returns to
>> step 1
>>      - if the operation succeeds and the retry limit is exceeded, an
>> error is reported
>>      
> Uhmm some "succeeds" here look suspicious, I guess the last 2 are
> actually "failed" ?
>
>    
>> The common reconnect logic implementation is in:
>>
>>      src/providers/ldap/ldap_common.c
>>      src/providers/ldap/ldap_common.h
>>
>> while its usage is in:
>>
>>      src/providers/ipa/ipa_access.c
>>      src/providers/ldap/ldap_id.c
>>      src/providers/ldap/ldap_id_enum.c
>>      
> Yep, the placement of the functions is correct, though I think we need
> some quite big changes based on my comments above.
>
> Simo.
>
>    



