Hi Simo,
I have to admit that the patch is really quite big and, actually, it has by far exceeded the size and time limits I would normally apply to patches for third-party components.
The patch can theoretically be split into 3 parts:
1. Changes to ldap_child related to the returned ticket expiration date;
2. Changes to the failover subsystem needed to return the number of servers registered for failover;
3. Changes to the LDAP ID backend connection and retry logic.
As you can see, the first two items are really small and absolutely pointless without the last.
The reason why the changes to the LDAP ID backend connection and retry logic must go together is very simple: the old logic relies on the gsh member of sdap_id_ctx, while the new logic has no such member.
The reason why gsh needs to go away is as follows:
1. gsh enforces that there will be one and only one connection to the DS;
2. When a connection is about to expire, we cannot use it for a new request, as it would expire halfway through;
3. But at the same time the connection could still be busy with a previous request;
4. Therefore we have to make a new connection and close the old one as soon as the requests using it are finished.
The goal of SSSD is to never use more than one connection at a time for account information. So your patch is kind of changing our fundamental goal by allowing multiple connections. We need to carefully evaluate that part.
As I have explained above, the only time we have more than one connection to the DS is when the old connection is about to expire and we need to open a new one. So when the ticket lifetime is long enough (as it is in a normal Kerberos configuration) there will be no more than 2 connections open.
I see you started passing around sdap_id_op. The memory hierarchy around sdap_id_op is very delicate and required a lot of very careful handling to avoid having it disappear under our feet at the wrong time. It is meant to represent a single ldap operation tied to a specific ldap context; any changes to its use should be in a separate patch that I want to review carefully. But ideally sdap_id_op is opaque to most of the code and is internal to the processing of replies from the openldap libraries. It should never be used out of this context.
I agree that both sdap_id_op and sdap_id_connection are opaque types. You can move the definitions from the headers to ldap_common.c. Moreover, even the declaration of sdap_id_connection can be made visible only to ldap_common.c.
I was not really sure what coding style is used in the project; there are files coded quite differently from each other.
On the other hand I do not see why you find handling of these structures delicate:
1. sdap_id_op is owned by the operation state (e.g. by global_enum_state), so it is automatically destroyed when the operation (tevent request) completes.
2. sdap_id_connection is owned by sdap_id_ctx, and the logic of its life cycle boils down to a single method, sdap_id_release_connection. A connection is released when: a) there is no operation using it; b) it is not cached; c) it is not in the connection notify loop (notify_lock == 0).
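For illustration, the release rule could look roughly like this (a minimal sketch; the field names ops, cached and notify_lock are assumptions, not necessarily the names used in the patch):

#include <talloc.h>
#include <stdbool.h>

struct sdap_id_connection {
    struct sdap_id_op *ops;  /* operations currently using this connection */
    bool cached;             /* still published as the cached connection */
    int notify_lock;         /* > 0 while inside the connect notify loop */
};

void sdap_id_release_connection(struct sdap_id_connection *conn)
{
    /* free only when nothing references the connection anymore */
    if (conn->ops == NULL && !conn->cached && conn->notify_lock == 0) {
        talloc_free(conn);
    }
}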
I hope I have explained why the changes were made the way they were.
I really do not see a way to split the patch and would very much appreciate any advice on how to make it more readable and easier to understand.
If you have any ideas on how to split the patch, I am ready to discuss them and implement them if needed.
Regards, Eugene
On Mon, 29 Mar 2010 19:41:57 +0400 Eugene Indenbom eindenbom@gmail.com wrote:
Hi Simo,
I have to admit that the patch is really quite big and, actually, it has by far exceeded the size and time limits I would normally apply to patches for third-party components.
The patch can theoretically be split into 3 parts:
1. Changes to ldap_child related to the returned ticket expiration date;
2. Changes to the failover subsystem needed to return the number of servers registered for failover;
3. Changes to the LDAP ID backend connection and retry logic.
As you can see, the first two items are really small and absolutely pointless without the last.
The reason why the changes to the LDAP ID backend connection and retry logic must go together is very simple: the old logic relies on the gsh member of sdap_id_ctx, while the new logic has no such member.
The reason why gsh needs to go away is as follows:
1. gsh enforces that there will be one and only one connection to the DS;
2. When a connection is about to expire, we cannot use it for a new request, as it would expire halfway through;
3. But at the same time the connection could still be busy with a previous request;
4. Therefore we have to make a new connection and close the old one as soon as the requests using it are finished.
The goal of SSSD is to never use more than one connection at a time for account information. So your patch is kind of changing our fundamental goal by allowing multiple connections. We need to carefully evaluate that part.
As I have explained above, the only time we have more than one connection to the DS is when the old connection is about to expire and we need to open a new one. So when the ticket lifetime is long enough (as it is in a normal Kerberos configuration) there will be no more than 2 connections open.
I see you started passing around sdap_id_op. The memory hierarchy around sdap_id_op is very delicate and required a lot of very careful handling to avoid having it disappear under our feet at the wrong time. It is meant to represent a single ldap operation tied to a specific ldap context; any changes to its use should be in a separate patch that I want to review carefully. But ideally sdap_id_op is opaque to most of the code and is internal to the processing of replies from the openldap libraries. It should never be used out of this context.
I agree that both sdap_id_op and sdap_id_connection are opaque types. You can move the definitions from the headers to ldap_common.c. Moreover, even the declaration of sdap_id_connection can be made visible only to ldap_common.c.
I was not really sure what coding style is used in the project; there are files coded quite differently from each other.
On the other hand I do not see why you find handling of these structures delicate:
1. sdap_id_op is owned by the operation state (e.g. by global_enum_state), so it is automatically destroyed when the operation (tevent request) completes.
2. sdap_id_connection is owned by sdap_id_ctx, and the logic of its life cycle boils down to a single method, sdap_id_release_connection. A connection is released when: a) there is no operation using it; b) it is not cached; c) it is not in the connection notify loop (notify_lock == 0).
I hope I have explained why the changes were made the way they were.
I really do not see a way to split the patch and would very much appreciate any advice on how to make it more readable and easier to understand.
If you have any ideas on how to split the patch, I am ready to discuss them and implement them if needed.
Hi Eugene,
There are still a few things that do not resonate here, but at this point I will have to go through the patch to be able to give back proper comments. I will try to do that as soon as I can.
Simo.
On 03/30/2010 10:58 PM, Simo Sorce wrote:
Hi Eugene,
There are still a few things that do not resonate here, but at this point I will have to go through the patch to be able to give back proper comments. I will try to do that as soon as I can.
Simo.
Hi Simo,
As the SSSD 1.1.1 release is coming, I suggest postponing the patch review.
After the SSSD 1.1.1 release I shall:
- Merge the patch with the SSSD 1.1.1 changes
- Make sdap_id_op and sdap_id_connection opaque types as you suggested
- Unit test the changes made
- Put the result into preproduction testing in my domain
- Resubmit the patch with a more detailed description of the changes so it would be easier to review
Regards, Eugene
On Wed, 31 Mar 2010 09:35:34 +0400 Eugene Indenbom eindenbom@gmail.com wrote:
On 03/30/2010 10:58 PM, Simo Sorce wrote:
Hi Eugene,
There are still a few things that do not resonate here, but at this point I will have to go through the patch to be able to give back proper comments. I will try to do that as soon as I can.
Simo.
Hi Simo,
As the SSSD 1.1.1 release is coming, I suggest postponing the patch review.
No real need; 1.1 was branched quite some time ago, and we can commit to master at any time.
After the SSSD 1.1.1 release I shall:
- Merge the patch with the SSSD 1.1.1 changes
- Make sdap_id_op and sdap_id_connection opaque types as you suggested
- Unit test the changes made
- Put the result into preproduction testing in my domain
- Resubmit the patch with a more detailed description of the changes so it would be easier to review
This would be good, but I am going to comment on the basics of your proposed change first, so that you do not do work that we may later decide is not in the right direction.
Simo.
On Mon, 29 Mar 2010 19:41:57 +0400 Eugene Indenbom eindenbom@gmail.com wrote:
Hi Simo,
I have to admit that the patch is really quite big and, actually, it has by far exceeded the size and time limits I would normally apply to patches for third-party components.
The patch can theoretically be split into 3 parts:
1. Changes to ldap_child related to the returned ticket expiration date;
This should be a separate patch.
2. Changes to the failover subsystem needed to return the number of servers registered for failover;
I am trying to understand the reason for this. Why should the retries depend on server availability? I would expect to retry once per server while there are servers available. Can you explain why you think you should try to calculate server availability in advance?
Just FYI, the problem is that we might not know in advance; in the future we plan to add support for reading SRV records to determine the list of servers, so the number of available servers may change without notice.
3. Changes to the LDAP ID backend connection and retry logic.
As you can see, the first two items are really small and absolutely pointless without the last.
The reason why the changes to the LDAP ID backend connection and retry logic must go together is very simple: the old logic relies on the gsh member of sdap_id_ctx, while the new logic has no such member.
The reason why gsh needs to go away is as follows:
1. gsh enforces that there will be one and only one connection to the DS;
Yes, this is completely intentional. Hence the questions.
2. When a connection is about to expire, we cannot use it for a new request, as it would expire halfway through;
ok
3. But at the same time the connection could still be busy with a previous request;
I am not sure why this would be relevant though.
4. Therefore we have to make a new connection and close the old one as soon as the requests using it are finished.
Yes, but this can be easily done with a destructor attached to the queue code within the sdap_handle; that's why the ops list is in there.
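For illustration, the destructor approach looks roughly like this (a simplified sketch, not the actual SSSD code):

#include <talloc.h>

struct sdap_op;                      /* queued requests on this handle */

struct sdap_handle {
    struct sdap_op *ops;             /* list head of pending operations */
};

static int sdap_handle_destructor(struct sdap_handle *sh)
{
    /* runs on talloc_free(sh): here the real code would close the fd
     * and complete or fail any ops still sitting on the queue */
    return 0;
}

struct sdap_handle *sdap_handle_create(TALLOC_CTX *memctx)
{
    struct sdap_handle *sh = talloc_zero(memctx, struct sdap_handle);
    if (sh == NULL) return NULL;
    talloc_set_destructor(sh, sdap_handle_destructor);
    return sh;
}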
The goal of SSSD is to never use more than one connection at a time for account information. So your patch is kind of changing our fundamental goal by allowing multiple connections. We need to carefully evaluate that part.
As I have explained above, the only time we have more than one connection to the DS is when the old connection is about to expire and we need to open a new one. So when the ticket lifetime is long enough (as it is in a normal Kerberos configuration) there will be no more than 2 connections open.
I still don't see why we should remove gsh from sdap_ctx. gsh is "the connection you should currently use". We probably need to be able to tell if a re-connection is happening already and delay new requests until that is done.
I think this would be something similar to the pooling code we have in the nss daemon where we check if we are already performing a specific request and use a queue to wait for a reply.
I see you started passing around sdap_id_op. The memory hierarchy around sdap_id_op is very delicate and required a lot of very careful handling to avoid having it disappear under our feet at the wrong time. It is meant to represent a single ldap operation tied to a specific ldap context; any changes to its use should be in a separate patch that I want to review carefully. But ideally sdap_id_op is opaque to most of the code and is internal to the processing of replies from the openldap libraries. It should never be used out of this context.
I have to caution here that I confused sdap_id_op and sdap_op when I wrote this remark. It looks like sdap_id_op is a queue for requests, and that goes in the right direction, but it looks a bit overly complex. Yet I have not fully analyzed the patch because I can't apply it.
I agree that both sdap_id_op and sdap_id_connection are opaque types. You can move the definitions from the headers to ldap_common.c. Moreover, even the declaration of sdap_id_connection can be made visible only to ldap_common.c.
The point I am trying to make here is that I don't like sdap_id_op_handle(), as that means you are probably using an old connection. Newer calls should always use new connections, so what is the point of trying to fetch an old handle?
Btw can you avoid comments like /* sdap handle */ when defining a sdap_handle in a structure? It looks pointless to me. Comments are good, but only when they tell you something that is not already clear from the code at hand :)
Also, given the complexity of the patch, can you use the --patience switch to git format-patch and resubmit it, preferably rebased on top of master? If it is too much work, just tell me on top of which commit you created it so I can simply rebase my test branch and apply it without issues. It should make things much more readable, as unfortunately it seems the patch does not apply on top of master now, so reviewing it properly is a bit hard.
I was not really sure what coding style is used in the project; there are files coded quite differently from each other.
On the other hand I do not see why you find handling of these structures delicate:
1. sdap_id_op is owned by the operation state (e.g. by global_enum_state), so it is automatically destroyed when the operation (tevent request) completes.
Why is sdap_id_op_connect() not a tevent request? Passing around callbacks is usually frowned upon, as tevent requests are the way we want to handle any async event if at all possible.
Unless there is a *very* good reason why it is not a tevent_req, this is one of the things that needs to change before the patch can be accepted.
I know that at first it seems it doesn't matter, but trust me, there are almost 4 years of attempts in the samba community behind the tevent_req style, for a number of subtle and painful reasons. We have gone through at least 4 different ways to do continuations, and tevent_req is the one that finally makes things bearable. The style is important both formally and substantially for too many reasons to explain in this mail (you can jump on IRC in the #freeipa channel if you want to discuss it).
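For reference, the shape of the tevent_req style is roughly this (a minimal, self-contained sketch with illustrative names, not code from the patch):

#include <stdint.h>
#include <talloc.h>
#include <tevent.h>

struct conn_state {
    int result;
};

struct tevent_req *conn_send(TALLOC_CTX *memctx, struct tevent_context *ev)
{
    struct tevent_req *req;
    struct conn_state *state;

    req = tevent_req_create(memctx, &state, struct conn_state);
    if (req == NULL) return NULL;

    /* real code would start async work here and call tevent_req_done()
     * or tevent_req_error() from its completion callback */
    state->result = 0;
    tevent_req_done(req);
    return tevent_req_post(req, ev);   /* completed synchronously */
}

int conn_recv(struct tevent_req *req)
{
    enum tevent_req_state tstate;
    uint64_t err;

    if (tevent_req_is_error(req, &tstate, &err)) {
        return (int)err;
    }
    return 0;
}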
2. sdap_id_connection is owned by sdap_id_ctx, and the logic of its life cycle boils down to a single method, sdap_id_release_connection. A connection is released when: a) there is no operation using it; b) it is not cached; c) it is not in the connection notify loop (notify_lock == 0).
I hope I have explained why the changes were made the way they were.
What I don't understand here is why you added sdap_id_connection at all; sdap_handle is meant to represent a connection, so why add yet another structure here?
Also, I *really* don't like the fact that sdap_id_connection has members like connect_req and expire_timer.
The only way to access a request should always be through the tevent_req_callback_data() function, because that guarantees the request is around (it's the parent of the subreq you are in). If you store it in a structure, you can generally end up trying to access freed memory during clean up operations.
Storing these pointers usually means the code is wrong. You may have been careful and taken it into account, but in general I prefer to avoid it, because even if you were careful enough, almost certainly the next person that will touch the code will screw it up. I am still reading through it, but these are normally signs that the architecture needs to be heavily adjusted.
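For illustration, the canonical pattern looks like this, building on the conn_send()/conn_recv() sketch above (illustrative names):

#include <talloc.h>
#include <tevent.h>

struct tevent_req *conn_send(TALLOC_CTX *memctx, struct tevent_context *ev);
int conn_recv(struct tevent_req *req);

static void parent_step_done(struct tevent_req *subreq);

void parent_start(struct tevent_req *req, struct tevent_context *ev)
{
    /* req becomes both the talloc parent and the callback data of subreq */
    struct tevent_req *subreq = conn_send(req, ev);
    if (tevent_req_nomem(subreq, req)) return;
    tevent_req_set_callback(subreq, parent_step_done, req);
}

static void parent_step_done(struct tevent_req *subreq)
{
    /* safe: req is recovered from subreq itself, not from stored state */
    struct tevent_req *req = tevent_req_callback_data(subreq,
                                                      struct tevent_req);
    int ret = conn_recv(subreq);

    talloc_free(subreq);
    if (ret != 0) {
        tevent_req_error(req, ret);
        return;
    }
    tevent_req_done(req);
}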
I really do not see a way to split the patch and would very much appreciate any advice on how to make it more readable and easier to understand.
Each major architectural change should be in a separate patch, even though it doesn't add anything useful until the next patch comes in. The only rule is that the code compiles and works.
If you have any ideas on how to split the patch, I am ready to discuss them and implement them if needed.
I'd really like to see an explanation of the re-connection code; if you can provide a new patch as requested above, I will be able to better evaluate the re-connection logic.
Simo.
Hi Simo,
On 03/31/2010 11:08 PM, Simo Sorce wrote:
Also, given the complexity of the patch, can you use the --patience switch to git format-patch and resubmit it, preferably rebased on top of master? If it is too much work, just tell me on top of which commit you created it so I can simply rebase my test branch and apply it without issues. It should make things much more readable, as unfortunately it seems the patch does not apply on top of master now, so reviewing it properly is a bit hard.
The git version of the patch was based on parent 80c8a4f94d54b23bce206fdd75ff2648977ce271. The original version was based on the 1.1.0 release. I have rebased the patch onto the latest version (7acaaa6c6563cf3b8ab20bf6431898d20d735842) and attached it. The format is git, at least my git thinks so.
NB: I am not getting impatient; things seem to move quite fast. My English betrays me here and there, leaving a lot of room for misunderstanding.
The patch can be theoretically split into 3 parts:
1. Changes to ldap_child related to the returned ticket expiration date;
This should be a separate patch.
Then it should be applied first. The changes to return the ticket expiration date are in:
src/providers/ldap/ldap_child.c
src/providers/ldap/sdap.h
src/providers/ldap/sdap_async.h (first hunk)
src/providers/ldap/sdap_async_connection.c (all but the last hunk)
src/providers/ldap/sdap_async_private.h
src/providers/ldap/sdap_child_helpers.c
I will create a separate patch for them next time.
2. Changes to the failover subsystem needed to return the number of servers registered for failover;
I am trying to understand the reason for this. Why should the retries depend on server availability? I would expect to retry once per server while there are servers available. Can you explain why you think you should try to calculate server availability in advance?
Just FYI, the problem is that we might not know in advance; in the future we plan to add support for reading SRV records to determine the list of servers, so the number of available servers may change without notice.
That's exactly the reason I have added methods to get the number of servers. The number of connection retries for each operation is calculated as twice the number of servers:
- one retry per server for a broken connection
- one retry for a failed connection attempt
This can be illustrated by a typical failure scenario:
- the operation reuses a cached connection that is already broken, because the LDAP service (not the computer) is down
- so the first operation attempt detects that the cached connection is broken
- the second operation attempt determines that the LDAP port is down on this server (the failover mechanics will retry the last used server)
- on the third attempt the operation connects to the next failover server
- if this server goes down, the scenario repeats.
Please note that if the failover mechanics report that there are no more servers to try (see the last hunk in the operation attempt), the operation is stopped altogether and the backend is put offline.
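For illustration, the retry budget boils down to something like this (a sketch; be_fo_get_server_count stands in for the new accessor, so treat the exact name as an assumption):

#include "providers/dp_backend.h"   /* for struct be_ctx */

/* two attempts per server: one for a broken cached connection and one
 * for a failed connect attempt against the same server */
int max_connection_retries(struct be_ctx *be, const char *service)
{
    int count = be_fo_get_server_count(be, service);
    if (count < 1) {
        count = 1;                  /* always allow at least one attempt */
    }
    return 2 * count;
}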
The changes to get the number of failover servers are in:
src/providers/data_provider_fo.c
src/providers/dp_backend.h
src/providers/fail_over.c
src/providers/fail_over.h
3. Changes to the LDAP ID backend connection and retry logic.
As you can see, the first two items are really small and absolutely pointless without the last.
The reason why the changes to the LDAP ID backend connection and retry logic must go together is very simple: the old logic relies on the gsh member of sdap_id_ctx, while the new logic has no such member.
The reason why gsh needs to go away is as follows:
1. gsh enforces that there will be one and only one connection to the DS;
Yes, this is completely intentional. Hence the questions.
2. When a connection is about to expire, we cannot use it for a new request, as it would expire halfway through;
ok
3. But at the same time the connection could still be busy with a previous request;
I am not sure why this would be relevant though.
4. Therefore we have to make a new connection and close the old one as soon as the requests using it are finished.
Yes, but this can be easily done with a destructor attached to the queue code within the sdap_handle; that's why the ops list is in there.
The destructor is called whenever talloc_zfree(ctx->gsh) is called. This is currently done in at least 15 places. And as there is no reference counting for requests using the connection, all other operations are immediately aborted.
The main feature of my patch is reference counting of sdap_id_connection by sdap_id_op, to ensure that the connection is closed as soon as all the consumers are done and the connection is no longer cached.
The goal of SSSD is to never use more than one connection at a time for account information. So your patch is kind of changing our fundamental goal by allowing multiple connections. We need to carefully evaluate that part.
As I have explained above, the only time we have more than one connection to the DS is when the old connection is about to expire and we need to open a new one. So when the ticket lifetime is long enough (as it is in a normal Kerberos configuration) there will be no more than 2 connections open.
I still don't see why we should remove gsh from sdap_ctx. gsh is "the connection you should currently use". We probably need to be able to tell if a re-connection is happening already and delay new requests until that is done.
I think this would be something similar to the pooling code we have in the nss daemon where we check if we are already performing a specific request and use a queue to wait for a reply.
The gsh member of sdap_id_ctx is replaced with the cached_connection member, which has the same semantics: the connection that will be used by the next operation requesting a connection. Unlike gsh, cached_connection is not accessed and modified directly by 3 source files, but rather managed in an opaque way by the sdap_id_op_connect and sdap_id_op_done methods.
I see you started passing around sdap_id_op. The memory hierarchy around sdap_id_op is very delicate and required a lot of very careful handling to avoid having it disappear under our feet at the wrong time. It is meant to represent a single ldap operation tied to a specific ldap context; any changes to its use should be in a separate patch that I want to review carefully. But ideally sdap_id_op is opaque to most of the code and is internal to the processing of replies from the openldap libraries. It should never be used out of this context.
I have to caution here that I confused sdap_id_op and sdap_op when I wrote this remark. It looks like sdap_id_op is a queue for requests, and that goes in the right direction, but it looks a bit overly complex. Yet I have not fully analyzed the patch because I can't apply it.
I agree that both sdap_id_op and sdap_id_connection are opaque types. You can move the definitions from the headers to ldap_common.c. Moreover, even the declaration of sdap_id_connection can be made visible only to ldap_common.c.
The point I am trying to make here is that I don't like sdap_id_op_handle(), as that means you are probably using an old connection. Newer calls should always use new connections, so what is the point of trying to fetch an old handle?
There are two primary methods in the new connection logic: sdap_id_op_connect and sdap_id_op_done. They define the scope of a single operation, during which the LDAP connection is in use. The operation may consist of a single query to the server, as it does now, or it may span several interrelated queries that must run on the same server. So if any of the queries fails, we have to redo the complete operation on another server.
sdap_id_op represents a level of business logic one step higher than sdap_op: sdap_op is a single request to a single LDAP server, while sdap_id_op represents a retry-capable, possibly multistage operation targeted at a failover cluster of LDAP servers.
I agree with your concern about possible repeated use of a stale connection. I will change the code to release the connection in sdap_id_op_done, so the next time sdap_id_op_connect is called, either a cached or a new connection will be returned.
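To illustrate the intended scope, a synchronous sketch with hypothetical signatures (based on this description, not on the patch itself; the declarations here are not backed by definitions):

#include <talloc.h>

struct sdap_id_ctx;
struct sdap_id_op;
struct sdap_handle;

struct sdap_id_op *sdap_id_op_create(struct sdap_id_ctx *ctx);
int sdap_id_op_connect(struct sdap_id_op *op);            /* cached or new */
struct sdap_handle *sdap_id_op_handle(struct sdap_id_op *op);
int sdap_id_op_done(struct sdap_id_op *op, int ret, int *retry);

int run_account_op(struct sdap_id_ctx *ctx,
                   int (*do_queries)(struct sdap_handle *sh))
{
    struct sdap_id_op *op = sdap_id_op_create(ctx);
    int ret, retry = 0;

    do {
        ret = sdap_id_op_connect(op);
        if (ret == 0) {
            /* one or more queries, all against the same server */
            ret = do_queries(sdap_id_op_handle(op));
        }
        sdap_id_op_done(op, ret, &retry);   /* releases the connection */
    } while (ret != 0 && retry);            /* redo on the next server */

    talloc_free(op);
    return ret;
}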
Btw can you avoid comments like /* sdap handle */ when defining a sdap_handle in a structure? It looks pointless to me. Comments are good, but only when they tell you something that is not already clear from the code at hand :)
The reason I put the comment on this member is that all the other members needed a comment, and I prefer to have comments on all members rather than on all but one. Anyway, that's a matter of coding style. If you find the comment superfluous, let's remove it.
I was not really sure what coding style is used in the project; there are files coded quite differently from each other.
On the other hand I do not see why you find handling of these structures delicate:
1. sdap_id_op is owned by the operation state (e.g. by global_enum_state), so it is automatically destroyed when the operation (tevent request) completes.
Why is sdap_id_op_connect() not a tevent request? Passing around callbacks is usually frowned upon, as tevent requests are the way we want to handle any async event if at all possible.
Unless there is a *very* good reason why it is not a tevent_req, this is one of the things that needs to change before the patch can be accepted.
I know that at first it seems it doesn't matter, but trust me, there are almost 4 years of attempts in the samba community behind the tevent_req style, for a number of subtle and painful reasons. We have gone through at least 4 different ways to do continuations, and tevent_req is the one that finally makes things bearable. The style is important both formally and substantially for too many reasons to explain in this mail (you can jump on IRC in the #freeipa channel if you want to discuss it).
OK, that will be on my TODO list. I'll change that as soon as you finish the first review pass.
2. sdap_id_connection is owned by sdap_id_ctx, and the logic of its life cycle boils down to a single method, sdap_id_release_connection. A connection is released when: a) there is no operation using it; b) it is not cached; c) it is not in the connection notify loop (notify_lock == 0).
I hope I have explained why the changes were made the way they were.
What I don't understand here is why you added sdap_id_connection at all; sdap_handle is meant to represent a connection, so why add yet another structure here?
sdap_id_connection represents a connection attempt, and later an established connection, at the sdap_id_ctx level. It corresponds to sdap_handle the same way struct be_svc_data in data_provider_fo.c corresponds to struct fo_service. It simply holds the data required by sdap_id_ctx to keep track of the connection. And as this structure is completely opaque and hidden from the outside world, this should not be a problem. Probably a better name for it would be sdap_id_handle_data.
Also, I *really* don't like the fact that sdap_id_connection has members like connect_req and expire_timer.
sdap_id_connection owns both connect_req and expire_timer; it is their TALLOC_CTX. So there is no way they can be deleted without sdap_id_connection knowing it. Also, while the talloc library removes the need to keep all the pointers we have allocated, I'd rather keep them in order to be able to cancel the timer and the request if needed.
The only way to access a request should always be through the tevent_req_callback_data() function, because that guarantees the request is around (it's the parent of the subreq you are in). If you store it in a structure, you can generally end up trying to access freed memory during clean up operations.
Storing these pointers usually means the code is wrong. You may have been careful and taken it into account, but in general I prefer to avoid it, because even if you were careful enough, almost certainly the next person that will touch the code will screw it up. I am still reading through it, but these are normally signs that the architecture needs to be heavily adjusted.
See above. The ownership goes the other way around: connect_req __is__ a sub-request of the connect operation, not vice versa.
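For illustration, the ownership argument in code (a sketch with assumed field names):

#include <talloc.h>
#include <tevent.h>

struct sdap_id_connection {
    struct tevent_req *connect_req;     /* in-flight connect, child of conn */
    struct tevent_timer *expire_timer;  /* ticket expiry timer, child of conn */
};

void cancel_pending_connect(struct sdap_id_connection *conn)
{
    /* connect_req is a talloc child of conn, so the stored pointer can
     * never outlive it; freeing it here cancels the sub-request */
    talloc_free(conn->connect_req);
    conn->connect_req = NULL;
}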
I really do not see a way to split the patch and would very much appreciate any advice on how to make it more readable and easier to understand.
Each major architectural change should be in a separate patch, even though it doesn't add anything useful until the next patch comes in. The only rule is that the code compiles and works.
OK, let's discuss it again later. Currently we have agreed on splitting out the expire_time and failover server count code.
If you have any ideas on how to split the patch, I am ready to discuss them and implement them if needed.
I'd really like to see an explanation of the re-connection code; if you can provide a new patch as requested above, I will be able to better evaluate the re-connection logic.
The reconnect logic can be outlined as follows:
1. The operation starts with sdap_id_op_connect to connect to the server:
- if a cached connection is available, it is used
- if no connection attempt is in progress, a new connection is started
- otherwise the sdap_id_op is put on the queue waiting for the connect to complete
2. When the connection to the LDAP server completes:
- if the connection is successful, all sdap_id_op waiting for the connect are notified
- if the connection failed and there are no more servers to try, the backend is put offline and all sdap_id_op are notified of the failure
- otherwise a reconnect retry is attempted on all sdap_id_op that have not exceeded the retry limit
- all other sdap_id_op are notified of the connection failure
3. When the operation is complete, sdap_id_op_done is called:
- the connection is released from the operation
- if the operation succeeded - all done
- if the operation failed and the retry limit is not exceeded, sdap_id_op_done suggests a retry and the operation returns to step 1
- if the operation failed and the retry limit is exceeded, an error is reported
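For illustration, step 2 could look roughly like this (a sketch; all structure and field names are assumptions, and the release rule from earlier in the thread would run at the end):

/* minimal stand-ins for the patch's structures */
struct sdap_id_op {
    struct sdap_id_op *next;
    int retries;
    int max_retries;
    void (*callback)(void *data, int status);
    void *data;
};

struct sdap_id_connection {
    struct sdap_id_op *waiting_ops;
    int notify_lock;
};

static void sdap_id_op_retry(struct sdap_id_op *op)
{
    /* in the patch this would re-enter step 1 (sdap_id_op_connect) */
    (void)op;
}

void connect_notify(struct sdap_id_connection *conn, int status)
{
    struct sdap_id_op *op, *next;

    conn->notify_lock++;                /* keep conn alive during the loop */
    for (op = conn->waiting_ops; op != NULL; op = next) {
        next = op->next;
        if (status != 0 && op->retries++ < op->max_retries) {
            sdap_id_op_retry(op);       /* try the next failover server */
        } else {
            op->callback(op->data, status);
        }
    }
    conn->notify_lock--;
    /* the release rule then frees conn if it is unused and not cached */
}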
The common reconnect logic implementation is in:
src/providers/ldap/ldap_common.c
src/providers/ldap/ldap_common.h
while its usage is in:
src/providers/ipa/ipa_access.c
src/providers/ldap/ldap_id.c
src/providers/ldap/ldap_id_enum.c
Simo.
Regards, Eugene
On 04/01/2010 07:05 AM, Eugene Indenbom wrote:
Hi Simo,
On 03/31/2010 11:08 PM, Simo Sorce wrote:
Also, given the complexity of the patch, can you use the --patience switch to git format-patch and resubmit it, preferably rebased on top of master? If it is too much work, just tell me on top of which commit you created it so I can simply rebase my test branch and apply it without issues. It should make things much more readable, as unfortunately it seems the patch does not apply on top of master now, so reviewing it properly is a bit hard.
The git version of the patch was based on parent 80c8a4f94d54b23bce206fdd75ff2648977ce271. The original version was based on the 1.1.0 release. I have rebased the patch onto the latest version (7acaaa6c6563cf3b8ab20bf6431898d20d735842) and attached it. The format is git, at least my git thinks so.
Please use 'git format-patch' rather than 'git diff' to generate patches. This adds additional metadata to the diff file that makes it possible to apply the patch to different parents (because it can locate a common ancestor).
NB: I am not getting impatient; things seem to move quite fast. My English betrays me here and there, leaving a lot of room for misunderstanding.
The patch can be theoretically split into 3 parts:
1. Changes to ldap_child related to the returned ticket expiration date;
This should be a separate patch.
Then it should be applied first. The changes to return the ticket expiration date are in:
src/providers/ldap/ldap_child.c
src/providers/ldap/sdap.h
src/providers/ldap/sdap_async.h (first hunk)
src/providers/ldap/sdap_async_connection.c (all but the last hunk)
src/providers/ldap/sdap_async_private.h
src/providers/ldap/sdap_child_helpers.c
I will create a separate patch for them next time.
Please submit a separate patch for this.
2. Changes to the failover subsystem needed to return the number of servers registered for failover;
I am trying to understand the reason for this. Why should the retries depend on server availability? I would expect to retry once per server while there are servers available. Can you explain why you think you should try to calculate server availability in advance?
Just FYI, the problem is that we might not know in advance; in the future we plan to add support for reading SRV records to determine the list of servers, so the number of available servers may change without notice.
That's exactly the reason I have added methods to get the number of servers. The number of connection retries for each operation is calculated as twice the number of servers:
- one retry per server for a broken connection
- one retry for a failed connection attempt
This can be illustrated by a typical failure scenario:
- the operation reuses a cached connection that is already broken, because the LDAP service (not the computer) is down
- so the first operation attempt detects that the cached connection is broken
- the second operation attempt determines that the LDAP port is down on this server (the failover mechanics will retry the last used server)
- on the third attempt the operation connects to the next failover server
- if this server goes down, the scenario repeats.
Please note that if the failover mechanics report that there are no more servers to try (see the last hunk in the operation attempt), the operation is stopped altogether and the backend is put offline.
The changes to get the number of failover servers are in:
src/providers/data_provider_fo.c
src/providers/dp_backend.h
src/providers/fail_over.c
src/providers/fail_over.h
Please submit this as a patch on its own.
3. Changes to the LDAP ID backend connection and retry logic.
As you can see, the first two items are really small and absolutely pointless without the last.
The reason why the changes to the LDAP ID backend connection and retry logic must go together is very simple: the old logic relies on the gsh member of sdap_id_ctx, while the new logic has no such member.
The reason why gsh needs to go away is as follows:
1. gsh enforces that there will be one and only one connection to the DS;
Yes, this is completely intentional. Hence the questions.
2. When a connection is about to expire, we cannot use it for a new request, as it would expire halfway through;
ok
3. But at the same time the connection could still be busy with a previous request;
I am not sure why this would be relevant though.
4. Therefore we have to make a new connection and close the old one as soon as the requests using it are finished.
Yes, but this can be easily done with a destructor attached to the queue code within the sdap_handle; that's why the ops list is in there.
The destructor is called whenever talloc_zfree(ctx->gsh) is called. This is currently done in at least 15 places. And as there is no reference counting for requests using the connection, all other operations are immediately aborted.
The main feature of my patch is reference counting of sdap_id_connection by sdap_id_op, to ensure that the connection is closed as soon as all the consumers are done and the connection is no longer cached.
I'm not sure I like this. There's a lot of overhead involved in creating the initial connection, so I don't think we should close it off immediately after all active operations are completed, in case we get another transaction immediately after that. I'd rather we left the connection open until it times itself out, so that if another request comes in quickly, we can just continue to handle it.
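For illustration, such an idle timeout could look roughly like this (a sketch with illustrative names, not actual SSSD code):

#include <stdint.h>
#include <talloc.h>
#include <tevent.h>

struct idle_conn {
    struct tevent_timer *idle_timer;
};

static void idle_timeout(struct tevent_context *ev, struct tevent_timer *te,
                         struct timeval now, void *pvt)
{
    struct idle_conn *conn = talloc_get_type(pvt, struct idle_conn);
    talloc_free(conn);                  /* nobody reused it in time: close */
}

void arm_idle_timer(struct idle_conn *conn,
                    struct tevent_context *ev, uint32_t idle_secs)
{
    talloc_free(conn->idle_timer);      /* restart on every release */
    conn->idle_timer = tevent_add_timer(ev, conn,
                           tevent_timeval_current_ofs(idle_secs, 0),
                           idle_timeout, conn);
}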
The goal of SSSD is to never use more than one connection at a time for account information. So your patch is kind of changing our fundamental goal by allowing multiple connections. We need to carefully evaluate that part.
As I have explained above, the only time we have more than one connection to the DS is when the old connection is about to expire and we need to open a new one. So when the ticket lifetime is long enough (as it is in a normal Kerberos configuration) there will be no more than 2 connections open.
I still don't see why we should remove gsh from sdap_ctx. gsh is "the connection you should currently use". We probably need to be able to tell if a re-connection is happening already and delay new requests until that is done.
I think this would be something similar to the pooling code we have in the nss daemon where we check if we are already performing a specific request and use a queue to wait for a reply.
The gsh member of sdap_id_ctx is replaced with the cached_connection member, which has the same semantics: the connection that will be used by the next operation requesting a connection. Unlike gsh, cached_connection is not accessed and modified directly by 3 source files, but rather managed in an opaque way by the sdap_id_op_connect and sdap_id_op_done methods.
I see you started passing around sdap_id_op. The memory hierarchy around sdap_id_op is very delicate and required a lot of very careful handling to avoid having it disappear under our feet at the wrong time. It is meant to represent a single ldap operation tied to a specific ldap context; any changes to its use should be in a separate patch that I want to review carefully. But ideally sdap_id_op is opaque to most of the code and is internal to the processing of replies from the openldap libraries. It should never be used out of this context.
I have to caution here that I confused sdap_id_op and sdap_op when I wrote this remark. It looks like sdap_id_op is a queue for requests, and that goes in the right direction, but it looks a bit overly complex. Yet I have not fully analyzed the patch because I can't apply it.
I agree that both sdap_id_op and sdap_id_connection are opaque types. You can move the definitions from the headers to ldap_common.c. Moreover, even the declaration of sdap_id_connection can be made visible only to ldap_common.c.
The point I am trying to make here is that I don't like sdap_id_op_handle(), as that means you are probably using an old connection. Newer calls should always use new connections, so what is the point of trying to fetch an old handle?
There are two primary methods in the new connection logic: sdap_id_op_connect and sdap_id_op_done. They define the scope of a single operation, during which the LDAP connection is in use. The operation may consist of a single query to the server, as it does now, or it may span several interrelated queries that must run on the same server. So if any of the queries fails, we have to redo the complete operation on another server.
sdap_id_op represents a level of business logic one step higher than sdap_op: sdap_op is a single request to a single LDAP server, while sdap_id_op represents a retry-capable, possibly multistage operation targeted at a failover cluster of LDAP servers.
I agree with your concern about possible repeated use of a stale connection. I will change the code to release the connection in sdap_id_op_done, so the next time sdap_id_op_connect is called, either a cached or a new connection will be returned.
Btw can you avoid comments like /* sdap handle */ when defining a sdap_handle in a structure? It looks pointless to me. Comments are good, but only when they tell you something that is not already clear from the code at hand :)
The reason I put the comment on this member is that all the other members needed a comment, and I prefer to have comments on all members rather than on all but one. Anyway, that's a matter of coding style. If you find the comment superfluous, let's remove it.
I was not really sure what coding style is used in the project; there are files coded quite differently from each other.
Yeah, there's a moderate amount of code that predated our switching over to the tevent_req style. We don't like it and we're slowly moving everything over.
On the other hand I do not see why you find handling of these structures delicate:
1. sdap_id_op is owned by the operation state (e.g. by global_enum_state), so it is automatically destroyed when the operation (tevent request) completes.
Why is sdap_id_op_connect() not a tevent request? Passing around callbacks is usually frowned upon, as tevent requests are the way we want to handle any async event if at all possible.
Unless there is a *very* good reason why it is not a tevent_req, this is one of the things that needs to change before the patch can be accepted.
I know that at first it seems it doesn't matter, but trust me, there are almost 4 years of attempts in the samba community behind the tevent_req style, for a number of subtle and painful reasons. We have gone through at least 4 different ways to do continuations, and tevent_req is the one that finally makes things bearable. The style is important both formally and substantially for too many reasons to explain in this mail (you can jump on IRC in the #freeipa channel if you want to discuss it).
OK, that will be on my TODO list. I'll change that as soon as you finish the first review pass.
If you would please break this patch into the three patches I've recommended, I'll sit down and try to do a more formal review of the logic. Then I can walk you through the tevent_req style.
2. sdap_id_connection is owned by sdap_id_ctx, and the logic of its life cycle boils down to a single method, sdap_id_release_connection. A connection is released when: a) there is no operation using it; b) it is not cached; c) it is not in the connection notify loop (notify_lock == 0).
I hope I have explained why the changes were made the way they were.
What I don't understand here is why you added sdap_id_connection at all; sdap_handle is meant to represent a connection, so why add yet another structure here?
sdap_id_connection represents a connection attempt, and later an established connection, at the sdap_id_ctx level. It corresponds to sdap_handle the same way struct be_svc_data in data_provider_fo.c corresponds to struct fo_service. It simply holds the data required by sdap_id_ctx to keep track of the connection. And as this structure is completely opaque and hidden from the outside world, this should not be a problem. Probably a better name for it would be sdap_id_handle_data.
Also, I *really* don't like the fact that sdap_id_connection has members like connect_req and expire_timer.
sdap_id_connection owns both connect_req and expire_timer; it is their TALLOC_CTX. So there is no way they can be deleted without sdap_id_connection knowing it. Also, while the talloc library removes the need to keep all the pointers we have allocated, I'd rather keep them in order to be able to cancel the timer and the request if needed.
The only way to access a request should always be through the tevent_req_callback_data() function, because that guarantees the request is around (it's the parent of the subreq you are in). If you store it in a structure, you can generally end up trying to access freed memory during clean up operations.
Storing these pointers usually means the code is wrong. You may have been careful and taken it into account, but in general I prefer to avoid it, because even if you were careful enough, almost certainly the next person that will touch the code will screw it up. I am still reading through it, but these are normally signs that the architecture needs to be heavily adjusted.
See above. The ownership goes the other way around: connect_req __is__ a sub-request of the connect operation, not vice versa.
I really do not see a way to split the patch and would very much appreciate any advice on how to make it more readable and easier to understand.
Each major architectural change should be in a separate patch, even though it doesn't add anything useful until the next patch comes in. The only rule is that the code compiles and works.
OK, let's discuss it again later. Currently we have agreed on splitting out the expire_time and failover server count code.
If you have any ideas on how to split the patch, I am ready to discuss them and implement them if needed.
I'd really like to see an explanation of the re-connection code; if you can provide a new patch as requested above, I will be able to better evaluate the re-connection logic.
The reconnect logic can be outlined as follows:
1. The operation starts with sdap_id_op_connect to connect to the server:
- if a cached connection is available, it is used
- if no connection attempt is in progress, a new connection is started
- otherwise the sdap_id_op is put on the queue waiting for the connect to complete
2. When the connection to the LDAP server completes:
- if the connection is successful, all sdap_id_op waiting for the connect are notified
- if the connection failed and there are no more servers to try, the backend is put offline and all sdap_id_op are notified of the failure
- otherwise a reconnect retry is attempted on all sdap_id_op that have not exceeded the retry limit
- all other sdap_id_op are notified of the connection failure
3. When the operation is complete, sdap_id_op_done is called:
- the connection is released from the operation
- if the operation succeeded - all done
- if the operation failed and the retry limit is not exceeded, sdap_id_op_done suggests a retry and the operation returns to step 1
- if the operation failed and the retry limit is exceeded, an error is reported
The common reconnect logic implementation is in:
src/providers/ldap/ldap_common.c
src/providers/ldap/ldap_common.h
while its usage is in:
src/providers/ipa/ipa_access.c
src/providers/ldap/ldap_id.c
src/providers/ldap/ldap_id_enum.c
This should also be a separate patch, please.
Breaking the code up into digestible chunks like this will make the review and subsequent changes much more manageable.
Simo.
Regards, Eugene
--
Stephen Gallagher
On Thu, 01 Apr 2010 15:05:03 +0400 Eugene Indenbom eindenbom@gmail.com wrote:
Hi Simo,
On 03/31/2010 11:08 PM, Simo Sorce wrote:
Also, given the complexity of the patch, can you use the --patience switch to git format-patch and resubmit it, preferably rebased on top of master? If it is too much work, just tell me on top of which commit you created it so I can simply rebase my test branch and apply it without issues. It should make things much more readable, as unfortunately it seems the patch does not apply on top of master now, so reviewing it properly is a bit hard.
The git version of the patch was based on parent 80c8a4f94d54b23bce206fdd75ff2648977ce271. The original version was based on the 1.1.0 release. I have rebased the patch onto the latest version (7acaaa6c6563cf3b8ab20bf6431898d20d735842) and attached it. The format is git, at least my git thinks so.
Please use this command:
git format-patch -1 --patience <commit id>
git diff is a horrible format, as it can't be used correctly with git am.
NB: I am not getting impatient; things seem to move quite fast. My English betrays me here and there, leaving a lot of room for misunderstanding.
The patch can be theoretically split into 3 parts:
1. Changes to ldap_child related to the returned ticket expiration date;
This should be a separate patch.
Then it should be applied first. The changes to return the ticket expiration date are in:
src/providers/ldap/ldap_child.c
src/providers/ldap/sdap.h
src/providers/ldap/sdap_async.h (first hunk)
src/providers/ldap/sdap_async_connection.c (all but the last hunk)
src/providers/ldap/sdap_async_private.h
src/providers/ldap/sdap_child_helpers.c
I will create a separate patch for them next time.
2. Changes to the failover subsystem needed to return the number of servers registered for failover;
I am trying to understand the reason for this. Why should the retries depend on server availability? I would expect to retry once per server while there are servers available. Can you explain why you think you should try to calculate server availability in advance?
Just FYI, the problem is that we might not know in advance; in the future we plan to add support for reading SRV records to determine the list of servers, so the number of available servers may change without notice.
That's exactly the reason I have added methods to get the number of servers. The number of connection retries for each operation is calculated as twice the number of servers:
- one retry per server for a broken connection
- one retry for a failed connection attempt
This can be illustrated by a typical failure scenario:
- the operation reuses a cached connection that is already broken, because the LDAP service (not the computer) is down
- so the first operation attempt detects that the cached connection is broken
- the second operation attempt determines that the LDAP port is down on this server (the failover mechanics will retry the last used server)
- on the third attempt the operation connects to the next failover server
- if this server goes down, the scenario repeats.
Please note that if the failover mechanics report that there are no more servers to try (see the last hunk in the operation attempt), the operation is stopped altogether and the backend is put offline.
Sorry, but I still fail to see why you need the count. You just need to stop when the failover code tells you there are no more servers anyway. Why make it more complicated by calculating a number we don't really need?
The changes to get number of failover servers are in:
src/providers/data_provider_fo.c
src/providers/dp_backend.h
src/providers/fail_over.c
src/providers/fail_over.h
3. Changes to the LDAP ID backend connection and retry logic.
As you can see, the first two items are really small and absolutely pointless without the last.
The reason why the changes to the LDAP ID backend connection and retry logic must go together is very simple: the old logic relies on the gsh member of sdap_id_ctx, while the new logic has no such member.
The reason why gsh needs to go away is as follows:
1. gsh enforces that there will be one and only one connection to the DS;
Yes, this is completely intentional. Hence the questions.
2. When a connection is about to expire, we cannot use it for a new request, as it would expire halfway through;
ok
3. But at the same time the connection could still be busy with a previous request;
I am not sure why this would be relevant though.
4. Therefore we have to make a new connection and close the old one as soon as the requests using it are finished.
Yes, but this can be easily done with a destructor attached to the queue code within the sdap_handle; that's why the ops list is in there.
The destructor is called whenever talloc_zfree(ctx->gsh) is called. This is currently done in at least 15 places. And as there is no reference counting for requests using the connection, all other operations are immediately aborted.
I know; my proposal was to remove the talloc_zfree of ctx->gsh. Instead, add a function that marks the connection as "free when queue is empty", setting a boolean gsh->free_empty, and then change the code that handles the queue to free gsh when the last call is done. At the same time, simply "unlink" gsh by setting ctx->gsh = NULL.
This way existing operations will continue to use the old handle (we have a copy of it in sdap_op) while any new request will use the new server.
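For illustration, that proposal in code (a sketch; only free_empty comes from the mail above, the rest of the names are assumptions):

#include <talloc.h>
#include <stdbool.h>

struct sdap_op;

struct sdap_handle {
    struct sdap_op *ops;      /* pending operations on this connection */
    bool free_empty;          /* set when the handle has been retired */
};

void sdap_handle_retire(struct sdap_handle **gsh)
{
    (*gsh)->free_empty = true;
    *gsh = NULL;              /* new requests now trigger a new connect */
}

void sdap_handle_op_finished(struct sdap_handle *sh)
{
    /* called by the queue code after a completed op is removed */
    if (sh->free_empty && sh->ops == NULL) {
        talloc_free(sh);
    }
}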
The main feature of my patch is reference counting of sdap_id_connection by sdap_id_op, to ensure that the connection is closed as soon as all the consumers are done and the connection is no longer cached.
Yes, I am just disagreeing on the way it is done :)
The goal of SSSD is to never use more than one connection at a time for account information. So your patch is kind of changing our fundamental goal by allowing multiple connections. We need to carefully evaluate that part.
As I have explained above, the only time we have more than one connection to the DS is when the old connection is about to expire and we need to open a new one. So when the ticket lifetime is long enough (as it is in a normal Kerberos configuration) there will be no more than 2 connections open.
I still don't see why we should remove gsh from sdap_ctx. gsh is "the connection you should currently use". We probably need to be able to tell if a re-connection is happening already and delay new requests until that is done.
I think this would be something similar to the pooling code we have in the nss daemon where we check if we are already performing a specific request and use a queue to wait for a reply.
The gsh member of sdap_id_ctx is replaced with the cached_connection member, which has the same semantics: the connection that will be used by the next operation requesting a connection. Unlike gsh, cached_connection is not accessed and modified directly by 3 source files, but rather managed in an opaque way by the sdap_id_op_connect and sdap_id_op_done methods.
The point is I don't think we want to keep "cached connections" around; that's why it looks like overkill to me.
I see you started passing around sdap_id_op. The memory hierarchy around sdap_id_op is very delicate and required a lot of very careful handling to avoid having it disappear under our feet at the wrong time. It is meant to represent a single ldap operation tied to a specific ldap context; any changes to its use should be in a separate patch that I want to review carefully. But ideally sdap_id_op is opaque to most of the code and is internal to the processing of replies from the openldap libraries. It should never be used out of this context.
I have to caution here that I confused sdap_id_op and sdap_op when I wrote this remark. It looks like sdap_id_op is a queue for requests, and that goes in the right direction, but it looks a bit overly complex. Yet I have not fully analyzed the patch because I can't apply it.
I agree that both sdap_id_op and sdap_id_connection are opaque types. You can move the definitions from the headers to ldap_common.c. Moreover, even the declaration of sdap_id_connection can be made visible only to ldap_common.c.
The point I am trying to make here is that I don't like sdap_id_op_handle(), as that means you are probably using an old connection. Newer calls should always use new connections, so what is the point of trying to fetch an old handle?
There are two primary methods in the new connection logic: sdap_id_op_connect and sdap_id_op_done. They define the scope of a single operation, during which the LDAP connection is in use. The operation may consist of a single query to the server, as it does now, or it may span several interrelated queries that must run on the same server. So if any of the queries fails, we have to redo the complete operation on another server.
No, I don't think it really makes sense to reason along these lines. We have three cases here: 1) the GSS context is about to expire and we reconnect to the same server; 2) the connection already expired and was killed; and 3) the server has really gone down and we need to connect to a new one.
In case 1) we have no need to perform the following operations on the same connection; we can simply perform them on the new one, as they are against the same server anyway.
In case 2) the former connection is dead; there is no reason to try it, operations will simply fail.
In case 3) the server is down, so the previous connection is dead and we are in the same situation as in point 2.
So in all cases trying to reuse the previous connection is an unnecessary complication. We can simply always use the new one. Even if acquiring the new one will slow down an operation, this happens rarely, and I am willing to accept a small delay once in a while (waiting for the reconnection to complete) in order to keep the code much simpler.
sdap_id_op represents a level of business logic one step higher than sdap_op: sdap_op is a single request to a single LDAP server, while sdap_id_op represents a retry-capable, possibly multistage operation targeted at a failover cluster of LDAP servers.
Please rename sdap_id_op to something like sdap_connection; all it does is represent a connection operation.
I agree with your concern about possible repeated use of a stale connection. I will change the code to release the connection in sdap_id_op_done, so the next time sdap_id_op_connect is called, either a cached or a new connection will be returned.
We should never have a "cached connection", I think. See the above arguments. Without a pool of cached connections the code should become much simpler.
Btw can you avoid comments as /* sdap handle */ when defining a sdap_handle in a structure? it look pointless to me. Comments are good but only when they tell you something that is not clear from the code at hand already :)
The reason I have put the comment on this member is: all the other members needed a comment and I prefer to have comments on all members rather than on all but one. Anyway, that's a matter of coding style. If you find the comment superfluous, let's remove it.
I was not really sure what coding style is used in the project. There are files coded quite differently from each other.
On the other hand I do not see why you find handling of these structures delicate:
- sdap_id_op is owned by the operation state (e.g. by global_enum_state), so it will be automatically destroyed as the operation (tevent request) is completed
Why is sdap_id_op_connect() not a tevent request? Passing around callbacks is usually frowned upon; tevent requests are the way we want to handle any async event if at all possible.
Unless there is a *very* good reason why it is not a tevent_req, this is one of the things that needs to change before the patch can be accepted.
I know that at first it seems it doesn't matter, but trust me, there are almost 4 years of attempts in the samba community to come up with the tevent_req style, for a number of subtle and painful reasons. We have gone through at least 4 different ways to do continuations, and tevent_req is the one that finally makes things bearable. The style is important both formally and substantially, for too many reasons to explain in this mail (you can jump on IRC in the #freeipa channel if you want to discuss it).
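For the record, the skeleton of the style is small. Below is a minimal sketch of a tevent_req-style request pair; demo_send/demo_recv are illustrative names, not code from the patch or from sssd:

#include <talloc.h>
#include <tevent.h>
#include <stdint.h>

struct demo_state {
    int result;
};

struct tevent_req *demo_send(TALLOC_CTX *mem_ctx, struct tevent_context *ev)
{
    struct tevent_req *req;
    struct demo_state *state;

    req = tevent_req_create(mem_ctx, &state, struct demo_state);
    if (req == NULL) return NULL;

    state->result = 0;

    /* this toy request has no real async work, so it completes at once;
     * tevent_req_post() makes the caller's callback fire from the event
     * loop instead of synchronously */
    tevent_req_done(req);
    return tevent_req_post(req, ev);
}

int demo_recv(struct tevent_req *req, int *result)
{
    struct demo_state *state = tevent_req_data(req, struct demo_state);
    enum tevent_req_state tstate;
    uint64_t error;

    if (tevent_req_is_error(req, &tstate, &error)) {
        return (int)error;
    }
    *result = state->result;
    return 0;
}

Continuations are then chained by creating sub-requests and registering callbacks on them with tevent_req_set_callback().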
OK, that will be on my TODO list. I'll change that as soon as you finish the first review pass.
- sdap_id_connection is owned by sdap_id_ctx and the logic of its life-cycle boils down to a single method, sdap_id_release_connection. The connection is released when: a) there is no operation using it; b) it is not cached; c) it is not in the connection notify loop (notify_lock == 0)
I hope I have explained why changes were made the way they have been done.
What I don't understand here is why you added sdap_id_connection at all; sdap_handle is meant to represent a connection, so why add yet another structure here?
sdap_id_connection represents a connection attempt and, later, an established connection at the sdap_id_ctx level. It corresponds to sdap_handle the same way as struct be_svc_data in data_provider_fo.c corresponds to struct fo_service.
please don't take be_svc as an example of good code style, that interface is what it is for historical reasons (it used to be a synchronous interface that we used before standardizing on tevent_req and was influenced by the fact we needed to interface with dbus, which has a different style) but it is in no way one to emulate.
It simply holds the data required by sdap_id_ctx to keep track of a connection. And as this structure is completely opaque and hidden from the outside world, this should not be a problem. Probably a better name for it would be sdap_id_handle_data.
sdap_connection_data perhaps, but I still think you should have all you need in sdap_handle, and make that more opaque if necessary.
Also I *really* don't like the fact that sdap_id_connection has members like: connect_req, expire_timer
sdap_id_connection owns both (connect_req and expire_timer).
It's not a matter of hierarchies, it is just a sign that something is not good. It's very rare that saving a req pointer is a good thing.
It is their TALLOC_CTX, so there is no way they can be deleted without sdap_id_connection knowing it. Also, the talloc library removes the need to keep every pointer we have allocated; I'd rather keep them in order to be able to cancel the timer and the request if needed.
The timer should *always* be allocated on the request so that when it is finished it is freed and the timer with it.
If your request does not complete in a short time, then there is definitely something *very* wrong here. Requests must not be kept around; they are not meant to keep state. They are meant to carry out a very specific operation and to be released when the operation is completed. If state needs to survive the request, it must be returned in the _recv() function and stolen onto an appropriate memory context.
This is why requests are not kept around; there isn't a case where you may want to free a request. Either a timeout (eventually in a parent request) kicks in and an error is returned, causing the whole hierarchy of requests to be ultimately freed, or the request completes, returns data in recv() and is freed.
The only way to access a request should always be through the tevent_req_callback_data() function, because that guarantees the request is around (it's the parent of the subreq you are in). If you store it in a structure, you can generally end up trying to access freed memory during cleanup operations.
Storing these pointers usually means the code is wrong. You may have been careful and taken it into account, but in general I prefer to avoid it, because even if you were careful enough, almost certainly the next person to touch the code will screw it up. I am still reading through it, but these are normally signs that the architecture needs to be heavily adjusted.
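Both points can be shown in one sketch: the timeout timer is allocated on the request's state, so it is freed together with the request, and the handler recovers the request from its private data instead of from a pointer stored in a long-lived structure. The names are illustrative, not from the patch:

#include <talloc.h>
#include <tevent.h>
#include <errno.h>

struct conn_state {
    struct tevent_timer *te;   /* lives and dies with the request */
};

static void conn_timeout_handler(struct tevent_context *ev,
                                 struct tevent_timer *te,
                                 struct timeval tv, void *pvt)
{
    /* the timer's private data is the request itself */
    struct tevent_req *req = talloc_get_type(pvt, struct tevent_req);

    tevent_req_error(req, ETIMEDOUT);
}

struct tevent_req *conn_send(TALLOC_CTX *mem_ctx, struct tevent_context *ev)
{
    struct tevent_req *req;
    struct conn_state *state;

    req = tevent_req_create(mem_ctx, &state, struct conn_state);
    if (req == NULL) return NULL;

    /* allocated on the state: freeing the request frees the timer,
     * so no manual cancellation is ever needed */
    state->te = tevent_add_timer(ev, state,
                                 tevent_timeval_current_ofs(30, 0),
                                 conn_timeout_handler, req);
    if (state->te == NULL) {
        talloc_free(req);
        return NULL;
    }
    return req;
}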
See above. The ownership goes the other way around: connect_req __is__ a sub-request of the connect operation, not vice versa.
This is not the point.
I really do not see a way to split the patch and would appreciate it very much if you gave me some advice on how to make it more readable and easier to understand.
Each major architectural change should be in a separate patch, even if it doesn't add anything useful until the next patch comes in. The only rule is that the code compiles and works.
OK, let's discuss it again later. Currently we have agreed on splitting out the expire_time and failover server count code.
While I think the failover modifications must be split out, I also think they are unnecessary. I am still not convinced they are useful, at least not in the context of this patch.
If you have any ideas on how to split the patch, I am ready to discuss them and implement them if needed.
I'd really like to see an explanation of the re-connection code; if you can provide a new patch as requested above, I will be able to better evaluate the re-connection logic.
The reconnect logic can be outlined as follows:
The operation starts with sdap_id_op_connect to connect to the server:
- if a cached connection is available, it is used
- if no connection is in progress, a new connection is started
- otherwise the sdap_id_op is put on the queue waiting for the connect to complete
ack, although s/cached/existing/
When the connection to the LDAP server completes:
- if the connection is successful, all sdap_id_op waiting for the connect are notified
- if the connection failed and there are no more servers to try, the backend is put offline and all sdap_id_op are notified of the failure
- otherwise a reconnect retry is attempted on all sdap_id_op that have not exceeded the retry limit
- all other sdap_id_op are notified of the connection failure
ack except the retry limit; the only 2 factors that should matter here are:
1) the operation timeout kicked in, so the operation is aborted. This timeout should be a timed event attached to the connection request, and cleared when the request is terminated and the memory released.
2) the failover code returned that there are no more servers to try.
There is no other retry limit that really makes sense to me.
I was also thinking that we should probably have a task that refreshes/idles a connection, so that reconnections can happen completely outside of normal call timeouts and hopefully reduce the latency of calls. Except for the case when we decide we should terminate the connection because we are idle; in that case the first connection after the idle disconnect will suffer the connection establishment latency. But this should be a follow-up, separate patch.
When the operation is complete, sdap_id_op_done is called:
- the connection is released from the operation
- if operation succeeds - all done
- if operation succeeds and the retry limit is not exceeded, sdap_id_op_done suggests a retry and the operation returns to step 1
- if operation succeeds and the retry limit is exceeded, an error is reported
Uhmm some "succeeds" here look suspicious, I guess the last 2 are actually "failed"?
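Flattened into synchronous pseudocode, the per-operation loop outlined above would read roughly as follows; the signatures of sdap_id_op_connect/sdap_id_op_done are guesses, run_ldap_queries is a placeholder for the actual LDAP work, and EOK is sssd's usual success code:

int do_operation(struct sdap_id_op *op)
{
    int ret;
    bool retry;

    do {
        /* get a connection: cached, new, or wait for one in progress */
        ret = sdap_id_op_connect(op);
        if (ret != EOK) {
            return ret;   /* backend went offline or no servers left */
        }

        /* the operation itself: one query or several related ones */
        ret = run_ldap_queries(op);

        /* release the connection; on failure a retry may be allowed */
        retry = sdap_id_op_done(op, ret);
    } while (ret != EOK && retry);

    return ret;
}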
The common reconnect logic implementation is in:
src/providers/ldap/ldap_common.c
src/providers/ldap/ldap_common.h
while its usage is in:
src/providers/ipa/ipa_access.c
src/providers/ldap/ldap_id.c
src/providers/ldap/ldap_id_enum.c
Yep, the placement of the functions is correct, though I think we need some quite big changes based on my comments above.
Simo.
Dear Simo and Stephen,
I want to let you know that I haven't dropped the work on the patch. I had other business to attend to last week, so I returned to the patch only on Monday.
I have already implemented all the changes we have discussed and am currently testing them. Unfortunately I have found a severe problem in the sdap_handle destruction sequence. It can be reproduced as follows:
1. There is an active sdap_op;
2. Connection breaks midway;
3. sdap_process_result is called and releases the broken connection using sdap_handle_release;
4. sdap_handle_release starts calling the callbacks of all active sdap_op;
5. Inside a callback, sdap_op destroys the sdap_handle;
6. When the callback returns control, the sdap_handle is already deallocated, so further actions either assert or segfault the backend.
The problem exists in sssd-1.1.1.1 as well as in my patched version. I am currently working on the solution, which hopefully will be available tomorrow.
Eugene
On 04/01/2010 05:20 PM, Simo Sorce wrote:
On Thu, 01 Apr 2010 15:05:03 +0400 Eugene Indenbom eindenbom@gmail.com wrote:
Hi Simo,
On 03/31/2010 11:08 PM, Simo Sorce wrote:
Also, given the complexity of the patch, can you use the --patience switch to git format-patch and resubmit it, preferably rebased on top of master? If it is too much work, just tell me on top of which commit you created it, so I can simply rebase my test branch and apply it w/o issues. It should make things much more readable, as unfortunately it seems the patch does not apply on top of master now, so reviewing it properly is a bit hard.
The git version of the patch was based on the 80c8a4f94d54b23bce206fdd75ff2648977ce271 parent. The original version was based on the 1.1.0 release. I have rebased the patch to the latest version (7acaaa6c6563cf3b8ab20bf6431898d20d735842) and attached it. The format is git, at least my git thinks so.
Please use this command:
git format-patch -1 --patience <commit id>
git diff is a horrible format as it can't be used correctly with git am
NB: I am not getting impatient. Things seem to move quite fast. My English betrays me here and there, leaving a lot of room for misunderstanding.
The patch can be theoretically split into 3 parts:
- Changes to ldap_child related to the returned ticket expiration date;
This should be a separate patch.
Then it should be applied first. The changes to return the ticket expiration date are in:
src/providers/ldap/ldap_child.c
src/providers/ldap/sdap.h
src/providers/ldap/sdap_async.h (first hunk)
src/providers/ldap/sdap_async_connection.c (all but the last hunk)
src/providers/ldap/sdap_async_private.h
src/providers/ldap/sdap_child_helpers.c
I will create a separate patch for them next time.
- Changes to the failover subsystem needed to return the number of servers registered for failover;
I am trying to understand the reason for this. Why should the retries depend on server availability? I would expect to retry once per server, as long as there are servers available. Can you explain why you think you should calculate server availability in advance?
Just fyi, the problem is that we might not know in advance; in the future we plan to add support for reading SRV records to determine the list of servers, so the number of available servers may change w/o notice.
That's exactly the reason I have added methods to get the number of servers. The number of connection retries for each operation is calculated as twice the number of servers:
- one retry per server for a broken connection
- one retry for a failed connection attempt
This can be illustrated by typical failure scenario:
- the operation reuses a cached connection that is already broken, as the LDAP service (not the computer) is down
- so the first operation attempt detects that the cached connection is broken
- the second operation attempt determines that the LDAP port is down on this server (the failover mechanics will retry the last used server)
- on the third attempt the operation connects to the next failover server
- if this server goes down, the scenario repeats.
Please note that if the failover mechanics reports that there are no more servers to try (see the last hunk in the operation attempt), the operation is stopped altogether and the backend is put offline.
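The retry budget described here boils down to a one-liner; in this sketch be_fo_get_server_count() stands for the hypothetical accessor the failover patch adds:

/* two attempts per server: one for a connection that breaks
 * mid-operation, one for a failed connect to the same server */
static int sdap_max_retries(struct be_ctx *be, const char *service)
{
    int servers = be_fo_get_server_count(be, service);

    return 2 * servers;
}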
Sorry but I still fail to see why you need the count. You just need to stop when the failover code tells you there are no more servers anyway. Why make it more complicated by calculating a number we don't really need?
The changes to get the number of failover servers are in:
src/providers/data_provider_fo.c
src/providers/dp_backend.h
src/providers/fail_over.c
src/providers/fail_over.h
- Changes to LDAP ID backend connection and retry logic.
As you can see, the first two items are really small and absolutely pointless without the last.
The reason why the changes to the LDAP ID backend connection and retry logic must go together is very simple: the old logic relies on the gsh member of sdap_id_ctx, while in the new logic there is no such member.
The reason why gsh needs to go away is as follows:
- gsh enforces that there will be one and only one connection to DS;
Yes, this is completely intentional. Hence the questions.
- When the connection is about to expire we can not use it for a new request, as it will expire halfway;
ok
- But at the same time the connection could still be busy with a previous request;
I am not sure why this would be relevant though.
- Therefore we have to make a new connection and close the old one as soon as the requests using it are finished.
Yes, but this can be easily done with a destructor attached to the queue code within the sdap_handle, that's why the ops list is in there.
The destructor is called whenever talloc_zfree(ctx->gsh) is called. This is currently done in at least 15 places. And as there is no reference counting for requests using the connection, all other operations are immediately aborted.
I know; my proposal was to remove the talloc_zfree of ctx->gsh. Instead, add a function that marks the connection as "free when queue is empty", setting a boolean gsh->free_empty, and change the code that handles the queue to free gsh when the last call is done. At the same time simply "unlink" gsh by setting ctx->gsh = NULL.
This way existing operations will continue to use the context (we have a copy of it in sdap_op) while any new request will use the new server.
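The shape of that proposal can be sketched with illustrative types (DLIST_REMOVE is the usual samba-style linked-list macro; none of this is the actual sdap code):

struct sketch_handle {
    struct sketch_op *ops;    /* in-flight operations on this handle */
    bool free_when_empty;     /* set instead of freeing outright */
};

struct sketch_op {
    struct sketch_op *prev, *next;
    struct sketch_handle *sh;
};

/* called instead of talloc_zfree(ctx->gsh) */
static void handle_retire(struct sketch_handle **gsh)
{
    if (*gsh == NULL) return;
    (*gsh)->free_when_empty = true;   /* free once the queue drains */
    *gsh = NULL;                      /* new requests get a new handle */
}

/* called whenever an operation finishes */
static void op_finished(struct sketch_op *op)
{
    struct sketch_handle *sh = op->sh;

    DLIST_REMOVE(sh->ops, op);
    if (sh->free_when_empty && sh->ops == NULL) {
        talloc_free(sh);              /* last user is gone: safe to free */
    }
}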
The main feature of my patch is reference counting of sdap_id_connection by sdap_id_op, to ensure that the connection is closed as soon as all the consumers are done and the connection is no longer cached.
Yes, I am just disagreeing on the way it is done :)
The goal of SSSD is to never use more than one connection at a time for account information. So your patch is kind of changing our fundamental goal by allowing multiple connections. We need to carefully evaluate that part.
As I have explained above, the only time we have more than one connection to DS is when the old connection is about to expire and we need to open a new one. So when the ticket lifetime is long enough (as it is in a normal Kerberos configuration) there will be no more than 2 connections open.
I still don't see why we should remove gsh from sdap_ctx. gsh is "the connection you should currently use". We probably need to be able to tell if a re-connection is happening already and delay new requests until that is done.
I think this would be something similar to the pooling code we have in the nss daemon where we check if we are already performing a specific request and use a queue to wait for a reply.
The gsh member of sdap_id_ctx is replaced with the cached_connection member, which has the same semantics: the connection that will be used by the next operation requesting a connection. Unlike the gsh member, cached_connection is not accessed and modified directly by 3 source files, but rather managed in an opaque way by the sdap_id_op_connect and sdap_id_op_done methods.
The point is I don't think we want to keep "cached connections" around; that's why it looks like overkill to me.
Eugene Indenbom wrote:
Dear Simo and Stephen,
I want to let you know that I haven't dropped the work on the patch. I had other business to attend to last week, so I returned to the patch only on Monday.
I have already implemented all the changes we have discussed and am currently testing them. Unfortunately I have found a severe problem in the sdap_handle destruction sequence. It can be reproduced as follows:
1. There is an active sdap_op;
2. Connection breaks midway;
3. sdap_process_result is called and releases the broken connection using sdap_handle_release;
4. sdap_handle_release starts calling the callbacks of all active sdap_op;
5. Inside a callback, sdap_op destroys the sdap_handle;
6. When the callback returns control, the sdap_handle is already deallocated, so further actions either assert or segfault the backend.
The problem exists in sssd-1.1.1.1 as well as in my patched version. I am currently working on the solution, which hopefully will be available tomorrow.
Eugene,
Thank you for reporting this. Can you please address it as a separate stand-alone patch, independent from the other work you do? That would make things much easier to review, apply and test.
Thanks, Dmitri
On Tue, 06 Apr 2010 19:23:35 +0400 Eugene Indenbom eindenbom@gmail.com wrote:
I have already implemented all the changes we have discussed and am currently testing them. Unfortunately I have found a severe problem in the sdap_handle destruction sequence. It can be reproduced as follows:
1. There is an active sdap_op;
2. Connection breaks midway;
3. sdap_process_result is called and releases the broken connection using sdap_handle_release;
4. sdap_handle_release starts calling the callbacks of all active sdap_op;
5. Inside a callback, sdap_op destroys the sdap_handle;
6. When the callback returns control, the sdap_handle is already deallocated, so further actions either assert or segfault the backend.
The problem exists in sssd-1.1.1.1 as well as in my patched version. I am currently working on the solution, which hopefully will be available tomorrow.
Hi Eugene, I am trying to see how this can happen, do you have a valgrind trace by chance? Can you tell me exactly in what function the free happens?
Simo.
Dear colleagues,
I have finally finished my work on the patch refactoring. All the changes made are structured into 4 separate patches: 0001-GSSAPI-ticket-expiry-time-is-returned-from-ldap_chil.patch 0002-Added-an-interface-to-query-number-of-configured-fai.patch 0003-Fixed-recursive-sdap_handle-disconnect-sequence-from.patch 0004-The-LDAP-ID-backend-connection-logic-has-been-refact.patch
All of them are made by git format-patch with all the patience git has managed to find. :) I have taken into account all the notes and suggestions from your previous e-mails, but now I want to start from a blank sheet. The work has been substantially redone and the old quotations take too much room.
--- 0001-GSSAPI-ticket-expiry-time-is-returned-from-ldap_chil.patch
This patch adds an expire_time member to sdap_handle. expire_time is the time when the connection is expected to expire. Currently the time is set only for the GSSAPI SASL mech. For the other currently supported authentication types the connection never expires and expire_time is set to 0.
Having this time at hand saves us from a complicated query of the Kerberos ticket cache (see sdap_check_gssapi_reconnect) and is more reliable, as the Kerberos ticket cache could have been modified since the time of connection.
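A consumer can then decide whether a handle is still good for a fresh request with a check along these lines (a sketch: the margin parameter is made up, connected is the existing sdap_handle flag, and expire_time is the member this patch adds):

#include <time.h>
#include <stdbool.h>

static bool connection_usable(struct sdap_handle *sh, time_t margin)
{
    if (sh == NULL || !sh->connected) return false;
    if (sh->expire_time == 0) return true;          /* never expires */
    return sh->expire_time > time(NULL) + margin;   /* not about to expire */
}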
--- 0002-Added-an-interface-to-query-number-of-configured-fai.patch
This patch simply adds an interface to query the number of configured failover servers. The interface will be really useful once DNS SRV record service discovery is supported. I have added it as a provision for the forthcoming dynamic failover configuration.
--- 0003-Fixed-recursive-sdap_handle-disconnect-sequence-from.patch
This patch fixes a critical bug of accessing freed memory in sdap_handle_release. The problem can be reproduced as follows:
1. Break the LDAP connection while an LDAP query is in progress (e.g. by restarting the directory server)
2. sdap_process_result detects the communication error and calls sdap_handle_release
3. sdap_handle_release calls the active sdap_op callback
4. The callback deletes the sdap_handle
5. sdap_handle_release segfaults or asserts on access to the deallocated memory
I have solved the problem by splitting the private data off sdap_handle into a separate structure, sdap_handle_data. The deallocation of sdap_handle_data is delayed until all enumerations of the active ops chain are finished (currently only sdap_handle_release enumerates the operations). The destruction-protected region is scoped by sdap_handle_data_lock/sdap_handle_data_release calls.
The patch also contains other assorted changes to sdap_handle_release to prevent double deallocation of the data.
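The lock/release pair amounts to a deferred-free guard around the enumeration; roughly, with illustrative types rather than the actual patch code:

struct data_sketch {
    int lock_count;          /* active enumerations of the ops chain */
    bool delete_pending;     /* destruction requested while locked */
};

static void data_lock(struct data_sketch *d)
{
    d->lock_count++;
}

static void data_release(struct data_sketch *d)
{
    if (--d->lock_count == 0 && d->delete_pending) {
        talloc_free(d);      /* deferred destruction happens here */
    }
}

/* talloc destructor: refuse to free while an enumeration is running */
static int data_destructor(struct data_sketch *d)
{
    if (d->lock_count > 0) {
        d->delete_pending = true;
        return -1;           /* talloc keeps the memory alive for now */
    }
    return 0;
}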
--- 0004-The-LDAP-ID-backend-connection-logic-has-been-refact.patch
This patch contains the refactoring of the reconnect retry logic for LDAP operations. ldap_common.c/.h contain the reconnect retry API and implementation, while the changes in ipa_access.c, ldap_id.c and ldap_id_enum.c convert the existing LDAP queries to the new API.
The current reconnect logic does not address the following issues:
- There are no failover retries after a connection breakdown during an operation
- It is possible that 2 connections are opened in parallel when 2 operations are executed concurrently
The primary entry points of the new reconnect retry framework are sdap_id_op_connect and sdap_id_op_done. The reconnect retry usage can be outlined as follows:
1. The operation creates an op handle with sdap_id_op_create
2. When an LDAP connection is needed, the operation calls sdap_id_op_connect
3. sdap_id_op_connect:
   - attaches a cached connection to the sdap_id_op, if available
   - or starts a new connection and returns a tevent_req for synchronization
4. When the asynchronous connection request is completed, the operation calls sdap_id_op_connect_recv to get the connection result
5. When the operation is done with the connection, it calls sdap_id_op_done to:
   - notify that the LDAP connection is no longer in use
   - check whether a reconnect retry is allowed
6. If a reconnect retry is allowed, the operation restarts at step 2.
Therefore sdap_id_op_connect and sdap_id_op_done define the scope during which the LDAP connection is in use by the operation. This makes it possible to organize efficient and transparent connection caching; moreover, further changes to the caching strategy would not affect the operation code.
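A caller following steps 2-6 would be wired up roughly as in the sketch below. The identifiers are guessed from the description above (sdap_id_op_connect is assumed to return a tevent_req, talloc_zfree and EOK are sssd's usual helpers); this is not code from the patch:

struct my_query_state {
    struct sdap_id_op *op;
};

static void my_query_conn_done(struct tevent_req *subreq);

static void my_query_attempt(struct tevent_req *req)
{
    struct my_query_state *state = tevent_req_data(req,
                                                   struct my_query_state);
    struct tevent_req *subreq;

    /* steps 2-3: ask for a connection; a cached one may be attached */
    subreq = sdap_id_op_connect(state->op, state);
    if (subreq == NULL) {
        tevent_req_error(req, ENOMEM);
        return;
    }
    tevent_req_set_callback(subreq, my_query_conn_done, req);
}

static void my_query_conn_done(struct tevent_req *subreq)
{
    struct tevent_req *req = tevent_req_callback_data(subreq,
                                                      struct tevent_req);
    int ret;

    /* step 4: collect the connection result */
    ret = sdap_id_op_connect_recv(subreq);
    talloc_zfree(subreq);
    if (ret != EOK) {
        tevent_req_error(req, ret);
        return;
    }
    /* ... run the LDAP queries here, then call sdap_id_op_done (step 5)
     * and go back to my_query_attempt() if a retry is allowed (step 6) */
}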
The connection caching strategy employed by the patch is simplistic:
1. The connection is established on demand
2. A single connection is cached and discarded from the cache when:
   - it is broken, or
   - it is about to expire
3. The cached connection is reused (concurrently if needed) by all other operations while it is valid
4. The connection is closed when:
   - it is not cached, and
   - it is not in use
So typically the connection is established on the first LDAP query and kept until the Kerberos ticket expires. Then the connection is re-established with the next LDAP query. During normal operation there is strictly one connection open.
The number of reconnect retries for each operation is determined as follows:
1. Reconnect retries are stopped and the backend is put offline when there are no more servers to try.
2. There is an absolute limit on the retries performed per operation. This is required to handle the case where on each retry the connection is successfully established, but breaks before any results are returned. In such a scenario the failover logic would indefinitely retry one and the same server.
3. The absolute retry limit is calculated as 2 x <number of servers> to allow:
   - one established and midway-broken connection to each server (e.g. a reused cached connection)
   - one failed connection attempt to this server, after which the failover logic moves to the next server in the list
And finally, it is possible that the blackout time for the first failed server expires before the last failover server is tried. This is another way an infinite retry loop could be created.
All the changes together have been tested by me as follows:
1. I have tested each operation separately:
   - sdap_account_info: user info, group info, user groups (see ldap_id.c)
   - enum_users and enum_groups (see ldap_id_enum.c)
   - hbac_get_host_info and hbac_get_rules (see ipa_access.c)
2. In the following scenarios:
   - normal operation
   - 1 retry on server failure
   - switch to offline during an operation after trying all servers
Currently the patches are being tested in overnight operation.
Eugene
Eugene Indenbom wrote:
Dear colleagues,
Eugene,
Thank you very much for your contribution! It is really great and we appreciate your effort. However, I just wanted to mention that Simo is on vacation this week and I know for sure that he would want to look at these patches before they get committed. The patches look well structured now, so it will be easier to review and discuss them, but it will most likely not happen this week. Sorry for the delay.
Thanks, Dmitri
On 04/07/2010 10:10 AM, Eugene Indenbom wrote:
--- 0003-Fixed-recursive-sdap_handle-disconnect-sequence-from.patch
This patch fixes a critical bug of accessing freed memory in sdap_handle_release. The problem can be reproduced as follows:
1. Break the LDAP connection while an LDAP query is in progress (e.g. by restarting the directory server)
2. sdap_process_result detects the communication error and calls sdap_handle_release
3. sdap_handle_release calls the active sdap_op callback
4. The callback deletes the sdap_handle
5. sdap_handle_release segfaults or asserts on access to the deallocated memory
I am unable to find any code path such that step 4 (callback deletes sdap_handle) is possible. Every callback that we ever add with sdap_op_add() behaves the same way when called from sdap_handle_release().
Since sdap_handle_release() invokes the callbacks with the error code of EIO, they all immediately set tevent_req_error() and return. There is no path in which they delete any memory.
Furthermore, all sdap_op_add() calls use a memory hierarchy that is either below or completely unrelated to the sdap_handle hierarchy. So calling talloc_free() on the sdap_op object cannot free the sdap_handle.
Eugene, if you are sure that this exists in the current master, can you point out the relevant problem area?
Stephen Gallagher
Hi Stephen and Simo,
On 04/13/2010 12:23 AM, Stephen Gallagher wrote:
Eugene, if you are sure that this exists in the current master, can you point out the relevant problem area?
Yes, I am sure. The callback deleting the sdap_handle structure is not a direct callback on the sdap_op, but a chained callback on the associated tevent_req. The complete list of offending callbacks is:
- sdap_account_info_users_done
- sdap_account_info_groups_done
- sdap_account_info_initgr_done
- ldap_id_enum_users_done
- ldap_id_enum_groups_done
The crash itself happens in sdap_release_connection.
Actually, I have already sent a patch fixing the issue (please take a look at it); the fix and the accompanying mail explain how the crash happens and how it has been fixed.
Steps to reproduce are extremely simple:
1. Using a debugger, break in sdap_generic_send
2. Restart the LDAP server or the network connection leading to it
3. Resume execution
4. Observe the crash.
Regards, Eugene
On Tue, 13 Apr 2010 09:28:28 +0400 Eugene Indenbom eindenbom@gmail.com wrote:
Yes, I am sure. The callback deleting the sdap_handle structure is not a direct callback on the sdap_op, but a chained callback on the associated tevent_req. The complete list of offending callbacks is:
- sdap_account_info_users_done
- sdap_account_info_groups_done
- sdap_account_info_initgr_done
- ldap_id_enum_users_done
- ldap_id_enum_groups_done
The crash itself happens in sdap_release_connection.
Well sdap_release_connection does not exist in current master :-)
However, thanks to this explanation, I think I see where the problem lies.
In these callbacks we call sdap_mark_offline() on some errors, and that frees ctx->gsh, which causes us this nice loop:
sdap_process_result() -> sdap_handle_release() -> callback -> sdap_mark_offline() -> talloc_zfree() -> sdap_handle_destructor() -> sdap_handle_release()
And so on depending on how many ops we have.
The solution though is much simpler than the patch you propose. I will remove the free from sdap_mark_offline() and replace it with marking the sdap_handle as to-be-released, then free it at the end of sdap_handle_release(). I will also move the current sdap_handle_release() code into sdap_handle_release_internal(), so that it can be called from the destructor without us attempting to free sh within the destructor itself.
Actually, I have already sent a patch fixing the issue (please take a look at it); the fix and the accompanying mail explain how the crash happens and how it has been fixed.
Yes I looked at the patch but it didn't make things any clearer, and introduced changes that were a bit more invasive than needed IMO.
Steps to reproduce are extremely simple:
- Using debugger break in sdap_generic_send
- Restart LDAP server or network connection leading to it
- Resume execution
- Observe crash.
I will send an alternative patch shortly. Then I will resume my review of your patches. 1 and 2 look reasonable, although there are 2 minor nitpicks: your email address is probably wrong, and you add trailing spaces in some lines.
On the 4th I'll try to do a bit more analysis later, but it looks a bit too complicated and out of style. In particular I don't much like the sdap_id_op idea and the associated _create() function. I will try to give some more constructive feedback on it later.
Simo.
Hi Simo,
On 04/13/2010 06:44 PM, Simo Sorce wrote:
Well sdap_release_connection does not exist in current master :-)
I am sorry, that was a typo. The function I am talking about is sdap_handle_release.
However thanks to this explanation I think I see where the problem lies.
In these callbacks we call sdap_mark_offline() on some errors, and that frees ctx->gsh which causes us this nice loop:
sdap_process_result() -> sdap_handle_release() -> callback -> sdap_mark_offline() -> talloc_zfree() -> sdap_handle_destructor() -> sdap_handle_release()
talloc_zfree(sdap_ctx->gsh) is called in many other places as well. All the reconnect code relies on the timely destruction of the singleton connection (sdap_ctx->gsh). For example, take a look at the code right after the calls to sdap_check_gssapi_reconnect.
And so on depending on how many ops we have.
The solution though is much simpler than the patch you propose, I will remove the free from sdap_mark_offline() and replace it with marking sdap_handle as to be released, then free it at the end of sdap_handle_release()
You can not really rely on the fact that no code calls talloc_zfree(sdap_ctx->gsh). This is a potential source of future errors: somebody will call it and we will never find out until a crash report comes.
I will also move the current sdap_handle_release() code in sdap_handle_release_internal() so that it can be called from the destructor without us attempting to free sh within the destructor itself.
This would not help. I have evaluated this solution. The problem is:
1. The top-level call to sdap_handle_release is not a destructor. It's a bare disconnect.
2. The memory is freed after the inner call to sdap_handle_destructor completes. You can not change this.
3. When sdap_handle_release iterates over the ops and calls their callbacks on disconnect, it has to have access to the sdap_handle->ops member __after__ the callback returns. This means that the memory sdap_handle->ops occupies must survive the destructor. That's exactly what my patch does.
Actually, I have already sent a patch fixing the issue (please take a look at it); the fix and the accompanying mail explain how the crash happens and how it has been fixed.
Yes I looked at the patch but it didn't make things any clearer, and introduced changes that were a bit more invasive than needed IMO.
Steps to reproduce are extremely simple:
- Using debugger break in sdap_generic_send
- Restart LDAP server or network connection leading to it
- Resume execution
- Observe crash.
I will send an alternative patch shortly. Then I will resume my review of your patches.
Please, before sending your patch, repeat the test scenario I have described above and you will still have a crash, unless you really manage to kill all occurrences of talloc_zfree(sdap_ctx->gsh) in your patch.
1 and 2 look reasonable, although there are 2 minor nitpicks: your email address is probably wrong, and you add trailing spaces in some lines.
I am sorry for the trailing spaces. It has taken some time to set up a development environment. I am starting a new Linux project after more than 3 years without Linux, so it takes time to get all the tools at hand.
I'll find all such formatting errors and correct them.
In what places is the e-mail wrong? It is eindenbom@gmail.com. I have sent a test e-mail to myself and it appears to work. Sorry about that too.
On the 4th I'll try to do a bit more analysis later, but it looks a bit too complicated and out of style. In particular I don't like much the sdap_id_op idea and the associated _create() function. I will try to give some more constructive feedback on it later.
OK. One more argument: sdap_id_op is needed
1. to keep track of the current connection;
2. to reference-count the connection (otherwise it would be indeterminable when to destroy the connection);
3. to keep track of the number of retries.
This does not amount to much work and responsibility, but nevertheless it does not make sense to duplicate the retry tracking and other stuff in the upper logical layers.
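Purely as an illustration, the structure carries very little state:

/* the three responsibilities, sketched */
struct sdap_id_op_sketch {
    struct sdap_id_connection *conn;   /* 1. the current connection */
    int retries;                       /* 3. retries performed so far */
};
/* 2. the reference counting lives on the connection side: each
 * attached sdap_id_op holds one reference, dropped in sdap_id_op_done() */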
Regards, Eugene
On Tue, 13 Apr 2010 19:38:27 +0400 Eugene Indenbom eindenbom@gmail.com wrote:
Hi Simo,
On 04/13/2010 06:44 PM, Simo Sorce wrote:
[..]
Well sdap_release_connection does not exist in current master :-)
Sorry for the typo. The function I am talking about is sdap_handle_release.
Ah ok, this makes much more sense and is what we found as well :)
However, thanks to this explanation, I think I see where the problem lies.
In these callbacks we call sdap_mark_offline() on some errors, and that frees ctx->gsh, which causes us this nice loop:
sdap_process_result() -> sdap_handle_release() -> callback -> sdap_mark_offline() -> talloc_zfree() -> sdap_handle_destructor() -> sdap_handle_release()
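To make the loop concrete, here is a minimal, self-contained sketch of it. All names and bodies are hypothetical simplifications, not the real SSSD code; it only needs libtalloc (the talloc_zfree macro below mirrors the one SSSD defines):

/* hypothetical, heavily simplified reproduction of the loop above */
#include <talloc.h>

#define talloc_zfree(ptr) do { talloc_free(ptr); (ptr) = NULL; } while (0)

struct sdap_op {
    struct sdap_op *next;
    void (*callback)(void *data);
    void *data;
};

struct sdap_handle {
    struct sdap_op *ops;
};

static struct sdap_handle *gsh;      /* stands in for ctx->gsh */

static void sdap_handle_release(struct sdap_handle *sh);

/* freeing ctx->gsh runs the destructor, which calls release again */
static int sdap_handle_destructor(struct sdap_handle *sh)
{
    sdap_handle_release(sh);
    return 0;
}

/* an op callback that, like sdap_mark_offline(), frees the handle */
static void mark_offline_cb(void *data)
{
    (void)data;
    talloc_zfree(gsh);
}

static void sdap_handle_release(struct sdap_handle *sh)
{
    while (sh->ops != NULL) {        /* re-reads sh->ops after each callback */
        struct sdap_op *op = sh->ops;
        sh->ops = op->next;
        op->callback(op->data);      /* may free sh underneath us, so the */
    }                                /* next loop check is a use-after-free */
}

int main(void)
{
    gsh = talloc_zero(NULL, struct sdap_handle);
    talloc_set_destructor(gsh, sdap_handle_destructor);

    struct sdap_op *op = talloc_zero(gsh, struct sdap_op);
    op->callback = mark_offline_cb;
    gsh->ops = op;

    sdap_handle_release(gsh);        /* crashes, or trips valgrind */
    return 0;
}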
talloc_zfree(sdap_ctx->gsh) is called in many other places as well. All the reconnect code relies on timely destruction of the singleton connection (sdap_ctx->gsh). For example, take a look at the code right after the calls to sdap_check_gssapi_reconnect.
Yes, I am aware of this problem.
And so on depending on how many ops we have.
The solution, though, is much simpler than the patch you propose: I will remove the free from sdap_mark_offline() and replace it with marking the sdap_handle as to-be-released, then free it at the end of sdap_handle_release().
You cannot really rely on the fact that no code calls talloc_zfree(sdap_ctx->gsh). This is a potential source of future errors: somebody will call it, and we will never find out until a crash report comes in.
I am making the rule that only _send() functions can call it for now. Later on we can change how things work, but right now we need a minimal patch that does not introduce too many changes, so that we can also apply it to released branches.
I will also move the current sdap_handle_release() code into sdap_handle_release_internal(), so that it can be called from the destructor without us attempting to free sh within the destructor itself.
This would not help; I have evaluated this solution. The problem is:
1. The top-level call to sdap_handle_release is not a destructor; it is a bare disconnect.
2. The memory is freed after the inner call to sdap_handle_destructor completes. You cannot change this.
3. When sdap_handle_release iterates over the ops and calls their callbacks on disconnect, it has to have access to the sdap_handle->ops member __after__ a callback returns. This means the memory sdap_handle->ops occupies must survive the destructor. That is exactly what my patch does (sketched below).
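Roughly, the idea looks like this (a hedged sketch reusing the hypothetical types from the reproduction above, not the actual patch code):

/* detach the ops list from the handle before the disconnect callbacks
 * run, so the list survives even if a callback frees the handle */
static void sdap_handle_disconnect_ops(struct sdap_handle *sh)
{
    /* reparent every op to a temporary context that outlives sh */
    TALLOC_CTX *tmp = talloc_new(NULL);
    struct sdap_op *ops = sh->ops;
    struct sdap_op *op;

    sh->ops = NULL;
    for (op = ops; op != NULL; op = op->next) {
        talloc_steal(tmp, op);
    }

    /* a callback may now talloc_zfree(ctx->gsh) without invalidating
     * the list we are walking */
    for (op = ops; op != NULL; op = op->next) {
        op->callback(op->data);
    }
    talloc_free(tmp);
}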
Yes, in fact I already discussed this on IRC with other developers, and we decided to simply replace talloc_zfree(ctx->gsh) with ctx->gsh->connected = false; and let the first _send() function free it and reconnect. This means sdap_handle_release() remains unchanged, but also that it will not recurse, as we never free gsh within a callback.
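Something like this, roughly (again a hypothetical sketch with simplified types; assume struct sdap_handle from the earlier sketch also carries a bool connected flag, and sdap_reconnect is a made-up stand-in for the real reconnect path):

struct sdap_id_ctx {                  /* minimal stand-in */
    struct sdap_handle *gsh;
};

static int sdap_reconnect(struct sdap_id_ctx *ctx)
{
    /* stub: stands in for the real connect/bind request */
    (void)ctx;
    return 0;
}

/* instead of freeing the handle inside a callback ... */
static void sdap_mark_offline(struct sdap_id_ctx *ctx)
{
    if (ctx->gsh != NULL) {
        ctx->gsh->connected = false;  /* was: talloc_zfree(ctx->gsh) */
    }
}

/* ... the next _send() function frees it outside of any callback */
static int sdap_ensure_connection(struct sdap_id_ctx *ctx)
{
    if (ctx->gsh == NULL || !ctx->gsh->connected) {
        talloc_zfree(ctx->gsh);       /* safe: no op callback is running */
        return sdap_reconnect(ctx);
    }
    return 0;
}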
Actually, I have already sent a patch fixing the issue (please take a look at it); the fix and the accompanying mail explain how the crash happens and how it has been fixed.
Yes, I looked at the patch, but it didn't make things any clearer, and it introduced changes that were a bit more invasive than needed, IMO.
Steps to reproduce are extremely simple:
- Using a debugger, break in sdap_generic_send
- Restart the LDAP server or the network connection leading to it
- Resume execution
- Observe the crash.
I will send an alternative patch shortly; then I will resume my review of your patches.
Please, before sending your patch, repeat the test scenario I have described above: you will still get a crash unless your patch really removes all occurrences of talloc_zfree(sdap_ctx->gsh).
Yes, we will definitely test it :)
1 and 2 look reasonable, although there are two minor nitpicks: your email address is probably wrong, and you add trailing spaces on some lines.
I am sorry for the trailing spaces. It has taken some time to set up my development environment; I am starting a new Linux project after more than 3 years without Linux, so it takes time to get all the tools at hand.
Don't worry, as I said, they are just minor nitpicks.
I'll find all such formatting errors and correct them.
In what places is the e-mail wrong? It is eindenbom@gmail.com. I have sent a test e-mail to myself and it appears to work. Sorry about that too.
I think you forgot to set your email in ~/.gitconfig; the author in the From line of the patches is: eindenbom eindenbom@indenbom-f12.abbyy.ru. See the second line of the patches you've sent.
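For the record, fixing it is just a matter of standard git configuration (example values):

# in ~/.gitconfig -- or run: git config --global user.email "eindenbom@gmail.com"
[user]
    name = Eugene Indenbom
    email = eindenbom@gmail.com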
On the 4th one I'll try to do a bit more analysis later, but it looks a bit too complicated and out of style. In particular, I don't much like the sdap_id_op idea and the associated _create() function. I will try to give some more constructive feedback on it later.
OK. One more argument: sdap_id_op is needed:
1. To keep track of the current connection.
2. To reference-count the connection (otherwise there would be no way to determine when to destroy it).
3. To keep track of the number of retries.
This does not amount to much work and responsibility, but nevertheless it does not make sense to repeat retry tracking and other stuff in the upper logical layers.
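Here is roughly the shape I mean, matching the three points above (field and function names are illustrative, reusing the hypothetical types from the earlier sketches, not the literal patch):

/* per-request handle: owned by the operation's tevent request state */
struct sdap_id_op {
    struct sdap_id_ctx *ctx;           /* backend the op belongs to */
    struct sdap_id_connection *conn;   /* 1. current connection */
    int retry_count;                   /* 3. retries performed so far */
};

/* shared connection: owned by sdap_id_ctx */
struct sdap_id_connection {
    struct sdap_id_connection *next;   /* list kept on sdap_id_ctx */
    struct sdap_handle *sh;            /* the actual LDAP connection */
    int ops_count;                     /* 2. number of ops using it */
};

/* single life-cycle method: the connection goes away only when the
 * last operation referencing it lets go */
static void sdap_id_op_release_connection(struct sdap_id_op *op)
{
    struct sdap_id_connection *conn = op->conn;

    op->conn = NULL;
    if (conn != NULL && --conn->ops_count == 0) {
        /* unlink from the ctx list and free -- sketch only */
        talloc_free(conn);
    }
}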
Yeah, the intent is clear. I need to think about whether we can easily have a common layer that abstracts this stuff, or whether repeating it causes fewer issues than abstracting.
Simo.