[SSSD] [PATCHES] Primary server support in SSSD

Pavel Březina pbrezina at redhat.com
Wed Aug 1 12:34:05 UTC 2012


On 08/01/2012 01:13 PM, Jan Zelený wrote:
> On Tuesday, 31 July 2012 15:14:16, Pavel Březina wrote:
>> On 31.7.2012 10:39, Jan Zelený wrote:
>>> On Monday, 30 July 2012 13:53:50, Pavel Březina wrote:
>>>> On 07/20/2012 02:04 PM, Jan Zelený wrote:
>>>>> On Thursday, 19 July 2012 16:10:08, Stephen Gallagher wrote:
>>>>>> On Tue, 2012-07-17 at 11:21 +0200, Jan Zelený wrote:
>>>>>>> On Wednesday, 11 July 2012 13:34:33, Stephen Gallagher wrote:
>>>>>>>> On Thu, 2012-06-21 at 12:15 +0200, Jan Zelený wrote:
>>>>>>>>>> On Fri, 2012-06-08 at 12:56 +0200, Jan Zelený wrote:
>>>>>>>>>>> Hi everybody,
>>>>>>>>>>> I'm sending some patches which implement primary server support
>>>>>>>>>>> as I see it. It is based on comment #6 in the related ticket:
>>>>>>>>>>> https://fedorahosted.org/sssd/ticket/1128
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Patch #0001 basically adds the necessary support in the failover code.
>>>>>>>>>>> Patches #0002 - #0004 extend this support in each provider.
>>>>>>>>>>> Patch #0005 documents the new concept in the failover section of the man pages.
>>>>>>>>>>> Patches #0006 - #0008 add new options for each provider which utilize this concept.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Just briefly about the approach: when adding a new server to the
>>>>>>>>>>> list of servers related to a service, each server can be marked
>>>>>>>>>>> either as primary or secondary.
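>>>>>>>>>>>
>>>>>>>>>>> For illustration, registering the servers could look roughly like
>>>>>>>>>>> this (hostnames and the exact fo_add_server() signature are
>>>>>>>>>>> illustrative, following the shape of patch #0001):
>>>>>>>>>>>
>>>>>>>>>>> static errno_t add_example_servers(struct fo_service *svc,
>>>>>>>>>>>                                    void *user_data)
>>>>>>>>>>> {
>>>>>>>>>>>     errno_t ret;
>>>>>>>>>>>
>>>>>>>>>>>     /* The last argument marks the server as primary (true)
>>>>>>>>>>>      * or secondary (false). */
>>>>>>>>>>>     ret = fo_add_server(svc, "ldap1.example.com", 389,
>>>>>>>>>>>                         user_data, true);
>>>>>>>>>>>     if (ret != EOK) {
>>>>>>>>>>>         return ret;
>>>>>>>>>>>     }
>>>>>>>>>>>
>>>>>>>>>>>     return fo_add_server(svc, "ldap2.example.com", 389,
>>>>>>>>>>>                         user_data, false);
>>>>>>>>>>> }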
>>>>>>>>>>>
>>>>>>>>>>> When selecting a new server from the failover list, the algorithm
>>>>>>>>>>> iterates over the list twice - first it looks for a primary server
>>>>>>>>>>> and, if none is found, it also tries the secondary servers.
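>>>>>>>>>>>
>>>>>>>>>>> A rough sketch of the two passes (server_is_usable() and the
>>>>>>>>>>> primary flag are placeholders for the real checks in fail_over.c):
>>>>>>>>>>>
>>>>>>>>>>> static struct fo_server *select_server(struct fo_server *list)
>>>>>>>>>>> {
>>>>>>>>>>>     struct fo_server *srv;
>>>>>>>>>>>
>>>>>>>>>>>     /* First pass: consider primary servers only. */
>>>>>>>>>>>     for (srv = list; srv != NULL; srv = srv->next) {
>>>>>>>>>>>         if (srv->primary && server_is_usable(srv)) {
>>>>>>>>>>>             return srv;
>>>>>>>>>>>         }
>>>>>>>>>>>     }
>>>>>>>>>>>
>>>>>>>>>>>     /* Second pass: fall back to secondary servers. */
>>>>>>>>>>>     for (srv = list; srv != NULL; srv = srv->next) {
>>>>>>>>>>>         if (!srv->primary && server_is_usable(srv)) {
>>>>>>>>>>>             return srv;
>>>>>>>>>>>         }
>>>>>>>>>>>     }
>>>>>>>>>>>
>>>>>>>>>>>     return NULL;
>>>>>>>>>>> }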
>>>>>>>>>>>
>>>>>>>>>>> If a server returned from failover is not a primary server, a
>>>>>>>>>>> timeout is set (currently hard-coded to 30 seconds) for a primary
>>>>>>>>>>> server lookup. This timeout is rescheduled until a primary server
>>>>>>>>>>> is found. If a primary server (either working or neutral) is found
>>>>>>>>>>> after this timeout, the status of the backend is reset, i.e. first
>>>>>>>>>>> all offline and then all online callbacks are called. This is done
>>>>>>>>>>> to interrupt the connection to the secondary server in favor of a
>>>>>>>>>>> new connection to a primary server.
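>>>>>>>>>>>
>>>>>>>>>>> The rescheduling can be sketched with a tevent timer (the handler
>>>>>>>>>>> and helper names below are illustrative, not the actual code):
>>>>>>>>>>>
>>>>>>>>>>> static void primary_server_check(struct tevent_context *ev,
>>>>>>>>>>>                                  struct tevent_timer *te,
>>>>>>>>>>>                                  struct timeval tv, void *pvt)
>>>>>>>>>>> {
>>>>>>>>>>>     struct be_ctx *be_ctx = pvt;
>>>>>>>>>>>
>>>>>>>>>>>     if (primary_server_available(be_ctx)) {  /* placeholder */
>>>>>>>>>>>         /* Reset the backend: offline, then online callbacks. */
>>>>>>>>>>>         reset_backend_status(be_ctx);        /* placeholder */
>>>>>>>>>>>         return;
>>>>>>>>>>>     }
>>>>>>>>>>>
>>>>>>>>>>>     /* No primary server yet; check again in 30 seconds. */
>>>>>>>>>>>     tevent_add_timer(ev, be_ctx, tevent_timeval_current_ofs(30, 0),
>>>>>>>>>>>                      primary_server_check, be_ctx);
>>>>>>>>>>> }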
>>>>>>>>>>
>>>>>>>>>> I think this is dangerous. I don't want us to force offline
>>>>>>>>>> operation
>>>>>>>>>> even momentarily.
>>>>>>>>>
>>>>>>>>> I changed the concept to re-connection, which is done instantly,
>>>>>>>>> without scheduling any events. Combined with the possibility to mark
>>>>>>>>> LDAP connections as "disconnecting", it addresses the original issue
>>>>>>>>> of going offline and back online.
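>>>>>>>>>
>>>>>>>>> Roughly like this (field and helper names are illustrative):
>>>>>>>>>
>>>>>>>>> static errno_t switch_to_primary(struct sdap_id_ctx *ctx)
>>>>>>>>> {
>>>>>>>>>     /* Flag the current connection so that in-flight operations
>>>>>>>>>      * finish on it; only new requests use the primary server. */
>>>>>>>>>     ctx->conn->sh->disconnecting = true;
>>>>>>>>>
>>>>>>>>>     return sdap_id_connect_primary(ctx);  /* hypothetical helper */
>>>>>>>>> }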
>>>>>>>>>
>>>>>>>>>>> I have just a couple of concerns about things which I have yet to
>>>>>>>>>>> inspect. First of all, when the connection to the old server is
>>>>>>>>>>> interrupted, what if an operation is currently in progress on this
>>>>>>>>>>> connection? I know ticket #1027 involves a similar scenario. Maybe,
>>>>>>>>>>> to kill two birds with one stone, I could design an extension to
>>>>>>>>>>> immediately invoke the callbacks of all operations running on the
>>>>>>>>>>> existing connection. What do you think about that?
>>>>>>>>>>
>>>>>>>>>> Better behavior would be to allow existing communications to
>>>>>>>>>> complete,
>>>>>>>>>> and only direct new requests to the primary server. I think
>>>>>>>>>> interrupting
>>>>>>>>>> in-progress requests would be dangerous and unpredictable.
>>>>>>>>>
>>>>>>>>> Done, see the proposed concept of reconnection.
>>>>>>>>>
>>>>>>>>>>> My second concern is about the port status timeout. After some
>>>>>>>>>>> time, a port originally marked as not-working is marked as neutral
>>>>>>>>>>> again. That effectively leads to a second primary server
>>>>>>>>>>> reconnection attempt succeeding even though the server is still
>>>>>>>>>>> not running. As a result, an unnecessary reconnection to a
>>>>>>>>>>> secondary server is performed once some data are needed from the
>>>>>>>>>>> server.
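>>>>>>>>>>>
>>>>>>>>>>> For reference, the port status handling is roughly this (helper
>>>>>>>>>>> and field names are illustrative):
>>>>>>>>>>>
>>>>>>>>>>> enum port_status {
>>>>>>>>>>>     PORT_NEUTRAL,       /* unknown, will be tried */
>>>>>>>>>>>     PORT_WORKING,
>>>>>>>>>>>     PORT_NOT_WORKING
>>>>>>>>>>> };
>>>>>>>>>>>
>>>>>>>>>>> /* After retry_timeout seconds, a failed port is treated as
>>>>>>>>>>>  * neutral again, so the failover code will retry it. */
>>>>>>>>>>> static enum port_status effective_status(struct fo_server *srv,
>>>>>>>>>>>                                          time_t retry_timeout)
>>>>>>>>>>> {
>>>>>>>>>>>     if (srv->port_status == PORT_NOT_WORKING &&
>>>>>>>>>>>         srv->last_status_change + retry_timeout < time(NULL)) {
>>>>>>>>>>>         return PORT_NEUTRAL;
>>>>>>>>>>>     }
>>>>>>>>>>>
>>>>>>>>>>>     return srv->port_status;
>>>>>>>>>>> }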
>>>>>>>>>>
>>>>>>>>>> This is intentional behavior. It's done so that if we eventually
>>>>>>>>>> lose the connection to the current server, we'll be able to retry
>>>>>>>>>> that one again while looping through the failover code. I'd suggest
>>>>>>>>>> just resetting the port-neutralizer timeout when the primary
>>>>>>>>>> reconnection attempt is made and fails.
>>>>>>>>>
>>>>>>>>> I left it as it was. It is impossible for the failover code to check
>>>>>>>>> whether the port is online, so this solution is probably the best we
>>>>>>>>> can do.
>>>>>>>>>
>>>>>>>>>>> Thank you very much for your opinions on this
>>>>>>>>>>> Jan
>>>>>>>>>>
>>>>>>>>>> General comment: I don't think I like the term "secondary" here. I
>>>>>>>>>> think
>>>>>>>>>> we might want to use something more descriptive. Perhaps "backup"?
>>>>>>>>>> I'm
>>>>>>>>>> soliciting suggestions :)
>>>>>>>>>
>>>>>>>>> Backup seems as good as any. Should I rename all those config
>>>>>>>>> options
>>>>>>>>> or
>>>>>>>>> do you have any other ideas?
>>>>>>>>>
>>>>>>>>>> Patch 0001: Nack
>>>>>>>>>> See above concerns.
>>>>>>>>>
>>>>>>>>> Hopefully addressed.
>>>>>>>>>
>>>>>>>>>> Patch 0002: Nack
>>>>>>>>>>
>>>>>>>>>> Please add a comment explaining why secondary_urls gets promoted to
>>>>>>>>>> primary.
>>>>>>>>>
>>>>>>>>> Added a DEBUG message; that's more useful, I guess.
>>>>>>>>>
>>>>>>>>>> Patch 0003: Ack
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Patch 0004: Nack
>>>>>>>>>>
>>>>>>>>>> Please add a comment explaining why secondary_urls gets promoted to
>>>>>>>>>> primary.
>>>>>>>>>
>>>>>>>>> Added a DEBUG message; that's more useful, I guess.
>>>>>>>>>
>>>>>>>>>> Patch 0005: Nack
>>>>>>>>>>
>>>>>>>>>> Typos in manpage:
>>>>>>>>>> "For each failover-enabled config option two variants exists:"
>>>>>>>>>> should be
>>>>>>>>>> "For each failover-enabled config option, two variants exist:"
>>>>>>>>>> and
>>>>>>>>>> "it will replace current" should be "it will replace the current"
>>>>>>>>>
>>>>>>>>> Fixed
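>>>>>>>>>
>>>>>>>>> For illustration, the two variants described in that section look
>>>>>>>>> like this in sssd.conf (using the LDAP provider and the "backup"
>>>>>>>>> naming discussed above):
>>>>>>>>>
>>>>>>>>> [domain/example.com]
>>>>>>>>> id_provider = ldap
>>>>>>>>> ldap_uri = ldap://primary.example.com
>>>>>>>>> ldap_backup_uri = ldap://backup1.example.com, ldap://backup2.example.com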
>>>>>>>>>
>>>>>>>>>> Patch 0006: Nack
>>>>>>>>>>
>>>>>>>>>> Typo in the manpage:
>>>>>>>>>> "If neither options is specified," should be "If neither option is
>>>>>>>>>> specified,"
>>>>>>>>>>
>>>>>>>>>> You removed the warning about missing ldap_uri. You should put it
>>>>>>>>>> back
>>>>>>>>>> in if both urls and secondary_urls are NULL.
>>>>>>>>>>
>>>>>>>>>> Please add the same config debug information that you used in the
>>>>>>>>>> IPA
>>>>>>>>>> provider (about promoting secondary servers to primary)
>>>>>>>>>
>>>>>>>>> Done
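>>>>>>>>>
>>>>>>>>> The restored warning amounts to something like this (debug level and
>>>>>>>>> wording are illustrative):
>>>>>>>>>
>>>>>>>>> if (urls == NULL && backup_urls == NULL) {
>>>>>>>>>     DEBUG(SSSDBG_CONF_SETTINGS,
>>>>>>>>>           ("No ldap_uri nor ldap_backup_uri specified, "
>>>>>>>>>            "using service discovery\n"));
>>>>>>>>> }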
>>>>>>>>>
>>>>>>>>>> Patch 0007: Nack
>>>>>>>>>>
>>>>>>>>>> Same comment about the warning when the servers are unspecified.
>>>>>>>>>
>>>>>>>>> Done
>>>>>>>>
>>>>>>>> These patches look good by inspection, but due to divergence in the
>>>>>>>> codebase, they no longer apply atop the master. Please rebase them so
>>>>>>>> I
>>>>>>>> can run some tests.
>>>>>>>>
>>>>>>>> Also, the AD provider probably needs modification as well to take
>>>>>>>> advantage of this new feature (sorry, it landed first).
>>>>>>>
>>>>>>> Rebased on top of the current master. The AD provider changes are
>>>>>>> included as well; I didn't have a chance to test them, though.
>>>>>>
>>>>>> These patches don't apply cleanly. Patch 0010 fails with missing blobs.
>>>>>> Please submit rebased patches (or identify which other patches are
>>>>>> required).
>>>>>
>>>>> Here it is, rebased on top of the current master (strange that rebase
>>>>> often works while applying the patch doesn't). No additional patches
>>>>> are required.
>>>>>
>>>>> Thanks
>>>>> Jan
>>>>
>>>> Nack.
>>>>
>>>> IPA and LDAP work. I didn't test AD because I don't have an AD server
>>>> available.
>>>
>>> I'll send you details about our testing AD environment.
>>>
>>>> Kerberos backup servers don't seem to work. I think it is because you
>>>> are loading the wrong option in krb5_init.c:103.
>>>
>>> You are right, fixed.
>>>
>>>> Manpage failover section:
>>>> - says that the primary server timeout is 60 seconds but you use 30
>>>> seconds in the code.
>>>> - "After this timeout SSSD will periodically try to reconnect to one of
>>>> *the* primary servers."?
>>>
>>> Both corrected.
>>>
>>>> data_provider_fo.c:452 You have two consecutive spaces in the debug
>>>> message.
>>>
>>> Deleted
>>>
>>>> data_provider_fo.c:477 Can you put the DEBUG statement on just two
>>>> lines, please? I found this a little confusing.
>>>
>>> Done
>>>
>>>> data_provider_fo.c:538 You should check if new_subreq is not null. I
>>>> know it is not possible in the current code flow, but what if.
>>>
>>> That is too defensive for no gain, I think. What if someone modifies this
>>> code later? I'm sure such a check would fit better as part of that
>>> modification.
>>>
>>>> fail_over.c:836 The comment says: "Also cancel the primary server
>>>> reactivation event until the lookup is complete". Are you cancelling the
>>>> timer somewhere?
>>>
>>> No, the comment was left over from an older version of the patch set.
>>> Removed.
>>
>> Nack.
>>
>>>> sdap_service_init(), ad_servers_init(): You're allocating tmp_ctx on
>>>> mem_ctx instead of NULL.
>>>
>>> That was an issue in the original code. Fixed.
>>
>> You left it in ad_servers_init().
>
> Ooops, I fixed ipa_servers_init() instead. Fixed now.
>
>>>> krb5_common.c:623 Invalid debug message. At this point backup servers
>>>> aren't present.
>>>
>>> Good catch. Corrected.
>>>
>>>
>>> Sending corrected patches.
>>
>> Works for IPA, LDAP, KRB5 and AD.
>>
>> Backup KDC for GSSAPI is not functional. You need to add backup servers
>> in sdap_gssapi_init().
>
> Fixed
>
>> When you provide unresolvable hostnames, it does not try the next server
>> (log and sssd.conf attached).

ACK!

Good job.


