[SSSD] [PATCH v2] intg: Add more LDAP tests

Nikolai Kondrashov Nikolai.Kondrashov at redhat.com
Tue Nov 3 17:30:09 UTC 2015


On 10/29/2015 04:52 PM, Michal Židek wrote:
> On 10/27/2015 04:10 PM, Nikolai Kondrashov wrote:
>> On 10/23/2015 02:54 PM, Michal Židek wrote:
>>>> +def format_interactive_conf(ldap_conn, schema):
>>>> +    """Format an SSSD configuration with all caches refreshing in 4
>>>> seconds"""
>>>> +    return \
>>>> +        format_basic_conf(ldap_conn, schema, enum=True) + \
>>>> +        unindent("""
>>>> +            [nss]
>>>> +            memcache_timeout                    = 4
>>>
>>> It is better to set the memcache timeout to zero in tests
>>> that are not dedicated to memcache. This will also probably
>>> solve the membership test failures that you saw when using the
>>> workaround for the memcache test failures.
>>
>> Hmm, perhaps. However, could you please explain why the membership tests
>> fail?
>>
>> I noticed that they also fail if I simply don't run the group
>> addition/removal tests before them, in addition to enabling the
>> workaround. Also, they work if I put "run_shell" before each "assert"
>> and immediately exit the spawned shells, even without doing anything
>> in them.
>
> I really do not know why the run_shell helped. Maybe it
> forced pytest to initialize a new client memory cache context.
> I tried to do some tests with run_shell, but they did not
> work for me.
>
> Anyway, the failures were connected to the memory cache,
> and the memory cache is currently not reliable in these
> tests, so I did not investigate further why they failed.

Alright.
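
Just to make sure we mean the same thing, that would turn the quoted
format_interactive_conf into something like this (a sketch; per the
above, zero disables the memory cache):

    def format_interactive_conf(ldap_conn, schema):
        """Format an SSSD configuration with the memory cache disabled
           and the remaining caches refreshing in 4 seconds"""
        return \
            format_basic_conf(ldap_conn, schema, enum=True) + \
            unindent("""
                [nss]
                memcache_timeout                    = 0
                enum_cache_timeout                  = 4
                entry_negative_timeout              = 4
            """)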

>>
>>>> +            enum_cache_timeout                  = 4
>>>> +            entry_negative_timeout              = 4
>>>
>>> I would set the negative cache timeout to zero as well;
>>> see comments below.
>>
>> Replying below.
>>
>>>> +def test_add_remove_user(ldap_conn, blank_rfc2307):
>>>> +    """Test user addition and removal are reflected by SSSD"""
>>>> +    e = ldap_ent.user(ldap_conn.ds_inst.base_dn, "user", 1001, 2000)
>>>> +    time.sleep(2)
>>>
>>> What is the purpose of this sleep? It does not seem to be
>>> necessary. You use it in all the other tests as well, so
>>> I guess it had some meaning (but I tried removing them and
>>> the tests passed for me without problems).
>>
>> This puts test actions and checks in the middle of the 4-second
>> cache timeout intervals, so they are more reliable and time drift
>> doesn't affect them as much.
>>
>> E.g.:
>>
>>      0 - sssd start
>>      1 -
>>      2 - add user, check it's not yet present
>>      3 -
>>      4 - cache expiry/purging
>>      5 -
>>      6 - check user added
>>      7 -
>>      8 - cache expiry/purging
>>
>> etc.
>>
>> IIRC, I got 400+ perfect runs before the first failure occurred with this -
>> better than other schemes.
>
> Hmmm... maybe this was because SSSD was not initialized (fully
> started) when the test was run. In that case it is a problem
> of the fixture that starts SSSD and should be solved inside
> the fixture (the sleep() should be added there).

IIRC it wasn't always failing on the first test, which rules out an
initialization issue. Besides, IIRC, about two years ago we worked on
ensuring SSSD is fully initialized and ready to serve requests by the
time service startup completes. Red Hat's downstream tests rely on that.

However, the initial delay might be better moved to the fixture. Will try to
do that and see how it fits.
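
For instance, something like this (a sketch; the create_* helper names
are my assumption here, cf. create_sssd_cleanup below):

    @pytest.fixture
    def blank_rfc2307(request, ldap_conn):
        """Start SSSD with a blank RFC 2307 LDAP tree"""
        conf = format_interactive_conf(ldap_conn, SCHEMA_RFC2307)
        create_conf_fixture(request, conf)
        create_sssd_fixture(request)
        # Land the test actions in the middle of the first 4-second
        # cache interval, instead of sleeping at the start of each test
        time.sleep(2)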

>>>> +    # Add the user
>>>> +    ent.assert_passwd(ent.contains_only())
>>>> +    ldap_conn.add_s(*e)
>>>> +    ent.assert_passwd(ent.contains_only())
>>>
>>> I would avoid testing the negative cache outside tests
>>> that are dedicated to the negative cache. We had this same
>>> pattern in our C code tests and it timed out when the CI
>>> was under heavy load. We can add dedicated tests
>>> for the negative cache in the CI later, with a timeout big
>>> enough to pass even under heavy CI load.
>>>
>>> So please remove the negative cache testing from everywhere
>>> in these tests.
>>
>> I'm OK with minimizing the tests and not exercising some caching
>> mechanisms. I understand it will make the tests more reliable.
>>
>> However, my intention was to test the full end-user functionality,
>> i.e. what actually matters to users. Users don't really care about
>> caches; they just want everything to work fast and reliably. They
>> just want their LDAP changes propagated. For that matter, I only
>> touch the cache timeouts to make the tests run in a reasonable time.
>> I don't disable them, because users will probably have them enabled
>> as well.
>>
>> We can test all the cache mechanisms separately, but we will still
>> have to test them working together. Can we do that? Can we have these
>> particular tests do that? Or is it too hard/impossible?
>>
>> If we can do that, how would you like to see them?
>>
>> If not, I'll just disable memory cache and remove negative cache testing as
>> you request.
>>
>> Or do I misunderstand the idea behind this?
>
> I understand your point, but the negative cache IMO just slows
> down the tests. If we disable it, we still actually use it
> under the hood, it just times out immediately (so we trigger
> the same code path that most users will trigger in their
> environment). Users will usually not add and delete users
> in such a short time, so using the negative cache does not get
> us closer to real cases; it really is just a test scenario.
>
> The main reason why I would like to have separate tests for
> the negative cache (which I do not want you to add now)
> is that in order to test it in the CI, we need a higher timeout
> value for the pattern:
> test_if_user_exists (does not)
> add_user
> test_if_user_exists (does not - negative cache hit)
>
> This is the pattern that you use, and we had problems
> with this pattern in our C unit tests (which we solved
> by increasing the timeout). In Python we may need
> even higher values than in C.

Now, I think I got confused here. I'm not actually testing the negative
cache, but only the fact that changes to LDAP don't appear before the
caches expire. Since the negative cache timeout equals the timeouts of
the positive caches, there are really no negative cache-specific tests.
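
In other words, the second assertion in that pattern checks cached
absence in general, not the negative cache in particular (compare the
quoted test code above):

    ent.assert_passwd(ent.contains_only())  # the user is not there yet
    ldap_conn.add_s(*e)                     # add the user in LDAP
    # Still absent: the negative and the positive caches expire at the
    # same 4-second interval, so nothing here is negative cache-specific
    ent.assert_passwd(ent.contains_only())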

That said, I can still remove the tests you mention in your patch if you
think we should remove them. However, just for the record, I didn't see
any issues with them.

Another thing is, I couldn't make the tests work with those sleep(4)
calls removed, as you did in your patch up-thread. In fact, all
*add_remove* tests failed except add_remove_user_memcache_mix (the
delayed results are not checked there).

The problem is that the assertions following these delays check
enumeration, and the enumeration timeout is still 4 seconds, so we need
to wait for it. For that matter, removing the tests you mention doesn't
have an impact on the execution time.

I'm not quite sure how you made it work. Was some other change omitted from
your patch accidentally? I'm talking about changes like this:

  def create_sssd_cleanup(request):
@@ -462,12 +464,9 @@ def user_and_groups_rfc2307_bis(request, ldap_conn):
  def test_add_remove_user(ldap_conn, blank_rfc2307):
      """Test user addition and removal are reflected by SSSD"""
      e = ldap_ent.user(ldap_conn.ds_inst.base_dn, "user", 1001, 2000)
-    time.sleep(2)
      # Add the user
      ent.assert_passwd(ent.contains_only())
      ldap_conn.add_s(*e)
-    ent.assert_passwd(ent.contains_only())
-    time.sleep(4)
      ent.assert_passwd(ent.contains_only(dict(name="user", uid=1001)))
      # Remove the user
      ldap_conn.delete_s(e[0])

All in all, setting memcache_timeout to zero and commenting out the
memcache file removal was enough to make *all* tests work fine.
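
That is, roughly the following on top of the current tests (a sketch;
the file-removal loop and the MCACHE_PATH name are my paraphrase, not
the actual code):

    -            memcache_timeout                    = 4
    +            memcache_timeout                    = 0

    -        for f in os.listdir(MCACHE_PATH):
    -            os.unlink(os.path.join(MCACHE_PATH, f))
    +        # for f in os.listdir(MCACHE_PATH):
    +        #     os.unlink(os.path.join(MCACHE_PATH, f))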

>>>> +            conf = \
>>>> +                format_basic_conf(ldap_conn, SCHEMA_RFC2307, enum=True) + \
>>>> +                unindent("""
>>>> +                    [nss]
>>>> +                    filter_users            = {filter_users_str}
>>>> +                    filter_users_in_groups  = {filter_users_in_groups}
>>>
>>> Instead of generating the conf files in nested for loops, it would be
>>> better to split the test into multiple smaller tests where each has a
>>> different config file fixture (instead of void_conf and void_sssd).
>>> This may seem like code duplication, but I believe it will result in
>>> more readable code.
>>
>> I would really like to do that, but the problem is we will either need
>> to copy-paste a ton of tests and fixtures, or use py.test's convoluted
>> means of sharing parameters between fixtures and tests, which will
>> produce much more complicated code than this. At least, that is my
>> impression after researching it.
>>
>> Do you have any particular design for this in mind?
>
> No, I don't. I think copy-pasting is OK in this case. The
> copy-pasted functions will not be big and will all be in one
> place.

Alright, I'll try to do that and we'll see how it goes.
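
E.g. one split-out test could look something like this (a sketch; the
filter values are illustrative and the create_* helpers are assumed
from this module):

    @pytest.fixture
    def filter_users_rfc2307(request, ldap_conn):
        """SSSD with filter_users set and two users in LDAP"""
        for name, uid in (("user1", 1001), ("user2", 1002)):
            ldap_conn.add_s(*ldap_ent.user(ldap_conn.ds_inst.base_dn,
                                           name, uid, 2000))
        conf = \
            format_basic_conf(ldap_conn, SCHEMA_RFC2307, enum=True) + \
            unindent("""
                [nss]
                filter_users = user2
            """)
        create_conf_fixture(request, conf)
        create_sssd_fixture(request)

    def test_filter_users(ldap_conn, filter_users_rfc2307):
        """Test that users from filter_users are not visible via NSS"""
        ent.assert_passwd(ent.contains_only(dict(name="user1", uid=1001)))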

>>>> Add a memcache invalidation failure integration test. It fails as is,
>>>> and passes if ldap_enumeration_refresh_timeout in the first half is set
>>>> to e.g. 30. The sss_nss_check_header function was observed to always
>>>> return 0 for the second half of the test, when it was failing.
>>>
>>> This test passes for me with the memcache workaround. I guess we do
>>> not need it then. The failure can be reproduced by removing the
>>> workaround (which we will do once we fix the root cause).
>>
>> Do you mean we should not merge it, i.e. remove it from the patchset?
>> Or should we add it along with the workaround, so that after we fix
>> the failure and remove the workaround it stays on the lookout?
>
> I wanted to remove the test completely, but maybe it can
> serve as a regression test. Can you add the patch as an
> attachment to ticket #2726? We can decide later whether we
> want to use it or not; still, I would like to push it out
> of this thread.

Alright, I will try to remember to do that when we merge the other tests
and I can rebase it.

Nick

