[SSSD] [PATCH] test_memory_cache: Wait short time after cache invalidation

Michal Židek mzidek at redhat.com
Thu Aug 13 11:31:36 UTC 2015


On 08/13/2015 07:05 AM, Lukas Slebodnik wrote:
> On (12/08/15 14:35), Michal Židek wrote:
>> On 08/12/2015 06:21 AM, Lukas Slebodnik wrote:
>>> On (11/08/15 14:43), Michal Židek wrote:
>>>> On 08/11/2015 01:41 PM, Lukas Slebodnik wrote:
>>>>> On (10/08/15 13:32), Michal Židek wrote:
>>>>>> On 08/10/2015 12:26 PM, Lukas Slebodnik wrote:
>>>>>>> On (10/08/15 10:41), Lukas Slebodnik wrote:
>>>>>>>> On (07/08/15 21:19), Michal Židek wrote:
>>>>>>>>> On 08/07/2015 06:16 AM, Lukas Slebodnik wrote:
>>>>>>>>>>
>>>>>>>>>> +def wait_till_nss_responder_invalidate_cache():
>>>>>>>>>> +    # 1 second (200 * 0.005) should be enough time for nss responder
>>>>>>>>>> +    for _ in range(200):
>>>>>>>>>> +        if os.path.isfile(config.MCACHE_PATH + "/clear_mc_flag"):
>>>>>>>>>> +            time.sleep(.005)
>>>>>>>>>> +        else:
>>>>>>>>>> +            return
>>>>>>>>>> +
>>>>>>>>>> +    assert False, "nss responder didn't invalidate memory cache within second"
>>>>>>>>>
>>>>>>>>> Grammar nazi nitpick :) : Missing "a" -> within a second
>>>>>>>>> But the nitpick is not relevant due to the following:
>>>>>>>>>
>>>>>>>>> I would give it at least 5 seconds for 2 reasons:
>>>>>>>>> a) if it ends sooner nothing happens, everything is fine
>>>>>>>>> b) if it ends later (CI machine under heavy load
>>>>>>>>>    makes even this possible) it breaks the test.
>>>>>>>>>
>>>>>>>> I realized it might be better to implement it directly in the utility
>>>>>>>> sss_cache. I didn't increase the time to 5 seconds because
>>>>>>>> it would be a long time for users
>>>>>>>> and they might decide to cancel it (ctrl-c).
>>>>>>>>
>>>>>>> and now with patches
>>>>>>>
>>>>>>> LS
>>>>>>>
>>>>>>
>>>>>>         ret = wait_till_nss_responder_invalidate_cache();
>>>>>>         if (ret != EOK) {
>>>>>>             ERROR("The fast memory caches was...
>>>>>>                   "responder.\n");
>>>>>>         }
>>>>>>     }
>>>>>>
>>>>>> We should not ignore the error here. If the function
>>>>>> returns EAGAIN, we should propagate EAGAIN to the
>>>>>> sss_memcache_clear_all() caller and let it call
>>>>>> sss_memcache_clear_all() again a limited
>>>>>> number of times. My proposal is to shorten the
>>>>>> max_wait to 500 000 with 10 retries.
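
(In code, the change I had in mind was roughly this; a sketch only, not
the actual patch:)

        ret = wait_till_nss_responder_invalidate_cache();
        if (ret != EOK) {
            ERROR("...");     /* keep the warning for the user */
            return ret;       /* but propagate EAGAIN, do not swallow it */
        }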
>>>>>>
>>>>> It is a theoretical and very (x107) unlikely situation. It can happen
>>>>> even nowadays, but nobody has complained yet. It is not worth
>>>>> complicating the code for.
>>>>>
>>>>> The only difference between the current version of sssd and this patch
>>>>> is that the user will be informed about this theoretical and very (x107)
>>>>> situation.
>>>> You are missing "unlikely" before "situation" here.
>>>>
>>>>>
>>>>> I'm willing to change the error message because it could be printed to
>>>>> users in other situations. Feel free to propose a better one.
>>>>>
>>>>> LS
>>>>
>>>> I'm willing to accept your patches if the attached patch
>>>> is accepted as well, so that this unlikely-to-happen bug
>>>> will not occur.
>>>>
>>>> I do not think that the fact that some situation is unlikely
>>>> to happen is a reason to ignore it,
>>> It's not ignored. The error message is printed.
>>>
>>>> From 6608ddc5789e98cea284229a79cd615a142debd8 Mon Sep 17 00:00:00 2001
>>>> From: =?UTF-8?q?Michal=20=C5=BDidek?= <mzidek at redhat.com>
>>>> Date: Tue, 11 Aug 2015 13:43:02 +0200
>>>> Subject: [PATCH] sss_cache: Try to remove mc files if last attempt failed
>>>>
>>>> When sss_cache checks whether the nss responder is running
>>>> and the answer is yes, but nss stops right after
>>>> that, the memory cache files may be left undeleted.
>>>>
>>>> So we should try at least one more time to remove
>>>> the files.
>>>> ---
>>> NACK
>>>
>>> Attached is an alternative solution for the issues in git master.
>>>
>>> LS
>>>
>>
>> Seriously? Is that your proposed solution?
>>
> Yes, it's better to remove the test rather than
> have intermittent failures.

You are twisting the plot here. Removing the test
would only be an option if we had no idea how to
fix it. Which was definitely not the case.

>
>> It is better to go with the patches you sent before
>> (not the one that removes the tests, that would be
>> very wrong) rather than trying to continue this
>> discussion.
>>
> There's nothing to continue in this discussion.
>
> Q: Why?
> You proposed adding unnecessary code complication to
> work around the race condition. It will not fix the race condition,
> because the same issue can happen next time.

It was not unnecessary, and it can barely be called a
complication. As you said, it is very unlikely to happen,
but it definitely can happen.

>
>
> Q: How is it possible?
> Because the very (x107) unlikely race condition is a typical TOCTOU [1].
> In theory, you can hit the same situation when somebody restarts sssd.
> My patches added a 1 second delay and you proposed to try one more time.
> It is even less likely that it will happen, but sssd can be restarted
> after 1 second as well and you will hit the same race condition next time.
> So we would add unnecessarily complicated code which does not solve anything.
> The most important reason not to use your proposal is that the
> patches for ticket #2748 did not introduce the race condition.
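
(For reference, the window Lukas describes is the classic check/use gap.
A schematic sketch; nss_responder_is_running() is a made-up name, not a
real sssd symbol:)

    /* Schematic TOCTOU window in sss_cache (illustrative only) */
    if (nss_responder_is_running()) {               /* time of check */
        /* sssd can be stopped exactly here; the invalidation       */
        /* request is then never picked up and the mc files stay    */
        wait_till_nss_responder_invalidate_cache(); /* time of use  */
    }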
>
>
> Q: So does it mean that the race condition is already in master?
> Yes.

I do know it is in master. What is the point of this?

> It was introduced by commit 33cbb789ff71be5dccbb4a0acd68814b0d53da34,
> and fixing it will not solve ticket #2748.
>
> commit 33cbb789ff71be5dccbb4a0acd68814b0d53da34
> Author:     Michal Zidek <mzidek at redhat.com>
> AuthorDate: Fri Oct 26 17:36:51 2012 +0200
> Commit:     Jakub Hrozek <jhrozek at redhat.com>
> CommitDate: Tue Nov 6 12:29:28 2012 +0100
>
>      sss_cache: Remove fastcache even if sssd is not running.
>
>      https://fedorahosted.org/sssd/ticket/1584
>
>
> Q: But it's a race condition and we should fix it, no?
> As I already wrote, it's a typical TOCTOU. The problematic
> code is neither in the library nor in the daemon part of sssd.
> A proper solution without atomic functions would be very complicated;
> we do not have an atomic function to obtain locks or create files.
> The full solution would require a complicated analysis of all corner
> cases. The ideal would be to use finite state automata (FSA) or another
> formal method to prove that the race condition is fixed. It's not a solution
> to retry one more time after a second; it will not fix the race condition and
> the code will become complicated. The proper solution would be even more
> complicated, and an FSA converted to a programming language is not very
> readable. Such a solution is really not worth it in a command line utility.
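
(Side note: POSIX does offer one atomic create-or-fail primitive; a sketch
follows. As Lukas says, this alone would not cover all the corner cases.)

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Adjust to the real MCACHE_PATH; this path is a guess. */
    #define MC_FLAG_PATH "/var/lib/sss/mc/clear_mc_flag"

    /* Atomically "create the flag file, or learn that it already
     * exists"; illustrative only, not a full fix for the race. */
    static int create_clear_flag(void)
    {
        int fd = open(MC_FLAG_PATH, O_CREAT | O_EXCL | O_WRONLY, 0600);
        if (fd == -1) {
            return errno;   /* EEXIST: another process got there first */
        }
        close(fd);
        return 0;
    }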
>

The change I proposed is simple. It basically goes like this
(a sketch follows the list):
1. You hit the race condition -> try again a few times.
2. If you try at least one more time and you are not constantly
restarting sssd, the problem is solved and the caches will be
removed on the second attempt.
3. Even if you are constantly restarting sssd, the chance of
hitting the "bad condition" several times in a row (I had 3 tries
in the patch) is much less likely. In this case you can call it
hardening.
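
In code it fits in a dozen lines. A sketch only; apart from
sss_memcache_clear_all(), which this thread already discusses, the
names and numbers here are illustrative:

    #include <errno.h>
    #include <unistd.h>

    #define EOK 0                    /* as in sssd's util headers */
    #define CLEAR_MC_MAX_TRIES 3     /* the 3 tries from my patch */

    /* Assumed to return EAGAIN when the responder check raced
     * with an sssd restart. */
    extern int sss_memcache_clear_all(void);

    static int clear_mc_with_retries(void)
    {
        int ret;

        for (int i = 0; i < CLEAR_MC_MAX_TRIES; i++) {
            ret = sss_memcache_clear_all();
            if (ret != EAGAIN) {
                return ret;          /* EOK or a hard failure */
            }
            usleep(500000);          /* shortened max_wait: 0.5 s */
        }

        return EAGAIN;               /* raced on every attempt */
    }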

>
> Q: OK. You mentioned that fixing the race condition will not solve #2748.
>    How is it possible?
> Because the very (x107) unlikely race condition did not cause the
> intermittent failures in the test. It could not have. Stopping sssd and
> invalidating the cache were not executed in parallel, and moreover the
> failures are caused by invalidating the cache before halting sssd.

I know it was unrelated to the tests we have so far.

> The nss responder did not have time to invalidate the cache,
> and a one second delay should be enough time for cache invalidation in the
> nss responder. Otherwise there is an issue with the nss responder and it
> should be fixed. That's the reason for the error message logged by the
> utility sss_cache.
>
>
>> If someone wants to review the small patch I
>> sent as a separate patch, feel free to do so. I
>> still think it should be fixed.
>>
> The patch was NACKed.
> Part of the explanation was provided offline two days ago
> and part of it was explained above.
>
> Short summary:
> * missing unit test for the race condition.

So if I find an issue in your patches during review,
I first have to create a unit test that will give
me a special right to raise my concerns. Great.

> * unnecessarily complicated code in a command line utility.

I do not think so.

> * proposed patch did not solve the race condition.

It does. Of course, if you constantly restart sssd and
call sss_cache at the same time, you may "theoretically" hit
the same condition with each restart. But since the
issue is unlikely to happen, I think the proposal I
made is reasonable and sufficient hardening.

> * proposed patch did not solve #2748. It's unrelated to #2748

It is related to the review of the solution of #2748. I simply
wanted you to change the behavior in a part of the code you added
as the solution, because the current behavior is not
ideal in this "unlikely to happen" situation.

>
>
>> Anyway, I am sending Lukas's patches
>> again so it is easier for Jakub to push.
>>
> I'm glad you realized my patches fix ticket #2748.

I knew this from the very beginning. I just wanted you
to add additional hardening for the "very unlikely to happen"
race condition that I realized we have in the code as well.
But you failed to do so completely and even went as far as
proposing something that goes completely against our efforts.

> As I already wrote, feel free to propose a better error message,
> because it's part of the translated strings.

Feel free to do so yourself. I am no longer reviewing these
patches. If you had added the changes I wanted, the error message
would be almost irrelevant.

>
>> They are both ACKed.
>>
> Thank you for review.

You are welcome.

>
>> CI passed on all machines (I disabled the problematic
>> tests):
>> http://sssd-ci.duckdns.org/logs/job/21/36/summary.html
>>
>> I also ran the local CI a couple of times and it
>> always passed.
>>
> BTW: The tests passed on my local machine even without the patches.
> I tried 100 times in a row.

Yes, this is very machine dependent. It was failing often
on one of my local VMs, which is where I tested it.

>
> [1] https://en.wikipedia.org/wiki/Time_of_check_to_time_of_use


