[SSSD] [PATCH v5] Add basic support for CI test execution

Michal Židek mzidek at redhat.com
Mon Sep 1 19:38:09 UTC 2014


On 09/01/2014 07:44 PM, Nikolai Kondrashov wrote:
> On 09/01/2014 07:48 PM, Michal Židek wrote:
>>> My response and plan here would be similar to the Valgrind
>>> case. Only here we may perhaps disable certain error classes for
>>> a start, or use Clang augmentation in the code, making it ignore
>>> the difficult issues, but in the end we should arrive at the same
>>> binary status.
>>>
>>
>> About Valgrind and Clang, I think you underestimate the amount of
>> time that is in front of us before we get to the binary PASS/FAIL
>> state.
>
> Perhaps, but I'd like to try it. If it turns out so, with Valgrind we
> can still use suppressions and I'll try finding something similar for
> Clang.
>
>> Until then, these checks are completely useless and only take time
>>  (which is one of the reasons why I wanted to be able to
>> disable/enable specific steps).
>
> As automatic checks they are useless, but they're still providing an
>  easy way to run Valgrind and Clang to help people fix those issues.
>
> Yes, I understand why you'd want to skip them and I expect there will
> be more tests people would like to skip or run alone. Perhaps we can
> simply move both Valgrind and Clang to the "rigorous" test set for
> now.

As for Valgrind, if the problem turns out to be memory leaks,
then we can simply turn off the check for memory leaks. We have
a different mechanism to check for leaks in the unit tests. So in
the case of Valgrind we might get to the binary state relatively
fast, and the numbers will not be needed.
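
Just to illustrate what I mean (a rough sketch only; the binary name
and suppression file below are placeholders, not something from the
patch), the Valgrind step could then run along these lines:

    # Sketch: skip leak checking (the unit tests cover leaks already)
    # and fail only on the remaining errors; file names are placeholders.
    valgrind --leak-check=no \
             --error-exitcode=1 \
             --suppressions=sssd.supp \
             ./some_test_binary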

But in the case of Clang's static analysis, which produces a lot of
false positives, displaying the number of issues is probably the
only option to quickly check whether new issues were introduced.
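
For example, the counting could look roughly like this (a sketch
assuming scan-build is used; the baseline file and the build command
are made up for the example):

    # Rough sketch: count the HTML reports produced by scan-build and
    # compare them against a stored baseline; paths are placeholders.
    scan-build -o clang-reports make
    count=$(find clang-reports -name 'report-*.html' | wc -l)
    baseline=$(cat clang-baseline 2>/dev/null || echo 0)
    if [ "$count" -gt "$baseline" ]; then
        echo "clang analysis: $count issues, baseline is $baseline" >&2
        exit 1
    fi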

>
>> The number would tell us if something changed from the past run or
>> not (and would make these steps less useless). It would be a
>> valuable improvement IMO.
>
> It could be, even though we won't be able to tell if one failure
> disappeared and another one appeared. But I'd like to try for
> something better first.
>
>> Also I do not think disabling a specific category of tests in the
>> static analyser is a good idea.
>
> Yes, it might be dangerous. Perhaps a failure count will be safer
> than that.
>
> BTW, did you notice how coverage percentage is verified? What do you
> think about that?
>

When it comes to the mechanism of how it is tested, I think it is
OK. I am not sure about the exact percentage though. But this can
be tweaked later.
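
Just for reference, the kind of check I have in mind is something
like the following (not the patch's actual code; I am assuming lcov
output here and the threshold is an arbitrary placeholder):

    # Sketch: compare lcov's line coverage against a minimum
    # percentage; the threshold and the .info file are placeholders.
    min_coverage=70
    percent=$(lcov --summary coverage.info 2>&1 \
              | awk '/lines\.+:/ { print int($2) }')
    if [ "$percent" -lt "$min_coverage" ]; then
        echo "line coverage ${percent}% is below ${min_coverage}%" >&2
        exit 1
    fi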


>>>> 3. More options: I would like to have the possibility to
>>>> disable/enable specific parts. The -e/-m/-r flags are IMO not
>>>> sufficient (for example, as long as we do not fix 1 and 2, I
>>>> would like to disable/not enable these parts using some
>>>> argument)
>>>
>>> I would like that as well. That's why I would like to try using
>>> Epoxy - my Bash test framework
>>> (https://github.com/spbnick/epoxy), which allows just that and
>>> more.
>>
>> I am not familiar with epoxy, but having a better framework for
>> shell-script-based tests would be a nice thing. But I would like to
>> stick to programming languages our project already depends on
>> (python/C/shell). I see that epoxy depends on Lua. A Python
>> framework would be better for us IMO.
>
> The framework itself is written in Bash, only extended glob matching
> and some log processing is done in Lua. Lua knowledge is not needed
> when writing test suites. I don't think it will be very hard to
> rewrite that in Python, if necessary.
>
> Is it a problem of an extra dependency, or understanding source
> code?
>
> If it's the former, then I think I can fix it. If it's the latter,
> perhaps it doesn't need to be fixed as Lua parts are non-core logic
> and are clearly separated?
>

For me it was the problem of the extra dependency, but that does not
mean that if this one thing is solved I would accept epoxy. I would
need to look at it more closely and play with it a little more.
Right now I have only checked the dependencies and stopped there :)

Also, if we wanted some framework to run bash scripts, I am really
not sure that bash is the right language to write it in.
I really dislike longer code in bash; I think the language is
just not suitable for anything longer than a few dozen lines of code.

The current patch is balancing on the edge of my personal level of
acceptance when it comes to the size of the code and the language
used, but it is still acceptable. That is partially because I do not
think it will expand much in the future (we will mostly expand the
unit tests, which are not part of this script) and partially because
it is not so difficult to read (a.k.a. well written).

But if I wanted a framework to run tests in bash for SSSD,
I would probably start writing it in Python, simply because
it is more manageable.

OTOH, people who write code in bash more often than I do might
have a different opinion on this.

But the question here is: do we need a framework for this? I think
it would be nice, and we may think about adopting/implementing one
in the future if the script we have now turns out not to be
sufficient, BUT I think that if we fix the few issues I mentioned
in this thread we will be happy (or at least mostly happy) with
what we have and will not need to expand its functionality much
in the future ( == we do not need any framework).
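
For what it's worth, one of those fixes, being able to switch
individual steps off, should not need a framework at all; a few
lines of plain shell in the current script would do (the option
names and step functions below are invented for the example):

    # Hypothetical sketch of per-step switches for the CI script;
    # option names and step functions are made up for illustration.
    run_valgrind=true
    run_clang=true
    for arg in "$@"; do
        case "$arg" in
            --no-valgrind) run_valgrind=false ;;
            --no-clang)    run_clang=false ;;
        esac
    done
    if $run_valgrind; then
        valgrind_step
    fi
    if $run_clang; then
        clang_analysis_step
    fi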

If I understood the meeting we had last week correctly, this
script covers tiers 1 and 2 of the 3-tiered test suite we were
talking about, and the framework/tools used in the 3rd tier will
be different (probably adopted from IPA). That is why this script
already does almost everything we want from it.

Michal


