[SSSD] [PATCH v2] CI: Allow disabling distro-(in)dependent tests

Nikolai Kondrashov Nikolai.Kondrashov at redhat.com
Fri Sep 12 12:58:40 UTC 2014


On 09/12/2014 02:36 PM, Lukas Slebodnik wrote:
> On (12/09/14 14:22), Nikolai Kondrashov wrote:
>> On 09/12/2014 01:47 PM, Michal Židek wrote:
>>> On 09/12/2014 12:05 PM, Nikolai Kondrashov wrote:
>>>> I understand, but distro-dependent/independent test running is not a target
>>>> in itself. The target is to have the CI servers available for routine use.
>>>>
>>>> Currently rigorous test runs typically take around an hour. Once Jakub goes
>>>> through pending and acknowledged patches and merges them in quick
>>>> succession, the CI servers become unavailable for most of the day, running
>>>> rigorous post-commit tests. Having people submit jobs for at least moderate
>>>> tests will also slow it down.
>>>>
>>>
>>> Maybe we could do something like Lukas proposed: increase the
>>> virtual machines' RAM and use tmpfs. Maybe the speed problem
>>> will disappear.
>>
>> I can try increasing RAM on all five CI hosts and enabling mock's tmpfs plugin
>> on them. How much physical memory would you like me to configure them with?
>>
> I would say 2 GiB should be enough (4x more than the current amount).
> If there is a problem we can increase it to 3 GiB.

I ran the rigorous tests with 2 GiB of RAM and the default tmpfs plugin
configuration.
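
For reference, enabling the plugin amounts to something like this in the mock
configuration (e.g. /etc/mock/site-defaults.cfg); the values here are only
illustrative, not necessarily what the CI hosts actually run with:

    # Enable mock's tmpfs plugin so the buildroot lives in RAM
    # instead of on disk.
    config_opts['plugin_conf']['tmpfs_enable'] = True
    config_opts['plugin_conf']['tmpfs_opts'] = {}
    # Only use tmpfs if the host has at least this much RAM:
    config_opts['plugin_conf']['tmpfs_opts']['required_ram_mb'] = 1024
    # Upper limit on the tmpfs size, and the mount point mode:
    config_opts['plugin_conf']['tmpfs_opts']['max_fs_size'] = '1g'
    config_opts['plugin_conf']['tmpfs_opts']['mode'] = '0755'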

The run time on each machine dropped to about 24 minutes, which is quite
close to the 20-minute maximum I got with the patch under discussion, and
could be acceptable. It also indicates, IMO, that the bottleneck was disk I/O.

However, it turned out that RHEL6 cannot install our dependencies on tmpfs:
http://sssd-ci.duckdns.org/logs/job/0/77/rhel6/ci-build-debug/ci-mock-fedora20.log

Probably due to this:
https://bugzilla.redhat.com/show_bug.cgi?id=648654

If we can find a quick workaround for this that neither increases the overall
run time nor requires yet another patch, I think it might be acceptable.
Otherwise, merging this patch would be preferable for the time being, IMHO.

Nick
