Hi everyone,
This fixes test-io execution with relative test dir.
Sincerely, Nick
On (15/04/14 20:00), Nikolai Kondrashov wrote:
Hi everyone,
This fixes test-io execution with relative test dir.
Sincerely, Nick
From 76b7d1ff6b1a4117a5e8917b29d8b2b916d1e439 Mon Sep 17 00:00:00 2001
From: Nikolai Kondrashov <Nikolai.Kondrashov@redhat.com>
Date: Tue, 15 Apr 2014 19:08:34 +0300
Subject: [PATCH 1/1] tests: Don't assume absolute test dir in test_io.c

Do not assume TEST_DIR path is absolute in test_io.c, assume it is the
current directory.

This fixes test-io execution with relative test dir path configured
with --with-test-dir.
---
 src/tests/cmocka/test_io.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/tests/cmocka/test_io.c b/src/tests/cmocka/test_io.c
index 266f2ce..453e8f7 100644
--- a/src/tests/cmocka/test_io.c
+++ b/src/tests/cmocka/test_io.c
@@ -40,7 +40,7 @@
 #include "util/util.h"
 #include "tests/common.h"

-#define FILE_PATH TEST_DIR"/test_io.XXXXXX"
+#define FILE_PATH "test_io.XXXXXX"
 #define NON_EX_PATH "non-existent-path"

 /* Creates a unique temporary file inside TEST_DIR and returns its path*/
@@ -63,7 +63,7 @@ static char *get_filepath(char path[])

 void setup_dirp(void **state)
 {
-    DIR *dirp = opendir(TEST_DIR);
+    DIR *dirp = opendir(".");
     if (dirp == NULL) {
         int err = errno;
         fprintf(stderr, "Could not open directory:'%s' [%s]\n",
--
1.9.1
This patch is useless.
The default value of TEST_DIR is "."
AC_DEFUN([WITH_TEST_DIR],
  [ AC_ARG_WITH([test-dir],
                [AC_HELP_STRING([--with-test-dir=PATH],
                                [Directory used for make check temporary files [$builddir]]
                               )
                ],
                [TEST_DIR=$withval],
                [TEST_DIR="."]
               )
    AC_SUBST(TEST_DIR)
    AC_DEFINE_UNQUOTED(TEST_DIR, "$TEST_DIR",
                       [Directory used for 'make check' temporary files])
  ])
LS
On 04/15/2014 09:42 PM, Lukas Slebodnik wrote:
On (15/04/14 20:00), Nikolai Kondrashov wrote:
From 76b7d1ff6b1a4117a5e8917b29d8b2b916d1e439 Mon Sep 17 00:00:00 2001
From: Nikolai Kondrashov <Nikolai.Kondrashov@redhat.com>
Date: Tue, 15 Apr 2014 19:08:34 +0300
Subject: [PATCH 1/1] tests: Don't assume absolute test dir in test_io.c

Do not assume TEST_DIR path is absolute in test_io.c, assume it is the
current directory.

This fixes test-io execution with relative test dir path configured
with --with-test-dir.
---
 src/tests/cmocka/test_io.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/tests/cmocka/test_io.c b/src/tests/cmocka/test_io.c
index 266f2ce..453e8f7 100644
--- a/src/tests/cmocka/test_io.c
+++ b/src/tests/cmocka/test_io.c
@@ -40,7 +40,7 @@
 #include "util/util.h"
 #include "tests/common.h"

-#define FILE_PATH TEST_DIR"/test_io.XXXXXX"
+#define FILE_PATH "test_io.XXXXXX"
 #define NON_EX_PATH "non-existent-path"

 /* Creates a unique temporary file inside TEST_DIR and returns its path*/
@@ -63,7 +63,7 @@ static char *get_filepath(char path[])

 void setup_dirp(void **state)
 {
-    DIR *dirp = opendir(TEST_DIR);
+    DIR *dirp = opendir(".");
     if (dirp == NULL) {
         int err = errno;
         fprintf(stderr, "Could not open directory:'%s' [%s]\n",
--
1.9.1
This patch is useless.
Wrong.
The default value of TEST_DIR is "."
Correct.
It would be nice to at least test and provide more substantial evidence, before embarrassing a developer and making him spend time defending his patch.
The default will always work as there is always a current directory. The path used in contrib/fedora/bashrc_sssd (/dev/shm), which some are probably using, works as well.
However, before running the actual tests, test_io.c invokes "tests_set_cwd", which chdir's to TEST_DIR. This breaks running with relative paths, as the tests try to access TEST_DIR below TEST_DIR.
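To illustrate, here is a minimal standalone sketch (not the actual SSSD test code; TEST_DIR and the chdir() call stand in for what tests_set_cwd() effectively does):

/* Hypothetical illustration: why a relative TEST_DIR fails once the
 * test has already chdir'd into it. */
#include <dirent.h>
#include <stdio.h>
#include <unistd.h>

#define TEST_DIR "test-dir"   /* as configured with --with-test-dir=test-dir */

int main(void)
{
    /* Roughly what tests_set_cwd() does before the tests run. */
    if (chdir(TEST_DIR) != 0) {
        perror("chdir(TEST_DIR)");
        return 1;
    }

    /* The CWD now *is* test-dir, so opening "test-dir" again actually
     * looks for test-dir/test-dir, which normally does not exist. */
    DIR *dirp = opendir(TEST_DIR);
    if (dirp == NULL) {
        perror("opendir(TEST_DIR) after chdir(TEST_DIR)");
        return 1;
    }
    closedir(dirp);
    return 0;
}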
Sincerely, Nick
On 04/16/2014 02:22 PM, Nikolai Kondrashov wrote:
On 04/15/2014 09:42 PM, Lukas Slebodnik wrote:
On (15/04/14 20:00), Nikolai Kondrashov wrote:
From 76b7d1ff6b1a4117a5e8917b29d8b2b916d1e439 Mon Sep 17 00:00:00 2001
From: Nikolai Kondrashov <Nikolai.Kondrashov@redhat.com>
Date: Tue, 15 Apr 2014 19:08:34 +0300
Subject: [PATCH 1/1] tests: Don't assume absolute test dir in test_io.c

Do not assume TEST_DIR path is absolute in test_io.c, assume it is the
current directory.

This fixes test-io execution with relative test dir path configured
with --with-test-dir.
---
 src/tests/cmocka/test_io.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/tests/cmocka/test_io.c b/src/tests/cmocka/test_io.c
index 266f2ce..453e8f7 100644
--- a/src/tests/cmocka/test_io.c
+++ b/src/tests/cmocka/test_io.c
@@ -40,7 +40,7 @@
 #include "util/util.h"
 #include "tests/common.h"

-#define FILE_PATH TEST_DIR"/test_io.XXXXXX"
+#define FILE_PATH "test_io.XXXXXX"
 #define NON_EX_PATH "non-existent-path"

 /* Creates a unique temporary file inside TEST_DIR and returns its path*/
@@ -63,7 +63,7 @@ static char *get_filepath(char path[])

 void setup_dirp(void **state)
 {
-    DIR *dirp = opendir(TEST_DIR);
+    DIR *dirp = opendir(".");
     if (dirp == NULL) {
         int err = errno;
         fprintf(stderr, "Could not open directory:'%s' [%s]\n",
--
1.9.1
This patch is useless.
Wrong.
The default value of TEST_DIR is "."
Correct.
It would be nice to at least test and provide more substantial evidence, before embarrassing a developer and making him spend time defending his patch.
Sorry, I felt a bit upset about "This patch is useless". I understand that my patch probably didn't explain enough. However, saying "this is unclear, please elaborate" would have been nicer.
Sincerely, Nick
On (16/04/14 14:39), Nikolai Kondrashov wrote:
On 04/16/2014 02:22 PM, Nikolai Kondrashov wrote:
On 04/15/2014 09:42 PM, Lukas Slebodnik wrote:
On (15/04/14 20:00), Nikolai Kondrashov wrote:
From 76b7d1ff6b1a4117a5e8917b29d8b2b916d1e439 Mon Sep 17 00:00:00 2001
From: Nikolai Kondrashov <Nikolai.Kondrashov@redhat.com>
Date: Tue, 15 Apr 2014 19:08:34 +0300
Subject: [PATCH 1/1] tests: Don't assume absolute test dir in test_io.c

Do not assume TEST_DIR path is absolute in test_io.c, assume it is the
current directory.

This fixes test-io execution with relative test dir path configured
with --with-test-dir.
---
 src/tests/cmocka/test_io.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/tests/cmocka/test_io.c b/src/tests/cmocka/test_io.c
index 266f2ce..453e8f7 100644
--- a/src/tests/cmocka/test_io.c
+++ b/src/tests/cmocka/test_io.c
@@ -40,7 +40,7 @@
 #include "util/util.h"
 #include "tests/common.h"

-#define FILE_PATH TEST_DIR"/test_io.XXXXXX"
+#define FILE_PATH "test_io.XXXXXX"
 #define NON_EX_PATH "non-existent-path"

 /* Creates a unique temporary file inside TEST_DIR and returns its path*/
@@ -63,7 +63,7 @@ static char *get_filepath(char path[])

 void setup_dirp(void **state)
 {
-    DIR *dirp = opendir(TEST_DIR);
+    DIR *dirp = opendir(".");
     if (dirp == NULL) {
         int err = errno;
         fprintf(stderr, "Could not open directory:'%s' [%s]\n",
--
1.9.1
This patch is useless.
Wrong.
The default value of TEST_DIR is "."
Correct.
It would be nice to at least test and provide more substantial evidence, before embarrassing a developer and making him spend time defending his patch.
Sorry, I felt a bit upset about "This patch is useless". I understand that my patch probably didn't explain enough. However, saying "this is unclear, please elaborate" would have been nicer.
If configure is run without the argument "--with-test-dir", the test will pass.
If configure is run with the argument "--with-test-dir=/tmp/dir", the test will pass.
result: This patch is useless
I can see only one problem. We do not have a check whether the value of --with-test-dir is a directory.
LS
On 04/16/2014 02:50 PM, Lukas Slebodnik wrote:
If configure is run without the argument "--with-test-dir", the test will pass.
If configure is run with the argument "--with-test-dir=/tmp/dir", the test will pass.
result: This patch is useless
The test will fail if you run it with "--with-test-dir=test-dir".
I can see only one problem. We do not have a check whether the value of --with-test-dir is a directory.
Where should it be done?
On Wed, Apr 16, 2014 at 02:39:06PM +0300, Nikolai Kondrashov wrote:
Sorry, I felt a bit upset about "This patch is useless". I understand that my patch probably didn't explain enough. However, saying "this is unclear, please elaborate" would have been nicer.
I think the wording can be attributed to the language barrier a bit. I don't think anyone tried to embarrass you, Nick.
On the other hand, I agree the wording was harsh. If you were seeing a problem and you have a patch that fixes the problem for you, then the patch is most certainly not useless; we just need to see if it's the right way to tackle that problem.
I'm still not sure I understand where exactly the root cause is. Can you give me an example of how I can configure sssd to reproduce the breakage locally?
On 04/16/2014 03:00 PM, Jakub Hrozek wrote:
On Wed, Apr 16, 2014 at 02:39:06PM +0300, Nikolai Kondrashov wrote:
Sorry, I felt a bit upset about "This patch is useless". I understand that my patch probably didn't explain enough. However, saying "this is unclear, please elaborate" would have been nicer.
I think the wording can be attributed to the language barrier a bit. I don't think anyone tried to embarrass you, Nick.
On the other hand, I agree the wording was harsh. If you were seeing a problem and you have a patch that fixes the problem for you, then the patch is most certainly not useless; we just need to see if it's the right way to tackle that problem.
Thanks, Jakub.
I'm still not sure I understand where exactly the root cause is. Can you give me an example of how I can configure sssd to reproduce the breakage locally?
My latest reply to Lukas has the answer, but seems to be delayed somewhere.
So, use "--with-test-dir=test-dir", don't forget to create the directory itself. The test will fail then.
Sincerely, Nick
On (16/04/14 15:26), Nikolai Kondrashov wrote:
On 04/16/2014 03:00 PM, Jakub Hrozek wrote:
On Wed, Apr 16, 2014 at 02:39:06PM +0300, Nikolai Kondrashov wrote:
Sorry, I felt a bit upset about "This patch is useless". I understand that my patch probably didn't explain enough. However, saying "this is unclear, please elaborate" would have been nicer.
I think the wording can be attributed to the language barrier a bit. I don't think anyone tried to embarrass you, Nick.
On the other hand, I agree the wording was harsh. If you were seeing a problem and you have a patch that fixes the problem for you, then the patch is most certainly not useless; we just need to see if it's the right way to tackle that problem.
Thanks, Jakub.
I'm still not sure I understand where exactly the root cause is. Can you give me an example of how I can configure sssd to reproduce the breakage locally?
My latest reply to Lukas has the answer, but seems to be delayed somewhere.
So, use "--with-test-dir=test-dir", don't forget to create the directory itself. The test will fail then.
In my opinion, the directory "${test-dir}" should not be created by the configure script. We can add a check for validation of the argument: test -d ${TEST_DIR}
LS
On (16/04/14 14:38), Lukas Slebodnik wrote:
On (16/04/14 15:26), Nikolai Kondrashov wrote:
On 04/16/2014 03:00 PM, Jakub Hrozek wrote:
On Wed, Apr 16, 2014 at 02:39:06PM +0300, Nikolai Kondrashov wrote:
Sorry, I felt a bit upset about "This patch is useless". I understand that my patch probably didn't explain enough. However, saying "this is unclear, please elaborate" would have been nicer.
I think the wording can be attributed to the language barrier a bit. I don't think anyone tried to embarrass you, Nick.
On the other hand, I agree the wording was harsh. If you were seeing a problem and you have a patch that fixes the problem for you, then the patch is most certainly not useless; we just need to see if it's the right way to tackle that problem.
Thanks, Jakub.
I'm still not sure I understand where exactly the root cause is. Can you give me an example of how I can configure sssd to reproduce the breakage locally?
My latest reply to Lukas has the answer, but seems to be delayed somewhere.
So, use "--with-test-dir=test-dir", don't forget to create the directory itself. The test will fail then.
In my opinion, the directory "${test-dir}" should not be created by the configure script. We can add a check for validation of the argument: test -d ${TEST_DIR}
My fingers were very fast.
You should fix the problem and not the symptoms.
LS
On 04/16/2014 03:39 PM, Lukas Slebodnik wrote:
On (16/04/14 14:38), Lukas Slebodnik wrote:
On (16/04/14 15:26), Nikolai Kondrashov wrote:
On 04/16/2014 03:00 PM, Jakub Hrozek wrote:
On Wed, Apr 16, 2014 at 02:39:06PM +0300, Nikolai Kondrashov wrote:
Sorry, I felt a bit upset about "This patch is useless". I understand that my patch probably didn't explain enough. However, saying "this is unclear, please elaborate" would have been nicer.
I think the wording can be attributed to the language barrier a bit. I don't think anyone tried to embarrass you, Nick.
On the other hand, I agree the wording was harsh. If you were seeing a problem and you have a patch that fixes the problem for you, then the patch is most certainly not useless; we just need to see if it's the right way to tackle that problem.
Thanks, Jakub.
I'm still not sure I understand where exactly the root cause is. Can you give me an example of how I can configure sssd to reproduce the breakage locally?
My latest reply to Lukas has the answer, but seems to be delayed somewhere.
So, use "--with-test-dir=test-dir", don't forget to create the directory itself. The test will fail then.
In my opinion, the directory "${test-dir}" should not be created by the configure script. We can add a check for validation of the argument: test -d ${TEST_DIR}
My fingers were very fast.
You should fix the problem and not the symptoms.
Agreed.
However, the problem lies entirely in the test, and I'm trying to solve it.
The test first changes into TEST_DIR, then tries to open TEST_DIR. This wouldn't work with a relative path such as "test-dir", as there is not likely to be "test-dir" within "test-dir".
It also tries to create a temporary file in TEST_DIR, while already being in TEST_DIR. This also doesn't work with "test-dir", for the same reason.
All of the above with TEST_DIR existing at the time of running the test.
Sincerely, Nick
On (16/04/14 15:50), Nikolai Kondrashov wrote:
On 04/16/2014 03:39 PM, Lukas Slebodnik wrote:
On (16/04/14 14:38), Lukas Slebodnik wrote:
On (16/04/14 15:26), Nikolai Kondrashov wrote:
On 04/16/2014 03:00 PM, Jakub Hrozek wrote:
On Wed, Apr 16, 2014 at 02:39:06PM +0300, Nikolai Kondrashov wrote:
Sorry, I felt a bit upset about "This patch is useless". I understand that my patch probably didn't explain enough. However, saying "this is unclear, please elaborate" would have been nicer.
I think the wording can be attributed to the language barrier a bit. I don't think anyone tried to embarrass you, Nick.
On the other hand, I agree the wording was harsh. If you were seeing a problem and you have a patch that fixes the problem for you, then the patch is most certainly not useless; we just need to see if it's the right way to tackle that problem.
Thanks, Jakub.
I'm still not sure I understand where exactly the root cause is. Can you give me an example of how I can configure sssd to reproduce the breakage locally?
My latest reply to Lukas has the answer, but seems to be delayed somewhere.
So, use "--with-test-dir=test-dir", don't forget to create the directory itself. The test will fail then.
In my opinion, the directory "${test-dir}" should not be created by the configure script. We can add a check for validation of the argument: test -d ${TEST_DIR}
My fingers were very fast.
You should fix the problem and not the symptoms.
Agreed. However, the problem lies entirely in the test, and I'm trying to solve it.
The test first changes into TEST_DIR, then tries to open TEST_DIR. This wouldn't work with a relative path such as "test-dir", as there is not likely to be "test-dir" within "test-dir".
It also tries to create a temporary file in TEST_DIR, while already being in TEST_DIR. This also doesn't work with "test-dir", for the same reason.
test_io is a simple unit test and it is not worth refactoring this test. It works if $TEST_DIR is created. Please focus on writing new tests :-)
All of the above with TEST_DIR existing at the time of running the test.
Testing the value of "--with-test-dir" can be done in the macro WITH_TEST_DIR (file src/conf_macros.m4). It would be a simple check and the macro would not be very complicated after the change.
You can emit a warning or an error with AC_MSG_WARN or AC_MSG_ERROR. It would be good to test the value before the macro "AC_SUBST(TEST_DIR)".
LS
On 04/16/2014 04:03 PM, Lukas Slebodnik wrote:
Agreed. However, the problem lies entirely in the test, and I'm trying to solve it.
The test first changes into TEST_DIR, then tries to open TEST_DIR. This wouldn't work with a relative path such as "test-dir", as there is not likely to be "test-dir" within "test-dir".
It also tries to create a temporary file in TEST_DIR, while already being in TEST_DIR. This also doesn't work with "test-dir", for the same reason.
test_io is a simple unit test and it is not worth refactoring this test.
Sure, but I'm not suggesting we should refactor it. The proposed patch fixes the problem in its entirety.
It works if $TEST_DIR is created.
Not always. Not when the path is relative (and not "."). Please execute this and see for yourself:
mkdir test-dir
./configure --with-test-dir=test-dir
make check
The test output will be something like this:
[==========] Running 8 test(s).
[ RUN      ] test_sss_open_cloexec_success
mkstemp failed with path:'test-dir/test_io.eTfWX8' [No such file or directory]
ret == -1
src/tests/cmocka/test_io.c:59: error: Failure!
[  FAILED  ] test_sss_open_cloexec_success
[ RUN      ] test_sss_open_cloexec_fail
[       OK ] test_sss_open_cloexec_fail
Could not open directory:'test-dir' [No such file or directory]
dirp
src/tests/cmocka/test_io.c:72: error: Failure!
[  FAILED  ] test_sss_openat_cloexec_success_setup_dirp
[ RUN      ] test_sss_openat_cloexec_success
11
[  FAILED  ] test_sss_openat_cloexec_success
Could not open directory:'test-dir' [No such file or directory]
dirp
src/tests/cmocka/test_io.c:72: error: Failure!
[  FAILED  ] test_sss_openat_cloexec_fail_setup_dirp
[ RUN      ] test_sss_openat_cloexec_fail
All of the above with TEST_DIR existing at the time of running the test.
Please focus on writing new tests :-)
I will, when I have functionality of my own to implement, or when I know enough about one of the others :)
Testing the value of "--with-test-dir" can be done in the macro WITH_TEST_DIR (file src/conf_macros.m4). It would be a simple check and the macro would not be very complicated after the change.
You can emit a warning or an error with AC_MSG_WARN or AC_MSG_ERROR. It would be good to test the value before the macro "AC_SUBST(TEST_DIR)".
Thanks, I'll think about doing this in a separate patch.
Sincerely, Nick
On (16/04/14 16:21), Nikolai Kondrashov wrote:
On 04/16/2014 04:03 PM, Lukas Slebodnik wrote:
Agreed. However, the problem lies entirely in the test, and I'm trying to solve it.
The test first changes into TEST_DIR, then tries to open TEST_DIR. This wouldn't work with a relative path such as "test-dir", as there is not likely to be "test-dir" within "test-dir".
It also tries to create a temporary file in TEST_DIR, while already being in TEST_DIR. This also doesn't work with "test-dir", for the same reason.
test_io is a simple unit test and it is not worth refactoring this test.
Sure, but I'm not suggesting we should refactor it. The proposed patch fixes the problem in its entirety.
It works if $TEST_DIR is created.
Not always. Not when the path is relative (and not "."). Please execute this and see for yourself:
mkdir test-dir
./configure --with-test-dir=test-dir
1. It is not recommended to run configure from the source directory.
make check
2. It will fail, but you do not know the purpose of the argument --with-test-dir. It is not documented.
The configure argument "--with-test-dir" was created because the sysdb tests are slow on HDD. In my case, it takes 14 minutes (IIRC). If you want to speed up the tests you will need to run them in a ramdisk (/dev/shm) and specify the full path to the ramdisk.
LS
On 04/16/2014 05:44 PM, Lukas Slebodnik wrote:
On (16/04/14 16:21), Nikolai Kondrashov wrote:
On 04/16/2014 04:03 PM, Lukas Slebodnik wrote:
Agreed. However, the problem lies entirely in the test, and I'm trying to solve it.
The test first changes into TEST_DIR, then tries to open TEST_DIR. This wouldn't work with a relative path such as "test-dir", as there is not likely to be "test-dir" within "test-dir".
It also tries to create a temporary file in TEST_DIR, while already being in TEST_DIR. This also doesn't work with "test-dir", for the same reason.
test_io is a simple unit test and it is not worth refactoring this test.
Sure, but I'm not suggesting we should refactor it. The proposed patch fixes the problem in its entirety.
It works if $TEST_DIR is created.
Not always. Not when the path is relative (and not "."). Please execute this and see for yourself:
mkdir test-dir
./configure --with-test-dir=test-dir
- It is not recommended to run configure from the source directory.
make check
- It will fail, but you do not know the purpose of the argument --with-test-dir. It is not documented.
The configure argument "--with-test-dir" was created because the sysdb tests are slow on HDD. In my case, it takes 14 minutes (IIRC). If you want to speed up the tests you will need to run them in a ramdisk (/dev/shm) and specify the full path to the ramdisk.
I don't want to run them on ramdisk. I want to keep the results in separate directories from separate runs and archive them. Also, on one run the performance will be swamped by valgrind anyway, and another will be impaired by coverage data output.
Sincerely, Nick
On (16/04/14 17:54), Nikolai Kondrashov wrote:
On 04/16/2014 05:44 PM, Lukas Slebodnik wrote:
On (16/04/14 16:21), Nikolai Kondrashov wrote:
On 04/16/2014 04:03 PM, Lukas Slebodnik wrote:
Agreed. However, the problem lies entirely in the test, and I'm trying to solve it.
The test first changes into TEST_DIR, then tries to open TEST_DIR. This wouldn't work with a relative path such as "test-dir", as there is not likely to be "test-dir" within "test-dir".
It also tries to create a temporary file in TEST_DIR, while already being in TEST_DIR. This also doesn't work with "test-dir", for the same reason.
test_io is a simple unit test and it is not worth refactoring this test.
Sure, but I'm not suggesting we should refactor it. The proposed patch fixes the problem in its entirety.
It works if $TEST_DIR is created.
Not always. Not when the path is relative (and not "."). Please execute this and see for yourself:
mkdir test-dir
./configure --with-test-dir=test-dir
- It is not recommended to run configure from the source directory.
make check
- It will fail, but you do not know the purpose of the argument --with-test-dir. It is not documented.
The configure argument "--with-test-dir" was created because the sysdb tests are slow on HDD. In my case, it takes 14 minutes (IIRC). If you want to speed up the tests you will need to run them in a ramdisk (/dev/shm) and specify the full path to the ramdisk.
I don't want to run them on ramdisk. I want to keep the results in separate directories from separate runs and archive them. Also, on one run the performance will be swamped by valgrind anyway, and another will be impaired by coverage data output.
Sincerely, Nick
On 04/16/2014 06:27 PM, Lukas Slebodnik wrote:
On (16/04/14 17:54), Nikolai Kondrashov wrote:
On 04/16/2014 05:44 PM, Lukas Slebodnik wrote:
On (16/04/14 16:21), Nikolai Kondrashov wrote:
On 04/16/2014 04:03 PM, Lukas Slebodnik wrote:
Agreed. However, the problem lies entirely in the test, and I'm trying to solve it.
The test first changes into TEST_DIR, then tries to open TEST_DIR. This wouldn't work with a relative path such as "test-dir", as there is not likely to be "test-dir" within "test-dir".
It also tries to create a temporary file in TEST_DIR, while already being in TEST_DIR. This also doesn't work with "test-dir", for the same reason.
test_io is a simple unit test and it is not worth refactoring this test.
Sure, but I'm not suggesting we should refactor it. The proposed patch fixes the problem in its entirety.
It works if $TEST_DIR is created.
Not always. Not when the path is relative (and not "."). Please execute this and see for yourself:
mkdir test-dir
./configure --with-test-dir=test-dir
- It is not recommended to run configure from the source directory.
make check
- It will fail, but you do not know the purpose of the argument --with-test-dir. It is not documented.
The configure argument "--with-test-dir" was created because the sysdb tests are slow on HDD. In my case, it takes 14 minutes (IIRC). If you want to speed up the tests you will need to run them in a ramdisk (/dev/shm) and specify the full path to the ramdisk.
I don't want to run them on ramdisk. I want to keep the results in separate directories from separate runs and archive them. Also, on one run the performance will be swamped by valgrind anyway, and another will be impaired by coverage data output.
Lukas, did you want to reply?
Hi Lukas,
A slightly more complete answer to your last message.
On 04/16/2014 05:44 PM, Lukas Slebodnik wrote:
On (16/04/14 16:21), Nikolai Kondrashov wrote:
On 04/16/2014 04:03 PM, Lukas Slebodnik wrote:
It works if $TEST_DIR is created.
Not always. Not when the path is relative (and not "."). Please execute this and see for yourself:
mkdir test-dir
./configure --with-test-dir=test-dir
- It is not recommended to run configure from the source directory.
I gathered as much. It should work though, and I made it this way to make the instructions clearer.
make check
- It will fail, but you do not know the purpose of the argument --with-test-dir. It is not documented.
I knew the purpose of --with-test-dir when making this patch, and its function is documented clearly enough in src/conf_macros.m4 and the "./configure --help" output. It says: "Directory used for make check temporary files". Nowhere does it say: "don't supply relative paths, because they don't work."
Even if it wasn't documented, the straightforward "directory for temporary files" is still easier to remember than "directory for temporary files, but not pointed to by a relative path, or a test will break".
This tiny patch doesn't make the code more complicated, but instead slightly *less* complicated and prevents confusion in the future.
The configure argument "with-test-dir" was created because sysdb test are slow on HDD. In my case, It takes 14 minutes (IIRC) If you want to speed up tests you will need run test in ramdisk(/dev/shm) and you will specify full path to ramdisk.
In my case there is no beneficial performance difference when running with valgrind. There is about 25% improvement using /dev/shm when running with coverage enabled, but even in a VM it takes less than a minute on my machine.
I will use /dev/shm and will mess with moving test results around, when/if I'll need to tune the performance, but for now I'd prefer to keep my CI code simple and use relative directories.
Sincerely, Nick
On (16/04/14 21:30), Nikolai Kondrashov wrote:
Hi Lukas,
A slightly more complete answer to your last message.
On 04/16/2014 05:44 PM, Lukas Slebodnik wrote:
On (16/04/14 16:21), Nikolai Kondrashov wrote:
On 04/16/2014 04:03 PM, Lukas Slebodnik wrote:
It works if $TEST_DIR is created.
Not always. Not when the path is relative (and not "."). Please execute this and see for yourself:
mkdir test-dir
./configure --with-test-dir=test-dir
- It is not recommended to run configure from the source directory.
I gathered as much. It should work though, and I made it this way to make the instructions clearer.
make check
- It will fail, but you do not know the purpose of the argument --with-test-dir. It is not documented.
I knew the purpose of --with-test-dir when making this patch, and its function is documented clearly enough in src/conf_macros.m4 and the "./configure --help" output. It says: "Directory used for make check temporary files". Nowhere does it say: "don't supply relative paths, because they don't work."
Even if it wasn't documented, the straightforward "directory for temporary files" is still easier to remember than "directory for temporary files, but not pointed to by a relative path, or a test will break".
This tiny patch doesn't make the code more complicated, but instead slightly *less* complicated and prevents confusion in the future.
The configure argument "with-test-dir" was created because sysdb test are slow on HDD. In my case, It takes 14 minutes (IIRC) If you want to speed up tests you will need run test in ramdisk(/dev/shm) and you will specify full path to ramdisk.
In my case there is no beneficial performance difference when running with valgrind. There is about 25% improvement using /dev/shm when running with coverage enabled, but even in a VM it takes less than a minute on my machine.
I will use /dev/shm and will mess with moving test results around, when/if I'll need to tune the performance, but for now I'd prefer to keep my CI code simple and use relative directories.
It does not make sense to argue.
test-io was not an ideal unit test and adding another workaround (fix/hack) is not good. I could sleep, but ...
Patch is attached.
LS
On Thu, Apr 17, 2014 at 11:06:16AM +0300, Nikolai Kondrashov wrote:
On 04/17/2014 02:29 AM, Lukas Slebodnik wrote:
It does not make sense to argue.
test-io was not an ideal unit test and adding another workaround (fix/hack) is not good. I could sleep, but ...
Patch is attached.
Thanks, Lukas. This fixes my problem.
I have one more request -- don't use strdup and free, but rather create a test context in the setup function and just free the test context in the teardown function. I know the current code works, but it's not the best example for future developers (who tend to look at unit tests for examples on how to use code).
Conversely, get_random_filepath should accept a TALLOC_CTX as the first parameter.
You can also consider creating a destructor for the contents of struct dir_state, but here I think the teardown function is semantically close enough to a destructor.
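Something along these lines, for instance (a rough sketch only, assuming cmocka and talloc; the struct and function names are illustrative, not the actual test code):

#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>
#include <talloc.h>

struct io_test_ctx {              /* illustrative name */
    char *filepath;
};

static void setup_io_test(void **state)
{
    struct io_test_ctx *test_ctx;

    test_ctx = talloc_zero(NULL, struct io_test_ctx);
    assert_non_null(test_ctx);

    /* get_random_filepath(test_ctx, ...) would then hang its result off
     * the test context instead of strdup'ing it. */
    *state = test_ctx;
}

static void teardown_io_test(void **state)
{
    /* A single talloc_free() releases everything allocated on the context. */
    talloc_free(*state);
}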
On (17/04/14 10:44), Jakub Hrozek wrote:
On Thu, Apr 17, 2014 at 11:06:16AM +0300, Nikolai Kondrashov wrote:
On 04/17/2014 02:29 AM, Lukas Slebodnik wrote:
It does not make sense to argue.
test-io was not ideal unit test and adding another workaroud(fix/hack) is not good. I could sleep, but ...
Patch is attached.
Thanks, Lukas. This fixes my problem.
I have one more request -- don't use strdup and free, but rather create a test context in the setup function and just free the test context in the teardown function. I know the current code works, but it's not the best example for future developers (who tend to look at unit tests for examples on how to use code).
Conversely, get_random_filepath should accept a TALLOC_CTX as the first parameter.
I didn't use talloc for this small test, because I did not want to add new libraries in Makefile.am (test_io_LDADD)
You can also consider creating a destructor for the contents of struct dir_state, but here I think the teardown function is semantically close enough to a destructor.
LS
On 04/17/2014 01:13 PM, Lukas Slebodnik wrote:
On (17/04/14 10:44), Jakub Hrozek wrote:
On Thu, Apr 17, 2014 at 11:06:16AM +0300, Nikolai Kondrashov wrote:
On 04/17/2014 02:29 AM, Lukas Slebodnik wrote:
It does not make sense to argue.
test-io was not an ideal unit test and adding another workaround (fix/hack) is not good. I could sleep, but ...
Patch is attached.
Thanks, Lukas. This fixes my problem.
I have one more request -- don't use strdup and free, but rather create a test context in the setup function and just free the test context in the teardown function. I know the current code works, but it's not the best example for future developers (who tend to look at unit tests for examples on how to use code).
Conversely, get_random_filepath should accept a TALLOC_CTX as the first parameter.
I didn't use talloc for this small test, because I did not want to add new libraries in Makefile.am (test_io_LDADD)
You can also consider creating a destructor for the contents of struct dir_state, but here I think the teardown function is semantically close enough to a destructor.
Jakub, what do you think about Lukas' response? Do you think his patch still needs the changes you requested, or you can merge it?
Nick
On Thu, May 15, 2014 at 09:12:22PM +0300, Nikolai Kondrashov wrote:
On 04/17/2014 01:13 PM, Lukas Slebodnik wrote:
On (17/04/14 10:44), Jakub Hrozek wrote:
On Thu, Apr 17, 2014 at 11:06:16AM +0300, Nikolai Kondrashov wrote:
On 04/17/2014 02:29 AM, Lukas Slebodnik wrote:
It does not make sense to argue.
test-io was not an ideal unit test and adding another workaround (fix/hack) is not good. I could sleep, but ...
Patch is attached.
Thanks, Lukas. This fixes my problem.
I have one more request -- don't use strdup and free, but rather create a test context in the setup function and just free the test context in the teardown function. I know the current code works, but it's not the best example for future developers (who tend to look at unit tests for examples on how to use code).
Conversely, get_random_filepath should accept a TALLOC_CTX as the first parameter.
I didn't use talloc for this small test, because I did not want to add new libraries in Makefile.am (test_io_LDADD)
You can also consider creating a destructor for the contents of struct dir_state, but here I think the teardown function is semantically close enough to a destructor.
Jakub, what do you think about Lukas' response? Do you think his patch still needs the changes you requested, or you can merge it?
Nick
Sorry about letting this thread stall.
I'm not exactly thrilled about not using talloc, but the test works fine and doesn't leak any memory.
ACK.
On Fri, May 16, 2014 at 08:51:36AM +0200, Jakub Hrozek wrote:
On Thu, May 15, 2014 at 09:12:22PM +0300, Nikolai Kondrashov wrote:
On 04/17/2014 01:13 PM, Lukas Slebodnik wrote:
On (17/04/14 10:44), Jakub Hrozek wrote:
On Thu, Apr 17, 2014 at 11:06:16AM +0300, Nikolai Kondrashov wrote:
On 04/17/2014 02:29 AM, Lukas Slebodnik wrote:
It does not make sense to argue.
test-io was not an ideal unit test and adding another workaround (fix/hack) is not good. I could sleep, but ...
Patch is attached.
Thanks, Lukas. This fixes my problem.
I have one more request -- don't use strdup and free, but rather create a test context in the setup function and just free the test context in the teardown function. I know the current code works, but it's not the best example for future developers (who tend to look at unit tests for examples on how to use code).
Conversely, get_random_filepath should accept a TALLOC_CTX as the first parameter.
I didn't use talloc for this small test, because I did not want to add new libraries in Makefile.am (test_io_LDADD)
You can also consider creating a destructor for the contents of struct dir_state, but here I think the teardown function is semantically close enough to a destructor.
Jakub, what do you think about Lukas' response? Do you think his patch still needs the changes you requested, or you can merge it?
Nick
Sorry about letting this thread stall.
I'm not exactly thrilled about not using talloc, but the test works fine and doesn't leak any memory.
ACK.
Pushed to master: f333ca01311000475db0fbd059243d05f9a90e96
On (16/04/14 21:30), Nikolai Kondrashov wrote:
Hi Lukas,
A slightly more complete answer to your last message.
On 04/16/2014 05:44 PM, Lukas Slebodnik wrote:
On (16/04/14 16:21), Nikolai Kondrashov wrote:
On 04/16/2014 04:03 PM, Lukas Slebodnik wrote:
It works if $TEST_DIR is created.
Not always. Not when the path is relative (and not "."). Please execute this and see for yourself:
mkdir test-dir
./configure --with-test-dir=test-dir
- It is not recommended to run configure from the source directory.
I gathered as much. It should work though, and I made it this way to make the instructions clearer.
make check
- It will fail, but you do not know the purpose of the argument --with-test-dir. It is not documented.
I knew the purpose of --with-test-dir when making this patch, and its function is documented clearly enough in src/conf_macros.m4 and the "./configure --help" output. It says: "Directory used for make check temporary files". Nowhere does it say: "don't supply relative paths, because they don't work."
Even if it wasn't documented, the straightforward "directory for temporary files" is still easier to remember than "directory for temporary files, but not pointed to by a relative path, or a test will break".
This tiny patch doesn't make the code more complicated, but instead slightly *less* complicated and prevents confusion in the future.
The configure argument "with-test-dir" was created because sysdb test are slow on HDD. In my case, It takes 14 minutes (IIRC) If you want to speed up tests you will need run test in ramdisk(/dev/shm) and you will specify full path to ramdisk.
In my case there is no beneficial performance difference when running with valgrind. There is about 25% improvement using /dev/shm when running with coverage enabled, but even in a VM it takes less than a minute on my machine.
I was waiting for the patch to be pushed upstream. This is the reason for the late reply.
I will use /dev/shm and will mess with moving test results around, when/if I'll need to tune the performance, but for now I'd prefer to keep my CI code simple and use relative directories.
I have totally different results. The most problematic is the sysdb test, because there are a lot of IO operations.
[user@host][/dev/shm/sssd_build]$time ./sysdb-tests
Warning: LDB_MODULES_PATH is not set, will use LDB plugins installed in system paths.
Running suite(s): sysdb
100%: Checks: 807, Failures: 0, Errors: 0

real    0m0.747s
user    0m0.551s
sys     0m0.187s

[user@host][/dev/shm/sssd_build]$time libtool --mode=execute valgrind ./sysdb-tests
==18119== Memcheck, a memory error detector
==18119== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==18119== Using Valgrind-3.9.0 and LibVEX; rerun with -h for copyright info
==18119== Command: /dev/shm/sssd_build/.libs/lt-sysdb-tests
==18119==
//snip

real    0m32.283s
user    0m28.058s
sys     0m0.856s

[user@host][/var/tmp/sssd_build]$time ./sysdb-tests
Warning: LDB_MODULES_PATH is not set, will use LDB plugins installed in system paths.
Running suite(s): sysdb
100%: Checks: 807, Failures: 0, Errors: 0

real    7m59.268s
user    0m3.054s
sys     0m3.995s

[user@host][/var/tmp/sssd_build]$time libtool --mode=execute valgrind ./sysdb-tests
==18253== Memcheck, a memory error detector
==18253== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==18253== Using Valgrind-3.9.0 and LibVEX; rerun with -h for copyright info
==18253== Command: /var/tmp/sssd_build/.libs/lt-sysdb-tests
==18253==
//snip

real    8m47.860s
user    1m13.385s
sys     0m5.411s
Summary
=======================================
ramdisk              ........   0.8 sec     1x
ramdisk + valgrind   ........  32.2 sec    40x
hdd                  ........ 479.2 sec   599x
hdd + valgrind       ........ 527.8 sec   659x

"ramdisk + valgrind" is 16 times faster than "hdd + valgrind"
"ramdisk" is 600 times faster than "hdd"
LS
On 05/16/2014 11:32 PM, Lukas Slebodnik wrote:
On (16/04/14 21:30), Nikolai Kondrashov wrote:
In my case there is no beneficial performance difference when running with valgrind. There is about 25% improvement using /dev/shm when running with coverage enabled, but even in a VM it takes less than a minute on my machine.
I was waiting for the patch to be pushed upstream. This is the reason for the late reply.
I will use /dev/shm and will mess with moving test results around, when/if I'll need to tune the performance, but for now I'd prefer to keep my CI code simple and use relative directories.
I have totally different results. The most problematic is the sysdb test, because there are a lot of IO operations.
I admit I didn't test this thoroughly and might have hit a corner case and perhaps I was biased. I'll retry my tests properly once I get to the point of optimizing performance.
Thanks for testing and posting your results, Lukas.
Sincerely, Nick
On (17/05/14 01:14), Nikolai Kondrashov wrote:
On 05/16/2014 11:32 PM, Lukas Slebodnik wrote:
On (16/04/14 21:30), Nikolai Kondrashov wrote:
In my case there is no beneficial performance difference when running with valgrind. There is about 25% improvement using /dev/shm when running with coverage enabled, but even in a VM it takes less than a minute on my machine.
I was waiting for the patch to be pushed upstream. This is the reason for the late reply.
I will use /dev/shm and will mess with moving test results around, when/if I'll need to tune the performance, but for now I'd prefer to keep my CI code simple and use relative directories.
I have totally different results. The most problematic is the sysdb test, because there are a lot of IO operations.
It might look like I was exaggerating in my previous mail.
Here are the results of executing all tests:
* tests were precompiled (make tests)
* I executed the tests with "make -j4 check", because I have a 4-core CPU
ramdisk:
time make -j4 check V=0
// snip
real    0m12.462s
user    0m4.887s
sys     0m0.812s

HDD:
time make -j4 check V=0
// snip
real    9m44.722s
user    0m11.920s
sys     0m8.038s
"ramdisk" is 58x faster then HDD. 3 tests failed on HDD due to timeout. src/tests/sysdb-tests.c:1399:E:SYSDB Tests:test_sysdb_get_new_id:0: (after this point) Test timeout expired
src/tests/sysdb_ssh-tests.c:202:E:SYSDB_SSH Tests:store_one_host_test:0: (after this point) Test timeout expired
src/tests/simple_access-tests.c:122:S:user allow/deny:test_both_empty:0: (after this point) Test timeout expired
The next time, I executed the tests without running them simultaneously (make check):

ramdisk:
time make check V=0
// snip
real    0m25.722s
user    0m4.573s
sys     0m0.785s

HDD:
time make check V=0
// snip
real    11m3.424s
user    0m9.509s
sys     0m7.085s

"ramdisk" is 26x faster than HDD.
I admit I didn't test this thoroughly and might have hit a corner case and perhaps I was biased. I'll retry my tests properly once I get to the point of optimizing performance.
Running tests in a ramdisk is necessary for developers to pass the tests in a reasonable time. It is not an optimization, it is a well-known fact, and I would appreciate it if continuous integration were aware of this fact.
LS
On Mon, May 19, 2014 at 09:33:49AM +0200, Lukas Slebodnik wrote:
On (17/05/14 01:14), Nikolai Kondrashov wrote:
On 05/16/2014 11:32 PM, Lukas Slebodnik wrote:
On (16/04/14 21:30), Nikolai Kondrashov wrote:
In my case there is no beneficial performance difference when running with valgrind. There is about 25% improvement using /dev/shm when running with coverage enabled, but even in a VM it takes less than a minute on my machine.
I was waiting for the patch to be pushed upstream. This is the reason for the late reply.
I will use /dev/shm and will mess with moving test results around, when/if I'll need to tune the performance, but for now I'd prefer to keep my CI code simple and use relative directories.
I have totally different results. The most problematic is the sysdb test, because there are a lot of IO operations.
It might look like I was exaggerating in my previous mail.
Here are the results of executing all tests:
- tests were precompiled (make tests)
- I executed the tests with "make -j4 check", because I have a 4-core CPU
ramdisk:
time make -j4 check V=0
// snip
real    0m12.462s
user    0m4.887s
sys     0m0.812s

HDD:
time make -j4 check V=0
// snip
real    9m44.722s
user    0m11.920s
sys     0m8.038s
"ramdisk" is 58x faster then HDD. 3 tests failed on HDD due to timeout. src/tests/sysdb-tests.c:1399:E:SYSDB Tests:test_sysdb_get_new_id:0: (after this point) Test timeout expired
src/tests/sysdb_ssh-tests.c:202:E:SYSDB_SSH Tests:store_one_host_test:0: (after this point) Test timeout expired
src/tests/simple_access-tests.c:122:S:user allow/deny:test_both_empty:0: (after this point) Test timeout expired
The next time, I executed the tests without running them simultaneously (make check):

ramdisk:
time make check V=0
// snip
real    0m25.722s
user    0m4.573s
sys     0m0.785s

HDD:
time make check V=0
// snip
real    11m3.424s
user    0m9.509s
sys     0m7.085s

"ramdisk" is 26x faster than HDD.
I admit I didn't test this thoroughly and might have hit a corner case and perhaps I was biased. I'll retry my tests properly once I get to the point of optimizing performance.
Running tests in a ramdisk is necessary for developers to pass the tests in a reasonable time. It is not an optimization, it is a well-known fact, and I would appreciate it if continuous integration were aware of this fact.
I wonder if we should raise the CK_TIMEOUT variable (or its equivalent) in the tests? I'm not sure if requiring the tests to be run in a ramdisk is the right thing to do, mostly because --with-test-dir only defaults to shm in our specfile, not the source.
So I think we should either set the --with-test-dir value to ramdisk by default (we already set it this way for distcheck) or raise the CK_TIMEOUT.
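For reference, a per-case timeout can be raised in a check-based test roughly like this (a sketch only; the suite and case names are illustrative, and the exact default timeout should be double-checked against the check documentation):

#include <check.h>

Suite *create_sysdb_suite(void)             /* illustrative name */
{
    Suite *s = suite_create("sysdb");
    TCase *tc = tcase_create("sysdb-ops");  /* illustrative name */

    /* Raise the per-test timeout from check's default (a few seconds,
     * also influenced by CK_DEFAULT_TIMEOUT / CK_TIMEOUT_MULTIPLIER). */
    tcase_set_timeout(tc, 60);

    /* tcase_add_test(tc, test_sysdb_get_new_id); etc. */
    suite_add_tcase(s, tc);
    return s;
}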
On 05/19/2014 10:33 AM, Lukas Slebodnik wrote:
On (17/05/14 01:14), Nikolai Kondrashov wrote:
On 05/16/2014 11:32 PM, Lukas Slebodnik wrote:
On (16/04/14 21:30), Nikolai Kondrashov wrote:
In my case there is no beneficial performance difference when running with valgrind. There is about 25% improvement using /dev/shm when running with coverage enabled, but even in a VM it takes less than a minute on my machine.
I was waiting for the patch to be pushed upstream. This is the reason for the late reply.
I will use /dev/shm and will mess with moving test results around, when/if I'll need to tune the performance, but for now I'd prefer to keep my CI code simple and use relative directories.
I have totally different results. The most problematic is the sysdb test, because there are a lot of IO operations.
It might look like I was exaggerating in my previous mail.
No, it didn't really.
"ramdisk" is 58x faster then HDD. "ramdisk" is 26x faster then HDD.
Out of curiosity: why is it so much faster? Does the code sync to disc often?
I did some light tests and while I don't see such a dramatic difference on my setup, it is still considerable. A "make check" with Valgrind took 26 minutes with test-dir on HDD and 16 minutes on ramdisk.
Although I tried investigating this a while ago (and will try again later), I'm not sure why it takes such a long time on my machine. This is not very important right now, though, as I will need to optimize for CI VMs first.
I was using this command, without setting CK_FORK:
make -j 4 check LOG_COMPILER=libtool \
     'LOG_FLAGS=--mode=execute \
                valgrind \
                --error-exitcode=99 \
                --trace-children=yes \
                --trace-children-skip=/* \
                --leak-check=full'
I'll need to improve child filtering, though.
I admit I didn't test this thoroughly and might have hit a corner case and perhaps I was biased. I'll retry my tests properly once I get to the point of optimizing performance.
Running tests in a ramdisk is necessary for developers to pass the tests in a reasonable time. It is not an optimization, it is a well-known fact, and I would appreciate it if continuous integration were aware of this fact.
Let's skip discussion on term use. The request is valid as the time difference is not to be sneezed at. I've already implemented using /dev/shm/<subdir> as storage location for running tests and moving it to the build directory for archival.
Sincerely, Nick
On 05/19/2014 05:55 PM, Nikolai Kondrashov wrote:
On 05/19/2014 10:33 AM, Lukas Slebodnik wrote:
On (17/05/14 01:14), Nikolai Kondrashov wrote:
On 05/16/2014 11:32 PM, Lukas Slebodnik wrote:
On (16/04/14 21:30), Nikolai Kondrashov wrote:
In my case there is no beneficial performance difference when running with valgrind. There is about 25% improvement using /dev/shm when running with coverage enabled, but even in a VM it takes less than a minute on my machine.
I was waiting for the patch to be pushed upstream. This is the reason for the late reply.
I will use /dev/shm and will mess with moving test results around, when/if I'll need to tune the performance, but for now I'd prefer to keep my CI code simple and use relative directories.
I have totally different results. The most problematic is the sysdb test, because there are a lot of IO operations.
It might look like I was exaggerating in my previous mail.
No, it didn't really.
"ramdisk" is 58x faster then HDD. "ramdisk" is 26x faster then HDD.
Out of curiosity: why is it so much faster? Does the code sync to disc often?
I did some light tests and while I don't see such a dramatic difference on my setup, it is still considerable. A "make check" with Valgrind took 26 minutes with test-dir on HDD and 16 minutes on ramdisk.
I just noticed that the difference actually fits your observation of bare parallel run on ramdisk/HDD. It is about 9 minutes for my Valgrind run as well.
Also, you ran Valgrind with single tests only and without --leak-check=full. That's probably why I was confused about expected runtime with Valgrind.
Could you perhaps try timing the command running all tests under Valgrind, which I mentioned in the previous message, if time allows?
Thank you.
Sorry my replies are a bit fumbled, I'm still having cold symptoms.
Sincerely, Nick
On (19/05/14 18:15), Nikolai Kondrashov wrote:
On 05/19/2014 05:55 PM, Nikolai Kondrashov wrote:
On 05/19/2014 10:33 AM, Lukas Slebodnik wrote:
On (17/05/14 01:14), Nikolai Kondrashov wrote:
On 05/16/2014 11:32 PM, Lukas Slebodnik wrote:
On (16/04/14 21:30), Nikolai Kondrashov wrote:
In my case there is no beneficial performance difference when running with valgrind. There is about 25% improvement using /dev/shm when running with coverage enabled, but even in a VM it takes less than a minute on my machine.
I was waiting for the patch to be pushed upstream. This is the reason for the late reply.
I will use /dev/shm and will mess with moving test results around, when/if I'll need to tune the performance, but for now I'd prefer to keep my CI code simple and use relative directories.
I have totally different results. The most problematic is the sysdb test, because there are a lot of IO operations.
It might look like I was exaggerating in my previous mail.
No, it didn't really.
"ramdisk" is 58x faster then HDD. "ramdisk" is 26x faster then HDD.
Out of curiosity: why is it so much faster? Does the code sync to disc often?
I did some light tests and while I don't see such a dramatic difference on my setup, it is still considerable. A "make check" with Valgrind took 26 minutes with test-dir on HDD and 16 minutes on ramdisk.
I just noticed that the difference actually fits your observation of bare parallel run on ramdisk/HDD. It is about 9 minutes for my Valgrind run as well.
Also, you ran Valgrind with single tests only and without --leak-check=full. That's probably why I was confused about expected runtime with Valgrind.
Could you perhaps try timing the command running all tests under Valgrind, which I mentioned in the previous message, if time allows?
Sure,
//parallel
sh-4.2$ time make -j4 check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'
make check-recursive
//snip
real    0m47.355s
user    1m46.597s
sys     0m4.711s

//single
sh-4.2$ time make check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'

real    1m49.825s
user    1m26.396s
sys     0m4.228s
I had to execute the tests without the valgrind argument "--error-exitcode=99". With this argument, the talloc destructor "mock_server_cleanup" failed.
src/tests/common_dbus.c, mock_server_cleanup:

66     /* Wait for the server child, it always returns mock */
67     verify_eq (waitpid(mock->pid, &child_status, 0), mock->pid);
68     verify_eq (child_status, 0);
69
70     file = strchr(mock->dbus_address, '/');

and verify_eq is a macro which uses abort:

do { if ((x) != (y)) { fprintf(stderr, "failed: %s == %s\n", #x, #y); abort(); } } while (0)
LS
On 05/19/2014 07:40 PM, Lukas Slebodnik wrote:
On (19/05/14 18:15), Nikolai Kondrashov wrote:
Could you perhaps try timing the command running all tests under Valgrind, which I mentioned in the previous message, if time allows?
Sure,
//parallel
sh-4.2$ time make -j4 check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'
make check-recursive
//snip
real    0m47.355s
user    1m46.597s
sys     0m4.711s

//single
sh-4.2$ time make check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'

real    1m49.825s
user    1m26.396s
sys     0m4.228s
Thank you! Wow, this runs really fast on your machine. I'll have to investigate this, after all.
I had to execute the tests without the valgrind argument "--error-exitcode=99". With this argument, the talloc destructor "mock_server_cleanup" failed.
src/tests/common_dbus.c, mock_server_cleanup:

66     /* Wait for the server child, it always returns mock */
67     verify_eq (waitpid(mock->pid, &child_status, 0), mock->pid);
68     verify_eq (child_status, 0);
69
70     file = strchr(mock->dbus_address, '/');

and verify_eq is a macro which uses abort:

do { if ((x) != (y)) { fprintf(stderr, "failed: %s == %s\n", #x, #y); abort(); } } while (0)
Yes, I managed not to notice this before, behind other errors, but I get these too. It seems it is really a Valgrind issue. I.e. it doesn't seem to make sense to apply "--error-exitcode" to child processes.
However, it seems that the exit status of the process in question is hardcoded to zero. So, can we perhaps work this around for now by changing the assertion to the following?
verify_eq (WIFSIGNALED(child_status), 0);
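For context, a small sketch of what the status macros distinguish (hypothetical code, not the actual common_dbus.c):

#include <sys/wait.h>
#include <stdio.h>

/* Hypothetical helper: report how a waited-for child terminated. */
static void report_child_status(int child_status)
{
    if (WIFSIGNALED(child_status)) {
        /* The proposed assertion would only trip here, i.e. when the
         * child was killed by a signal... */
        fprintf(stderr, "child killed by signal %d\n", WTERMSIG(child_status));
    } else if (WIFEXITED(child_status)) {
        /* ...and would no longer care about a nonzero exit code, such as
         * one injected by valgrind's --error-exitcode. */
        fprintf(stderr, "child exited with code %d\n", WEXITSTATUS(child_status));
    }
}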
If yes, I'll submit a patch.
Sincerely, Nick
On (19/05/14 20:34), Nikolai Kondrashov wrote:
On 05/19/2014 07:40 PM, Lukas Slebodnik wrote:
On (19/05/14 18:15), Nikolai Kondrashov wrote:
Could you perhaps try timing the command running all tests under Valgrind, which I mentioned in the previous message, if time allows?
Sure,
//parallel
sh-4.2$ time make -j4 check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'
make check-recursive
//snip
real    0m47.355s
user    1m46.597s
sys     0m4.711s

//single
sh-4.2$ time make check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'

real    1m49.825s
user    1m26.396s
sys     0m4.228s
Thank you! Wow, this runs really fast on your machine. I'll have to investigate this, after all.
I found the reason why it is faster on my machine. I was debugging a crash in sbus_test, therefore I exported the environment variable CK_FORK=no. This is the reason why it is 4x faster.
export CK_FORK=no
time make check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'
//snip
real    1m48.853s
user    1m25.409s
sys     0m4.146s

unset CK_FORK
time make check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'
//snip
real    8m31.100s
user    7m49.765s
sys     0m21.273s
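For what it's worth, the same no-fork mode can also be selected from inside a check-based test driver (a sketch only; how this interacts with the CK_FORK environment variable should be verified against the check documentation):

#include <check.h>

/* Hypothetical driver selecting no-fork mode in code rather than via CK_FORK. */
int main(void)
{
    Suite *s = suite_create("example");      /* illustrative suite */
    SRunner *sr = srunner_create(s);

    srunner_set_fork_status(sr, CK_NOFORK);  /* run the tests in-process */
    srunner_run_all(sr, CK_NORMAL);

    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed == 0 ? 0 : 1;
}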
LS
On 05/20/2014 11:52 AM, Lukas Slebodnik wrote:
On (19/05/14 20:34), Nikolai Kondrashov wrote:
On 05/19/2014 07:40 PM, Lukas Slebodnik wrote:
On (19/05/14 18:15), Nikolai Kondrashov wrote:
Could you perhaps try timing the command running all tests under Valgrind, which I mentioned in the previous message, if time allows?
Sure,
//parallel
sh-4.2$ time make -j4 check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'
make check-recursive
//snip
real    0m47.355s
user    1m46.597s
sys     0m4.711s

//single
sh-4.2$ time make check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'

real    1m49.825s
user    1m26.396s
sys     0m4.228s
Thank you! Wow, this runs really fast on your machine. I'll have to investigate this, after all.
I found the reason why it is faster on my machine. I was debugging a crash in sbus_test, therefore I exported the environment variable CK_FORK=no. This is the reason why it is 4x faster.
export CK_FORK=no
time make check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'
//snip
real    1m48.853s
user    1m25.409s
sys     0m4.146s

unset CK_FORK
time make check LOG_COMPILER=libtool 'LOG_FLAGS=--mode=execute valgrind --trace-children=yes --trace-children-skip=/* --leak-check=full'
//snip
real    8m31.100s
user    7m49.765s
sys     0m21.273s
I see, thank you. On my machine setting CK_FORK=no results in parallel runs completing in a little over a minute, which is close to what you see.
I guess I'll use CK_FORK=no for Valgrind runs, after all.
Nick
On 05/19/2014 08:34 PM, Nikolai Kondrashov wrote:
On 05/19/2014 07:40 PM, Lukas Slebodnik wrote:
I had to execute the tests without the valgrind argument "--error-exitcode=99". With this argument, the talloc destructor "mock_server_cleanup" failed.
src/tests/common_dbus.c, mock_server_cleanup():

    /* Wait for the server child, it always returns mock */
    verify_eq (waitpid(mock->pid, &child_status, 0), mock->pid);
    verify_eq (child_status, 0);

    file = strchr(mock->dbus_address, '/');
and verify_eq is a macro which uses abort():
do { if ((x) != (y)) { fprintf(stderr, "failed: %s == %s\n", #x, #y); abort(); } } while (0)
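(Presumably that body sits inside a two-argument macro along these lines; the exact #define line is not quoted here, so this reconstruction is a guess:)

    #define verify_eq(x, y) \
        do { \
            if ((x) != (y)) { \
                fprintf(stderr, "failed: %s == %s\n", #x, #y); \
                abort(); \
            } \
        } while (0)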
Yes, I managed not to notice this before, behind other errors, but I get these too. It seems it is really a Valgrind issue, i.e. it doesn't seem to make sense to apply "--error-exitcode" to child processes.
However, it seems that the exit status of the process in question is hardcoded to zero. So, can we perhaps work around this for now by changing the assertion to the following?
verify_eq (WIFSIGNALED(child_status), 0);
If yes, I'll submit a patch.
Scratch that. The "--error-exitcode" option is useless with child tracking, anyway. Any errors in children will not affect the exit code of the parent Valgrind. I guess I'll have to do XML Valgrind output and analyze that. Sigh...
Nick
On 05/21/2014 01:20 PM, Nikolai Kondrashov wrote:
On 05/19/2014 08:34 PM, Nikolai Kondrashov wrote:
Yes, I managed not to notice this before, behind other errors, but I get these too. It seems it is really a Valgrind issue, i.e. it doesn't seem to make sense to apply "--error-exitcode" to child processes.
However, it seems that the exit status of the process in question is hardcoded to zero. So, can we perhaps work around this for now by changing the assertion to the following?
verify_eq (WIFSIGNALED(child_status), 0);
If yes, I'll submit a patch.
Scratch that. The "--error-exitcode" option is useless with child tracking, anyway. Any errors in children will not affect the exit code of the parent Valgrind. I guess I'll have to do XML Valgrind output and analyze that. Sigh...
Nope. Valgrind's XML output is useless for forked programs. XML gets all jumbled up. Valgrind outputs to separate files only for children which exec'd.
This leaves me with parsing plain text output, which is even worse. Besides, Valgrind seems to lose count of errors which happened after forking, but before exec'ing. Still, there doesn't seem to be a better way.
Nick
On Fri, May 23, 2014 at 12:39:55PM +0300, Nikolai Kondrashov wrote:
On 05/21/2014 01:20 PM, Nikolai Kondrashov wrote:
On 05/19/2014 08:34 PM, Nikolai Kondrashov wrote:
Yes, I managed not to notice this before, behind other errors, but I get these too. It seems it is really a Valgrind issue, i.e. it doesn't seem to make sense to apply "--error-exitcode" to child processes.
However, it seems that the exit status of the process in question is hardcoded to zero. So, can we perhaps work around this for now by changing the assertion to the following?
verify_eq (WIFSIGNALED(child_status), 0);
If yes, I'll submit a patch.
Scratch that. The "--error-exitcode" option is useless with child tracking, anyway. Any errors in children will not affect the exit code of the parent Valgrind. I guess I'll have to do XML Valgrind output and analyze that. Sigh...
Nope. Valgrind's XML output is useless for forked programs. XML gets all jumbled up. Valgrind outputs to separate files only for children which exec'd.
This leaves me with parsing plain text output, which is even worse. Besides, Valgrind seems to lose count of errors which happened after forking, but before exec'ing. Still, there doesn't seem to be a better way.
Nick
Why is there forking involved even with CK_FORK=no?
On 05/23/2014 12:49 PM, Jakub Hrozek wrote:
On Fri, May 23, 2014 at 12:39:55PM +0300, Nikolai Kondrashov wrote:
On 05/21/2014 01:20 PM, Nikolai Kondrashov wrote:
On 05/19/2014 08:34 PM, Nikolai Kondrashov wrote:
Yes, I managed not to notice this before, behind other errors, but I get these too. It seems it is really a Valgrind issue, i.e. it doesn't seem to make sense to apply "--error-exitcode" to child processes.
However, it seems that the exit status of the process in question is hardcoded to zero. So, can we perhaps work around this for now by changing the assertion to the following?
verify_eq (WIFSIGNALED(child_status), 0);
If yes, I'll submit a patch.
Scratch that. The "--error-exitcode" option is useless with child tracking, anyway. Any errors in children will not affect the exit code of the parent Valgrind. I guess I'll have to do XML Valgrind output and analyze that. Sigh...
Nope. Valgrind's XML output is useless for forked programs. XML gets all jumbled up. Valgrind outputs to separate files only for children which exec'd.
This leaves me with parsing plain text output, which is even worse. Besides, Valgrind seems to lose count of errors which happened after forking, but before exec'ing. Still, there doesn't seem to be a better way.
Why is there forking involved even with CK_FORK=no?
The test_dbus_setup_mock forks the sbus server.
Nick
On 05/23/2014 01:00 PM, Nikolai Kondrashov wrote:
On 05/23/2014 12:49 PM, Jakub Hrozek wrote:
On Fri, May 23, 2014 at 12:39:55PM +0300, Nikolai Kondrashov wrote:
On 05/21/2014 01:20 PM, Nikolai Kondrashov wrote:
On 05/19/2014 08:34 PM, Nikolai Kondrashov wrote:
Yes, I managed not to notice this before, behind other errors, but I get these too. It seems it is really a Valgrind issue, i.e. it doesn't seem to make sense to apply "--error-exitcode" to child processes.
However, it seems that the exit status of the process in question is hardcoded to zero. So, can we perhaps work around this for now by changing the assertion to the following?
verify_eq (WIFSIGNALED(child_status), 0);
If yes, I'll submit a patch.
Scratch that. The "--error-exitcode" option is useless with child tracking, anyway. Any errors in children will not affect the exit code of the parent Valgrind. I guess I'll have to do XML Valgrind output and analyze that. Sigh...
Nope. Valgrind's XML output is useless for forked programs. XML gets all jumbled up. Valgrind outputs to separate files only for children which exec'd.
This leaves me with parsing plain text output, which is even worse. Besides, Valgrind seems to lose count of errors which happened after forking, but before exec'ing. Still, there doesn't seem to be a better way.
Why is there forking involved even with CK_FORK=no?
The test_dbus_setup_mock forks the sbus server.
Anyway, we shouldn't rule out the need to fork in tests.
Nick
On 05/23/2014 12:39 PM, Nikolai Kondrashov wrote:
On 05/21/2014 01:20 PM, Nikolai Kondrashov wrote:
On 05/19/2014 08:34 PM, Nikolai Kondrashov wrote:
Yes, I managed not to notice this before, behind other errors, but I get these too. It seems it is really a Valgrind issue, i.e. it doesn't seem to make sense to apply "--error-exitcode" to child processes.
However, it seems that the exit status of the process in question is hardcoded to zero. So, can we perhaps work around this for now by changing the assertion to the following?
verify_eq (WIFSIGNALED(child_status), 0);
If yes, I'll submit a patch.
Scratch that. The "--error-exitcode" option is useless with child tracking, anyway. Any errors in children will not affect the exit code of the parent Valgrind. I guess I'll have to do XML Valgrind output and analyze that. Sigh...
Nope. Valgrind's XML output is useless for forked programs. XML gets all jumbled up. Valgrind outputs to separate files only for children which exec'd.
It appears this problem has been known for a long time and, for example, Google maintains their own version of Valgrind, which has it fixed: http://code.google.com/p/valgrind-variant/
However, I don't think we're in the business of tracking that fork. This is just an illustration.
Nick
On Mon, May 19, 2014 at 05:55:45PM +0300, Nikolai Kondrashov wrote:
"ramdisk" is 58x faster then HDD. "ramdisk" is 26x faster then HDD.
Out of curiosity: why is it so much faster? Does the code sync to disc often?
Every sysdb_transaction_commit or ldb_transaction_commit if we rely on ldb's autotransaction is several fsyncs (4 I think). That's the reason we try to collect all the data first in the daemon and write larger chunks of data in a single transaction.
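As a rough sketch of that pattern (sysdb_transaction_start/commit/cancel are the sssd helpers in question, but treat the exact signatures, and the store_one_user() helper, as assumptions made only for illustration):

    errno_t ret;          /* sssd's errno_t / EOK convention */
    size_t i;
    bool in_transaction = false;

    /* One transaction for the whole batch: the expensive fsync()s happen
     * once per commit instead of once per stored entry. */
    ret = sysdb_transaction_start(sysdb);
    if (ret != EOK) goto done;
    in_transaction = true;

    for (i = 0; i < num_users; i++) {
        ret = store_one_user(sysdb, users[i]);    /* hypothetical helper */
        if (ret != EOK) goto done;
    }

    ret = sysdb_transaction_commit(sysdb);
    if (ret != EOK) goto done;
    in_transaction = false;

done:
    if (in_transaction) {
        /* Roll back everything written above on failure. */
        sysdb_transaction_cancel(sysdb);
    }
    return ret;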
On 05/19/2014 08:28 PM, Jakub Hrozek wrote:
On Mon, May 19, 2014 at 05:55:45PM +0300, Nikolai Kondrashov wrote:
"ramdisk" is 58x faster then HDD. "ramdisk" is 26x faster then HDD.
Out of curiosity: why is it so much faster? Does the code sync to disc often?
Every sysdb_transaction_commit or ldb_transaction_commit if we rely on ldb's autotransaction is several fsyncs (4 I think). That's the reason we try to collect all the data first in the daemon and write larger chunks of data in a single transaction.
Ah, as I suspected, thanks! I should have probably just run the tests under strace, instead of asking.
Sincerely, Nick
On 05/19/2014 01:36 PM, Nikolai Kondrashov wrote:
On 05/19/2014 08:28 PM, Jakub Hrozek wrote:
On Mon, May 19, 2014 at 05:55:45PM +0300, Nikolai Kondrashov wrote:
"ramdisk" is 58x faster then HDD. "ramdisk" is 26x faster then HDD.
Out of curiosity: why is it so much faster? Does the code sync to disc often?
Every sysdb_transaction_commit or ldb_transaction_commit if we rely on ldb's autotransaction is several fsyncs (4 I think). That's the reason we try to collect all the data first in the daemon and write larger chunks of data in a single transaction.
Ah, as I suspected, thanks! I should have probably just run the tests under strace, instead of asking.
I wonder if tdb is really what we should use in the long run. Maybe there is a way to shard it internally so that you do not need to lock the whole database but only a portion of it?
Sincerely, Nick
On Wed, May 21, 2014 at 03:31:22PM -0400, Dmitri Pal wrote:
On 05/19/2014 01:36 PM, Nikolai Kondrashov wrote:
On 05/19/2014 08:28 PM, Jakub Hrozek wrote:
On Mon, May 19, 2014 at 05:55:45PM +0300, Nikolai Kondrashov wrote:
"ramdisk" is 58x faster then HDD. "ramdisk" is 26x faster then HDD.
Out of curiosity: why is it so much faster? Does the code sync to disc often?
Every sysdb_transaction_commit or ldb_transaction_commit if we rely on ldb's autotransaction is several fsyncs (4 I think). That's the reason we try to collect all the data first in the daemon and write larger chunks of data in a single transaction.
Ah, as I suspected, thanks! I should have probably just run the tests under strace, instead of asking.
I wonder if tdb is really what we should use in the long run. Maybe there is a way to shard it internally so that you do not need to lock the whole database but only a portion of it?
I was working on an mdb backend for ldb in my spare time: https://github.com/jhrozek/samba-ldb-mdb/tree/mdb
So far the performance seems to be really nice, but given that this is a spare-time project, the progress hasn't been as rapid as I would like.
I'd like to finish the backend before I go on vacation in early June, although I suspect the indexing won't be finished.
On Wed, 2014-05-21 at 15:31 -0400, Dmitri Pal wrote:
On 05/19/2014 01:36 PM, Nikolai Kondrashov wrote:
On 05/19/2014 08:28 PM, Jakub Hrozek wrote:
On Mon, May 19, 2014 at 05:55:45PM +0300, Nikolai Kondrashov wrote:
"ramdisk" is 58x faster then HDD. "ramdisk" is 26x faster then HDD.
Out of curiosity: why is it so much faster? Does the code sync to disc often?
Every sysdb_transaction_commit or ldb_transaction_commit if we rely on ldb's autotransaction is several fsyncs (4 I think). That's the reason we try to collect all the data first in the daemon and write larger chunks of data in a single transaction.
Ah, as I suspected, thanks! I should have probably just run the tests under strace, instead of asking.
I wonder if tdb is really what we should use in the long run. Maybe there is a way to shard it internally so that you do not need to lock the whole database but only a portion of it?
tdb can lock per record, but that is not exposed directly via ldb or sysdb, as it would be really difficult to determine ahead of time which set of records to lock for complex transactions.
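To illustrate the primitive involved: tdb's chain lock takes only the hash chain a given key lives on, rather than the whole file. A minimal sketch (the key here is made up, "tdb" is an already opened struct tdb_context *, and error handling is trimmed):

    #include <stdlib.h>
    #include <tdb.h>

    /* Lock only the hash chain holding this key, not the whole database. */
    TDB_DATA key = { .dptr = (unsigned char *)"user:foo", .dsize = 8 };

    if (tdb_chainlock(tdb, key) == 0) {
        TDB_DATA val = tdb_fetch(tdb, key);   /* malloc'ed copy, may be empty */
        /* ... read-modify-write the record while holding the chain lock ... */
        free(val.dptr);
        tdb_chainunlock(tdb, key);
    }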
Changing databases will not change this problem. However, ldb has some performance issues: some structures internally use even O(n^3) algorithms to deal with data, and memory allocation could also be substantially improved; it's just a matter of dedicating someone to work on performance. The fsyncs do matter to some degree, but with today's SSDs I don't think they are a huge issue in production, and for tests a ramdisk can be used.
Simo.
On 05/22/2014 08:40 AM, Simo Sorce wrote:
On Wed, 2014-05-21 at 15:31 -0400, Dmitri Pal wrote:
On 05/19/2014 01:36 PM, Nikolai Kondrashov wrote:
On 05/19/2014 08:28 PM, Jakub Hrozek wrote:
On Mon, May 19, 2014 at 05:55:45PM +0300, Nikolai Kondrashov wrote:
"ramdisk" is 58x faster then HDD. "ramdisk" is 26x faster then HDD.
Out of curiosity: why is it so much faster? Does the code sync to disc often?
Every sysdb_transaction_commit or ldb_transaction_commit if we rely on ldb's autotransaction is several fsyncs (4 I think). That's the reason we try to collect all the data first in the daemon and write larger chunks of data in a single transaction.
Ah, as I suspected, thanks! I should have probably just run the tests under strace, instead of asking.
I wonder if tdb is really what we should use in the long run. Maybe there is a way to shard it internally so that you do not need to lock the whole database but only a portion of it?
tdb can lock per record, but that is not exposed directly via ldb or sysdb, as it would be really difficult to determine ahead of time which set of records to lock for complex transactions.
Changing databases will not change this problem. However, ldb has some performance issues: some structures internally use even O(n^3) algorithms to deal with data, and memory allocation could also be substantially improved; it's just a matter of dedicating someone to work on performance. The fsyncs do matter to some degree, but with today's SSDs I don't think they are a huge issue in production, and for tests a ramdisk can be used.
Simo.
I think people who moved to SSSD do not like that it is significantly slower than older solutions. Based on the comments at conferences and on the list, I get the feeling that this is accepted as an inevitable evil, a price for the other benefits, but that does not mean people are happy. If we see a way of getting a significant performance improvement, I am all for it and would consider it in planning. I just have not seen a reasonable alternative to what we have so far. If you say that redoing ldb will give us, say, a 50% boost, I will find the time for us to make the change.
On 04/16/2014 03:38 PM, Lukas Slebodnik wrote:
On (16/04/14 15:26), Nikolai Kondrashov wrote:
On 04/16/2014 03:00 PM, Jakub Hrozek wrote:
On Wed, Apr 16, 2014 at 02:39:06PM +0300, Nikolai Kondrashov wrote:
Sorry, I felt a bit upset about "This patch is useless". I understand that my patch probably didn't explain enough. However, saying "this is unclear, please elaborate" would have been nicer.
I think the wording can be attributed to the language barrier a bit. I don't think anyone tried to embarrass you, Nick.
On the other hand, I agree the wording was harsh. If you were seeing a problem and you have a patch that fixes the problem for you, then the patch is most certainly not useless; we just need to see if it's the right way to tackle that problem.
Thanks, Jakub.
I'm still not sure I understand where exactly the root cause is; can you give me an example of how I can configure sssd to reproduce the breakage locally?
My latest reply to Lukas has the answer, but it seems to be delayed somewhere.
So, use "--with-test-dir=test-dir", and don't forget to create the directory itself. The test will then fail.
In my opinion, the directory "${test-dir}" should not be created by the configure script.
Agreed.
We can add a test to validate the argument: test -d ${TEST_DIR}
Good idea. However, it wouldn't stop the test from failing.
Sincerely, Nick