Hello,
this patch resolves [1]. It adds the new behaviour of negative caching of locals, so it needs new tests. I will send a patch with tests soon, but not today.
It is applicable on top of "RESPONDERS: Negcache in resp_ctx" [2].
For clarity, there is a branch with all of the negative cache patches [3].
Links:
[1] https://fedorahosted.org/sssd/ticket/2928
[2] https://www.mail-archive.com/sssd-devel@lists.fedorahosted.org/msg26580.html
[3] https://github.com/celestian/sssd/commits/ncache_v2
Regards
On 05/05/2016 05:34 PM, Petr Cech wrote:
[...]
Hello,
I found a small bug in this patch. I will send a new one with tests soon.
Regards
On 05/09/2016 01:15 PM, Petr Cech wrote:
[...]
Hi,
a new patch set is attached. It includes tests, too.
Regards
On 05/09/2016 04:51 PM, Petr Cech wrote:
[...]
Hi, the patches work as expected. I would like you to rename a few things, though... the word "locals" suggests residents rather than local users and groups. I'd rather use the word "local" (singular), or is_local where a boolean is used.
Please rename the new option to something more similar to entry_negative_timeout, maybe local_negative_timeout, unix_negative_timeout or files_negative_timeout...
Its man page description doesn't read well in English (especially the second sentence). Maybe something like this would be better:
Specifies for how many seconds nss_sss should keep local users and groups in negative cache before trying to look the up in the back end again.
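For illustration, the option might then look like this in sssd.conf, assuming the final name is local_negative_timeout (one of the suggestions above); the section and value here are assumptions, not the actual patch:

    [nss]
    # Assumed name and illustrative value: how many seconds the negative
    # cache keeps entries for local users and groups.
    local_negative_timeout = 14400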
Would it be beneficial to add some magic value (say -1) to represent a permanent ncache record?
Can you add some debugging to is_user_local_by_name and similar functions so we can see that the local ncache timeout was used? Something like:

    if (ret == EOK && pwd_result != NULL) {
        DEBUG(SSSDBG_TRACE_FUNC, "User %s is a local user\n", name);
        is_local = true;
    }
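For illustration, a self-contained sketch of how such a check with the requested debug output might look; the function body is an assumption, not the actual patch, and SSSD's DEBUG macro is replaced by a stderr stand-in so the example compiles on its own:

    #include <stdbool.h>
    #include <stdio.h>
    #include <pwd.h>

    /* Stand-in for SSSD's DEBUG(SSSDBG_TRACE_FUNC, ...). */
    #define TRACE(...) fprintf(stderr, __VA_ARGS__)

    static bool is_user_local_by_name(const char *name)
    {
        struct passwd pwd;
        struct passwd *pwd_result = NULL;
        char buffer[16384];
        bool is_local = false;
        int ret;

        /* Blocking lookup; 0 corresponds to EOK in SSSD. See the
         * discussion about blocking getpwnam_r later in this thread. */
        ret = getpwnam_r(name, &pwd, buffer, sizeof(buffer), &pwd_result);
        if (ret == 0 && pwd_result != NULL) {
            TRACE("User %s is a local user\n", name);
            is_local = true;
        }

        return is_local;
    }

    int main(void)
    {
        return is_user_local_by_name("root") ? 0 : 1;
    }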
On 05/10/2016 11:57 AM, Pavel Březina wrote:
Hi,
Hello Pavel,
the patches work as expected. I would like you to rename a few things, though... the word "locals" suggests residents rather than local users and groups. I'd rather use the word "local" (singular), or is_local where a boolean is used.
Addressed.
Please rename the new option to something more similar to entry_negative_timeout, maybe local_negative_timeout, unix_negative_timeout or files_negative_timeout...
Addressed.
Its man page description doesn't read well in English (especially the second sentence). Maybe something like this would be better:
Specifies for how many seconds nss_sss should keep local users and groups in negative cache before trying to look the up in the back end again.
Addressed.
Would it be beneficial to add some magic value (say -1) to represent permanent ncache record?
I don't think so. We don't have such behaviour yet. Permanent negcaching is set item by item by a set function. And during the discussion about negcaching of local users, the opinion was that we didn't want it.
But if you know of a use case for it, I am open to it.
Can you add some debugging to is_user_local_by_name and similar functions so we can see that the local ncache timeout was used? Something like:

    if (ret == EOK && pwd_result != NULL) {
        DEBUG(SSSDBG_TRACE_FUNC, "User %s is a local user\n", name);
        is_local = true;
    }
Addressed.
Thank you for the review, Pavel.
Regards
On 05/10/2016 02:02 PM, Petr Cech wrote:
[...]
CI failed: http://sssd-ci.duckdns.org/logs/job/43/16/summary.html
On 05/11/2016 11:14 AM, Pavel Březina wrote:
[...]
Its man page description doesn't read well in English (especially the second sentence). Maybe something like this would be better:
Specifies for how many seconds nss_sss should keep local users and groups in negative cache before trying to look the up in the back end again.
Addressed.
I made a typo there and you copy-pasted it :-) It should read:
Specifies for how many seconds nss_sss should keep local users and groups in negative cache before trying to look it up in the back end again.
s/the/it
Would it be beneficial to add some magic value (say -1) to represent a permanent ncache record?
I don't think so. We don't have such behaviour yet. Permanent negcaching is set item by item by a set function. And during the discussion about negcaching of local users, the opinion was that we didn't want it.
But if you know of a use case for it, I am open to it.
Ok, I don't require it.
Can you add some debugging to is_user_local_by_name and similar functions so we can see that the local ncache timeout was used? Something like:

    if (ret == EOK && pwd_result != NULL) {
        DEBUG(SSSDBG_TRACE_FUNC, "User %s is a local user\n", name);
        is_local = true;
    }
Addressed.
You want to use SPRIuid as the format specifier for uid_t and SPRIgid for gid_t.
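For illustration, a standalone sketch of the intended usage; the SPRIuid and SPRIgid definitions below are stand-ins for this example only (in SSSD they come from the util headers), since the width of uid_t and gid_t is platform-dependent:

    #include <inttypes.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Stand-in definitions, assuming 32-bit unsigned uid_t and gid_t;
     * SSSD defines the real ones. */
    #define SPRIuid PRIu32
    #define SPRIgid PRIu32

    int main(void)
    {
        uid_t uid = 1000;
        gid_t gid = 1000;

        /* In SSSD these would be DEBUG(...) calls. */
        printf("User with UID %" SPRIuid " is a local user\n", uid);
        printf("Group with GID %" SPRIgid " is a local group\n", gid);
        return 0;
    }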
On 05/11/2016 11:21 AM, Pavel Březina wrote:
[...]
I made a typo there and you copy-pasted it :-) It should read:
Specifies for how many seconds nss_sss should keep local users and groups in negative cache before trying to look it up in the back end again.
s/the/it
Addressed, thanks.
[...]
You want to use SPRIuid as the format specifier for uid_t and SPRIgid for gid_t.
Addressed, thanks. I searched for the right format specifier but didn't find one; locally, it worked for me.
Hi,
I have a new version of this patch set. I fixed the CI tests on Debian [1]. My thanks go to Lukas and Nikolai.
[1] http://sssd-ci.duckdns.org/logs/job/44/04/summary.html
Regards
On 05/27/2016 04:32 PM, Petr Cech wrote:
[...]
Ack to the patches, I'm running CI now. The only thing I'm worried about is that we're using blocking calls like getpwnam_r. We already do this in some parts of sssd, but looking into those cases, it's either initialization, a very rare condition, or a data provider.
Here we block for every local object, and since this is an NSS responder, I think we should create a non-blocking way sooner rather than later. It will come for free once sssd manages local users, though, so I'm not sure if it's worth the work.
On Mon, May 30, 2016 at 10:42:13AM +0200, Pavel Březina wrote:
[...]
If we see some objections, we will be able to switch to calling functions from libnss_files directly. The only scenario where I think this might break is if someone has another module (like ldap or nis) configured after sss.
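For illustration, a minimal sketch of what calling libnss_files directly might look like; lookup_in_files is a hypothetical helper, not the actual change (link with -ldl):

    #include <dlfcn.h>
    #include <errno.h>
    #include <nss.h>
    #include <pwd.h>
    #include <stddef.h>

    typedef enum nss_status (*files_getpwnam_fn)(const char *name,
                                                 struct passwd *result,
                                                 char *buffer,
                                                 size_t buflen,
                                                 int *errnop);

    /* Returns 0 if the user exists in /etc/passwd according to
     * libnss_files, ENOENT otherwise. */
    int lookup_in_files(const char *name, struct passwd *pwd,
                        char *buf, size_t buflen)
    {
        void *handle;
        files_getpwnam_fn fn;
        int nss_errno = 0;
        enum nss_status status;

        /* Load the files module directly, bypassing nsswitch.conf, so
         * no other module (sss itself, ldap, nis, ...) is consulted. */
        handle = dlopen("libnss_files.so.2", RTLD_NOW);
        if (handle == NULL) {
            return ENOENT;
        }

        fn = (files_getpwnam_fn) dlsym(handle, "_nss_files_getpwnam_r");
        if (fn == NULL) {
            dlclose(handle);
            return ENOENT;
        }

        status = fn(name, pwd, buf, buflen, &nss_errno);
        dlclose(handle);

        return status == NSS_STATUS_SUCCESS ? 0 : ENOENT;
    }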
On 05/30/2016 01:54 PM, Jakub Hrozek wrote:
[...]
Well, if there is a remote source like ldap defined alongside sssd, we might have big trouble with blocking code in sssd due to much higher latency... so I don't even consider this a valid configuration while the local negative cache is on :-)
On 05/30/2016 10:42 AM, Pavel Březina wrote:
[...]
Hi Pavel,
thanks for the review. I know that the CI tests failed [1]. Actually, this CI run passed [2].
[1] http://sssd-ci.duckdns.org/logs/job/44/05/summary.html
[2] http://sssd-ci.duckdns.org/logs/job/44/07/summary.html
Regards
On 05/30/2016 05:49 PM, Petr Cech wrote:
[...]
With your patches, two tests sometimes fail and sometimes pass. With master, they seem to always succeed.
[1] http://sssd-ci.duckdns.org/logs/job/44/05/summary.html
    ldap_test.py::test_add_remove_group_rfc2307 PASSED
    ldap_test.py::test_add_remove_group_rfc2307_bis FAILED
[2] http://sssd-ci.duckdns.org/logs/job/44/08/summary.html
    ldap_test.py::test_add_remove_group_rfc2307 FAILED
    ldap_test.py::test_add_remove_group_rfc2307_bis PASSED
I do not know whether it is related to your patches or is a random bug.
On 05/31/2016 02:42 PM, Pavel Březina wrote:
[...]
Hi Pavel,
I think this is not related to my patch.
Regards
On 06/08/2016 02:20 PM, Petr Cech wrote:
[...]
I think this is not related to my patch.
I think so as well. Feel free to push it.
On (09/06/16 14:39), Pavel Březina wrote:
[...]
I think this is not related to my patch.
I think so as well. Feel free to push it.
master:
* d9e88bddc99bae0542b2179c9b94c968855b0fd0
* e7ccfb139388c947ec2dee16cfe3005f5643b90d
LS