On Mon, 2013-09-23 at 13:14 +0200, Pavel Březina wrote:
On 09/23/2013 12:40 PM, Jean-Baptiste Denis wrote:
> Hi,
>
> I'm using a trick, suggested with CAUTION by Jakub, that allows me
> to have all user using local home directory and some honoring their
> ldap ones.
>
> This is the spirit of my sssd.conf :
>
> ====================
> [sssd]
> config_file_version = 2
> services = nss, pam
> domains = local_home, ldap_home
>
> [nss]
> filter_users = root,ldap,named,avahi,haldaemon,dbus,radiusd,news,nscd
> override_shell = /bin/bash
>
> [domain/local_home]
> override_homedir = /home/%u
> filter_users = john
> id_provider = ldap
> ldap_uri = ldap://ldap.example.com/
>
> # same as above, without override_homedir and filter_users
> [domain/ldap_home]
> id_provider = ldap
> ldap_uri = ldap://ldap.example.com/
> ====================
>
>
> I think I've got a side effect from this setup when I'm dealing
> directly with uids.
>
> For example, if the filtered user has uid 4242, after a fresh restart
> of sssd, I cannot perform a uid -> username resolution :
>
> $ getent passwd 4242
> $ # NOANSWER
>
> But if I try to get the john entry, everything is good after that
>
> $ getent passwd john
> john:*:4242:1010:John John:/ldap_home/john:/bin/bash
> $ getent passwd 4242
> john:*:4242:1010:John John:/ldap_home/john:/bin/bash
>
> But if I restart sssd, it doesn't work anymore :
>
> # service sssd restart
> # getent passwd 4242
> # # NOANSWER
>
> Here is the corresponding sssd_nss.log when I've got no answer after
> sssd restart :
>
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [accept_fd_handler] (0x0400): Client connected!
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_cmd_get_version] (0x0200): Received client version [1].
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_cmd_get_version] (0x0200): Offered version [1].
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_dp_issue_request] (0x0400): Issuing request for [0x4339e0:domains@local_home]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_dp_get_domains_msg] (0x0400): Sending get domains request for [local_home][not forced][]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_dp_internal_get_send] (0x0400): Entering request [0x4339e0:domains@local_home]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_dp_issue_request] (0x0400): Issuing request for [0x4339e0:domains@ldap_home]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_dp_get_domains_msg] (0x0400): Sending get domains request for [ldap_home][not forced][]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_dp_internal_get_send] (0x0400): Entering request [0x4339e0:domains@ldap_home]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_dp_req_destructor] (0x0400): Deleting request: [0x4339e0:domains@local_home]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [nss_cmd_getpwuid_search] (0x0100): Requesting info for [4242@local_home]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [check_cache] (0x0400): Cached entry is valid, returning..
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [nss_cmd_getpwuid_search] (0x0400): Returning info for uid [4242@local_home]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [fill_pwent] (0x0100): User [john@local_home] filtered out! (negative cache)
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [sss_dp_req_destructor] (0x0400): Deleting request: [0x4339e0:domains@ldap_home]
> (Mon Sep 23 12:32:02 2013) [sssd[nss]] [client_recv] (0x0200): Client disconnected!
>
> The line that worries me is this one '[fill_pwent] (0x0100): User
> [john@local_home] filtered out! (negative cache)'. My request does
> not seem to go through my fallback domain "ldap_home".
>
> What do you think ?
When you run 'getent passwd 4242', SSSD first searches local_home, and
the lookup succeeds since the user exists in LDAP. Before we return the
result we check the negative cache; john is found there, so we quit -
we don't try the next domain.

When you run 'getent passwd john', SSSD looks up john@local_home in the
negative cache, finds him, and continues with john@ldap_home. Since he
is not present in the negative cache for that domain, we return the
result. Subsequent calls for 4242 are then answered from the memory
cache.

So it is a bug that we don't continue with the next domain in the first
case.
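
In rough pseudocode, the asymmetry described above looks like this (a
minimal Python sketch of the behaviour, not SSSD's actual C code; the
dict-based backend and negative cache are illustrative stand-ins):

DOMAINS = ["local_home", "ldap_home"]
LDAP = {"john": 4242}                 # the same backend serves both domains
NEG_CACHE = {("john", "local_home")}  # filter_users puts john here

def getpwnam(name):
    for dom in DOMAINS:
        if (name, dom) in NEG_CACHE:
            continue                  # name is checked first: fall through
        if name in LDAP:
            return (name, LDAP[name], dom)
    return None

def getpwuid(uid):
    for dom in DOMAINS:
        # The uid lookup hits the backend first...
        match = next((n for n, u in LDAP.items() if u == uid), None)
        if match is not None:
            # ...and only then checks the negative cache by name.
            if (match, dom) in NEG_CACHE:
                return None           # entry filtered out: search stops here
            return (match, uid, dom)
    return None

print(getpwnam("john"))   # ('john', 4242, 'ldap_home') - falls through
print(getpwuid(4242))     # None - stops in local_home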
It's not a bug, it's by design: when we created sssd we decided you had
to have a different uid range per domain. We later relaxed this
constraint due to demand, but we cannot properly handle the case where
two domains have the same uid and one is filtered by name.
What you should do is use LDAP filters to completely filter out the
entry instead of filter_users; that way the entry will simply not exist
at all in the first domain and we will continue with the second.
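
For example, something along these lines (illustrative only - the base
DN and filter are assumptions you would need to adapt to your tree; the
base?scope?filter form of ldap_user_search_base is described in
sssd-ldap(5)):

[domain/local_home]
override_homedir = /home/%u
id_provider = ldap
ldap_uri = ldap://ldap.example.com/
# No filter_users here; instead, exclude john in the LDAP search itself,
# so the entry never exists in this domain and the lookup falls through
# to ldap_home. The base DN below is a placeholder.
ldap_user_search_base = ou=people,dc=example,dc=com?subtree?(!(uid=john))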
Devel note:
I think we should not handle filter_users from [domain] in the
responder but in the provider - i.e., not save users that should be
filtered out at all.

This is also an option.
Simo.
--
Simo Sorce * Red Hat, Inc * New York