Hi all,
SSSD 1.9.2 on CentOS 6.
I am attempting to configure SSSD to authenticate against AD via LDAP. When starting the daemon, though, the logs get filled with failure messages about being unable to convert the SID properly for every user. The extra strange part is that the SID it says it cannot convert is the same for every user. Example:
(Mon Apr 15 15:52:47 2013) [sssd[be[LDAP]]] [sdap_save_user] (0x1000): Mapping user [REDACTED] objectSID to unix ID
(Mon Apr 15 15:52:47 2013) [sssd[be[LDAP]]] [sdap_idmap_sid_to_unix] (0x0080): Could not convert objectSID [S-1-5-21-3220130920-4012199101-135577023-1153286127] to a UNIX ID
(Mon Apr 15 15:52:47 2013) [sssd[be[LDAP]]] [sdap_save_user] (0x0040): Failed to save user [REDACTED]
(Mon Apr 15 15:52:47 2013) [sssd[be[LDAP]]] [sdap_save_users] (0x0040): Failed to store user 988. Ignoring.
Where can I get more information on why it's failing? The following is my sssd.conf:
[sssd]
domains = LDAP
services = nss, pam
config_file_version = 2
;debug_level = 0x1310

[nss]
filter_groups = root
filter_users = root

[pam]

[domain/LDAP]
ldap_id_use_start_tls = True
id_provider = ldap
chpass_provider = ldap
ldap_uri = ldap://REDACTED
ldap_search_base = REDACTED
auth_provider = ldap
cache_credentials = true
ldap_schema = ad
enumerate = True
ldap_id_mapping = True
ldap_user_objectsid = objectSid
ldap_idmap_range_min = 100000
ldap_idmap_range_max = 1000000
ldap_default_bind_dn = REDACTED
ldap_default_authtok_type = password
ldap_default_authtok = REDACTED
ldap_tls_cacertdir = /etc/sssd/cacerts
debug_level = 9
ldap_force_upper_case_realm = True
Also, here's what the objectSid looks like from LDAP (via ldapsearch) for one of the users it's complaining about:

objectSid:: AQUAAAAAAAUVAAAAaEzvv71MJe+/vRQI77+9RE1a77+977+9AAA=
When comparing this to the other users that are not being mapped, the objectSid coming from LDAP is, at first glance, not the same.
On 04/16/2013 12:27 PM, Russell Jones wrote:
Hi all,
SSSD 1.9.2 on CentOS 6.
I am attempting to configure SSSD to authenticate against AD via LDAP. When starting the daemon, though, the logs get filled with failure messages about being unable to convert the SID properly for every user. The extra strange part is that the SID it says it cannot convert is the same for every user. Example:
(Mon Apr 15 15:52:47 2013) [sssd[be[LDAP]]] [sdap_save_user] (0x1000): Mapping user [REDACTED] objectSID to unix ID
(Mon Apr 15 15:52:47 2013) [sssd[be[LDAP]]] [sdap_idmap_sid_to_unix] (0x0080): Could not convert objectSID [S-1-5-21-3220130920-4012199101-135577023-1153286127] to a UNIX ID
(Mon Apr 15 15:52:47 2013) [sssd[be[LDAP]]] [sdap_save_user] (0x0040): Failed to save user [REDACTED]
(Mon Apr 15 15:52:47 2013) [sssd[be[LDAP]]] [sdap_save_users] (0x0040): Failed to store user 988. Ignoring.
Looking at that SID, the RID portion of it is *really* large. The last section there is 1153286127 (that's 1,153,286,127 with separators).
Given that you've set an ldap_idmap_range_max of 1,000,000, this pretty much explains why you can't convert this user. The conversion of this should be 1153286127 + 100000 (your ldap_idmap_range_min is the base), which leaves it at 1,153,386,127, FAR above the 1,000,000 you have allocated.
I'm at a loss to explain why some of your users have IDs in the billion-RID range, but if you want these to be handled properly, I think you're going to need to set the following values:
ldap_idmap_range_min = 100000
ldap_idmap_range_max = 2000100000
ldap_idmap_range_size = 2000000000
This will allow you to convert all entries in this domain. However, because it requires reserving all 2 billion possible IDs for one domain, you won't be able to handle a multi-domain forest.
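As a quick sanity check of the arithmetic (a rough sketch of the formula, not SSSD's code; with a range this large the whole domain fits in a single slice, so the mapped ID is simply the range minimum plus the RID):

range_min = 100000
range_max = 2000100000
rid = 1153286127              # the very large RID from the failing objectSID

uid = range_min + rid         # with one big slice, base + RID
print(uid, uid <= range_max)  # prints: 1153386127 True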
I'd contact your Microsoft representatives to figure out why you have entries with such high RID values.
Where can I get more information on why it's failing? The following is my sssd.conf:
[sssd]
domains = LDAP
services = nss, pam
config_file_version = 2
;debug_level = 0x1310

[nss]
filter_groups = root
filter_users = root

[pam]

[domain/LDAP]
ldap_id_use_start_tls = True
id_provider = ldap
chpass_provider = ldap
ldap_uri = ldap://REDACTED
ldap_search_base = REDACTED
auth_provider = ldap
cache_credentials = true
ldap_schema = ad
enumerate = True
ldap_id_mapping = True
ldap_user_objectsid = objectSid
ldap_idmap_range_min = 100000
ldap_idmap_range_max = 1000000
ldap_default_bind_dn = REDACTED
ldap_default_authtok_type = password
ldap_default_authtok = REDACTED
ldap_tls_cacertdir = /etc/sssd/cacerts
debug_level = 9
ldap_force_upper_case_realm = True
Also, here's what the objectSid looks like from LDAP (via ldapsearch) for one of the users it's complaining about:

objectSid:: AQUAAAAAAAUVAAAAaEzvv71MJe+/vRQI77+9RE1a77+977+9AAA=
Be aware, this is a base64 encoding of the raw binary value. What you see in the SSSD logs is the result of converting that into the human-readable SID format. Attempting to compare this value directly to any other SID value will provide you very little information. You need to look at the decoded version (the S-1-5-21-* representation).
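If you want to do that conversion yourself, here is a short Python 3 sketch of the standard SID layout (this is just an illustration, not anything SSSD ships; the base64 string below is a made-up example SID, substitute the value from your ldapsearch output):

import base64
import struct

def sid_to_str(raw):
    # Binary objectSid layout: 1 byte revision, 1 byte sub-authority count,
    # 6 bytes big-endian identifier authority, then N little-endian 32-bit
    # sub-authorities.
    revision = raw[0]
    count = raw[1]
    authority = int.from_bytes(raw[2:8], byteorder="big")
    subauths = struct.unpack_from("<%dI" % count, raw, 8)
    return "S-%d-%d-%s" % (revision, authority,
                           "-".join(str(s) for s in subauths))

b64_value = "AQUAAAAAAAUVAAAAAQAAAAIAAAADAAAA9AEAAA=="   # made-up example
print(sid_to_str(base64.b64decode(b64_value)))           # S-1-5-21-1-2-3-500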
When comparing this to the other users that are not being mapped, the objectSid coming from LDAP is, at first glance, not the same.
Could you elaborate on this? Can you show me some examples of objectSIDs that *do* work and several that do not as well?
On 4/16/2013 1:40 PM, Stephen Gallagher wrote:
Looking at that SID, the RID portion of it is *really* large. The last section there is 1153286127 (that's 1,153,286,127 with separators).
Given that you've set an ldap_idmap_range_max of 1,000,000, this pretty much explains why you can't convert this user. The conversion of this should be 1153286127 + 100000 (your ldap_idmap_range_min is the base), which leaves it at 1,153,386,127, FAR above the 1,000,000 you have allocated.
I'm at a loss to explain why some of your users have IDs in the billion-RID range, but if you want these to be handled properly, I think you're going to need to set the following values:
ldap_idmap_range_min = 100000
ldap_idmap_range_max = 2000100000
ldap_idmap_range_size = 2000000000
This will allow you to convert all entries in this domain. However, because it requires reserving all 2 billion possible IDs for one domain, you won't be able to handle a multi-domain forest.
I'd contact your Microsoft representatives to figure out why you have entries with such high RID values.
Thanks, Stephen,
I've resolved the issue with that - the original server I was querying was returning bad SID data.
On another note, I'm slightly confused reading the man page on how slices get assigned and used, and would like to understand it further. For example, here's a clean start for SSSD, with enumeration disabled, and the caches cleared. In other words, brand new:
(Tue Apr 16 15:49:51 2013) [sssd[be[LDAP]]] [sdap_idmap_add_domain] (0x0100): Adding domain [S-1-5-21-1289899112-135578405-1515013291] as slice [20]
(Tue Apr 16 15:49:52 2013) [sssd[be[LDAP]]] [sdap_idmap_add_domain] (0x0100): Adding domain [S-1-5-21-241006572-1396723338-2091147243] as slice [8]
When doing an "ID" on a user, the number that gets prepended to their userid is not the slice numbers being shown above. It appears to be "41" in this instance:
[root@server db]# id USER
uid=4165522(USER) gid=4100513
The "65522" remains the same no matter how I edit the idmap_range_max, but the numbers before them (41) change. What do the slice numbers up above, and the "41" here, represent?
Thanks for your help!
On 04/16/2013 07:15 PM, Russell Jones wrote:
Thanks, Stephen,
I've resolved the issue with that - the original server I was querying was returning bad SID data.
On another note, I'm slightly confused reading the man page on how slices get assigned and used, and would like to understand it further. For example, here's a clean start for SSSD, with enumeration disabled, and the caches cleared. In other words, brand new:
(Tue Apr 16 15:49:51 2013) [sssd[be[LDAP]]] [sdap_idmap_add_domain] (0x0100): Adding domain [S-1-5-21-1289899112-135578405-1515013291] as slice [20]
(Tue Apr 16 15:49:52 2013) [sssd[be[LDAP]]] [sdap_idmap_add_domain] (0x0100): Adding domain [S-1-5-21-241006572-1396723338-2091147243] as slice [8]
When doing an "ID" on a user, the number that gets prepended to their userid is not the slice numbers being shown above. It appears to be "41" in this instance:
[root@server db]# id USER
uid=4165522(USER) gid=4100513
The "65522" remains the same no matter how I edit the idmap_range_max, but the numbers before them (41) change. What do the slice numbers up above, and the "41" here, represent?
Thanks for your help!
In the default configuration of SSSD, we create 10,000 slices, each capable of handling up to 200,000 IDs. When we see a new user/group objectSID, we parse it into two pieces: the first seven components of the objectSID (S-1-5-21-1289899112-135578405-1515013291) identify the domain that the user belongs to. We take this value and pass it through a hashing function, which gives us a predictable slice ID, one of the 10,000 slices we created at startup. This slice ID defines the base value for UIDs/GIDs in that domain. So if your domain hashes to slice 20, in the default configuration this means that the base ID value would be 200,000 + 20*200,000 (ldap_idmap_range_min plus twenty times the ldap_idmap_range_size value), or 4,200,000.
I'm guessing that you set idmap_range_min to 100000 (as I had originally recommended) instead of the default 200000, and that's why your range was starting at 4,100,000.
Once we have the base ID value identified by the hashing algorithm, we look at the remaining part of the objectSID, which is called the RID (relative ID). We take this number and just use it as an offset from the base ID value. So the end result is base_value + RID.
When you tweak the size of the idmap_range_*, it alters the total number of slices that are available to the configuration, which means that the hashing algorithm will end up returning a different slice value. (In technical terms, after we hash the domain SID, we take its modulus with the total available slices in order to figure out which slice to assign it).
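If it helps to see it spelled out, here is a toy sketch of that calculation. This is purely illustrative, not SSSD's actual code, and the hash below is just a stand-in for the one SSSD really uses:

import zlib

range_min  = 100000        # ldap_idmap_range_min (your value; the default is 200000)
range_size = 200000        # ldap_idmap_range_size (default)
range_max  = 2000100000    # ldap_idmap_range_max (the value suggested earlier)
num_slices = (range_max - range_min) // range_size   # 10,000 slices here

def slice_for_domain(domain_sid):
    # Stand-in hash: hash the domain part of the SID, then take the
    # modulus with the number of available slices.
    return zlib.crc32(domain_sid.encode()) % num_slices

def map_id(domain_sid, rid):
    # Base of the slice plus the RID as an offset.
    base = range_min + slice_for_domain(domain_sid) * range_size
    return base + rid

A domain that lands in slice 20 gives a base of 100000 + 20*200000 = 4100000, so a user with RID 65522 in that domain maps to 4165522, which is exactly the uid you saw from "id".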
I hope this has been informative.
On 4/16/2013 11:40 PM, Stephen Gallagher wrote:
In the default configuration of SSSD, we create 10,000 slices, each capable of handling up to 200,000 IDs. When we see a new user/group objectSID, we parse it into two pieces: the first seven components of the objectSID (S-1-5-21-1289899112-135578405-1515013291) identify the domain that the user belongs to. We take this value and pass it through a hashing function, which gives us a predictable slice ID, one of the 10,000 slices we created at startup. This slice ID defines the base value for UIDs/GIDs in that domain. So if your domain hashes to slice 20, in the default configuration this means that the base ID value would be 200,000 + 20*200,000 (ldap_idmap_range_min plus twenty times the ldap_idmap_range_size value), or 4,200,000.
I'm guessing that you set idmap_range_min to 100000 (as I had originally recommended) instead of the default 200000, and that's why your range was starting at 4,100,000.
Once we have the base ID value identified by the hashing algorithm, we look at the remaining part of the objectSID, which is called the RID (relative ID). We take this number and just use it as an offset from the base ID value. So the end result is base_value + RID.
When you tweak the size of the idmap_range_*, it alters the total number of slices that are available to the configuration, which means that the hashing algorithm will end up returning a different slice value. (In technical terms, after we hash the domain SID, we take its modulus with the total available slices in order to figure out which slice to assign it).
Thank you Stephen, that was very thorough and informative, much appreciated!
One additional question for you regarding how collisions are handled. Reading the man page, I understand how they can happen, but I am not understanding how configuring a default domain, to ensure at least one domain is always consistent in the slice it is given, resolves the issue.
For argument's sake, if we have default domain "A" and normal domain "B" as slices 0 and 1 respectively on both clients 1 and 2, and then domain C on client 1 and domain D on client 2 collide in their hash and are both given the next available slice, slice 2, it seems like we would still have a problem.
Where am I going wrong in my understanding of the scenario?
Thanks again!
On 04/17/2013 10:50 AM, Russell Jones wrote:
Thank you Stephen, that was very thorough and informative, much appreciated!
One additional question for you regarding how collisions are handled. Reading the man page, I understand how they can happen, but I am not understanding how configuring a default domain, to ensure at least one domain is always consistent in the slice it is given, resolves the issue.
For argument's sake, if we have default domain "A" and normal domain "B" as slices 0 and 1 respectively on both clients 1 and 2, and then domain C on client 1 and domain D on client 2 collide in their hash and are both given the next available slice, slice 2, it seems like we would still have a problem.
Where am I going wrong in my understanding of the scenario?
Slices are not given out by the clients.
Let me try to illustrate. Imagine that your slices are storage containers in a storage facility. Each slice has a number. You come in and you want to use a container. Which one? We take your name, SSN, eye color, DOB, etc. and hash them. As a result we get a number. This number is the number of your storage container. Same here. Part of the SID is the unique identifier that identifies the domain. The hash of it will always be some number X. This number will always define which storage container (slice) would be used for your domain. There is a chance that some other domain would hash to the same number, but the probability is very low.
Now, you can configure how big the slices are. The bigger the slice, the smaller the number of slices. So X might map to a different container if you start re-configuring things and reducing the width of the slices.
HTH.
On 4/17/2013 5:52 PM, Dmitri Pal wrote:
Slices are not given out by the clients.
Let me try to illustrate. Imagine that your slices are storage containers in a storage facility. Each slice has a number. You come in and you want to use a container. Which one? We take your name, SSN, eye color, DOB, etc. and hash them. As a result we get a number. This number is the number of your storage container. Same here. Part of the SID is the unique identifier that identifies the domain. The hash of it will always be some number X. This number will always define which storage container (slice) would be used for your domain. There is a chance that some other domain would hash to the same number, but the probability is very low.
Now, you can configure how big the slices are. The bigger the slice, the smaller the number of slices. So X might map to a different container if you start re-configuring things and reducing the width of the slices.
HTH.
Thanks Dmitri!
By "given to the clients" I meant by the SSSD daemon. Sorry for the confusion with how I typed up my scenario. What I am not understanding is how setting the default domain in the configuration prevents non-deterministic handling of slice collisions. The man page states:
"NOTE: It is possible to encounter collisions in the hash and subsequent modulus. In these situations, we will select the next available slice, but it may not be possible to reproduce the same exact set of slices on other machines (since the order that they are encountered will determine their slice). In this situation, it is recommended to..... <snip> ..... configure a default domain to guarantee that at least one is always consistent."
How does having one domain always be slice 0 resolve the other domains possibly not being consistent? Or is the "at least" in the man page saying that users of the domain assigned to slice 0 will always have the same UIDs, but not necessarily users in the other domains?
On 4/17/2013 7:05 PM, Russell Jones wrote:
Also, of course, in the non-default domains I'm referring to this issue only in the event of collisions. I understand that normally domains hash out to deterministic slices.
On 04/17/2013 08:05 PM, Russell Jones wrote:
Thanks Dmitri!
By "given to the clients" I meant by the SSSD daemon. Sorry for the confusion with how I typed up my scenario. What I am not understanding is how setting the default domain in the configuration prevents non-deterministic handling of slice collisions. The man page states:
"NOTE: It is possible to encounter collisions in the hash and subsequent modulus. In these situations, we will select the next available slice, but it may not be possible to reproduce the same exact set of slices on other machines (since the order that they are encountered will determine their slice). In this situation, it is recommended to..... <snip> ..... configure a default domain to guarantee that at least one is always consistent."
I think it refers to the case when you have configured two domains in SSSD, A and B, and the domains happen to produce colliding hashes (bad luck). Depending upon the order in which users come in and authenticate, either of the domains can be mapped to the right slice and the other one to the next available. On two machines with the same configuration the order of users logging in can be different, so different slices would be picked for the two domains. To prevent this we suggest explicitly configuring at least one of the domains to map to a specific slice.
How does having one domain always be slice 0 resolve the other domains possibly not being consistent? Or is the "at least" in the man page saying that users of the domain assigned to slice 0 will always have the same UIDs, but not necessarily users in the other domains?
On 04/17/2013 08:05 PM, Russell Jones wrote:
Thanks Dmitri!
By "given to the clients" I meant by the SSSD daemon. Sorry for the confusion with how I typed up my scenario. What I am not understanding is how setting the default domain in the configuration prevents non-deterministic handling of slice collisions. The man page states:
"NOTE: It is possible to encounter collisions in the hash and subsequent modulus. In these situations, we will select the next available slice, but it may not be possible to reproduce the same exact set of slices on other machines (since the order that they are encountered will determine their slice). In this situation, it is recommended to..... <snip> ..... configure a default domain to guarantee that at least one is always consistent."
How does having one domain always be slice 0 resolve the other domains possibly not being consistent? Or is the "at least" in the man page saying that users of the domain assigned to slice 0 will always have the same UIDs, but not necessarily users in the other domains?
Yes, this last sentence is correct. What we are recommending is that you set the "primary" domain (the one the client is directly enrolled in) as the default domain. Then you know that, at least for that domain, the mapping is always guaranteed to be the same on all systems.
Samba's autorid plugin behaves similarly to ours, except that it is *always* based on order of encounter. We added the hash trickery in order to significantly increase the likelihood that it would come up the same on two different systems. In the case of hash collisions, we fall back on something close to the autorid behavior (where we just assign it the next free value).
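As a toy illustration of why only the pinned domain is guaranteed (again, not SSSD's actual code; the hash here is a stand-in):

import zlib

def assign_slices(domains_in_order, num_slices, default_domain=None):
    # The default domain is pinned to slice 0. Everything else is hashed,
    # and on a collision we take the next free slice, so the outcome
    # depends on the order in which domains are first encountered.
    taken = {}
    if default_domain is not None:
        taken[0] = default_domain
    for dom in domains_in_order:
        if dom in taken.values():
            continue
        s = zlib.crc32(dom.encode()) % num_slices
        while s in taken:
            s = (s + 1) % num_slices
        taken[s] = dom
    return taken

If domains C and D hash to the same slice, a client that encounters C first and a client that encounters D first will end up with the two swapped; only the pinned default domain (slice 0) comes out identical on both.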