sssd performance on large domains
by zfnoctis@gmail.com
Hi,
I'm wondering if there are any plans to improve sssd performance on large Active Directory domains (100k+ users, 40k+ groups), or if there are settings I'm not aware of that can greatly improve performance, specifically for workstation use cases.
Currently, if I do not set "ignore_group_members = True" in sssd.conf, logins can take upwards of 6 minutes and "sssd_be" maxes out the CPU for up to 20 minutes after logon, which makes it a non-starter. The reason I want group members to be visible is that I want certain domain groups to be able to perform elevated actions through polkit. If I ignore group members, polkit reports that the group is empty, so no one can elevate in the graphical environment.
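For context, the polkit side is nothing exotic; the rule I have in mind is roughly this shape (the group name is a placeholder, and the real rules would match specific action IDs rather than everything):
/etc/polkit-1/rules.d/49-domain-admins.rules:
polkit.addRule(function(action, subject) {
    // only useful if sssd actually reports the group's members
    if (subject.isInGroup("linux-workstation-admins")) {
        return polkit.Result.AUTH_SELF;
    }
});
With ignore_group_members enabled, polkit sees the group as empty and the rule never matches.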
Ultimately this puts Linux workstations at a severe disadvantage, since they cannot be bound to the domain while still offering the access features users and IT expect from macOS or Windows.
Distributions used: Ubuntu 16.04 (sssd 1.13.4-1ubuntu1.1), Ubuntu 16.10 (sssd 1.13.4-3) and Fedora 24 (sssd-1.13.4-3.fc24). All exhibit the same problems.
I've also tried "ldap_group_nesting_level = 1" without any noticeable performance improvement. Putting the database on /tmp isn't viable, as these are workstations that reboot semi-frequently, and I don't believe this is an I/O-bound problem anyway.
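For reference, the relevant knobs in the domain section boil down to this (the domain name is a placeholder and everything else is trimmed):
[domain/ad.example.com]
id_provider = ad
# ignore_group_members = True   # makes logins fast again, but then polkit sees empty groups
ldap_group_nesting_level = 1    # tried this; no noticeable difference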
Thanks for your time.
1 year, 9 months
IPA ID Views for AD users: inconsistent resolution
by Louis Abel
I didn't get a response in #sssd, so I figured I'd try the mailing list.
# rpm -q sssd ipa-server
sssd-1.16.0-19.el7_5.5.x86_64
ipa-server-4.5.4-10.el7_5.3.x86_64
I've been scratching my head trying to resolve this particular issue. When AD users log in, they sometimes get the UID/GID assigned in their ID override correctly, and other times the ID view is not applied to them at all. This is all in the Default Trust View. What makes it even more interesting is that out of my 6 domain controllers, sometimes one of the six misbehaves, sometimes two, but it's never the same ones, so the issue is difficult to track down. Stranger still, it does not happen for some users (like my own); I have yet to see it with my account or the rest of my team's accounts. One thing I tried was deleting the ID overrides of the offending users and recreating them, to no avail.
I put SSSD into debug mode on the IPA servers and collected what seemed like the relevant logs to try to figure this out. Below are my SSSD configuration, ldb info, and debug logs (with private information removed where possible). I'm trying to determine whether this is a bug in SSSD or a misconfiguration on my part.
$ ldbsearch -H cache_ipa.example.com.ldb name=user.name(a)ad.example.com originalADuidNumber uidNumber originalADgidNumber gidNumber
asq: Unable to register control with rootdse!
# record 1
dn: name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb
originalADuidNumber: 55616902
originalADgidNumber: 55616902
uidNumber: 55616902
gidNumber: 55616902
$ ipa idoverrideuser-show "Default Trust View" user.name(a)ad.example.com
Anchor to override: user.name(a)ad.example.com
UID: 40001
GID: 40001
Home directory: /home/user.name
Login shell: /bin/bash
$ ldbsearch -H timestamps_ipa.example.com.ldb | less
dn: name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb
objectCategory: user
originalModifyTimestamp: 20180823172515.0Z
entryUSN: 92632390
initgrExpireTimestamp: 1535133621
lastUpdate: 1535128235
dataExpireTimestamp: 1535133635
distinguishedName: name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb
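For what it's worth, a loop along these lines (the hostnames are placeholders for the six servers) is enough to catch a server handing back the un-overridden IDs:
for srv in entl01 entl02 entl03 entl04 entl05 entl06; do
    echo "== $srv"
    ssh "$srv.ipa.example.com" 'sss_cache -E; id user.name@ad.example.com'
done
On a good run the id output shows uid=40001; on a bad one it shows the original 55616902.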
## DEBUG LOGS
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sysdb_set_entry_attr] (0x0200): Entry [name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb] has set [ts_cache] attrs.
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): commit ldb transaction (nesting: 0)
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_id_op_connect_step] (0x4000): reusing cached connection
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ipa_get_ad_override_connect_done] (0x4000): Searching for overrides in view [Default Trust View] with filter [(&(objectClass=ipaOverrideAnchor)(ipaAnchorUUID=:SID:S-1-5-21-922099545-2851689246-2917073205-16902))].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_print_server] (0x2000): Searching 172.20.23.190:389
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(objectClass=ipaOverrideAnchor)(ipaAnchorUUID=:SID:S-1-5-21-922099545-2851689246-2917073205-16902))][cn=Default Trust View,cn=views,cn=accounts,dc=ipa,dc=chotel,dc=com].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid = 32
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_op_add] (0x2000): New operation 32 timeout 6
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_result] (0x2000): Trace: sh[0x55f30a5d1080], connected[1], ops[(nil)], ldap[0x55f30a5d0f90]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_result] (0x2000): Trace: end of ldap_result list
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_result] (0x2000): Trace: sh[0x55f30a5d1940], connected[1], ops[0x55f30a645310], ldap[0x55f30a5ce320]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_ENTRY]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_parse_entry] (0x1000): OriginalDN: [ipaanchoruuid=:SID:S-1-5-21-922099545-2851689246-2917073205-16902,cn=Default Trust View,cn=views,cn=accounts,dc=ipa,dc=chotel,dc=com].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_parse_range] (0x2000): No sub-attributes for [objectClass]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_parse_range] (0x2000): No sub-attributes for [loginShell]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_parse_range] (0x2000): No sub-attributes for [uidNumber]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_parse_range] (0x2000): No sub-attributes for [ipaAnchorUUID]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_parse_range] (0x2000): No sub-attributes for [gidNumber]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_parse_range] (0x2000): No sub-attributes for [homeDirectory]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_parse_range] (0x2000): No sub-attributes for [ipaOriginalUid]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_result] (0x2000): Trace: sh[0x55f30a5d1940], connected[1], ops[0x55f30a645310], ldap[0x55f30a5ce320]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no errmsg set
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_op_destructor] (0x2000): Operation 32 finished
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ipa_get_ad_override_done] (0x4000): Found override for object with filter [(&(objectClass=ipaOverrideAnchor)(ipaAnchorUUID=:SID:S-1-5-21-922099545-2851689246-2917073205-16902))].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_id_op_destroy] (0x4000): releasing operation connection
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sysdb_apply_default_override] (0x4000): Override [uidNumber] with [40001] for [name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sysdb_apply_default_override] (0x0080): Override attribute for [gidNumber] has more [2] than one value, using only the first.
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sysdb_apply_default_override] (0x4000): Override [gidNumber] with [40001] for [name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sysdb_apply_default_override] (0x4000): Override [homeDirectory] with [/home/user.name] for [name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sysdb_apply_default_override] (0x4000): Override [loginShell] with [/bin/bash] for [name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55f30a6819a0
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55f30a681a60
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Running timer event 0x55f30a6819a0 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Destroying timer event 0x55f30a681a60 "ltdb_timeout"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Ending timer event 0x55f30a6819a0 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [safe_original_attributes] (0x4000): Original object does not have [sshPublicKey] set.
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55f30a683c50
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55f30a683d10
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Running timer event 0x55f30a683c50 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Destroying timer event 0x55f30a683d10 "ltdb_timeout"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Ending timer event 0x55f30a683c50 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sysdb_ldb_msg_difference] (0x2000): Replaced/extended attr [uidNumber] of entry [name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): start ldb transaction (nesting: 0)
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55f30a68d1c0
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55f30a68d280
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Running timer event 0x55f30a68d1c0 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Destroying timer event 0x55f30a68d280 "ltdb_timeout"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Ending timer event 0x55f30a68d1c0 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): commit ldb transaction (nesting: 0)
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sysdb_set_entry_attr] (0x0200): Entry [name=user.name(a)ad.example.com,cn=users,cn=ad.example.com,cn=sysdb] has set [cache, ts_cache] attrs.
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55f30a68d330
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55f30a688900
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Running timer event 0x55f30a68d330 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55f30a689320
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55f30a6893e0
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Destroying timer event 0x55f30a688900 "ltdb_timeout"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Ending timer event 0x55f30a68d330 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Running timer event 0x55f30a689320 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55f30a634920
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55f30a6349e0
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Destroying timer event 0x55f30a6893e0 "ltdb_timeout"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Ending timer event 0x55f30a689320 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Running timer event 0x55f30a634920 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Destroying timer event 0x55f30a6349e0 "ltdb_timeout"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ldb] (0x4000): Ending timer event 0x55f30a634920 "ltdb_callback"
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ipa_initgr_get_overrides_step] (0x1000): Processing group 0/1
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ipa_initgr_get_overrides_step] (0x1000): Fetching group S-1-5-21-922099545-2851689246-2917073205-20676
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_id_op_connect_step] (0x4000): reusing cached connection
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ipa_get_ad_override_connect_done] (0x4000): Searching for overrides in view [Default Trust View] with filter [(&(objectClass=ipaOverrideAnchor)(ipaAnchorUUID=:SID:S-1-5-21-922099545-2851689246-2917073205-20676))].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_print_server] (0x2000): Searching 172.20.23.190:389
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(objectClass=ipaOverrideAnchor)(ipaAnchorUUID=:SID:S-1-5-21-922099545-2851689246-2917073205-20676))][cn=Default Trust View,cn=views,cn=accounts,dc=ipa,dc=chotel,dc=com].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid = 33
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_op_add] (0x2000): New operation 33 timeout 6
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_result] (0x2000): Trace: sh[0x55f30a5d1940], connected[1], ops[0x55f30a63f270], ldap[0x55f30a5ce320]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_result] (0x2000): Trace: end of ldap_result list
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_result] (0x2000): Trace: sh[0x55f30a5d1940], connected[1], ops[0x55f30a63f270], ldap[0x55f30a5ce320]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no errmsg set
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_op_destructor] (0x2000): Operation 33 finished
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ipa_get_ad_override_done] (0x4000): No override found with filter [(&(objectClass=ipaOverrideAnchor)(ipaAnchorUUID=:SID:S-1-5-21-922099545-2851689246-2917073205-20676))].
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [sdap_id_op_destroy] (0x4000): releasing operation connection
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ipa_initgr_get_overrides_step] (0x1000): Processing group 1/1
(Fri Aug 24 16:30:12 2018) [sssd[be[ipa.example.com]]] [ipa_get_ad_memberships_send] (0x0400): External group information still valid.
## /etc/sssd/sssd.conf
[domain/ipa.example.com]
cache_credentials = True
krb5_store_password_if_offline = True
# krb5_realm = IPA.EXAMPLE.COM
ipa_domain = ipa.example.com
ipa_hostname = entl01.ipa.example.com
# Server Specific Settings
ipa_server = entl01.ipa.example.com
ipa_server_mode = True
subdomain_homedir = %o
fallback_homedir = /home/%u
default_shell = /bin/bash
id_provider = ipa
auth_provider = ipa
access_provider = ipa
chpass_provider = ipa
ldap_tls_cacert = /etc/ipa/ca.crt
[sssd]
services = nss, sudo, pam, ssh
domains = ipa.example.com
[nss]
filter_users = root,ldap,named,avahi,haldaemon,dbus,radiusd,news,nscd,tomcat,activemq,informix,oracle,xdba,grid,dbadmin,weblogic,operator,postgres,devolog
memcache_timeout = 600
homedir_substring = /home
[pam]
[sudo]
[autofs]
[ssh]
[pac]
[ifp]
2 years, 3 months
Enumerate users from external group from AD trust
by Bolke de Bruin
Hello,
I have sssd 1.13.0 working against a FreeIPA 4.2 domain. This domain has a trust relationship with an Active Directory domain.
One of the systems we are using (Apache Ranger) needs to enumerate all users in a group by (unfortunate) design. This is done with "getent group". During this enumeration the full user list for a group that has a nested external member group* is not always returned, so we tried "getent group mygroup" to get more detail. Unfortunately this does not work consistently either; sometimes it returns the members and sometimes it does not:
[root@master centos]# getent group ad_users
ad_users:*:1950000004:
[root@master centos]# id bolke(a)ad.local
UID=1796201107(bolke(a)ad.local) GID=1796201107(bolke(a)ad.local) groups=1796201107(bolke(a)ad.local),1796200513(domain users@ad.local),1796201108(test(a)ad.local)
[root@master centos]# getent group ad_users
ad_users:*:1950000004:bolke@ad.local <mailto:bolke@ad.local>
If I clear the cache (sss_cache -E) the entry is gone again:
[root@master centos]# getent group ad_users
ad_users:*:1950000004:
My question is: how do I get sssd to consistently enumerate *all* users in a group?
Thanks!
Bolke
* https://docs.fedoraproject.org/en-US/Fedora/18/html/FreeIPA_Guide/trust-g...
4 years
SSSD strangeness
by simonc99@hotmail.com
Hi All
We've got SSSD 1.13.0 installed as part of a Centos 7.2.1511 installation.
We've used realmd to join the host concerned to our 2008R2 AD system. This went really well, and consequently we've been using SSSD to provide login services and Kerberos integration for our fairly large Hadoop system.
The authconfig that's implicitly run as part of realmd produces the following sssd.conf:
[sssd]
domains = <joined domain>
config_file_version = 2
services = nss, pam
[pam]
debug_level = 0x0080
[nss]
timeout = 20
force_timeout = 600
debug_level = 0x0080
[domain/<joined domain>]
ad_domain = <joined domain>
krb5_realm = <JOINED DOMAIN>
realmd_tags = manages-system joined-with-samba
cache_credentials = true
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False
fallback_homedir = /home/%u@%d
access_provider = simple
simple_allow_groups = <AD group allowing logins>
krb5_use_kdc_info = False
entry_cache_timeout = 300
debug_level = 0x0080
ad_server = <active directory server>
As I've said - this works really well. We did have some stability issues initially, but they've been fixed by defining the 'ad_server' rather than using autodiscovery.
Logins work fine, kerberos TGTs are issued on login, and password changes are honoured correctly.
However, in general day-to-day use we have noticed a few anomalies that we just can't track down.
Firstly (this has happened a few times), a user will change their AD password (via a Windows PC).
Subsequent logins - sometimes with specific client software - fail with
pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=<remote PC name> user=<username>
pam_sss(sshd:auth): received for user <username>: 17 (failure setting user credentials)
So in this example, the person concerned has changed their AD password. Further attempts to access this system via SSH work fine. However, using SFTP doesn't work (the above is output into /var/log/secure).
There are no local controls on sftp logins, and the user concerned was working fine (using both sftp and ssh) until they updated their password.
There is no separate sftp daemon running, and it currently affects only one individual (though we have seen some very similar instances before).
The second issue we have is around phantom groups in AD.
Hadoop uses an id -Gn command to see group membership for authorisation.
With some users - we've seen 6 currently - we see certain groups failing to be looked up:
id -Gn <username>
id: cannot find name for group ID xxxxyyyyy
<group name> <group name> <group name> <group name> <etc...>
The xxxxyyyyy indicates:
xxxx = hashed realm name
yyyyy = RID from group in AD
We can't find any group with that number on the AD side!
We can work around this by adding a local group (in /etc/group) for the affected GIDs. With that in place, 'id -Gn' runs correctly and the Hadoop namenode functions properly, but it is only a workaround and we'd like to get to the bottom of the issue.
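For reference, the workaround is nothing more elaborate than a local entry like this (the GID and group name here are placeholders):
groupadd -g 1234509876 phantom-ad-group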
Rather than flooding this post now with logfiles, just thought I'd see if this looked familiar to anyone. Happy to upload any logs, amend logging levels, etc.
Many thanks
Simon
4 years
sssd[be[1320]: Backend is offline
by Harald Dunkel
Hi folks,
sssd 1.16.3-1 (rebuilt for Debian 9), systemd
At boot time sssd_nss fails to initialize. systemctl status sssd
shows
root@srvl061:~# systemctl status sssd
* sssd.service - System Security Services Daemon
Loaded: loaded (/lib/systemd/system/sssd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-11-22 11:57:30 CET; 46s ago
Main PID: 1312 (sssd)
Tasks: 5 (limit: 7372)
CGroup: /system.slice/sssd.service
|-1312 /usr/sbin/sssd -i --logger=files
|-1345 /usr/lib/x86_64-linux-gnu/sssd/sssd_be --domain example.com --uid 0 --gid 0 --logger=files
|-1533 /usr/lib/x86_64-linux-gnu/sssd/sssd_nss --uid 0 --gid 0 --logger=files
|-1534 /usr/lib/x86_64-linux-gnu/sssd/sssd_pam --uid 0 --gid 0 --logger=files
`-1535 /usr/lib/x86_64-linux-gnu/sssd/sssd_pac --uid 0 --gid 0 --logger=files
Nov 22 11:57:25 srvl061.ac.example.com systemd[1]: Starting System Security Services Daemon...
Nov 22 11:57:25 srvl061.ac.example.com sssd[1312]: Starting up
Nov 22 11:57:25 srvl061.ac.example.com sssd[be[1345]: Starting up
Nov 22 11:57:30 srvl061.ac.example.com sssd[1533]: Starting up
Nov 22 11:57:30 srvl061.ac.example.com sssd[1534]: Starting up
Nov 22 11:57:30 srvl061.ac.example.com sssd[1535]: Starting up
Nov 22 11:57:30 srvl061.ac.example.com systemd[1]: Started System Security Services Daemon.
Nov 22 11:57:45 srvl061.ac.example.com sssd[be[1345]: Backend is offline
Apparently this is a problem with resolvconf generating /etc/resolv.conf at boot time. If I replace it with a static file, the problem goes away.
The question is: how can I tell systemd to wait for resolv.conf? Is there some timeout in the backend I could adjust? Does the backend wait for the network at all?
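Would a drop-in along these lines be the intended way to do it (just a guess on my side, and it assumes the relevant *-wait-online service is enabled), or is there a better knob inside sssd itself?
# /etc/systemd/system/sssd.service.d/wait-online.conf
[Unit]
Wants=network-online.target
After=network-online.target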
Every helpful comment is highly appreciated
Regards
Harri
4 years, 6 months
yubikey-based pkinit stopped working when switching from sssd 1.15.2/Ubuntu 16.04 to sssd 1.16.1/Ubuntu 18.04
by tallinn1960@yahoo.de
My client has a working setup of sssd/kerberos/ldap utilizing yubikeys and pkinit as the login mechanism, based on sssd 1.15.2 and Ubuntu 16.04.
My client wants to move from Ubuntu 16.04 LTS to Ubuntu 18.04 LTS. A test installation of the latter, with the corresponding sssd version 1.16.1, does not allow yubikey-based login, although both kinit and p11_child see the yubikey and the certificate on it; kinit with the yubikey works.
Analysis of the logs shows that krb5_child behavior has changed. The function answer_pkinit is called with kr->pd->cmd set to SSS_PAM_AUTHENTICATE and kr->pd->authtok set to SSS_AUTHTOK_TYPE_SC_PIN in 1.15.2, but with kr->pd->cmd set to SSS_PAM_PREAUTH and kr->pd->authtok set to 0 in 1.16.1, causing the function to skip all pkinit/smartcard-related prompting and processing.
Both installations are using the same sssd.conf, krb5.conf, etc.
How shall we fix this?
4 years, 10 months
Cannot authenticate user from parent domain in a child-domain joined server
by Chris J
Hi all,
I'm having problems getting sssd to authenticate a user from the parent domain in the same forest. In brief, it's an Ubuntu 18.04 box with sssd 1.16.1: the box was joined to the domain 'development.cseserve.com' with 'realm join'. Users in that domain can authenticate successfully, but users in the parent domain cseserve.com cannot.
After some reading I found the sssctl command, and discovered that sssd.conf needed a tweak to add 'ifp' to the list of services to make the user-checks work. The configuration file and the output of various sssctl checks are at the bottom of this email.
If I attempt to authenticate as a user in cseserv.com, I get:
root@hs-svn-02:/var/log/sssd# sssctl user-checks chris.johnson(a)cseserv.com -a auth
user: chris.johnson(a)cseserv.com
action: auth
service: system-auth
SSSD nss user lookup result:
- user name: chris.johnson(a)cseserv.com
- user id: 715601141
- group id: 715601141
- gecos: Chris Johnson
- home directory: /home/chris.johnson(a)cseserv.com
- shell: /bin/bash
SSSD InfoPipe user lookup result:
- name: chris.johnson(a)cseserv.com
- uidNumber: 715601141
- gidNumber: 715601141
- gecos: Chris Johnson
- homeDirectory:
- loginShell:
testing pam_authenticate
Password:
pam_authenticate for user [chris.johnson(a)cseserv.com]: Authentication failure
PAM Environment:
- no env -
root@hs-svn-02:/var/log/sssd#
Now in /var/log/syslog, when I tail -f during sssctl user-checks, I get
the error:
Dec 11 10:59:20 hs-svn-02 [sssd[krb5_child[20446]]]: Server not found in Kerberos database
Dec 11 10:59:20 hs-svn-02 [sssd[krb5_child[20446]]]: Server not found in Kerberos database
I can't see any other pertinent errors in the log files, but I'm happy to provide more if I know what to send over :-)
This error does not occur for a user in the development.cseserv.com domain, which completes successfully:
[...deleted the preamble...]
testing pam_authenticate
Password:
pam_authenticate for user [cjohnson(a)development.cseserve.com]: Success
PAM Environment:
- KRB5CCNAME=FILE:/tmp/krb5cc_376801009_vS8U1c
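If it would help, I can also run a raw kinit against the parent realm with tracing enabled and post the output (guessing at the realm casing here):
KRB5_TRACE=/dev/stderr kinit chris.johnson@CSESERV.COM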
I've tried various things based on various searches, including creating an /etc/krb5.conf file to specify encryption protocols; after a restart this did not change the behaviour:
[libdefaults]
allow_weak_crypto = true
default_tgs_enctypes = arcfour-hmac-md5 des-cbc-crc des-cbc-md5
default_tkt_enctypes = arcfour-hmac-md5 des-cbc-crc des-cbc-md5
rdns=false
dns_lookup_kdc = true
Additionally I've tried explicitly declaring the cseserv domain as a
trusted domain in sssd.conf (based on
https://docs.pagure.org/SSSD.sssd/users/ad_provider.html#etc-sssd-sssd-conf),
and this failed as well:
[sssd]
domains = development.cseserv.com, cseserv.com
{...rest unchanged...}
[domain/development.cseserve.com/cseserve.com]
ad_server = hs-dc-01.cseserve.com
What obvious thing am I missing? From what I'm reading, this should
work.
Regards,
Chris
====================================================================
Sanity checking the domain configuration:
realm list gives:
root@hs-svn-02:/var/log/sssd# realm list
development.cseserv.com
type: kerberos
realm-name: DEVELOPMENT.CSESERV.COM
domain-name: development.cseserv.com
configured: kerberos-member
server-software: active-directory
client-software: sssd
required-package: sssd-tools
required-package: sssd
required-package: libnss-sss
required-package: libpam-sss
required-package: adcli
required-package: samba-common-bin
login-formats: %U(a)development.cseserv.com
login-policy: allow-realm-logins
root@hs-svn-02:/var/log/sssd#
sssctl domain-list shows that the parent domain was auto-discovered:
root@hs-svn-02:/var/log/sssd# sssctl domain-list
development.cseserve.com
test.cseserve.com
hst.cseserve.com
cseserve.com
root@hs-svn-02:/var/log/sssd#
sssctl domain-status development.cseserv.com gives:
Online status: Online
Active servers:
AD Global Catalog: hs-dc-01.development.cseserv.com
AD Domain Controller: hs-dc-01.development.cseserv.com
Discovered AD Global Catalog servers:
- hs-dc-01.development.cseserv.com
- hs-dc-02.development.cseserv.com
- gsh-dc-04.cseserv.com
- gsh-dc-05.cseserv.com
- gsh-dc-01.cseserv.com
Discovered AD Domain Controller servers:
- hs-dc-01.development.cseserv.com
- hs-dc-02.development.cseserv.com
sssctl domain-status cseserv.com gives:
root@hs-svn-02:/var/log/sssd# sssctl domain-status cseserv.com
Online status: Online
Active servers:
AD Domain Controller: gsh-dc-04.cseserv.com
AD Global Catalog: hs-dc-01.development.cseserv.com
Discovered AD Domain Controller servers:
- gsh-dc-04.cseserv.com
- gsh-dc-01.cseserv.com
- gsh-dc-05.cseserv.com
- gln-dc-01.cseserv.com
Discovered AD Global Catalog servers:
- hs-dc-01.development.cseserv.com
- hs-dc-02.development.cseserv.com
- gsh-dc-04.cseserv.com
- gsh-dc-05.cseserv.com
- gsh-dc-01.cseserv.com
My sssd.conf file:
[sssd]
domains = development.cseserve.com
config_file_version = 2
services = nss, pam, ifp
debug_level = 9
[domain/development.cseserve.com]
ad_domain = development.cseserve.com
krb5_realm = DEVELOPMENT.CSESERVE.COM
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad
4 years, 11 months
Can I have sssd manage known_hosts with LDAP?
by George Diamantopoulos
Hello all,
I've been trying (and failing) to configure sssd to use LDAP to retrieve hosts' public SSH keys. I'd like to ask whether this is possible with plain LDAP at all, or whether this feature is only supported with FreeIPA.
If yes, what search filter does sssd use to look up host keys in LDAP? I'm using the sshPublicKey attribute for both people and machines in my LDAP schema, but I can't figure out which attribute is checked to determine the hostname.
User SSH public key retrieval works fine in my configuration. I'm using sssd 1.15, which ships with Debian stretch.
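For reference, the client-side wiring I'm aiming for is the usual sss_ssh_knownhostsproxy setup (paths as I understand them from the FreeIPA-oriented docs, so they may well be wrong for a plain LDAP backend):
# /etc/ssh/ssh_config
Host *
    ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
    GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts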
Thanks!
BR,
George
4 years, 11 months
sssd AD authentication working; sssd autofs against LDAP / rfc2307bis not working...
by Spike White
Sssd experts,
This is all on RHEL7.
I have sssd properly authenticating against AD for my multi-domain forest.
All good, even cross-domain auth (as long as I don't use tokengroups).
Our company’s AD implementation is RFC2307bis schema-extended.
Now, for complicated reasons, I'm told I need to get NIS automount maps and NIS netgroups into AD and working on the clients (via sssd) as well.
As a first testing step, I've stood up an OpenLDAP server on another RHEL7 host and schema-extended it with RFC 2307bis:
http://bubblesorted.raab.link/content/replace-nis-rfc2307-rfc2307bis-sche...
I added an initial automap.
When I query via ldapsearch, all looks good:
[root@spikerealmd02 sssd]# ldapsearch -LLL -x -H ldap://austgcore17.us.example.com -b 'ou=automount,ou=admin,dc=itzgeek,dc=local' -s sub -D 'cn=ldapadm,dc=itzgeek,dc=local' -w ldppassword 'objectClass=automountMap'
dn: automountMapName=auto.master,ou=automount,ou=admin,dc=itzgeek,dc=local
objectClass: top
objectClass: automountMap
automountMapName: auto.master
dn: automountMapName=auto.home,ou=automount,ou=admin,dc=itzgeek,dc=local
objectClass: top
objectClass: automountMap
automountMapName: auto.home
[root@spikerealmd02 sssd]# ldapsearch -LLL -x -H ldap://austgcore17.us.example.com -b 'ou=automount,ou=admin,dc=itzgeek,dc=local' -s sub -D 'cn=ldapadm,dc=itzgeek,dc=local' -w ldppassword 'objectClass=automount'
dn: automountKey=/home2,automountMapName=auto.master,ou=automount,ou=admin,dc=itzgeek,dc=local
objectClass: top
objectClass: automount
automountKey: /home2
automountInformation: ldap:automountMapName=auto.home,ou=automount,ou=admin,dc=itzgeek,dc=local --timeout=60 --ghost
dn: automountKey=/,automountMapName=auto.home,ou=automount,ou=admin,dc=itzgeek,dc=local
objectClass: top
objectClass: automount
automountKey: /
automountInformation: -fstype=nfs,rw,hard,intr,nodev,exec,nosuid,rsize=8192,wsize=8192 austgcore17.us.example.com:/export/&
[root@spikerealmd02 sssd]#
Next, the sssd client configuration.
On my working sssd client, I added "autofs" to the services line and added an [autofs] section. That is, I changed /etc/sssd/sssd.conf as follows:
[sssd]
…
services = nss,pam,autofs
…
[autofs]
debug_level = 9
autofs_provider = ldap
ldap_uri= ldap://austgcore17.us.example.com
ldap_schema = rfc2307bis
ldap_default_bind_dn = cn=ldapadm,dc=itzgeek,dc=local
ldap_default_authtok = ldppassword
ldap_autofs_search_base = ou=automount,ou=admin,dc=itzgeek,dc=local
ldap_autofs_map_object_class = automountMap
ldap_autofs_map_name = automountMapName
ldap_autofs_entry_object_class = automount
ldap_autofs_entry_key = automountKey
ldap_autofs_entry_value = automountInformation
[nss]
debug_level = 9
I appended sss to the automount line in /etc/nsswitch.conf:
automount: files sss
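Once the service is up, the sanity check I'm planning is simply to dump what automount itself sees from every nsswitch source (sss included):
automount -m    # --dumpmaps; should list auto.master/auto.home if the sss source works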
Yet when I try to restart the autofs service, it (eventually) times out:
[root@spikerealmd02 sssd]# systemctl restart sssd
[root@spikerealmd02 sssd]# systemctl restart autofs
Job for autofs.service failed because a timeout was exceeded. See
"systemctl status autofs.service" and "journalctl -xe" for details.
journalctl -xe reports this:
Dec 03 11:14:09 spikerealmd02.us.example.com [sssd[ldap_child[9653]]][9653]:
Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]:
Preauthentication failed. Unable to create GSSAPI-encrypted LDAP connection.
…
Dec 03 11:14:15 spikerealmd02.us.example.com [sssd[ldap_child[9680]]][9680]:
Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]:
Preauthentication faile
Dec 03 11:14:22 spikerealmd02.us.example.com systemd[1]: autofs.service
start operation timed out. Terminating.
Dec 03 11:14:22 spikerealmd02.us.example.com systemd[1]: Failed to start
Automounts filesystems on demand.
-- Subject: Unit autofs.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit autofs.service has failed.
--
-- The result is failed.
Dec 03 11:14:22 spikerealmd02.us.example.com systemd[1]: Unit
autofs.service entered failed state.
Dec 03 11:14:22 spikerealmd02.us.example.com systemd[1]: autofs.service
failed.
Dec 03 11:14:22 spikerealmd02.us.example.com polkitd[897]: Unregistered
Authentication Agent for unix-process:9073:241010 (system bus :1.132,
object path /org/freedeskt
/var/log/sssd/sssd_nss.log looks like this:
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [sysdb_get_certmap] (0x0020): Failed to read certmap config, skipping.
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55f7263a1fc0
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55f7263a2080
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Running timer event 0x55f7263a1fc0 "ltdb_callback"
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Destroying timer event 0x55f7263a2080 "ltdb_timeout"
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Ending timer event 0x55f7263a1fc0 "ltdb_callback"
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [sysdb_get_certmap] (0x0400): No certificate maps found.
What is wrong? BTW, for now I don't care about a GSSAPI SASL LDAP bind; a simple bind is what I want.
BTW, I have not modified /etc/autofs.conf. I considered it, but it seems that doing so would bypass nss/sssd. Also, I'd then have another set of SASL credentials sitting out there that I'd have to rotate periodically on all clients, instead of relying on SSSD's machine account, which is auto-rotated every 30 days.
4 years, 11 months
filter out disabled ipa user
by Stijn De Weirdt
hi all,
We are using ipa as id_provider/access_provider/auth_provider for a domain, and we want to completely hide users that are disabled in IPA. Right now, disabled users are still known on the hosts (e.g. "getent passwd userxyz" works and returns the correct user ID). We would like "getent passwd userxyz" to return nothing; in particular, we want that user ID to be unable to start any new processes, and files belonging to the disabled user on NFS mounts to show up as owned by nobody, and so on.
Is there any way to filter these users out? Perhaps there is some config setting I overlooked, or an LDAP filter I can use?
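The kind of thing I had in mind, if sssd can be told to apply it anywhere, is a filter on the attribute IPA uses to mark disabled accounts, i.e. something along the lines of:
(&(objectClass=posixAccount)(!(nsAccountLock=TRUE)))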
many thanks,
stijn
4 years, 11 months