Re: group member denied access to directory
by Spike White
Is this an NFS mount point? If so, maybe you're hitting the inherent NFS "16
supplemental groups" limitation: with AUTH_SYS, the RPC credential only
carries the first 16 supplemental GIDs, so the server never sees the rest.
Spike
On Fri, Nov 20, 2020 at 2:21 PM Tung, Paul <PTung@mednet.ucla.edu> wrote:
> I'm getting permission denied when trying to access a directory owned by
> root, but with a group that I'm a member of.
>
> I'm getting: -bash: cd: testdir: Permission denied
> [...]
Problem with passing user credentials through the password PAM stack
by Vladimir Vakhlamov
Hello,
I would be grateful if somebody could offer any advice.
My setup and environment:
I have these lines at the top of all the main PAM password configs (just for testing):
password [default=die success=ok] my_custom_pam.so
password [default=die success=done] pam_sss.so use_authtok use_first_pass
my_custom_pam.so contains two simple functions which supply the correct test user credentials:
#include <security/pam_modules.h>

PAM_EXTERN int pam_sm_authenticate(pam_handle_t *pamh, int flags, int argc, const char **argv) {
    /* hand the known test password to the rest of the auth stack */
    pam_set_item(pamh, PAM_AUTHTOK, "q1w2e3r4t5y6");
    return PAM_SUCCESS;
}

PAM_EXTERN int pam_sm_chauthtok(pam_handle_t *pamh, int flags, int argc, const char **argv) {
    /* supply both the old and the new token for the password change */
    pam_set_item(pamh, PAM_OLDAUTHTOK, "q1w2e3r4t5y6");
    pam_set_item(pamh, PAM_AUTHTOK, "q1w2e3r4t5y6");
    return PAM_SUCCESS;
}
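To exercise the stack without a full login, a tool like pamtester can run the
same operations (a sketch; pamtester is a separate utility, "su" is the
service whose config carries the test lines above, and the user is the test
account from the logs below):

pamtester -v su test_user@dc.test chauthtok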
Initially I worked with a FreeIPA client. According to FreeIPA policy, a new user should change his password during first login.
What I get is a successful auth stage, but pam_sss can't change the password due to an error response from the server.
...
Nov 27 08:35:04 test su[68635]: my_custom_pam(su:auth): [DEBUG] Debug: 1, Slot: 0
Nov 27 08:35:04 test su[68635]: my_custom_pam(su:auth): PAM_SUCCESS
Nov 27 08:35:04 test su[68635]: pam_sss(su:auth): authentication failure; logname= uid=1000 euid=0 tty=/dev/pts/6 ruser=user rhost= user=test_user@dc.test
Nov 27 08:35:04 test su[68635]: pam_sss(su:auth): received for user test_user@dc.test: 12
Nov 27 08:35:04 test su[68635]: pam_sss(su:account): User info message:
Nov 27 08:35:04 test su[68635]: my_custom_pam(su:chauthtok): [DEBUG] Debug: 1, Slot: 0
Nov 27 08:35:04 test su[68635]: my_custom_pam(su:chauthtok): PRELIM
Nov 27 08:35:04 test su[68635]: my_custom_pam(su:chauthtok): EXPIRED
Nov 27 08:35:04 test su[68635]: my_custom_pam(su:chauthtok): PAM_SUCCESS
Nov 27 08:35:04 test su[68635]: pam_sss(su:chauthtok): User info message: Old password not accepted.
Nov 27 08:35:04 test su[68635]: pam_sss(su:chauthtok): Authentication failed for user test_user@dc.test: 4 (System error)
...
Next I tried the same experiment against Active Directory and got the same result: the server does not accept the user's credentials.
I can't get through the chauthtok preliminary step because the old user password is not accepted, and I can't work out why.
Moreover, if I remove the use_first_pass parameter, pam_sss prompts for the current password. In that case I enter the same password, it works, and the password is changed successfully.
No idea why.
Thanks in advance.
Re: primary LDAP server reconnect timeout after failover to backup
by Michael Ströder
On 11/23/20 10:23 AM, Jochen Schaefer wrote:
> I have the following design problem regarding the primary LDAP server
> reconnect timeout value:
> from time to time we need to recreate the DBs of the primary LDAP
> server via syncrepl. To do that we stop the primary LDAP server,
> delete its DB files and start it again.
>
> The sssd client behaves as expected:
> * failover to the backup LDAP server
> * check, after an internal timeout of 31 seconds, whether the primary is available again
> * switch back to the primary LDAP server
>
> The problem here is that the primary is still not done with its sync
> replication
This is a general problem: OpenLDAP takes some time in the refresh phase,
just like any other database server with a significant number of DB entries
to replicate during initialization.
You could also try to reduce the amount of time needed for initializing
the replica (maybe you already did). But the time period of the refresh
phase will never be zero.
I'd recommend solving that with an operational procedure which blocks LDAP
access from regular LDAP clients (e.g. with a temporary host-based firewall
rule) until monitoring shows that the replica is in sync again.
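For example (a sketch, assuming firewalld and LDAPS on port 636; a real rule
would exempt the replication partner and the monitoring host):

firewall-cmd --add-rich-rule='rule family="ipv4" port port="636" protocol="tcp" reject'
# wait until the replica's contextCSN matches the provider's, e.g.:
ldapsearch -x -H ldaps://replica.example.com -s base -b "dc=example,dc=com" contextCSN
firewall-cmd --remove-rich-rule='rule family="ipv4" port port="636" protocol="tcp" reject'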
More sophisticated approaches would involve using load balancer(s) with
equally sophisticated replica health checks.
Ciao, Michael.
primary LDAP server reconnect timeout after failover to backup
by Jochen Schaefer
Hi list,
I have the following design problem regarding the primary LDAP server reconnect timeout value:
from time to time we need to recreate the DBs of the primary LDAP server via syncrepl. To do that we stop the primary LDAP server,
delete its DB files and start it again.
The sssd client behaves as expected:
* failover to the backup LDAP server
* check, after an internal timeout of 31 seconds, whether the primary is available again
* switch back to the primary LDAP server
The problem here is that the primary is still not done with its sync replication, so the sssd client connects to the primary, gets negative
results for user and group information, and returns authentication failures to ssh attempts and other authentication requests.
We are searching for an option that either keeps the client connected to the backup LDAP server even after the primary LDAP server has come back,
or sets a static timeout (e.g. 5 minutes) after which the client reconnects to the primary LDAP server.
Any idea how to accomplish this?
I already thought about setting a temporary firewall rule on the primary LDAP server.
But I would rather have an option on the client side to bypass this problem.
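For what it's worth, newer SSSD releases appear to have a domain option aimed
at exactly this; a sketch, assuming the option is available in the installed
version (check sssd.conf(5) before relying on it):

[domain/example.com]
# how often SSSD retries the primary server after failing over to a backup;
# 300 would give the 5-minute window described above
failover_primary_timeout = 300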
Thanks,
--
Jochen
Funky machine accounts created, then adcli join will not succeed.
by Spike White
All,
This is just an annoyance that occurs periodically, and we can't figure out
why. We know how to remediate it once seen.
Every now and then, on a new build the sssd join/configure will fail.
For example, a server provisioner today built 10 boxes and 2 failed. Upon
closer inspection, we see that the AD domain has machine accounts with funky
names.
For example, three VMs were built: ausflinfsfdcap01 through 03. 01 and 02
built fine (sssd installed, adcli join succeeded, life was good), and we find
the usual machine accounts in the usual OU:
CN=ausflinfsfdcap01, CN=ausflinfsfdcap02
On 03, the adcli join failed. In AD, we find the following funky machine
accounts (in the usual OU):
CN=AUSFLINFSFDCAP0\0ACNF:5020ab3d-243a-4ef1-827b-d421c0dcf3d0
CN=AUSFLINFSFDCAP0
The first machine account name is fairly typical when this failure occurs.
The second is a type of funky name I've never seen before: a truncated
hostname.
If I try adcli join again right now, it will fail (because of these
funky-named machine accounts).
I delete these funky machine accounts via ldapdelete. Example:
ldapdelete -H ldap://ausdcamer.example.com
'CN=AUSFLINFSFDCAP0\0ACNF:5020ab3d-243a-4ef1-827b-d421c0dcf3d0,OU=Servers,OU=UNIX,DC=example,DC=com'
Then I delete the /etc/krb5.keytab file (if it exists) and re-run the adcli
join, which then succeeds.
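As an aside, the \0ACNF:<GUID> form looks like the rename AD applies when the
same name is created on two DCs (a replication conflict), which would suggest
two join attempts racing. A pre-check sketch, using the server and OU from
above (the wildcard filter is my assumption, based on the truncated names we
have seen):

ldapsearch -x -H ldap://ausdcamer.example.com -b 'OU=Servers,OU=UNIX,DC=example,DC=com' '(cn=AUSFLINFSFDCAP0*)' dn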
So like I say, we know how to work around this failure mode; it's just a
nuisance at this point. It usually occurs in well under 10% of builds.
But does anyone know why these funky-named machine accounts arise? And how
to avoid this?
Spike
group member denied access to directory
by Tung, Paul
Hi,
I was hoping someone on this list might be able to help.
I'm getting permission denied when trying to access a directory owned by root, but with a group that I'm a member of.
I'm getting: -bash: cd: testdir: Permission denied
I have the following scenario:
Running CentOS Linux release 7.6.1810 and sssd 1.16.5
I have a mount set up /data/testdir
As root, I chown/chmod testdir:
chown root:testgrpa testdir
chmod 770 testdir
When I log in as user1, I currently can't cd into /data/testdir
It gives:
-bash: cd: testdir: Permission denied
user1 is a member of testgrpa:
OUTPUT of id user1:
uid=129371342(user1) gid=129371342(user1) groups=129371342(user1) ,29042750285(group1),1435459822(group2),3456349245(group3),......,239705249(testgrpa)
OUTPUT of getent group testgrpa:
testgrpa:*: 239705249:user1,user2,user2,user4,.....,user50
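Two checks that seem relevant here, as a sketch (supplemental groups are only
picked up at login time via initgroups, and sssd caches entries):

id                   # run as user1 inside the failing session; is testgrpa listed?
sss_cache -u user1   # as root: expire the cached user entry, then log in again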
CONTENTS OF sssd.conf:
[sssd]
config_file_version = 2
services = nss,pam
domains = dept.domain.com
[nss]
filter_users = root
filter_groups = root
[pam]
[domain/dept.domai.com]
id_provider = ldap
auth_provider = ldap
access_provider = ldap
ldap_use_tokengroups = false
enumerate = false
cache_credentials = True
case_sensitive = false
ignore_group_members = false
auto_private_groups = true
ldap_schema = ad
ldap_uri = ldaps://ldapsserver.dept.domain.com:636
ldap_user_search_base = dc=ad,dc=dept,dc=domain,dc=com
ldap_group_search_base = OU=Security Groups,OU=Groups,dc=ad,dc=dept,dc=domain,dc=com?sub?(|(cn=domain users)(cn=testgrpa))
ldap_referrals = False
ldap_group_nesting_level = 3
ldap_tls_reqcert = allow
ldap_tls_cacertdir = /etc/sssd
ldap_use_tokengroups = True
ldap_id_mapping = True
override_homedir = /mnt/exports/shared/home/%u
fallback_homedir = /shared/home/%u
default_shell = /bin/bash
ldap_access_order = filter, expire
ldap_account_expire_policy = ad
ldap_access_filter = (|(memberOf=cn=testgrpa,OU=Security Groups,OU=Groups,DC=ad,DC=dept,DC=domain,DC=com))
ldap_default_bind_dn = <service account>
ldap_default_authtok_type = obfuscated_password
ldap_default_authtok = <authtok>
Thanks,
Paul T
sssd with samba
by Edouard Guigné
Dear sssd users,
I would like to get some information about using sssd with Samba
(CentOS 7, Samba 4.8.3).
I need this because I have configured a Samba share whose access is handled
through sssd; authentication is against a Windows AD.
My /etc/nsswitch.conf is configured only with sssd:
passwd: files sss
shadow: files sss
group: files sss
For another purpose, I also set up sftp access, likewise configured with sssd
against the AD.
I have followed some discussions on the Samba users list about Samba + sssd.
I would like to understand whether there are issues with sssd and Samba 4.8.3
on CentOS 7, or whether they only concern the upcoming RHEL 8.
The RHEL 8 documentation states this:
"Red Hat only supports running Samba as a server with the winbindd service to
provide domain users and groups to the local system. Due to certain
limitations, such as missing Windows access control list (ACL) support and NT
LAN Manager (NTLM) fallback, SSSD is not supported."
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/assembly_using-samba-as-a-server_deploying-different-types-of-servers
What's confusing is that the RHEL 7 documentation says:
"Prior to Red Hat Enterprise Linux 7.1, only Winbind provided this
functionality. In Red Hat Enterprise Linux 7.1 and later, you no longer need
to run Winbind and SSSD in parallel to access SMB shares. For example,
accessing the Access Control Lists (ACLs) no longer requires Winbind on SSSD
clients."
and
"4.2.2. Determining Whether to Use SSSD or Winbind for SMB Shares
For most SSSD clients, using SSSD is recommended:"
and most worrisome, in my use case:
"In environments with direct Active Directory integration where the clients
use SSSD for general Active Directory user mappings, using Winbind for the
SMB ID mapping instead of SSSD can result in inconsistent mapping."
In my case, running Samba 4.8.3 with SSSD on CentOS 7, do I need to:
- enable and start the winbind service in conjunction with sssd?
- or is sssd alone enough with Samba?
- do I have to fear issues in future releases regarding sssd's support of
Samba, especially ACL support?
A nsswitch.conf like :
passwd: files sss winbind
shadow: files sss winbind
group: files sss winbind
or
passwd: files winbind sss
shadow: files winbind sss
group: files winbind sss
Neither seems to work; I have tested both and the behavior is not stable.
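From the discussions mentioned above, the sssd-based setups seem to keep
nsswitch.conf on "files sss" only, with a minimal smb.conf along these lines
(a sketch; the realm and workgroup values are placeholders, and this assumes
the host keytab already exists):

[global]
   security = ads
   realm = DEPT.DOMAIN.COM
   workgroup = DEPT
   kerberos method = secrets and keytab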
Best Regards,
Edouard
SSH keys and kerberos ticket
by Winberg Adam
Hi,
in our environment all NFS shares are mounted with 'sec=krb5' and user homedirs are on NFS, so when users log in via SSH they need a Kerberos ticket to read their homedir. SSH with GSSAPIAuthentication would solve this, and of course user/password works as well. But for various reasons we want to restrict login to SSH keys only, with the key stored non-exportable on a hard token (smartcard/YubiKey) and the public part stored in AD (accessed via the sshd config option 'AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys'). The problem is that the user does not get a Kerberos ticket on login with this scheme, forcing them to use 'kinit', which requires a password, which we don't want to use.
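For reference, the sshd_config side of that key lookup, as a sketch
(AuthorizedKeysCommandUser must be set for the command to run; "nobody" is a
common choice):

AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody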
I've read
https://bugzilla.redhat.com/show_bug.cgi?id=1017651
and
https://fedorahosted.org/freeipa/ticket/4000
The bugzilla is old but contains new, relevant input from users and no new comments from any devs. Are there any new thoughts about making SSSD/sshd capable of retrieving a Kerberos TGT for a user logged in with SSH keys? I understand the security concerns, but having the keys non-exportable on a hard token and storing the public part in AD/IdM should address those issues, don't you think?
Right now we are stuck between two security principles (requiring krb auth for NFS access and using a secure SSH key setup for access) that don't play nicely with each other.
Regards
Adam
user names in Match blocks in sshd_config (when using sssd with AD back-end)...
by Spike White
sssd professionals,
Interesting problem; it seems to be an interaction with the sshd daemon when
using an AD back-end.
When using sssd (with an AD back-end), what should the "Match" blocks in my
/etc/ssh/sshd_config file look like for overriding per-user values?
Right now, my Match blocks look like:
MaxSessions 10
....
Match User SERVICEPPTPRDVRA
MaxSessions 999
ClientAliveInterval 360
ClientAliveCountMax 3
Match User SERVICEPPTPRDDCA
MaxSessions 999
ClientAliveInterval 360
ClientAliveCountMax 3
And in the system log files, it looks like:
Nov 4 10:40:22 peplpc1mom01 sshd[2400354]: debug2: parse_server_config:
config reprocess config len 1479
Nov 4 10:40:22 peplpc1mom01 sshd[2400354]: debug3: checking match for 'User
SERVICEPPTPRDVRA' user SERVICEPPTPRDVRA host 10.175.99.51 addr 10.175.99.51
laddr 10.174.120.203 lport 22
Nov 4 10:40:22 peplpc1mom01 sshd[2400354]: debug1: user SERVICEPPTPRDVRA
matched 'User SERVICEPPTPRDVRA' at line 158
Nov 4 10:40:22 peplpc1mom01 sshd[2400354]: debug3: match found
Nov 4 10:40:22 peplpc1mom01 sshd[2400354]: debug3: reprocess config:159
setting MaxSessions 999
Nov 4 10:40:22 peplpc1mom01 sshd[2400354]: debug3: reprocess config:160
setting ClientAliveInterval 360
Nov 4 10:40:22 peplpc1mom01 sshd[2400354]: debug3: reprocess config:161
setting ClientAliveCountMax 3
Nov 4 10:40:22 peplpc1mom01 sshd[2400354]: debug3: checking match for 'User
SERVICEPPTPRDDCA' user SERVICEPPTPRDVRA host 10.175.99.51 addr 10.175.99.51
laddr 10.174.120.203 lport 22
Nov 4 10:40:22 peplpc1mom01 sshd[2400354]: debug3: match not found
Here's where it gets weird. Because this is an AD back-end, by default
sssd is setting
case_sensitive = false
That is, it matches any case of user names. Examples:
SERVICEPPTPRDVRA
servicepptprdvra
ServicePPTPrdVra
However, I notice that sssd maps all user names to lowercase once you're
fully logged in (which is what's desired).
Example:
[root@peplpc1mom01 ssh]# su -l SERVICEPPTPRDVRA
Last login: Wed Nov 4 10:03:31 CST 2020 on pts/12
[servicepptprdvra@peplpc1mom01 ~]$ id
uid=3001425(servicepptprdvra) gid=3001425(servicepptprdvra)
groups=3001425(servicepptprdvra),1010(amerunixusers),2284221(puppetentrp)
[servicepptprdvra@peplpc1mom01 ~]$
It looks like sshd matches its Match blocks against the raw "user name" input
without any processing, so I'm guessing this happens before any PAM or NSS
processing.
Originally, I naively assumed that my Match blocks should use lowercase
names, as that's what I see on the command line. But now I think they have to
match whatever raw input the user entered.
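If that's right, one workaround is to list the case variants explicitly,
since Match patterns take comma-separated lists and are matched
case-sensitively (a sketch, covering only the spellings we expect to see):

Match User SERVICEPPTPRDVRA,servicepptprdvra
    MaxSessions 999
    ClientAliveInterval 360
    ClientAliveCountMax 3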
Spike
sssd + ad: periodic backend failures on the hour, then restarts every 2 or so minutes
by Josh Sonstroem
Hi SSSD users,
I need some help debugging this strange failure we are having with sssd. We've got an issue on a small CentOS Linux cluster in our data center that uses GSSAPI and Active Directory for auth. On the head nodes the sssd application periodically enters an error state and begins restarting every few minutes, and while it is restarting, host authentication fails. From the logs it appears that the sssd application is in a loop, restarting itself every 2 or so minutes. Numerous logs are written while this issue occurs; I've attached snippets below.
I've been on quite a journey over the past few weeks trying to get sssd+ad working again in our environment. A few months ago our AD admins added a new site to our domain that is unreachable from on premise. This led to a bunch of strange issues until we set up our configs to limit which DCs we talk to, and despite limiting the hosts to the proper AD site and servers, we continue to have periodic errors on these busy head nodes. While in the error state, the sssctl domain-status command usually fails with the following error:
# sssctl domain-status example.com
Unable to get online status [3]: Communication error
org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
Check that SSSD is running and the InfoPipe responder is enabled. Make sure 'ifp' is listed in the 'services' option in sssd.conf.
Unable to get online status
ifp is enabled in the services list, and on working nodes this command works correctly. To describe what happens in the failed state, I'll walk through the logs. The first indication of a problem is that /var/log/messages shows the following entries when the restart loop starts:
12245:Nov 4 10:53:00 host-0 sssd[be[example.com]]: Shutting down
12246:Nov 4 10:53:00 host-0 sssd[be[example.com]]: Starting up
And then these repeat every few minutes.
The krb5_child log shows errors like this while the backend is not working:
(Wed Nov 4 10:51:30 2020) [[sssd[krb5_child[263295]]]] [unpack_buffer] (0x0100): cmd [241] uid [266812] gid [286812] validate [true] enterprise principal [true] offline [false] UPN [user@EXAMPLE.COM]
(Wed Nov 4 10:51:30 2020) [[sssd[krb5_child[263295]]]] [unpack_buffer] (0x0100): ccname: [KEYRING:persistent:286812] old_ccname: [not set] keytab: [/etc/krb5.keytab]
(Wed Nov 4 10:51:30 2020) [[sssd[krb5_child[263295]]]] [check_use_fast] (0x0100): Not using FAST.
(Wed Nov 4 10:51:30 2020) [[sssd[krb5_child[263295]]]] [privileged_krb5_setup] (0x0080): Cannot open the PAC responder socket
(Wed Nov 4 10:51:30 2020) [[sssd[krb5_child[263295]]]] [set_lifetime_options] (0x0100): No specific renewable lifetime requested.
(Wed Nov 4 10:51:30 2020) [[sssd[krb5_child[263295]]]] [set_lifetime_options] (0x0100): No specific lifetime requested.
(Wed Nov 4 10:51:30 2020) [[sssd[krb5_child[263295]]]] [set_canonicalize_option] (0x0100): Canonicalization is set to [true]
(Wed Nov 4 10:51:30 2020) [[sssd[krb5_child[263295]]]] [sss_send_pac] (0x0040): sss_pac_make_request failed [-1][2].
(Wed Nov 4 10:51:30 2020) [[sssd[krb5_child[263295]]]] [validate_tgt] (0x0040): sss_send_pac failed, group membership for user with principal [user\@EXAMPLE.COM] might not be correct.
The nss log shows these failures and then the auto-reconnect, after entries like "[cache_req_common_dp_recv] (0x0040): CR #1171: Data Provider Error: 3, 5, (null)"; see below:
(Wed Nov 4 10:52:28 2020) [sssd[nss]] [cache_req_common_dp_recv] (0x0040): CR #1171: Data Provider Error: 3, 5, (null)
(Wed Nov 4 10:52:58 2020) [sssd[nss]] [cache_req_common_dp_recv] (0x0040): CR #1174: Data Provider Error: 3, 5, (null)
(Wed Nov 4 10:53:00 2020) [sssd[nss]] [sbus_dispatch] (0x0020): Performing auto-reconnect
(Wed Nov 4 10:53:01 2020) [sssd[nss]] [sbus_reconnect] (0x0080): Making reconnection attempt 1 to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 10:53:01 2020) [sssd[nss]] [sbus_reconnect] (0x0080): Reconnected to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 10:53:01 2020) [sssd[nss]] [nss_dp_reconnect_init] (0x0020): Reconnected to the Data Provider.
(Wed Nov 4 10:53:01 2020) [sssd[nss]] [cache_req_common_dp_recv] (0x0040): CR #1178: Data Provider Error: 3, 5, (null)
The ifp log shows the following; note the "killing children" at the bottom:
(Wed Nov 4 10:53:00 2020) [sssd[ifp]] [sbus_dispatch] (0x0020): Performing auto-reconnect
(Wed Nov 4 10:53:01 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Making reconnection attempt 1 to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 10:53:01 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Reconnected to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 10:55:06 2020) [sssd[ifp]] [sbus_dispatch] (0x0020): Performing auto-reconnect
(Wed Nov 4 10:55:07 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Making reconnection attempt 1 to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 10:55:07 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Reconnected to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 10:57:06 2020) [sssd[ifp]] [sbus_dispatch] (0x0020): Performing auto-reconnect
(Wed Nov 4 10:57:07 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Making reconnection attempt 1 to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 10:57:07 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Reconnected to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 10:59:10 2020) [sssd[ifp]] [sbus_dispatch] (0x0020): Performing auto-reconnect
(Wed Nov 4 10:59:11 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Making reconnection attempt 1 to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 10:59:11 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Reconnected to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 11:01:11 2020) [sssd[ifp]] [sbus_dispatch] (0x0020): Performing auto-reconnect
(Wed Nov 4 11:01:12 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Making reconnection attempt 1 to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 11:01:12 2020) [sssd[ifp]] [sbus_reconnect] (0x0080): Reconnected to [unix:path=/var/lib/sss/pipes/private/sbus-dp_example.com]
(Wed Nov 4 11:02:01 2020) [sssd[ifp]] [orderly_shutdown] (0x0010): SIGTERM: killing children
Usually clearing the cache and restarting sssd is enough to fix this issue, but not always. It's hard to diagnose because it affects production and people's ability to even use the cluster, and I can't recreate it on demand.
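For concreteness, the remediation that usually works, as a sketch:

sss_cache -E            # expire every cached entry
systemctl restart sssd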
Any idea of what might be causing these failures?
Thanks in advance,
Josh