[Q] t3222 sssd still showing ipa user after removed from last group
by Petr Cech
Hi all,
I have come back to ticket #3222 "sssd still showing ipa user after removed
from last group" [1]. I have some new findings, but I still do not see
the light at the end of the tunnel.
[1] https://fedorahosted.org/sssd/ticket/3222
I have attached a patch which enables some basic debug output around the
use of the memcache, and two reproducers (with and without memcache)
based on the reproducer written in the ticket.
With the memcache enabled, the issue occurs only sometimes.
The difference between the two cases is the state of the switch after
the sss_nss_mc_getgrnam() call in the _nss_sss_getgrnam_r() function.
Note: code says (for default case):
/* if using the mmaped cache failed,
* fall back to socket based comms */
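For reference, the switch in question looks roughly like this (a
simplified sketch modelled on the client code, with the surrounding
error handling omitted; not a verbatim copy of the sources):

/* Simplified sketch of the memcache fast path in the NSS client. */
#include <errno.h>
#include <string.h>
#include <grp.h>
#include <nss.h>

/* internal client helper, redeclared here only for the sketch */
extern int sss_nss_mc_getgrnam(const char *name, size_t name_len,
                               struct group *result,
                               char *buffer, size_t buflen);

enum nss_status getgrnam_fast_path(const char *name, struct group *result,
                                   char *buffer, size_t buflen, int *errnop)
{
    int ret = sss_nss_mc_getgrnam(name, strlen(name), result, buffer, buflen);

    switch (ret) {
    case 0:
        /* answered directly from the memory-mapped cache */
        *errnop = 0;
        return NSS_STATUS_SUCCESS;
    case ERANGE:
        *errnop = ERANGE;
        return NSS_STATUS_TRYAGAIN;
    case ENOENT:
        /* record not found in the memcache, ask the responder */
        break;
    default:
        /* if using the mmaped cache failed,
         * fall back to socket based comms */
        break;
    }

    /* ... socket based request to the sssd_nss responder follows ... */
    return NSS_STATUS_NOTFOUND;
}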
Could anyone help, please?
The report is:
#--- WRONG
[root@mirach sssd]# date && getent group testgroup
Wed Nov 9 16:01:05 CET 2016
>>> [A] record not found (time[1478703665])
>>> [B] record not found (time[1478703665])
testgroup:*:1703800674:
Number of members added 1
[root@mirach sssd]# sss_cache -UG && date && getent group testgroup
Wed Nov 9 16:01:07 CET 2016
>>> [A] record not found (time[1478703667])
>>> [B] default (time[1478703667])
testgroup:*:1703800674:testuser
Number of members removed 1
[root@mirach sssd]# sss_cache -UG && date && getent group testgroup
Wed Nov 9 16:01:09 CET 2016
>>> mc record expires at [1478703967] | now [1478703669]
>>> [A] MC used (time[1478703669])
testgroup:*:1703800674:testuser
[root@mirach sssd]# grep '>>>' *.log
sssd_nss.log:(Wed Nov 9 16:01:06 2016) [sssd[nss]]
[sss_mmap_set_rec_header] (0x0010): >>> MC STORE expiration [1478703966]
| now [1478703666] | delta [300]
sssd_nss.log:(Wed Nov 9 16:01:06 2016) [sssd[nss]]
[sss_mmap_cache_gr_store] (0x0010): >>> MC STORE [testgroup] [300]
members [0]
sssd_nss.log:(Wed Nov 9 16:01:07 2016) [sssd[nss]]
[sss_mmap_set_rec_header] (0x0010): >>> MC STORE expiration [1478703967]
| now [1478703667] | delta [300]
sssd_nss.log:(Wed Nov 9 16:01:07 2016) [sssd[nss]]
[sss_mmap_cache_gr_store] (0x0010): >>> MC STORE [testgroup] [300]
members [1]
#--- RIGHT
[root@mirach sssd]# date && getent group testgroup
Wed Nov 9 15:56:54 CET 2016
>>> [A] record not found (time[1478703414])
>>> [B] record not found (time[1478703414])
testgroup:*:1703800674:
Number of members added 1
[root@mirach sssd]# sss_cache -UG && date && getent group testgroup
Wed Nov 9 15:56:56 CET 2016
>>> [A] default (time[1478703416])
>>> [B] default (time[1478703416])
testgroup:*:1703800674:testuser
Number of members removed 1
[root@mirach sssd]# sss_cache -UG && date && getent group testgroup
Wed Nov 9 15:56:58 CET 2016
>>> [A] record not found (time[1478703418])
>>> [B] record not found (time[1478703418])
testgroup:*:1703800674:
[root@mirach sssd]# grep '>>>' *.log
sssd_nss.log:(Wed Nov 9 15:56:54 2016) [sssd[nss]]
[sss_mmap_set_rec_header] (0x0010): >>> MC STORE expiration [1478703714]
| now [1478703414] | delta [300]
sssd_nss.log:(Wed Nov 9 15:56:54 2016) [sssd[nss]]
[sss_mmap_cache_gr_store] (0x0010): >>> MC STORE [testgroup] [300]
members [0]
sssd_nss.log:(Wed Nov 9 15:56:56 2016) [sssd[nss]]
[sss_mmap_set_rec_header] (0x0010): >>> MC STORE expiration [1478703716]
| now [1478703416] | delta [300]
sssd_nss.log:(Wed Nov 9 15:56:56 2016) [sssd[nss]]
[sss_mmap_cache_gr_store] (0x0010): >>> MC STORE [testgroup] [300]
members [1]
sssd_nss.log:(Wed Nov 9 15:56:58 2016) [sssd[nss]]
[sss_mmap_set_rec_header] (0x0010): >>> MC STORE expiration [1478703718]
| now [1478703418] | delta [300]
sssd_nss.log:(Wed Nov 9 15:56:58 2016) [sssd[nss]]
[sss_mmap_cache_gr_store] (0x0010): >>> MC STORE [testgroup] [300]
members [0]
Regards
--
Petr^4 Čech
[RFC] Socket-activate responders
by Fabiano Fidêncio
People,
I've spent some time looking at the code and trying to understand which
changes are needed in order to get this task done. I'll start by
writing down how things work nowadays, what we want to achieve, which
parts will need to be touched and which steps I'm going to take.
Please keep in mind that my understanding of the points I'm about to
explain may be wrong (or at least not entirely clear), so in case I
made a mistake feel free to jump in and correct me. Also, whatever we
agree on in this email will be written down on our DesignFeatures page.
Let's start ...
How things work nowadays:
----------------------------------------
Nowadays all the services are started and taken care of by the monitor.
This basically means that the monitor checks which services are listed
to be started, starts them, and registers them in order to relay to
them signals coming from our tools. I'm not sure whether the monitor
relays signals from anything other than the tools, but that doesn't
seem to be the case (please correct me if I'm wrong).
What we want to achieve:
-------------------------------------
While my personal desire is to slowly start killing the monitor,
that's not going to be the case right now.
We don't want to make any change in the code that wouldn't also
accommodate platforms where systemd is not available. That being said,
let's move to the important part ...
What we want to achieve here is to make all responders (at least for
now, yes, just the responders) socket-activatable, as some of them
don't actually have to be running all the time (that's the case, for
example, for the ssh, sudo and ifp responders). We also have to take
into consideration that _no_ change in behaviour should happen, which
means that we still have to honor our commitment to sssd.conf: the
responders explicitly enabled there must still start and keep running
from the moment sssd is running (as we do nowadays).
How we plan to achieve our goal:
-----------------------------------------------
For some parts I have a pretty clear idea of what to do, for others not
so much. The basic idea is to take as much advantage of the systemd
machinery as we can and "remove" as many duties as possible from the
monitor. Let's go through this part by part ...
- Starting the service: the idea is to have a systemd unit for each of
the responders. Whether these units will be automatically generated by
us is a detail that isn't worth attention right now. Let's take a look
at what such a unit would look like:
[Unit]
Description=SSSD @responder@ service provider
Requires=sssd.service
PartOf=sssd.service
[Install]
Also=sssd-@responder@.socket
[Service]
ExecStart=@libexecdir@/sssd/sssd_@responder@ --uid 0 --gid 0 --debug-to-files
There are two options that deserve some explanation of their usage here:
-- "Requires=sssd.service": this option guarantees that sssd will be
up whenever any of the responders is up. Considering that the providers
part won't be changed, the providers will still be initialized
synchronously by the monitor, which only then notifies init that its
start-up has finished; this also means that the providers' sbus socket
will be up.
-- "PartOf=sssd.service": this option guarantees that when
sssd.service is restarted/stopped, all the responders' services will be
restarted/stopped accordingly.
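For illustration, the matching socket unit referenced by Also= could
look roughly like this (the socket path and unit naming here are only
assumptions for the sketch, not a final decision):

[Unit]
Description=SSSD @responder@ responder socket
[Socket]
ListenStream=/var/lib/sss/pipes/@responder@
[Install]
WantedBy=sockets.target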
- Relaying signals: the best approach to replace the registration
currently done by the monitor seems to be creating a named bus for each
of the responders, so the tools can talk to them directly. By "named
bus for each responder" I understand a pipe, named sbus-@responder@,
that will be used to send the dbus messages through. It would require
adapting the tools' code to check whether the sbus-@responder@ pipe
exists and only then send the message, as we won't have a list of the
running responders. This may increase the number of iterations needed
to send a message, but I believe it wouldn't hurt us too much given the
small number of responders we have. It will also help the tools to set
different debug levels for each responder (see the sketch below).
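A minimal sketch of the kind of check a tool would do before sending a
message (the path template and helper name are assumptions for
illustration only, not the actual SSSD code):

/* Sketch only: probe for a per-responder bus pipe before sending a
 * D-Bus message to it.  The path template is an assumption. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define SBUS_PIPE_TMPL "/var/lib/sss/pipes/private/sbus-%s"

static bool responder_bus_exists(const char *responder)
{
    char path[128];

    snprintf(path, sizeof(path), SBUS_PIPE_TMPL, responder);

    /* If the pipe is there, the tool can go ahead and send the message
     * (e.g. a debug-level change) to that responder. */
    return access(path, F_OK) == 0;
}

int main(void)
{
    const char *responders[] = { "nss", "pam", "sudo", "ssh", "ifp" };

    /* Iterate over the known responders instead of a list of running
     * ones, as described above. */
    for (size_t i = 0; i < sizeof(responders) / sizeof(responders[0]); i++) {
        if (responder_bus_exists(responders[i])) {
            printf("would send message to %s\n", responders[i]);
        }
    }
    return 0;
}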
It seems, at least to me, that with these steps we are covered with
respect to what we have nowadays. Does anyone think we are missing
something? If so, what? Please try to explain it the way you would
explain it to a 5-year-old kid :-)
Coding plan:
------------------
My current plan is to start implementing this whole thing by creating
the named bus for each responder and getting the communication between
the responders and the tools working. The next step will be to actually
have the responders started on demand using socket activation. And the
very last step will be to make sure we can have the responders always
running when they're listed in sssd.conf.
I'm not comfortable giving any estimate of when we will have it done,
mainly because I'd first like to hear feedback about this from others
on the team.
Looking forward to hearing back from you!
Best Regards,
--
Fabiano Fidêncio
[sssd PR#53][opened] Fixes in the config API related to secrets responder
by fidencio
URL: https://github.com/SSSD/sssd/pull/53
Author: fidencio
Title: #53: Fixes in the config API related to secrets responder
Action: opened
PR body:
"""
Those fixes were suggested by Lukaš in the following thread:
https://lists.fedorahosted.org/archives/list/sssd-devel@lists.fedorahoste...
Changes:
28fa419 (Fabiano Fidêncio, 11 minutes ago)
SECRETS: Add allowed_sec_users_options
There are options (the proxying-related ones) that only apply to the
secrets' subsections. In order to make the config API able to catch
those, let's create a new section called allowed_sec_users_options and
move these proxying options there.
Signed-off-by: Fabiano Fidêncio <fidencio(a)redhat.com>
2aed214 (Fabiano Fidêncio, 2 hours ago)
SECRETS: Fix secrets rule in the allowed sections
We have been matching an invalid subsection of the secrets' section, like:
[secrets/users]
Let's ensure that we only match the following cases:
[secrets]
[secrets/users/[0-9]+?]
Signed-off-by: Fabiano Fidêncio <fidencio(a)redhat.com>
"""
To pull the PR as Git branch:
git remote add ghsssd https://github.com/SSSD/sssd
git fetch ghsssd pull/53/head:pr53
git checkout pr53