[sssd PR#89][opened] nss: rewrite nss responder so it uses cache_req
by fidencio
URL: https://github.com/SSSD/sssd/pull/89
Author: pbrezina
Title: #89: nss: rewrite nss responder so it uses cache_req
Action: opened
PR body:
"""
Given the size of the current nss responder it was quite impossible
to simply switch it to the cache_req interface, especially
because most of the code was duplicated cache lookups.
This patch completely rewrites the responder from scratch. The amount
of code was reduced to less than half the lines, with no code duplication,
better documentation and better maintainability and readability.
All functionality should be intact.
*Code organization*
All protocol handling (parsing the input message and sending a reply) is placed
in nss_protocol.c. Functions that deal with creating a reply
packet are placed in their specific nss_protocol_$object.c files.
All supported commands are placed in nss_cmd.c. Functions that
deal with cache_req are in nss_get_object.c and nss_enum.c.
*Code flow for non-enumeration*
An nss_getby_$input-type function is called for each non-enumeration command.
This function parses the input message, creates a cache_req_data
structure and issues nss_get_object, which calls cache_req. When
this request is done, nss_getby_done makes sure a reply is sent to
the client.
*Comments on enumeration*
I made some effort to make sure enumeration shares the same code
for users, groups, services and netgroups. Netgroups now use the
nss negative cache instead of implementing their own.
*Unit tests*
All existing unit tests, including the one for the nss responder, pass.
@mzidek-rh is going to write new unit tests for added functionality
into cache_req and sss_ptr_hash interface.
Thanks to @spbnick for doing the first round of review, focused on
code documentation and readability.
"""
To pull the PR as Git branch:
git remote add ghsssd https://github.com/SSSD/sssd
git fetch ghsssd pull/89/head:pr89
git checkout pr89
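For orientation, the non-enumeration flow described in the PR body can be
pictured with a minimal, self-contained C sketch (the function and structure
names below are illustrative only and are not the PR's actual API):

/* Conceptual sketch: parse -> cache_req lookup -> reply. */
#include <stdio.h>

struct cache_req_data { const char *name; };

/* nss_protocol.c: parse the client message into lookup data (stubbed). */
static int parse_name(const char *msg, struct cache_req_data *data)
{
    if (msg == NULL) {
        return -1;
    }
    data->name = msg;
    return 0;
}

/* nss_get_object.c: look the object up through the cache (stubbed). */
static int get_object(const struct cache_req_data *data)
{
    printf("cache_req lookup for %s\n", data->name);
    return 0;
}

/* nss_protocol_$object.c: build and send the reply packet (stubbed). */
static int send_reply(int lookup_ret)
{
    printf("sending reply, status %d\n", lookup_ret);
    return lookup_ret;
}

/* nss_cmd.c: an nss_getby_$input-type handler. */
static int getby_name(const char *msg)
{
    struct cache_req_data data;

    if (parse_name(msg, &data) != 0) {
        return -1;
    }
    return send_reply(get_object(&data));
}

int main(void)
{
    return getby_name("alice");
}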
Design discussion: Fleet Commander integration
by Jakub Hrozek
Hi,
with Alexander's help, I wrote up a design page about how SSSD should
read Fleet Commander data from IPA and present them to the FC client
component. The SSSD part is described here:
https://fedorahosted.org/sssd/wiki/DesignDocs/FleetCommanderIntegration
and the IPA part is here:
https://github.com/abbra/freeipa-desktop-profile/blob/master/plugin/Featu...
For convenience, I copied the SSSD wiki page below. Comments are welcome!
= Fleet Commander Integration =
Related ticket(s):
* https://fedorahosted.org/sssd/ticket/2995
=== Problem statement ===
FleetCommander is a service to centrally manage Desktop environments. It includes a server to define desktop profiles and a client to apply profile information to the user's desktop session on a specified machine.
This design document describes the SSSD part of
an integration of FleetCommander with FreeIPA. The
integration is done two-fold, the IPA part of the integration is
[https://github.com/abbra/freeipa-desktop-profile/blob/master/plugin/Featu... documented separately].
=== Use cases ===
* As an administrator, I want to manage desktop profiles in a centralized way.
* As an administrator, I want to use centrally defined users, groups, hosts and host groups to specify how desktop profiles should be applied.
* As an administrator, I want to make sure desktop profiles associated with a specific user or user group are downloaded and applied on a specific FreeIPA client according to the desktop profile rules defined in FreeIPA.
=== Overview of the solution ===
FleetCommander consists of two components:
* a web service integrated with Cockpit that serves the dynamic application and the profile data to the network.
* and a client side daemon that runs on every host of the network.
Since this design page deals with the client side of the whole picture, this paragraph will focus on the integration of the FC client side daemon with SSSD.
The FC profiles will be downloaded by a new `session_provider` of IPA. This
provider will do nothing by default and will include an option to download
FC rules from IPA LDAP (perhaps `ipa_enable_fleetcmd = true`).
In order to minimize the required client-side configuration changes, enabling
the Fleet Commander client side daemon will drop an SSSD configuration snippet
that enables this functionality and restart SSSD.
When a FreeIPA domain user logs in, the IPA provider will download the Fleet
Commander profile and rule objects and drop the resulting JSON files into
a per-user directory. The file names must be normalized and prepended with
priority (please refer to the IPA design page for more details).
In the future, we would like to link the Fleet Commander profiles with
HBAC rules, but the first implementation will not include this part.
=== Implementation details ===
The implementation has two distinct parts -- enabling the IPA session
provider's Fleet Commander functionality and actually fetching the Fleet
Commander data.
==== Enabling the IPA session provider ====
Since searching for the Fleet Commander profiles does not come for free --
at least one LDAP search must be issued, perhaps more unless we cache the
host groups -- we should only enable this functionality if the Fleet Commander
client daemon is enabled as well. To this end, enabling the FC client daemon
would trigger a one-shot systemd service that would drop an include file
into SSSD's `conf.d` directory.
The systemd service might be implemented along these lines:
{{{
[Unit]
ConditionFileNotEmpty=/etc/sssd/sssd.conf
ConditionFileNotEmpty=!/etc/sssd/conf.d/fleetcommander.conf
[Service]
Type=oneshot
ExecStartPre=/bin/cp -n /usr/share/fleetcommander/sssd.snippet.conf /etc/sssd/conf.d/fleetcommander.conf
ExecStart=/bin/systemctl try-restart sssd
}}}
This systemd unit might be stored
in `/lib/systemd/system/fleetagent.service.d/sssd.conf`. A similar
piece of functionality should remove the included config file when
the FC client daemon is disabled.
==== Looking up the Fleet Commander profiles and storing the JSON profile data ====
Since the first implementation will only fetch rules that are linked to
this host and the user in question, SSSD's session provider will issue
an LDAP search along these lines:
{{{
(&(objectclass=ipadeskprofilerule)(memberHost=my_fqdn_or_my_host_group)(memberUser=user_login_or_group))
}}}
All host groups the IPA client is a member of must be included in the
`memberHost` part of the filter. Additionally, all user groups must be
included in the `memberUser` part of the filter. Since in most cases
the user's groups will be resolved during the login, we will only issue
an initgroups request if the user's initgroups are already expired,
to cover cases where the session provider was invoked separately.
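As an illustration, an expanded filter for a client that belongs to one host
group, looking up rules for a user who belongs to one user group, might look
like the following (the DNs are hypothetical, IPA-style examples):
{{{
(&(objectclass=ipadeskprofilerule)
  (|(memberHost=fqdn=client1.example.com,cn=computers,cn=accounts,dc=example,dc=com)
    (memberHost=cn=webservers,cn=hostgroups,cn=accounts,dc=example,dc=com))
  (|(memberUser=uid=alice,cn=users,cn=accounts,dc=example,dc=com)
    (memberUser=cn=developers,cn=groups,cn=accounts,dc=example,dc=com)))
}}}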
The host groups are typically also resolved by the IPA access control
provider, but currently not cached. In the initial implementation, we can
just search the host groups again, but subsequent patches should optimize
the searches by storing the host groups in the cache or in an intermediate
in-memory result.
The LDAP search will include the Fleet Commander payload data in the
profile's `data` attribute. Once the data are known, SSSD will write them
to the disk. Since writing to the disk is typically quite fast, this can
happen directly as part of the session provider request.
The JSON files will be stored in a new directory owned by the `sssd-ipa`
subpackage. The top-level directory could be at `/var/lib/sss/fleetcmd/`
with per-user subdirectories. So each per-user JSON file would be stored at
`/var/lib/sss/fleetcmd/<username>/<profilename>.json`. The `<username>`
directories need to be owned by the user being logged in.
The `<profilename>.json` file name must include the priority as a number which
is read from the rule's `prio` attribute. The Fleet Commander client daemon
will then process the JSON files in this priority order. The filenames must also
be normalized so that characters with a special meaning in shell are escaped
and spaces are converted to another character such as underscores. Please
refer to the IPA design page for more details.
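A hypothetical resulting layout, assuming the priority is simply prefixed to
the normalized profile name, could then look like:
{{{
/var/lib/sss/fleetcmd/alice/10_developer_workstation.json
/var/lib/sss/fleetcmd/alice/20_vpn_settings.json
}}}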
In the first version, the profiles will always be written again. In the
future, we might want to optimize the process further by only writing the
JSON profiles if they differ from what's already stored on the disk. This
might be doable either by storing the modifyTimestamp in the JSON profiles,
provided FC is able to ignore certain JSON key-value pairs that would be private
to SSSD, or by storing the largest USN value of the found profiles in
the included directory in a specially-named file.
=== Configuration changes ===
Two new configuration options will be added:
* `session_provider` that will be inherited from the `id_provider` value, so for IPA clients, this provider will default to `ipa`. A default `session_provider` for other providers will just shortcut and return success.
* An option that enables the FC rules and profiles processing. A proposed option name is `ipa_enable_fleetcmd` with boolean semantics.
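Putting the proposed options together, an IPA domain section that enables
the integration could look roughly like this (a sketch only, since both
option names above are still proposals):
{{{
[domain/example.com]
id_provider = ipa
# inherited from id_provider by default, shown here for clarity
session_provider = ipa
ipa_enable_fleetcmd = true
}}}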
=== How To Test ===
Please see the use-cases above.
=== How To Debug ===
DEBUG messages will be added to the new session provider so that the admin
can trace if the session provider was invoked at all. An easy way to debug
the integration is to enable the session provider and the FleetCommander
integration manually, without relying on the FC client side daemon to drop
the config file.
=== Authors ===
* Alexander Bokovoy
* Jakub Hrozek
[RFC] NSS tlog integration
by Nikolai Kondrashov
Hi everyone,
Please find attached proof-of-concept patches for a part of NSS integration
with tlog, namely the addition of shell substitution for getpwnam requests.
The code is supposed to replace a user's shell with /usr/bin/tlog-rec if
session recording is enabled for all users, if it is enabled for that
particular user, or if it is enabled for a group the user belongs to.
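In other words, the decision boils down to something like the following
self-contained sketch (the type and helper names here are made up for
illustration and do not match the attached patches):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Illustrative names only; the real patches may structure this differently. */
enum sr_scope { SR_SCOPE_NONE, SR_SCOPE_SOME, SR_SCOPE_ALL };

/* Return true if needle is present in the NULL-terminated list. */
static bool in_list(const char *needle, const char **list)
{
    for (size_t i = 0; list != NULL && list[i] != NULL; i++) {
        if (strcmp(needle, list[i]) == 0) {
            return true;
        }
    }
    return false;
}

/* Decide whether the shell returned by getpwnam should become tlog-rec. */
static bool override_shell(enum sr_scope scope, const char *user,
                           const char **conf_users, const char **conf_groups,
                           const char **user_groups)
{
    if (scope == SR_SCOPE_ALL) {
        return true;
    }
    if (scope == SR_SCOPE_SOME) {
        if (in_list(user, conf_users)) {
            return true;
        }
        for (size_t i = 0; user_groups != NULL && user_groups[i] != NULL; i++) {
            if (in_list(user_groups[i], conf_groups)) {
                return true;
            }
        }
    }
    return false;
}

int main(void)
{
    const char *users[] = { "user1", "user2", NULL };
    const char *groups[] = { "group1", "group2", NULL };
    const char *alice_groups[] = { "wheel", "group2", NULL };

    if (override_shell(SR_SCOPE_SOME, "alice", users, groups, alice_groups)) {
        printf("shell would be replaced with /usr/bin/tlog-rec\n");
    }
    return 0;
}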
The configuration is done in a dedicated section of sssd.conf named
"session_recording", which can contain three options: "scope", "users", and
"groups". The "scope" option accepts the values "none", "some",
and "all", corresponding in order to: disabled session recording, session
recording enabled for the specified users/groups, and session recording
enabled for all users handled by SSSD.
An example of a configuration can be:
[session_recording]
; Disabled
scope = none
or
[session_recording]
; Enabled for everyone
scope = all
or
[session_recording]
; Enabled for some users and groups
scope = some
users = user1, user2
groups = group1, group2
The parts still to be done are adding support for getpwuid and getpwent
requests, exporting the original shell in pam_sss, and of course cleaning
the code up according to your comments and requirements.
The code has some documentation in doxygen format, which I can change later if
we decide on some other format, or no documentation at all.
Please tell me if I'm doing anything wrong so far, or suggest
better ways to do it.
Thank you!
Nick
P.S. I'm on PTO for two weeks starting next week, so might not be able to
answer quickly.
[PATCH] Unit tests for pam_sss using pam_wrapper (need help with CI..)
by Jakub Hrozek
Hi,
the attached patches implement unit tests for the pam_sss module using
pam_wrapper and libpamtest. In my testing, the coverage is around 75%
with mostly the parts that require running as root being untested.
I worked on this patchset even though the features for 1.14 are in full
swing because there are several tickets that will require us to patch
pam_sss, so it's important to have the code that changes tested. In
addition, when we merge Dan's patches to use TLS with integration tests,
then we'll be able to also test authentication in integration tests
easily using libpamtest-python.
However, our CI fails for me constantly:
http://sssd-ci.duckdns.org/logs/job/42/75/fedora_rawhide/ci.html
The strange thing is that running CI locally works fine and so does make
check. Can anyone help point me in the right direction as to what I should
check next? I suspect some of the environment variables might not be
set correctly, but I don't see why.
Design document - Socket-activatable responders
by Fabiano Fidêncio
The design page is done [0] and it's based on this discussion [1] we
had on this very same mailing list. A pull request with the
implementation has already been opened [2].
[0]: https://fedorahosted.org/sssd/wiki/DesignDocs/SocketActivatableResponders
[1]: https://lists.fedorahosted.org/archives/list/sssd-devel@lists.fedorahoste...
[2]: https://github.com/SSSD/sssd/pull/84
The full text is copied & pasted here:
= Socket Activatable Responders =
Related ticket(s):
* https://fedorahosted.org/sssd/ticket/2243
* https://fedorahosted.org/sssd/ticket/3129
=== Problem statement ===
SSSD has some responders which don't have to be running all the time,
but could be socket-activated instead in platforms where it's
supported. That's the case, for instance, for the IFP, ssh and sudo
responders.
Making these responders socket-activated would provide a better user
experience, as these services could be started on demand when a client
needs them and exit after a period of inactivity. Currently the admin
has to explicitly list all the services that might potentially be
needed in the `services` section and the processes have to be running
all the time.
=== Use cases ===
==== sssctl ====
As more and more features have been added that depend on the IFP
responder, we should make sure that the responder is activated on
demand and admins don't have to activate it manually.
==== KCM ====
The KCM responder is only seldom needed, when libkrb5 needs to access
the credentials store. At the same time, the KCM responder must be
running if the Kerberos credentials cache defaults to `KCM`.
Socket-activating the responder would solve both of these cases.
==== autofs ====
The autofs responder is typically only needed when a share is about to
be mounted.
=== Overview of the solution ===
The solution agreed on the mailing list is to add a new unit for each
one of the responders. Once a responder is started, it will
communicate with the monitor in order to let the monitor know that it's
up, and the monitor will do the registration of the responder, which
basically consists of marking the service as started, increasing the
services' counter, getting the responder's configuration, and adding the
responder to the services' list.
A configurable idle timeout will be implemented in each responder as
part of this task, in order to shut the responder down when it has not
been used for a few minutes.
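As a sketch of what the per-responder configuration could look like (the
option name below is only a suggestion, not something this design fixes):
{{{
[nss]
; shut the responder down after 5 minutes of inactivity
responder_idle_timeout = 300
}}}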
=== Implementation details ===
In order to achieve our goal we will need a small modification of the
responders' common code in order to make it ready for
socket activation, some new systemd units for each of the responders,
and finally small changes in the monitor code in order to manage the
newly activated service.
The change in the responders' common code is quite trivial: just
change the sss_process_init code to call activate_unix_sockets()
instead of set_unix_socket(). Something like:
{{{
- ret = set_unix_socket(rctx, conn_setup);
+ ret = activate_unix_sockets(rctx, conn_setup);
}}}
The units that have to be added for each responder must look like:
sssd-@responder@.service.in:
{{{
[Unit]
Description=SSSD @responder@ Service responder
Documentation=man:sssd.conf(5)
Requires=sssd.service
PartOf=sssd.service
After=sssd.service
[Install]
Also=sssd-@responder@.socket
[Service]
ExecStart=@libexecdir@/sssd/sssd_@responder@ --uid 0 --gid 0 --debug-to-files
}}}
sssd-@responder@.socket.in:
{{{
[Unit]
Description=SSSD @responder@ Service responder socket
Documentation=man:sssd.conf(5)
[Socket]
ListenStream=@pipepath@/@responder@
[Install]
WantedBy=sockets.target
}}}
Some responders may have more than one socket, which is the case of
PAM, so another unit will be needed.
sssd-@responder@-priv.socket.in:
{{{
[Unit]
Description=SSSD @responder@ Service responder private socket
Documentation=man:sssd.conf(5)
[Socket]
ListenStream=@pipepath@/private/@responder@
[Install]
WantedBy=sockets.target
}}}
Last but not least, the IFP responder doesn't have a socket. It's
going to be D-Bus activated and some small changes will be required on
its D-Bus service unit.
{{{
-Exec=@libexecdir@/sssd/sss_signal
+Exec=@libexecdir@/sssd/sssd_@responder@
}}}
And, finally, the code on the monitor side will need some
adjustments in order to properly deal with an empty list of services
and also to register a service when it is started.
As just the responders will be socket-activated for now, the service
type will have to be exposed and passed through sbus when calling the
RegistrationService method, and the monitor will have to properly do
the registration of the service when RegistrationService's callback is
triggered. As mentioned before, the "registration" that has to be done
from the monitor's side is:
* Mark the service as started;
* Increase the services' counter;
* Get the responders' configuration;
* Set the service's restart number;
* Add the service to the services' list.
=== Configuration changes ===
After this design is implemented, the "services" line in sssd.conf
will become optional for platforms where systemd is present. Note that
in order to keep backward compatibility, if the "services" line is
present, the services will behave exactly as they did before these
changes.
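For illustration, a minimal sssd.conf on a systemd platform could then look
like this (the domain name is an example), with the responders started on
demand through their sockets instead of being listed explicitly:
{{{
[sssd]
; no "services" line needed; responders are socket-activated on demand
domains = example.com
}}}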
=== How To Test ===
The easiest way to test is removing the "services" line from sssd.conf
and trying to use SSSD normally.
Using the sssctl tool without having the ifp responder set in the
"services" line is another way to test.
=== How To Debug ===
The easiest way to debug this new feature is to take a look at the
responders' common initialization code and at the monitor's client
registration code.
It is worth mentioning that disabling the systemd services/sockets will
prevent the responders from being started.
=== Authors ===
Fabiano Fidêncio <fidencio@redhat.com>
Best Regards,
--
Fabiano Fidêncio