[sssd PR#574][opened] cache_req: Don't force a fqname for files provider output
by fidencio
URL: https://github.com/SSSD/sssd/pull/574
Author: fidencio
Title: #574: cache_req: Don't force a fqname for files provider output
Action: opened
PR body:
"""
As we're enforcing the output of the files provider to be
fully-qualified, we can face some weirdness when using
domain_resolution_order, such as:
[user@implicit_files@machine]$
This is not only incoherent, but it also causes issues when the local
user, who is managed by the files provider, tries to do a `sudo su`.
In this scenario, the user is asked for the password (no matter
whether they are part of sudoers) and is never allowed to log in.
In order to avoid the issues described above, let's just not force the
output of the files provider to be fully-qualified.
NOTE: I do not clearly understand why the issue with sudo happens.
"""
To pull the PR as Git branch:
git remote add ghsssd https://github.com/SSSD/sssd
git fetch ghsssd pull/574/head:pr574
git checkout pr574
how to run intgcheck?
by Chris Kowalczyk
Hello All,
I have been trying to run the sssd intgcheck, but with no success. Could
you help me with it?
Generally, I've been performing the following steps:
autoreconf -if
./configure --disable-cifs-idmap-plugin --without-samba \
--without-nfsv4-idmapd-plugin --without-secrets \
--without-kcm --enable-intgcheck-reqs --with-os=suse
make intgcheck
...And getting this error:
configure: error: source directory already configured; run "make distclean" there first
Of course, /make distclean/ destroys all the targets, so this is not a
solution.
I tried to follow the example in /./contrib/fedora/bashrc_sssd/ and
build the tests in a separate directory, but also without success:
autoreconf -if
cd x86_64
../configure --disable-cifs-idmap-plugin --without-samba \
--without-nfsv4-idmapd-plugin --without-secrets \
--without-kcm --enable-intgcheck-reqs --with-os=suse
make intgcheck
Running /make intgcheck/ from the /x86_64/ directory complains about a
missing /cifsidmap/, so clearly there is some problem with the
configuration.
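My understanding is that the out-of-tree build only works if the source
tree itself has never been configured in place, so a sequence like this
ought to be closer (untested on my side; the distclean is needed just
once, to undo the earlier in-tree ./configure):
cd sssd                 # top of the source tree
make distclean          # once, to clean up the earlier in-tree configure
autoreconf -if
mkdir -p x86_64
cd x86_64
../configure --disable-cifs-idmap-plugin --without-samba \
--without-nfsv4-idmapd-plugin --without-secrets \
--without-kcm --enable-intgcheck-reqs --with-os=suse
make intgcheck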
Could you advise me what steps I should take to build and run the
integration tests?
Regards,
Chris
What's the best way to debug SELinux issues on SSSD?
by Fabiano Fidêncio
People,
I've been trying to debug a SELinux issue related to the domain
resolution order.
Basically, if there's no domain_resolution_order set:
[root@client1 vagrant]# ssh -l admin localhost
Password:
Last login: Mon May 21 19:00:06 2018 from ::1
[admin@client1 ~]$ id -Z
staff_u:staff_r:staff_t:s0-s0:c0.c1023
But, if domain_resolution_order is set:
[root@client1 vagrant]# ssh -l admin localhost
Password:
Last login: Mon May 21 19:30:45 2018 from ::1
[admin@ipa.example@client1 ~]$ id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
The first thing that came to my mind was to take a look at the
selinux_child logs, but they didn't give me any clue, as the logs are
exactly the same in both cases:
No domain_resolution_order set:
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]] [main]
(0x0400): selinux_child started.
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]] [main]
(0x2000): Running with effective IDs: [0][0].
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]] [main]
(0x2000): Running with real IDs [0][0].
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]] [main]
(0x0400): context initialized
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]]
[unpack_buffer] (0x2000): seuser length: 7
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]]
[unpack_buffer] (0x2000): seuser: staff_u
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]]
[unpack_buffer] (0x2000): mls_range length: 14
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]]
[unpack_buffer] (0x2000): mls_range: s0-s0:c0.c1023
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]]
[unpack_buffer] (0x2000): username length: 5
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]]
[unpack_buffer] (0x2000): username: admin
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]] [main]
(0x0400): performing selinux operations
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]]
[seuser_needs_update] (0x2000): getseuserbyname: ret: 0 seuser:
staff_u mls: s0-s0:c0.c1023
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]]
[pack_buffer] (0x0400): result [0]
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]]
[prepare_response] (0x4000): r->size: 4
(Mon May 21 19:30:44 2018) [[sssd[selinux_child[23351]]]] [main]
(0x0400): selinux_child completed successfully
domain_resolution_order set:
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]] [main]
(0x0400): selinux_child started.
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]] [main]
(0x2000): Running with effective IDs: [0][0].
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]] [main]
(0x2000): Running with real IDs [0][0].
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]] [main]
(0x0400): context initialized
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]]
[unpack_buffer] (0x2000): seuser length: 7
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]]
[unpack_buffer] (0x2000): seuser: staff_u
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]]
[unpack_buffer] (0x2000): mls_range length: 14
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]]
[unpack_buffer] (0x2000): mls_range: s0-s0:c0.c1023
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]]
[unpack_buffer] (0x2000): username length: 5
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]]
[unpack_buffer] (0x2000): username: admin
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]] [main]
(0x0400): performing selinux operations
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]]
[seuser_needs_update] (0x2000): getseuserbyname: ret: 0 seuser:
staff_u mls: s0-s0:c0.c1023
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]]
[pack_buffer] (0x0400): result [0]
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]]
[prepare_response] (0x4000): r->size: 4
(Mon May 21 19:31:36 2018) [[sssd[selinux_child[23398]]]] [main]
(0x0400): selinux_child completed successfully
Taking a look at the IPA provider logs, they also look very much the
same: https://paste.fedoraproject.org/paste/FKhvxyj3clzXuE5C7tMGhw (the
paste is huge!)
Any tips on which logs I could look at and/or which parts of the code I
could instrument in order to at least get some direction?
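In case it helps reproduce things, the mapping that getseuserbyname
consults can be inspected on the client like this (assuming the targeted
policy is in use; semanage comes from policycoreutils):
# List the login -> SELinux user mappings getseuserbyname consults
semanage login -l
# Per-login records and the static seusers list (paths assume the
# targeted policy)
ls /etc/selinux/targeted/logins/
cat /etc/selinux/targeted/seusers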
Thanks in advance,
--
Fabiano Fidêncio
[RFC] sbus2 integration
by Pavel Březina
Hi folks,
I sent a mail about the new sbus implementation (I'll refer to it as
sbus2) [1]. Now, I'm integrating it into SSSD. The work is quite
difficult since it touches all parts of SSSD and the changes are usually
interconnected, but I'm slowly moving towards the goal [2].
At this moment, I'm trying to take the "minimum changes" path so the
code can be built and function with sbus2; however, to take full
advantage of it, further improvements will be needed (they will not be
very difficult).
There is one big change that I would like to make, though, and it needs
to be discussed: how we currently handle sbus connections.
In the current state, the monitor and each backend create a private sbus
server. The current implementation of a private sbus server is not a
message bus; it only serves as an address for creating point-to-point
nameless connections. Thus each client must maintain several connections:
- each responder is connected to monitor and to all backends
- each backend is connected to monitor
- overall, there is one private server for the monitor plus one per backend
- each private server maintains about 10 active connections
This has several disadvantages: there are many connections, we cannot
broadcast signals, and if a process wants to talk to another process, it
needs to connect to that process's server and maintain the connection.
Since responders do not currently provide a server, they cannot talk to
each other.
sbus2 implements a proper private message bus, so it can work in the
same way as the session or system bus: it is a server that maintains the
connections, keeps track of their names, and routes messages from one
connection to another.
My idea is to have only one sbus server, managed by the monitor. Other
processes will connect to this server with a named connection (e.g.
sssd.nss, sssd.backend.dom1, sssd.backend.dom2). We can then send a
message to this message bus (only one connection) and set the
destination to a name (e.g. sssd.nss to invalidate the memory cache). We
can also send signals to this bus, and it will broadcast them to all
connections that listen for them. So, this is the proper way to do it:
it will simplify things and allow us to send signals and have better IPC
in general.
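To illustrate with plain D-Bus tooling: a unicast call and a broadcast
signal on such a bus would look roughly like this (the bus address,
connection name, object path and interface below are all made up for the
example):
# Unicast: address one specific named connection on the private bus
dbus-send --bus=unix:path=/var/lib/sss/pipes/private/sbus \
--dest=sssd.nss --print-reply \
/sssd org.sssd.nss.InvalidateMemcache
# Broadcast: a signal has no destination; the bus delivers it to every
# connection that subscribed to it with a match rule
dbus-send --bus=unix:path=/var/lib/sss/pipes/private/sbus \
--type=signal /sssd org.sssd.monitor.DomainsChanged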
I know we eventually want to get rid of the monitor; the process would
stay only as an sbus server. It would become a single point of failure,
but the process can be restarted automatically by systemd in case of a
crash.
Also, here is a bonus question: do any of you remember why we use a
private server at all? Why don't we connect to the system message bus? I
do not see any benefit in having a private server.
[1] https://github.com/pbrezina/sbus
[2] https://github.com/pbrezina/sssd/tree/sbus