[sssd PR#475][opened] LDAP: Only add a sdap_domain instance for the current domain when instantiating a new ad_id_ctx
by jhrozek
URL: https://github.com/SSSD/sssd/pull/475
Author: jhrozek
Title: #475: LDAP: Only add a sdap_domain instance for the current domain when instantiating a new ad_id_ctx
Action: opened
PR body:
"""
NOTE: This fix doesn't address the segfault, only the condition that led
to it. I would prefer to track the segfault and the search base issues
separately.
NOTE2: I haven't had much time to test this PR yet; I'm mostly submitting
it to get feedback.
Please see the full commit message below. I'm really confused about this
issue, mostly because it seems we've had this bug for quite some time but did
not notice it. I would be glad if somebody could help me understand whether
iterating over all domains and adding every domain that is not yet present in
the ad_id_ctx->ad_options->sdap_id_ctx->sdom list is correct or not.
The commit message follows:
Resolves: https://pagure.io/SSSD/sssd/issue/3594
Previously, sdap_domain_subdom_add() was called when a new ad_id_ctx was
being instantiated in the AD subdomains provider. The
sdap_domain_subdom_add() call iterates over all known subdomains and adds a
sdap_domain instance for every domain that is not present in an existing
sdap_domain list.
This is problematic for the AD subdomains provider, e.g. in this scenario
found in the downstream ticket #3594:
- there is a domain child1.sssdad.com the sssd is joined to
the subdomain provider auto-discovers sssdad_tree.com and
sssdad.com, in this order (which is important). The list of
sss_domain_info objects is updated in this order, too
- for each domain, ad_subdom_ad_ctx_new() is called. This function
creates a new ad_id_ctx and calls sdap_domain_subdom_add() to
add an sdap_domain object into the sdap_id_ctx. The
sdap_domain_subdom_add() call adds both domains to the list
-- for the sssdad_tree.com subdomain this is OK, because subsequent
calls only use the first sdap_domain object, which is
sssdad_tree.com (remember, order is important)
-- for the sssdad.com domain, sssdad_tree.com is added first,
which then causes all searches in sssdad.com to have a
search base from sssdad_tree.com
Because the sdap_domain instance in sdap_id_ctx should not be a list but a
single domain, this patch adds a utility function that creates an
sdap_domain instance for just the one subdomain being set up.
"""
To pull the PR as Git branch:
git remote add ghsssd https://github.com/SSSD/sssd
git fetch ghsssd pull/475/head:pr475
git checkout pr475
[sssd PR#237][opened] providers: Move hostid from ipa to sdap
by hvenev
URL: https://github.com/SSSD/sssd/pull/237
Author: hvenev
Title: #237: providers: Move hostid from ipa to sdap
Action: opened
PR body:
"""
This just makes sss_ssh_knownhostsproxy work. There is no support for hostgroups (although hostgroups in `ipa` should continue working).
I've been using this for a few days with the `ldap` and `krb5` providers and I haven't noticed any regressions. I haven't tested `ipa` and `ad` but all tests seem to pass.
"""
To pull the PR as Git branch:
git remote add ghsssd https://github.com/SSSD/sssd
git fetch ghsssd pull/237/head:pr237
git checkout pr237
Is this sort of test failure expected on a RHEL7.3+ environment?
by Richard Sharpe
Hi folks,
After installing all the RPMs needed, I managed to get sssd to build
using chmake.
This is what happened at the end:
-------------------------
PASS: src/tests/double_semicolon_test
make[4]: execvp: /bin/sh: Argument list too long
make[4]: *** [test-suite.log] Error 127
make[4]: Leaving directory `/home/rsharpe/src/sssd/x86_64'
make[3]: *** [check-TESTS] Error 2
make[3]: Leaving directory `/home/rsharpe/src/sssd/x86_64'
make[2]: *** [check-am] Error 2
make[2]: Leaving directory `/home/rsharpe/src/sssd/x86_64'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/home/rsharpe/src/sssd/x86_64'
make: *** [check] Error 2
--------------------------
Is this expected?
The branch is master and I cloned it today.
--
Regards,
Richard Sharpe
(How can one dispel sorrow? Only with Du Kang's wine. -- Cao Cao)
Some performance ideas related to running sssd on cluster nodes
by Jakub Hrozek
Hi,
I was helping analyze poor performance and server-side load spikes in an
environment where cluster nodes running sssd were all booted up at the
same time.
It turned out that this meant the cache entries were expiring at the same
time, and the LDAP connections were also expiring and reconnecting at the
same time. We filed some tickets (the ideas were mostly
William's) and I wanted to discuss them here.
https://pagure.io/SSSD/sssd/issue/3623 - Extend object lifetime if the
object hasn't changed in a long time
- I think this is the most controversial one; as we discussed a bit on
our phone call, it is probably too dangerous to do by default.
Nonetheless, for resolving identity requests, it could be a
tunable that provides a nice performance benefit.
https://pagure.io/SSSD/sssd/issue/3624 - Randomize cache lifetime by a
couple of percent
- What the title says. This might prevent hammering the servers when
the cluster nodes come up at the same time and have the same
expiration timestamps for all objects. Again, I'm not sure if this
makes sense by default, because it adds a bit of fuzziness to the
behaviour, but I think it makes sense as a configurable option (there
is a small sketch of the idea after the ticket list below).
https://pagure.io/SSSD/sssd/issue/3625 - Make sure periodic tasks use
randomization
- The be_ptask API already supports a bit of randomization, but
we're not really using it. I guess the review should be done
case-by-case, but at least for ptasks that fetch any data from the
back end, I would apply a bit of randomization even by default.
https://pagure.io/SSSD/sssd/issue/3630 - Randomize
ldap_connection_expire_timeout either by default or with a configuration option
- Again, if all the connections expire and are reconnected at the same
time, the servers suffer.
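To make the randomization idea from the tickets above more concrete, here is a
small illustrative sketch of adding a few percent of jitter to an
already-computed expiration value. It only sketches the concept behind
#3624/#3630 (and, conceptually, the be_ptask offsets mentioned in #3625); the
function name and the percentage knob are hypothetical, not an existing SSSD
option.

#include <stdlib.h>
#include <time.h>

/* Spread 'timeout' by +/- 'percent' so that nodes booted at the same time
 * do not let all cache entries (or their LDAP connections) expire in the
 * same instant.  Assumes the caller seeded the PRNG, e.g. with srandom(). */
static time_t jittered_expiry(time_t now, time_t timeout, unsigned percent)
{
    long span = (long)timeout * percent / 100;        /* e.g. 5% of timeout */
    long jitter = 0;

    if (span > 0) {
        jitter = (random() % (2 * span + 1)) - span;  /* in [-span, +span] */
    }

    return now + timeout + jitter;
}

With a 5400-second entry timeout and 5% jitter, for example, the expiration
lands anywhere between 5130 and 5670 seconds from now, so a fleet of nodes
that refreshed their caches at boot will not all go back to the server in the
same second.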
Does anyone have an opinion on the issues? I think at least the
connection timeout is something we should look at, because IIRC that was
causing the most issues on the IdM servers. The other tickets are IMO
less important and I'm not even sure if we should implement them by
default.