On Mon, Feb 22, 2016 at 12:03:32PM +0100, Sumit Bose wrote:
> On Mon, Feb 22, 2016 at 11:45:21AM +0100, Jakub Hrozek wrote:
> > On Mon, Feb 22, 2016 at 11:26:34AM +0100, Sumit Bose wrote:
> > > On Wed, Feb 17, 2016 at 11:45:36AM +0100, Jakub Hrozek wrote:
> > > > Hi,
> > > >
> > > > I would like to get some opinions on where I'm heading with the
> > > > performance enhancements for 1.14. Please note this is /not/ a complete
> > > > design page. The goal is to just identify some blockers first before I
> > > > spend more time working on this feature, even though I already discussed
> > > > the page with some developers (thanks!).
> > > >
> > > > If we agree this is the way to go, I will polish the design page as I
> > > > work on the feature.
> > > >
> > > > I've started the design page here:
> > > > https://fedorahosted.org/sssd/wiki/DesignDocs/OneFourteenPerformanceImpro...
> > > >
> > > > For your convenience, I've included the text below as well:
> > > >
> > > > = Feature Name =
> > > > SSSD Performance enhancements for the 1.14 release
> > > >
> > > > Related ticket(s):
> > > > * https://fedorahosted.org/sssd/ticket/2602
> > > > * https://fedorahosted.org/sssd/ticket/2062
> > > >
> > > > === Problem statement ===
> > > > At the moment SSSD doesn't perform well in large environments. Most of
> > > > the use-cases we've had reported revolved around logins of users who are
> > > > members of large groups or of a large number of groups. Another reported
> > > > use-case was the time it takes to resolve a large group.
> > > >
> > > > While workarounds are available for some of the issues (such as using
> > > > `ignore_group_members` for resolution of large groups), our goal is to be
> > > > able to perform well without these workarounds.
> > > >
> > > > === Use cases ===
> > > > * A user who is a member of a large number of AD groups logs in to a Linux server that is a member of the AD domain.
> > > > * A user who is a member of a large number of AD or IPA groups logs in to a Linux server that is a member of an IPA domain with a trust relationship to an AD domain.
> > > > * An administrator of a Linux server runs "ls -l" in a directory where files are owned by a large group. An example would be a group called "students" in a university setup.
> > > >
> > > > === Overview of the solution ===
> > > > During performance analysis with systemtap, we found out that the biggest
> > > > delay happens when SSSD writes an entry to the cache. We can't skip cache
> > > > writes completely, even if no attributes changed, because we also store the
> > > > expiration timestamps in the cache. Also, even if a single attribute (like
> > > > the timestamp) changes, ldb would need to unpack the whole entry, change
> > > > the record, pack it back and then write the whole blob.
> > > >
> > > > In order to mitigate the costly cache writes, we should avoid writing the
> > > > whole cache entry on every cache update.
> > > >
> > > > To achieve this, we will split the monolithic ldb file representing the
> > > > sysdb cache into two ldb files. One would contain the entry itself and would
> > > > be fully synchronous. The other (new one) would only contain the timestamps
> > > > and would be opened using the `LDB_FLG_NOSYNC` flag to avoid synchronous
> > > > cache writes.
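(For illustration, a minimal sketch of what the two-ldb split could look like
with the public libldb API. LDB_FLG_NOSYNC, ldb_init() and ldb_connect() are
the real libldb interfaces, but the file names and the helper below are made
up for the example and are not the actual sysdb code:)

    /* Sketch only: open the main entry cache synchronously and a separate
     * timestamp cache with LDB_FLG_NOSYNC, so that frequent timestamp-only
     * updates do not force an fsync on every write. */
    #include <talloc.h>
    #include <tevent.h>
    #include <ldb.h>

    static int open_caches(TALLOC_CTX *mem_ctx, struct tevent_context *ev,
                           struct ldb_context **_entries,
                           struct ldb_context **_timestamps)
    {
        struct ldb_context *entries = ldb_init(mem_ctx, ev);
        struct ldb_context *timestamps = ldb_init(mem_ctx, ev);
        int ret;

        /* Main cache: keep fully synchronous writes for the entries. */
        ret = ldb_connect(entries, "cache_example.com.ldb", 0, NULL);
        if (ret != LDB_SUCCESS) return ret;

        /* Timestamp cache: no fsync per write; losing it only means the
         * expiration data has to be refreshed from the server. */
        ret = ldb_connect(timestamps, "timestamps_example.com.ldb",
                          LDB_FLG_NOSYNC, NULL);
        if (ret != LDB_SUCCESS) return ret;

        *_entries = entries;
        *_timestamps = timestamps;
        return LDB_SUCCESS;
    }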
> > >
> > > It would be nice to see some data here to illustrate the potential
> > > improvement. E.g. calling 'id ad_user' after 'sss_cache -E' would be an
> > > expensive operation if the ad_user is a member of many groups. If
> > > nothing has changed on the server side there should be a considerable
> > > difference between the two versions.
> > >
> > > I hope this is not too much effort, but I would suggest creating an
> >
> > I think it's considerably less effort than coding this all up only to
> > realize there is no performance benefit (see also: lmdb back end for
> > ldb..)
> >
> > > instrumented build where you check in sysdb_set_entry_attr() if only
> > > timestamp attributes will be written and skip the ldb_modify in that case
> > > and just return EOK. The results here should be better than with an
> > > additional database, but they should show how much we can gain here.
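(A rough sketch of the check Sumit describes: ldb_message and its fields are
the real libldb structures, but the helper name and the list of timestamp
attribute names are assumptions made for the example, not the actual sysdb
implementation:)

    /* Instrumentation sketch: return true when an update touches only
     * timestamp attributes, so sysdb_set_entry_attr() could skip the
     * ldb_modify() call and return EOK for the measurement. */
    #include <stdbool.h>
    #include <strings.h>
    #include <ldb.h>

    static bool only_timestamp_attrs(const struct ldb_message *msg)
    {
        /* Assumed names of the sysdb timestamp attributes. */
        static const char *ts_attrs[] = {
            "lastUpdate", "dataExpireTimestamp", "originalModifyTimestamp",
            NULL
        };

        for (unsigned int i = 0; i < msg->num_elements; i++) {
            bool is_ts = false;
            for (unsigned int j = 0; ts_attrs[j] != NULL; j++) {
                if (strcasecmp(msg->elements[i].name, ts_attrs[j]) == 0) {
                    is_ts = true;
                    break;
                }
            }
            if (!is_ts) {
                return false;   /* a non-timestamp attribute changed */
            }
        }
        return true;            /* timestamps only, safe to skip in the test */
    }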
Because these things are easier to show interactively than to explain over
e-mail, I ran some tests in a tmate session with Sumit, where just
avoiding the cache writes shows a nice benefit. Just so that others are
in the loop as well, here are some numbers from one test run.
With all cache writes:
# stap /sssd/contrib/systemtap/id_perf.stp
Total run time of id was: 9937 ms
Number of zero-level cache transactions: 275
Time spent in level-0 sysdb transactions: 4831 ms
Time spent writing to LDB: 3314 ms
Number of LDAP searches: 563
Time spent waiting for LDAP: 2845 ms
LDAP searches breakdown:
Number of user requests: 1
Time spent in user requests: 18
Number of group requests: 272
Time spent in group requests: 9486
Number of initgroups requests: 1
Time spent in initgroups requests: 58
When avoiding cache writes:
# stap /sssd/contrib/systemtap/id_perf.stp
Total run time of id was: 5446 ms
Number of zero-level cache transactions: 275
Time spent in level-0 sysdb transactions: 58 ms
Time spent writing to LDB: 15 ms
Number of LDAP searches: 555
Time spent waiting for LDAP: 3202 ms
LDAP searches breakdown:
Number of user requests: 1
Time spent in user requests: 13
Number of group requests: 272
Time spent in group requests: 5079
Number of initgroups requests: 1
Time spent in initgroups requests: 50
So I hope the results are conclusive enough to continue in this
direction. Also, when we look at the group requests themselves with full
cache writes:
# stap /sssd/contrib/systemtap/nested_group_perf.stp
Time spent in group sssd_be searches: 9261
Time spent in sdap_nested_group_send/recv: 4428 ms (ratio: 47.81%)
Time spent in zero-level sysdb transactions: 4282 ms (ratio: 46.23%)
Breakdown of sdap_nested_group req (total: 4428 ms)
sdap_nested_group_process req: 4419
sdap_nested_group_process_split req: 1828
sdap_nested_group_check_cache: 1768
sdap_nested_group_sysdb_search_users: 535
sdap_nested_group_sysdb_search_groups: 1117
ldap request breakdown of total 2370
sdap_nested_group_deref req: 2584
sdap_deref_search_send req 2358
processing deref results: 220
sdap_nested_group_lookup_user req: 6
sdap_nested_group_lookup_group req: 0
Time spent refreshing unknown members: 6
Breakdown of results processing (total 4282)
Time spent populating nested members: 1003
Time spent searching ldb while populating nested members: 495
Time spent saving nested members: 591
Time spent writing to the ldb: 2639 ms
And when avoiding cache writes:
# stap /sssd/contrib/systemtap/nested_group_perf.stp
Time spent in group sssd_be searches: 4774
Time spent in sdap_nested_group_send/recv: 4232 ms (ratio: 88.64%)
Time spent in zero-level sysdb transactions: 183 ms (ratio: 3.83%)
Breakdown of sdap_nested_group req (total: 4232 ms)
sdap_nested_group_process req: 4225
sdap_nested_group_process_split req: 1773
sdap_nested_group_check_cache: 1727
sdap_nested_group_sysdb_search_users: 504
sdap_nested_group_sysdb_search_groups: 1092
ldap request breakdown of total 2311
sdap_nested_group_deref req: 2444
sdap_deref_search_send req 2302
processing deref results: 140
sdap_nested_group_lookup_user req: 5
sdap_nested_group_lookup_group req: 0
Time spent refreshing unknown members: 4
Breakdown of results processing (total 183)
Time spent populating nested members: 0
Time spent searching ldb while populating nested members: 0
Time spent saving nested members: 0
Time spent writing to the ldb: 44 ms
I think this shows:
1) that working towards avoiding cache writes and only writing
timestamps is worth pursuing
2) that we also need to optimize the rest of the nested group code;
in particular, we do too many searches there, which at the moment
unpack all the data, and with thousands of group members this
is too costly.
I assume the data above was taken with the cache on a disk. It might be
helpful to run the tests with the cache on tmpfs as well to see whether
it is the unpacking that is costly (CPU-bound) or whether we are still
I/O-bound for whatever reason.
bye,
Sumit