On Wed, Mar 02, 2016 at 05:13:40PM +0100, Jakub Hrozek wrote:
> On Mon, Feb 22, 2016 at 12:03:32PM +0100, Sumit Bose wrote:
> > On Mon, Feb 22, 2016 at 11:45:21AM +0100, Jakub Hrozek wrote:
> > > On Mon, Feb 22, 2016 at 11:26:34AM +0100, Sumit Bose wrote:
> > > > On Wed, Feb 17, 2016 at 11:45:36AM +0100, Jakub Hrozek wrote:
> > > > > Hi,
> > > > >
> > > > > I would like to get some opinions on where I'm heading with the
> > > > > performance enhancements for 1.14. Please note this is /not/ a
> > > > > complete design page. The goal is to just identify some blockers
> > > > > first before I spend more time working on this feature, even though
> > > > > I already discussed the page with some developers (thanks!).
> > > > >
> > > > > If we agree this is the way to go, I will polish the design page
> > > > > as I work on the feature.
> > > > >
> > > > > I've started the design page here:
> > > > >
> > > > > https://fedorahosted.org/sssd/wiki/DesignDocs/OneFourteenPerformanceImpro...
> > > > >
> > > > > For your convenience, I've included the text below as well:
> > > > >
> > > > > = Feature Name =
> > > > > SSSD Performance enhancements for the 1.14 release
> > > > >
> > > > > Related ticket(s):
> > > > > * https://fedorahosted.org/sssd/ticket/2602
> > > > > * https://fedorahosted.org/sssd/ticket/2062
> > > > >
> > > > > === Problem statement ===
> > > > > At the moment SSSD doesn't perform well in large environments.
> > > > > Most of the reported use-cases revolved around logins of users who
> > > > > are members of large groups or of a large number of groups. Another
> > > > > reported use-case was the time it takes to resolve a large group.
> > > > >
> > > > > While workarounds are available for some of the issues (such as
> > > > > using `ignore_group_members` for resolution of large groups), our
> > > > > goal is to be able to perform well without these workarounds.
> > > > >
> > > > > === Use cases ===
> > > > > * A user who is a member of a large number of AD groups logs in to
> > > > >   a Linux server that is a member of the AD domain.
> > > > > * A user who is a member of a large number of AD or IPA groups logs
> > > > >   in to a Linux server that is a member of an IPA domain with a
> > > > >   trust relationship to an AD domain.
> > > > > * An administrator of a Linux server runs "ls -l" in a directory
> > > > >   where files are owned by a large group. An example would be a
> > > > >   group called "students" in a university setup.
> > > > >
> > > > > === Overview of the solution ===
> > > > > During performance analysis with systemtap, we found out that the
> > > > > biggest delay happens when SSSD writes an entry to the cache. We
> > > > > can't skip cache writes completely, even if no attributes changed,
> > > > > because we also store the expiration timestamps in the cache. Also,
> > > > > even if a single attribute (like the timestamp) changes, ldb would
> > > > > need to unpack the whole entry, change the record, pack it back and
> > > > > then write the whole blob.
> > > > >
> > > > > In order to mitigate the costly cache writes, we should avoid
> > > > > writing the whole cache entry on every cache update.
> > > > >
> > > > > To achieve this, we will split the monolithic ldb file representing
> > > > > the sysdb cache into two ldb files. One would contain the entry
> > > > > itself and would be fully synchronous. The other (new one) would
> > > > > only contain the timestamps and would be opened with the
> > > > > `LDB_FLG_NOSYNC` flag to avoid synchronous cache writes.
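
Just to make the split more concrete, below is a rough, untested sketch of
how the two ldb handles could be opened. The function name, the file paths
and the error handling are placeholders of mine, not the final design; only
ldb_init(), ldb_connect() and LDB_FLG_NOSYNC are the real libldb API:

#include <errno.h>
#include <talloc.h>
#include <tevent.h>
#include <ldb.h>

/* Sketch only: open the main entry cache synchronously and a separate
 * timestamp-only cache with LDB_FLG_NOSYNC, so that frequent timestamp
 * updates do not force an fsync() on every write. */
static int open_sysdb_ldbs(TALLOC_CTX *mem_ctx,
                           struct tevent_context *ev,
                           const char *db_path,   /* e.g. cache_example.com.ldb */
                           const char *ts_path,   /* e.g. timestamps_example.com.ldb */
                           struct ldb_context **_ldb,
                           struct ldb_context **_ts_ldb)
{
    struct ldb_context *ldb;
    struct ldb_context *ts_ldb;

    /* The entry cache stays fully synchronous. */
    ldb = ldb_init(mem_ctx, ev);
    if (ldb == NULL || ldb_connect(ldb, db_path, 0, NULL) != LDB_SUCCESS) {
        return EIO;
    }

    /* The timestamp cache only holds expiration bookkeeping, so losing it
     * on a crash is acceptable and synchronous writes can be skipped. */
    ts_ldb = ldb_init(mem_ctx, ev);
    if (ts_ldb == NULL ||
        ldb_connect(ts_ldb, ts_path, LDB_FLG_NOSYNC, NULL) != LDB_SUCCESS) {
        return EIO;
    }

    *_ldb = ldb;
    *_ts_ldb = ts_ldb;
    return 0;
}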
> > > >
> > > > It would be nice to see some data here to illustrate the potential
> > > > improvement. E.g. calling 'id ad_user' after 'sss_cache -E' would be
> > > > an expensive operation if the ad_user is a member of many groups. If
> > > > nothing has changed on the server side, there should be a
> > > > considerable difference between the two versions.
> > > >
> > > > I hope this is not too much effort, but I would suggest creating an
> > >
> > > I think it's considerably less effort than coding this all up only to
> > > realize there is no performance benefit (see also: lmdb back end for
> > > ldb..)
> > >
> > > > instrumented build where you check in sysdb_set_entry_attr() if only
> > > > timestamp attributes will be written, skip the ldb_modify in that
> > > > case, and just return EOK. The results here should be better than
> > > > with an additional database, but should show how much we can gain
> > > > here.
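
For the instrumented build, a minimal sketch of that check could look like
the snippet below. sysdb_msg_is_ts_only() is a hypothetical helper of mine;
SYSDB_CACHE_EXPIRE, SYSDB_LAST_UPDATE and SYSDB_ORIG_MODSTAMP are the
existing sysdb attribute defines:

#include <stdbool.h>
#include <strings.h>
#include <ldb.h>
#include "db/sysdb.h"

/* Sketch only: return true if the modification message touches nothing
 * but timestamp/expiration attributes. */
static bool sysdb_msg_is_ts_only(struct ldb_message *msg)
{
    for (unsigned int i = 0; i < msg->num_elements; i++) {
        const char *name = msg->elements[i].name;

        if (strcasecmp(name, SYSDB_CACHE_EXPIRE) != 0
                && strcasecmp(name, SYSDB_LAST_UPDATE) != 0
                && strcasecmp(name, SYSDB_ORIG_MODSTAMP) != 0) {
            return false;   /* a real attribute changes, the write is needed */
        }
    }

    return true;            /* only expiration bookkeeping would be written */
}

/* In the instrumented sysdb_set_entry_attr(), this would then short-circuit
 * the write:  if (sysdb_msg_is_ts_only(msg)) return EOK;  */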
>
> Because these things are easier to show interactively than explain over
> e-mail, I ran some tests in a tmate session with Sumit where just
> avoiding the cache writes shows a nice benefit. Just so that others are
> in the loop as well, here are some numbers from one test run
>
> With all cache writes:
> # stap /sssd/contrib/systemtap/id_perf.stp
> Total run time of id was: 9937 ms
> Number of zero-level cache transactions: 275
> Time spent in level-0 sysdb transactions: 4831 ms
> Time spent writing to LDB: 3314 ms
> Number of LDAP searches: 563
> Time spent waiting for LDAP: 2845 ms
> LDAP searches breakdown:
> Number of user requests: 1
> Time spent in user requests: 18
>
> Number of group requests: 272
> Time spent in group requests: 9486
>
> Number of initgroups requests: 1
> Time spent in initgroups requests: 58
>
> When avoiding cache writes:
> # stap /sssd/contrib/systemtap/id_perf.stp
> Total run time of id was: 5446 ms
> Number of zero-level cache transactions: 275
> Time spent in level-0 sysdb transactions: 58 ms
> Time spent writing to LDB: 15 ms
> Number of LDAP searches: 555
> Time spent waiting for LDAP: 3202 ms
> LDAP searches breakdown:
> Number of user requests: 1
> Time spent in user requests: 13
>
> Number of group requests: 272
> Time spent in group requests: 5079
>
> Number of initgroups requests: 1
> Time spent in initgroups requests: 50
>
> So I hope the results are conclusive enough to continue in this
> direction. Also, when we look at the group requests themselves with full
> cache writes:
>
> # stap /sssd/contrib/systemtap/nested_group_perf.stp
> Time spent in group sssd_be searches: 9261
> Time spent in sdap_nested_group_send/recv: 4428 ms (ratio: 47.81%)
> Time spent in zero-level sysdb transactions: 4282 ms (ratio: 46.23%)
>
> Breakdown of sdap_nested_group req (total: 4428 ms)
> sdap_nested_group_process req: 4419
> sdap_nested_group_process_split req: 1828
> sdap_nested_group_check_cache: 1768
> sdap_nested_group_sysdb_search_users: 535
> sdap_nested_group_sysdb_search_groups: 1117
> ldap request breakdown of total 2370
> sdap_nested_group_deref req: 2584
> sdap_deref_search_send req 2358
> processing deref results: 220
> sdap_nested_group_lookup_user req: 6
> sdap_nested_group_lookup_group req: 0
> Time spent refreshing unknown members: 6
>
> Breakdown of results processing (total 4282)
> Time spent populating nested members: 1003
> Time spent searching ldb while populating nested members: 495
> Time spent saving nested members: 591
> Time spent writing to the ldb: 2639 ms
>
> And when avoiding cache writes:
> # stap /sssd/contrib/systemtap/nested_group_perf.stp
> Time spent in group sssd_be searches: 4774
> Time spent in sdap_nested_group_send/recv: 4232 ms (ratio: 88.64%)
> Time spent in zero-level sysdb transactions: 183 ms (ratio: 3.83%)
>
> Breakdown of sdap_nested_group req (total: 4232 ms)
> sdap_nested_group_process req: 4225
> sdap_nested_group_process_split req: 1773
> sdap_nested_group_check_cache: 1727
> sdap_nested_group_sysdb_search_users: 504
> sdap_nested_group_sysdb_search_groups: 1092
> ldap request breakdown of total 2311
> sdap_nested_group_deref req: 2444
> sdap_deref_search_send req 2302
> processing deref results: 140
> sdap_nested_group_lookup_user req: 5
> sdap_nested_group_lookup_group req: 0
> Time spent refreshing unknown members: 4
>
> Breakdown of results processing (total 183)
> Time spent populating nested members: 0
> Time spent searching ldb while populating nested members: 0
> Time spent saving nested members: 0
> Time spent writing to the ldb: 44 ms
>
> I think this shows:
> 1) that working towards avoiding cache writes and only writing
> timestamps is worth pursuing
> 2) that we also need to optimize the rest of the nested group code;
> we especially do too many searches there, which at the moment
> unpack all the data, and with thousands of group members this
> is too costly.
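
On the "too many searches" point, one possible direction (a sketch of mine,
not existing SSSD code) would be to remember which member DNs were already
checked during a single nested-group request, for example with libdhash, so
that each DN is looked up in sysdb, and its ldb blob unpacked, at most once:

#include <stdbool.h>
#include <dhash.h>

/* Sketch only: 'seen' would be created once per nested-group request,
 * e.g. with hash_create(1024, &seen, NULL, NULL), and consulted before
 * every sysdb lookup of a member DN. */
static bool member_already_checked(hash_table_t *seen, char *member_dn)
{
    hash_key_t key = { .type = HASH_KEY_STRING, .str = member_dn };
    hash_value_t value = { .type = HASH_VALUE_PTR, .ptr = NULL };

    if (hash_has_key(seen, &key)) {
        return true;                 /* already looked up, skip the search */
    }

    /* First time we see this DN; remember it for the rest of the request. */
    hash_enter(seen, &key, &value);
    return false;
}
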
> I assume the data above was taken when the cache was on a disk.

Yes. I was running the tests on local VMs on my laptop, which has an
SSD drive.

> Maybe it might be helpful to run the tests with the cache on tmpfs as
> well to see if it is the unpacking which is costly (CPU-bound) or if
> we still are I/O-bound for whatever reasons.

I did run the tests with tmpfs in addition to avoiding the cache writes
and I didn't get any additional benefit. I think this shows the
remaining time spent in the group request code is CPU-bound. But feel
free to suggest another test (or another way of interpreting the test...)

Looking at the results so far, we can:
* if dereference is enabled, shortcut sooner in the nested group
  code because we would end up downloading all entries anyway. This
  should speed up the block labeled "sdap_nested_group_check_cache"
  above.
* try to remember intermediate results in the sdap_async.c module.
  There are several places where we loop over all members and check
  something; we should do it at most once to avoid unpacking the ldb
  blobs all the time.
* perhaps shortcut parsing of entries sooner if we find out the
  modifyTimestamp is the same as it was, to avoid parsing all
  attributes of huge groups (a rough sketch of such a check follows
  below).
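
For the modifyTimestamp shortcut, I imagine something along these lines
(untested sketch; entry_unmodified() is a made-up helper of mine, but
SYSDB_ORIG_MODSTAMP and ldb_msg_find_attr_as_string() exist today):

#include <stdbool.h>
#include <string.h>
#include <ldb.h>
#include "db/sysdb.h"

/* Sketch only: compare the modifyTimestamp just returned by LDAP with
 * the originalModifyTimestamp saved in sysdb during the previous refresh.
 * If they match, the caller can skip parsing the (possibly huge) member
 * list again and only bump the expiration timestamps. */
static bool entry_unmodified(struct ldb_message *cached_entry,
                             const char *ldap_modstamp)
{
    const char *cached_modstamp;

    if (cached_entry == NULL || ldap_modstamp == NULL) {
        return false;             /* nothing to compare, do the full parse */
    }

    cached_modstamp = ldb_msg_find_attr_as_string(cached_entry,
                                                  SYSDB_ORIG_MODSTAMP,
                                                  NULL);

    return cached_modstamp != NULL
            && strcmp(cached_modstamp, ldap_modstamp) == 0;
}
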
I haven't run the profiler yet as you asked me to on IRC the other day...