[SSSD] memory cache for initgroups

Niels de Vos ndevos at redhat.com
Thu Nov 6 21:02:29 UTC 2014


On Thu, Nov 06, 2014 at 11:45:18PM +0530, Vijay Bellur wrote:
> On 11/03/2014 08:12 PM, Jakub Hrozek wrote:
> >On Mon, Nov 03, 2014 at 03:41:43PM +0100, Jakub Hrozek wrote:
> >>On Mon, Nov 03, 2014 at 08:53:06AM -0500, Simo Sorce wrote:
> >>>On Mon, 3 Nov 2014 13:57:08 +0100
> >>>Jakub Hrozek <jhrozek at redhat.com> wrote:
> >>>
> >>>>Hi,
> >>>>
> >>>>we had a short discussion on $SUBJECT with Simo on IRC already, but
> >>>>there are multiple people involved from multiple timezones, so I
> >>>>think a mailing list thread would be easier to track.
> >>>>
> >>>>Can we add another memory cache file to SSSD that would track
> >>>>initgroups/getgrouplist results for the NSS responder? I realize
> >>>>initgroups is a somewhat different operation from getpw{uid,nam} and
> >>>>getgr{gid,nam}, but what if the new memcache was only used by the NSS
> >>>>responder and at the same time invalidated when initgroups is
> >>>>initiated by the PAM responder, to ensure the memcache stays up-to-date?
> >>>
> >>>Can you describe the use case before jumping into a proposed solution ?
> >>
> >>Many getgrouplist() or initgroups() calls in quick succession. One
> >>user is GlusterFS -- I'm not quite sure what the reason is there, maybe
> >>Vijay can elaborate.
> >
> 
> The GlusterFS server invokes getgrouplist() to identify the gids associated
> with the user on whose behalf an RPC request has been sent over the wire.
> There is a gid caching layer in GlusterFS, and getgrouplist() only gets
> called on a gid cache miss. In the worst case, getgrouplist() can be invoked
> for every RPC request that GlusterFS receives, and that seems to be the case
> in a deployment where we found sssd under heavy load. I am not certain
> about the sequence of operations that causes the cache misses.
> 
> Adding Niels, who is more familiar with the gid resolution and caching
> features in GlusterFS.

Just to add some background information on the getgrouplist() usage.
GlusterFS uses several processes that can call getgrouplist():
- NFS-server, a single process per system
- brick, a process per exported filesystem/directory, potentially several
  per system

  [A Gluster environment typically has many systems (virtual or physical).
   Each system normally runs the NFS-server and a number of brick
   processes. The layout of the volume matters, but it is very common to
   have one or more distributed volumes that use multiple bricks on the
   same system (and on many other systems).]

The need to resolve the groups of a user arises when users belong to
many groups. The RPC protocols cannot carry a huge list of groups, so
the resolution can be done on the server side when the protocol hits
its limits (> 16 for NFS, approx. > 93 for GlusterFS).
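
For illustration, this is roughly what such a server-side resolution can
look like. It is only a minimal sketch with illustrative names
(resolve_groups is not an actual GlusterFS function), not the real
implementation -- see the links further down for that:

    /* Minimal sketch of server-side group resolution via getgrouplist().
     * Not GlusterFS code; names are illustrative. The NSS lookup done by
     * getgrouplist() is what ends up hitting sssd on systems that use it. */
    #include <sys/types.h>
    #include <grp.h>
    #include <stdlib.h>

    static int resolve_groups(const char *user, gid_t primary_gid,
                              gid_t **out_gids, int *out_count)
    {
        int ngroups = 32;                       /* initial guess */
        gid_t *groups = malloc(ngroups * sizeof(gid_t));

        if (groups == NULL)
            return -1;

        /* getgrouplist() returns -1 and sets ngroups to the required
         * size when the buffer is too small, so retry once. */
        if (getgrouplist(user, primary_gid, groups, &ngroups) == -1) {
            gid_t *bigger = realloc(groups, ngroups * sizeof(gid_t));
            if (bigger == NULL ||
                getgrouplist(user, primary_gid, bigger, &ngroups) == -1) {
                free(bigger != NULL ? bigger : groups);
                return -1;
            }
            groups = bigger;
        }

        *out_gids = groups;
        *out_count = ngroups;
        return 0;
    }

Every one of these calls goes through NSS, which is why a burst of them
shows up as load on sssd.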

When a Gluster volume is used, certain operations are sent to all the
bricks (e.g. some directory-related operations). I can imagine that
a network share which is used by many users triggers many getgrouplist()
calls in different brick processes at (almost) the same time.

For reference, the usage of getgrouplist() in the brick process can be
found here:
- https://github.com/gluster/glusterfs/blob/master/xlators/protocol/server/src/server-helpers.c#L24

The gid_resolve() function gets called when the brick process needs to
resolve the groups itself (and ignore the list of groups from the
protocol). It uses the gidcache functions from a private library:
- https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/gidcache.h
- https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/gidcache.c
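
The pattern is roughly the following. This is a simplified sketch; the
struct and function names are illustrative and do not match the actual
gidcache API (see the files above for the real thing):

    /* Simplified sketch of the lookup-or-resolve pattern; illustrative
     * names only, not the real gidcache API. */
    #include <sys/types.h>
    #include <grp.h>
    #include <stdlib.h>
    #include <time.h>

    struct gid_entry {
        uid_t   uid;
        gid_t  *gids;
        int     ngids;
        time_t  deadline;               /* entry is valid until this time */
    };

    #define CACHE_SLOTS 256
    static struct gid_entry cache[CACHE_SLOTS];
    static time_t cache_timeout = 2;    /* matches the 2 second default */

    /* Return the cached gids for a uid, or resolve them through NSS
     * (and therefore through sssd) on a miss or an expired entry. */
    static int gids_for_uid(uid_t uid, const char *user, gid_t primary,
                            gid_t **gids, int *ngids)
    {
        struct gid_entry *e = &cache[uid % CACHE_SLOTS];
        time_t now = time(NULL);

        if (e->gids != NULL && e->uid == uid && now < e->deadline) {
            *gids = e->gids;            /* cache hit */
            *ngids = e->ngids;
            return 0;
        }

        /* cache miss or expired entry: one-shot buffer for the sketch;
         * real code retries with a larger buffer when getgrouplist()
         * reports that the buffer is too small */
        int n = 128;
        gid_t *buf = malloc(n * sizeof(gid_t));
        if (buf == NULL || getgrouplist(user, primary, buf, &n) == -1) {
            free(buf);
            return -1;
        }

        free(e->gids);
        e->uid = uid;
        e->gids = buf;
        e->ngids = n;
        e->deadline = now + cache_timeout;

        *gids = buf;
        *ngids = n;
        return 0;
    }

With a short timeout, a burst of requests from many users can still fall
through to NSS from several brick processes at once.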

The default expiry time for the gidcache is 2 seconds. Users can
configure this to 30 seconds (or any other value) with:

    # gluster volume set <VOLUME> server.gid-timeout 30


I think this explains the use case sufficiently, but let me know if there
are any remaining questions. It might well be possible to make this code
more sssd-friendly; we as Gluster developers are certainly open to any
suggestions.

Thanks,
Niels

> 
> Thanks,
> Vijay
> 
> 
> >Sorry, I forgot to CC Vijay.
> >
> >>
> >>The other use-case is slapi-nis that calls getgrouplist() -- Alexander
> >>would explain better here, I only know that slapi-nis uses
> >>getgrouplist() during memberUID processing, but I'm fuzzy on the
> >>details.
> >>
> >>>
> >>>>I already know about two projects that would benefit from a faster
> >>>>initgroups operation - GlusterFS uses getgrouplist() quite a lot with
> >>>>some setups, and also in the IPA server mode, the SSSD running on the
> >>>>server becomes a bit of a bottleneck for some operations.
> >>>
> >>>Is the problem related to accessing the main ldb cache, or the fact
> >>>that we also go out and contact the IdM server?
> >>
> >>Mostly the cache. In particular, in the GlusterFS case, there was a
> >>huge filesystem directory that triggered 4000+ getgrouplist() calls, one
> >>per file. Even though the getgrouplist() calls were for fewer than 10
> >>users, we would hit LDB every time, and that produced quite a load on
> >>the SSSD.
> >>
> >>>
> >>>>Are there any technical reasons against a new memcache file that would
> >>>>cache initgroups?
> >>>
> >>>The main reason would be potential inconsistency between this cache and
> >>>the users and groups caches, but I do not think that would really be a
> >>>big deal if there is a pretty good reason for the cache.
> >>
> >>Hm, I see, then we have the same information duplicated in two caches,
> >>just from a different angle. However, for cases where precision really
> >>matters (access control), we don't use these caches anyway.
> >>
> >>Could we mitigate this problem by invalidating the memcache when group
> >>membership changes at all during a backend operation? I think changes to
> >>LDAP objects are so rare that invalidating the memcache (which is quite
> >>short-lived anyway) would be worth it.
> >>
> >>>
> >>>Do we need to ensure all groups are resolvable in the groups cache?
> >>
> >>I don't think so, if the group is not in the groups cache, then we just
> >>hit the responder.
> >>
> >>>Or is it ok if gids returned by the getgrouplist are not immediately
> >>>available in the groups cache?
> >>>What about the user?
> >>
> >>I *think* the user could also be left out. If the application also needs
> >>user data, it simply calls getpwnam -- like id(1) does. getgrouplist is
> >>a different interface with different outputs.
> >>
> >>>
> >>>Simo.
> >>>
> >>>--
> >>>Simo Sorce * Red Hat, Inc * New York
> 


