[SSSD] [PATCH] mmap_cache: Use two chains for hash collision.

Simo Sorce simo at redhat.com
Thu Sep 12 13:40:57 UTC 2013


On Thu, 2013-09-12 at 09:05 +0200, Lukas Slebodnik wrote:
> On (11/09/13 17:55), Simo Sorce wrote:
> >On Wed, 2013-09-11 at 21:21 +0200, Lukas Slebodnik wrote:
> >> On (11/09/13 12:50), Simo Sorce wrote:
> >> >On Wed, 2013-09-11 at 17:39 +0200, Lukas Slebodnik wrote:
> >> >> On (11/09/13 16:30), Michal Židek wrote:
> >> >> >On 09/11/2013 04:16 PM, Simo Sorce wrote:
> >> >> >>On Wed, 2013-09-11 at 15:58 +0200, Lukas Slebodnik wrote:
> >> >> >>>On (11/09/13 09:39), Simo Sorce wrote:
> >> >> >>>>On Wed, 2013-09-11 at 12:24 +0200, Lukas Slebodnik wrote:
> >> >> >>>>>ehlo,
> >> >> >>>>>
> >> >> >>>>>Two patches are attached.
> >> >> >>>>>
> >> >> >>>>>The 1st patch reverts commit 4662725ffef62b3b2502481438effa7c8fef9f80.
> >> >> >>>>>"mmap_cache: Skip records which doesn't have same hash"
> >> >> >>>>>  --next patch lokks nicer with recerted commit 4662725ffe
> >> >> >>>>>
> >> >> >>>>>
> >> >> >>>>>The 2nd patch changes behaviour of record chaining.
> >> >> >>>>>
> >> >> >>>>>LS
> >> >> >>>>
> >> >> >>>>NACK, you cannot just change the record structure w/o bumping the whole
> >> >> >>>>mmap MAJOR version number.
> >> >> >>>>
> >> >> >>>>Aside:
> >> >> >>>>
> >> >> >>>>If we are not I am thinking whether we should do some other
> >> >> >>>>optimizations to keep the size down.
> >> >> >>>>
> >> >> >>>>For example, hashes are generally relatively small quantities as we do
> >> >> >>>>not allow huge tables. If we can limit tables size to a max of 65k
> >> >> >>>>elements we could change the hash type from uint32_t to uint16_t, and
> >> >> >>>>end up using the same size we are using now.
> >> >> >>>>We already limit the total mmap size to 4G as rel_prt_t is a uint32_t,
> >> >> >>>>and in practice we'll never allow maps with a very large size as it eats
> >> >> >>>>into the memory available to 32 bit processes.
> >> >> >>>>So perhaps 65k for the hash tables is an acceptable limit. (I think we
> >> >> >>>>currently have a 100k limit which is probably a little bit overblown as
> >> >> >>>>a default anyway.
> >> >> >>>>
> >> >> >>>>Simo.
> >> >> >>>>
> >> >> >>>I would rather reduce "uint64_t expire" into "uint32_t expire_offset"
> >> >> >>>
> >> >> >>>expire_offset will be time since creation of mmap cache.
> >> >> >>>and uint64_t "creation_time" will be stored in "struct sss_mc_ctx"
> >> >> >>
> >> >> >>Uhmm, I do agree that 4billion seconds offset is probably ok, but then
> >> >> >>it means the information is split in 2 different places, and this will
> >> >> >>make clients more complicated, as clients will have to keep track of the
> >> >> >>creation time + expiration offset now ...
> >> >> >>
> >> >> >>>Do you agree?
> >> >> >>
> >> >> >>I am not 100% convinced yet, although I do agree that it would save
> >> >> >>quite some space.
> >> >> >>
> >> >> >>>Do I need to increment SSS_MC_MAJOR_VNO or only SSS_MC_MINOR_VNO?
> >> >> >>
> >> >> >>Major, you are completely changing the structure, an older client cannot
> >> >> >>be allowed to use the new map unless it explicitly know the layout
> >> >> >>changed.
> >> >> >
> >> >> >Well, if we already have to change the major version, I would go with
> >> >> >the two 32 bit next1 and nex2 members. If we really want to keep the
> >> >> >current size of the record (btw. why? We have to change the major ver.
> >> >> >anyway) then I like the two 16 bit pointers next1 and next2 more than
> >> >> >the 32 bit time offset (the memcache is just another cache layer, so it
> >> >> >does not have to be "gigantic"... 16 bit pointers are just fine IMO).
> >> >> >
> >> >> I don't like 16 bit next pointers.
> >> >> 
> >> >> Default value of hash table size is 100000 (for passwd)
> >> >> It is 53% greater value than UINT_16_MAX.
> >> >> If we decided to enlarge this default value in future, we would be still
> >> >> limited by UINT_16_MAX.
> >> >> 
> >> >> I think it is not good idea to reduce size of next pointers.
> >> >
> >> >I am thinking we might want to change the way we create the way we
> >> >handle record size to see if we can make a change that will be more
> >> >future proof.
> >> >
> >> >If we stored the size of the records in the header and always calculate
> >> >the position of b2 based on that information then we could introduce a
> >> >variable size record structure. This would allow in future to increase
> >> >the size of the structure for optional parameters w/o necessarily having
> >> >to bump up the major version.
> >> >
> >> >However I want to have at the very least 64bit alignemtn always
> >> >preserverd, so from 32 bytes we want to grow in 8 bytes steps, which
> >> >means adding a uint32_t padding before b2.
> >> totally agree.
> >> 
> >> SSS_MC_MAJOR_VNO was increased.
> >> padding was added. sizeof(struct sss_mc_ctx) will be 40
> >> 
> >> Updated patches are attached.
> >
> >It looks ok to me.
> >
> >However, just to be clear we are planning on pushing this to 1.12 only
> >right ?
> >
> >I am not hugely comfortable with pushing it to existing releases.
> >
> >Simo.
> >
> 
> The main problem is that existing releases could crash in some rare situations
> There is a report in bugzilla one week old.
> https://bugzilla.redhat.com/show_bug.cgi?id=997406#c12
> 
> User reported crash and he had latest sssd 1-10 with all
> mmap_cache patches; but it didn't help. sssd_nss crashed.
> 
> We had a long discussion about solution (revalidation of chains), but we did
> not find any simple "back-compatible" fix.
> 
> How do we want to fix it in older branches If we don't want to backport patch
> from previous mail.

The problem is that a database format change is not to be taken lightly
as it kills completely the use of the mmap cache until all long lived
processes are restarted. If we can detect the problem I would rather
nuke the caches.
Deletions are rare, they happen only when a user disappear IIRC, so if
we have a simple method of detecting when a deletion may cause issue we
can simply reset the mmap cache then it would have less dramatic effects
I think. But I am open to other ideas.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York




More information about the sssd-devel mailing list