On 04/23/2012 08:01 AM, Russell Beall wrote:
I've been running some more tests before setting up the ticket, but I think I have enough information now.  The uniqueMember attribute has extra processing overhead, but the necessary optimization might apply across the board to all attributes.  I also found that adding large sets of values to other attributes increases modification times heavily, though not quite as much as with uniqueMember.

uniqueMember is a DN syntax attribute.  DN syntax values are "expensive" to handle due to normalization overhead.
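To give a sense of what that normalization involves, here is a rough sketch using python-ldap's DN parser (illustrative only, not the server's actual normalization code, and the simple lowercasing policy is an assumption): the same member DN can be written with different case and spacing, and every uniqueMember value has to be reduced to a canonical form before it can be compared or indexed.

    # Illustration of DN normalization cost (not actual slapd code).
    # Uses python-ldap's DN parser; lowercasing everything is a simplification.
    import ldap.dn

    def normalize_dn(dn):
        # Parse the DN into RDN components, then re-serialize it in a
        # canonical form: no stray whitespace, lowercased types and values.
        rdns = ldap.dn.str2dn(dn)
        canonical = [[(attr.lower(), value.lower(), flags)
                      for attr, value, flags in rdn] for rdn in rdns]
        return ldap.dn.dn2str(canonical)

    # Both spellings refer to the same entry and must normalize identically,
    # so every uniqueMember value added triggers this kind of work.
    print(normalize_dn("UID=JDoe, OU=People, DC=example, DC=com"))
    print(normalize_dn("uid=jdoe,ou=People,dc=example,dc=com"))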

Luckily, the modification delay depends on the size of the modification rather than the size of the entry, so even against a 100K-value attribute, a change that only removes a few members and adds a few others is still relatively quick.  The delay is most noticeable when first populating a group: for instance, adding 100K members to an empty group takes 2.5 hours on 389, as opposed to 1 minute on Sun DS.
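For clarity, the quick case is a single modify that only touches a handful of values, roughly like this (python-ldap sketch; the host, credentials, and DNs are made up):

    # Incremental group update: drop a few members and add a few others in
    # one modify.  Hypothetical server, credentials, and DNs.
    import ldap

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    group_dn = "cn=biggroup,ou=Groups,dc=example,dc=com"
    changes = [
        (ldap.MOD_DELETE, "uniqueMember",
         [b"uid=olduser1,ou=People,dc=example,dc=com",
          b"uid=olduser2,ou=People,dc=example,dc=com"]),
        (ldap.MOD_ADD, "uniqueMember",
         [b"uid=newuser1,ou=People,dc=example,dc=com",
          b"uid=newuser2,ou=People,dc=example,dc=com"]),
    ]
    # Even though the group holds ~100K uniqueMember values, only these four
    # values have to be normalized and merged, so the operation stays fast.
    conn.modify_s(group_dn, changes)
    conn.unbind_s()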

That's very interesting.  Does Sun DS have some sort of tuning parameter for the number of values?  That is, they may have a threshold for the number of values in an attribute - once the count hits that threshold, they may switch to some sort of ADT to store the values, such as an AVL tree or hash table, rather than the simple linked list used by default.
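Conceptually it would be something like this (purely illustrative Python, not slapd internals; the threshold value is made up): keep the plain list for small attributes, and once the value count crosses the threshold, maintain a hashed index alongside it so the duplicate check for each added value stops being O(n).

    # Illustrative only -- not slapd code.  Shows why switching from a plain
    # list to a hashed structure helps once a value set gets very large.
    SWITCH_THRESHOLD = 1000  # hypothetical tuning parameter

    class ValueSet:
        def __init__(self):
            self.values = []   # cheap and simple for small attributes
            self.index = None  # built lazily once the set grows large

        def add(self, value):
            if self.index is None and len(self.values) >= SWITCH_THRESHOLD:
                self.index = set(self.values)      # one-time switch to O(1) lookups
            if self.index is not None:
                if value in self.index:            # O(1) duplicate check
                    raise ValueError("type or value exists")
                self.index.add(value)
            elif value in self.values:             # O(n) scan, fine for small sets
                raise ValueError("type or value exists")
            self.values.append(value)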


Also during this testing I noticed a memory leak when running large quantities of ldapmodify operations.  When I set up a loop to delete and then re-add the eduPersonEntitlement attribute across 100K entries, memory consumption increased continuously and the server crashed after the fifth iteration of the loop.  (And this one really is with ldapmodify; it is not related to my earlier issue of creating excessive tombstones by deleting and re-adding entire entries.)  Before digging into this too deeply and filing another ticket, I wanted to ask whether this has been noticed and fixed in the 1.2.10 release.  I am using the default 1.2.9.16 release; I'm guessing it hasn't, since I didn't see it in the release notes.
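For reference, the test loop is essentially the following (python-ldap sketch; the suffix, credentials, and entitlement value are made up):

    # Stress test: strip and re-add eduPersonEntitlement on every entry,
    # over and over.  Hypothetical suffix, credentials, and value.
    import ldap

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    suffix = "ou=People,dc=example,dc=com"
    # "1.1" asks for no attributes; we only need the DNs.
    entries = conn.search_s(suffix, ldap.SCOPE_ONELEVEL, "(uid=*)", ["1.1"])

    for iteration in range(10):   # the server crashed during the fifth pass
        for dn, _attrs in entries:
            # Delete the whole attribute, then add a single value back.
            conn.modify_s(dn, [(ldap.MOD_DELETE, "eduPersonEntitlement", None)])
            conn.modify_s(dn, [(ldap.MOD_ADD, "eduPersonEntitlement",
                                [b"urn:mace:example.edu:entitlement:test"])])
    conn.unbind_s()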

Try increasing your nsslapd-cachememsize and monitoring it closely.  The size of id2entry.db4 is a good starting point, but that alone will not be enough.
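You can watch the entry cache from cn=monitor while the test runs, roughly like this (python-ldap sketch; it assumes the backend is named userRoot and uses made-up credentials and cache size - adjust the DNs and value for your instance):

    # Read the ldbm entry-cache monitor for a backend (assumed "userRoot")
    # and, if needed, raise nsslapd-cachememsize on its config entry.
    import ldap

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    monitor_dn = "cn=monitor,cn=userRoot,cn=ldbm database,cn=plugins,cn=config"
    result = conn.search_s(monitor_dn, ldap.SCOPE_BASE, "(objectClass=*)",
                           ["currententrycachesize", "maxentrycachesize",
                            "currententrycachecount"])
    for dn, entry in result:
        for name, values in entry.items():
            print(name, values[0].decode())

    # Bump the entry cache to 2GB on the backend config entry.
    backend_dn = "cn=userRoot,cn=ldbm database,cn=plugins,cn=config"
    conn.modify_s(backend_dn, [(ldap.MOD_REPLACE, "nsslapd-cachememsize",
                                [b"2147483648"])])
    conn.unbind_s()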

http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/Monitoring_Server_and_Database_Activity-Monitoring_Database_Activity.html

See also https://fedorahosted.org/389/ticket/51 and https://bugzilla.redhat.com/show_bug.cgi?id=697701


I am starting up the server with the valgrind command you recommended a few messages back to see if I can spot the leak, though of course with valgrind in the mix the overhead is substantial and runtimes are, as expected, much longer.

Yes, and valgrind will report many false positives that are hard to weed through.

The issue you are seeing may not be a memory leak per se - see the ticket/bug above.


Regards,
Russ.

On Apr 19, 2012, at 1:42 PM, Rich Megginson wrote:

OK.  If you've ruled out the possibility that some plugin is interfering with the processing, then it must be something we will have to fix in the core server.  Please file a ticket at https://fedorahosted.org/389



--
389 users mailing list
389-users@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users