On 10/12/2011 04:15 PM, Justin Gronfur wrote:
On 10/12/2011 03:07 PM, Rich Megginson wrote:
> This is helpful. Any chance you could paste the entire stack
> traces? For example,
> #0 0x0000003735c3c868 in slapi_get_mapping_tree_node_by_dn@plt ()
> from /usr/lib64/dirsrv/libslapd.so.0
> #0 0x0000003735c4ad38 in slapi_dn_normalize_ext () from
> /usr/lib64/dirsrv/libslapd.so.0
> etc. are nice to have, but much better would be the entire stack
> traces of these calls so we can see where they are called from.
Attached is a set of full gstack dumps taken at 1-second intervals.
The majority of it consists of the select/poll/etc. calls that I
filtered out last time, but left in this time for context.
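For reference, dumps like these can be collected with a loop along
these lines (a minimal sketch, assuming gstack is installed and the
server runs as a single ns-slapd process; the dump count and file
names are illustrative):

# grab one full all-threads stack dump per second
pid=$(pidof ns-slapd)
for i in $(seq 1 30); do
    gstack "$pid" > "gstack.$i.txt"
    sleep 1
done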
Thanks.
The select/poll calls are the server basically sitting idle, waiting
on a condition variable for new work to perform.
You can eliminate many of these by decreasing your cn=config
nsslapd-threadnum setting. The default is 30, but you may find better
performance by setting it to somewhere around 2 times the number of
CPUs/cores you have on your machine (but at least 8).
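For example, a change along these lines would drop it to 16 (a sketch
only - adjust the host and credentials for your setup; note that
current 389 DS releases spell the attribute nsslapd-threadnumber, and
the server may need a restart to pick up the change):

# lower the worker thread count on cn=config
ldapmodify -x -H ldap://localhost:389 -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-threadnumber
nsslapd-threadnumber: 16
EOF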
Do you know if any of these come from a period of time during which the
server is consuming a lot of CPU?
One of my coworkers wanted me to mention that we use long-running
ldap connections (tied to a user's session for the duration of that
session, unless the session is replicated to another JVM instance). I
know that isn't really standard, but I don't think it should cause
these problems.
No, that should not be a problem. And it is standard - many apps do
this (e.g., a web service that uses ldap for auth will not want to
open/close a connection for every single user; it will typically use
a connection pool of already-open, possibly idle connections).
Tomorrow I'm planning on writing a forking bash script to push the
exact same requests, under the exact same load, at 389 to determine
whether the problem is caused by the Java code or the container
itself (by eliminating both completely). I'll keep you posted on the
results.
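A forking load generator of that sort might look roughly like the
following (a sketch only - the URL, base DN, filter, worker count,
and duration are placeholders, not our real traffic):

# spawn 20 workers, each looping simple searches against the server
for w in $(seq 1 20); do
    (
        while true; do
            ldapsearch -x -H ldap://localhost:389 \
                -b "dc=example,dc=com" "(uid=user$w)" > /dev/null
        done
    ) &
done
sleep 60            # let the load run for a minute
kill $(jobs -p)     # then stop all workers
wait 2>/dev/null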
Thanks,
Justin