We have a customer that has been multi-threading behind multiple servers and writing to our Master server. These writes come in the form of heavy spikes (1k writes over 5-second intervals), very much burst traffic, and all of the writes are adding new items to the same ou. While we have plans to throttle them, I had a few questions:
a) If they're writing to the same ou / updating the same indexes, are they blocked on one item's success before another succeeds? If so, multi-threading behind multiple boxes does not give them any performance benefit. I would guess this is the case, but I want to be sure, because replication, which goes through a single thread IIRC, seems to be fine.
b) Are there any performance tweaks that can help? I thought maybe looking at nsslapd-threadnumber.
-Jeff
On 08/20/2013 10:39 PM, Jeffrey Dunham wrote:
a) If they're writing to the same ou / updating the same indexes, are they blocked on one item's success before another succeeds?
Writing to the same subtree is not an issue. Trying to update the same entry at the same time might slow it down a bit. The access log "etime" result (with microsecond logging: https://access.redhat.com/site/documentation/en-US/Red_Hat_Directory_Server/8.2/html/Configuration_and_Command-Line_Tool_Reference/logs-reference.html#Access_Log_Content-Access_Logging_Levels) will tell you more during these bulk updates.
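As a rough sketch (assuming the default access log location; adjust slapd-INST to your instance), you can pull the worst etimes out of the access log during a burst with something like:

# list the ten largest etimes seen on RESULT lines
grep RESULT /var/log/dirsrv/slapd-INST/access | sed 's/.*etime=\([0-9.]*\).*/\1/' | sort -rn | head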
b) Are there any performance tweaks that can help? I thought maybe looking at nsslapd-threadnumber.
This setting usually doesn't need to be adjusted, as the performance impact is not related to the number of threads but to what is being updated in the db. Look at the "cn=monitor" output for the backend (e.g. cn=monitor,cn=example,cn=ldbm database,cn=plugins,cn=config). You really want the cacheHitRatios to be as close to 99% as possible. Then adjust the cache sizes if needed.
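For example, something like this (assuming the backend really is named "example" as in the DN above) will dump the monitor entry so you can check the ratios:

ldapsearch -x -D "cn=Directory Manager" -W -s base -b "cn=monitor,cn=example,cn=ldbm database,cn=plugins,cn=config"

Look for entrycachehitratio and the related *hitratio attributes in the output.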
Regards, Mark
On 08/20/2013 08:39 PM, Jeffrey Dunham wrote:
We have a customer that has been multi-threading behind multiple servers and writing to our Master server. These writes come in the form of heavy spikes (1k writes over 5-second intervals), very much burst traffic, and all of the writes are adding new items to the same ou.
What is the platform? What version of 389-ds-base? How much RAM do you have? What is the size of your database?
While we have plans to throttle them, I had a few questions:
a) If they're writing to the same ou / updating the same indexes, are they blocked on one item's success before another succeeds?
Yes.
So in this case multi-threading behind multiple boxes does not give them any performance benefit. I would guess this is the case, but I want to be sure, because replication, which goes through a single thread IIRC, seems to be fine.
Replication on the supplier side or replication on the consumer side?
b) Are there any performance tweaks that can help? I thought maybe looking at nsslapd-threadnumber.
To speed up writes? That might help, but not much, since your bottleneck is that only one write can happen at a time.
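If you do want to experiment with it anyway, nsslapd-threadnumber is set on cn=config; a sketch (the value 64 is just an illustration, and IIRC a restart is needed on versions of that vintage):

ldapmodify -x -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-threadnumber
nsslapd-threadnumber: 64
EOF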
The first thing you should do is optimize your db and entry cache usage. You can use the https://github.com/richm/scripts/wiki/dbmon.sh script to monitor your cache usage, find out how much RAM you need for your caches, and see how much RAM you have left over for other tuning.
1) Try putting the db home directory on a RAM disk. By default, bdb uses memory-mapped files in /var/lib/dirsrv/slapd-INST/db. These have to be flushed to disk. Change nsslapd-db-home-directory to point to a RAM fs.
mkdir /dev/shm/slapd-INST ; chown nobody:nobody /dev/shm/slapd-INST ; chmod 0700 /dev/shm/slapd-INST
Then shut down dirsrv, edit /etc/dirsrv/slapd-INST/dse.ldif, and in the dn: cn=config,cn=ldbm database,cn=plugins,cn=config entry add nsslapd-db-home-directory: /dev/shm/slapd-INST (see the sketch after the doc link below).
NOTE: This will use the amount of RAM specified by nsslapd-dbcachesize, so make sure you have enough RAM.
https://access.redhat.com/site/documentation/en-US/Red_Hat_Directory_Server/9.0/html/Configuration_Command_and_File_Reference/Database_Plug_in_Attributes.html#Database_Attributes_under_cnconfig_cnldbm_database_cnplugins_cnconfig
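For illustration, after the edit the ldbm config entry in dse.ldif would contain something like this (INST and the 2 GB cache size are placeholders, not recommendations):

dn: cn=config,cn=ldbm database,cn=plugins,cn=config
nsslapd-db-home-directory: /dev/shm/slapd-INST
nsslapd-dbcachesize: 2147483648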
2) Use different physical disks for your db directory, transaction log directory, and server log directory. If you can afford it, use a disk controller with a write back cache for the disk used for the transaction logs.
3) If you can afford the possibility of data loss, you can disable durable transactions.
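If you go that route, the setting is nsslapd-db-durable-transaction (verify the exact attribute name for your version before relying on this); with the server shut down, a line like this in the same dse.ldif entry as above should do it:

nsslapd-db-durable-transaction: off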
The reason I asked about nsslapd-threadnumber is because during the time of the spike, all transactions slow. Meaning that binds, adds, searches, etc. all start increasing in their etime until it hits the point where we've processed the majority of writes, and then etimes fall back to 0. The customer in this case is doing 1k adds to a subtree, an object with 10 attributes, three of which are indexed. I will also try the microsecond logging in test and see if I can recreate the issue and maybe see something there. Hopefully that explanation gives you a little more insight into our issue. I really don't want this one bad customer to affect the others.
"Replication on the supplier side or replication on the consumer side." The consumer takes the burst of writes into it's on database fine through replication, but they're coming in obviously on a single replication session. It's using the same hardware/ds version.
FWIW we're using 1.2.11 on RHEL 5.4; we're switching over to 1.3.1 on RHEL 6 in a few months.
On 8/21/2013 9:14 AM, Jeffrey Dunham wrote:
The reason I asked about nsslapd-threadnumber is because during the time of the spike, all transactions slow. Meaning that binds, adds, searches, etc. all start increasing in their etime until it hits the point where we've processed the majority of writes, and then etimes fall back to 0. The customer in this case is doing 1k adds to a subtree, an object with 10 attributes, three of which are indexed.
This is actually quite strange: the server is designed to allow concurrent read operations while writes are in-flight. Initially I thought you were asking about multiple concurrent writes interfering with each other, which is plausible under some scenarios. However, writes blocking reads is more surprising. This could happen of course if there is contention for the underlying storage hardware: if the search references entries that are not in-cache already, or index pages that are not in the page pool, then it might wait on I/O already queued by writes.
One thing to note is that today you will see much (much!) better performance with SSD storage (use some kind of reliable "enterprise" SSD, not a random cheapo-drive intended for a laptop). One SSD will give you an order of magnitude more write performance than even multiple physical spindles. If it is the case that you're seeing I/O contention, then deploying an SSD drive should entirely solve the problem. Check the output from "iostat -x 1" while the spike is underway -- if the util% is high, or the queue length builds up, then you probably have an I/O bottleneck.
On 08/21/2013 09:29 AM, David Boreham wrote:
This is actually quite strange: the server is designed to allow concurrent read operations while writes are in-flight. Initially I thought you were asking about multiple concurrent writes interfering with each other, which is plausible under some scenarios. However, writes blocking reads is more surprising. This could happen of course if there is contention for the underlying storage hardware: if the search references entries that are not in-cache already, or index pages that are not in the page pool, then it might wait on I/O already queued by writes.
Two other cases:
1) There are so many write threads that there are not enough threads available for search requests.
2) The write lock on a database page acquired for a write operation will block search requests that attempt to acquire a read lock on the database page.
One thing to note is that today you will see much (much!) better performance with SSD storage (use some kind of reliable "enterprise" SSD, not a random cheapo-drive intended for a laptop). One SSD will give you an order of magnitude more write performance than even multiple physical spindles. If it is the case that you're seeing I/O contention, then deploying an SSD drive should entirely solve the problem. Check the output from "iostat -x 1" while the spike is underway -- if the util% is high, or the queue length builds up, then you probably have an I/O bottleneck.
+1
On 08/21/2013 05:29 PM, David Boreham wrote:
This is actually quite strange: the server is designed to allow concurrent read operations while writes are in-flight. Initially I thought you were asking about multiple concurrent writes interfering with each other, which is plausible under some scenarios. However, writes blocking reads is more surprising. This could happen of course if there is contention for the underlying storage hardware: if the search references entries that are not in-cache already, or index pages that are not in the page pool, then it might wait on I/O already queued by writes.
We don't have dedicated threads for read or write operations; in theory writes should not block reads, but if the write threads queue up for the backend lock there might be no threads available to do the reads.
On 8/21/2013 9:46 AM, Ludwig Krispenz wrote:
We don't have dedicated threads for read or write operations; in theory writes should not block reads, but if the write threads queue up for the backend lock there might be no threads available to do the reads.
I wasn't talking about threads. It is not true that writes can't block reads. You might say that typically writes won't block reads. However, there are several reasons (including the one I gave -- the read wants I/O that ends up delayed behind I/O ops initiated by writes) why reads can be blocked by concurrent write activity. As Rich mentioned also, a write txn can acquire exclusive locks on DB pages that the read subsequently touches.
Another thing you might try:
While the server is under stress, run the "pstack" command a few times and save the output.
If you post the thread stacks here, someone familiar with the code can say with more accuracy what's going on. For example it will be obvious whether you have starved out the thread pool, or you have threads mostly waiting on page locks in the DB, etc.
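For example, a rough sketch (assuming the process is ns-slapd and pstack is installed):

# grab five snapshots, two seconds apart
for i in 1 2 3 4 5; do pstack $(pidof ns-slapd) > /tmp/stacks.$i.txt; sleep 2; done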
On 08/21/2013 09:53 AM, David Boreham wrote:
Another thing you might try:
While the server is under stress, run the "pstack" command a few times and save the output.
gdb will give much more detail: http://port389.org/wiki/FAQ#Debugging_Hangs
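From memory, the command on that page looks something like this (check the FAQ for the exact form, and install the debuginfo packages first so you get symbols):

gdb -ex 'set confirm off' -ex 'set pagination off' -ex 'thread apply all bt full' -ex 'quit' /usr/sbin/ns-slapd $(pidof ns-slapd) > stacktrace.txt 2>&1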
So following your advice I was able to get some stack traces while the server was hanging/slow to respond. This is from one of our search hosts. I have shortened it here considerably because we do have customer data present; I can do some more scrubbing later if it will help. It seems to me to revolve around indexes. I know we increased our allidslimit pretty high, to 500000; I'm wondering if that has anything to do with it.
Out of the 30 worker threads, 28 of them are in a state like:

Thread 3 (Thread 0x2aef51f20940 (LWP 2569)):
#0  0x000000328800b019 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00002aeeae1ba4f6 in __db_pthread_mutex_lock () from /lib64/libdb-4.3.so
#2  0x00002aeeae242619 in __lock_get_internal () from /lib64/libdb-4.3.so
#3  0x00002aeeae242b7f in __lock_vec () from /lib64/libdb-4.3.so
#4  0x00002aeeae222d30 in __db_lget () from /lib64/libdb-4.3.so
#5  0x00002aeeae1cac72 in __bam_search () from /lib64/libdb-4.3.so
#6  0x00002aeeae1bd8d7 in ?? () from /lib64/libdb-4.3.so
#7  0x00002aeeae1bea4f in ?? () from /lib64/libdb-4.3.so
#8  0x00002aeeae218829 in __db_c_get () from /lib64/libdb-4.3.so
#9  0x00002aeeadf289ed in idl_new_fetch (be=0x1dd03130, db=<value optimized out>, inkey=0x2aef51f10760, txn=<value optimized out>, a=0x1dd44940, flag_err=0x2aef51f175bc, allidslimit=500000) at ldap/servers/slapd/back-ldbm/idl_new.c:223
There is a large unindexed query running on one of the other threads [base: o=example.com, filter: (&(objectclass=posixaccount)(uid=*))]:

Thread 8 (Thread 0x2aef4ed1b940 (LWP 2564)):
#0  0x000000328800e5c8 in pread64 () from /lib64/libpthread.so.0
#1  0x00002aeeae25c5dd in __os_io () from /lib64/libdb-4.3.so
#2  0x00002aeeae25168b in __memp_pgread () from /lib64/libdb-4.3.so
#3  0x00002aeeae2527dd in __memp_fget () from /lib64/libdb-4.3.so
#4  0x00002aeeae1ca938 in __bam_search () from /lib64/libdb-4.3.so
#5  0x00002aeeae1bd8d7 in ?? () from /lib64/libdb-4.3.so
#6  0x00002aeeae1bea4f in ?? () from /lib64/libdb-4.3.so
#7  0x00002aeeae218829 in __db_c_get () from /lib64/libdb-4.3.so
#8  0x00002aeeae220fe6 in __db_get () from /lib64/libdb-4.3.so
#9  0x00002aeeae22115a in __db_get_pp () from /lib64/libdb-4.3.so
#10 0x00002aeeadf24266 in id2entry (be=0x1dd03130, id=7630577, txn=0x2aef4ed104e0, err=0x2aef4ed10544) at ldap/servers/slapd/back-ldbm/id2entry.c:315
        inst = (ldbm_instance *) 0x1dc8d180
        db = (DB *) 0x1dd01080
        db_txn = (DB_TXN *) 0x0
        key = {data = 0x2aef4ed10450, size = 4, ulen = 0, dlen = 0, doff = 0, flags = 0}
        data = {data = 0x0, size = 0, ulen = 0, dlen = 0, doff = 0, flags = 4}
        e = (struct backentry *) 0x0
        ee = <value optimized out>
        temp_id = "\000tnñ"
And another locked worker thread:

#0  0x000000328800d654 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x0000003288008f4a in _L_lock_1034 () from /lib64/libpthread.so.0
#2  0x0000003288008e0c in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00002aeeae1ba54c in __db_pthread_mutex_lock () from /lib64/libdb-4.3.so
#4  0x00002aeeae252a51 in __memp_fget () from /lib64/libdb-4.3.so
#5  0x00002aeeae218d73 in __db_c_get () from /lib64/libdb-4.3.so
#6  0x00002aeeadf28b63 in idl_new_fetch (be=0x1dd03130, db=<value optimized out>, inkey=0x735755, txn=<value optimized out>, a=0x1dd421f0, flag_err=0x2aef4e3115bc, allidslimit=500000) at ldap/servers/slapd/back-ldbm/idl_new.c:298
And the replication thread appears to be locked as well:
#0  0x000000328800d654 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x0000003288008f80 in _L_lock_1233 () from /lib64/libpthread.so.0
#2  0x0000003288008f03 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x000000328ac23289 in PR_Lock () from /usr/lib64/libnspr4.so
#4  0x000000328ac234cb in PR_EnterMonitor () from /usr/lib64/libnspr4.so
#5  0x00002aeeadf1496c in cache_lock_entry (cache=0x1dc8d208, e=0x2af02d468c00) at ldap/servers/slapd/back-ldbm/cache.c:1455
#6  0x00002aeeadf23b31 in find_entry_internal (pb=0x2af022054ca0, be=0x1dd03130, addr=<value optimized out>, lock=1, txn=0x2aef3ddf9cb0, flags=0) at ldap/servers/slapd/back-ldbm/findentry.c:237
#7  0x00002aeeadf4df1a in ldbm_back_modify (pb=0x2af022054ca0) at ldap/servers/slapd/back-ldbm/ldbm_modify.c:269
On 08/29/2013 04:22 PM, Jeffrey Dunham wrote:
So following your advice I was able to get some stack traces while the server was hanging/slow to respond. This is from one of our search hosts. I have shortened it here considerably because we do have customer data present; I can do some more scrubbing later if it will help.
I would like to have the full stack trace, all the way up to connection_threadmain - if you need to elide/obscure customer information, please do, but please include the full stack trace.
It seems to me to revolve around indexes. I know we increased our allidslimit pretty high, to 500000; I'm wondering if that has anything to do with it.
Looks like the unindexed searches are hogging all of the resources and locking pages needed by updates.
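As a quick check (assuming the default log location), unindexed searches are flagged with notes=U on the access log RESULT lines:

grep 'notes=U' /var/log/dirsrv/slapd-INST/access

For that particular filter, a presence index on uid should take care of the uid=* component; objectclass already has an equality index by default.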
On 08/21/2013 09:14 AM, Jeffrey Dunham wrote:
The reason I asked about nsslapd-threadnumber is because during the time of the spike, all transactions slow. Meaning that binds, adds, searches, etc. all start increasing in their etime until it hits the point where we've processed the majority of writes, and then etimes fall back to 0. The customer in this case is doing 1k adds to a subtree, an object with 10 attributes, three of which are indexed. I will also try the microsecond logging in test and see if I can recreate the issue and maybe see something there. Hopefully that explanation gives you a little more insight into our issue. I really don't want this one bad customer to affect the others.
Ok. Please see my tuning recommendations.
"Replication on the supplier side or replication on the consumer side." The consumer takes the burst of writes into it's on database fine through replication, but they're coming in obviously on a single replication session. It's using the same hardware/ds version.
Replication updates are done on a single thread.
FWIW we're using 1.2.11 on RHEL5.4,
Did you build this yourself?
we're switching over to 1.3.1 on RHEL6 in a few months.
Are you planning to build this yourself?