I'm trying to get one-way replication going between a very old (1.3.7.8) server and a newly built (2.5.0 B2024.017.0000) server.
- The initial one-way full replication works.
- Incremental updates from the old to the new work for a time and then start failing, with these messages logged on the old server:
[18/Apr/2024:16:10:19.456632850 +0000] - WARN - NSMMReplicationPlugin - send_updates - agmt="cn=eds2prod-eds-ldap-63421-agreement" (eds-ldap-63421:389): Failed to send update operation to consumer (uniqueid 32245001-51c111ea-b1489889-5bf7d8fd, CSN 6620dbb5000400150000): Can't contact LDAP server. Will retry later.
[18/Apr/2024:16:10:19.458134915 +0000] - ERR - NSMMReplicationPlugin - release_replica - agmt="cn=eds2prod-eds-ldap-63421-agreement" (eds-ldap-63421:389): Unable to send endReplication extended operation (Can't contact LDAP server)
[18/Apr/2024:16:10:22.471856842 +0000] - INFO - NSMMReplicationPlugin - bind_and_check_pwp - agmt="cn=eds2prod-eds-ldap-63421-agreement" (eds-ldap-63421:389): Replication bind with SIMPLE auth resumed
[18/Apr/2024:16:20:22.661249153 +0000] - WARN - NSMMReplicationPlugin - repl5_inc_update_from_op_result - agmt="cn=eds2prod-eds-ldap-63421-agreement" (eds-ldap-63421:389): Consumer failed to replay change (uniqueid (null), CSN (null)): Can't contact LDAP server(-1). Will retry later.
- That set of errors repeats approximately every 2 hours. I'm assuming this has caused replication to halt and that it won't resume until it gets past whatever the issue is.
I was hoping I could skip the 2-hour retry wait by disabling and re-enabling the agreement, but that seems to have no effect.
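(For anyone following along: on a 1.3.x supplier, disabling/re-enabling an agreement typically means flipping nsds5ReplicaEnabled on the agreement entry. A rough sketch, with the agreement DN guessed from the agreement name in the logs and the suffix guessed from the bind DNs later in this thread:)

ldapmodify -x -D "cn=Directory Manager" -W <<EOF
dn: cn=eds2prod-eds-ldap-63421-agreement,cn=replica,cn="dc=eds,dc=arizona,dc=edu",cn=mapping tree,cn=config
changetype: modify
replace: nsds5ReplicaEnabled
nsds5ReplicaEnabled: off
EOF
# ...then the same modify with "nsds5ReplicaEnabled: on" to re-enable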
Any thoughts on how to attack this?
- Tim
Just to add:
- I can manually make a connection from the old to new using the replication account and password, so connectivity is fine.
- The old server is in 2-way replication with another server of the same version and that is all working perfectly.
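(The manual connectivity check can be as simple as an authenticated root-DSE search as the replication account; the host and bind DN below are placeholders:)

ldapsearch -x -H ldap://eds-ldap-63421:389 \
  -D "cn=replication manager,cn=config" -W \
  -s base -b "" "(objectClass=*)" vendorVersion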
Try reviewing the access log for connection events matching those in the errors log you posted. Also check, on the remote replica, the errors and access log events corresponding to that connection, as well as file descriptor and LDAP thread usage. Run some netstat and dsconf monitor commands:
dsconf SOME-INSTANCE-NAME-HERE config get nsslapd-conntablesize nsslapd-threadnumber nsslapd-maxdescriptors
dsconf SOME-INSTANCE-NAME-HERE monitor server
sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog
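(For the netstat part, something along these lines on each side will show connection states on the LDAP port; adjust the port if you use LDAPS:)

# connections involving port 389 and their TCP states
netstat -ant | grep ':389'
# or with ss, including the owning process
ss -tnp | grep ':389'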
Thanks for the help!
On the new server, there is nothing related in the errors log. In the access log, I see the successful binds from the old server:
[18/Apr/2024:16:10:22.457956096 +0000] conn=182814 op=0 BIND dn="cn=eds2.iam.arizona.edu:389,ou=Services,dc=eds,dc=arizona,dc=edu" method=128 version=3
[18/Apr/2024:16:10:22.465767099 +0000] conn=182814 op=0 RESULT err=0 tag=97 nentries=0 wtime=0.000061482 optime=0.007844345 etime=0.007904970 dn="cn=eds2.iam.arizona.edu:389,ou=services,dc=eds,dc=arizona,dc=edu"
[18/Apr/2024:16:10:22.490471613 +0000] conn=182814 op=1 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[18/Apr/2024:16:10:22.491081053 +0000] conn=182814 op=1 RESULT err=0 tag=101 nentries=1 wtime=0.000113881 optime=0.000613900 etime=0.000727198
[18/Apr/2024:16:10:22.492073298 +0000] conn=182814 op=2 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[18/Apr/2024:16:10:22.492563635 +0000] conn=182814 op=2 RESULT err=0 tag=101 nentries=1 wtime=0.000127618 optime=0.000492238 etime=0.000619213
[18/Apr/2024:16:10:22.493480021 +0000] conn=182814 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multisupplier-extop"
[18/Apr/2024:16:10:22.497607017 +0000] conn=182814 op=3 RESULT err=0 tag=120 nentries=0 wtime=0.000045592 optime=0.004128811 etime=0.004173982
There's no more activity from that connection until it times out (10-minute timeout set):
[18/Apr/2024:16:20:22.443255138 +0000] conn=182814 op=-1 fd=127 Disconnect - Connection timed out - Idle Timeout (nsslapd-idletimeout) - T1
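(For reference, that disconnect is driven by the server-wide nsslapd-idletimeout, presumably 600 seconds here given the 10-minute gap; it can be checked with dsconf, instance name being a placeholder:)

dsconf SOME-INSTANCE-NAME-HERE config get nsslapd-idletimeout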
Other info -
Old server:
nsslapd-conntablesize: 65535
nsslapd-threadnumber: 96
nsslapd-maxdescriptors: 65535
net.core.somaxconn = 128
net.ipv4.tcp_max_syn_backlog = 2048
New server:
nsslapd-conntablesize (attribute doesn't exist in the schema)
nsslapd-threadnumber: 16
nsslapd-maxdescriptors: 16384
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
I tried upping the threadnumber and maxdescriptors on the new server to match the old server, but that didn't seem to help.
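(For anyone trying the same, the equivalent dsconf change would look something like this; the instance name is a placeholder, and nsslapd-maxdescriptors needs a restart to take effect:)

dsconf SOME-INSTANCE-NAME-HERE config replace nsslapd-threadnumber=96 nsslapd-maxdescriptors=65535
systemctl restart dirsrv@SOME-INSTANCE-NAME-HERE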
Old server monitor command:
dn: cn=monitor
version: 389-Directory/1.3.9.0 B2018.304.1940
threads: 96
connection: (a bunch of these)
currentconnections: 80
totalconnections: 858539
currentconnectionsatmaxthreads: 0
maxthreadsperconnhits: 17
dtablesize: 65535
readwaiters: 0
opsinitiated: 8152665
opscompleted: 8152664
entriessent: 160176043
bytessent: 95033377994
currenttime: 20240418184550Z
starttime: 20240416182128Z
nbackends: 3
New server monitor command:
dn: cn=monitor
version: 389-Directory/2.5.0 B2024.017.0000
threads: 17
connection: 1:20240417165837Z:3:3:-:cn=Directory Manager:0:0:0:4:ip=local
connection: 2:20240418184424Z:3:2:-:cn=directory manager:0:0:0:200246:ip=127.0.0.1
currentconnections: 2
totalconnections: 200246
currentconnectionsatmaxthreads: 0
maxthreadsperconnhits: 0
dtablesize: 16258
readwaiters: 0
opsinitiated: 1913950
opscompleted: 1913949
entriessent: 1945806
bytessent: 206107062
currenttime: 20240418184424Z
starttime: 20240417165836Z
nbackends: 1
This seems to be working now. I tried the repl agreement poke command and nothing happened, but I tried it again the following day and it caused replication to resume. It has continued to work over the weekend. I'm not sure what the cause was, but I only need this to work a little while longer and then I can retire the old instances.
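(For anyone finding this thread later: the poke here would be dsconf's repl-agmt poke, which forces the agreement to wake up and retry immediately. A sketch, with the instance name as a placeholder and the suffix guessed from the bind DNs above:)

dsconf SOME-INSTANCE-NAME-HERE repl-agmt poke eds2prod-eds-ldap-63421-agreement --suffix "dc=eds,dc=arizona,dc=edu"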