Hi Everyone,

 

I’m afraid this is still ongoing.

The weird thing is that one other consumer (eldap3) was initialized successfully; the other two (eldap4, eldap5) fail on initialization with the error:

 

    NSMMReplicationPlugin - replica_replace_ruv_tombstone: failed to update replication update vector for replica dc=stg,dc=id,dc=ubc,dc=ca: LDAP error - 1

 

I’ve tried importing both the master’s and the other consumer’s databases into the two failed consumers, but then the master’s error log says:

   

    NSMMReplicationPlugin - agmt="cn=eldap4 consumer" (eldap4:636): Replica has a different generation ID than the local data.

    NSMMReplicationPlugin - agmt="cn=eldap5 consumer" (eldap5:636): Replica has a different generation ID than the local data.
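 

My understanding is that this error means the consumer’s data was not created by an initialization from this master, and that it only clears after a full online re-init of the consumer. On the command line that would be something like the following sketch (the agreement DN is only my best guess from the agmt names above; adjust it to your actual entry under cn=mapping tree,cn=config):

    # refresh-eldap4.ldif -- agreement DN guessed from "cn=eldap4 consumer"
    dn: cn=eldap4 consumer,cn=replica,cn="dc=stg,dc=id,dc=ubc,dc=ca",cn=mapping tree,cn=config
    changetype: modify
    replace: nsds5BeginReplicaRefresh
    nsds5BeginReplicaRefresh: start

    ldapmodify -x -D "cn=Directory Manager" -W -f refresh-eldap4.ldif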

 

Anyone have any pointers about this issue? 

 

Thanks a lot,

Trev

 

 

From: 389-users-bounces@lists.fedoraproject.org [mailto:389-users-bounces@lists.fedoraproject.org] On Behalf Of Fong, Trevor
Sent: Wednesday, September 24, 2014 4:21 PM
To: 389-users@lists.fedoraproject.org
Subject: Re: [389-users] Trouble with Replication - Initializing Consumers

 

Hi Noriko,

 

Thanks for your response.  We’re running 389-Directory/1.2.11.29 B2014.094.1833.  Which release would you advise we update to?

I’ve also made sure we’ve set the maxbersize and cache sizes to sufficiently large values, and we are not getting any errors or warnings about those.
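 

For the record, I’ve been checking the current values with something like the following (backend name taken from our logs; adjust as needed):

    ldapsearch -x -D "cn=Directory Manager" -W -b cn=config -s base nsslapd-maxbersize
    ldapsearch -x -D "cn=Directory Manager" -W \
        -b "cn=userRoot,cn=ldbm database,cn=plugins,cn=config" nsslapd-cachememsize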

 

Thanks,

Trev

 

 

From: 389-users-bounces@lists.fedoraproject.org [mailto:389-users-bounces@lists.fedoraproject.org] On Behalf Of Noriko Hosoi
Sent: Wednesday, September 24, 2014 2:47 PM
To: 389-users@lists.fedoraproject.org
Subject: Re: [389-users] Trouble with Replication - Initializing Consumers

 

Hello Trevor,

What's the version of 389-ds-base?

Your server may not have this fix (https://fedorahosted.org/389/ticket/47606) yet.  This is in 389-ds-base-1.3.1 and newer. 
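 

You can confirm the exact installed package version with, e.g.:

    rpm -q 389-ds-base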

Please check your nsslapd-maxbersize and nsslapd-cachememsize, and if they are not large enough, try increasing them; a sample ldapmodify sketch follows the notes below.

1. maxbersize: If the size of an entry is larger than the consumer's
   maxbersize, the following error used to be logged:
     Incoming BER Element was too long, max allowable is ### bytes.
     Change the nsslapd-maxbersize attribute in cn=config to increase.
   This message does not indicate how large the maxbersize needs to be.
   This patch adds the code to retrieve the failed ber size.
   Revised message:
     Incoming BER Element was @@@ bytes, max allowable is ### bytes.
         Change the nsslapd-maxbersize attribute in cn=config to increase.
   Note: There is no lber API that returns the ber size if it fails to
   handle the ber.  This patch borrows the internal structure of ber
   and gets the size.  This could be risky since the size or structure
   of the ber could be updated in the openldap/mozldap lber.
2. cache size: The bulk import depends upon the nsslapd-cachememsize
   value in the backend instance entry (e.g., cn=userRoot,cn=ldbm
   database,cn=plugins,cn=config).  If an entry size is larger than
   the cachememsize, the bulk import used to fail with this message:
     import userRoot: REASON: entry too large (@@@ bytes) for the
         import buffer size (### bytes).  Try increasing nsslapd-
         cachememsize.
   The message also followed the skipping-entry message:
     import userRoot: WARNING: skipping entry "<DN>"
   but in fact it did NOT "skip" the entry and continue the bulk
   import; it failed at that point and completely wiped out the backend
   database.
   This patch modifies the message as follows:
     import userRoot: REASON: entry too large (@@@ bytes) for the
         effective import buffer size (### bytes). Try increasing nsslapd-
         cachememsize for the backend instance "userRoot".
   and as the message mentions, it just skips the failed entry and
   continues the bulk import.
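
If either value turns out to be too small, raising them is a couple of ldapmodify operations; a minimal sketch follows (the numbers are examples only, and I believe nsslapd-maxbersize only takes effect after a server restart):

    # bump-sizes.ldif -- example values; size them to your largest entry
    dn: cn=config
    changetype: modify
    replace: nsslapd-maxbersize
    nsslapd-maxbersize: 209715200

    dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-cachememsize
    nsslapd-cachememsize: 2147483648

    ldapmodify -x -D "cn=Directory Manager" -W -f bump-sizes.ldif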


Fong, Trevor wrote:

Hi Everyone,

 

I’m having trouble initializing consumers for replication.

On two of the three consumers, I get the following message after trying to re-initialize them from the 389-console:

    NSMMReplicationPlugin - replica_replace_ruv_tombstone: failed to update replication update vector for replica dc=stg,dc=id,dc=ubc,dc=ca: LDAP error - 1

I’ve tried dumping the database with db2ldif and re-importing everything with ldif2db before re-initializing.

I’ve even tried importing an export from the master with ldif2db before re-initializing.

I always get the above error.
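
For completeness, the commands I used were along these lines (instance name omitted; I’m not sure whether it matters here, but I gather the -r flag is needed on the export to preserve the replication metadata, and ldif2db must run with the server stopped):

    # export, on the source server:
    /usr/lib64/dirsrv/slapd-<instance>/db2ldif -n userRoot -r -a /tmp/userRoot-repl.ldif

    # import, on the failed consumer (ns-slapd stopped):
    /usr/lib64/dirsrv/slapd-<instance>/ldif2db -n userRoot -i /tmp/userRoot-repl.ldif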

 

Does anyone know what’s going on or have any suggestions?

An extract from /var/log/dirsrv/slapd-instance/errors follows.

 

Thanks a lot,

Trev

 

 

 

[24/Sep/2014:14:13:13 -0700] - WARNING: Import is running with nsslapd-db-private-import-mem on; No other process is allowed to access the database

[24/Sep/2014:14:13:34 -0700] - import userRoot: Processed 19704 entries -- average rate 980.1/sec, recent rate 980.1/sec, hit ratio 0%

[24/Sep/2014:14:13:37 -0700] - ERROR bulk import abandoned

[24/Sep/2014:14:13:37 -0700] - import userRoot: Aborting all Import threads...

[24/Sep/2014:14:13:44 -0700] - import userRoot: Import threads aborted.

[24/Sep/2014:14:13:44 -0700] - import userRoot: Closing files...

[24/Sep/2014:14:13:44 -0700] - libdb: userRoot/id2entry.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:44 -0700] - libdb: userRoot/cn.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:44 -0700] - libdb: userRoot/aci.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:44 -0700] - libdb: userRoot/departmentnumber.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/member.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/krbprincipalname.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/parentid.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/nsuniqueid.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/ou.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/ubceducwlpuid.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/entryrdn.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/sn.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/givenName.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/objectclass.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/uniquemember.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/uid.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/memberOf.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/mail.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - libdb: userRoot/employeenumber.db4: unable to flush: No such file or directory

[24/Sep/2014:14:13:45 -0700] - import userRoot: Import failed.

[24/Sep/2014:14:13:45 -0700] - process_bulk_import_op: NULL target sdn

[24/Sep/2014:14:13:46 -0700] NSMMReplicationPlugin - replica_replace_ruv_tombstone: failed to update replication update vector for replica dc=stg,dc=id,dc=ubc,dc=ca: LDAP error - 1


