I've been looking through the archives for information, but I haven't stumbled on a solution to my problem.
I'm running 389-ds (389-ds-base-18.104.22.168) on a CentOS 7 box (CentOS Linux release 7.2.1511). I have a CentOS client configured to query the LDAP server over SSL/TLS. Per a previous thread, I configured the memberOf plugin and all seems to be working properly.
I have a PHP script that runs on the client and changes the LDAP password for a user. The problem is that the script looks for the SSHA hash
of the password when an ldapsearch is issued.
However, when I issue a general ldapsearch (anonymously) I don't get the userPassword attribute back. I read in your archives that I might have
to bind as the Directory Manager in order to see the hashed password. I've been playing around with the ldapsearch syntax, but I can't
quite get it right.
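For reference, the kind of search I have been trying looks roughly like this (host, base DN, and uid are placeholders):

    ldapsearch -x -H ldaps://ldap.example.com -D "cn=Directory Manager" -W \
        -b "ou=People,dc=example,dc=com" "(uid=someuser)" userPassword

i.e. binding as Directory Manager instead of anonymously, since that seems to be what is needed to read userPassword.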
Anyway, my question is: can I set a flag in 389-ds that will make the hashed userPassword visible? I think that would solve my problem with the PHP script returning an error that it can't retrieve the old password.
We have a few new servers deployed with 389-ds-base version 22.214.171.124-21. These servers were deployed in an environment where auto-patching happens and we forgot to disable that feature.
Overnight the servers were updated to 389-ds-base version 126.96.36.199-19. All of the upgraded servers are now in a bad state. We have tried multiple ways to reinitialize them but cannot seem to get the servers working again. In the logs at startup we see "Abnormal shutdown detected, rebuilding database", but then the server stays in that state and does nothing that I can tell, other than touching the timestamps on the __db files.
Are versions 188.8.131.52-21 and 184.108.40.206-19 not compatible? Can anyone suggest a way to reinit these servers without having to rebuild them?
Reinitialization has been attempted using the following methods:
- Console reinit. (Failed: cannot connect to the server.)
- Export the replica and import it on the affected server. (The import succeeds, but the server gets back into the "rebuilding database" state; the rough commands are sketched after this list.)
- Copied the entire instance directory (/var/lib/dirsrv/slapd-myinstance) and restarted. Same state as above.
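To be concrete about the export/import bullet, the kind of commands I mean are roughly the following (backend name and paths are placeholders, using the classic db2ldif/ldif2db scripts):

    # export the replica including its replication metadata (-r)
    db2ldif -n userRoot -r -a /tmp/userRoot-replica.ldif
    # then, on the server being reinitialized (with the instance stopped):
    ldif2db -n userRoot -i /tmp/userRoot-replica.ldif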
Paul M. Whitney
Sent from my browser.
We are seeing an issue with our replication agreements on 389DS. In the Console, we used to be able to tell when the last successful replication attempt started and when it ended, and the same for the initialization status.
With the new 389DS (currently using version 220.127.116.11-21), the timestamps always revert to "Wed, Dec 31, 19:00:00 EST 1969" (i.e., the Unix epoch rendered in EST), so we have no quick way of telling when the last successful replication or initialization occurred. Is this a feature or a bug/nuisance?
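In other words, the raw values I would expect the Console to be reading live on the agreement entries under cn=config, e.g. (attribute names as I understand them):

    ldapsearch -x -D "cn=Directory Manager" -W -b "cn=config" \
        "(objectClass=nsds5ReplicationAgreement)" \
        nsds5replicaLastUpdateStart nsds5replicaLastUpdateEnd nsds5replicaLastUpdateStatus \
        nsds5replicaLastInitStart nsds5replicaLastInitEnd nsds5replicaLastInitStatus

That would at least show whether the agreements themselves have sane values or whether the Console is failing to parse them.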
Paul M. Whitney
Sent from my browser.
389 Directory Server 1.3.7.9
The 389 Directory Server team is proud to announce 389-ds-base version 1.3.7.9.
Fedora packages are available on Fedora 28 (Rawhide).
The new packages and versions are:
Source tarballs are available for download at Download <http://www.port389.org/docs/389ds/download.html>.
Highlights in 1.3.7.9
* Version change
Installation and Upgrade
See Download <http://www.port389.org/docs/389ds/download.html> for
information about setting up your yum repositories.
To install, use *yum install 389-ds*. After the install completes, run *setup-ds-admin.pl* if you have 389-admin installed,
otherwise please run *setup-ds.pl* to set up your directory server.
To upgrade, use *yum upgrade*. After the upgrade completes, run *setup-ds-admin.pl -u* if you have 389-admin installed, otherwise please
run *setup-ds.pl* to update your directory server. See the install guide
<http://www.port389.org/docs/389ds/legacy/install-guide.html> for more
information about the initial installation, setup, and upgrade.
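In short (repeating the commands from the text above; the setup scripts come from the 389-ds packages):

    # fresh install
    yum install 389-ds
    setup-ds-admin.pl        # if 389-admin is installed
    setup-ds.pl              # otherwise

    # upgrade
    yum upgrade
    setup-ds-admin.pl -u     # if 389-admin is installed
    setup-ds.pl              # otherwise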
See Source <http://www.port389.org/docs/389ds/development/source.html>
for information about source tarballs and SCM (git) access.
We are very interested in your feedback!
Please provide feedback and comments to the 389-users mailing list: 389-users@lists.fedoraproject.org
If you find a bug, or would like to see a new feature, file it in our
Pagure project: https://pagure.io/389-ds-base
* Bump version to 1.3.7.9
* Ticket 49457 - Fix spal_meminfo_get function prototype
* Ticket 49455 - Add tests to monitor test suit.
* Ticket 49448 - dynamic default pw scheme based on environment.
* Ticket 49298 - fix complier warn
* Ticket 49298 - Correct error codes with config restore.
* Ticket 49454 - SSL Client Authentication breaks in FIPS mode
* Ticket 49453 - passwd.py to use pwdhash defaults.
* Ticket 49427 - whitespace in fedse.c
* Ticket 49410 - opened connection can remain no longer poll, like hanging
* Ticket 48118 - fix compiler warning for incorrect return type
* Ticket 49451 - Add environment markers to lib389 dependencies
* Ticket 49325 - Proof of concept rust tqueue in sds
* Ticket 49443 - scope one searches in 1.3.7 give incorrect results
* Ticket 48118 - At startup, changelog can be erronously rebuilt after
a normal shutdown
* Ticket 49412 - SIGSEV when setting invalid changelog config value
* Ticket 49441 - Import crashes - oneline fix
* Ticket 49377 - Incoming BER too large with TLS on plain port
* Ticket 49441 - Import crashes with large indexed binary attributes
* Ticket 49435 - Fix NS race condition on loaded test systems
* Ticket 77 - lib389 - Refactor docstrings in rST format - part 2
* Ticket 17 - lib389 - dsremove support
* Ticket 3 - lib389 - python 3 compat for paged results test
* Ticket 3 - lib389 - Python 3 support for memberof plugin test suit
* Ticket 3 - lib389 - config test
* Ticket 3 - lib389 - python 3 support ds_logs tests
* Ticket 3 - lib389 - python 3 support for betxn test
After reading posts on the list regarding ACIs, I was wondering what the preferred way would be
to grant access to the directory only for hosts on our own network.
In some comments I read that it is generally discouraged to use ACIs
with "not" logic, like:
ip != "10.0.0.*"
or something like this.
Does this apply to IP-address-based access too?
My approach would be just something like:
aci: (targetattr = "*")(version 3.0; acl "Bind from special IPs only"; deny (all) (ip != "192.168.100.*" and ip != "10.0.0.*");)
to allow access only from the 192.168.100.* network or from 10.0.0.*.
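The allow-based variant I would compare this against (again just a sketch, untested) would be something like:

    aci: (targetattr = "*")(version 3.0; acl "Allow only our networks"; allow (all) (ip = "192.168.100.*" or ip = "10.0.0.*");)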
As far as I understand, I have to define ACIs for every base DN
separately if I am running multiple databases. Is there any way to define
this for the whole server?
Thanks and Regards
I was going through Red Hat's documentation on naming conflicts. Here's what it says at the beginning (https://access.redhat.com/documentation/en-us/red_hat_directory_server/10...):
14.23.1. Solving Naming Conflicts
When two entries are created with the same DN on different servers, the automatic conflict resolution procedure during replication renames the last entry created, including the entry's unique identifier in the DN. Every directory entry includes a unique identifier given by the operational attribute nsuniqueid. When a naming conflict occurs, this unique ID is appended to the non-unique DN.
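For example (my reading of it, with a made-up uid and nsuniqueid), the renamed entry would end up with a DN along the lines of:

    nsuniqueid=66446001-1dd211b2-66225011-2ee211db+uid=jsmith,ou=People,dc=example,dc=com

and such conflict entries can apparently be located with a search on the nsds5ReplConflict attribute, e.g.:

    ldapsearch -x -D "cn=Directory Manager" -W -b "dc=example,dc=com" "(nsds5ReplConflict=*)" \* nsds5ReplConflict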
What I don’t understand is how conflicts can happen at all if the last one wins?
Also, they are talking about solutions to the conflicts and specifically renaming the user that has nsuniqueid prepended. Does that assume that the conflicting user is actually another person? Or does it mean the conflicting record denotes the same user? If it’s the same user, shouldn’t the conflicting record just be removed?
Really confused about the nature of the problem and the suggested solutions.
I see these messages in the errors log on a couple of suppliers:
[08/Nov/2017:22:44:13.612346436 +0000] NSMMReplicationPlugin - changelog program - agmt="cn=meToXXXXt": CSN 5a02f2c30002007c0000 not found, we aren't as up to date, or we purged
[08/Nov/2017:22:44:13.629490783 +0000] NSMMReplicationPlugin - agmt="cn=meToXXXX": Data required to update replica has been purged from the changelog. The replica must be reinitialized.
The documentation from Red Hat says:
agmt=%s(%s:%d): Can't locate CSN %s in the changelog (DB rc=%d). The consumer may need to be reinitialized. Most likely the changelog was recreated because the disk was full or the server shut down ungracefully. The local server will not be able to send any more changes to that consumer until the consumer is reinitialized or gets the CSN from other suppliers. If this is a single-master replication, reinitialize the consumers. Otherwise, see if the consumer can get the CSN from other suppliers. If not, reinitialize the consumer.
I've used cl-dump to look at the changelog, but none of the machines have a reference to that CSN. So where is the knowledge of that CSN coming from, and what can I do besides reinitializing? I've tried multiple reinitializations, but the error just moves to another machine.
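(For reference, the check I mean is roughly the following; host, credentials, and output path are placeholders, and the cl-dump flags are from memory, so treat them as approximate:)

    cl-dump -h supplier1.example.com -p 389 -D "cn=Directory Manager" -w password -o /tmp/changelog.ldif
    grep -i 5a02f2c30002007c0000 /tmp/changelog.ldif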
The replication doesn't seem to be affected, however. The symptom is error 18 in the output of `ipa-replica-manage list $HOSTNAME`.
Mark Reynolds did suggest it was possibly a bug in 389-ds-base-188.8.131.52-21.el7_3.x86_64, but it’s not feasible to upgrade at the moment. Any way to manually maneuver out of this without upgrading?