Hello William,
Thank you for the advice.
Hey there!
Great to hear you want to use this in a container. I have a few pieces of advice here.
From reading this it looks like you want to have:
[ Container 1 ] [ Container 2 ] [ Container 3 ]
       |               |               |
       +---------------+---------------+
                       |
               [ Shared Volume ]
So first off, this is *not* possible or supported. Every DS instance needs its own
volume, and they replicate to each other:
[ Container 1 ] [ Container 2 ] [ Container 3 ]
       |               |               |
  [ Volume 1 ]    [ Volume 2 ]    [ Volume 3 ]
You probably also can't autoscale (easily) as a result of this. I'm still working on
ideas to address this ...
But you can manually scale, if you script things properly.
I have a separate persistent volume mounted to each container, as you suggest. I use a
statefulset, so the same volume is mounted across container replacements.
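For reference, this is roughly the shape of my statefulset. A sketch only; the image
name, mount path and storage size below are placeholders rather than my exact config:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: 389-ds
  spec:
    serviceName: 389-ds
    replicas: 3
    selector:
      matchLabels:
        app: 389-ds
    template:
      metadata:
        labels:
          app: 389-ds
      spec:
        containers:
          - name: dirsrv
            image: my-389-ds:latest      # placeholder for my own image
            volumeMounts:
              - name: data
                mountPath: /data         # instance config, db and changelog live here
    # volumeClaimTemplates gives every pod its OWN PersistentVolumeClaim
    # (data-389-ds-0, data-389-ds-1, ...), so nothing is shared and the
    # same volume is re-attached when a pod is replaced.
    volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi

Scaling is then a manual "kubectl scale statefulset 389-ds --replicas=4", plus the
scripting to actually join the new replica.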
Every instance needs its own changelog, and that is related to its replica ID.
If you remove a replica there IS a clean-up process. Remember, 389 is not designed as a
purely stateless app, so you'll need to do some work to manage this.
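To be concrete, the clean-up is the CLEANALLRUV task. As a sketch (instance name,
suffix and replica id here are examples, and worth checking against
"dsconf repl-tasks --help"), an ansible task to purge a retired replica id from the
surviving servers would look like:

  # Run on a remaining supplier after the pod for replica id 3 is gone.
  # Instance name, suffix and id are example values.
  - hosts: surviving_supplier
    tasks:
      - name: Clean the retired replica id out of the RUV
        command: >
          dsconf slapd-localhost repl-tasks cleanallruv
          --suffix "dc=example,dc=com" --replica-id 3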
I've set up each instance to have its own changelog, stored in the persistent volume.
The scenario I had in mind was a container being deleted and recreated, for any reason.
My assumption is that the replacement will take a few minutes, or perhaps hours in the
worst case.
For all practical purposes, this will be like a reboot of a host running a DS instance.
Should I have any checks to see if it's working, or leave it alone and let replication
deal with the delay?
You'll just need to assert they exist statefully - ansible can help here.
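As a sketch (host group, instance name and suffix are examples, not anything you've
told me), a periodic playbook could be as simple as:

  # Assert each instance answers and still has its agreements.
  - hosts: ds_replicas
    tasks:
      - name: Check the instance is up
        command: dsctl slapd-localhost status
        changed_when: false

      - name: List the replication agreements for the suffix
        command: dsconf slapd-localhost repl-agmt list --suffix "dc=example,dc=com"
        register: agmts
        changed_when: false

      - name: Fail if no agreements came back
        assert:
          that: agmts.stdout | length > 0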
Since I'm using persistent volumes, the replication agreements will be in place, if it's
a configured instance. It struck me while writing this reply that a container
replacement, in my case, will be similar to a host reboot, as all the config/data is
available in a persistent volume. In this case, do I need to treat container replacement
differently?
What do you mean by "re-init" here? From another replica?
The answer is ... "it depends". So many things can go wrong. Every instance needs its
own volume, and data is shared via replication.
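If it does come to a re-init from another replica, that's driven from the supplier side
of an existing agreement. A sketch with example names (check "dsconf repl-agmt --help"
for the exact syntax):

  # Push a full re-initialisation over an existing agreement.
  # Instance, suffix and agreement name are example values.
  - hosts: supplier
    tasks:
      - name: Re-initialise the lagging consumer from this supplier
        command: >
          dsconf slapd-localhost repl-agmt init
          --suffix "dc=example,dc=com" "agmt-to-389-ds-2"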
Right now, my effort for containerisation has been to help support running 389 in Atomic
Host or SUSE transactional server. Running in kubernetes "out of the box" is a
stretch goal at the moment, but if you are willing to tackle it, I'd fully help and
support you to upstream some of that work.
Most likely, you'll need to roll your own image, and you'll need to do some work in
dscontainer (our python init tool) to support adding/removing of replicas, configuration
of the replicaid, and the replication passwords.
Since I started this project a while ago, I have been using a base image and installing
389 on top of it, with some modifications, taken from
modifications, taken from
https://github.com/dabelenda/container-389ds/blob/master/Dockerfile, which disable
hostname checks, remove the startup via systemd, etc. I'm using kubernetes secrets for
storing passwords for directory manager, replication manager, etc. For replica id
configuration, as I'm using a statefulset which spins up containers with names like
389-ds-0, 389-ds-1, 389-ds-2, I'm reading the hostname of the container and generating
the replica ID. I haven't yet tried the dscontainer tool, which I see does some of the
things that the linked Dockerfile does, and a lot more too.
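To make that concrete, this is roughly the relevant part of my pod spec. A sketch only,
as the env var names, secret layout and startup script are my own choices, not anything
standard:

  # Derive the replica id from the statefulset ordinal and pull
  # passwords from a secret. All names here are my own conventions.
  containers:
    - name: dirsrv
      image: my-389-ds:latest
      command: ["/bin/sh", "-c"]
      args:
        # 389-ds-0 -> ordinal 0 -> replica id 1, and so on.
        # HOSTNAME is the pod name inside a statefulset pod.
        - |
          export REPLICA_ID=$(( ${HOSTNAME##*-} + 1 ))
          exec /usr/local/bin/start-dirsrv.sh
      env:
        - name: DM_PASSWORD
          valueFrom:
            secretKeyRef:
              name: ds-passwords    # created with kubectl create secret generic
              key: dirman
        - name: REPL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: ds-passwords
              key: replmgr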
At a guess, your POD architecture should be 1 HUB which receives all incoming
replication traffic, and then the HUB dynamically adds/removes agreements to the
consumers, and manages them. The consumers are then behind the haproxy instance that is
part of kube.
manages them. The consumers are then behind the haproxy instance that is part of kube.
Your writeable servers should probably still be outside of this system for the moment :)
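In kube terms that consumer tier is just a Service whose selector matches only the
consumer pods; a sketch (labels and ports are illustrative):

  # Expose only the consumers through kube's load balancing.
  # The hub and any writeable servers stay out of this selector.
  apiVersion: v1
  kind: Service
  metadata:
    name: ds-read
  spec:
    selector:
      app: 389-ds
      role: consumer      # label only the consumer pods with this
    ports:
      - name: ldap
        port: 389
        targetPort: 3389  # non-root instances commonly listen on 3389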
Does that help? I'm really happy to answer any questions, help with planning and
improve our container support upstream with you.
Thanks,
—
Sincerely,
William Brown
Senior Software Engineer, 389 Directory Server
SUSE Labs
Thanks,
Aravind