On Sat, Oct 19, 2019 at 07:38:04PM -0400, Neal Gompa wrote:
> On Sat, Oct 19, 2019 at 7:37 PM Kevin Fenzi <kevin(a)scrye.com> wrote:
> > Greetings communishift group (and infrastructure list).
> > I was working on the communishift cluster trying to fix its failure to
> > upgrade, as well as some cert issues, and managed to munge up the cluster
> > but good. ;( It's a tribute to the resilience of OpenShift that it's still
> > up and serving applications. :)
> > In any event, I think the easiest way to clean things up and get back to
> > normal is for us to just reinstall it. With that in mind, I am planning
> > to do so starting at 21 UTC on 2019-10-21 (Monday).
> > If everyone could oc export any config or data they wish to save before
> > then that would be great.
> > Sorry for the trouble, but hopefully we will be back on track after
> > that.
> Out of curiosity, is there some documentation somewhere of how this
> process is being handled?
Which process? The re-install?
Then, after the install, we run a few things (set up our IdP, storage,
certs, and users).
As you may know, database backups on db-koji01 are currently causing very
heavy load and disrupting our users' builds, so they are disabled for now.
However, not having current backups is not a good thing, IMHO.
So, I am considering the idea of adding a db-koji02 VM (also RHEL 7, running
the same PostgreSQL version as db-koji01), enabling streaming replication
from db-koji01 -> db-koji02, and then, once that's working, running the
database backups on db-koji02.
It turns out this doesn't require that many changes on db-koji01 (a sketch
of the primary-side setup follows this list):
* adding a replication user
* setting three new lines of PostgreSQL config and restarting:
wal_level = 'hot_standby'
max_wal_senders = 10
wal_keep_segments = 100
(we may need to adjust max_wal_senders and wal_keep_segments)
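For illustration, a minimal sketch of what the db-koji01 side could look
like (the replication user name, password handling, data directory path,
and the CIDR for db-koji02 are placeholders, not our actual values):

# create a dedicated replication role
sudo -u postgres psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme';"

# allow db-koji02 to connect for replication (append to pg_hba.conf)
echo "host replication replicator 10.0.0.0/24 md5" | \
  sudo tee -a /var/lib/pgsql/data/pg_hba.conf

# add the three settings above to postgresql.conf
sudo tee -a /var/lib/pgsql/data/postgresql.conf <<'EOF'
wal_level = 'hot_standby'
max_wal_senders = 10
wal_keep_segments = 100
EOF

# restart so the new wal_level takes effect
sudo systemctl restart postgresql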
All the other changes are on db-koji02 (see the sketch after this list):
* create and set up the VM
* run pg_basebackup to pull all the current data from 01
* set up the postgresql.conf and recovery.conf files
* start the server and confirm it keeps up with 01
* run pg_dump there and confirm the standby still keeps up with 01
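And a rough sketch of the db-koji02 side, again with placeholder paths,
user, and database name (assuming the RHEL 7 PostgreSQL 9.x layout where
the standby is driven by recovery.conf):

# pull a base copy of the current data from 01
# (postgres must be stopped here and the data dir empty)
sudo -u postgres pg_basebackup -h db-koji01 -U replicator \
  -D /var/lib/pgsql/data -X stream -P

# let read-only queries (and pg_dump) run on the standby
echo "hot_standby = on" | sudo tee -a /var/lib/pgsql/data/postgresql.conf

# tell the standby how to follow the primary
sudo -u postgres tee /var/lib/pgsql/data/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=db-koji01 user=replicator password=changeme'
EOF

# start it and check that the replication lag stays small
sudo systemctl start postgresql
sudo -u postgres psql -c "SELECT now() - pg_last_xact_replay_timestamp() AS lag;"

# run the backups here instead of on 01
sudo -u postgres pg_dump koji | gzip > /backups/koji-$(date +%F).sql.gz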
This is, of course, a really big change during a freeze to a critical
service, so I'd like to get thoughts from others about it.
Should we wait until after the freeze and do without backups until then?
(Note that we have never had to restore this db from backups in the past,
although we have dumped/restored it to move to newer PostgreSQL versions.)
Is there something else we could do that's easier and would mitigate the issues?
Thoughts? Ideas? Rotten fruit?
inventory/group_vars/bastion | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/inventory/group_vars/bastion b/inventory/group_vars/bastion
index aeacc87e4..8e63d1a11 100644
@@ -23,7 +23,7 @@ custom_rules: [
# TODO - remove modularity-wg membership here once it is not longer needed:
# This is a postfix gateway. This will pick up gateway postfix config in base
This is a freeze break request to enable the new mirrorlist server on
proxy14 as discussed on the mailing list.
I hope my conditionals are correct for the Ansible and Jinja2 files.
If this freeze break request gets accepted, someone needs to run the
playbook against proxy14.
Before running the playbook, proxy14 should be removed from DNS to make
sure that the old mirrorlist containers are correctly stopped and deleted
and that the new mirrorlist containers are correctly running.
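If it helps, the rough shape of that would be something like the following
(the playbook path and the proxy14 FQDN are just my assumptions here):

# 1) drop proxy14 from DNS and wait for the old records to expire

# 2) run the playbook limited to proxy14
ansible-playbook playbooks/groups/proxies.yml --limit proxy14.fedoraproject.org

# 3) confirm the old mirrorlist containers are stopped/deleted and the
#    new ones are running
ssh proxy14.fedoraproject.org 'systemctl list-units "mirrorlist*"'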
Adrian Reber (1):
Enable new mirrorlist server on proxy14
roles/mirrormanager/backend/files/backend.cron | 8 +++++---
.../backend/templates/sync_pkl_to_mirrorlists.sh | 2 +-
roles/mirrormanager/mirrorlist_proxy/tasks/main.yml | 2 +-
.../mirrorlist_proxy/templates/mirrorlist.service.j2 | 4 ++--
4 files changed, 9 insertions(+), 7 deletions(-)
One of the main goals for Fedora 31 Silverblue was to have the core
applications that were removed from the fixed Silverblue image a few
releases ago (because they could be installed as Flatpaks) actually
pre-installed when you install Silverblue. The anaconda feature landed
this cycle, and we have all the applications available as
Fedora-infrastructure-built Flatpaks, so the last missing piece was
getting the ostree-installer config updated appropriately.
We tried to squeeze this in before the final freeze, but there were
some bugs in the configuration and templates that didn't quite work.
Those are all fixed and tested in Rawhide, and we'd like to backport them now.
There are three things that we need:
* Backport fixes to the lorax template:
* Backport fixes to the Pungi config:
* Update Pungi on the F31 compose VM to a version that includes the
patch from https://pagure.io/pungi/pull-request/1278 (see the sketch
below) - this was already done on the VM that runs Rawhide composes.
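For the third item, the check and update on the compose VM would be
roughly the following (exact package versions and the update mechanism
are to be confirmed; this is just a sketch):

rpm -q pungi           # check which pungi build the F31 compose VM has now
sudo dnf update pungi  # pull in a build that contains the pull-request/1278 patch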
Note that Silverblue is *not* a release-blocking deliverable for
Fedora 31. I think the risk to the overall compose is pretty small,
given that everything here is a direct backport from Rawhide, and
caused no problems there.
Greetings communishift group (and infrastructure list).
I was working on the communishift cluster trying to fix its failure to
upgrade, as well as some cert issues, and managed to munge up the cluster
but good. ;( It's a tribute to the resilience of OpenShift that it's still
up and serving applications. :)
In any event, I think the easiest way to clean things up and get back to
normal is for us to just reinstall it. With that in mind, I am planning
to do so starting at 21 UTC on 2019-10-21 (Monday).
If everyone could oc export any config or data they wish to save before
then, that would be great.
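For example, something along these lines should capture most of it
(project names and paths are placeholders, and oc export here is the
3.x-era subcommand):

oc project myproject
oc export all -o yaml > myproject-objects.yaml   # deploymentconfigs, services, routes, ...
oc get pvc -o yaml > myproject-pvcs.yaml
# and copy out any data in your volumes that you care about, e.g.:
oc rsync <some-pod>:/path/to/data ./myproject-data/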
Sorry for the trouble, but hopefully we will be back on track after that.
Good Morning Everyone,
This morning I found out that https://pagure.io/fedora-infrastructure was
not available; it was throwing a 500 error on every page/call.
I checked the logs and found:
GitError: Error performing curl request: (60): Peer certificate cannot be
authenticated with given CA certificates
The combination of "GitError" and an SSL-related error led me to repoSpanner.
So with the help of Patrick, we confirmed that the SSL cert for pagure01 was
expiring on Oct 15th 2019.
We then regenerated that SSL cert.
We thought the repospanner playbook was going to redeploy that cert, so I
ran it, but it did not change anything (neither in its run nor in the
symptoms).
We then found out that this piece is actually part of the pagure.yml
playbook, so I ran it with `-t repospanner/server` to limit its effect.
Then I restarted httpd, stunnel, and repospanner@ansible.service on pagure01.
The first two were likely not necessary; the last one was to get the new
cert in use.
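For the record, the commands were roughly as follows (the cert path and
playbook path here are placeholders from memory):

# check when the repoSpanner cert on pagure01 expires
openssl x509 -in /path/to/repospanner/server.crt -noout -enddate

# re-run only the repoSpanner server pieces of the pagure playbook
ansible-playbook playbooks/groups/pagure.yml -t repospanner/server

# pick up the regenerated cert on pagure01
ssh pagure01.fedoraproject.org \
  'sudo systemctl restart httpd stunnel repospanner@ansible.service'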
So I would like retroactive approval for my actions, since the systems I
touched are frozen.