Hey folks!
I'm wondering whether we have a list of people who requested Fedora Messaging (RabbitMQ) certificates and how to contact them.
We need to refresh the CA cert, so I need to send the new CA cert to all clients so that they can add it to their trusted certs (append it to the file that [tls] ca_cert points to in the config file).
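On the client side, that could look something like this (a sketch; the paths here are hypothetical, use whatever your config actually points at):

  # new-ca.pem is the combined CA file linked at the end of this mail.
  # The destination is whatever file your [tls] ca_cert setting names;
  # /etc/fedora-messaging/cacert.pem is just an illustrative path.
  cat new-ca.pem >> /etc/fedora-messaging/cacert.pem
  # Long-running consumers will likely need a restart to pick it up.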
Most of those certs are used by apps managed in ansible, which are easy to track down, but IIRC there are also CentOS and external applications.
I've tried searching our tracker with little success.
If you are using fedora-messaging in the CentOS infra, please respond here.
If you are using fedora-messaging outside of the Fedora infra, please respond here.
I think these user accounts are "external"; please chime in if you recognize one of yours:
- coreos
- centos-ci
- osci-pipelines
- copr
- copr-be-dev
- alt-src (CentOS Stream)
- centos-integration
- centos-koji
- cbs
- resultsdb-centos
- centos-stream-robosignatory
- distrobuildsync-eln
- odcs-private-queue
- odcs
- openqa
I think these certs aren't used anymore; if that's not the case, please respond here:
- gitlab-centos
- basset
- datagrepper (only datanommer is connected to the bus)
- git-hooks (used by dist-git but it's now "pagure")
- github2fedmsg (retired)
- joystick
- mailman3-fedmsg-plugin (renamed to "mailman")
- mbs-private-queue
- messaging-bridge (retired)
- monitor-gating
- mts
- nuancier (retired)
- releng-tools
- robosign (renamed to "robosignatory")
- sse2fedmsg (retired)
- supybot-fedmsg (replaced by maubot)
- tag2distrepo
- tahrir-api (renamed to "tahrir")
- ursabot (replaced by maubot)
- zanata2fedmsg (retired)
- fedora-messaging-operator
- fedora-search
- fm-orchestrator
- rpminspect
- testing-farm
I've built this list by looking at issued certs that did not have a matching user creation instruction in our ansible repo, so it may be flawed.
It would be great if we had some sort of registry with a contact account or address for each issued cert :-)
Once every client is trusting the new CA, we can switch the server certs to the new ones, and then send out the updated client certs.
The new combined CA file is available at https://infrastructure.fedoraproject.org/infra/rabbitmq-certs/production/ca…
(replace "production" with "staging" for the staging one)
Am I missing something?
Thanks for your attention!
Aurélien
Greetings.
We are now in the infrastructure freeze leading up to the Fedora 42
Final release. This is a final release freeze.
We do this to ensure that our infrastructure is stable and ready to
release Fedora 42 when it's available.
You can see a list of hosts that do not freeze by checking out the
ansible repo and running the freezelist script:
git clone https://infrastructure.fedoraproject.org/infra/ansible.git
ansible/scripts/freezelist -i inventory
Any host listed as 'freezes' is frozen until 2025-04-15 (or later if the
release slips). Frozen hosts should have no changes made to them without
a sign-off on the change from at least two sysadmin-main or rel-eng
members, along with (in most cases) a patch of the exact change to be
made sent to this list and/or a pull request to the infra/ansible repo.
Thanks,
kevin
Hi all,
As part of the initiative to move to forgejo, we are closing off
pagure.io to new projects, which will help with the spam repos issue
as well.
Users are still able to fork existing repositories, but creating
brand-new repos will no longer be possible.
The changes are in the following pull request:
https://pagure.io/fedora-infra/ansible/pull-request/2566
Note too that these changes are in staging, and an email announcing
them has been sent to devel-announce.
cheers,
ryanlerch
Greetings.
We have had several applications crashing (resultsdb,
resultsdb_ci_listener) or being slow (bodhi) of late.
I did some digging today and discovered that db01 is pretty saturated on
I/O. This means all the apps that use db01 are fighting for I/O and
returning things slower than they should.
On looking further, it turned out that mailman was using the vast
majority of the I/O. I of course thought at first that it was crawlers,
but it is not; it seems to be the bounce processor.
This processor wakes up every few minutes and queries the bounceevent
table for any bounces with processed = false.
If it finds any, it processes them.
However, that table is now 50 GB and contains 152,167,015 rows
(pretty much all of them with processed = true).
From the logs (which record slow queries), an example:
2025-04-08 21:32:40.510 GMT [7073] LOG: duration: 267423.928 ms plan:
  Query Text: SELECT bounceevent.id AS bounceevent_id,
      bounceevent.list_id AS bounceevent_list_id,
      bounceevent.email AS bounceevent_email,
      bounceevent.timestamp AS bounceevent_timestamp,
      bounceevent.message_id AS bounceevent_message_id,
      bounceevent.context AS bounceevent_context,
      bounceevent.processed AS bounceevent_processed
  FROM bounceevent
  WHERE bounceevent.processed = false
  Gather (cost=1000.00..7441540.83 rows=1 width=137)
    Workers Planned: 2
    -> Parallel Seq Scan on bounceevent (cost=0.00..7440540.73 rows=1 width=137)
         Filter: (NOT processed)
Yes, that's 267 seconds to process that query, all the while hammering
I/O because the table is too large to cache well.
This all pointed me to this 7-year-old bug report:
https://gitlab.com/mailman/mailman/-/issues/343
Hopefully abompard finds it a fun blast from the past. :)
Anyhow, a quick fix I think would be (a concrete sketch follows below):
* Save a copy of the latest database dump, which should have that table
backed up.
* Run 'truncate bounceevent' to wipe the table.
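Concretely, something like this (a sketch; I'm assuming the database is
named 'mailman' and lives on db01, adjust to the real names):

  # Keep our own copy of the table first (file name is illustrative).
  pg_dump -h db01 -t bounceevent mailman > bounceevent-backup-$(date +%F).sql
  # Then empty it; TRUNCATE is fast and frees the space immediately.
  psql -h db01 mailman -c 'TRUNCATE bounceevent;'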
Thoughts? +1s? counter proposals?
I'd like to do this so the other db01 users stop having problems.
kevin
Hi everyone,
Here's a quick overview of some particular dates of interest in the
F42 release cycle, which can also be found in the F42 release
schedule[1]:
Now - we are in Final Freeze
April 10 - F42 Final Go/No-Go meeting[2]
April 15 - Current (early) final target release date
April 24 - Elections Nominations open for two Council seats, four FESCo
seats and four Mindshare Committee seats
May 13 - F40 is EOL
May 19 - Elections Voting opens
May 30 - Elections Voting closes & results follow
For those of you already looking ahead to F43, please take note of the
following dates, which can also be found in the F43 release
schedule[3]:
June 25 - Changes requiring infrastructure changes submission deadline
July 1 - System Wide changes and those requiring mass rebuild
submission deadline
July 22 - Self Contained changes submission deadline
July 23 - Mass rebuild
August 12 - F43 branching
Please keep an eye out for updates to our F42 Go/No-Go meeting event.
Currently it is scheduled for Thursday, April 10 @ 1700 UTC in the
#fedora-meeting room on Matrix. A reminder or a rescheduling email
will be sent on Wednesday, April 9 in advance of this meeting, pending
the availability of a suitable release candidate.
Kindest regards,
Aoife
[1] https://fedorapeople.org/groups/schedule/f-42/f-42-key-tasks.html
[2] https://calendar.fedoraproject.org/meeting/11013/
[3] https://fedorapeople.org/groups/schedule/f-43/f-43-key-tasks.html
--
Aoife Moloney
Fedora Operations Architect
Fedora Project
Matrix: @amoloney:fedora.im
IRC: amoloney
The EPEL Steering Committee has realized that the original plan for
EPEL 10 repos will cause some upgrade path problems. This is
explained in further detail in this issue:
https://pagure.io/epel/issue/324
To address this, we'd like to make some changes to the EPEL 10
portions of the new-updates-sync script and the mirrormanager scanner
repo mappings. This is set up already in this pull request:
https://pagure.io/fedora-infra/ansible/pull-request/2557
I feel like these changes are relatively safe for a freeze break and
shouldn't impact Fedora in any way. In the unlikely event this
doesn't go as planned, we can roll it back by reverting the pull
request.
Can I get some +1's for this plan?
--
Carl George
We need to once again carry a patch for the koji hubs that I thought we
no longer needed.
The patch is:
diff --color -Nur koji-1.34.1.orig/plugins/hub/kiwi.py koji-1.34.1/plugins/hub/kiwi.py
--- koji-1.34.1.orig/plugins/hub/kiwi.py	2024-09-12 13:52:57.631212839 -0700
+++ koji-1.34.1/plugins/hub/kiwi.py	2024-09-12 13:54:15.472971281 -0700
@@ -17,7 +17,7 @@
 @export
 def kiwiBuild(target, arches, desc_url, desc_path, optional_arches=None, profile=None,
               scratch=False, priority=None, make_prep=False, repos=None, release=None,
-              type=None, type_attr=None, result_bundle_name_format=None, use_buildroot_repo=True,
+              type=None, type_attr=None, result_bundle_name_format=None, use_buildroot_repo=False,
               version=None, repo_releasever=None):
     context.session.assertPerm('image')
     for i in [desc_url, desc_path, profile, version, release, repo_releasever]:
Basically, it sets the use_buildroot_repo default to False.
We don't want to use the buildroot repo when making images/deliverables,
because the packages there are not signed. We want to pull the packages
from the compose repo, where they are signed.
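To make the effect concrete: after this change, a hub call that omits
use_buildroot_repo behaves as if use_buildroot_repo=False had been
passed. Roughly like this, via the generic 'koji call' CLI (a sketch
only; the target/arches/URL values are made up and I'm writing the
syntax from memory, so double-check it before running):

  # kiwiBuild requires the 'image' permission, so authenticate first.
  # Omitting use_buildroot_repo now means the image build pulls its
  # packages from the signed compose repo, not the buildroot repo.
  koji call --python kiwiBuild "'f42-candidate'" "['x86_64']" \
      "'https://example.com/kiwi-descriptions.git'" "'fedora.kiwi'"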
Longer term, it would be nice to adjust this upstream so we don't have
to keep switching the default. :(
The build with this patch is:
https://koji.fedoraproject.org/koji/taskinfo?taskID=130991535
I would like to update koji01/02 with this and restart httpd there.
There are no changes for builders.
+1s?
kevin