Greetings.
We are now in the infrastructure freeze leading up to the Fedora 43
Final release. This is a final release freeze.
We do this to ensure that our infrastructure is stable and ready to
release Fedora 43 when it's available.
You can see a list of hosts that do not freeze by checking out the
ansible repo and running the freezelist script:
git clone https://infrastructure.fedoraproject.org/infra/ansible.git
ansible/scripts/freezelist -i inventory
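If you only want the frozen hosts, something like this should work once
the repo is cloned (a sketch; it assumes the script prints one host per
line marked 'freezes' or 'doesn't freeze'):
cd ansible
# keep only the hosts marked as frozen; adjust the grep if the wording differs
scripts/freezelist -i inventory | grep -w freezes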
Any host listed as 'freezes' is frozen until 2025-10-22 (or later if the
release slips). Frozen hosts should have no changes made to them without
a sign-off on the change from at least 2 sysadmin-main or rel-eng
members, along with (in most cases) a patch of the exact change sent to
this list and/or a pull request to the infra/ansible repo.
Note that there is an outstanding issue with kojipkgs throwing 503s
that will likely need a freeze break to fix once we can figure out what
is going on.
Thanks,
kevin
Dear all,
You are kindly invited to the meeting:
F43 Final Go/No-Go Meeting on 2025-10-23 from 17:00:00 to 20:00:00 UTC
The meeting will be about:
Please join us for the Fedora Linux Final Go/No-Go meeting in #fedora-meeting on matrix @ 1700 UTC.
In this meeting, we will determine the status of the release candidate for F43.
For more information on this meeting, please visit https://fedoraproject.org/wiki/Go_No_Go_Meeting
Source: https://calendar.fedoraproject.org//meeting/11131/
I sent a PR today to make pungi-fedora CI actually tell you what went
wrong when it fails:
https://pagure.io/pungi-fedora/pull-request/1560
The same PR is merged on all other branches (thanks sgallagh). It'd be
nice to merge it on F43 too. It's not *critical*, but it would make
the (bogus) reason for the CI failure more visible on the other PR I've
submitted (and will send an FBR for).
--
Adam Williamson (he/him/his)
Fedora QA
Fedora Chat: @adamwill:fedora.im | Mastodon: @adamw@fosstodon.org
https://www.happyassassin.net
PR is https://pagure.io/pungi-fedora/pull-request/1558
As of
https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2025-7a727109d7
and a couple of other updates, the 'current' builds of almost all the
flatpak apps that are embedded into the Silverblue and Kinoite images
at build time have been updated to builds that use f43 runtimes. There
are a couple of laggards for Kinoite in
https://bodhi.fedoraproject.org/updates/FEDORA-FLATPAK-2025-2c064c473d
.
This means we need to update the pungi config to embed the f43 runtimes
in the installer images, rather than the f42 runtimes, which is what
the PR does. Until we do that, installs of Silverblue and Kinoite will
fail like this:
https://openqa.fedoraproject.org/tests/3878770#step/_do_install_and_reboot/…
The error from the logs is:
pyanaconda.modules.common.errors.installation.PayloadInstallationError: Failed to install flatpaks: flatpak-error-quark: The application org.gnome.Logs/x86_64/stable requires the runtime org.fedoraproject.Platform/x86_64/f43 which was not found (8)
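For reference, one way to check which runtime a given app build pulls in
(a sketch, assuming the standard 'fedora' flatpak remote is configured
wherever you run it):
# show the runtime the published org.gnome.Logs build requires
flatpak remote-info fedora org.gnome.Logs | grep -i '^ *runtime:'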
Once we merge the PR, future Silverblue images should be fixed. Kinoite
images will still be broken till the kolourpaint/kmahjongg update goes
stable (so we should get that pushed ASAP).
--
Adam Williamson (he/him/his)
Fedora QA
Fedora Chat: @adamwill:fedora.im | Mastodon: @adamw@fosstodon.org
https://www.happyassassin.net
Hey everyone. I'm still trying to find a solution to the timeout issue from our
rdu3 proxies to services on the build vlan (
https://pagure.io/fedora-infrastructure/issue/12814 )
As part of that, I'd like to:
* disable proxy10 in dns. This will result in all external
src/koji/kojipkgs traffic going via proxy01 (internal/builds will still
use proxy101/proxy110). A quick check for this is sketched after this list.
* wait a bit and confirm that proxy01 can handle the load by itself
* reinstall proxy10 with fedora 43. This will take an hour or so
with all the ansible templates/syncing.
* re-add it in dns and see if the timeout issue persists
* if it does, repeat installing proxy10 with fedora 41
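As a quick check for the dns step, the affected names can be queried
afterwards to confirm proxy10's address is no longer being returned (a
sketch; adjust the record names as needed):
dig +short kojipkgs.fedoraproject.org
dig +short src.fedoraproject.org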
If that doesn't help, the next step I'd like to try would be to
reinstall a vmhost there with rhel10. I think the least disruptive way
to do that would be to move batcave01 to vmhost-x86-01 and then
reinstall vmhost-x86-05. That would need a short outage for:
vmhost-x86-05.rdu3.fedoraproject.org:dl05.rdu3.fedoraproject.org:running:1
vmhost-x86-05.rdu3.fedoraproject.org:mailman01.rdu3.fedoraproject.org:running:1
vmhost-x86-05.rdu3.fedoraproject.org:ocp02.ocp.rdu3.fedoraproject.org:running:1
vmhost-x86-05.rdu3.fedoraproject.org:proxy10.rdu3.fedoraproject.org:running:1
vmhost-x86-05.rdu3.fedoraproject.org:tang02.rdu3.fedoraproject.org:running:1
vmhost-x86-05.rdu3.fedoraproject.org:wiki02.rdu3.fedoraproject.org:running:1
vmhost-x86-05.rdu3.fedoraproject.org:zabbix01.rdu3.fedoraproject.org:running:1
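For reference, the guest list above can be double-checked right before
the outage with something like this (a sketch; assumes ssh/libvirt
access to the vmhost):
ssh vmhost-x86-05.rdu3.fedoraproject.org sudo virsh list --all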
Let me know what you all think
kevin
Dear all,
You are kindly invited to the meeting:
F43 Final Go/No-Go Meeting on 2025-10-16 from 17:00:00 to 20:00:00 UTC
The meeting will be about:
Please join us for the Fedora Linux Final Go/No-Go meeting in #fedora-meeting on matrix @ 1700 UTC.
In this meeting, we will determine the status of the release candidate for F43.
For more information on this meeting, please visit https://fedoraproject.org/wiki/Go_No_Go_Meeting
Source: https://calendar.fedoraproject.org//meeting/11131/
Hello everyone.
My name is Jiri, I'm a Red Hat employee from Brno.
My matrix handle is jpodivin.
I've been working on Log Detective[1] for some time now, and I figured
I should be directly involved in the infrastructure part of things as well.
As of now, the publicly accessible instance of Log Detective, which is
available through Copr and the website, is running on
logdetective01.fedorainfracloud.org.
It has been working relatively well for a while. However, the recent
firewall changes, specifically the move to nftables, have led to some
issues. That's why I would like to merge my changes[2] to our ansible
role for Log Detective and execute them from the batcave.
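Concretely, that would be the usual playbook run from batcave, roughly
like this (the playbook path below is a guess on my part; the real one
is whatever playbook applies the Log Detective role):
sudo rbac-playbook hosts/logdetective01.fedorainfracloud.org.yml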
Regards,
[1] https://www.logdetective.com/
[2] https://pagure.io/fedora-infra/ansible/pull-request/2896#
--
Jiri Podivin
Senior Software Engineer, Openstack
Red Hat Czech, s.r.o. <https://www.redhat.com/>
Purkyňova 3080/97b
61200 Brno, Czech Republic
jpodivin(a)redhat.com M: +420739108412
IRC/slack: jpodivin
<https://red.ht/sig>
Hey folks.
I'd like to deploy:
https://pagure.io/fedora-infra/ansible/pull-request/2897
This sets up things to try and install and configure 3 power9
boxes for copr hypervisors in rdu3.
We now have an isolated vlan there and hopefully I can provision them
and we can get them up and in service.
This needs a freeze break because it touches noc01 (dhcp).
I don't think there's much risk here.
kevin
Hey everyone.
As you may know, our connection timeouts to kojipkgs are back
( https://pagure.io/fedora-infrastructure/issue/12814 )
I have been unable to find a fix yet, but I have a few things I would
like to try:
1. I'd like to try adding:
retries 5
retry-on all-retryable-errors
option redispatch 1
to the kojipkgs backend in haproxy.
This will not fix anything, but it should mean that when a connection
times out, the request gets redispatched to the other server and has a
chance of being served properly instead of returning a 503. i.e., a
bandaid over the problem while we try to track it down.
patch:
diff --git a/roles/haproxy/templates/haproxy.cfg b/roles/haproxy/templates/haproxy.cfg
index c311c0f9d8..f2ba4654e7 100644
--- a/roles/haproxy/templates/haproxy.cfg
+++ b/roles/haproxy/templates/haproxy.cfg
@@ -277,6 +277,9 @@ backend kojipkgs-backend
server kojipkgs01.{{ datacenter }}.fedoraproject.org kojipkgs01.{{ datacenter }}.fedoraproject.org:80 check inter 30s rise 1 fall 3
server kojipkgs02.{{ datacenter }}.fedoraproject.org kojipkgs02.{{ datacenter }}.fedoraproject.org:80 check inter 30s rise 1 fall 3
option httpchk GET /
+ retries 5
+ retry-on all-retryable-errors
+ option redispatch 1
{% endif %}
{% if datacenter == "rdu3" %}
2. I would like to try and take varnish out of the path to see if it's
related to the problem. To do this on kojipkgs01:
- Take kojipkgs01 out of haproxy so it gets no requests
- stop varnish and httpd on it
- reset httpd to listen on port 80 instead of 8080
- confirm it's working
- re-enable in haproxy to get traffic.
If the problem persists, we know it's not varnish related.
If it doesn't, we know to focus on varnish.
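In shell terms, a rough sketch of those steps on kojipkgs01 (the config
path and unit names here are assumptions; the haproxy drain/re-enable
happens on the proxies):
# stop the cache layer and the backend httpd
systemctl stop varnish httpd
# have httpd listen on port 80 directly instead of behind varnish on 8080
# (assumes the Listen directive lives in the main httpd.conf on this host)
sed -i 's/^Listen 8080$/Listen 80/' /etc/httpd/conf/httpd.conf
systemctl start httpd
# confirm it answers before re-enabling it in haproxy
curl -sI http://localhost/ | head -n 1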
I may have other things to try as I think of them.
kevin