Freeze Break Request: s3-mirror adjustments
by Kevin Fenzi
First, I noticed we are running the full sync twice right now, at the
same time:
[root@mm-backend01 cron.d][PROD]# cat /etc/cron.d/s3.sh
#Ansible: s3sync
0 0,11 * * * s3-mirror /usr/local/bin/lock-wrapper s3sync /usr/local/bin/s3.sh 2>&1 | /usr/local/bin/nag-once s3.sh 1d 2>&1
#Ansible: s3sync-main
0 0 * * * s3-mirror /usr/local/bin/lock-wrapper s3sync-main /usr/local/bin/s3.sh 2>&1 | /usr/local/bin/nag-once s3.sh 1d 2>&1
Second, the attached patch changes the sync scripts to:
* do one sync with no --delete and excluding repodata
* do another one with --delete and including repodata
* invalidate the repodata
I adjusted the cron jobs to handle the repodata invalidate (I think).
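Roughly, the new flow inside the sync script looks like this (a sketch only, using the aws CLI with hypothetical bucket and CloudFront distribution names; the real script may use a different tool, and the exact commands are in the attached patch):

SRC=/pub/fedora/linux/
BUCKET=s3://fedora-mirror-example/pub/fedora/linux/   # hypothetical bucket name
DIST=EXAMPLEDISTID                                    # hypothetical cloudfront distribution id

# pass 1: push new content, but skip repodata and don't delete anything yet
aws s3 sync "$SRC" "$BUCKET" --exclude '*/repodata/*'

# pass 2: push repodata and clean up files that no longer exist locally
aws s3 sync "$SRC" "$BUCKET" --delete

# finally, invalidate cached content so cloudfront serves the fresh repodata
# (the real script presumably targets just the repodata paths that changed)
aws cloudfront create-invalidation --distribution-id "$DIST" --paths '/pub/fedora/linux/*'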
TODO: only sync when things have changed.
+1s?
kevin
Prioritization of tickets and infra work
by Clement Verna
Hi all,
Last week during our weekly meeting we discussed trying to better
prioritize our work and tickets. In order to do that, we are going to try to
use a "yummy vs trouble" [1] index and a prioritization matrix [0] to
order our work.
Yummy represents the added value or benefit of a task, and Trouble
represents how much effort it would take to complete it. Each property is
rated either small, medium, or large.
Starting this week, I'll send an email with a list of 5 tickets from our
backlog [1], asking for opinions about each ticket's yummy and trouble
level. We can use this weekly email to ask questions or provide more
context. We will then update these tickets with the outcome of the
discussion; in case of strong disagreement we can use our weekly IRC
meeting to make a decision.
This will hopefully help us focus on items that provide high value to
our community, and also give everyone a way to participate in this
prioritization.
[0] - https://www.process.st/prioritization-matrix/
[1] -
https://pagure.io/fedora-infrastructure/issues?status=Open&tags=backlog
CPE Weekly: 2020-03-06
by Aoife Moloney
---
title: CPE Weekly status email
tags: CPE Weekly, email
---
# CPE Weekly: 2020-03-06
Background:
The Community Platform Engineering group is the Red Hat team combining
IT and release engineering from Fedora and CentOS. Our goal is to keep
core servers and services running and maintained, to build releases, and
to take on other strategic tasks that need more dedicated time than
volunteers can give.
For better communication, we will be giving weekly reports to the
CentOS and Fedora communities about the general tasks and work being
done. Also, for better communication between our groups we have
created #redhat-cpe on Freenode IRC! Please feel free to catch us
there; a mail with more context has landed on both the CentOS and
Fedora devel lists.
## Fedora Updates
* Fedora Minimal Compose is being worked on currently for F32 beta
### Data Centre Move
* Please start to plan for a 2-3 week outage of communishift starting
2020-04-12 to allow for the move
* Due to the data centre move, we cannot get a new box to run odcs-backend
https://pagure.io/fedora-infrastructure/issue/8721
* We are also scoping the work required for 'Minimum Viable Fedora' -
here is the link to the mail as a refresher of what to expect, and
what not https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedora...
Our move dates are as follows:
* Move 1 April 13 to IAD2 (essential hardware)
* Move 2 April 13 to RDU-CC (communishift)
* Move 3 May 11 to IAD2 (QA equipment)
* Move 4 June 1 to IAD2 (anything and everything else)
### AAA Replacement
* Sprint 5 will see the team focusing on integration with the FASJSON
API and 2FA tokens
* We have also decided to postpone testing of the new solution until
after the data centre move
* As always, check out our progress on github here
https://github.com/orgs/fedora-infra/projects/6
### CI/CD
* Monitor-Gating: Blocked in staging because F32 isn’t branched off
there yet (Koji, Bodhi, PDC, https://pagure.io/releng/issue/9293)
* Automatic Release Tags and Changelog: Ongoing Devel-list thread here
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.o...
* The team has moved to Jitsi video conferencing for discussions with
interested community members (ngompa, clime, mboddu, mhroncok) about the
different approaches to automatic release tags. Result: the approach
considering existing EVRs was the most palatable (over <#commits>.<#builds>).
* We now have a more official looking repo:
https://pagure.io/Fedora-Infra/rpmautospec
* The team also created implementation details and roadmap:
https://hackmd.io/2iQUWeLdR1uTSJ6WL2JqBA
### Sustaining Team
* Old cloud is now officially retired
* Bodhi XSS vulnerability patched
* The team are also looking to prepare a Bodhi 5.2 release
* Fedora Minimal Compose (Use ODCS to trigger test composes)
* The team are also scoping the Mbbox upgrade and Task breakdown
https://github.com/fedora-infra/mbbox/projects/1
* Supporting community members helping with the Badges outage
* The team has started a conversation about infra ticket prioritization
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedora...
## Docs
### Misc Updates/Review Requests
* Initial f32-updates-testing push is fixed
https://github.com/fedora-infra/bodhi/issues/3936
* Fixed perms on f32 ostree to finish updates pushes
* Anitya tests are fixed
https://github.com/fedora-infra/anitya/commit/8fe224dbc6b8071f5c5e6f11abe...
* Failing tests in the-new-hotness being resolved
https://github.com/fedora-infra/the-new-hotness/pull/273
* Please review Packit integration in the-new-hotness
https://github.com/packit-service/packit/issues/689
* Please review KeepassXC flatpak issue
https://pagure.io/flatpak-module-tools/issue/6
* Please review the jms-messaging-plugin pull request
https://github.com/jenkinsci/jms-messaging-plugin/pull/162
## CentOS Updates
### CentOS
* ppc64le kickstart added for CentOS 8
* This will still need to be tested and added to production
* The infrastructure is stable overall though!
### CentOS Stream
* We are now working on the sync-to-git process
* Tycho module was successfully added to Stream in development!
* We are also getting closer to having a contributor workflow model
available for later this year - watch this space!
* We are also working with upstream to generate reports for Stream too
As always, feedback is welcome, and we will continue to look at ways
to improve the delivery and readability of this weekly report.
Have a great weekend!
Aoife
Source: https://hackmd.io/8iV7PilARSG68Tqv8CzKOQ
--
Aoife Moloney
Product Owner
Community Platform Engineering Team
Red Hat EMEA
Communications House
Cork Road
Waterford
Revamping the Release Readiness meeting
by Ben Cotton
(Posting to many mailing lists for visibility. I apologize if you see
this more times than you'd like.)
You may have already seen my Community Blog post[1] about changing the
Release Readiness meeting process. The meeting has questionable value
in its current state, so I want to make it more useful. We'll do this
by having teams self-report readiness issues on a dedicated wiki
page[2] beginning now. This gives the community time to chip in on
areas that need help without waiting until days before the release.
I invite teams to identify a representative to keep the wiki page up
to date. Update it as your status changes and I'll post help requests
in my weekly CommBlog posts[3] and the FPgM office hours[4] IRC
meeting. The Release Readiness meeting will be shortened to one hour
and will review open concerns instead of polling for teams that may or
may not be there. We will use the logistics mailing list[5] to discuss
issues and make announcements, so I encourage representatives to join
this list.
[1] https://communityblog.fedoraproject.org/fedora-program-update-2020-08/
[2] https://fedoraproject.org/wiki/Release_Readiness
[3] https://communityblog.fedoraproject.org/category/program-management/
[4] https://apps.fedoraproject.org/calendar/council/#m9570
--
Ben Cotton
He / Him / His
Senior Program Manager, Fedora & CentOS Stream
Red Hat
TZ=America/Indiana/Indianapolis
[PATCH] s3-mirror: Run crons to sync s3 mirror a lot more often
by Kevin Fenzi
We have been getting complaints from copr users that they are hitting
out of date cloudfront cached data when they are doing builds.
We are not syncing all that often currently, and sometimes if an updates
push or rawhide compose finishes after the sync time it could be a long
while before it gets picked up. So, since most of these jobs finish in
5-10min when there is nothing to sync, just have them all run every 15min
or so. If this starts hitting locking too much we can spread them back out
once we get a sense of when they are hitting that.
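For clarity, the first hunk below would render an /etc/cron.d file on mm-backend01 roughly like this (a reconstruction from the diff, not copied from the host; the {{ FedoraCycleNumber }} template is expanded by ansible at deploy time):

# /etc/cron.d/s3-updates-current.sh
#Ansible: s3sync-updates-current
2,17,32,47 * * * * s3-mirror /usr/local/bin/lock-wrapper s3sync-updates-current "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/updates/{{ FedoraCycleNumber }}/" 2>&1 | /usr/local/bin/nag-once s3-updates-current.sh 1d 2>&1

i.e. the job fires every 15 minutes, with each repo offset by a minute or two so they don't all start at once.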
Additionally, we should just set them up to only sync when their particular
thing has finished. This would make it a lot more sane, but require a
redesign/rewrite.
Amended to use the old values for test releases.
Signed-off-by: Kevin Fenzi <kevin(a)scrye.com>
---
roles/s3-mirror/tasks/main.yml | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/roles/s3-mirror/tasks/main.yml b/roles/s3-mirror/tasks/main.yml
index 12351cb..075785e 100644
--- a/roles/s3-mirror/tasks/main.yml
+++ b/roles/s3-mirror/tasks/main.yml
@@ -68,7 +68,7 @@
- s3-mirror
- name: s3sync cron - updates for current
- cron: name="s3sync-updates-current" minute="0" hour="3,9,15,21" user="s3-mirror"
+ cron: name="s3sync-updates-current" minute="2,17,32,47" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-updates-current "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/updates/{{ FedoraCycleNumber|int }}/" 2>&1 | /usr/local/bin/nag-once s3-updates-current.sh 1d 2>&1'
cron_file=s3-updates-current.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -76,7 +76,7 @@
- s3-mirror
- name: s3sync cron - updates for development/current+1 x86_64
- cron: name="s3sync-updates-current" minute="0" hour="2,7,10" user="s3-mirror"
+ cron: name="s3sync-updates-current" minute="3,18,33,48" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-updates-dev-cur-plus-1-x86_64 "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/development/{{ FedoraCycleNumber|int + 1 }}/Everything/x86_64/os/" 2>&1 | /usr/local/bin/nag-once s3-updates-dev-cur-plus-1-x86_64.sh 1d 2>&1'
cron_file=s3-updates-dev-cur-plus-1-x86_64.sh
disabled={{not FedoraBranched|bool}}
@@ -85,7 +85,7 @@
- s3-mirror
- name: s3sync cron - updates for development/current+1 aarch64
- cron: name="s3sync-updates-current" minute="0" hour="4,11,18" user="s3-mirror"
+ cron: name="s3sync-updates-current" minute="4,19,34,49" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-updates-dev-cur-plus-1-aarch64 "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/development/{{ FedoraCycleNumber|int + 1 }}/Everything/aarch64/os/" 2>&1 | /usr/local/bin/nag-once s3-updates-dev-cur-plus-1-aarch64.sh 1d 2>&1'
cron_file=s3-updates-dev-cur-plus-1-aarch64.sh
disabled={{not FedoraBranched|bool}}
@@ -94,7 +94,7 @@
- s3-mirror
- name: s3sync cron - updates for current-1
- cron: name="s3sync-updates-previous" minute="30" hour="0,6,12,18" user="s3-mirror"
+ cron: name="s3sync-updates-previous" minute="5,20,35,50" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-updates-previous "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/updates/{{ FedoraCycleNumber|int - 1 }}/" 2>&1 | /usr/local/bin/nag-once s3-updates-previous.sh 1d 2>&1'
cron_file=s3-updates-previous.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -102,7 +102,7 @@
- s3-mirror
- name: s3sync cron - epel 7 x86_64
- cron: name="s3sync-epel7-x86_64" minute="10" hour="2,5,8,11,14,17,20,23" user="s3-mirror"
+ cron: name="s3sync-epel7-x86_64" minute="6,21,36,51" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel7-x86_64 "/usr/local/bin/s3-sync-path.sh /pub/epel/7/x86_64/" 2>&1 | /usr/local/bin/nag-once s3-epel7-x86_64.sh 1d 2>&1'
cron_file=s3-epel7-x86_64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -110,7 +110,7 @@
- s3-mirror
- name: s3sync cron - epel 7 aarch64
- cron: name="s3sync-epel7-aarch64" minute="20" hour="4,7,10,13,16,19,22" user="s3-mirror"
+ cron: name="s3sync-epel7-aarch64" minute="7,22,37,52" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel7-aarch64 "/usr/local/bin/s3-sync-path.sh /pub/epel/7/aarch64/" 2>&1 | /usr/local/bin/nag-once s3-epel7-aarch64.sh 1d 2>&1'
cron_file=s3-epel7-aarch64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -118,7 +118,7 @@
- s3-mirror
- name: s3sync cron - epel 8 Everything x86_64
- cron: name="s3sync-epel8-everything-x86_64" minute="43" hour="3,6,9,12,15,17,20,23" user="s3-mirror"
+ cron: name="s3sync-epel8-everything-x86_64" minute="8,23,38,53" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel8-everything-x86_64 "/usr/local/bin/s3-sync-path.sh /pub/epel/8/Everything/x86_64/" 2>&1 | /usr/local/bin/nag-once s3-epel8-everything-x86_64.sh 1d 2>&1'
cron_file=s3-epel8-everything-x86_64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -126,7 +126,7 @@
- s3-mirror
- name: s3sync cron - epel 8 Everything aarch64
- cron: name="s3sync-epel8-everything-aarch64" minute="38" hour="4,7,10,13,16,19,22" user="s3-mirror"
+ cron: name="s3sync-epel8-everything-aarch64" minute="9,24,39,54" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel8-everything-aarch64 "/usr/local/bin/s3-sync-path.sh /pub/epel/8/Everything/aarch64/" 2>&1 | /usr/local/bin/nag-once s3-epel8-everything-aarch64.sh 1d 2>&1'
cron_file=s3-epel8-everything-aarch64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -134,7 +134,7 @@
- s3-mirror
- name: s3sync cron - epel 8 Modular x86_64
- cron: name="s3sync-epel8-modular-x86_64" minute="32" hour="3,6,9,12,15,17,20,23" user="s3-mirror"
+ cron: name="s3sync-epel8-modular-x86_64" minute="10,25,40,55" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel8-modular-x86_64 "/usr/local/bin/s3-sync-path.sh /pub/epel/8/Modular/x86_64/" 2>&1 | /usr/local/bin/nag-once s3-epel8-modular-x86_64.sh 1d 2>&1'
cron_file=s3-epel8-modular-x86_64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -142,7 +142,7 @@
- s3-mirror
- name: s3sync cron - epel 8 Modular aarch64
- cron: name="s3sync-epel8-modular-aarch64" minute="27" hour="4,7,10,13,16,19,22" user="s3-mirror"
+ cron: name="s3sync-epel8-modular-aarch64" minute="11,26,41,56" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel8-modular-aarch64 "/usr/local/bin/s3-sync-path.sh /pub/epel/8/Modular/aarch64/" 2>&1 | /usr/local/bin/nag-once s3-epel8-modular-aarch64.sh 1d 2>&1'
cron_file=s3-epel8-modular-aarch64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
--
1.8.3.1
[PATCH] s3-mirror: Run crons to sync s3 mirror a lot more often
by Kevin Fenzi
We have been getting complaints from copr users that they are hitting
out of date cloudfront cached data when they are doing builds.
We are not syncing all that often currently, and sometimes if an updates
push or rawhide compose finishes after the sync time it could be a long
while before it gets picked up. So, since most of these jobs finish in
5-10min when there is nothing to sync, just have them all run every 15min
or so. If this starts hitting locking too much we can spread them back out
once we get a sense of when they are hitting that.
Additionally, we should just set them up to only sync when their particular
thing has finished. This would make it a lot more sane, but require a
redesign/rewrite.
Signed-off-by: Kevin Fenzi <kevin(a)scrye.com>
---
roles/s3-mirror/tasks/main.yml | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/roles/s3-mirror/tasks/main.yml b/roles/s3-mirror/tasks/main.yml
index 12351cb..6916d01 100644
--- a/roles/s3-mirror/tasks/main.yml
+++ b/roles/s3-mirror/tasks/main.yml
@@ -60,7 +60,7 @@
- s3-mirror
- name: s3sync cron - test releases
- cron: name="s3sync-updates-current" minute="40" hour="5,9,13,19" user="s3-mirror"
+ cron: name="s3sync-updates-current" minute="1,16,31,46" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-test-releases "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/releases/test/" 2>&1 | /usr/local/bin/nag-once s3-test-releases.sh 1d 2>&1'
cron_file=s3-test-releases.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -68,7 +68,7 @@
- s3-mirror
- name: s3sync cron - updates for current
- cron: name="s3sync-updates-current" minute="0" hour="3,9,15,21" user="s3-mirror"
+ cron: name="s3sync-updates-current" minute="2,17,32,47" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-updates-current "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/updates/{{ FedoraCycleNumber|int }}/" 2>&1 | /usr/local/bin/nag-once s3-updates-current.sh 1d 2>&1'
cron_file=s3-updates-current.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -76,7 +76,7 @@
- s3-mirror
- name: s3sync cron - updates for development/current+1 x86_64
- cron: name="s3sync-updates-current" minute="0" hour="2,7,10" user="s3-mirror"
+ cron: name="s3sync-updates-current" minute="3,18,33,48" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-updates-dev-cur-plus-1-x86_64 "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/development/{{ FedoraCycleNumber|int + 1 }}/Everything/x86_64/os/" 2>&1 | /usr/local/bin/nag-once s3-updates-dev-cur-plus-1-x86_64.sh 1d 2>&1'
cron_file=s3-updates-dev-cur-plus-1-x86_64.sh
disabled={{not FedoraBranched|bool}}
@@ -85,7 +85,7 @@
- s3-mirror
- name: s3sync cron - updates for development/current+1 aarch64
- cron: name="s3sync-updates-current" minute="0" hour="4,11,18" user="s3-mirror"
+ cron: name="s3sync-updates-current" minute="4,19,34,49" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-updates-dev-cur-plus-1-aarch64 "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/development/{{ FedoraCycleNumber|int + 1 }}/Everything/aarch64/os/" 2>&1 | /usr/local/bin/nag-once s3-updates-dev-cur-plus-1-aarch64.sh 1d 2>&1'
cron_file=s3-updates-dev-cur-plus-1-aarch64.sh
disabled={{not FedoraBranched|bool}}
@@ -94,7 +94,7 @@
- s3-mirror
- name: s3sync cron - updates for current-1
- cron: name="s3sync-updates-previous" minute="30" hour="0,6,12,18" user="s3-mirror"
+ cron: name="s3sync-updates-previous" minute="5,20,35,50" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-updates-previous "/usr/local/bin/s3-sync-path.sh /pub/fedora/linux/updates/{{ FedoraCycleNumber|int - 1 }}/" 2>&1 | /usr/local/bin/nag-once s3-updates-previous.sh 1d 2>&1'
cron_file=s3-updates-previous.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -102,7 +102,7 @@
- s3-mirror
- name: s3sync cron - epel 7 x86_64
- cron: name="s3sync-epel7-x86_64" minute="10" hour="2,5,8,11,14,17,20,23" user="s3-mirror"
+ cron: name="s3sync-epel7-x86_64" minute="6,21,36,51" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel7-x86_64 "/usr/local/bin/s3-sync-path.sh /pub/epel/7/x86_64/" 2>&1 | /usr/local/bin/nag-once s3-epel7-x86_64.sh 1d 2>&1'
cron_file=s3-epel7-x86_64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -110,7 +110,7 @@
- s3-mirror
- name: s3sync cron - epel 7 aarch64
- cron: name="s3sync-epel7-aarch64" minute="20" hour="4,7,10,13,16,19,22" user="s3-mirror"
+ cron: name="s3sync-epel7-aarch64" minute="7,22,37,52" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel7-aarch64 "/usr/local/bin/s3-sync-path.sh /pub/epel/7/aarch64/" 2>&1 | /usr/local/bin/nag-once s3-epel7-aarch64.sh 1d 2>&1'
cron_file=s3-epel7-aarch64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -118,7 +118,7 @@
- s3-mirror
- name: s3sync cron - epel 8 Everything x86_64
- cron: name="s3sync-epel8-everything-x86_64" minute="43" hour="3,6,9,12,15,17,20,23" user="s3-mirror"
+ cron: name="s3sync-epel8-everything-x86_64" minute="8,23,38,53" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel8-everything-x86_64 "/usr/local/bin/s3-sync-path.sh /pub/epel/8/Everything/x86_64/" 2>&1 | /usr/local/bin/nag-once s3-epel8-everything-x86_64.sh 1d 2>&1'
cron_file=s3-epel8-everything-x86_64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -126,7 +126,7 @@
- s3-mirror
- name: s3sync cron - epel 8 Everything aarch64
- cron: name="s3sync-epel8-everything-aarch64" minute="38" hour="4,7,10,13,16,19,22" user="s3-mirror"
+ cron: name="s3sync-epel8-everything-aarch64" minute="9,24,39,54" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel8-everything-aarch64 "/usr/local/bin/s3-sync-path.sh /pub/epel/8/Everything/aarch64/" 2>&1 | /usr/local/bin/nag-once s3-epel8-everything-aarch64.sh 1d 2>&1'
cron_file=s3-epel8-everything-aarch64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -134,7 +134,7 @@
- s3-mirror
- name: s3sync cron - epel 8 Modular x86_64
- cron: name="s3sync-epel8-modular-x86_64" minute="32" hour="3,6,9,12,15,17,20,23" user="s3-mirror"
+ cron: name="s3sync-epel8-modular-x86_64" minute="10,25,40,55" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel8-modular-x86_64 "/usr/local/bin/s3-sync-path.sh /pub/epel/8/Modular/x86_64/" 2>&1 | /usr/local/bin/nag-once s3-epel8-modular-x86_64.sh 1d 2>&1'
cron_file=s3-epel8-modular-x86_64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
@@ -142,7 +142,7 @@
- s3-mirror
- name: s3sync cron - epel 8 Modular aarch64
- cron: name="s3sync-epel8-modular-aarch64" minute="27" hour="4,7,10,13,16,19,22" user="s3-mirror"
+ cron: name="s3sync-epel8-modular-aarch64" minute="11,26,41,56" user="s3-mirror"
job='/usr/local/bin/lock-wrapper s3sync-epel8-modular-aarch64 "/usr/local/bin/s3-sync-path.sh /pub/epel/8/Modular/aarch64/" 2>&1 | /usr/local/bin/nag-once s3-epel8-modular-aarch64.sh 1d 2>&1'
cron_file=s3-epel8-modular-aarch64.sh
when: env != 'staging' and inventory_hostname.startswith('mm-backend01.')
--
1.8.3.1
Freeze break request: koji autovacuum_freeze_max_age
by Kevin Fenzi
Greetings.
Last night koji alerted due to slowness. It was not backups or anything,
but rather the database hitting the limit I raised in commit c678f73b:
-autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
+autovacuum_freeze_max_age = 300000000 # maximum XID age before forced vacuum
What this means, basically: postgres records in the table rows the xid
(transaction id) that can 'see' other transactions. However, the xid is a
32-bit value, meaning there can only be about 2.1 billion transactions
before it 'wraps around'. When it does so, all the 'old' XIDs need to be
gone or postgres gets confused. It removes the old xids by marking old
rows as 'frozen' (so any other transaction can see them). So, this setting
tells the autovacuumer to start processing the table for old xids and
freezing them, so that by the time the wraparound happens everything will
be set.
Unfortunately, it's doing this on the buildroot_listing table, which is:
public | buildroot_listing | table | koji | 219 GB |
So, the i/o load is heavy and koji is slow to respond to real requests.
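For reference, a quick way to see how far along each table's xid age is and how big the tables are (a sketch, assuming the database is named koji and psql can be run as the postgres user):

sudo -u postgres psql koji -c \
  "SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS size,
          age(relfrozenxid) AS xid_age
     FROM pg_class WHERE relkind = 'r'
    ORDER BY age(relfrozenxid) DESC LIMIT 10;"

Tables whose xid_age is near autovacuum_freeze_max_age are the ones the anti-wraparound autovacuum will pick up next.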
There are (at least) three things we could do:
1. Bump autovacuum_freeze_max_age up to 600 million. The 100 million
bump I did in January gave us about 1.5 months, so if we go to 600 we
might last until June, when we will be migrating to the new datacenter.
600 million is still a long way from 2.1 billion, so it should be fine.
At that point I hope to move db-koji01 to a RHEL 8 instance and a much
newer PostgreSQL. We could also run the vacuum during that downtime and
let it finish.
2. Just let it finish now. Things will be slow, I don't know for how
long. Users will complain and it will take longer for people to get
things done, but at the end we should be in better shape and there's
basically no action we need to take (other than handling complaints).
3. Schedule an outage, take the db offline, and run the vacuum manually
(a rough sketch follows this list). This might be quicker than letting
the autovacuum finish, I am not sure.
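For option 3, a minimal sketch of what the manual run might look like (assuming we only need to hit buildroot_listing, the database is named koji, and we can run as the postgres user; the exact invocation would be decided at outage time):

# during the outage window, with koji hub/web stopped:
sudo -u postgres vacuumdb --verbose --freeze --table buildroot_listing koji
# or, equivalently, from psql:
sudo -u postgres psql koji -c 'VACUUM (FREEZE, VERBOSE) buildroot_listing;'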
Thoughts? Please +1 the freeze break for whichever option you think is
best, or feel free to ask for more info or suggest other options.
kevin