[ Summary: if you are going to rebuild the OSBS buildroot image in the
near future, talk to me to make sure things don't break ]
I just submitted a new version of flatpak-module-tools to Bodhi. This
updates flatpak-module-tools to use the libmodulemd v2 API - this was
forced because module-build-service has switched to use the v2 API.
*However* this update is incompatible with the current version of
atomic-reactor - which has references to the libmodulemd v1 API. So,
if the OSBS buildroot image is rebuilt after
flatpak-module-tools-0.10.1 hits stable, but before atomic-reactor is
fixed, Flatpak builds are going to break.
Submitting the necessary atomic-reactor fixes upstream blocks on
getting a libmodulemd v2 package into EPEL-7:
https://bugzilla.redhat.com/show_bug.cgi?id=1724271 - but we can land
them as a local patch to the Fedora atomic-reactor package if needed.
I was working on deploying Fedora Happiness Packets on OpenShift and was
looking for resources on how to do that.
I followed Clement's article to get an initial idea.
If you could help me find OpenShift resources on how to deploy a Django app,
that would be great.
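For context, my rough understanding is that OpenShift's Python source-to-image
builder can build and run a Django app straight from a git repo, roughly like
the sketch below (the repo URL, image tag, app name and env var here are
placeholders, not the actual Happiness Packets setup):

  # assumes the oc client is installed and logged in to the target project
  oc new-app python:3.6~https://github.com/<your-fork>/fedora-happinesspackets.git \
      --name=happinesspackets
  # set whatever settings/secrets the app expects as environment variables
  oc set env dc/happinesspackets DJANGO_SECRET_KEY=<some-secret>
  # expose the service so the app gets a route
  oc expose svc/happinesspackets
  # follow the source-to-image build logs
  oc logs -f bc/happinesspackets

A database and a proper settings module would of course still need to be
wired up on top of this.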
I'd like to propose we migrate pagure01 (which is pagure.io) and
pagure-stg01 (pagure-stg.io) from osuosl to the community cage in rdu2.
We have had odd networking problems between .cz and osuosl and at times
between phx2 and osuosl.
Moving things would allow us to:
* drop the pagure-proxy in ibiblio (which currently proxies all traffic to
  the osuosl instances to avoid the networking issues from .cz).
* move forward with adding repospanner to ansible and have it appear on
  pagure.io so people could do PRs and CI and other fun things against
  our ansible repo.
Why not move it to openshift? Well, we could, but that's going to take
cycles we don't have right now, so a simple migration seems much easier
and gets us a lot of good.
Why not move it to phx2? Well, we would need to get a bunch of ports
opened for its use and we may well be moving resources out of phx2 at
some point, so we really don't want to add another thing to move out.
Why not make a new instance in rdu2 and migrate just data to it?
Again, we could, but at this point a brute force migration seems like a
better use of our cycles than figuring out exactly what needs to be synced
over. If we really don't want the downtime we can look more into this; I just
worry that it will take a lot of time to make sure we have everything
synced and set up right.
If folks are ok with this, I'd like to consider taking it down Friday
late afternoon and migrating it over this weekend. It would have
downtime for however long it takes to sync the disk over. I can do
pagure-stg01 as a test run later this week if we like.
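For the actual disk sync, I'm thinking of something along the lines of the
sketch below: one pass while the old box is still up, then a short final pass
during the outage window (host names and paths here are illustrative, not the
real layout):

  # first pass while the old pagure01 is still serving
  rsync -aHAXS --numeric-ids /srv/ <new-pagure01>:/srv/
  # stop pagure and related services, then do a final delta pass
  rsync -aHAXS --numeric-ids --delete /srv/ <new-pagure01>:/srv/

That should keep the downtime to roughly the second pass plus bringing
services back up on the new host.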
= Preamble =
The infrastructure team will be having its weekly meeting tomorrow,
2019-07-11 at 15:00 UTC in #fedora-meeting-1 on the freenode network.
We have a gobby document at
which can be edited for the agenda (see:
Please try to review and edit that document before the meeting and we
will use it as our agenda of things to discuss. A copy as of today
is included in this email.
If you have something to discuss, add the topic to the discussion area
with your name. If you would like to teach other folks about some
application or setup in our infrastructure, please add that topic and
your name to the learn about section.
= Introduction =
We will use it over the week before the meeting to gather status and info,
discussion items and so forth, then use it in the IRC meeting to transfer
information to the meetbot logs.
= Meeting start stuff =
#startmeeting Infrastructure (2019-07-11)
#chair nirik pingou puiterwijk relrod smooge tflink cverna mizdebsk
mkonecny abompard bowlofeggs
= Let new people say hello =
#topic New folks introductions
#info This is a place where people who are interested in Fedora
Infrastructure can introduce themselves
#info Getting Started Guide:
= Status / Information / Trivia / Announcements =
(We put things here we want others on the team to know, but don't need to discuss)
(Please use #info <the thing> - your name)
#topic announcements and information
#info bowlofeggs is on extended leave
#info cverna will be going on extended leave
#info abompard will be going on extended leave
#info other vacations/leave/etc?
#info Flock2Fedora 2019-08-08 -> 2019-08-11
#info No site trip to PHX2
#info Red Hat is now owned by IBM.
= Things we should discuss =
We use this section to bring up discussion topics, things we want to talk
about as a group and come up with some consensus or decision on, or just a
problem or issue. If there are none of these we skip this section.
(Use #topic your discussion topic - your username)
#info smooge is on call from 2019-07-04 -> 2019-07-11
#info ?????? is on call from 2019-07-11 -> 2019-07-18
#info Summary of last week: (from smooge )
#topic Monitoring discussion
#info Go over existing outstanding items and fix them
#topic Tickets discussion
Go thru each ticket one by one
Put all topics for discussion under here
Here we will discuss any apprentice questions, try and match up people
with things to do, progress, testing, anything like that.
= Learn about some application or setup in infrastructure =
(This section, each week we get 1 person to talk about an application or
setup that we have. Just going over what it is, how to contribute, ideas for
improvements, etc. Whoever would like to do this, just add the info in this
section. In the event we don't find someone to teach about something, we skip
this section and just move on to open floor.)
= Meeting end stuff =
#topic Open Floor
Stephen J Smoogen.
So, we have seen some very sporadic connection issues that have been
breaking composes. I'm at a loss as to how to debug it or figure out what's
going on, so I thought I would collect information here and see if
anyone had any ideas.
Of the last 11 doomed rawhide composes:
Fedora-Rawhide-20190622.n.0 - loop issue 
Fedora-Rawhide-20190622.n.1 - loop issue 
Fedora-Rawhide-20190623.n.0 - armv7 http2 framing error 
Fedora-Rawhide-20190624.n.0 - loop issue 
Fedora-Rawhide-20190626.n.0 - kde broken deps
Fedora-Rawhide-20190626.n.1 - loop issue 
Fedora-Rawhide-20190627.n.0 - loop issue 
Fedora-Rawhide-20190627.n.1 - pkg download issue 
Fedora-Rawhide-20190630.n.0 - armv7 http2 framing error
Fedora-Rawhide-20190630.n.1 - a x86_64 live downloading issue 
Fedora-Rawhide-20190702.n.0 - pkg download issue 
The loop issue: mock in the old chroot has 5 loop devices available; if you
try to use more than that they just don't appear in the chroot. For some
reason loop devices aren't getting cleaned up as they should, or we hit
multiple composes per builder and it goes over 5. I patched our mock to have 11
of them. :) See:
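As a sanity check on a builder that hits this, something like the following
would show whether loop devices are being left attached between composes
(purely a diagnostic sketch):

  # list loop devices currently attached on the builder
  losetup -a
  # detach all loop devices (would disrupt anything still using them)
  losetup -D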
The armv7 http2 framing error looks like:
DEBUG util.py:585: BUILDSTDERR: [MIRROR]
libavc1394-0.5.4-10.fc30.armv7hl.rpm: Curl error (16): Error in the
HTTP2 framing layer for
DEBUG util.py:585: BUILDSTDERR: [FAILED]
libavc1394-0.5.4-10.fc30.armv7hl.rpm: No more mirrors to try - All
mirrors were already tried without success
DEBUG util.py:585: BUILDSTDERR: Unable to create appliance : Unable to
download from repo : Cannot download
Packages/l/libavc1394-0.5.4-10.fc30.armv7hl.rpm: All mirrors were tried
The x86_64 live download issue looks like:
(it's a screenshot, see:
) but basically:
Failed to download the following packages: Cannot download
Packages/s/systemd-bootchart-233-4.fc30.x86_64.rpm: All mirrors were tried.
The pkg download issue looks like:
DEBUG util.py:585: BUILDSTDERR: 2019-07-01 03:02:48,081: Non
interactive installation failed: Failed to download the following
packages: Cannot download
Packages/i/iwl3945-firmware-18.104.22.168-97.fc31.noarch.rpm: All mirrors
A short summary of our setup:
A builder, which might be running a mock chroot or a vm requests
something from kojipkgs.
kojipkgs resolves for them in DNS to 2 IPs: proxy101 or proxy110.
Those run apache, which proxies incoming requests for that host to
haproxy, also running on those hosts.
haproxy checks the liveness of the two backend kojipkgs servers,
kojipkgs01 and kojipkgs02. The request will go to one of the two, or to
whichever one is up.
kojipkgs01/02 have varnish listening on port 80, so if they have the
thing in cache it just replies with that. Otherwise it makes a query
against a locally running apache to fetch the item. That apache reads
the package from an nfs mount of all the koji data those machines have mounted.
So, lots of layers, but everything should be pretty reliable.
And most of the time it is. The builders usually download tons and tons of
things just fine via this setup.
At the point where all these are failing, it would be running the rawhide
curl/dnf stack, not the native f29 one on the builders, so if it's a
curl problem I would expect it to happen more often.
So, how can we debug or mitigate this? Any ideas?
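One way to start narrowing it down might be to take dnf/librepo out of the
picture and fetch one of the failing packages with curl from a builder,
pinning the request to one proxy at a time. This is only a sketch: the full
kojipkgs URL, package path and proxy IPs below are placeholders.

  # replace <path/to/pkg.rpm> with one of the packages from the failed logs
  curl -sv --http2 -o /dev/null https://kojipkgs.fedoraproject.org/<path/to/pkg.rpm>
  # pin the request to a single proxy to see if only one of them misbehaves
  curl -sv --resolve kojipkgs.fedoraproject.org:443:<proxy101-ip> \
       -o /dev/null https://kojipkgs.fedoraproject.org/<path/to/pkg.rpm>
  # retry the same transfer over http/1.1 to see if the framing errors go away
  curl -sv --http1.1 -o /dev/null https://kojipkgs.fedoraproject.org/<path/to/pkg.rpm>

If the errors only show up via one proxy, or disappear over http/1.1, that
would at least point at which layer to dig into.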
As some of you may have read:
or other media reports about vulnerabilities of the current gpg keyserver network:
TLDR: Someone can flood (and has been flooding) sks keyservers with poisoned
certs. Users that download from sks keyservers may well find gpg just
stops working, hangs, or breaks in terrible ways. The SKS software is no
longer maintained and because the policy is 'never delete anything'
there's likely no way to mitigate the attacks.
I've cc'ed nb here for his take on things, but as I read it, it might be
best to just retire the keys.fedoraproject.org service at least for now
to avoid breaking users or telling them we have a service they should
trust when they really... should not.
I have CCNA and LPIC-1, and am currently studying for LPIC-2. The goal is
LPIC-3 Virtualization & HA and RHCE.
I am interested in system administration/engineering.
What I would like to learn is Python scripting and MySQL/SQLite, and also
configuration management, like Ansible.
I can dedicate 0-15 hours per week.
PS: If my current "skill set" isn't enough to be of any help on the
project, you can deny my participation in it.