Is there any objection to making the ResultsDB database dumps public
under https://infrastructure.fedoraproject.org/infra/db-dumps/ ?
I wanted to grab a database dump today, to populate a local database for
development purposes. I was able to scp it from db-qa02 since I'm in
sysadmin-qa, but I figured it would be nice to have it available
publicly alongside those other applications' db dumps.
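For reference, once the dump is public, populating a local dev database
should be roughly this (the filename is a guess modelled on the other
dumps at that URL, and I'm assuming a plain-SQL pg_dump like them):

    # fetch the public dump (name assumed) and load it into a fresh DB
    curl -O https://infrastructure.fedoraproject.org/infra/db-dumps/resultsdb.dump.xz
    createdb resultsdb
    xzcat resultsdb.dump.xz | psql resultsdb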
I can't think of any reason why the ResultsDB dumps need to be kept
secret. There should be no sensitive data in them, right?
So if there are no objections I'll file an infra ticket for adding the
dumps to the public-db-copy script.
--
Dan Callaghan <dcallagh(a)redhat.com>
Senior Software Engineer, Products & Technologies Operations
Red Hat
# Fedora QA Devel Meeting
# Date: 2017-09-18
# Time: 14:00 UTC
(https://fedoraproject.org/wiki/Infrastructure/UTCHowto)
# Location: #fedora-meeting-1 on irc.freenode.net
https://phab.qa.fedoraproject.org/w/meetings/20170918-fedoraqadevel/
If you have any additional topics, please reply to this thread or add
them to the wiki doc.
Tim
Proposed Agenda
===============
Announcements and Information
-----------------------------
- Please list announcements or significant information items below so
the meeting goes faster
Tasking
-------
- Does anyone need tasks to do?
Potential Other Topics
----------------------
- running standard interface tests on koji/mbs builds
Open Floor
----------
- TBD
Hi folks! Since we have the fine folks from IBM and pschindl involved
with openQA now, and I'm trying to improve our bus factor, I thought
I'd pass along a few notes about one of the scariest bits of our openQA
deployment...
The tests which require multiple VMs to communicate with each other use
software-defined networking via openvswitch. The upstream docs for this
are:
https://github.com/os-autoinst/openQA/blob/master/docs/Networking.asciidoc
Broadly speaking, our deployments just implement that stuff via Ansible,
but it's pretty complex, so here are some notes.
The main bit of the ansible config is in
roles/openqa/worker/tasks/tap-setup.yml , with associated files and
templates in roles/openqa/worker/{files,templates} . But especially for
this networking stuff, there are actually important bits elsewhere that
are harder to find.
We need special iptables rules to make the magic happen, and they get
done somewhere else. First off, there's a mechanism in the infra
ansible plays that lets you specify custom rules as an ansible
variable, and we do this in inventory/group_vars/openqa-tap-workers -
look at that file and you'll see some custom rules. However, this
mechanism isn't actually flexible enough to let us add all the rules we
need, so we actually have a variant of the basic iptables template
file: roles/base/templates/iptables/iptables.openqa-tap-workers . You
can diff this against roles/base/templates/iptables/iptables to see
what we change: basically we add some masquerade rules at the end.
These can't be added as custom rules because they have to go in the nat
table - if we changed the nat table in custom rules, then the
"otherwise kick everything out" bit of the template wouldn't work
correctly (as it wouldn't be applied to the right table).
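To make that concrete, here's the general shape of the rule in question
(illustrative only - the real rules live in the modified template, and
the interface name varies per host):

    # nat-table masquerade for guest traffic leaving via the host's
    # active interface
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE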
Why did this come up, you ask? Well, we just had to re-deploy qa09 -
which is the x86_64 tap worker host - and it wasn't quite working right
for these tap tests, with a strange failure mode (*some* network
connections from the worker VMs would work fine, others would just
stall). Also, tap networking wasn't working on the new ppc64 worker
host at all; no connections from within the worker VMs were working.
After a lot of time bashing my head against the wall, I finally figured
out what was going on: it's all about network interfaces on the host.
There are a couple of bits in the iptables stuff which specifically
refer to what should be the active interface on the host. Up till
today, these all specified eth0.
When qa09 got re-deployed, for some reason, it had active network
connections on *both* eth0 *and* eth1. This was causing the weird
behaviour - it looks like openvswitch was deciding more or less at
random to route traffic from the guests over eth0 or eth1, and any
traffic that got routed over eth0 worked, but traffic routed over eth1
just didn't because the firewall wasn't set up to allow it.
There's actually even a bit in tap-setup.yml that tries to disable
eth1, but it didn't work: it relies on adding ONBOOT=no to ifcfg-eth1
if that file exists, and ifcfg-eth1 *didn't* exist. NetworkManager was
just bringing the interface up without any config file, it seems. So
for now I just manually created an ifcfg-eth1 on qa09 which specifies
ONBOOT=no .
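Something like this is all it takes (a minimal sketch - the only line
that really matters here is ONBOOT=no):

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    ONBOOT=no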
On the ppc64 worker host, the active network interface isn't eth0, it's
eth2. So to account for this, I changed the custom iptables stuff to
allow everything for both eth0 *and* eth2 (this involved changing both
the custom rules and the modified template).
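Illustratively, the shape of that change - the real rules are in the
group_vars file and the modified template mentioned above:

    # accept guest traffic arriving on eth2, alongside the eth0 rules
    iptables -A INPUT -i eth2 -j ACCEPT
    # and masquerade traffic leaving via eth2 (nat table)
    iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE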
With these changes, the 'tap' networking tests seem to be working
properly on both qa09 and openqa-ppc64le-01.
So the moral of the story is: if I'm off on a desert island and this
stuff starts giving you trouble for some reason, remember all the
different places where we have config for it in the ansible plays, and
check the active interfaces on the problematic host...
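A quick way to check that on any host (plain iproute2, nothing
deployment-specific):

    ip -br link   # which interfaces are actually UP
    ip route      # which interface carries the default route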
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
Hello Adam,
I discovered this morning that the openQA instance running on
openqa.stg has some hack code (1), but this change is not visible in
the master or staging branch on Pagure.
So I am not able to tell whether any other changes are present on the
machine.
Would it be possible to have remote read access to the openQA source
code used by openqa.stg, to help with test failure investigation?
(1)
https://openqa.stg.fedoraproject.org/tests/156101/modules/server_cockpit_de…
===
sub run {
    my $self = shift;
    # HACK HACK HACK
    assert_script_run "setenforce 0";
    # check cockpit appears to be enabled and running and firewall is setup
    assert_script_run 'systemctl is-enabled cockpit.socket';
}
===
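For whoever does have shell access in the meantime: local modifications
should show up with a plain git diff in the tests checkout. The path
below is my guess at where the os-autoinst-distri-fedora checkout lives
on the server:

    cd /var/lib/openqa/share/tests/fedora   # assumed location
    git status
    git diff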
--
Michel Normand
Hi,
I'm happy to announce a new version of rpmgrill has been released.
Changes include:
* `rpmgrill-unpack-rpms`: now also attempts to copy downloaded build
logs from Koji into the unpacked subdirectories.
* `rpmgrill-fetch-build` and `rpmgrill-analyze-local` are marked as
deprecated and will be removed in the next release.
* `rpmgrill` now exits with code 1 to indicate the test run had
failures, and 0 to indicate success (see the sketch below).
* `rpmgrill` now recognizes executables linked with new dtags
(BIND_NOW) (thanks to Petr Pisar).
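A sketch of how the new exit code can be used in scripting (the
invocation below is illustrative - see the rpmgrill docs for real
usage):

    # run rpmgrill against an unpacked build and act on the exit code
    if rpmgrill ./unpacked-build; then
        echo "no failures"
    else
        echo "rpmgrill reported failures"
    fi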
Updates have been filed in bodhi, so testing is much appreciated:
F25: https://bodhi.fedoraproject.org/updates/rpmgrill-0.31-1.fc25
F26: https://bodhi.fedoraproject.org/updates/rpmgrill-0.31-1.fc26
F27: https://bodhi.fedoraproject.org/updates/rpmgrill-0.31-1.fc27
Happy hacking!
--
Róman Joost
Senior Software Engineer, Products & Technologies Operations (Brisbane)
Red Hat
I'm a newbie and would like to help out with something. I do dev/QA
automation at work but I'm fairly new to Linux. Willing to clean the
floors, make the coffee :-)
Monday is a holiday in the US and I will not be available to lead the
QA Devel meeting.
If there are some topics that need discussing and someone else is
willing to lead the meeting, please reply here and the meeting can
happen.
Tim