# F41 Blocker Review meeting
# Date: 2024-09-30
# Time: 16:00 UTC
# Location:
https://matrix.to/#/#blocker-review:fedoraproject.org?web-instance[element.…
Hi folks! It's time for a Fedora 41 blocker review meeting! We have 1
proposed blocker and 2 proposed freeze exceptions for Final.
Here is a handy link which should show you the meeting time
in your local time:
https://www.timeanddate.com/worldclock/fixedtime.html?msg=Fedora+41+Blocker…
The meeting will be on Matrix. Click the link above to join in a web
client - you can authenticate with your FAS account - or use a
dedicated client of your choosing.
If you have time this weekend, you can take a look at the proposed or
accepted blockers before the meeting - the full lists can be found
here: https://qa.fedoraproject.org/blockerbugs/ .
Remember, you can also now vote on bugs outside of review meetings! If
you look at the bug list in the blockerbugs app, you'll see links
labeled "Vote!" next to all proposed blockers and freeze exceptions.
Those links take you to tickets where you can vote.
https://pagure.io/fedora-qa/blocker-review has instructions on how
exactly you do it. We usually go through the tickets shortly before the
meeting and apply any clear votes, so the meeting will just cover bugs
where there wasn't a clear outcome in the ticket voting yet. **THIS
MEANS IF YOU VOTE NOW, THE MEETING WILL BE SHORTER!**
We'll be evaluating these bugs to see if they violate any of the
Release Criteria and warrant the blocking of a release if they're not
fixed. Information on the release criteria for F41 can be found on the
wiki [0].
For more information about the Blocker and Freeze exception process,
check out these links:
- https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process
- https://fedoraproject.org/wiki/QA:SOP_freeze_exception_bug_process
And for those of you who are curious how a Blocker Review Meeting
works - or how it's supposed to go and you want to run one - check out
the SOP on the wiki:
- https://fedoraproject.org/wiki/QA:SOP_Blocker_Bug_Meeting
Have a good weekend and see you on Monday!
[0] https://fedoraproject.org/wiki/Fedora_Release_Criteria
--
Adam Williamson (he/him/his)
Fedora QA
Fedora Chat: @adamwill:fedora.im | Mastodon: @adamw@fosstodon.org
https://www.happyassassin.net
• Server user poll
==================
We decided to use the current version of the draft on hackmd.io, with all inserted modifications and with the shorter of the two formulated alternatives.
Mowest will create a LimeSurvey version of the discussed draft and reach out to Justin to move the project forward.
• Testing Release 41
====================
We agreed to continue the discussion on the mailing list, as we ran out of time in the meeting. See pboy: "Improving our release testing efforts. An attempt to summarize our discussion" (https://lists.fedoraproject.org/archives/list/server@lists.fedoraproject.or…)
We will begin our next meeting with this topic!
For details see our Working Group project page at
https://docs.fedoraproject.org/en-US/server-working-group/wg-minutes-2024/
== Please take some time to discuss this! ==
--
Peter Boy
https://fedoraproject.org/wiki/User:Pboy
PBoy(a)fedoraproject.org
Timezone: CET (UTC+1) / CEST (UTC+2)
Fedora Server Edition Working Group member
Fedora Docs team contributor and board member
Java developer and enthusiast
At our last meeting (https://meetbot.fedoraproject.org/meeting_matrix_fedoraproject-org/2024-09-…) I agreed to try to summarize the current state of our discussion and to describe a possible solution. Here we go.
We have been discussing this topic for a long time; the initial tracking issue #63 (https://pagure.io/fedora-server/issue/63) is two years old.
Our initial intention was (and still is):
* Systematization of activities according to criteria and objective needs
* As a supplement to automated tests, covering aspects that may not be amenable to automated testing
* Integration and coordination with distribution-wide QA
* Discovery of new problems for which, of course, there are no automated tests (yet).
* This includes, among other things, monitoring the release changes that could potentially have side effects for Server.
* Checking the documentation for necessary updates
Topic "WHAT to test"
====================
One position was/is that manual testing is more or less completely redundant because everything relevant is now covered by automated testing.
One argument against this is that in the past, some problems were only noticed and found in the course of manual testing (e.g. software RAID when switching to GPT), or were not found at all or not in time because manual testing of a release was not carried out or was insufficient (e.g. the problems with LVM administration, one of the most important functionalities for Server).
And somehow it doesn't feel right not to test our installation media at all, and to use them to perform an installation or update for the first time only after the release has been published.
I suppose we can agree that "it is good to have human testing of the deliverables written to real physical media on real physical systems" (adamwill). And this is exactly what we did in the past. Sometimes our manual testing program also included our central services, virtualization and containerization, which are highly consequential in the event of a failure.
Additionally, it seems agreeable "to test whatever workflows (we) have that *aren't* covered in the validation tests" (adamwill), although we may need to clarify what exactly the "validation tests" cover.
We can probably agree on the following list of human/manual test tasks:
* test DVD installation media on physical hardware
* test netboot installation media on physical hardware
* test VM (KVM) instantiation including first steps after first boot
* in both cases checking:
**** Everything works without breakage
**** No distortion of the graphical or terminal output.
**** No irritating, inaccurate or misleading error messages.
**** Besides the graphically guided steps, check shell access (<F1> etc.), including access to log files, print screen, etc.
**** Accuracy of the relevant documentation
* In the case of DVD installation, additionally (here running on hardware):
**** Installing virtualization
**** Installing containerization with systemd-nspawn
**** Installing containerization with podman, as soon as we have documentation and procedures ready (see the command sketch after this list)
* Test the dnf upgrade procedure on hardware and on a "real life" instance, not just the minimal default (see the upgrade sketch after this list)
* Test the dnf upgrade procedure on a "real life" VM instance, not just the minimal default
* Create a list of any special or one-off tests that are likely to be needed, based on the list of changes
**** Perform and monitor these tests
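As a reference for the virtualization/containerization items above, here is a minimal command sketch of what such an installation check might run on the freshly installed system. The package group and the test image are my assumptions, not our documented procedure:

    # Virtualization: install the libvirt/KVM stack and do a basic sanity check
    sudo dnf install @virtualization
    sudo systemctl enable --now libvirtd
    sudo virt-host-validate

    # Containerization with systemd-nspawn (machinectl comes with systemd-container)
    sudo dnf install systemd-container
    sudo machinectl list

    # Containerization with podman (once documentation and procedures are ready)
    sudo dnf install podman
    podman run --rm registry.fedoraproject.org/fedora:41 cat /etc/os-release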
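And the upgrade sketch referenced above: the standard dnf system-upgrade procedure, with the target release number given only as an example. On a "real life" instance the interesting part is verifying services, storage and logs after the reboot:

    # Bring the current installation fully up to date first
    sudo dnf upgrade --refresh

    # Install the system-upgrade plugin and download the new release
    sudo dnf install dnf-plugin-system-upgrade
    sudo dnf system-upgrade download --releasever=41

    # Reboot into the offline upgrade step
    sudo dnf system-upgrade reboot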
Topic "HOW to test"
====================
Previously, we created a corresponding list as a tracking issue. For this release, I created a wiki page, which is easier to use.
As adamwill noted, this page "definitely shouldn't exist, we should fold anything important it covers into one of the existing pages". It takes us away from our long-standing goal of aligning our testing with distribution-wide QA; it is a stopgap solution because the current pages do not offer us this capability.
To organize the testing practically, we need a concise and clear list of all the tasks that need to be completed and that we can "tick off". It would be good to have a structure like the one offered by the wiki page (https://fedoraproject.org/wiki/Server/QA_Manual_Testing_Overview). And nothing that is not part of the Server test program belongs on this page.
It would be really great if the current QA pages could be added to or changed accordingly.
A good starting page would be the server page that is now sent in the announcement emails: https://fedoraproject.org/wiki/Test_Results:Fedora_41_Branched_20240924.n.0…
We should remove all items that have nothing to do with Server, starting with the download list.
The test matrix and coverage page / lists should be split into automated tests and manual/human tests.
The lists themselves can probably consist largely of annotated links to existing pages, but preferably with anchors directly to the relevant spot. And they should already indicate who tested and, above all, with which result. And it must be clear at a glance what has not yet been tested.
And we need a place for one-off, release-specific tests, should the need arise.
Topic "SUPPLEMENTING the tests"
===============================
Of our server-specific services (or roles), only two are currently covered by tests: PostgreSQL and IPA. This coverage needs to be completed.
The process is:
1. define these as blocking roles in the PRD / tech spec
2. cover them in the release criteria
3. write wiki test cases
4. automate them
The first task is done. We should continue with the second one. Pragmatically, we should focus on the services for which documentation and procedures already exist: virtualization, containerization (nspawn), web server and NFS server.
The biggest issue might be the Apache server. Its current installation procedure effectively results in an unusable and broken instance. There is a lot of work to be done.
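For context, a hedged sketch of what I assume is meant by the "current installation procedure" (the plain package install, not a documented Server setup):

    sudo dnf install httpd
    sudo systemctl enable --now httpd
    sudo firewall-cmd --permanent --add-service=http
    sudo firewall-cmd --reload
    # Out of the box this only serves the default test page; content, TLS
    # (mod_ssl) and virtual host configuration still have to be added by hand.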
But we should discuss this separately from the general release tests. We have tracking issue #61 for this (https://pagure.io/fedora-server/issue/61).
--
Peter Boy
https://fedoraproject.org/wiki/User:Pboy
PBoy(a)fedoraproject.org
Timezone: CET (UTC+1) / CEST (UTC+2)
Fedora Server Edition Working Group member
Fedora Docs team contributor and board member
Java developer and enthusiast
# F41 Blocker Review meeting
# Date: 2024-09-23
# Time: 16:00 UTC
# Location:
https://matrix.to/#/#blocker-review:fedoraproject.org?web-instance[element.…
Hi folks! It's time for a Fedora 41 blocker review meeting! We have 2
proposed blockers for Final.
Here is a handy link which should show you the meeting time
in your local time:
https://www.timeanddate.com/worldclock/fixedtime.html?msg=Fedora+41+Blocker…
The meeting will be on Matrix. Click the link above to join in a web
client - you can authenticate with your FAS account - or use a
dedicated client of your choosing.
If you have time today, you can take a look at the proposed or
accepted blockers before the meeting - the full lists can be found
here: https://qa.fedoraproject.org/blockerbugs/ .
Remember, you can also now vote on bugs outside of review meetings! If
you look at the bug list in the blockerbugs app, you'll see links
labeled "Vote!" next to all proposed blockers and freeze exceptions.
Those links take you to tickets where you can vote.
https://pagure.io/fedora-qa/blocker-review has instructions on how
exactly you do it. We usually go through the tickets shortly before the
meeting and apply any clear votes, so the meeting will just cover bugs
where there wasn't a clear outcome in the ticket voting yet. **THIS
MEANS IF YOU VOTE NOW, THE MEETING WILL BE SHORTER!**
We'll be evaluating these bugs to see if they violate any of the
Release Criteria and warrant the blocking of a release if they're not
fixed. Information on the release criteria for F41 can be found on the
wiki [0].
For more information about the Blocker and Freeze exception process,
check out these links:
- https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process
- https://fedoraproject.org/wiki/QA:SOP_freeze_exception_bug_process
And for those of you who are curious how a Blocker Review Meeting
works - or how it's supposed to go and you want to run one - check out
the SOP on the wiki:
- https://fedoraproject.org/wiki/QA:SOP_Blocker_Bug_Meeting
Have a good day and see you on Monday!
[0] https://fedoraproject.org/wiki/Fedora_Release_Criteria
--
Best regards / S pozdravem,
František Zatloukal
Senior Quality Engineer
Red Hat
I just want to encourage everyone to look over the "work in progress"
Fedora Server Survey.
I'm excited about this project for our Working Group. Peter Boy started a
draft, and I made a number of edits to the draft today. I also added some
goals for the survey at the top of the document to help guide our decisions
and formation of questions. I believe there is still work that could be
done to improve the questions before we enter them into the
LimeSurvey software. Please take a look at the following document and
consider:
1. Do we have the right goals for this survey? (Too many, too few, not the
right goals...)
2. Do we have the right questions to reach the proposed goals for the
survey? (Too many, too few, not the right questions for our goals)
3. Are the questions clear, and do they provide data we can use? (e.g. open-input
questions will create extra work to tabulate the results)
The more feedback, the better. Peter Boy has planned for the survey to be
at the top of the agenda for next week's meeting, so your input before then
is appreciated.
Draft of Survey:
https://hackmd.io/@pboy/ByguCouphC
--
mowest
--------->
discoverfoss.com