On Mon, Nov 25, 2019 at 1:39 PM pmkellly@frontier.com <pmkellly@frontier.com> wrote:
Here is a link to the List discussion on the subject item:

https://lists.fedoraproject.org/archives/search?q=drive+dismount+test+case&page=1&mlist=test%40lists.fedoraproject.org&sort=date-asc

I don't have a current example of this happening. It was originally
posted to the list by Alan Jenkins while F31 was still Rawhide. I had
seen the problem too, so after a few days with no replies, I picked it
up. The problem disappeared shortly after F31 branched. I didn't file a
bug report, though I would have if it had continued. It just seemed like
something important and basic that should get some attention, especially
since it was once a test case and had been removed from the matrix.

As promised during yesterday's meeting, I looked at the proposed test case:
https://fedoraproject.org/wiki/User:Tablepc/Draft_testcase_reboot

I have multiple comments:
- It is trying to check two things at the same time: 1) that reboot/shutdown works as expected, and 2) that filesystems are properly unmounted. I believe there should be a separate test case for each, so please split it into two.
- Formatting needs to be fixed to make it look like our other test case pages. Currently it looks like a copy-paste from email without any effort to use wiki formatting.
- Basics don't need to be explained, because the required knowledge level for performing the test case guarantees that the tester knows e.g. how to log in. So, for example, the first two points can be shortened to "Switch to a free virtual console using the Ctrl+Alt+F<n> shortcut and log in".
- We don't need to mandate a particular disk layout in the test case setup. It is more useful for different testers to have different environments, so that they have a higher chance of detecting a bug.
- I don't much like checking pre-shutdown messages using halt. The first problem is that with plymouth installed (the default), you won't see the messages, just a frozen plymouth screen (unless you're quick and switch it to console messages before it freezes; see the sketch after this list). The second problem is that it relies too much on user intuition to distinguish a success state from a failure state, and there is no example of either. Additionally, do we know for sure that a system that can't unmount filesystems will halt eventually? I'd expect it to hang forever. I'd rather leave all the error checking for the subsequent boot (or if the boot itself hangs, it's obviously broken - but we won't need a test case for that, because you'll easily see it during regular interaction with the system).
- When checking boot logs for fsck fixes, it's important to show an example not just of a successful case, but also of a failed case. And I seem to be lucky today [1] (a sample command for pulling such messages is shown under the log).
- When testing system shutdown methods, I'd only use reboot and poweroff; see the example commands after this list. Halt is very niche, and shutdown is old and has been replaced by poweroff.
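
For completeness, a sketch of how a tester could actually see the shutdown messages. This assumes the usual Fedora defaults, where the "rhgb quiet" kernel arguments enable the plymouth splash:

  # Option 1: press Esc when the plymouth screen appears, which switches
  # it to the detailed console-message view (you have to be quick).
  # Option 2: boot once without the splash by editing the kernel command
  # line in GRUB and removing these arguments:
  #   rhgb quiet
  # Then trigger the shutdown method under test, e.g.:
  systemctl halt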
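
And a minimal sketch of the two shutdown commands I'd keep in the test case (on current Fedora the legacy reboot/poweroff binaries are just thin wrappers around systemctl anyway):

  # reboot the machine, then inspect the subsequent boot for fsck messages
  systemctl reboot
  # in a second run, power the machine off completely instead
  systemctl poweroff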

Kamil

[1]
-- Logs begin at Tue 2019-08-27 09:26:40 CEST, end at Tue 2019-11-26 14:50:14 CET. --
Nov 25 10:25:20 phoenix systemd-fsck[684]: root: recovering journal
Nov 25 10:25:20 phoenix systemd-fsck[684]: root: Clearing orphaned inode 12325283 (uid=1000, gid=1000, mode=0100644, size=641092)
Nov 25 10:25:20 phoenix systemd-fsck[684]: root: Clearing orphaned inode 12331101 (uid=1000, gid=1000, mode=0100644, size=641092)
...
Nov 25 10:25:20 phoenix systemd-fsck[684]: root: clean, 1023215/26869760 files, 46957728/107451392 blocks
Nov 25 09:25:22 phoenix systemd-fsck[877]: boot: recovering journal
Nov 25 09:25:22 phoenix systemd-fsck[878]: fsck.fat 4.1 (2017-01-24)
Nov 25 09:25:22 phoenix systemd-fsck[878]: 0x25: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
Nov 25 09:25:22 phoenix systemd-fsck[878]:  Automatically removing dirty bit.
Nov 25 09:25:22 phoenix systemd-fsck[878]: Performing changes.
Nov 25 09:25:22 phoenix systemd-fsck[878]: /dev/nvme0n1p1: 34 files, 6897/51145 clusters
Nov 25 09:25:22 phoenix systemd-fsck[877]: boot: clean, 103/65536 files, 67833/262144 blocks
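
For reference, messages like the above can be pulled from the journal of the current boot with something along these lines (the exact filtering is up to the tester's taste):

  journalctl -b | grep -i fsck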