On 28/10/2023 10.06, fedora(a)eyal.emu.id.au wrote:
On 28/10/2023 09.38, Jeffrey Walton wrote:
> On Fri, Oct 27, 2023 at 5:59 PM Eyal Lebedinsky <fedora(a)eyal.emu.id.au> wrote:
>>
>> Fully updated F28.
>>
>> I had to send one (of 7) member disk for RMA.
>> I notice that the system is very non responsive. 'top' shows
>>
>>      PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
>> 1365697 root      20   0       0      0      0 R  93.8   0.0 384:40.55 kworker/u16:3+flush-9:127
>>
>> This continues even when there are no user actions (ff, tb closed).
>>
>> A few days ago it stopped, but today I see that it kept running all night, even through
>> periods of inactivity lasting a few hours.
>>
>> As another point: a few days ago I received a disk from RMA and the recovery went
>> as fast as expected.
>> I then removed another disk to send for RMA.
>>
>> Is this expected? Is there anything I can do to improve the situation?
>
> If you have a hot spare and a lot of data, I could envision a
> situation where a low priority thread takes several days to rebuild
> the array. Or that has been my [limited] experience when failing over.
> But it usually happens in the background, and does not affect
> responsiveness too much.
>
> Jeff
I do not think this is the situation. I do not have a spare.
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 sdg1[5] sdf1[4] sdh1[6] sdc1[9] sde1[7] sdd1[8]
      58593761280 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/6] [_UUUUUU]
      bitmap: 84/88 pages [336KB], 65536KB chunk

unused devices: <none>
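For what it's worth, with the array degraded ([_UUUUUU]) every small write has to reconstruct data for the missing member, so writeback can crawl. A few generic md things one might check (device name md127 taken from the output above; no guarantee any of them is the culprit):

$ cat /sys/block/md127/md/sync_action        # confirm nothing is resyncing ("idle" expected)
$ cat /sys/block/md127/md/stripe_cache_size  # raid5/6 stripe cache; the default (often 256) is small
$ grep -E 'Dirty|Writeback' /proc/meminfo    # how much dirty data the flush kworker still has to push out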
To show how slow it is: iostat shows writes to the array running at below 100KB/s.
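For a more detailed view (assuming the sysstat iostat is available), an extended run covering the md device and its members would show per-disk utilisation and latency next to the array throughput:

$ iostat -x /dev/md127 /dev/sd[c-h] 5        # %util and await on the members vs. write rate on md127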
I decided to pause (virsh save) a VM, which needs to write about 8GB.
It has now been running for over 30 minutes and has completed about 3/4 of the job...
oops, I misread it. It has not written 6GB, only 600MB (now up to 900MB).
I will let it complete through the day.
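If it keeps dragging, the progress of the save could be followed with virsh (the domain name "guest1" below is only a placeholder):

$ virsh domjobinfo guest1                    # data processed/remaining for the in-flight save job
$ watch -n 10 virsh domjobinfo guest1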
--
Eyal at Home (fedora(a)eyal.emu.id.au)