On Wed, Mar 24, 2021 at 11:05 PM Chris Murphy <lists@colorremedies.com> wrote:
On Wed, Mar 24, 2021 at 6:09 AM Richard Shaw <hobbes1069@gmail.com> wrote:
>>
>> I was syncing a 100GB blockchain, which means the file was frequently being appended to, so COW was really killing my I/O (iowait > 50%). I had hoped that marking it nodatacow would be a 100% fix; iowait was quite low overall, but it would regularly jump to 25%-50%, occasionally locking up the GUI briefly. It was worst while the blockchain was syncing and I was rm'ing the old COW copy, even after rm returned. I assume there were quite a few background tasks still updating.
>>
>> I assume a blockchain starts small and just grows by being appended to.

> Append writes are the same on overwriting and COW file systems. You
> might get slightly higher iowait because datacow means datasum which
> means more metadata to write. But that's it. There's no data to COW if
> it's just appending to a file. And metadata writes are always COW.

Hmm... While still annoying (Chrome locking up because it can't read/write its cache in my /home), my desk chair benchmarking says it was definitely better as nodatacow. Now that I think about it, during the initial sync I'm likely getting the blocks out of order, which would explain things a bit more. I'm not too worried about nodatasum for this file, as the nature of a blockchain is to detect errors (intentional or accidental) already, and it should be self-correcting.
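
For reference, per-file nodatacow is set with chattr, and as I understand it the +C attribute only takes effect on new or empty files, so it's easiest to set on the directory and let new files inherit it. The path here is just an example:

$ chattr +C ~/.blockchain        # new files created inside inherit +C (no-COW)
$ lsattr ~/.blockchain/data.mdb  # the 'C' flag should show once the file is recreated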


> You could install bcc-tools and run btrfsslower with the same
> (exclusive) workload with datacow and nodatacow to see if latency is
> meaningfully higher with datacow but I don't expect that this is a
> factor.

That's an interesting tool. I don't want to post all of it here, as it could contain some private info, but I'd be willing to share it privately.

One interesting result: the blockchain file is now almost constantly being written to, but since it's synced it's only being appended to (my guess), and I'm not noticing any "chair benchmark" issues. One of the writes did take 1.8s, though most were a few hundred ms or less.
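
In case anyone else wants to reproduce this: on Fedora the bcc tools land under /usr/share/bcc/tools (the path may differ on other distros), and the trailing argument is the latency threshold in ms. Something like:

$ sudo dnf install bcc-tools
$ sudo /usr/share/bcc/tools/btrfsslower 10   # trace btrfs ops slower than 10 ms
$ sudo /usr/share/bcc/tools/fileslower 10    # same idea at the VFS layer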


> iowait just means the CPU is idle waiting for IO to complete. It could
> do other things, even IO, if that IO can be preempted by proper
> scheduling. So the GUI freezes are probably because there's some other
> file on /home, along with this 100G file, that needs to be accessed
> and between the kernel scheduler, the file system, the IO scheduler,
> and the drive, it's just reluctant to go do that IO. Again, bcc-tools
> can help here in the form of fileslower, which will show latency
> spikes regardless of the file system (it's at the VFS layer and thus
> closer to the application layer which is where the GUI stalls will
> happen).

I'm pretty sure that's exactly what's happening. But is there a better I/O scheduler for traditional hard disks? Currently I have:

$ cat /sys/block/sda/queue/scheduler
mq-deadline kyber [bfq] none
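
I know I can switch it on the fly to experiment (not persistent across reboots; mq-deadline here is just one candidate to try):

$ echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
$ cat /sys/block/sda/queue/scheduler   # brackets mark the active scheduler
[mq-deadline] kyber bfq none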

> If this workload can be described in enough detail that anyone can
> reproduce the setup, it becomes possible for multiple other people to
> collect the information we'd need to track down what's going on. That
> also includes A/B testing, such as the exact same setup merely running
> the 100G workload as the sync is happening (presumably it's not
> exactly that size, but that's the idea).

I was rounding slightly, so no, it's not exactly 100GB, and by the nature of a blockchain it keeps growing:

$ ls -sh data.mdb
101G data.mdb

A large BitTorrent download should be similar, since you don't get the pieces in order, but perhaps the client is smart enough to allocate all the space up front?
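
One way to check would be to compare apparent size with allocated blocks on a file the client is still downloading (filename hypothetical):

$ stat -c 'size=%s  allocated=%b blocks of %B bytes' partial.torrent.data

Though as I understand it, preallocation alone wouldn't avoid COW on btrfs anyway, since writes into preallocated space are still overwrites.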

Thanks,
Richard