Hello all,
TL;DR: casync is an interesting piece of technology, but doesn't work for our current test images.
As I just merged another fedora-26 test image refresh, I took some time to evaluate casync [1]. On my laptop I still have the old image (c75882fd), and on my colo server (which has a really fast internet connection) I downloaded the new image (554546f5).
The main tradeoff is the chunk size: smaller chunks increase the likelihood of a particular chunk already being present locally, but increase the number of chunks that need to be downloaded. As this usually happens through HTTP, lots of small files kill performance completely (mostly due to the usual TCP slow start behaviour, but also due to the extra effort of making each connection).
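For a rough sense of the scale (the 200 ms per-request overhead is just an assumed figure, not a measurement; the ~1.5 GB image size is from the rsync output below):

$ echo $(( 1500000000 / 65536 ))          # ~22888 chunks for a 1.5 GB image at 64 kB
$ echo $(( 22888 * 200 / 1000 / 60 ))     # ~76 min of pure per-request overhead at 200 ms each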
The reference time on my VDSL-25:
$ time bots/image-download fedora-26
real    9m52.313s
This is still bearable, but it would be even more interesting to improve image uploads (they take about 45 mins here).
First I created a casync index from the fedora-26 qcow2 image on my server, with the default settings (64 kB average chunk size):
server$ cd /path/to/testdir
server$ casync make fedora-26.caibx images/fedora-26-*.qcow2
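With no --store given, casync drops the chunks into a local store directory next to the index (default.castr, if I remember the defaults right). The .caibx and that directory then just need to be reachable over HTTP; for a quick test any static file server will do, e.g.:

server$ python3 -m http.server 8080     # or copy index + store into an existing web root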
Now on my laptop I use casync to get the new image, reusing as many chunks from the previous image as possible and downloading only the new chunks from my server:
$ casync extract --seed images/fedora-26-c75882*.qcow2 http://piware.de/tmp/cockpit-images-casync/fedora-26.caibx images/fedora-26-554546f5c12141c3bc2e92ba53564ad490f6974aea5f7df831ebcab563bf1b62.qcow2
This has to download thousands of chunks, each with its own HTTP request; it took slightly over 22 minutes. Oddly enough this didn't save the downloaded chunks anywhere, so I'm unable to tell whether it actually saved any bandwidth.
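For anyone repeating this, the actual download volume can at least be read off the interface counters around the run (eth0 and <target> are placeholders here):

$ rx_before=$(cat /sys/class/net/eth0/statistics/rx_bytes)
$ casync extract --seed images/fedora-26-c75882*.qcow2 http://piware.de/tmp/cockpit-images-casync/fedora-26.caibx <target>.qcow2
$ rx_after=$(cat /sys/class/net/eth0/statistics/rx_bytes)
$ echo $(( (rx_after - rx_before) / 1024 / 1024 )) MiB received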
Next I changed the average chunk size to 8 MB, and produced a new chunk store:
server$ casync --store 8m.castr --chunk-size=8M make fedora-26-8mb.caibx images/fedora-26-*.qcow2
Trying to download it fails:
$ casync -vv --store http://piware.de/tmp/cockpit-images-casync/8m.castr --seed images/fedora-26-c75882fdee8847229eb89707d41b4488286605fc9f150811a5d950b8a1399825.qcow2 extract http://piware.de/tmp/cockpit-images-casync/fedora-26-8mb.caibx images/fedora-26-554546f5c12141c3bc2e92ba53564ad490f6974aea5f7df831ebcab563bf1b62.qcow2
[...]
Chunk too large
Failed to acquire http://piware.de/tmp/cockpit-images-casync/8m.castr/6265/626515b391d445efc75...
Failed to run synchronizer: Broken pipe
OK, fair enough. The next attempt with a 1 MB chunk size worked, but took pretty much exactly the same time as bots/image-download. Apparently there isn't any actual saving; the qcow images seem to differ too much between image rebuilds.
Cross-checking with rsync over ssh (against the old image):
|  1,515,291,648 100%    2.68MB/s    0:09:00 (xfr#1, to-chk=0/1)
|
| sent 282,439 bytes  received 1,479,233,072 bytes  2,712,219.09 bytes/sec
| total size is 1,515,291,648  speedup is 1.02
Maybe there is some trick ("sort" the blocks on the file system or so) to make qcow files more rsync-friendly. But in conclusion, right now this delta approach doesn't buy us much.
Martin
[1] http://0pointer.net/blog/casync-a-tool-for-distributing-file-system-images.h...
On Tue, Jul 11, 2017, at 11:45, Martin Pitt wrote:
> Maybe there is some trick ("sort" the blocks on the file system or so) to make qcow files more rsync-friendly. But in conclusion, right now this delta approach doesn't buy us much.
The problem here is qcow2's internal compression: even when the image changes only a little, the compression changes almost everything in the resulting file. casync compresses blocks after checksumming them, so it can be used on raw images.
After hearing Lennart's talk about this in Berlin the other day, I decided it's worth giving it another look.
To test it, I downloaded the last three fedora-26 images, which should have enough in common to see some benefits:
image   commit     date
------------------------
c758    83abe8a8   Jul  1
5545    2623987a   Jul 10
4dad    6d6be503   Jul 12
And uncompressed them with
$ qemu-img convert <image>.qcow2 <image>.raw
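(A test machine consuming such a raw image would just convert it back locally, roughly like this; adding -c would recompress it:)

$ qemu-img convert -O qcow2 <image>.raw <image>.qcow2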
I then ran `casync make` with the same store on each of them, starting with the oldest. I repeated the experiment for a couple of different chunk sizes, with a new store for each chunk size.
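The runs boiled down to something of this shape (a sketch with placeholder store/file names, not the literal commands; the other chunk sizes used --chunk-size and a fresh store):

$ for img in c758.raw 5545.raw 4dad.raw; do time casync --store 64k.castr make ${img%.raw}.caibx $img; done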
chunk   image   reuse   size   files   time
--------------------------------------------
64k     c758     40%    1.1G    74k    17 min
        5545     74%    1.5G   100k     9 min
        4dad     93%    1.7G   107k     4 min
--------------------------------------------
512k    c758     37%    0.9G    11k    18 min
        5545     64%    1.6G    17k    12 min
        4dad     84%    1.8G    20k     7 min
--------------------------------------------
1M      c758     34%    0.9G     5k    19 min
        5545     56%    1.6G     9k    14 min
        4dad     77%    2.0G    11k     9 min
--------------------------------------------
8M      c758     38%    0.9G     355   24 min
        5545     37%    1.8G     713   24 min
        4dad     40%    2.6G      1k   23 min
reuse: the number of chunks that were reused (as reported by casync)
size:  the size of the store on disk
files: the number of files in the store
time:  runtime on my system
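(The size and files columns are simply taken from the store directory, e.g. for the 64k store from the sketch above:)

$ du -sh 64k.castr
$ find 64k.castr -type f | wc -l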
The baseline for reused chunks with these images seems to be in the high 30% range. I guess these are just empty blocks.
We're definitely not gaining anything at a chunk size of 8M. The store ends up exactly as big as an xz-compressed image, and generating it takes the same time as compressing the image directly with `xz`. The store also grows almost linearly with the number of images, so not many chunks seem to be reused between images.
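(For reference, the xz comparison I mean is simply compressing the raw image directly, something like:)

$ time xz -k <image>.raw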
The smaller chunk sizes seem to strike a nice balance between the number of files and reused chunks. As Martin already mentioned, the bottleneck there might be the HTTP requests.
Before we spend any more time on this, let's find out whether this could give us tangible benefits that justify the additional complexity and developer time. It saves quite some disk space and bandwidth (for both developers and test machines), at the expense of long image creation times when the store is empty.
I don't know the space and bandwidth constraints of our test runners. So I'm not sure if this is really worth pursuing. What do you think?
Cheers
Lars
P.S.: I also ran the same tests on the compressed qcow images for comparison. It behaved as expected: the highest chunk reuse achieved was 21% for the 64k chunk size, but usually it was *much* lower than that. Not worth it.
I'll run another batch of tests tonight that puts all current images into one store. Let's see how much saving we get between different operating systems.