HekaFS encryption layer: benchmarks
by Edward Shishkin
Hello everyone,
Here are mongo-benchmark results for 1-brick volumes formatted with
(1) ext4: http://file.brq.redhat.com/~eshishki/HekaFS/hekafs.ext4.html
(2) xfs: http://file.brq.redhat.com/~eshishki/HekaFS/hekafs.xfs.html
Legend:
A) "no crypt xlator": crypt and oplock xlators are not invoked.
B) "crypt xlator with trivial cipher transform": crypt and oplock
xlators are invoked, cipher transform is represented by empty operator.
EOFs (end-of-files) are fully handled.
C) "crypt xlator with AEC-CBC cipher transform": crypt and oplock
xlators are invoked, cipher transform is represented by AEC_CBC_encrypt
of OpenSSL library.
A link to the mongo benchmark documentation can be found at the end of the tables.
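For reference, here is a minimal standalone sketch of the kind of call
configuration (C) makes: encrypting one data chunk with OpenSSL's
AES_cbc_encrypt(). The chunk size, 128-bit key and IV handling below are
illustrative assumptions only; the actual transform lives in the crypt
xlator sources (branch "aes", repository link below).

/* Build: gcc aes_cbc_sketch.c -lcrypto */
#include <openssl/aes.h>

#define CHUNK_SIZE 4096  /* assumed plaintext chunk size (multiple of 16) */

/* Encrypt one chunk in CBC mode; 'iv' is updated in place, so consecutive
 * calls with the same iv buffer chain naturally. */
int encrypt_chunk(const unsigned char *plain, unsigned char *cipher,
                  const unsigned char raw_key[16],
                  unsigned char iv[AES_BLOCK_SIZE])
{
        AES_KEY key;

        /* 128-bit key schedule; the real xlator may use another key size */
        if (AES_set_encrypt_key(raw_key, 128, &key) != 0)
                return -1;

        AES_cbc_encrypt(plain, cipher, CHUNK_SIZE, &key, iv, AES_ENCRYPT);
        return 0;
}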
---------------------------------------------------------------
For the client/server configuration (1) above I have also measured the
performance of writing and reading a large file (800M):
dd writing:
(dd if=/dev/zero of=/mnt/gluster/largefile bs=4K count=200000)
A) 178.57 s, 4.6 MB/s (no crypt xlator)
B) 226.02 s, 3.6 MB/s (crypt xlator with trivial cipher transform)
C) 254.02 s, 3.2 MB/s (crypt xlator with AES-CBC cipher transform)
B/A = 1.26
C/A = 1.42
dd reading:
(dd if=/mnt/gluster/largefile of=/dev/null bs=4K)
A) 11.2394 s, 72.9 MB/s (no crypt xlator)
B) 13.4892 s, 60.7 MB/s (crypt xlator with trivial cipher transform)
C) 26.4549 s, 31.0 MB/s (crypt xlator with AES-CBC cipher transform)
B/A = 1.20
C/A = 2.36
Raw dd read of the large file directly from ext4 on the server:
(dd if=/root/exp2/edward/largefile of=/dev/null bs=4K)
9.69625 s, 84.5 MB/s
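The overhead factors above are simply ratios of elapsed times. The small
sketch below recomputes the throughput (MB/s as dd reports it, i.e. 10^6
bytes per second) and the B/A and C/A factors from the figures already
listed; it assumes nothing beyond those numbers and the 200000 x 4K
transfer size.

#include <stdio.h>

int main(void)
{
        const double bytes = 200000.0 * 4096.0;   /* the 800M test file */
        const char  *label[]   = { "A", "B", "C" };
        const double write_s[] = { 178.57, 226.02, 254.02 };
        const double read_s[]  = { 11.2394, 13.4892, 26.4549 };

        for (int i = 0; i < 3; i++)
                printf("%s: write %5.1f MB/s (x%.2f)  read %5.1f MB/s (x%.2f)\n",
                       label[i],
                       bytes / write_s[i] / 1e6, write_s[i] / write_s[0],
                       bytes / read_s[i]  / 1e6, read_s[i]  / read_s[0]);
        return 0;
}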
------------------------------------------------------------------
Server and client volfiles for (A) and (B) are attached.
HekaFS sources: git://git.fedorahosted.org/CloudFS.git (branch "aes").
Common comments:
The current implementation of encryption support leads to a performance
drop of between 1.28 and 2.36 times, depending on the operation (for bricks
formatted with ext4). The most affected operation is reading a large file;
the least affected is creating a large set of small files.
Thanks,
Edward.