Hi all, Recently we have wanted to implement a new translator for data compression. Could you give us some advice? We want to build the new translator on top of the “posix” translator. Any suggestions?
Thanks & Best Regards Sun Yongjie (孙永洁) Phone: 8751-1643 / 18611302918 CUBE: GTC-17-W030 Email: yongjie.sun@intel.com
On 07/03/2013 08:51 PM, Sun, Yongjie wrote:
Hi all, Recently we have wanted to implement a new translator for data compression. Could you give us some advice? We want to build the new translator on top of the “posix” translator. Any suggestions?
There are two basic ways you could do this.
* Whole file: keep files uncompressed while they're open, compressed otherwise. It shouldn't be hard to use the GFID to establish correspondence between the two.
* Block: compress each (possibly large) block separately, maintaining a list of mappings from user-visible uncompressed offsets to internal compressed offsets.
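To make the block-wise bookkeeping concrete, here is a minimal sketch (in C, since that is what translators are written in) of the kind of mapping table the second approach implies. All of the names here (block_map, block_entry, map_lookup) and the fixed 128 KB logical block size are illustrative assumptions, not part of any GlusterFS API:

#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>

#define LOGICAL_BLOCK_SIZE (128 * 1024)  /* fixed uncompressed block size */

/* One entry per logical block: where its compressed bytes live on disk. */
struct block_entry {
    off_t    compressed_off;  /* offset of the compressed block in the file */
    uint32_t compressed_len;  /* length of the compressed block */
};

struct block_map {
    struct block_entry *entries;  /* indexed by logical block number */
    size_t              count;    /* number of mapped blocks */
};

/*
 * Translate a user-visible (uncompressed) offset into the compressed
 * extent that must be read and decompressed to serve it.  Returns the
 * entry, or NULL for a hole / out-of-range offset.
 */
static struct block_entry *
map_lookup(struct block_map *map, off_t user_offset)
{
    size_t block_no;

    if (user_offset < 0)
        return NULL;
    block_no = user_offset / LOGICAL_BLOCK_SIZE;
    if (block_no >= map->count)
        return NULL;
    return &map->entries[block_no];
}

Keeping the logical block size fixed makes the lookup O(1); only the compressed extents vary in length, which is exactly what makes overwrites and space management painful, as described below.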
The problem with the whole-file approach is *huge* latency on the first open of a file, and high server load re-compressing after close. This problem would be especially acute for small updates to large files, as e.g. some cloud provisioning tools will do to "personalize" virtual-machine images. For that reason, I think the block-wise approach is the only really usable option.
The problem with the block-wise approach is complexity. Space management is going to get more complicated as a less compressible version of a block is written over a more compressible version, forcing new space to be allocated at the physical EOF. Reporting the correct file size can also be surprisingly difficult. Lastly, this approach requires careful synchronization between reads and writes involving blocks that are in the process of being de/compressed. It can be done, but it would take significant effort.
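One common workaround for the size-reporting part of this is to record the logical (uncompressed) size explicitly, e.g. in an extended attribute, and patch it into stat results. A rough sketch using the standard Linux xattr calls; the attribute name here is made up for illustration:

#include <sys/stat.h>
#include <sys/types.h>
#include <sys/xattr.h>

/* Hypothetical attribute name; not an actual GlusterFS xattr. */
#define SIZE_XATTR "user.compress.logical-size"

/* Record the uncompressed size after writing compressed data. */
static int
store_logical_size(const char *path, off_t logical_size)
{
    return setxattr(path, SIZE_XATTR, &logical_size,
                    sizeof(logical_size), 0);
}

/* Fix up a stat result so callers see the uncompressed size. */
static int
stat_logical(const char *path, struct stat *st)
{
    off_t logical_size;

    if (stat(path, st) < 0)
        return -1;
    if (getxattr(path, SIZE_XATTR, &logical_size,
                 sizeof(logical_size)) == sizeof(logical_size))
        st->st_size = logical_size;  /* else fall back to physical size */
    return 0;
}

In a translator the same fix-up would presumably happen in the stat/lookup callbacks; the essential point is that the logical size has to be tracked explicitly somewhere, because the backend file's physical size no longer matches it.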
Given this complexity, you might be better off just running a normal GlusterFS on top of a local filesystem or block-device driver that does its own compression. Unfortunately, when I looked for such a thing the options seemed quite poor. That's a surprise, and maybe an opportunity for some enterprising person to write a dm-compress target (for example) that's usable by far more than just GlusterFS.
Hi Jeff, Can you give some more detail about your first (whole-file) approach?
We are now trying to implement a new translator directly on top of the "posix" translator: it compresses the data and then writes it to disk through "posix". In the "writev" function, however, we can only see and compress the current "struct iovec" array. How can we tell whether this write is the end of the file, i.e. whether this iovec is the last part of the file?
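For what it's worth, at the writev level there is no reliable signal that a given write is the last part of a file; neither POSIX nor the translator interface carries that information. One common pattern is therefore to compress fixed-size blocks independently, so no write ever needs to know whether it is the final one, and to compress any partial trailing block (buffered in per-fd context) when the flush/release fop arrives. A minimal sketch using zlib; compress2 and compressBound are real zlib calls, but everything else, including the block size, is an illustrative assumption:

#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <zlib.h>

#define BLOCK_SIZE (128 * 1024)  /* fixed uncompressed block size */

/*
 * Compress one full (or final, partial) logical block.  Because every
 * block is compressed independently, writev never needs to know whether
 * an iovec is the "last part" of the file: a short trailing block is
 * simply compressed the same way when the file is released.
 */
static int
compress_block(const unsigned char *in, size_t in_len,
               unsigned char **out, size_t *out_len)
{
    uLongf bound = compressBound(in_len);
    unsigned char *buf = malloc(bound);

    if (!buf)
        return -1;
    if (compress2(buf, &bound, in, in_len, Z_DEFAULT_COMPRESSION) != Z_OK) {
        free(buf);
        return -1;
    }
    *out = buf;
    *out_len = bound;
    return 0;
}

/* Gather an iovec array (as seen by writev) into a contiguous block. */
static size_t
gather_iovec(const struct iovec *vec, int count,
             unsigned char *block, size_t cap)
{
    size_t used = 0;
    int i;

    for (i = 0; i < count && used < cap; i++) {
        size_t n = vec[i].iov_len;
        if (n > cap - used)
            n = cap - used;
        memcpy(block + used, vec[i].iov_base, n);
        used += n;
    }
    return used;
}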