On 07/15/2011 12:05 AM, Jeff Darcy wrote:
On 07/14/2011 05:19 PM, Edward Shishkin wrote:
>> It doesn't take a POSIX expert to know that if nobody ever even
>> tried (**) to write past point X (or extend past X via f/truncate)
>> then no read should return data past X. Does that really require a
>> specific citation?
>
>
> I am sorry, but this is absolutely irrelevant to my question. Indeed,
> your citation says ((*) above):
>
> "if nobody ever even tried to .. extend past X via f/truncate... ",
>
> while in your example of "conflict" ((**) above) we see:
>
> "If we're extending the file..."
>
> Again, please describe a conflicting situation between read and
> write, or provide programmer documentation which shows that (**)
> is a real conflict.
It's really very simple. Consider an initially empty file.
(1) Client A tries to write 13 bytes.
(2) The write is padded to 16 bytes (cipher-block boundary) which
are written to the file - regardless of which way we're handling
EOF.
(3) File is truncated back down to 13, either by moving the last 3
bytes into an xattr and truncating, or by storing a "virtual EOF"
in an xattr.
If client B's read of 16 bytes is allowed to occur between (2) and (3),
it will read 16 bytes - three more than anyone ever tried to write,
three more than it should have gotten under POSIX.
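For concreteness, a minimal sketch of that sequence against the backend
file (plain POSIX calls; the constants, names and the bare ftruncate are
illustrative, not the actual translator code):

/* (2) pad the 13-byte application write out to one 16-byte cipher block;
 * (3) then bring the backend file back down to the real length (in the
 *     first variant, after stashing the last 3 bytes in an xattr). */
#include <string.h>
#include <unistd.h>

#define BLOCK 16                          /* cipher-block size */

static void padded_write(int fd, const char *buf, size_t len /* 13 */)
{
    char block[BLOCK] = {0};

    memcpy(block, buf, len);              /* 13 real bytes + 3 bytes padding */
    pwrite(fd, block, sizeof(block), 0);  /* backend file is now 16 bytes    */
    /* a 16-byte read served at this point sees 3 bytes nobody ever wrote */
    ftruncate(fd, len);                   /* back down to 13                 */
}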
Why do you suppose that B will read 16 bytes?
This is a logical mistake: nothing obligates
read() to behave that way. B will read *nothing*.
Yes, yes, the file is already 16 bytes long on the
local fs, but for B it is still empty, and this
scenario is POSIX-compliant.
I have already explained how to implement this:
it is enough to maintain the actual file size.
Just do it properly: update it after appending,
but before shrinking. Then there is no need for
inode locking or higher-level read serialization.
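A minimal sketch of that ordering (the xattr name and helpers here are
illustrative assumptions, not the actual "Handling EOBs" design):

#include <stdint.h>
#include <sys/types.h>
#include <sys/xattr.h>
#include <unistd.h>

#define SIZE_XATTR "trusted.crypt.actual-size"   /* hypothetical name */

static void set_actual_size(int fd, uint64_t size)
{
    fsetxattr(fd, SIZE_XATTR, &size, sizeof(size), 0);
}

/* Growing the file: write the padded data first and publish the new size
 * afterwards, so a concurrent reader sees at most the old, smaller size. */
static void grow(int fd, const char *block, size_t padded_len,
                 off_t off, uint64_t new_actual_size)
{
    pwrite(fd, block, padded_len, off);
    set_actual_size(fd, new_actual_size);
}

/* Shrinking the file: publish the new, smaller size first and truncate the
 * backend file afterwards, so a reader never sees past the new EOF. */
static void shrink(int fd, uint64_t new_actual_size, off_t padded_len)
{
    set_actual_size(fd, new_actual_size);
    ftruncate(fd, padded_len);
}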
Thanks,
Edward.
To prevent this, we have to defer the read. Holding a lock on the inode
solves the problem simply but not well. It cannot be emphasized enough
that this is a distributed filesystem. We cannot assume that our calls
"downward" will be served locally, so the lock durations would be large
and the server-failure scenarios difficult. Furthermore, if we do have
to wait for a response we will do so in an event-driven rather than
sequential fashion by returning through several levels and allowing the
transport code to run. When we return we might not be on either the same
thread or the same processor, which makes locking-based approaches
invalid. We must implement our own higher-level serialization.
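As a rough sketch only of what such higher-level serialization can look
like (the types and names are assumptions, not GlusterFS code): park
conflicting operations on a per-inode queue and release the next one from
the completion callback, so that no lock is held across a network round
trip.

#include <pthread.h>
#include <stddef.h>

struct op {
    void (*resume)(struct op *op);   /* continue the read/write here */
    struct op *next;
};

struct inode_queue {
    pthread_mutex_t lock;            /* held only for queue surgery   */
    struct op *head, *tail;
    int busy;                        /* an operation is in flight     */
};

/* Request path: run immediately if the inode is idle, otherwise park. */
static void submit(struct inode_queue *q, struct op *op)
{
    int run_now;

    pthread_mutex_lock(&q->lock);
    run_now = !q->busy;
    if (run_now) {
        q->busy = 1;
    } else {
        op->next = NULL;
        if (q->tail)
            q->tail->next = op;
        else
            q->head = op;
        q->tail = op;
    }
    pthread_mutex_unlock(&q->lock);

    if (run_now)
        op->resume(op);
}

/* Completion callback, possibly on a different thread or processor. */
static void complete(struct inode_queue *q)
{
    struct op *next;

    pthread_mutex_lock(&q->lock);
    next = q->head;
    if (next) {
        q->head = next->next;
        if (!q->head)
            q->tail = NULL;
    } else {
        q->busy = 0;
    }
    pthread_mutex_unlock(&q->lock);

    if (next)
        next->resume(next);
}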
> And where will it be lost? IMHO we can maintain actual size in the
> private part of inode (see "Handling EOBs" document for details). In
> particular this can be considered as "correct st_size".
Every translator invocation has a cost - too high a cost, IMO, but
that's between me and the Gluster developers. Every such invocation on
the main read or write paths means those paths will be slower.
> For now I see only attempts from your side to justify a non-existent
> problem by referring to completely irrelevant documentation.
...and here I thought we were all on the same side. Silly me. For my
part, all I'm seeing is attempts from you to *reject* a real and
provably working solution to a real problem based on ignoring any issues
you don't understand. Try writing some code that actually does this
well enough to pass some tests, and you'll understand the problems better.
>>> One more reason to adhere to this approach is supporting HMACs for
>>> authentication: where are we going to store them? In xattrs? IMHO
>>> that is not a serious option. They should be placed in the file's
>>> body right after the respective atoms.
>>
>> A moment ago you were complaining about fragmentation and
>> performance; now you want to intersperse data with metadata at a
>> 16-byte granularity?
>
> I'd say it has the status of data, not metadata. I wouldn't mix these
> storage classes (see the "Handling EOBs" document for details).
>
> And yes, I believe I am pretty consistent here: every read of a file
> interspersed with HMACs will issue a contiguous sequence of requests
> (assuming that the file is defragmented in the local fs). We just
> need to perform offset translation:
>
> new_off = off + (off >> atom_bits << hmac_bits);
>
> I don't think it is a very expensive operation on any architectures
> supported by Red Hat.
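As a worked example of that translation (4 KiB atoms and 16-byte HMACs
are assumptions here for illustration, not figures from the design
document):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const unsigned atom_bits = 12;   /* 4096-byte atom */
    const unsigned hmac_bits = 4;    /* 16-byte HMAC   */
    uint64_t off = 10000;            /* logical offset, inside the third atom */

    /* every full atom before 'off' shifts the on-disk data 16 bytes right */
    uint64_t new_off = off + ((off >> atom_bits) << hmac_bits);

    /* 10000 >> 12 == 2 full atoms, 2 << 4 == 32, so new_off == 10032 */
    printf("logical %llu -> on-disk %llu\n",
           (unsigned long long)off, (unsigned long long)new_off);
    return 0;
}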
I think you're understating the performance cost of allocating a new
buffer, then copying piece-wise between the "multiplexed" buffer
(containing data plus metadata) and the "demultiplexed" one (containing
only data). You're completely *ignoring* the issue of needing to extend
the GlusterFS wire protocol to accommodate all of this - something I
have repeatedly pointed out is not an option for us at this time. It's
all easy if you don't expect to do it yourself, I suppose.
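A rough sketch of the piece-wise copy being priced here, with the same
illustrative atom/HMAC sizes as in the example above (not GlusterFS
code):

#include <stddef.h>
#include <string.h>

#define ATOM_SIZE 4096
#define HMAC_SIZE 16

/* Copy 'natoms' atoms of pure data out of 'muxed' (data and HMACs
 * interleaved) into 'plain' (data only): one memcpy per atom, on top of
 * having to allocate the second buffer in the first place. */
static void demux(char *plain, const char *muxed, size_t natoms)
{
    size_t i;

    for (i = 0; i < natoms; i++)
        memcpy(plain + i * ATOM_SIZE,
               muxed + i * (ATOM_SIZE + HMAC_SIZE),
               ATOM_SIZE);
}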