HekaFS encryption layer: managing requests on the client and server sides, locking protocol
by Edward Shishkin
HekaFS encryption layer:
Managing requests on the client and server sides
Shared and exclusive accesses.
Accumulating events.
DRAFT
A transparent encryption layer introduces specific problems inherent to
stackable file systems. One such problem is conflicting
read-modify-write requests; see (*) below for details.
This is further complicated by the "hole problem" (see (**)
below) and by the EOF (end-of-file) problem specific to CBC
encryption mode, which is the preferred mode in our case.
All the problems above must be resolved with the active participation
of a special server-side manager, so we introduce a special xlator of
"features" type called, for historical reasons, "oplock".
A client issues (submits) read/write requests. That is, the client
informs the oplock xlator that it wants to read/write a chunk of data
[offset, offset + count] and asks for access to disk. The oplock
xlator, in turn, grants or refuses access to disk.
If access has not been granted, then oplock puts the request
into a specially maintained database and provides the client with a
unique request-id, so that oplock can find that request in the
database and check its status when the client asks for access
again.
So we distinguish the following types of requests:
FIRST_REQ_TYPE /* primary request: the client asks for access for
* the first time */
REPEAT_REQ_TYPE /* repeated request: the client has already asked for
* access, oplock refused and provided the client
* with a request-id.
* When making a repeated request the client needs
* to pass this request-id to the oplock xlator */
Oplock (the server-side manager) grants access of 2 types:
. shared access
. exclusive access
For any request oplock must grant exclusive or shared access
within a finite stretch of time (oplock periodically updates the
"local priority" of every request in the respective databases).
For every file oplock maintains the "File Size Of the Future" (FSOF).
FSOF is initialized as ia_size via ->fstat() at open() time.
Then it gets updated in a special way depending on the arriving
requests.
For every arrived request of FIRST_REQ_TYPE the oplock xlator assigns
one of the following statuses:
. OVRWR /* overwrite, if the request doesn't change the current FSOF */
. APPTRUNC /* append-or-truncate, if the request changes the current
* FSOF */
That is, a write request (off, count) is OVRWR if
(off + count <= FSOF), and APPTRUNC otherwise.
Every (expanding or shrinking) truncate request is APPTRUNC.
For every arrived APPTRUNC request the oplock xlator updates FSOF
as follows:
. FSOF = off + count (in the case of appending writes);
. FSOF = off (in the case of truncates).
The oplock xlator doesn't update FSOF for arrived OVRWR requests.
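The classification and FSOF-update rules above fit into a small
standalone sketch. The types (req_t, the is_truncate flag) are
illustrative assumptions, not the actual oplock code:

#include <stdint.h>
#include <stdbool.h>

typedef enum { OVRWR, APPTRUNC } req_status_t;

typedef struct {
    uint64_t off;
    uint64_t count;
    bool     is_truncate;
} req_t;

/* Classify a primary request against the current FSOF. */
static req_status_t classify(const req_t *r, uint64_t fsof)
{
    if (r->is_truncate)
        return APPTRUNC;            /* any truncate changes FSOF */
    if (r->off + r->count <= fsof)
        return OVRWR;               /* doesn't change FSOF */
    return APPTRUNC;                /* appending write */
}

/* Update FSOF for an APPTRUNC request; OVRWR leaves FSOF intact. */
static uint64_t update_fsof(const req_t *r, uint64_t fsof)
{
    if (classify(r, fsof) != APPTRUNC)
        return fsof;
    return r->is_truncate ? r->off : r->off + r->count;
}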
For every file the oplock xlator maintains a Common Set of Locks
(CSL), which includes a pair of the following databases:
. a subset of exclusive locks (ESL);
. a subset of shared locks (SSL).
ESL is a queue. Every element of this queue represents a pending
request which waits for exclusive access to the file. This
element contains a record: a request-id unique within the whole CSL.
SSL is an rb-tree of extents. Every such extent (off, count) points
to a queue of pending requests which want to write to the interval
(off, count). Every such request contains a request-id unique within
the whole CSL.
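For illustration, the per-file state described here (together with
the counters introduced below) could be laid out roughly as follows;
all field names are assumptions, not the real HekaFS structures:

#include <stdint.h>

struct pending_req {
    uint64_t            id;     /* unique within the whole CSL */
    struct pending_req *next;   /* FIFO queue link */
};

struct ssl_extent {             /* node of the SSL rb-tree */
    uint64_t            off;
    uint64_t            len;
    struct pending_req *queue;  /* waiters for (off, len),
                                   ordered by request-id */
    /* rb-tree linkage omitted */
};

struct csl {
    struct pending_req *esl;          /* queue of exclusive waiters */
    struct ssl_extent  *ssl_root;     /* rb-tree of shared extents */
    uint32_t            nr_esl;
    uint32_t            nr_ssl;
    uint64_t            next;         /* id to be served next */
    uint32_t            nr_writeback; /* granted, still in progress */
    int                 writeback;    /* CSL_WRITEBACK flag */
    uint64_t            fsof;         /* File Size Of the Future */
};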
If a primary arrived request is APPTRUNC, then the oplock xlator
assigns it a unique id as the next non-busy serial number and puts the
request into the ESL queue.
If a primary arrived request (off, len) is OVRWR, then the oplock
xlator assigns it a unique id as the next non-busy serial number and
puts this request into SSL by the following steps:
. find all extents in SSL overlapping with (off, len);
. replace all those extents and (off, len) with a single one and
merge all their queues properly (in the resulting queue requests
must be ordered by request-id).
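A minimal sketch of the merge step, with a plain array standing in
for the rb-tree and the queue splicing elided (the array is assumed
to have room for one extra extent):

#include <stdint.h>

struct extent { uint64_t off, len; /* waiter queue omitted */ };

static int overlaps(const struct extent *e, uint64_t off, uint64_t len)
{
    return e->off < off + len && off < e->off + e->len;
}

/* Replace every extent overlapping (off, len), plus (off, len)
 * itself, with their common hull; returns the new extent count. */
static int ssl_insert(struct extent *set, int n, uint64_t off,
                      uint64_t len)
{
    uint64_t lo = off, hi = off + len;
    int m = 0;
    for (int i = 0; i < n; i++) {
        if (overlaps(&set[i], off, len)) {
            if (set[i].off < lo)
                lo = set[i].off;
            if (set[i].off + set[i].len > hi)
                hi = set[i].off + set[i].len;
            /* real code would splice set[i]'s queue, keeping
             * the requests ordered by request-id */
        } else {
            set[m++] = set[i];
        }
    }
    set[m].off = lo;
    set[m].len = hi - lo;
    return m + 1;
}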
All requests are sent to the oplock xlator by clients via
->fgetxattr(). Offset, count, request-id, etc. are encoded into the
"name" of an extended attribute prefixed with a special magic string.
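The exact wire format is not specified here; a hypothetical encoding,
with an invented magic prefix and field layout, might look like:

#include <stdio.h>
#include <stdint.h>

/* Assumed magic prefix; the real one is defined by the oplock code. */
#define OPLOCK_XATTR_MAGIC "trusted.hekafs.oplock"

static int encode_req_name(char *buf, size_t size, uint64_t off,
                           uint64_t count, uint64_t id, int repeat)
{
    /* FIRST_REQ_TYPE carries no id; REPEAT_REQ_TYPE passes it back */
    return snprintf(buf, size, "%s:%llu:%llu:%llu:%d",
                    OPLOCK_XATTR_MAGIC,
                    (unsigned long long)off,
                    (unsigned long long)count,
                    (unsigned long long)id, repeat);
}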
Checking global priorities
For every file the oplock xlator maintains NEXT, the id of the request
which must get access (exclusive or shared) next. The value of NEXT
is initialized to zero, as the first request-id is always zero.
NEXT is incremented by the oplock xlator every time access is granted
to a secondary (i.e. repeated) request.
Also for every file the oplock xlator maintains NR_WRITEBACK, the
number of requests which have been granted access and are currently
in progress. NR_WRITEBACK is incremented every time access is granted
to some request. ->writev_cbk() decrements the NR_WRITEBACK counter
and drops the CSL_WRITEBACK flag if the counter has become 0.
For every arrived secondary request oplock checks its global priority:
if its id coincides with NEXT, then the arrived request has the highest
global priority. Otherwise, if it is larger than NEXT, it has low
global priority.
An arrived secondary request cannot have an id smaller than NEXT.
For each set of locks (ESL and SSL) the oplock xlator maintains the
number of its elements (NR_ESL and NR_SSL).
Handling primary requests by oplock xlator
If a primary APPTRUNC request has arrived, the common set of locks
(CSL) is empty (NR_ESL + NR_SSL == 0) and the CSL_WRITEBACK flag is not
set, then oplock
. updates FSOF;
. sets the CSL_WRITEBACK flag;
. grants exclusive access to the client.
In this case no requests are put into the CSL (a kind of optimization).
If a primary OVRWR request has arrived, or a primary APPTRUNC request
has arrived and CSL_WRITEBACK is set, or the common set of locks (CSL)
is not empty, then the oplock xlator
. updates FSOF (for an APPTRUNC request);
. assigns a request-id;
. puts the request into the respective set (ESL or SSL).
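Putting the two cases together, the primary-request path might look
like the following sketch (illustrative types, hypothetical helpers,
not the actual xlator code):

#include <stdint.h>

typedef enum { OVRWR2, APPTRUNC2 } rstatus_t;
enum grant { GRANTED_EXCL, QUEUED_WITH_ID };

struct csl_state {              /* per-file state, as sketched above */
    uint32_t nr_esl, nr_ssl, nr_writeback;
    int      writeback;         /* CSL_WRITEBACK */
    uint64_t fsof;
};

struct request { uint64_t off, count, id; rstatus_t status; };

/* Hypothetical helpers, standing in for the real database code. */
uint64_t next_free_id(struct csl_state *c);
void     update_fsof_for(struct csl_state *c, struct request *r);
void     enqueue(struct csl_state *c, struct request *r);

static enum grant handle_primary(struct csl_state *c, struct request *r)
{
    if (r->status == APPTRUNC2 && c->nr_esl + c->nr_ssl == 0 &&
        !c->writeback) {
        update_fsof_for(c, r);      /* fast path: nothing is queued */
        c->writeback = 1;
        c->nr_writeback++;          /* access granted */
        return GRANTED_EXCL;
    }
    if (r->status == APPTRUNC2)
        update_fsof_for(c, r);
    r->id = next_free_id(c);        /* next non-busy serial number */
    enqueue(c, r);                  /* ESL (APPTRUNC) or SSL (OVRWR) */
    return QUEUED_WITH_ID;          /* client retries with r->id */
}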
Handling repeated requests by oplock xlator
For any arrived secondary request the oplock xlator checks its global
priority (see above).
If the request has low global priority, then EBUSY is returned.
Suppose the arrived request (off, len) has the highest global priority.
If the CSL_WRITEBACK flag is not set, then there are no executing
requests (NR_WRITEBACK must be 0 in this case), so oplock
. grants the respective access to the arrived request;
. sets the CSL_WRITEBACK flag;
. increments the NR_WRITEBACK counter;
. increments NEXT.
If the CSL_WRITEBACK flag is set and the arrived request is APPTRUNC,
then an exclusive lock cannot be granted and the oplock xlator returns
EBUSY.
If the CSL_WRITEBACK flag is set and the arrived request is OVRWR, then
oplock checks NR_ESL.
If NR_ESL != 0, then an exclusive access is held (in this case
NR_ESL must be 1), so we cannot grant shared access and oplock
returns EBUSY.
Else, if NR_ESL == 0, then shared accesses are held and the oplock
xlator looks for the respective extent in the SSL rb-tree and checks
the local priority of the request in the respective queue.
If the request has the highest local priority, then oplock
grants shared access and
. increments the NR_WRITEBACK counter;
. increments NEXT.
Else, if the request has low local priority, then the oplock xlator
returns EBUSY.
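The whole decision tree for a repeated request fits into a short
sketch (same caveat: illustrative types, and a hypothetical helper
for the local-priority check):

#include <errno.h>
#include <stdint.h>

typedef enum { OVRWR3, APPTRUNC3 } rstatus3_t;
enum grant3 { GRANTED_EXCL3 = 1, GRANTED_SHARED3 = 2 };

struct csl3 {
    uint32_t nr_esl, nr_writeback;
    int      writeback;         /* CSL_WRITEBACK */
    uint64_t next;              /* id to be served next */
};

/* Hypothetical: is the request at the head of its SSL queue? */
int highest_local_priority(struct csl3 *c, uint64_t id);

static int handle_repeat(struct csl3 *c, uint64_t id, rstatus3_t st)
{
    if (id != c->next)
        return -EBUSY;          /* low global priority */
    if (!c->writeback) {        /* no executing requests */
        c->writeback = 1;
        c->nr_writeback++;
        c->next++;
        return st == APPTRUNC3 ? GRANTED_EXCL3 : GRANTED_SHARED3;
    }
    if (st == APPTRUNC3)
        return -EBUSY;          /* exclusive access unavailable */
    if (c->nr_esl != 0)         /* an exclusive access is held */
        return -EBUSY;
    if (!highest_local_priority(c, id))
        return -EBUSY;          /* low local priority */
    c->nr_writeback++;
    c->next++;
    return GRANTED_SHARED3;
}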
With every granted exclusive access oplock returns the file size
encoded in the xattr value, so that the client can perform the proper
hole conversion. This is a kind of optimization: the client could also
find the file size by calling ->fstat().
->writev_cbk() removes the completed request from the respective queue
of ESL or SSL. If the respective queue of SSL has become empty, then
the respective extent is removed from the rb-tree.
Comments
Exclusive locks are needed to avoid holes on local file systems
(see (**)): we need to make sure that nobody except us will change the
file size.
Example of a conflict between appending writes:
Suppose the file is 20K.
Process A performs ->writev(size = 10K, off = 20K);
Process B performs ->writev(size = 10K, off = 30K);
Process A checks the file size (20K);
Process B checks the file size (20K);
Process A writes 10K bytes from offset 20K;
Process B converts a 10K "hole" at offset 20K;
Process B writes from offset 30K.
As a result we'll have an unexpected 10K of zeros at offset 20K.
Obviously "shared" locks don't work here, even though the writes go
to disjoint intervals.
Such conflicts don't take place in local file systems, which
update hole metadata "in place".
Some improvements of this locking protocol are possible.
A non-precise APPTRUNC request (with offset < file size) can be split
into a precise APPTRUNC request and an OVRWR request to increase the
number of requests that can be handled in parallel. However, this will
require additional support on the server and client sides.
In the degenerate case (of unbounded merges) a shared lock acts like
an exclusive one.
------------------------------------------------------------------------------
(*) http://lists.gnu.org/archive/html/gluster-devel/2011-05/msg00002.html
(**)
http://fedorahosted.org/pipermail/cloudfs-devel/2011-November/000172.html
All comments, suggestions are welcome.
Edward.
Ordered writes in HekaFS encryption layer
by Edward Shishkin
Ordered writes in HekaFS encryption layer
DRAFT
A transparent encryption layer introduces specific problems
inherent to stackable file systems (i.e. intermediary layers
between the user and the local fs). One such problem is that such
layers are not aware of the local file system metadata that
indicates "holes". So if such holes exist, they will be
presented to the user as "garbage" (a decrypted set of zeros),
and this would mean POSIX non-compliance.
The only reasonable way for us is to not allow holes on the local
fs, i.e. to detect every moment of hole creation and mandatorily
convert the hole to a set of (encrypted) zeros.
A hole is created every time the local file system is asked
to write from an offset which is larger than the file size. So the
first idea is to compare the file size and the offset that the user
wants to write from. If the offset is larger than the file size, we
convert the hole before the write. However, it is not enough to only
follow the user's instructions.
The encryption layer writes data in chunks (usually of atom size).
This means that in the common case we (the encryption layer) must
split a user request into many chunks and write them separately.
This is because:
1) Linux VFS doesn't accept too large chunks (a write of a chunk
larger than MAX_INT will be incomplete, and we cannot allow
such "truncation" in the encryption layer);
2) splitting large writes will improve things in the case of
concurrent access, as writing to different parts of a file
requires acquiring different "shared locks".
However, splitting writes without any additional effort from the
encryption layer is prone to creating short-lived holes on the
local fs. For example, the user asks to append 20K to a 10K file.
Suppose we write in 4K chunks and the first chunk that hits the
local fs has offset 12K. This means that a 2K hole will be created
on the local fs.
We need to avoid such short-lived holes in spite of their
short lifespan: after a system crash we'll be left with
persistent holes (and everything will look consistent from the
standpoint of the local fs).
We avoid such short-lived holes by using a so-called ordering
technique: the encryption layer provides a guarantee that any
"appending" sequence of requests will be written in an ordered
fashion.
Glossary
---------
A chunk of data is a sequence of (logical) bytes
B = {b1, b2, ..., bm} in a file at some offset off. For every
chunk B we'll denote offset(B) = off, size(B) = m.
A request is an order for the local fs to write some chunk of
data (see above).
To submit a request means to ask an upper server-side manager
(the oplock xlator in our case) to write the respective chunk of
data.
A sequence of requests {R0, R1, ..., Rn} is any sequence of
chunks such that offset(R_i) + size(R_i) == offset(R_(i+1)).
Request R_i is the direct parent of R_(i+1). Request R_s (s < i)
is an indirect parent of R_(i+1).
A sequence of requests {R0, R1, ..., Rn} is appending iff
offset(R_i) > file_size for some i, 0 <= i <= n.
In particular, an appending sequence changes the file size.
A sequence of requests is overwriting iff it is not appending.
An appending sequence is minimal iff offset(R0) > file_size.
Lemma
--------
Every sequence can be split into an overwriting sub-sequence and a
minimal appending sub-sequence.
So we split every sequence of requests into 2 sub-sequences
(an overwriting one and an appending one). An overwriting
sub-sequence is written in a parallel fashion. An appending
sub-sequence is written in an ordered fashion (see below for
definitions).
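A compileable sketch of the split, following the strict definition
above (the minimal appending sub-sequence starts at the first chunk
whose offset exceeds the file size); the chunk type is illustrative:

#include <stdint.h>

struct chunk { uint64_t off, size; };

/* Returns the first index i with offset(R_i) > file_size, i.e.
 * where the minimal appending sub-sequence starts; returns n if
 * the whole sequence is overwriting. Chunks are assumed to be
 * contiguous, as in the definition of a sequence. */
static int split_point(const struct chunk *r, int n, uint64_t file_size)
{
    for (int i = 0; i < n; i++)
        if (r[i].off > file_size)
            return i;
    return n;
}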
Every sequence has
. a block of HEAD_ATOM type (<= 1),
. a block of TAIL_ATOM type (<= 1),
. blocks of FULL_ATOM type (>= 0).
We define a linear order on the set of blocks of any sequence by
the following rule:
(A < B) iff (offset(A) < offset(B)).
All requests {R1, R2, ...} of an appending sequence are written in
an ordered ("parent first") fashion. This means that:
A1. On the client side
R_(i+1) is written by the callback function ->writev_cbk()
of the ->writev() spawned to write its direct parent (R_i).
Since we acquire exclusive access to write the whole
appending sequence, all its requests are written immediately
in an ordered fashion (we don't ask the server-side manager to
write a separate R_j). See do_ordered_submit().
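A toy model of this "parent first" chaining, with a direct call
standing in for the asynchronous completion (the real logic lives in
do_ordered_submit() and ->writev_cbk()):

#include <stdint.h>
#include <stdio.h>

struct chunk { uint64_t off, size; };

struct ordered_ctx {
    const struct chunk *seq;   /* the appending sequence R0..R(n-1) */
    int                 n;
    int                 cur;   /* cursor: next chunk to submit */
};

static void submit_write(struct ordered_ctx *c);

/* Completion callback of R_i: submits its direct child R_(i+1). */
static void writev_cbk(struct ordered_ctx *c)
{
    if (c->cur < c->n)
        submit_write(c);
}

static void submit_write(struct ordered_ctx *c)
{
    const struct chunk *r = &c->seq[c->cur++];
    printf("write off=%llu size=%llu\n",
           (unsigned long long)r->off, (unsigned long long)r->size);
    writev_cbk(c);             /* stands in for async completion */
}

int main(void)
{
    const struct chunk seq[] =
        { {8192, 4096}, {12288, 4096}, {16384, 4096} };
    struct ordered_ctx c = { seq, 3, 0 };
    submit_write(&c);          /* writes R0, R1, R2 strictly in order */
    return 0;
}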
B1. On the server side
A special server-side manager (the oplock xlator) queues requests
and grants (or declines) exclusive access to write the whole
appending sequence.
All requests {R1, R2, ...} of an overwriting sequence are written
in a parallel fashion. This means that:
A2. On the client side
We submit all R_j in a loop (see do_parallel_submit), i.e. for
every request R_j we ask the server-side manager (the oplock xlator)
for "shared access". If the shared access is not granted, then
we try again.
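Schematically (ask_shared_access() is a hypothetical stand-in for the
fgetxattr-based exchange with the oplock xlator):

#include <errno.h>
#include <stdint.h>

struct chunk { uint64_t off, size; };

/* Hypothetical: one round-trip to the oplock xlator; fills *req_id
 * on refusal, returns 0 when shared access is granted, -EBUSY
 * otherwise. */
int ask_shared_access(const struct chunk *r, uint64_t *req_id);

static void do_parallel_submit_sketch(const struct chunk *seq, int n)
{
    for (int i = 0; i < n; i++) {
        uint64_t id = 0;
        while (ask_shared_access(&seq[i], &id) == -EBUSY)
            ;                  /* repeated request passes id back */
        /* shared access granted: write seq[i] */
    }
}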
B2. On the server side
A special server-side manager (the oplock xlator) queues requests
and grants (or declines) shared access to write a separate request
R_j of an overwriting sequence. (Definitions of exclusive and shared
access, and the policy of their granting, will be given separately.)
This technique allows us to simplify things (i.e. to not involve
additional sorting machinery on the server side).
Implementation details
----------------------
The order HEAD_ATOM < FULL_BLOCK_ATOM < TAIL_ATOM is hardcoded
(see the function do_ordered_submit). The order on blocks of the
same FULL_BLOCK_ATOM type is provided by maintaining a special
cursor in the local area (see crypt_local_t, avec_config).
Recap
-----
We ask for exclusive access for the whole appending sequence.
Once it is granted, all requests of the sequence are written
one-by-one in an ordered fashion.
All requests of any overwriting sequence are submitted in a
parallel fashion. We ask for shared access for every separate
request of an overwriting sequence.
All comments, suggestions are welcome.
Edward.
[RFC patch hekafs] Do not return -1 from STACK_WIND targets ever
by Pete Zaitcev
My hekafs server was misconfigured and some functions failed in places
where they normally would not, and that uncovered a problem: whenever
that happened, the client would hang. This happens because we cannot
return errors like normal people do in the kernel, by throwing an
error code. Functions called through STACK_WIND must return zero and
report the error through the callback.
Not sure what errno to set in this case. Generally I just want to
indicate that "something is busted". In the past I would return EDOM.
The ENOSPC seems ridiculous enough here.
Comments?
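For reference, here is the shape of the fix as applied throughout the
patch, reconstructed from the uidmap_stat hunk below (the STACK_WIND
arguments follow the usual GlusterFS convention and are not part of
the quoted diff; the GlusterFS xlator headers are required):

/* On failure, report the error through the callback and return 0,
 * so the frame is always unwound; returning -1 leaves the client
 * hanging. */
int32_t
uidmap_stat(call_frame_t *frame, xlator_t *this, loc_t *loc)
{
        if ((*uidmap_map)(frame->root, this->name) == -1) {
                STACK_UNWIND_STRICT(stat, frame, -1, ENOSPC, NULL);
                return 0;       /* error delivered via the callback */
        }
        STACK_WIND(frame, uidmap_stat_cbk, FIRST_CHILD(this),
                   FIRST_CHILD(this)->fops->stat, loc);
        return 0;
}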
---
commit f8d58238c449568e05f71bd29816790e4a38ba4c
Author: Pete Zaitcev <zaitcev(a)yahoo.com>
Date: Wed Nov 9 16:16:27 2011 -0700
Prevent a hang in case of errors in fops functions.
diff --git a/xlators/features/uidmap/src/uidmap.c b/xlators/features/uidmap/src/uidmap.c
index 91104fd..191fb57 100644
--- a/xlators/features/uidmap/src/uidmap.c
+++ b/xlators/features/uidmap/src/uidmap.c
@@ -34,6 +34,9 @@
#include "common-utils.h"
#include "uidmap.h"
+/* ugly #includes below */
+#include <errno.h>
+
#define HFS_UID_LOW_DEFAULT 10000
#define HFS_UID_HIGH_DEFAULT 19999
#define HFS_GID_LOW_DEFAULT 10000
@@ -1350,7 +1353,8 @@ uidmap_entrylk(call_frame_t *frame, xlator_t *this,
((type == ENTRYLK_RDLCK) ? "ENTRYLK_RDLCK" : "ENTRYLK_WRLCK"));
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(entrylk, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_entrylk_cbk,
@@ -1373,7 +1377,8 @@ uidmap_fentrylk(call_frame_t *frame, xlator_t *this,
((type == ENTRYLK_RDLCK) ? "ENTRYLK_RDLCK" : "ENTRYLK_WRLCK"));
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(entrylk, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_fentrylk_cbk,
@@ -1389,7 +1394,8 @@ uidmap_inodelk(call_frame_t *frame, xlator_t *this, const char *volume,
loc_t *loc, int32_t cmd, struct gf_flock *flock)
{
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(inodelk, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_inodelk_cbk,
@@ -1409,7 +1415,8 @@ uidmap_finodelk(call_frame_t *frame, xlator_t *this, const char *volume,
frame->root->unique, volume, fd, cmd);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(finodelk, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_finodelk_cbk,
@@ -1429,7 +1436,8 @@ uidmap_xattrop(call_frame_t *frame, xlator_t *this, loc_t *loc,
frame->root->unique, loc->path, loc->inode->ino, flags);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(xattrop, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_xattrop_cbk,
@@ -1450,7 +1458,8 @@ uidmap_fxattrop(call_frame_t *frame, xlator_t *this, fd_t *fd,
frame->root->unique, fd, flags);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(fxattrop, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_fxattrop_cbk,
@@ -1473,7 +1482,9 @@ uidmap_lookup(call_frame_t *frame, xlator_t *this,
loc->inode->ino);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(lookup, frame, -1, ENOSPC,
+ NULL, NULL, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_lookup_cbk,
@@ -1493,7 +1504,8 @@ uidmap_stat(call_frame_t *frame, xlator_t *this, loc_t *loc)
frame->root->unique, loc->path, loc->inode->ino);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(stat, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_stat_cbk,
@@ -1514,7 +1526,8 @@ uidmap_rchecksum(call_frame_t *frame, xlator_t *this, fd_t *fd,
frame->root->unique, fd->inode->ino);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(rchecksum, frame, -1, ENOSPC, 0, 0);
+ return 0;
}
STACK_WIND(frame, uidmap_rchecksum_cbk,
@@ -1535,7 +1548,8 @@ uidmap_getspec(call_frame_t *frame, xlator_t *this, const char *key,
frame->root->unique, key, flag);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(getspec, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_getspec_cbk,
@@ -1555,7 +1569,8 @@ uidmap_readlink(call_frame_t *frame, xlator_t *this, loc_t *loc, size_t size)
frame->root->unique, loc->path, loc->inode->ino, size);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(readlink, frame, -1, ENOSPC, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_readlink_cbk,
@@ -1576,7 +1591,9 @@ uidmap_mknod(call_frame_t *frame, xlator_t *this, loc_t *loc,
frame->root->unique, loc->path, loc->inode->ino, mode, dev);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(mknod, frame, -1, ENOSPC, NULL, NULL,
+ NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_mknod_cbk,
@@ -1598,7 +1615,9 @@ uidmap_mkdir(call_frame_t *frame, xlator_t *this, loc_t *loc, mode_t mode,
((loc->inode)? loc->inode->ino : 0), mode);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(mkdir, frame, -1, ENOSPC,
+ NULL, NULL, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_mkdir_cbk,
@@ -1617,7 +1636,8 @@ uidmap_unlink(call_frame_t *frame, xlator_t *this, loc_t *loc)
frame->root->unique, loc->path, loc->inode->ino);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(unlink, frame, -1, ENOSPC, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_unlink_cbk,
@@ -1636,7 +1656,8 @@ uidmap_rmdir(call_frame_t *frame, xlator_t *this, loc_t *loc, int flags)
frame->root->unique, loc->path, loc->inode->ino, flags);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(rmdir, frame, -1, ENOSPC, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_rmdir_cbk,
@@ -1658,7 +1679,9 @@ uidmap_symlink(call_frame_t *frame, xlator_t *this, const char *linkpath,
((loc->inode)? loc->inode->ino : 0));
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(symlink, frame, -1, ENOSPC,
+ NULL, NULL, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_symlink_cbk,
@@ -1680,7 +1703,9 @@ uidmap_rename(call_frame_t *frame, xlator_t *this, loc_t *oldloc, loc_t *newloc)
newloc->path, newloc->ino);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(rename, frame, -1, ENOSPC,
+ NULL, NULL, NULL, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_rename_cbk,
@@ -1702,7 +1727,9 @@ uidmap_link(call_frame_t *frame, xlator_t *this, loc_t *oldloc, loc_t *newloc)
newloc->path, newloc->inode->ino);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(link, frame, -1, ENOSPC,
+ NULL, NULL, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_link_cbk,
@@ -1727,7 +1754,9 @@ uidmap_setattr(call_frame_t *frame, xlator_t *this, loc_t *loc,
scratch_cs.gid = stbuf->ia_gid;
if (((*uidmap_map)(frame->root, this->name) == -1) ||
((*uidmap_map)(&scratch_cs, this->name) == -1)) {
- return -1;
+ STACK_UNWIND_STRICT(setattr, frame, -1, ENOSPC,
+ NULL, NULL);
+ return 0;
}
stbuf->ia_uid = scratch_cs.uid;
stbuf->ia_gid = scratch_cs.gid;
@@ -1756,7 +1785,9 @@ uidmap_fsetattr(call_frame_t *frame, xlator_t *this, fd_t *fd,
scratch_cs.gid = stbuf->ia_gid;
if (((*uidmap_map)(frame->root, this->name) == -1) ||
((*uidmap_map)(&scratch_cs, this->name) == -1)) {
- return -1;
+ STACK_UNWIND_STRICT(fsetattr, frame, -1, ENOSPC,
+ NULL, NULL);
+ return 0;
}
stbuf->ia_uid = scratch_cs.uid;
stbuf->ia_gid = scratch_cs.gid;
@@ -1780,7 +1811,8 @@ uidmap_truncate(call_frame_t *frame, xlator_t *this, loc_t *loc,
frame->root->unique, loc->path, loc->inode->ino, offset);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(truncate, frame, -1, ENOSPC, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_truncate_cbk,
@@ -1803,7 +1835,8 @@ uidmap_open(call_frame_t *frame, xlator_t *this, loc_t *loc,
fd, wbflags);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(open, frame, -1, ENOSPC, 0);
+ return 0;
}
STACK_WIND(frame, uidmap_open_cbk,
@@ -1823,7 +1856,9 @@ uidmap_create(call_frame_t *frame, xlator_t *this, loc_t *loc,
frame->root->unique, loc->path, loc->inode->ino, flags, mode);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(create, frame, -1, ENOSPC,
+ 0, NULL, NULL, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_create_cbk,
@@ -1843,7 +1878,9 @@ uidmap_readv(call_frame_t *frame, xlator_t *this, fd_t *fd,
frame->root->unique, fd, size, offset);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(readv, frame, -1, ENOSPC,
+ NULL, 0, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_readv_cbk,
@@ -1864,7 +1901,8 @@ uidmap_writev(call_frame_t *frame, xlator_t *this, fd_t *fd,
frame->root->unique, fd, vector, count, offset);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(writev, frame, -1, ENOSPC, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_writev_cbk,
@@ -1884,7 +1922,8 @@ uidmap_statfs(call_frame_t *frame, xlator_t *this, loc_t *loc)
((loc->inode)? loc->inode->ino : 0));
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(statfs, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_statfs_cbk,
@@ -1902,7 +1941,8 @@ uidmap_flush(call_frame_t *frame, xlator_t *this, fd_t *fd)
"%"PRId64": (*fd=%p)", frame->root->unique, fd);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(flush, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_flush_cbk,
@@ -1920,7 +1960,8 @@ uidmap_fsync(call_frame_t *frame, xlator_t *this, fd_t *fd, int32_t flags)
"%"PRId64": (flags=%d, *fd=%p)", frame->root->unique, flags, fd);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(fsync, frame, -1, ENOSPC, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_fsync_cbk,
@@ -1941,7 +1982,8 @@ uidmap_setxattr(call_frame_t *frame, xlator_t *this,
((loc->inode)? loc->inode->ino : 0), dict, flags);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(setxattr, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_setxattr_cbk,
@@ -1962,7 +2004,8 @@ uidmap_getxattr(call_frame_t *frame, xlator_t *this,
((loc->inode)? loc->inode->ino : 0), name);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(getxattr, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_getxattr_cbk,
@@ -1983,7 +2026,8 @@ uidmap_fsetxattr(call_frame_t *frame, xlator_t *this,
((fd->inode)? fd->inode->ino : 0), dict, flags);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(fsetxattr, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_fsetxattr_cbk,
@@ -2004,7 +2048,8 @@ uidmap_fgetxattr(call_frame_t *frame, xlator_t *this,
((fd->inode)? fd->inode->ino : 0), name);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(fgetxattr, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_fgetxattr_cbk,
@@ -2025,7 +2070,8 @@ uidmap_removexattr(call_frame_t *frame, xlator_t *this,
((loc->inode)? loc->inode->ino : 0), name);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(removexattr, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_removexattr_cbk,
@@ -2045,7 +2091,8 @@ uidmap_opendir(call_frame_t *frame, xlator_t *this, loc_t *loc, fd_t *fd)
frame->root->unique, loc->path, loc->inode->ino, fd);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(opendir, frame, -1, ENOSPC, 0);
+ return 0;
}
STACK_WIND(frame, uidmap_opendir_cbk,
@@ -2064,7 +2111,8 @@ uidmap_readdirp(call_frame_t *frame, xlator_t *this, fd_t *fd, size_t size,
frame->root->unique, fd, size, offset);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(readdirp, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_readdirp_cbk,
@@ -2085,7 +2133,8 @@ uidmap_readdir(call_frame_t *frame, xlator_t *this, fd_t *fd,
frame->root->unique, fd, size, offset);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(readdir, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_readdir_cbk,
@@ -2106,7 +2155,8 @@ uidmap_fsyncdir(call_frame_t *frame, xlator_t *this,
frame->root->unique, datasync, fd);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(fsyncdir, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_fsyncdir_cbk,
@@ -2126,7 +2176,8 @@ uidmap_access(call_frame_t *frame, xlator_t *this, loc_t *loc, int32_t mask)
((loc->inode)? loc->inode->ino : 0), mask);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(access, frame, -1, ENOSPC);
+ return 0;
}
STACK_WIND(frame, uidmap_access_cbk,
@@ -2146,7 +2197,8 @@ uidmap_ftruncate(call_frame_t *frame, xlator_t *this,
frame->root->unique, offset, fd);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(ftruncate, frame, -1, ENOSPC, NULL, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_ftruncate_cbk,
@@ -2165,7 +2217,8 @@ uidmap_fstat(call_frame_t *frame, xlator_t *this, fd_t *fd)
"%"PRId64": (*fd=%p)", frame->root->unique, fd);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(fstat, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_fstat_cbk,
@@ -2187,7 +2240,8 @@ uidmap_lk(call_frame_t *frame, xlator_t *this, fd_t *fd,
lock->l_start, lock->l_len, lock->l_pid);
if ((*uidmap_map)(frame->root, this->name) == -1) {
- return -1;
+ STACK_UNWIND_STRICT(lk, frame, -1, ENOSPC, NULL);
+ return 0;
}
STACK_WIND(frame, uidmap_lk_cbk,
Volume is not started after hfs_start_volume
by Pete Zaitcev
Dear All:
I have a problem: after I mount a volume, any I/O to it hangs.
This is usually a symptom of a volume that is not started, as I heard.
So I checked, and when I run "gluster volume info", it outputs
"Status: Created". I am trying to figure out what is going on, but
so far I have only found that the volumes are kept by glusterd in a
mysterious list. Not sure how it's updated. The question is: is it OK
for the status not to be "Started" for volumes under the management
of HekaFS (my version is new enough to launch a separate instance of
glusterfsd)? And if not, what can be done to fix it?
Thanks in advance,
-- Pete