If a command is issued to the sanlock daemon soon after the previous command from the same client has completed, it might not be processed for up to 1 second. This scenario is commonplace during sequential operations issued through lvmlockd, making them feel unusually sluggish.
The delay occurs when a command is issued by a client soon after that client has been marked as 'resumed' and while the sanlock daemon is in the main loop executing poll().
This is because the fds that poll() monitors are only those of the non-suspended clients. Since poll() can block for up to STANDARD_CHECK_INTERVAL (1000 ms, i.e. 1 second), any client that is resumed during that window and issues another command will not be picked up by that invocation of poll(). Instead, the command has to wait for that invocation of poll() to return and for poll() to be called again on the next loop iteration.
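For context, the daemon's main loop before this patch has roughly the following shape. This is a heavily abridged sketch based on src/main.c, not verbatim sanlock code; the dispatch call is a placeholder, and the detail that a suspended connection's pollfd slot is disabled is paraphrased.

/* Abridged sketch of the pre-patch main loop (illustrative, not verbatim).
 * A suspended client's pollfd slot is disabled, so poll() ignores it. */
while (1) {
	/* may block for up to STANDARD_CHECK_INTERVAL (1000 ms), even if a
	 * suspended client is resumed and sends a new command in the meantime */
	rv = poll(pollfd, client_maxi + 1, poll_timeout);

	for (i = 0; i <= client_maxi; i++) {
		if (client[i].fd < 0)
			continue;
		if (pollfd[i].revents & POLLIN)
			handle_client_data(i);	/* dispatch the client's command */
	}
	/* a command from a client resumed while poll() was blocked is only
	 * noticed here, on the next iteration, after the timeout expires */
}

(handle_client_data() is a placeholder name for the dispatch path, not the actual function name.)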
This problem was observed using lvmlockd between successive invocations of lvs: poll() is entered before client_resume is called for the first lvs's 'unlock' command, so when the next lvs's 'acquire' command arrives it must wait for that poll() to time out and be restarted, and therefore takes longer than necessary.
This is illustrated in the following sequence of events caused by two consecutive invocations of lvs, which we pick up as the first lvs command is nearing completion:
1. lvmlockd sends an "unlock" command to the sanlock daemon socket;
2. the sanlock daemon dispatches this as "cmd_release" to a worker thread and calls client_suspend;
3. the sanlock daemon invokes poll() (not listening to the suspended lvmlockd client);
4. the sanlock worker thread finishes handling the "cmd_release" command, returns the response on the socket, and calls client_resume;
5. the second lvs command is issued;
6. lvmlockd issues an "acquire" command on the same connection (but the daemon isn't listening yet);
7. the sanlock daemon's poll() returns after its 1000 ms timeout expires;
8. the sanlock daemon's main loop executes poll() again, this time listening to the lvmlockd client;
9. poll() returns immediately and receives the "acquire" command.
This patch makes client_resume interrupt the currently-executing poll() by poking an internal eventfd on which poll() is listening in addition to the non-suspended clients. This causes the current poll() to return and immediately restart, this time listening on the resumed client's fd, ready to receive a new command from the client.
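The wake-up technique itself can be shown with a small standalone program, independent of sanlock; this is an illustrative sketch only (file name, delay and output are invented). A worker thread writes to an eventfd, which makes a poll() in the main thread return well before its timeout would expire:

/* eventfd_wakeup_demo.c (illustrative): build with gcc -pthread.
 * Shows poll() being woken early by an eventfd write from another thread. */
#include <poll.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

static int efd;

static void *worker(void *arg)
{
	sleep(1);			/* pretend to finish handling a command */
	eventfd_write(efd, 1);		/* wake the poll() in the main thread */
	return NULL;
}

int main(void)
{
	struct pollfd pfd;
	eventfd_t ebuf;
	pthread_t t;

	efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
	pthread_create(&t, NULL, worker, NULL);

	pfd.fd = efd;
	pfd.events = POLLIN;

	/* without the eventfd write this would block for the full 10 s;
	 * with it, poll() returns as soon as the worker has written */
	poll(&pfd, 1, 10000);
	eventfd_read(efd, &ebuf);	/* drain the counter so it can fire again */
	printf("woken early, eventfd counter was %llu\n",
	       (unsigned long long)ebuf);

	pthread_join(t, NULL);
	return 0;
}

In the patch below, the same idea is folded into the existing pollfd array: the eventfd occupies one extra slot, and client_resume's eventfd_write forces the in-flight poll() to return so the loop restarts with the resumed client's fd included.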
Some performance measurements follow, demonstrating how this patch makes the second command more responsive.
Before:
% time lvs >/dev/null; time lvs >/dev/null
real 0m0.051s user 0m0.008s sys 0m0.008s
real 0m0.880s user 0m0.000s sys 0m0.012s
After:
% time lvs >/dev/null; time lvs >/dev/null
real 0m0.039s user 0m0.004s sys 0m0.012s
real 0m0.036s user 0m0.000s sys 0m0.016s
Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com>
---
 src/main.c             | 27 ++++++++++++++++++++++++---
 src/sanlock_internal.h |  1 +
 2 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/src/main.c b/src/main.c
index 7038e4e..a7a9016 100644
--- a/src/main.c
+++ b/src/main.c
@@ -35,6 +35,7 @@
 #include <sys/utsname.h>
 #include <sys/resource.h>
 #include <uuid/uuid.h>
+#include <sys/eventfd.h>
 
 #define EXTERN
 #include "sanlock_internal.h"
@@ -190,8 +191,10 @@ static int client_alloc(void)
 {
 	int i;
 
+	/* pollfd is one element longer as we use an additional element for the
+	 * eventfd notification mechanism */
 	client = malloc(CLIENT_NALLOC * sizeof(struct client));
-	pollfd = malloc(CLIENT_NALLOC * sizeof(struct pollfd));
+	pollfd = malloc((CLIENT_NALLOC+1) * sizeof(struct pollfd));
 
 	if (!client || !pollfd) {
 		log_error("can't alloc for client or pollfd array");
@@ -360,6 +363,9 @@ void client_resume(int ci)
 		/* make poll() watch this connection */
 		pollfd[ci].fd = cl->fd;
 		pollfd[ci].events = POLLIN;
+
+		/* interrupt any poll() that might already be running */
+		eventfd_write(efd, 1);
 	}
  out:
 	pthread_mutex_unlock(&cl->mutex);
@@ -737,19 +743,29 @@ static int main_loop(void)
 	int i, rv, empty, check_all;
 	char *check_buf = NULL;
 	int check_buf_len = 0;
+	uint64_t ebuf;
 
 	gettimeofday(&last_check, NULL);
 	poll_timeout = STANDARD_CHECK_INTERVAL;
 	check_interval = STANDARD_CHECK_INTERVAL;
 
 	while (1) {
-		rv = poll(pollfd, client_maxi + 1, poll_timeout);
+		/* as well as the clients, check the eventfd */
+		pollfd[client_maxi+1].fd = efd;
+		pollfd[client_maxi+1].events = POLLIN;
+
+		rv = poll(pollfd, client_maxi + 2, poll_timeout);
 		if (rv == -1 && errno == EINTR)
 			continue;
 		if (rv < 0) {
 			/* not sure */
 		}
-		for (i = 0; i <= client_maxi; i++) {
+		for (i = 0; i <= client_maxi + 1; i++) {
+			if (pollfd[i].fd == efd && pollfd[i].revents & POLLIN) {
+				/* a client_resume completed */
+				eventfd_read(efd, &ebuf);
+				continue;
+			}
 			if (client[i].fd < 0)
 				continue;
 			if (pollfd[i].revents & POLLIN) {
@@ -1676,6 +1692,11 @@ static int do_daemon(void)
 	if (rv < 0)
 		goto out_threads;
 
+	/* initialize global eventfd for client_resume notification */
+	if ((efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK)) == -1)
+		log_error("couldn't create eventfd");
+		goto out_threads;
+
 	main_loop();
 
 	close_token_manager();
diff --git a/src/sanlock_internal.h b/src/sanlock_internal.h
index 9df82f3..0855eec 100644
--- a/src/sanlock_internal.h
+++ b/src/sanlock_internal.h
@@ -376,6 +376,7 @@ EXTERN int helper_kill_fd;
 EXTERN int helper_status_fd;
 EXTERN uint64_t helper_last_status;
 EXTERN uint32_t helper_full_count;
+EXTERN int efd;
 
 EXTERN struct list_head spaces;
 EXTERN struct list_head spaces_rem;
On Fri, Oct 30, 2015 at 04:16:08PM +0000, Jonathan Davies wrote:
> If a command is issued to the sanlock daemon soon after the previous command from the same client has completed, it might not be processed for up to 1 second. This scenario is commonplace during sequential operations issued through lvmlockd, making them feel unusually sluggish.
Thanks for the patch, and the thorough explanation; it works great.
> +	if ((efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK)) == -1)
> +		log_error("couldn't create eventfd");
> +		goto out_threads;
I added some braces here and pushed it out.
Thanks again,
Dave