[SSSD] [PATCH] Lookup domains at startup

Sumit Bose sbose at redhat.com
Tue Jun 4 10:05:12 UTC 2013


On Fri, May 31, 2013 at 04:15:18PM +0200, Jakub Hrozek wrote:
> On Fri, May 31, 2013 at 01:37:11PM +0200, Sumit Bose wrote:
> > Hi,
> > 
> > recently the patch "Allow flat name in the FQname format" was committed to
> > master. The flat domain name is determined at runtime, but currently only
> > when the responders receive a request with an unknown domain name.
> > 
> > If the flat domain name is used in the FQname format and the nss responder
> > receives e.g. a 'getent passwd DOM\username' request with the flat domain
> > name after startup, everything is fine: right after startup the domain part
> > of the given fully qualified user name is not yet known, so a request is
> > sent to the backends to look it up. Once that request is done, the flat
> > domain name is known and can be used in the returned FQname.
> > 
> > If, on the other hand, the nss responder receives a 'getent passwd
> > username@domain.name' request with the domain name from sssd.conf, the
> > domain part of the user name is already known and there is no reason to
> > send a get_domains request to the backend. Hence the flat domain name is
> > not known when the FQname for the response is constructed and is replaced
> > by the full domain name instead.
> > 
> > To avoid this, the following patch always runs a get_domains request
> > at startup to collect the needed domain data.
> > 
> > Fixes https://fedorahosted.org/sssd/ticket/1951.
> > 
> > bye,
> > Sumit
> 
> Works fine; after startup a subdomain is created and the flat name is
> discovered.
> 
> But I wonder whether it would be worth adding some kind of check in
> be_get_subdomains() to avoid issuing another request if the previous one
> came within some interval, and in that case just return success. Currently
> this patch triggers an LDAP search per responder.
> 
> The downside I see is that there is already a similar check in the
> responders themselves, so this would be a bit of duplication.

Thank you for the review. I've added two patches which add a generic
request queue to the data provider code and make the get_subdomains
request use it. With this you should only see two requests going to the
server, one triggered by the starting responders and the other by the
online callback. I think it is not worth the time to try to optimize
this away as well, because the responders and the online callback have
different requirements.

bye,
Sumit
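
For reference, the one-shot scheduling pattern that
schedule_get_domains_task() in the first patch relies on looks like
this in isolation (a minimal sketch against plain libtevent/libtalloc;
the task name string and the single loop iteration are made up for the
demo):

#include <stdio.h>
#include <talloc.h>
#include <tevent.h>

static char task_name[] = "get_domains";

static void startup_task(struct tevent_context *ev,
                         struct tevent_immediate *imm,
                         void *pvt)
{
    /* Fires exactly once, on the next pass through the event loop,
     * i.e. as soon as the process enters its main loop. */
    printf("running one-shot startup task: %s\n", (char *) pvt);
}

int main(void)
{
    TALLOC_CTX *mem_ctx = talloc_new(NULL);
    struct tevent_context *ev = tevent_context_init(mem_ctx);
    struct tevent_immediate *imm = tevent_create_immediate(mem_ctx);

    if (ev == NULL || imm == NULL) {
        return 1;
    }

    tevent_schedule_immediate(imm, ev, startup_task, task_name);

    tevent_loop_once(ev);   /* one iteration runs the immediate */
    talloc_free(mem_ctx);
    return 0;
}
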
-------------- next part --------------
From c74f34cd0909b5ca680166d9c649cea9674e941b Mon Sep 17 00:00:00 2001
From: Sumit Bose <sbose at redhat.com>
Date: Fri, 31 May 2013 10:52:05 +0200
Subject: [PATCH 1/3] Lookup domains at startup

To make sure that e.g. the short/NetBIOS domain name is available, this
patch makes the responders send a get_domains request to their backends
at startup to collect the domain information, or read it from the cache
if the backend is offline.

For completeness this is added to all responders, even if they do not
need the information at the moment.

Fixes https://fedorahosted.org/sssd/ticket/1951
---
 src/responder/autofs/autofssrv.c             |    6 +++
 src/responder/common/responder.h             |    4 ++
 src/responder/common/responder_get_domains.c |   49 ++++++++++++++++++++++++++
 src/responder/nss/nsssrv.c                   |    6 +++
 src/responder/pac/pacsrv.c                   |    6 +++
 src/responder/pam/pamsrv.c                   |    6 +++
 src/responder/ssh/sshsrv.c                   |    6 +++
 src/responder/sudo/sudosrv.c                 |    6 +++
 8 files changed, 89 insertions(+), 0 deletions(-)

diff --git a/src/responder/autofs/autofssrv.c b/src/responder/autofs/autofssrv.c
index ea4c049..edd6f42 100644
--- a/src/responder/autofs/autofssrv.c
+++ b/src/responder/autofs/autofssrv.c
@@ -194,6 +194,12 @@ autofs_process_init(TALLOC_CTX *mem_ctx,
         goto fail;
     }
 
+    ret = schedule_get_domains_task(rctx, rctx->ev, rctx);
+    if (ret != EOK) {
+        DEBUG(SSSDBG_FATAL_FAILURE, ("schedule_get_domains_task failed.\n"));
+        goto fail;
+    }
+
     DEBUG(SSSDBG_TRACE_FUNC, ("autofs Initialization complete\n"));
     return EOK;
 
diff --git a/src/responder/common/responder.h b/src/responder/common/responder.h
index 68b4ebb..5331d5b 100644
--- a/src/responder/common/responder.h
+++ b/src/responder/common/responder.h
@@ -303,6 +303,10 @@ struct tevent_req *sss_dp_get_domains_send(TALLOC_CTX *mem_ctx,
 
 errno_t sss_dp_get_domains_recv(struct tevent_req *req);
 
+errno_t schedule_get_domains_task(TALLOC_CTX *mem_ctx,
+                                  struct tevent_context *ev,
+                                  struct resp_ctx *rctx);
+
 errno_t csv_string_to_uid_array(TALLOC_CTX *mem_ctx, const char *cvs_string,
                                 bool allow_sss_loop,
                                 size_t *_uid_count, uid_t **_uids);
diff --git a/src/responder/common/responder_get_domains.c b/src/responder/common/responder_get_domains.c
index defa4a4..592cd8d 100644
--- a/src/responder/common/responder_get_domains.c
+++ b/src/responder/common/responder_get_domains.c
@@ -369,3 +369,52 @@ static errno_t check_last_request(struct resp_ctx *rctx, const char *hint)
 
     return EOK;
 }
+
+static void get_domains_at_startup_done(struct tevent_req *req)
+{
+    int ret;
+
+    ret = sss_dp_get_domains_recv(req);
+    talloc_free(req);
+    if (ret != EOK) {
+        DEBUG(SSSDBG_OP_FAILURE, ("sss_dp_get_domains request failed.\n"));
+    }
+
+    return;
+}
+
+static void get_domains_at_startup(struct tevent_context *ev,
+                                   struct tevent_immediate *imm,
+                                   void *pvt)
+{
+    struct tevent_req *req;
+    struct resp_ctx *rctx;
+
+    rctx = talloc_get_type(pvt, struct resp_ctx);
+
+    req = sss_dp_get_domains_send(rctx, rctx, true, NULL);
+    if (req == NULL) {
+        DEBUG(SSSDBG_OP_FAILURE, ("sss_dp_get_domains_send failed.\n"));
+        return;
+    }
+
+    tevent_req_set_callback(req, get_domains_at_startup_done, NULL);
+    return;
+}
+
+errno_t schedule_get_domains_task(TALLOC_CTX *mem_ctx,
+                                  struct tevent_context *ev,
+                                  struct resp_ctx *rctx)
+{
+    struct tevent_immediate *imm;
+
+    imm = tevent_create_immediate(mem_ctx);
+    if (imm == NULL) {
+        DEBUG(SSSDBG_OP_FAILURE, ("tevent_create_immediate failed.\n"));
+        return ENOMEM;
+    }
+
+    tevent_schedule_immediate(imm, ev, get_domains_at_startup, rctx);
+
+    return EOK;
+}
diff --git a/src/responder/nss/nsssrv.c b/src/responder/nss/nsssrv.c
index ee8fecb..ebad150 100644
--- a/src/responder/nss/nsssrv.c
+++ b/src/responder/nss/nsssrv.c
@@ -532,6 +532,12 @@ int nss_process_init(TALLOC_CTX *mem_ctx,
     }
     responder_set_fd_limit(fd_limit);
 
+    ret = schedule_get_domains_task(rctx, rctx->ev, rctx);
+    if (ret != EOK) {
+        DEBUG(SSSDBG_FATAL_FAILURE, ("schedule_get_domains_task failed.\n"));
+        goto fail;
+    }
+
     DEBUG(SSSDBG_TRACE_FUNC, ("NSS Initialization complete\n"));
 
     return EOK;
diff --git a/src/responder/pac/pacsrv.c b/src/responder/pac/pacsrv.c
index 9bc2766..22f87cb 100644
--- a/src/responder/pac/pacsrv.c
+++ b/src/responder/pac/pacsrv.c
@@ -207,6 +207,12 @@ int pac_process_init(TALLOC_CTX *mem_ctx,
     }
     responder_set_fd_limit(fd_limit);
 
+    ret = schedule_get_domains_task(rctx, rctx->ev, rctx);
+    if (ret != EOK) {
+        DEBUG(SSSDBG_FATAL_FAILURE, ("schedule_get_domains_task failed.\n"));
+        goto fail;
+    }
+
     DEBUG(SSSDBG_TRACE_FUNC, ("PAC Initialization complete\n"));
 
     return EOK;
diff --git a/src/responder/pam/pamsrv.c b/src/responder/pam/pamsrv.c
index c71ef07..fad564a 100644
--- a/src/responder/pam/pamsrv.c
+++ b/src/responder/pam/pamsrv.c
@@ -203,6 +203,12 @@ static int pam_process_init(TALLOC_CTX *mem_ctx,
     }
     responder_set_fd_limit(fd_limit);
 
+    ret = schedule_get_domains_task(rctx, rctx->ev, rctx);
+    if (ret != EOK) {
+        DEBUG(SSSDBG_FATAL_FAILURE, ("schedule_get_domains_task failed.\n"));
+        goto done;
+    }
+
     ret = EOK;
 
 done:
diff --git a/src/responder/ssh/sshsrv.c b/src/responder/ssh/sshsrv.c
index 410e631..a1d1f6c 100644
--- a/src/responder/ssh/sshsrv.c
+++ b/src/responder/ssh/sshsrv.c
@@ -166,6 +166,12 @@ int ssh_process_init(TALLOC_CTX *mem_ctx,
         goto fail;
     }
 
+    ret = schedule_get_domains_task(rctx, rctx->ev, rctx);
+    if (ret != EOK) {
+        DEBUG(SSSDBG_FATAL_FAILURE, ("schedule_get_domains_task failed.\n"));
+        goto fail;
+    }
+
     DEBUG(SSSDBG_TRACE_FUNC, ("SSH Initialization complete\n"));
 
     return EOK;
diff --git a/src/responder/sudo/sudosrv.c b/src/responder/sudo/sudosrv.c
index a6344a9..e6bd997 100644
--- a/src/responder/sudo/sudosrv.c
+++ b/src/responder/sudo/sudosrv.c
@@ -148,6 +148,12 @@ int sudo_process_init(TALLOC_CTX *mem_ctx,
         goto fail;
     }
 
+    ret = schedule_get_domains_task(rctx, rctx->ev, rctx);
+    if (ret != EOK) {
+        DEBUG(SSSDBG_FATAL_FAILURE, ("schedule_get_domains_task failed.\n"));
+        goto fail;
+    }
+
     DEBUG(SSSDBG_TRACE_FUNC, ("SUDO Initialization complete\n"));
 
     return EOK;
-- 
1.7.7.6

-------------- next part --------------
From d3482d3a25f2e65749a3d988938f7e7183e45880 Mon Sep 17 00:00:00 2001
From: Sumit Bose <sbose at redhat.com>
Date: Fri, 31 May 2013 22:00:17 +0200
Subject: [PATCH 2/3] Add be request queue

For some backend targets it might not be desirable to run requests in
parallel; instead they should be serialized. To avoid each provider
having to implement its own queue for such a target, this patch adds a
generic queue which collects incoming requests before they are sent to
the target.
---
 src/providers/data_provider_be.c |  119 ++++++++++++++++++++++++++++++++++++++
 src/providers/dp_backend.h       |   11 ++++
 2 files changed, 130 insertions(+), 0 deletions(-)

diff --git a/src/providers/data_provider_be.c b/src/providers/data_provider_be.c
index cd67156..49a7a89 100644
--- a/src/providers/data_provider_be.c
+++ b/src/providers/data_provider_be.c
@@ -301,6 +301,125 @@ static errno_t be_file_request(TALLOC_CTX *mem_ctx,
     return EOK;
 }
 
+static errno_t be_queue_request(TALLOC_CTX *queue_mem_ctx,
+                                struct bet_queue_item **req_queue,
+                                TALLOC_CTX *req_mem_ctx,
+                                struct be_req *be_req,
+                                be_req_fn_t fn)
+{
+    struct bet_queue_item *item;
+    int ret;
+
+    if (*req_queue == NULL) {
+        DEBUG(SSSDBG_TRACE_ALL, ("Queue is empty, " \
+                                 "running request immediately.\n"));
+        ret = be_file_request(req_mem_ctx, be_req, fn);
+        if (ret != EOK) {
+            DEBUG(SSSDBG_OP_FAILURE, ("be_file_request failed.\n"));
+            return ret;
+        }
+    }
+
+    item = talloc_zero(queue_mem_ctx, struct bet_queue_item);
+    if (item == NULL) {
+        DEBUG(SSSDBG_OP_FAILURE, ("talloc_zero failed, cannot add item to " \
+                                  "request queue.\n"));
+    } else {
+        DEBUG(SSSDBG_TRACE_ALL, ("Adding request to queue.\n"));
+        item->mem_ctx = req_mem_ctx;
+        item->be_req = be_req;
+        item->fn = fn;
+
+        DLIST_ADD(*req_queue, item);
+    }
+
+    return EOK;
+}
+
+static void be_queue_next_request(struct be_req *be_req, enum bet_type type)
+{
+    struct bet_queue_item *item;
+    struct bet_queue_item *current = NULL;
+    struct bet_queue_item **req_queue;
+    int ret;
+    DBusMessage *reply;
+    uint16_t err_maj;
+    uint32_t err_min;
+    const char *err_msg = "Cannot file back end request";
+    struct be_req *next_be_req = NULL;
+    dbus_bool_t dbret;
+    DBusConnection *dbus_conn;
+
+    req_queue = &be_req->becli->bectx->bet_info[type].req_queue;
+
+    if (*req_queue == NULL) {
+        DEBUG(SSSDBG_TRACE_ALL, ("Queue is empty, nothing to do.\n"));
+        return;
+    }
+
+    DLIST_FOR_EACH(item, *req_queue) {
+        if (item->be_req == be_req) {
+            current = item;
+            break;
+        }
+    }
+
+    if (current != NULL) {
+        DLIST_REMOVE(*req_queue, current);
+    }
+
+    if (*req_queue == NULL) {
+        DEBUG(SSSDBG_TRACE_ALL, ("Request queue is empty.\n"));
+        return;
+    }
+
+    next_be_req = (*req_queue)->be_req;
+
+    ret = be_file_request((*req_queue)->mem_ctx, next_be_req, (*req_queue)->fn);
+    if (ret == EOK) {
+        DEBUG(SSSDBG_TRACE_ALL, ("Queued request filed successfully.\n"));
+        return;
+    }
+
+    DEBUG(SSSDBG_OP_FAILURE, ("be_file_request failed.\n"));
+
+    be_queue_next_request(next_be_req, type);
+
+    reply = (DBusMessage *) next_be_req->pvt;
+
+    if (reply) {
+        /* Return a reply if one was requested
+         * There may not be one if this request began
+         * while we were offline
+         */
+        err_maj = DP_ERR_FATAL;
+        err_min = ret;
+
+        dbret = dbus_message_append_args(reply,
+                                         DBUS_TYPE_UINT16, &err_maj,
+                                         DBUS_TYPE_UINT32, &err_min,
+                                         DBUS_TYPE_STRING, &err_msg,
+                                         DBUS_TYPE_INVALID);
+
+        if (!dbret) {
+            DEBUG(SSSDBG_CRIT_FAILURE, ("Failed to generate dbus reply\n"));
+            dbus_message_unref(reply);
+            goto done;
+        }
+
+        dbus_conn = sbus_get_connection(next_be_req->becli->conn);
+        if (dbus_conn == NULL) {
+            DEBUG(SSSDBG_CRIT_FAILURE, ("D-BUS not connected\n"));
+            goto done;
+        }
+        dbus_connection_send(dbus_conn, reply, NULL);
+        dbus_message_unref(reply);
+    }
+
+done:
+    talloc_free(next_be_req);
+}
+
 bool be_is_offline(struct be_ctx *ctx)
 {
     time_t now = time(NULL);
diff --git a/src/providers/dp_backend.h b/src/providers/dp_backend.h
index e0e2210..9a8df4c 100644
--- a/src/providers/dp_backend.h
+++ b/src/providers/dp_backend.h
@@ -68,11 +68,22 @@ struct loaded_be {
     void *handle;
 };
 
+struct bet_queue_item {
+    struct bet_queue_item *prev;
+    struct bet_queue_item *next;
+
+    TALLOC_CTX *mem_ctx;
+    struct be_req *be_req;
+    be_req_fn_t fn;
+
+};
+
 struct bet_info {
     enum bet_type bet_type;
     struct bet_ops *bet_ops;
     void *pvt_bet_data;
     char *mod_name;
+    struct bet_queue_item *req_queue;
 };
 
 struct be_offline_status {
-- 
1.7.7.6

-------------- next part --------------
From 34116a32c78fbd33b595f5fce3d0d63ac7c7d70c Mon Sep 17 00:00:00 2001
From: Sumit Bose <sbose at redhat.com>
Date: Mon, 3 Jun 2013 10:40:12 +0200
Subject: [PATCH 3/3] Use queue for get_subdomains

It does not make much sense to run multiple get_subdomains requests in
parallel, because all of them will load the same information from the
server. The IPA and AD providers already implement a short timeout to
avoid multiple requests running too close after each other. But once
the timeout has expired, chances are that if two or more requests come
in quick succession, the first request has not yet updated the timeout
and the requests will run in parallel. To avoid this, the requests are
queued and sent one after the other to the provider.
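
To make the window concrete, here is a contrived sketch of the check
(TIMEOUT and last_refresh are made-up stand-ins, not the actual fields
of the IPA/AD providers): both requests pass the staleness test before
either of them has finished and bumped the timestamp.

#include <stdio.h>
#include <time.h>

#define TIMEOUT 10   /* seconds, invented for the demo */

static time_t last_refresh;

static int needs_refresh(time_t now)
{
    return now > last_refresh + TIMEOUT;
}

int main(void)
{
    time_t now = time(NULL);

    /* Two requests arrive back to back; the first has not returned
     * yet, so last_refresh was not updated in between. */
    printf("request A goes to the server: %s\n",
           needs_refresh(now) ? "yes" : "no");
    printf("request B goes to the server: %s\n",
           needs_refresh(now) ? "yes" : "no");

    /* Only after a request completes is the timestamp bumped. */
    last_refresh = now;
    printf("request C goes to the server: %s\n",
           needs_refresh(now) ? "yes" : "no");
    return 0;
}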
---
 src/providers/data_provider_be.c |   10 +++++++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/src/providers/data_provider_be.c b/src/providers/data_provider_be.c
index 49a7a89..53fa5cd 100644
--- a/src/providers/data_provider_be.c
+++ b/src/providers/data_provider_be.c
@@ -496,6 +496,8 @@ static void get_subdomains_callback(struct be_req *req,
               dp_err_type, errnum, errstr?errstr:"<NULL>",
               dp_pam_err_to_string(req, dp_err_type, errnum)));
 
+    be_queue_next_request(req, BET_SUBDOMAINS);
+
     reply = (DBusMessage *)req->pvt;
 
     if (reply) {
@@ -629,9 +631,11 @@ static int be_get_subdomains(DBusMessage *message, struct sbus_connection *conn)
 
     be_req->req_data = req;
 
-    ret = be_file_request(becli->bectx,
-                          be_req,
-                          becli->bectx->bet_info[BET_SUBDOMAINS].bet_ops->handler);
+    ret = be_queue_request(becli->bectx,
+                           &becli->bectx->bet_info[BET_SUBDOMAINS].req_queue,
+                           becli->bectx,
+                           be_req,
+                           becli->bectx->bet_info[BET_SUBDOMAINS].bet_ops->handler);
     if (ret != EOK) {
         err_maj = DP_ERR_FATAL;
         err_min = ret;
-- 
1.7.7.6


