[SSSD] [PATCH] LDAP: Do not impose sizelimit=1 for single-entry searches

Jakub Hrozek jhrozek at redhat.com
Thu Sep 17 07:35:11 UTC 2015


On Wed, Sep 16, 2015 at 04:29:20PM +0200, Jakub Hrozek wrote:
> On Thu, Jul 23, 2015 at 02:34:52PM +0200, Lukas Slebodnik wrote:
> > On (23/07/15 10:57), Jakub Hrozek wrote:
> > >On Thu, Jul 23, 2015 at 09:39:00AM +0200, Lukas Slebodnik wrote:
> > >> On (21/07/15 21:33), Jakub Hrozek wrote:
> > >> >Hi,
> > >> >
> > >> >the attached patch fixes regression tracked by
> > >> >https://fedorahosted.org/sssd/ticket/2723
> > >> >
> > >> >Please see the commit message for more details.
> > >> 
> > >> >From 2f4e2e6d5e8f29f5b2cdd9f0b825edc172da57ea Mon Sep 17 00:00:00 2001
> > >> >From: Jakub Hrozek <jhrozek at redhat.com>
> > >> >Date: Tue, 21 Jul 2015 21:00:27 +0200
> > >> >Subject: [PATCH] LDAP: imposing sizelimit=1 for single-entry searches breaks
> > >> > overlapping domains
> > >> >
> > >> >https://fedorahosted.org/sssd/ticket/2723
> > >> >
> > >> >In case there are overlapping sdap domains, a search for a single user
> > >> >might match and return multiple entries. For instance, with AD domains
> > >> >represented by search bases:
> > >> >    DC=win,DC=trust,DC=test
> > >> >    DC=child,DC=win,DC=trust,DC=test
> > >> >
> > >> >A search for a user from win.trust.test would be based at:
> > >> >    DC=win,DC=trust,DC=test
> > >> >but would match both search bases and return both users.
> > >> >
> > >> >Instead of performing complex filtering, just save both users. The
> > >> >responder would select the entry that matches the user's search.
> > >> The patch works, but do we need to store all users?
> > >
> > >No, but the number of users we store is at most the number of
> > >subdomains, so I didn't think it was an issue.
> > >
> > >> 
> > >> IIRC we have code where we choose the user based on the best match
> > >> between the domain DN and the DN used.
> > >
> > >Yes, we have sdap_domain_get_by_dn(). I tried to fix the regression with
> > >as minimal a patch as possible (restoring the previous behaviour).
> > >
> > It is a regression only in master, caused by the "wildcard patches", which
> > were not backported to stable branches. So we do not need a minimal patch.
> > 
> > We are not in a hurry; it's just a blocker for the next upstream release.
> > 
> > >I can also work on an additional patch that matches the original DN based
> > >on the base DN of the sdap_domain, if you prefer. Something like:
> > >
> > >    entry_match = None
> > >    for entry in matched_entries:
> > >        if sdap_domain_get_by_dn(entry) == be_req_domain:
> > >            entry_match = entry
> > >            break
> > >
> > >    if entry_match == None:
> > >        raise NoMatchError
> > As you wish.
> > 
> 
> After only a month since the nack, attached are updated patches. I
> would prefer to apply only the first one downstream, since the others
> are just optimizations and tests.

CI runs revealed the tests didn't link correctly on Debian. New patches
are attached.
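
For reference, here is a rough C sketch of the best-match selection discussed
in the quoted thread above. It assumes the usual SSSD internal headers;
sdap_domain_get_by_dn() is the existing helper mentioned in the thread (its
exact signature is assumed here), while entry_orig_dn() is a hypothetical
accessor for the entry's original DN:

    /* Sketch: pick the entry whose original DN belongs to the requested
     * domain; return NULL if nothing matches (caller treats it as ENOENT). */
    static struct sysdb_attrs *
    select_entry_for_domain(struct sdap_options *opts,
                            struct sss_domain_info *req_dom,
                            struct sysdb_attrs **entries,
                            size_t num_entries)
    {
        struct sdap_domain *sdom;
        const char *dn;
        size_t i;

        for (i = 0; i < num_entries; i++) {
            dn = entry_orig_dn(entries[i]);    /* hypothetical helper */
            if (dn == NULL) {
                continue;
            }

            /* Match the entry DN against the search bases of the
             * configured sdap domains. */
            sdom = sdap_domain_get_by_dn(opts, dn);
            if (sdom != NULL && sdom->dom == req_dom) {
                return entries[i];
            }
        }

        return NULL;
    }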
-------------- next part --------------
From 16fe08547ee82a7398aa1409034de745f5aed101 Mon Sep 17 00:00:00 2001
From: Jakub Hrozek <jhrozek at redhat.com>
Date: Wed, 2 Sep 2015 15:53:34 +0200
Subject: [PATCH 1/4] KRB5: Offline operation with disabled domain

https://fedorahosted.org/sssd/ticket/2637

If a subdomain is in the disabled state, switch the krb5_child operation
into offline mode.

Similarly, instead of marking the whole back end as offline, mark just
the domain as offline -- depending on the domain type, this marks either
the whole back end as offline or just deactivates the subdomain.
---
 src/providers/krb5/krb5_auth.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/src/providers/krb5/krb5_auth.c b/src/providers/krb5/krb5_auth.c
index d35df13994a3e16feff90592bec16d7a8f30b70a..e3e9601b356efd72e50ab86e8b7cdd048e4e70d4 100644
--- a/src/providers/krb5/krb5_auth.c
+++ b/src/providers/krb5/krb5_auth.c
@@ -720,7 +720,7 @@ static void krb5_auth_resolve_done(struct tevent_req *subreq)
              * was found good, setting offline,
              * but we still have to call the child to setup
              * the ccache file if we are performing auth */
-            be_mark_offline(state->be_ctx);
+            be_mark_dom_offline(state->domain, state->be_ctx);
             kr->is_offline = true;
 
             if (kr->pd->cmd == SSS_PAM_CHAUTHTOK ||
@@ -754,9 +754,19 @@ static void krb5_auth_resolve_done(struct tevent_req *subreq)
         kr->is_offline = be_is_offline(state->be_ctx);
     }
 
+    if (!kr->is_offline
+            && sss_domain_get_state(state->domain) == DOM_INACTIVE) {
+        DEBUG(SSSDBG_TRACE_INTERNAL,
+              "Subdomain %s is inactive, will proceed offline\n",
+              state->domain->name);
+        kr->is_offline = true;
+    }
+
     if (kr->is_offline
             && sss_krb5_realm_has_proxy(dp_opt_get_cstring(kr->krb5_ctx->opts,
                                         KRB5_REALM))) {
+        DEBUG(SSSDBG_TRACE_FUNC,
+              "Resetting offline status, KDC proxy is in use\n");
         kr->is_offline = false;
     }
 
-- 
2.4.3
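
For readers skimming the hunks above, the resulting decision flow in
krb5_auth_resolve_done() can be summarized as the following condensed sketch;
it restates the patched logic rather than adding anything new:

    kr->is_offline = be_is_offline(state->be_ctx);

    /* New in this patch: an inactive (disabled) domain also forces the
     * krb5_child operation into offline mode. */
    if (!kr->is_offline
            && sss_domain_get_state(state->domain) == DOM_INACTIVE) {
        kr->is_offline = true;
    }

    /* A configured KDC proxy overrides the offline decision. */
    if (kr->is_offline
            && sss_krb5_realm_has_proxy(dp_opt_get_cstring(kr->krb5_ctx->opts,
                                                           KRB5_REALM))) {
        kr->is_offline = false;
    }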

-------------- next part --------------
From 6b081145a93f783edcf5824f7223970d30d5ccf4 Mon Sep 17 00:00:00 2001
From: Jakub Hrozek <jhrozek at redhat.com>
Date: Wed, 2 Sep 2015 15:52:51 +0200
Subject: [PATCH 2/4] AD: Do not mark the whole back end as offline if
 subdomain lookup fails

Required for:
https://fedorahosted.org/sssd/ticket/2637

Rather, mark the domain as inactive. It will be marked as active again
later; in the meantime the main domain can continue to work online and
subdomain requests will be answered from the cache.

The lookup request itself just returns a special error code and lets the
caller handle it as appropriate (normally by disabling the subdomain
temporarily).
---
 src/providers/ad/ad_id.c | 81 +++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 67 insertions(+), 14 deletions(-)

diff --git a/src/providers/ad/ad_id.c b/src/providers/ad/ad_id.c
index 4f327f823173eb113153a556322dae4cc4b42f3e..ecaf6c993bf7ddb7ba565d40ef0ad250114f5536 100644
--- a/src/providers/ad/ad_id.c
+++ b/src/providers/ad/ad_id.c
@@ -91,17 +91,27 @@ ad_handle_acct_info_send(TALLOC_CTX *mem_ctx,
     state->ad_options = ad_options;
     state->cindex = 0;
 
+    if (sss_domain_get_state(sdom->dom) == DOM_INACTIVE) {
+        ret = ERR_SUBDOM_INACTIVE;
+        goto immediate;
+    }
+
     ret = ad_handle_acct_info_step(req);
-    if (ret == EOK) {
-        tevent_req_done(req);
-        tevent_req_post(req, be_ctx->ev);
-    } else if (ret != EAGAIN) {
-        tevent_req_error(req, ret);
-        tevent_req_post(req, be_ctx->ev);
+    if (ret != EAGAIN) {
+        goto immediate;
     }
 
     /* Lookup in progress */
     return req;
+
+immediate:
+    if (ret != EOK) {
+        tevent_req_error(req, ret);
+    } else {
+        tevent_req_done(req);
+    }
+    tevent_req_post(req, be_ctx->ev);
+    return req;
 }
 
 static errno_t
@@ -160,8 +170,7 @@ ad_handle_acct_info_done(struct tevent_req *subreq)
         state->dp_error = dp_error;
         state->err = err;
 
-        tevent_req_error(req, ret);
-        return;
+        goto fail;
     }
 
     if (sdap_err == EOK) {
@@ -170,8 +179,8 @@ ad_handle_acct_info_done(struct tevent_req *subreq)
     } else if (sdap_err == ERR_NO_POSIX) {
         disable_gc(state->ad_options);
     } else if (sdap_err != ENOENT) {
-        tevent_req_error(req, EIO);
-        return;
+        ret = EIO;
+        goto fail;
     }
 
     /* Ret is only ENOENT or ERR_NO_POSIX now. Try the next connection */
@@ -188,12 +197,27 @@ ad_handle_acct_info_done(struct tevent_req *subreq)
             /* No more connections */
             tevent_req_done(req);
         } else {
-            tevent_req_error(req, ret);
+            goto fail;
         }
         return;
     }
 
     /* Another lookup in progress */
+    return;
+
+fail:
+    if (IS_SUBDOMAIN(state->sdom->dom)) {
+        /* Deactivate subdomain on lookup errors instead of going
+         * offline completely.
+         * This is a stopgap, until our failover is per-domain,
+         * not per-backend. Unfortunately, we can't rewrite the error
+         * code on some reported codes only, because sdap_id_op code
+         * encapsulated the failover as well..
+         */
+        ret = ERR_SUBDOM_INACTIVE;
+    }
+    tevent_req_error(req, ret);
+    return;
 }
 
 errno_t
@@ -258,6 +282,16 @@ get_conn_list(struct be_req *breq, struct ad_id_ctx *ad_ctx,
         break;
     }
 
+    /* Regardless of connection types, a subdomain error must not be allowed
+     * to set the whole back end offline, rather report an error and let the
+     * caller deal with it (normally by disabling the subdomain).
+     */
+    if (IS_SUBDOMAIN(dom)) {
+        for (cindex = 0; clist[cindex] != NULL; cindex++) {
+            clist[cindex]->ignore_mark_offline = true;
+        }
+    }
+
     return clist;
 }
 
@@ -328,6 +362,11 @@ done:
 
 static void ad_account_info_complete(struct tevent_req *req);
 
+struct ad_account_info_state {
+    struct be_req *be_req;
+    struct sss_domain_info *dom;
+};
+
 void
 ad_account_info_handler(struct be_req *be_req)
 {
@@ -341,6 +380,7 @@ ad_account_info_handler(struct be_req *be_req)
     struct sdap_id_conn_ctx **clist;
     bool shortcut;
     errno_t ret;
+    struct ad_account_info_state *state;
 
     ad_ctx = talloc_get_type(be_ctx->bet_info[BET_ID].pvt_bet_data,
                              struct ad_id_ctx);
@@ -391,13 +431,21 @@ ad_account_info_handler(struct be_req *be_req)
         goto fail;
     }
 
+    state = talloc(be_req, struct ad_account_info_state);
+    if (state == NULL) {
+        ret = ENOMEM;
+        goto fail;
+    }
+    state->dom = sdom->dom;
+    state->be_req = be_req;
+
     req = ad_handle_acct_info_send(be_req, be_req, ar, sdap_id_ctx,
                                    ad_ctx->ad_options, sdom, clist);
     if (req == NULL) {
         ret = ENOMEM;
         goto fail;
     }
-    tevent_req_set_callback(req, ad_account_info_complete, be_req);
+    tevent_req_set_callback(req, ad_account_info_complete, state);
     return;
 
 fail:
@@ -412,12 +460,17 @@ ad_account_info_complete(struct tevent_req *req)
     int dp_error;
     const char *error_text = "Internal error";
     const char *req_error_text;
+    struct ad_account_info_state *state;
 
-    be_req = tevent_req_callback_data(req, struct be_req);
+    state = tevent_req_callback_data(req, struct ad_account_info_state);
+    be_req = state->be_req;
 
     ret = ad_handle_acct_info_recv(req, &dp_error, &req_error_text);
     talloc_zfree(req);
-    if (dp_error == DP_ERR_OK) {
+    if (ret == ERR_SUBDOM_INACTIVE) {
+        be_mark_dom_offline(state->dom, be_req_get_be_ctx(be_req));
+        return be_req_terminate(be_req, DP_ERR_OFFLINE, EAGAIN, "Offline");
+    } else if (dp_error == DP_ERR_OK) {
         if (ret == EOK) {
             error_text = NULL;
         } else {
-- 
2.4.3
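
The caller contract introduced here is small; a condensed sketch of how
ad_account_info_complete() reacts to the new error code (taken from the hunk
above, not new code):

    ret = ad_handle_acct_info_recv(req, &dp_error, &req_error_text);
    talloc_zfree(req);

    if (ret == ERR_SUBDOM_INACTIVE) {
        /* Only the affected (sub)domain goes offline; the rest of the
         * back end keeps working. */
        be_mark_dom_offline(state->dom, be_req_get_be_ctx(be_req));
        return be_req_terminate(be_req, DP_ERR_OFFLINE, EAGAIN, "Offline");
    }
    /* ... other results are handled as before ... */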

-------------- next part --------------
From 957b962bcd65f470cbc71071ae2cb1b0493943b2 Mon Sep 17 00:00:00 2001
From: Jakub Hrozek <jhrozek at redhat.com>
Date: Wed, 2 Sep 2015 12:10:03 +0000
Subject: [PATCH 3/4] AD: Set ignore_mark_offline=true when resolving AD root
 domain

https://fedorahosted.org/sssd/ticket/2637

Avoid going offline in cases where SSSD is connected to a child domain
but the root domain is not accessible.
---
 src/providers/ad/ad_subdomains.c | 56 +++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 23 deletions(-)

diff --git a/src/providers/ad/ad_subdomains.c b/src/providers/ad/ad_subdomains.c
index d1d468043410c80e6bf7f0f48a13bd9e962552af..8ed3dab0995f78a16f4a7df2e729ea88a39a782c 100644
--- a/src/providers/ad/ad_subdomains.c
+++ b/src/providers/ad/ad_subdomains.c
@@ -80,7 +80,8 @@ struct ad_subdomains_req_ctx {
     struct ad_id_ctx *root_id_ctx;
     struct sdap_id_op *root_op;
     size_t root_base_iter;
-    struct sysdb_attrs *root_domain;
+    struct sysdb_attrs *root_domain_attrs;
+    struct sss_domain_info *root_domain;
 
     size_t reply_count;
     struct sysdb_attrs **reply;
@@ -689,6 +690,7 @@ static errno_t ad_subdomains_get_root(struct ad_subdomains_req_ctx *ctx)
     return EAGAIN;
 }
 
+static struct sss_domain_info *ads_get_root_domain(struct ad_subdomains_req_ctx *ctx);
 static struct ad_id_ctx *ads_get_root_id_ctx(struct ad_subdomains_req_ctx *ctx);
 static void ad_subdomains_root_conn_done(struct tevent_req *req);
 
@@ -769,7 +771,14 @@ static void ad_subdomains_get_root_domain_done(struct tevent_req *req)
         }
     }
 
-    ctx->root_domain = reply[0];
+    ctx->root_domain_attrs = reply[0];
+    ctx->root_domain = ads_get_root_domain(ctx);
+    if (ctx->root_domain == NULL) {
+        DEBUG(SSSDBG_OP_FAILURE, "Could not find the root domain\n");
+        ret = EFAULT;
+        goto fail;
+    }
+
     ctx->root_id_ctx = ads_get_root_id_ctx(ctx);
     if (ctx->root_id_ctx == NULL) {
         DEBUG(SSSDBG_OP_FAILURE, "Cannot create id ctx for the root domain\n");
@@ -803,15 +812,13 @@ fail:
     be_req_terminate(ctx->be_req, dp_error, ret, NULL);
 }
 
-static struct ad_id_ctx *ads_get_root_id_ctx(struct ad_subdomains_req_ctx *ctx)
+static struct sss_domain_info *ads_get_root_domain(struct ad_subdomains_req_ctx *ctx)
 {
     errno_t ret;
     const char *name;
     struct sss_domain_info *root;
-    struct sdap_domain *sdom;
-    struct ad_id_ctx *root_id_ctx;
 
-    ret = sysdb_attrs_get_string(ctx->root_domain, AD_AT_TRUST_PARTNER, &name);
+    ret = sysdb_attrs_get_string(ctx->root_domain_attrs, AD_AT_TRUST_PARTNER, &name);
     if (ret != EOK) {
         DEBUG(SSSDBG_OP_FAILURE, "sysdb_attrs_get_string failed.\n");
         return NULL;
@@ -820,32 +827,40 @@ static struct ad_id_ctx *ads_get_root_id_ctx(struct ad_subdomains_req_ctx *ctx)
     /* With a subsequent run, the root should already be known */
     root = find_domain_by_name(ctx->sd_ctx->be_ctx->domain,
                                name, false);
-    if (root == NULL) {
-        DEBUG(SSSDBG_OP_FAILURE, "Could not find the root domain\n");
-        return NULL;
-    }
 
-    sdom = sdap_domain_get(ctx->sd_ctx->ad_id_ctx->sdap_id_ctx->opts, root);
+    return root;
+}
+
+static struct ad_id_ctx *ads_get_root_id_ctx(struct ad_subdomains_req_ctx *ctx)
+{
+    errno_t ret;
+    struct sdap_domain *sdom;
+    struct ad_id_ctx *root_id_ctx;
+
+    sdom = sdap_domain_get(ctx->sd_ctx->ad_id_ctx->sdap_id_ctx->opts,
+                           ctx->root_domain);
     if (sdom == NULL) {
         DEBUG(SSSDBG_OP_FAILURE,
-              "Cannot get the sdom for %s!\n", root->name);
+              "Cannot get the sdom for %s!\n", ctx->root_domain->name);
         return NULL;
     }
 
     if (sdom->pvt == NULL) {
         ret = ad_subdom_ad_ctx_new(ctx->sd_ctx->be_ctx,
                                    ctx->sd_ctx->ad_id_ctx,
-                                   root,
+                                   ctx->root_domain,
                                    &root_id_ctx);
         if (ret != EOK) {
             DEBUG(SSSDBG_OP_FAILURE, "ad_subdom_ad_ctx_new failed.\n");
             return NULL;
         }
+
         sdom->pvt = root_id_ctx;
     } else {
         root_id_ctx = sdom->pvt;
     }
 
+    root_id_ctx->ldap_ctx->ignore_mark_offline = true;
     return root_id_ctx;
 }
 
@@ -860,16 +875,11 @@ static void ad_subdomains_root_conn_done(struct tevent_req *req)
     ret = sdap_id_op_connect_recv(req, &dp_error);
     talloc_zfree(req);
     if (ret) {
-        if (dp_error == DP_ERR_OFFLINE) {
-            DEBUG(SSSDBG_MINOR_FAILURE,
-                  "No AD server is available, cannot get the "
-                  "subdomain list while offline\n");
-        } else {
-            DEBUG(SSSDBG_OP_FAILURE,
-                  "Failed to connect to AD server: [%d](%s)\n",
-                  ret, strerror(ret));
-        }
+        be_mark_dom_offline(ctx->root_domain, be_req_get_be_ctx(ctx->be_req));
 
+        DEBUG(SSSDBG_OP_FAILURE,
+              "Failed to connect to AD server: [%d](%s)\n",
+              ret, strerror(ret));
         goto fail;
     }
 
@@ -1040,7 +1050,7 @@ static void ad_subdomains_get_slave_domain_done(struct tevent_req *req)
      */
     ret = ad_subdomains_process(ctx, ctx->sd_ctx->be_ctx->domain,
                                 ctx->reply_count, ctx->reply,
-                                ctx->root_domain, &nsubdoms, &subdoms);
+                                ctx->root_domain_attrs, &nsubdoms, &subdoms);
     if (ret != EOK) {
         DEBUG(SSSDBG_OP_FAILURE, ("Cannot process subdomain list\n"));
         tevent_req_error(req, ret);
-- 
2.4.3
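
For context, the ignore_mark_offline flag set on the root domain's LDAP
connection is meant to keep a root-domain connection failure from taking the
whole back end offline. Below is a purely illustrative sketch of how an error
path might consult the flag; the real handling lives in the sdap_id_op /
failover code and may differ:

    /* Hypothetical error handler, for illustration only. */
    static void on_conn_failure(struct sdap_id_conn_ctx *conn,
                                struct be_ctx *be_ctx,
                                struct sss_domain_info *dom)
    {
        if (conn->ignore_mark_offline) {
            /* Trusted/root domain connection: deactivate only this
             * domain instead of the whole back end. */
            be_mark_dom_offline(dom, be_ctx);
            return;
        }

        /* Main domain connection: the back end really is offline. */
        be_mark_offline(be_ctx);
    }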

-------------- next part --------------
From 616577f8df1c8f6c2d168f0a6de75962598a20b6 Mon Sep 17 00:00:00 2001
From: Jakub Hrozek <jhrozek at redhat.com>
Date: Wed, 2 Sep 2015 13:41:26 +0200
Subject: [PATCH 4/4] IPA: Do not allow the AD lookup code to set backend as
 offline in server mode

https://fedorahosted.org/sssd/ticket/2637

In server mode, we should not allow the AD lookups to set the back end
offline. Rather, just let them report an error and deal with the error
separately.
---
 src/providers/ipa/ipa_subdomains_id.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/providers/ipa/ipa_subdomains_id.c b/src/providers/ipa/ipa_subdomains_id.c
index ad1743ae5fe7ff80518e7bc58e4df04143732719..7c609ab6e69d6f23c4de7c1c9569d73074d4e2dd 100644
--- a/src/providers/ipa/ipa_subdomains_id.c
+++ b/src/providers/ipa/ipa_subdomains_id.c
@@ -634,6 +634,7 @@ ipa_get_ad_acct_send(TALLOC_CTX *mem_ctx,
             ret = ENOMEM;
             goto fail;
         }
+        clist[1]->ignore_mark_offline = true;
         break;
     default:
         clist = talloc_zero_array(req, struct sdap_id_conn_ctx *, 2);
@@ -642,6 +643,7 @@ ipa_get_ad_acct_send(TALLOC_CTX *mem_ctx,
             goto fail;
         }
         clist[0] = ad_id_ctx->ldap_ctx;
+        clist[0]->ignore_mark_offline = true;
         clist[1] = NULL;
     }
 
@@ -1037,7 +1039,11 @@ ipa_get_ad_acct_ad_part_done(struct tevent_req *subreq)
 
     ret = ad_handle_acct_info_recv(subreq, &state->dp_error, NULL);
     talloc_zfree(subreq);
-    if (ret != EOK) {
+    if (ret == ERR_SUBDOM_INACTIVE) {
+        be_mark_dom_offline(state->obj_dom, be_req_get_be_ctx(state->be_req));
+        tevent_req_error(req, ret);
+        return;
+    } else if (ret != EOK) {
         DEBUG(SSSDBG_OP_FAILURE, "AD lookup failed: %d\n", ret);
         tevent_req_error(req, ret);
         return;
-- 
2.4.3
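
Taken together with patch 2, the net effect in the IPA server mode path is
(condensed from the hunks above):

    /* Every AD connection in clist is created with
     * ignore_mark_offline = true, so an AD-side failure cannot take the
     * whole IPA back end offline.  If the AD lookup then reports
     * ERR_SUBDOM_INACTIVE, only the trusted domain is marked offline: */
    if (ret == ERR_SUBDOM_INACTIVE) {
        be_mark_dom_offline(state->obj_dom, be_req_get_be_ctx(state->be_req));
        tevent_req_error(req, ret);
        return;
    }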


