[SSSD] [PATCH] back end: periodic task API + refresh of expired records

Jakub Hrozek jhrozek at redhat.com
Wed May 22 10:10:16 UTC 2013


On Tue, May 21, 2013 at 10:13:36PM +0200, Pavel Březina wrote:
> Hi,
> 
> On 05/21/2013 12:06 PM, Jakub Hrozek wrote:
> >On Wed, May 15, 2013 at 11:45:47AM +0200, Pavel Březina wrote:
> >
> >I have a couple of questions.
> >
> >[PATCH 1/4] back end: periodic task API
> >What is the purpose of BE_PTASK_OFFLINE_SKIP? Why not simply let the
> >back end go online and re-enable the task instead?
> 
> I'm sorry, I don't follow.
> 
> The purpose of it is: if we are about to execute a request and the
> back end is offline, we just skip the current execution and reschedule
> it to now + period. This is the current behaviour of enumeration and
> other tasks (I'm not sure about dyndns).
> 

Sure, and I was wondering whether we could simply skip enumeration
attempts while SSSD is offline and let netlink notify us about network
changes. But that wouldn't help with the case where SSSD is "offline"
because the server is having networking problems, not the client.
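
For illustration, a minimal sketch of the skip semantics as I understand
them (be_ptask_execute(), send_fn and the BE_PTASK_SCHEDULE_FROM_NOW
flag are my assumptions about the patch, not quotes from it;
be_is_offline() is the existing back end check):

    static void be_ptask_execute(struct be_ptask *task)
    {
        if (be_is_offline(task->be_ctx)
                && (task->flags & BE_PTASK_OFFLINE_SKIP)) {
            /* Back end is offline: skip this run and try again one
             * period from now. */
            be_ptask_schedule(task, task->period,
                              BE_PTASK_SCHEDULE_FROM_NOW);
            return;
        }

        /* Online (or the task may run offline): start the request. */
        task->send_fn(task);
    }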

> >
> >I think you should either always store the last_execution time or
> >introduce last_attempt. The reason is that if the back end was offline
> >for a long time and then be_ptask_schedule was called with
> >BE_PTASK_SCHEDULE_FROM_LAST, then the task might be scheduled in the
> 
> This value is only internal. Do you see any code path where this is
> possible? If so, I think it is a bug.
> 
> >past. I'm not sure how tevent behaves in this respect.
> 
> Tevent would just trigger the timer at the next opportunity. It's the
> same as if you create a timer for time T and enter the event loop at
> T+x.
> 

In that case it's fine.
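
To double-check that, here is a tiny standalone test of the behaviour
(plain libtalloc/libtevent, nothing SSSD-specific): a timer whose
deadline has already passed fires on the first trip through the event
loop instead of being dropped.

    #include <stdio.h>
    #include <talloc.h>
    #include <tevent.h>

    static void handler(struct tevent_context *ev,
                        struct tevent_timer *te,
                        struct timeval current_time,
                        void *private_data)
    {
        printf("timer fired even though its deadline already passed\n");
    }

    int main(void)
    {
        TALLOC_CTX *mem_ctx = talloc_new(NULL);
        struct tevent_context *ev = tevent_context_init(mem_ctx);

        /* Deadline ten seconds in the past. */
        struct timeval tv = tevent_timeval_current();
        tv.tv_sec -= 10;

        tevent_add_timer(ev, mem_ctx, tv, handler, NULL);

        /* The handler runs on this first iteration of the loop. */
        tevent_loop_once(ev);

        talloc_free(mem_ctx);
        return 0;
    }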

> >In general it's not clear to me from the code who is responsible for
> >freeing the task -- is it the caller, but only on success? I think this
> >should always behave the same.
> 
> I'm sorry, I don't know what you mean.
> 

The be_ptask structure is allocated using be_ptask_create() on an
external memory context, which means the memory context "owns" the task.

But in some cases, for example when be_ptask_schedule() fails, the task
is freed inside a static function (setting the pointer to NULL to avoid
a double free). That seems strange to me; I would expect that if the
user of the task owns the context, they are responsible for managing
the structure and it wouldn't just go away.
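
To illustrate the concern with a small self-contained example (the
structure and helper below are stand-ins, not the patch's code):

    #include <stdio.h>
    #include <talloc.h>

    /* Stand-in for the real structure. */
    struct be_ptask {
        int period;
    };

    /* Hypothetical internal helper that frees the task when scheduling
     * fails -- the pattern questioned above. Zeroing the caller's
     * pointer avoids a double free, but it still destroys memory that
     * the external context nominally owns. */
    static void schedule_task(struct be_ptask **task)
    {
        int failed = 1;    /* pretend tevent_add_timer() failed */

        if (failed) {
            talloc_free(*task);
            *task = NULL;
        }
    }

    int main(void)
    {
        TALLOC_CTX *be_ctx = talloc_new(NULL);   /* the external "owner" */
        struct be_ptask *task = talloc_zero(be_ctx, struct be_ptask);

        schedule_task(&task);

        /* The owner still exists, but the task it allocated is gone. */
        printf("task after failed schedule: %p\n", (void *)task);

        talloc_free(be_ctx);
        return 0;
    }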

> >
> >be_ptask_schedule(): The DEBUG message uses %d but passes in a string.
> 
> Thanks, will fix that.
> It's not even d but đ (alt+s), a relic of using a Czech keyboard (I
> know, I'm weird, but I never got used to anything else :-))
> 
> >
> >[PATCH 2/4] back end: periodical refresh of expired records API
> >The "enum be_refresh" name is too generic. Maybe "enum be_refresh_type"
> >?
> >
> >In be_refresh_get_names(), can you use sysdb_attrs_to_list() to gather
> >the names?
> >
> >Is it wise to assume that all objects have names and to name the
> >getter get_names()? Some objects might not have names at all, only,
> >for instance, UUIDs.
> 
> I thought every sysdb object has a name. What objects don't?

AD groups do not have a name after being stored from tokenGroups, for
example.

Also, HBAC rules use a UUID in the RDN and only store the name to make
reporting and debugging nicer. In general, there is no guarantee that
an object will have a name.
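
For reference, the earlier sysdb_attrs_to_list() suggestion would look
roughly like this (the helper's signature is quoted from memory, so
double-check it against src/db/sysdb.h):

    #include "db/sysdb.h"

    /* Sketch only: gather one attribute (name, UUID, ...) from every
     * entry in a single call instead of a hand-written loop. */
    static errno_t be_refresh_get_values(TALLOC_CTX *mem_ctx,
                                         struct sysdb_attrs **attrs,
                                         int count,
                                         const char *attr_name,
                                         char ***_values)
    {
        return sysdb_attrs_to_list(mem_ctx, attrs, count,
                                   attr_name, _values);
    }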

> I'll rename it then to something more generic.
> 

> >Also in be_refresh_step() you call get_names but get back a
> >"dn".
> 
> Sorry about that. I wanted to use originalDN but I was forced to
> change it to name, because the netgroups API uses name.
> 
> >
> >Please file tickets to add unit tests for these two modules and make the
> >tickets block upstream #1923.
> >
> >Nitpick:
> >>+    filter = talloc_asprintf(tmp_ctx, "(&(%s<=%lld))",
> >>+                             SYSDB_CACHE_EXPIRE, (long long)now);
> >                                                             ^^^
> >                                                         missing space
> >
> >[PATCH 3/4] back end: add refresh expired records periodic task
> >>--- a/src/config/SSSDConfig/__init__.py.in
> >>+++ b/src/config/SSSDConfig/__init__.py.in
> >>@@ -125,6 +125,7 @@ option_strings = {
> >>      'entry_cache_service_timeout' : _('Entry cache timeout length (seconds)'),
> >>      'entry_cache_autofs_timeout' : _('Entry cache timeout length (seconds)'),
> >>      'entry_cache_sudo_timeout' : _('Entry cache timeout length (seconds)'),
> >>+    'refresh_expired_interval' : _('How often should expired rules be refreshed in background'),
> >
> >We should use "objects" or "entries" instead of rules.
> 
> Good catch, I used the sudo option as a copy source :-)
> 
> >Can you explain the "circular dependency" here? I think you already told
> >me in person, but I forgot..
> 
> Let me answer with a question - do you see a way to easily remove it
> within the scope of 1.10? If yes, I'm all for it. But I think we
> should not mess with the back end and provider code at this point.
> 

The point is that I don't know what the "circular dependency" comment
means. A dependency between what? What enhancement should be tracked to
remove it?
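
For context, the hunks below resolve it with a forward declaration; the
general pattern looks like this (the be_ptask_create() prototype is
simplified for illustration):

    /* dp_ptask.h cannot include dp_backend.h again without an include
     * cycle, but it only ever handles pointers to struct be_ctx, so an
     * incomplete (forward-declared) type is enough. */
    struct be_ctx;             /* full definition lives in dp_backend.h */

    struct be_ptask;

    /* Pointer parameters do not need the complete type. */
    struct be_ptask *be_ptask_create(TALLOC_CTX *mem_ctx,
                                     struct be_ctx *be_ctx);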

> >>--- a/src/providers/dp_ptask.h
> >>+++ b/src/providers/dp_ptask.h
> >>@@ -27,6 +27,9 @@
> >>
> >>  #include "providers/dp_backend.h"
> >>
> >>+/* solve circular dependency */
> >>+struct be_ctx;
> >>+
> >>  struct be_ptask;
> >>
> >>  /**
> >>diff --git a/src/providers/dp_refresh.h b/src/providers/dp_refresh.h
> >>index 54c6aac99866415d4f33f61ac05d487f71b8c49c..d93a932637804dfbf743c395af3f8e640f79c12f 100644
> >>--- a/src/providers/dp_refresh.h
> >>+++ b/src/providers/dp_refresh.h
> >>@@ -27,6 +27,9 @@
> >>  #include "providers/dp_backend.h"
> >>  #include "providers/dp_ptask.h"
> >>
> >>+/* solve circular dependency */
> >>+struct be_ctx;
> >>+
> >
> >[PATCH 4/4] providers: refresh expired netgroups
> >I don't see one important part implemented -- what if all netgroups
> >expire at once, then they are all refreshed at once, right? Can we add
> >some throttling and refresh them in batches with some delay to avoid
> >starving the back end?
> 
> I thought we agreed in person that we would first push a basic, stupid
> solution and then fine-tune it if necessary - hopefully based on a
> real environment.

OK, I thought there was going to be some proof of concept that we would
just amend based on testing. Please file a ticket about this; we'll get
the throttling implemented post-beta.
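
For the ticket, the rough shape of the throttling I have in mind (all
names and numbers below are hypothetical):

    #include <talloc.h>
    #include <tevent.h>

    #define REFRESH_BATCH_SIZE   50
    #define REFRESH_BATCH_DELAY  2    /* seconds between batches */

    struct refresh_batch_state {
        char **names;                 /* expired entries still to refresh */
        size_t count;
        size_t index;
    };

    static void refresh_batch_handler(struct tevent_context *ev,
                                      struct tevent_timer *te,
                                      struct timeval tv,
                                      void *pvt)
    {
        struct refresh_batch_state *state = pvt;
        size_t end = state->index + REFRESH_BATCH_SIZE;

        /* Refresh at most one batch worth of entries now. */
        for (; state->index < state->count && state->index < end;
                state->index++) {
            /* ...issue the refresh request for
             * state->names[state->index] here... */
        }

        /* If anything is left, come back after a short delay instead
         * of refreshing everything at once. */
        if (state->index < state->count) {
            tv = tevent_timeval_current_ofs(REFRESH_BATCH_DELAY, 0);
            tevent_add_timer(ev, state, tv, refresh_batch_handler, state);
        }
    }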


