[SSSD] Some ELAPI design questions

Dmitri Pal dpal at redhat.com
Wed Aug 26 22:34:12 UTC 2009


Hi,

I have some thoughts that I wanted to run by you. Maybe I am getting
too deep and trying to overengineer things.
And I am definitely a bit confused about what I should use in this
specific case.

When writing to a file via ELAPI I need to understand how to deal with:
* async processing
* buffering
* streams and file descriptors

Async processing means that we need to be able to prepare data to
write and then indicate that we want it written, but without blocking.
The actual writing happens from the main loop once the file descriptor
gets the green light and we know the write would not block. Same with
read.
Async processing is all based on file descriptors, as far as I have seen.
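
Roughly what I picture is sketched below (plain write() driven by the
main loop; write_cb and out_queue are just placeholder names, not the
real tevent glue): the loop tells us the descriptor is writable and the
callback writes whatever it can without blocking.

#include <errno.h>
#include <unistd.h>

struct out_queue {
    const char *buf;   /* record still waiting to be written */
    size_t left;       /* bytes remaining */
};

/* Called only when the event loop reports the fd as writable. */
static int write_cb(int fd, struct out_queue *q)
{
    ssize_t n = write(fd, q->buf, q->left);  /* may be a partial write */
    if (n == -1) {
        return (errno == EAGAIN || errno == EINTR) ? 0 : -1;
    }
    q->buf  += n;
    q->left -= (size_t)n;
    return 0;  /* keep the fd registered until q->left drops to zero */
}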

When we talk about buffering there are different levels of buffering.
There is kernel buffering and stdio stream buffering.
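
Just to illustrate the two layers (made-up helper name): fflush() only
moves the stdio buffer into the kernel page cache, while fsync() is
what asks the kernel to push the page cache out to disk.

#include <stdio.h>
#include <unistd.h>

static int flush_both_layers(FILE *fp)
{
    if (fflush(fp) != 0) return -1;          /* stdio buffer -> kernel */
    if (fsync(fileno(fp)) == -1) return -1;  /* kernel page cache -> disk */
    return 0;
}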

Streams are more convenient to use, plus I wanted to provide a sink
that writes to stderr, but maybe I should reconsider.

So what is the problem?
It seems that stdio and async processing should not be used together. Is
this correct?
Even if I get fileno() from the stream I write to and pass it to a
library like tevent (or similar), the async processing will be screwed
up by the stream buffering data.
Is this a correct understanding?
Can I use stdio, or should I use the low-level open/write/fsync/close
functions?
There are "_unlocked" equivalents of some of the stdio functions, but
it is not clear whether they will be available on other platforms.

If I do not use stdio I will not take advantage of stdio buffering.
If I understand correctly, in this case fsync forces the kernel to
flush the page.
Is this OK?
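
For comparison, the non-stdio path I have in mind would look roughly
like this (put_record and flush_now are placeholder names; the flush
decision would come from the config parameter, and partial-write
handling is omitted for brevity):

#include <string.h>
#include <unistd.h>

static int put_record(int fd, const char *rec, int flush_now)
{
    /* write() hands the record straight to the kernel; there is no
     * stdio buffer in between. */
    if (write(fd, rec, strlen(rec)) == -1) return -1;

    /* fsync() forces the kernel to flush its dirty pages to disk. */
    if (flush_now && fsync(fd) == -1) return -1;

    return 0;
}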

If I use stdio then I can potentially have one implementation for
writing to a file and to stderr.
But here is another issue with using stdio, and stderr in particular.
The program that uses ELAPI can be a daemon which closes file
descriptors 0-2 using close().
In this case trying to use stderr would not do any good. So before
using stderr I need to check that fd 2 is still open, and if it is
not, do something about it.
The expectation and the goal is to effectively recover the stderr
stream and be able to write to it so that output goes to the terminal.
Here is the piece of code I have in mind:

fd = fileno(stderr);
err = fcntl(fd, F_GETFD);
if (err != -1) {
    /* The file descriptor is open, so I can use stderr as is... */
}
else if (errno == EBADF) {
    /* stderr is broken, we need to revive it. How?
     * Would I have to create a file descriptor, attach it to the
     * terminal and then attach it to stderr using fdopen()? Is there
     * anything better?
     * Is there anything that would help me just output things to the
     * terminal? */
}
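
For the "revive" branch, something along these lines is what I am
considering, assuming /dev/tty is an acceptable way to reach the
terminal (with /dev/null as a fallback for a daemon that has no
terminal at all):

#include <fcntl.h>
#include <unistd.h>

static int revive_stderr(void)
{
    int fd = open("/dev/tty", O_WRONLY);   /* controlling terminal */
    if (fd == -1) {
        fd = open("/dev/null", O_WRONLY);  /* no terminal available */
        if (fd == -1) return -1;
    }
    if (fd != STDERR_FILENO) {
        if (dup2(fd, STDERR_FILENO) == -1) {  /* make it fd 2 again */
            close(fd);
            return -1;
        }
        close(fd);
    }
    return 0;
}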

I am trying to understand the whole solution and have a consistent
approach that deals with all these issues.

And one more thing - keeping the file open. I suspect that by default
the file should be kept open and we just need to flush it from time to
time based on a config parameter, but there is also a need to allow
"append" functionality where the file is opened on a per-event basis.
Though it is costly and inefficient to reopen it all the time (no
doubt), there might be cases where such functionality is required.
I guess I would just close and re-open the file in this case, but
async processing messes things up a bit. You can't just enqueue data
to be written and let the event library deal with it. The callback
that writes my data to the fd would have to complete writing one
record, then close the fd, open a new fd, create a different event and
attach the next record to it. Something like this (see the sketch
below).
My brain starts to boil a bit when I start thinking about chaining
these callbacks (IMO threads and semaphores are simpler :-))
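
The per-event "append" step would look roughly like this (placeholder
names, partial-write handling omitted); the chaining part is the piece
I am unsure about:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int append_one_record(const char *path, const char *rec)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0640);
    if (fd == -1) return -1;

    if (write(fd, rec, strlen(rec)) == -1) {
        close(fd);
        return -1;
    }
    if (close(fd) == -1) return -1;

    /* Here the callback would have to create a new fd event for the
     * next queued record and hand it back to the main loop. */
    return 0;
}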

-- 
Thank you,
Dmitri Pal

Engineering Manager IPA project,
Red Hat Inc.





