[master 0/4] Poke at iutil.execReadlines

Vratislav Podzimek vpodzime at redhat.com
Tue Aug 26 07:46:39 UTC 2014


On Wed, 2014-08-13 at 09:38 -0400, David Shea wrote:
> On 08/13/2014 04:48 AM, Vratislav Podzimek wrote:
> > On Tue, 2014-08-12 at 09:36 -0400, David Shea wrote:
> >> On 08/12/2014 02:59 AM, Vratislav Podzimek wrote:
> >>> On Fri, 2014-08-08 at 15:32 -0400, David Shea wrote:
> >>>> Maybe this is why anaconda-yum gets stuck sometimes? I don't know. I just
> >>>> figure a thread can't starve if there ain't no thread, so this gets rid of
> >>>> a thread and adds some tests to make sure I didn't break things too badly.
> >>> I still don't understand the reason for removing the thread. I think it
> >>> is better to just pull out the output of the process and let it
> >>> terminate ASAP even if our processing takes longer. Is it only because
> >>> it's simpler and easier to debug, or because of Python's problematic
> >>> signal handling?
> >> Well, from the other side of things, I don't understand what the thread
> >> and queue gain us. There's already a buffer between the parent and the
> >> child inherent to using pipes, and if we want we can use (or, relative to
> >> this patch, go back to using, and setting bufsize back to 1 probably
> >> wouldn't be the worst idea) the additional buffer within the Popen
> >> object. From the standpoint of the child, unless it fills that kernel
> >> buffer (4k? 8k? one of those I think) and is blocking on write, from
> >> its point of view it's done with the output. Using an additional thread
> >> doesn't get it out of the child any faster, it just copies it to a
> >> different buffer faster.
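> >>
> >> For concreteness, something like the following is roughly what reading
> >> the pipe directly looks like -- a minimal sketch, not the actual
> >> iutil.execReadlines, and the function name is just for illustration:
> >>
> >>     import subprocess
> >>
> >>     def exec_readlines(argv):
> >>         # Read the child's output line by line in the calling thread --
> >>         # no reader thread, no queue.  bufsize=1 line-buffers our side
> >>         # of the pipe.
> >>         proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
> >>                                 bufsize=1, universal_newlines=True)
> >>         try:
> >>             for line in proc.stdout:
> >>                 yield line.rstrip("\n")
> >>         finally:
> >>             proc.stdout.close()
> >>             proc.wait()
> >>
> >>     for line in exec_readlines(["ls", "/tmp"]):
> >>         print(line)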
> >>
> >>> Which brings me to another question -- what happens if our code doesn't
> >>> read all the output from the process? Does it even terminate? We should
> >>> have a test case for that as well, since I think it is a completely valid,
> >>> useful and used use case (use, use, use :)).
> >> This is kind of one of those grey areas, but either the output will be
> >> hanging out in kernel buffer land and we read it without any problems,
> >> or the read fails with EPIPE and that gets translated into an OSError.
> >> Setting the bufsize back to 1 would be a good idea in light of this to
> >> avoid problems reading the last line before the process exits.
> >>
> >> As far as the process terminating, unless the write is blocked, which is
> >> unlikely, then yes, of course it does.
> > I'm not entirely sure how these things work, so let me explain my thoughts
> > and concerns about it:
> >
> > The way it worked with a thread looked to me like a traditional
> > multi-thread approach where a thread is used to offload the I/O from the
> > main processing thread. A question is whether this really makes any
> > difference in Python, but since it's I/O it probably should. Another
> > question is whether using a generator and yield doesn't achieve the same
> > thing as far as waiting for I/O goes. But probably an even more important
> > feature was that our thread read all the output from the child process
> > and put it into a queue with no size limit (I know this could be an
> > issue, but we don't run things like 'yes'), and the child process could
> > terminate without getting SIGPIPE, blocking on the output or anything
> > like that.
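> >
> > Just to make the comparison concrete, the thread-and-queue shape I mean
> > is roughly this (a sketch of the pattern, not the real anaconda code;
> > names are made up):
> >
> >     import subprocess
> >     import threading
> >     from queue import Queue
> >
> >     def exec_readlines_threaded(argv):
> >         # A reader thread drains the child's stdout into an unbounded
> >         # queue, so the child can finish writing (and exit) even if the
> >         # consumer is slow; the caller then pulls lines from the queue.
> >         proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
> >                                 universal_newlines=True)
> >         lines = Queue()              # no maxsize -> can grow without bound
> >
> >         def _reader():
> >             for line in proc.stdout:
> >                 lines.put(line.rstrip("\n"))
> >             proc.wait()
> >             lines.put(None)          # sentinel: no more output
> >
> >         reader = threading.Thread(target=_reader)
> >         reader.daemon = True
> >         reader.start()
> >
> >         while True:
> >             line = lines.get()
> >             if line is None:
> >                 return
> >             yield line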
> >
> > With the thread gone I think the child process now has to wait for us to
> > process the output line by line (so that it can pass more data to us) if
> > the size of the output exceeds the pipe buffer in the kernel. And if that
> > happens and at the same time we find what we were looking for and stop
> > processing lines from the child process, I think it may not even
> > terminate when the generator goes out of scope. That would need testing;
> > maybe when the generator goes out of scope -> the Popen is discarded ->
> > the child process is killed, I don't know. Setting bufsize to 1 could
> > only make these things worse, I think.
> 
> I don't see the child blocking on write() as a bad thing. We're trying 
> to perform synchronous I/O, so if the child writes more data to the 
> buffer than we're ready to handle, then the child can just wait for a 
> bit until we're ready. Trying to solve this with an unbounded buffer 
> actually seems like a bad idea. There is no particular signal-based 
> problem that I'm trying to solve here. I'm trying to get rid of some of 
> the inconsistency we had in starting new processes and threads, and in 
> this one particular case we have a thread that is never joined, 
> potentially running forever writing to an unlimited buffer. When I say 
> it like that it sounds kind of crazy!
> 
> Again, this is synchronous, line-based I/O. The thread handling the data 
> needs to block on the availability of new data, and the thread reading 
> the data can't do anything meaningful with the data until the thread 
> handling the data is ready. Based on that, this whole thing can be 
> handled by the same thread and an appropriately sized buffer, like 4k or 
> so. If I have some runaway process writing data into a pipe read by, 
> say, head, faster than the data can be read out of the pipe, my 
> process's write calls will block until the write buffer is ready. This 
> is expected behavior and processes are expected to deal with this kind 
> of situation, if they even care about this kind of situation. If I have 
> some runaway process writing to the current execReadlines, anaconda will 
> consume all of the system memory and crash.
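> 
> A quick way to see that blocking behaviour (a self-contained
> demonstration, not anaconda code; the 1 MiB payload is just an arbitrary
> amount larger than any plausible pipe buffer):
> 
>     import subprocess
>     import time
> 
>     # Child tries to write 1 MiB to its stdout pipe in one go.
>     proc = subprocess.Popen(
>         ["python3", "-c",
>          "import sys; sys.stdout.write('x' * (1 << 20))"],
>         stdout=subprocess.PIPE)
> 
>     time.sleep(1)
>     # Still running: its write() is blocked until we drain the pipe.
>     print("exited yet?", proc.poll())      # expected: None
> 
>     data = proc.stdout.read()              # drain the pipe ...
>     proc.wait()                            # ... now the child can finish
>     print(len(data), "bytes, exit code", proc.returncode)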
> 
> I think the process never dying is a valid concern, and I think that it 
> could be fixed by replacing the generator with a traditional iterator. 
> Have next() return the next line and have __del__ kill the child process.
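> 
> Something along these lines, say (hypothetical class name, just to sketch
> the idea; Python 3 spelling of next()):
> 
>     import subprocess
> 
>     class ExecReadlines(object):
>         def __init__(self, argv):
>             self._proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
>                                           bufsize=1,
>                                           universal_newlines=True)
> 
>         def __iter__(self):
>             return self
> 
>         def __next__(self):
>             line = self._proc.stdout.readline()
>             if not line:
>                 self._proc.stdout.close()
>                 self._proc.wait()
>                 raise StopIteration
>             return line.rstrip("\n")
> 
>         next = __next__    # Python 2 name for the same method
> 
>         def __del__(self):
>             # If the caller stopped iterating early, make sure the child
>             # doesn't sit blocked on a full pipe forever.
>             if self._proc.poll() is None:
>                 self._proc.kill()
>                 self._proc.wait()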
Makes sense to me. Thanks for the additional clarification of the background
of the issue!

-- 
Vratislav Podzimek

Anaconda Rider | RHCE | Red Hat, Inc. | Brno - Czech Republic



