On Thu, 2005-08-04 at 22:27 -0500, Paul wrote:
> On Thu, 2005-08-04 at 21:43 -0400, Dave Jones wrote:
> > On Fri, Aug 05, 2005 at 09:22:55AM +0800, Ian Kent wrote:
> > > I also find it hard to understand why it is such a problem having
> > > a larger stack. As you point out, as software evolves it ultimately
> > > becomes more complex. If the developers' design needs it and the
> > > software is reliable and efficient (i.e. it performs well), then
> > > why not?
> > >
> > > A quick calculation:
> > >
> > > 2000 * 4KB is about 8MB, out of, say, at least 1GB.
> > >
> > > Not a large percentage overhead, I think.
> >
> > Now try finding 2000 _contiguous_ pairs of pages after the machine
> > has been up for a while, under load. Memory fragmentation makes
> > this a really nasty problem, and the VM eats its own head after
> > repeatedly scanning every page in the system.
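
To make Dave's point concrete, here is a toy userspace sketch (plain C,
nothing kernel-specific; the alternating half-empty layout is a made-up
worst case, not measured fragmentation data):

#include <stdio.h>

#define NPAGES 1024                     /* pretend physical page map */

int main(void)
{
        char used[NPAGES];
        int i, free_pages = 0, free_pairs = 0;

        /* every other page pinned by some long-lived allocation */
        for (i = 0; i < NPAGES; i++)
                used[i] = i & 1;

        /* count single free pages (enough for a 4K, order-0 stack) */
        for (i = 0; i < NPAGES; i++)
                free_pages += !used[i];

        /* an order-1 (8K) stack needs an *aligned* free pair */
        for (i = 0; i < NPAGES; i += 2)
                if (!used[i] && !used[i + 1])
                        free_pairs++;

        printf("free 4K pages: %d of %d\n", free_pages, NPAGES);
        printf("free aligned 8K pairs: %d\n", free_pairs);
        return 0;
}

Half of memory is free, yet not one 8K stack can be allocated without
first reclaiming or moving something.
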
> I thought I heard that there was some work being done in the upstream
> kernel to have a process "defrag" memory in the background. This would
> help alleviate this problem on systems with long uptimes.
actually that work is different; it is intended to defrag *userspace*
pages, not kernel pages. The existing VM can already reclaim those by
simply freeing them; the point of the defrag work is to move pages
around instead of having to free them. The real problem is more complex
than that, and the kernel VM got a lot of robustness back by moving to
4KB stacks.
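
One way to see why kernel pages can't simply be migrated the way those
patches migrate user pages (a toy userspace analogy, not kernel code):
a stack is full of pointers into itself that nobody tracks, so the data
cannot be moved; userspace pages are only ever reached through page
tables, so the VM can move the data and just update the mapping.

#include <stdio.h>
#include <string.h>

int main(void)
{
        void *frame[8], *moved[8];      /* two tiny "stack frames" */

        /* store a pointer into the frame, inside the frame itself */
        frame[0] = &frame[4];

        memcpy(moved, frame, sizeof(frame));    /* "defrag" the frame */

        printf("old frame at %p\n", (void *)frame);
        printf("pointer inside the moved copy: %p (still the old frame)\n",
               moved[0]);
        return 0;
}
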
(Now, on x86-64 and other 64-bit machines this is FAR less of a
problem; actually it's almost exclusively an x86 problem. x86 has a
1GB lowmem zone, roughly 896MB in practice after the vmalloc reserve,
where all kernel stacks and other kernel data structures have to go,
while the rest of memory goes into a highmem zone. This split roughly
quadruples the VM pain; without it, multi-page stacks are still not
pretty, but they're an order of magnitude less of a problem.)
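
Putting rough numbers on the squeeze (back-of-envelope only; 896MB is
the usual i386 lowmem figure once the vmalloc area is reserved, and
2000 tasks is Ian's example count):

#include <stdio.h>

int main(void)
{
        long lowmem_kb = 896L * 1024;   /* directly mapped kernel RAM */
        long tasks = 2000;
        long stack_kb = 8;              /* 8K stacks: order-1, lowmem only */
        long total_kb = tasks * stack_kb;

        printf("%ld stacks: %ld KB (%.1f MB), %.2f%% of lowmem\n",
               tasks, total_kb, total_kb / 1024.0,
               100.0 * total_kb / lowmem_kb);
        return 0;
}

The percentage is tiny; the pain is that all ~16MB of it must be
aligned, contiguous, unmovable page pairs in lowmem, no matter how many
gigabytes of highmem the box has.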