On 10/30/2013 07:32 PM, Josh Boyer wrote:
> On Wed, Oct 30, 2013 at 7:25 PM, Prarit Bhargava wrote:
>> On 10/30/2013 02:10 PM, Simo Sorce wrote:
>>> On Wed, 2013-10-30 at 10:51 -0700, David Strauss wrote:
>>>> On Wed, Oct 30, 2013 at 10:09 AM, Josh Boyer wrote:
>>>>> Massive 4096-CPU machines with terabytes of DRAM and
>>>>> petabytes of storage, or more commodity-style hardware used in
>>>>> heterogeneous environments, etc.?
>>>> The latter. We'd want a separate HPC group for 512+ core machines.
>>> Or simply, sites that big can most probably take care of their own
>>> kernel builds, or seek commercial support.
>> Why limit it so low? If we're thinking about going big, well, GO BIG.
>> Fedora users want these systems supported out-of-the-box so they can get an
>> idea of whether their systems work. Stopping at 512 just seems too low these days.
>> We're talking about saving a very small amount of memory by not going to 4096.
> Remind me how much again? IIRC, it was around 2MB of additional runtime
> overhead to set NR_CPUS that high, right? That's very small on
> servers, not so small in the cloud.
Right, I think that was about it... it may be a little less than that. I
wonder, however, how many people are actually using a bleeding-edge Fedora
kernel for memory-critical cloud purposes? I have a feeling that it's in the
same order of magnitude as the number of people booting Fedora on systems with
more than 512 CPUs.
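
For anyone who wants to sanity-check the ~2MB figure: a struct cpumask in the
kernel is a bitmap with one bit per possible CPU, so raising NR_CPUS from 512
to 4096 grows each statically sized mask from 64 bytes to 512 bytes. Below is
a minimal standalone sketch of that arithmetic; the mask count of 4000 is a
hypothetical illustration, not a count taken from a real kernel image.

/*
 * Back-of-envelope sketch of the NR_CPUS memory cost discussed above.
 * The per-mask sizes follow from the kernel's one-bit-per-possible-CPU
 * cpumask bitmaps; NMASKS is a made-up illustrative figure, not a
 * measured number of static cpumasks in a real kernel.
 */
#include <stdio.h>

#define BITS_PER_BYTE	8
#define NMASKS		4000L	/* hypothetical count of static cpumasks */

static long cpumask_bytes(long nr_cpus)
{
	/* one bit per possible CPU */
	return nr_cpus / BITS_PER_BYTE;
}

int main(void)
{
	long small = 512, big = 4096;
	long delta = cpumask_bytes(big) - cpumask_bytes(small);

	printf("cpumask at NR_CPUS=%ld: %ld bytes\n", small, cpumask_bytes(small));
	printf("cpumask at NR_CPUS=%ld: %ld bytes\n", big, cpumask_bytes(big));
	printf("extra cost for %ld masks: %ld KB\n", NMASKS,
	       NMASKS * delta / 1024);
	return 0;
}

At 4000 masks the delta comes out to roughly 1.75MB, i.e. the same order of
magnitude as the ~2MB quoted above. It's also why the MAXSMP option (which is
what pushes NR_CPUS to 4096 on x86) selects CONFIG_CPUMASK_OFFSTACK, so the
large masks live in dynamic allocations rather than on the stack.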