On Tuesday, August 20, 2019 10:48:06 PM MST John Harris wrote:
> On Sunday, August 18, 2019 4:33:47 AM MST Gordan Bobic wrote:
>
>> On Sun, Aug 11, 2019 at 10:36 AM <mcatanzaro(a)gnome.org> wrote:
>>
>>> This seems like a distraction from the real goal here, which is to
>>> ensure Fedora remains responsive under heavy memory pressure,
>>
>> I think this is an overwhelmingly important point, and as somebody
>> regularly working with ARM machines with tiny amounts of RAM, it is of
>> considerable interest to me.
>> I typically use CentOS because stability is important to me, but most
>> worthwhile things filter down to it eventually, so I hope what I'm about
>> to say is not _too_ outdated.
>>
>> 1) Compile options
>> From what I can tell from rpm macro options, default on C7 seems to be
>> -O2.
>> -Os seems to help in most cases.
>> Adding -ffunction-sections -fdata-sections to defaults can help
>> considerably in producing smaller binaries, and is not the default.
>> Linking with -Wl,--gc-sections helps a lot and is not the default
>> Extensive stripping seems to already be the default (--strip-unneeded,
>> removal of .comment and .note sections)
>>
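For anyone who wants to try that combination locally, a rough sketch with
plain gcc/ld (file names are placeholders; the rpm macro plumbing differs
between releases, so check redhat-rpm-config before wiring anything into
%optflags):

    # Build with per-function / per-data sections so the linker can see
    # what is unreferenced, then ask ld to garbage-collect those sections.
    gcc -Os -ffunction-sections -fdata-sections -c foo.c bar.c
    gcc -Wl,--gc-sections -o myprog foo.o bar.o

    # Compare the result against a default -O2 build:
    size myprog
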
>> 2) Runtime configuration
>> Default stack size is 8192 KiB (ulimit -s). This unnecessarily eats a
>> considerable amount of memory. I have yet to see anything that actually
>> experiences problems with 1M.
>>
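Lowering the limit is cheap to experiment with; a minimal sketch, with the
system-wide part assuming pam_limits is in the PAM stack:

    # Current soft limit, reported in KiB (8192 = 8 MiB by default):
    ulimit -s

    # Drop it to 1 MiB for this shell and everything it spawns:
    ulimit -s 1024

    # A system-wide default would go in /etc/security/limits.conf, e.g.:
    #   *    soft    stack    1024
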
>> 3) zram
>> This was mentioned earlier in the thread, and on most of my systems,
>> memory constrained or otherwise, unless I have an overwhelming reason
>> not to, I run with zram swap equal in size to RAM with lz4 compression
>> and
>> vm.swappiness=100. I typically see compression ratios between 2:1 and 3:1
>> in zram, so on a system with, say, 10GB of RAM, it would provide 10GB of
>> very fast swap at a cost of 3-5GB of RAM. This seems like a favourable
>> trade off, especially on systems with extremely constrained RAM (e.g. ARM
>> devices with 512MB of RAM).
>>
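For reference, a minimal manual version of that zram setup (most distros
now ship a service or generator that does the same thing more cleanly, so
treat this as a sketch rather than the recommended mechanism):

    modprobe zram num_devices=1
    echo lz4 > /sys/block/zram0/comp_algorithm  # set before disksize
    echo 10G > /sys/block/zram0/disksize        # size it like the box's RAM
    mkswap /dev/zram0
    swapon -p 100 /dev/zram0                    # prefer it over disk swap
    sysctl vm.swappiness=100
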
>> I'm sure there is more that can be done, but this seems like a good start
>> as far as the cost / benefit is concerned.
>
> Python, Lua and a few other common programs can have issues with a stack
> size of 1MiB.
>
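That is easy enough to test per workload; a quick, non-authoritative
experiment (the depth at which CPython actually faults varies with the
interpreter build, so the numbers are only illustrative):

    ( ulimit -s 1024; python3 -c 'import sys; sys.setrecursionlimit(60000); f=lambda n: 0 if n==0 else f(n-1)+1; print(f(50000))' )
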
> --
> John M. Harris, Jr. <johnmh(a)splentity.com>
> Splentity
>
> https://splentity.com/
> I would also like to add that I don't see how it's even possible to run into a
> low-memory scenario on a system with 10 GiB (That's a *lot* of memory! I run
> on a Core 2 Duo based system that can support a max of 8 GiB as my daily
> driver.) often enough to have a problem with oom_killer.

I'm finding that 32G is not necessarily sufficient for compiling Clang
itself. Similarly, I've had a hard time compiling Unreal Engine from
source. I usually see ld using up to 12G of memory to link each
artifact. Using -j$(nproc) on a 16 vCPU system amplifies the issue. I
rely on adding another 32G swap file to complete the job.
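
For the clang case specifically, the build system can cap concurrent link
jobs separately from compile jobs, which keeps peak ld memory bounded; a
sketch, assuming the usual out-of-tree Ninja build layout:

    # Limit simultaneous ld processes; lld or gold also tends to need
    # less RAM than BFD ld for large C++ links.
    cmake -G Ninja ../llvm \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_PARALLEL_LINK_JOBS=2 \
        -DLLVM_USE_LINKER=lld
    ninja -j"$(nproc)"
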
I'm now using an NVMe SSD for my swap file. No more hangs, for the most
part! Usually if I lock up, it's for maybe a minute before the OOM killer
steps in.
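
The extra swap file itself is nothing exotic, for what it's worth (path and
size are just examples; dd is the conservative choice because fallocate'd
swap files aren't accepted on every filesystem):

    dd if=/dev/zero of=/swapfile2 bs=1M count=32768 status=progress
    chmod 600 /swapfile2
    mkswap /swapfile2
    swapon /swapfile2
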
However, I'd definitely like to see a non-zram "solution" for use cases
like this. Ultimately, I'd like traditional swap files not to hang the
system, even when they're placed on an md RAID array. I'm guessing that
the long-term fix is for OOM to happen sooner, and/or for kernel
schedulers to be improved, as mentioned elsewhere in this thread.
Ideally, this wouldn't require systemd or cgroups to make it possible.