I am finding that one of my c++ packages has compilation units that generate very large assembly (.s) files -- so large that any attempt to build them in memory (e.g. with -pipe) causes memory exhaustion. The only way I have found to reliably get the build to run to completion is by using -save-temps to force g++ to save the .s assembly files to disk. I also have to remove any (make) parallelism in the builds.

I am doing this:

    %configure \
        CXXFLAGS="${CXXFLAGS} -save-temps" \
        ...

and using make (-j1 implied) instead of %make_build.

Just curious if anyone has a better suggestion here.

Phil
On Tue, Jun 25, 2019 at 7:15 PM Philip Kovacs via devel devel@lists.fedoraproject.org wrote:
I am finding that one of my c++ packages has compilation units that generate very large assembly (.s) files -- so large that any attempt to build them in memory (e.g. with -pipe) causes memory exhaustion. The only way I have found to reliably get the build to run to completion is by using -save-temps to force g++ to save the .s assembly files to disk. I also have to remove any (make) parallelism in the builds.
I am doing this:
    %configure \
        CXXFLAGS="${CXXFLAGS} -save-temps" \
        ...
and using make (-j1 implied) instead of %make_build.
Just curious if anyone has a better suggestion here.
I've got a few packages with that problem, too. Besides the approaches you listed above, I've done all of the following at one point or another (just for the affected files; no need to pessimize everything):
- Reduce optimization level from -O2 to -O1 or -O0.
- Reduce debugging info level from -g (== -g2) to -g1 or -g0.
- Pass -Wl,--no-keep-memory and -Wl,--reduce-memory-overheads to the linker.
That last one is because the linker runs out of memory while linking polymake on 32-bit platforms.
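A rough sketch of applying the first two only to the affected files, using a GNU make target-specific variable ("bigfile" is just a placeholder for whichever compilation unit actually blows up):

    # override the flags for one object only, leaving the distro
    # CXXFLAGS intact for everything else
    bigfile.o: CXXFLAGS := $(filter-out -O2 -g,$(CXXFLAGS)) -O1 -g1

    # and, if it is the link step that runs out of memory:
    LDFLAGS += -Wl,--no-keep-memory -Wl,--reduce-memory-overheads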
Good luck!
On 6/26/19 00:25 UTC, Philip Kovacs via devel wrote:
I am finding that one of my c++ packages has compilation units that generate very large assembly (.s) files -- so large that any attempt to build them in memory (e.g. with -pipe) causes memory exhaustion. The only way I have found to reliably get the build to run to completion is by using -save-temps to force g++ to save the .s assembly files to disk.
Please quantify: What is the byte size of the .s file?
First hint: give the virtual machine enough resources! Either RAM, or "swap" (paging) space.
Also, -pipe itself uses at most (16 * 4KiB) more memory. Memory (RAM) exhaustion is caused by having all the (.data+.bss) of both the compiler and the assembler resident at the same time. (Most of this will be the symbol table for the assembler.) Even then, if you have enough swap space (shown by the utility program /usr/sbin/swapon, also by /usr/bin/top | grep Swap) then compilation will succeed, although much more slowly due to demand paging. You can increase swap space by using one or more "instantiated" files in the filesystem; see "man swapon".
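A minimal sketch of adding a swap file, with example sizes and paths only (the details are in "man mkswap" and "man swapon"):

    dd if=/dev/zero of=/var/tmp/swapfile bs=1M count=4096   # 4 GiB
    chmod 600 /var/tmp/swapfile
    mkswap /var/tmp/swapfile
    swapon /var/tmp/swapfile
    swapon --show   # confirm the new space is active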
It may be possible to use the gcc option -ffunction-sections (possibly combined with a filter using /usr/bin/sed, etc.) as a hint to the assembler to discard the symbols for local labels upon reaching the end of each function.
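For example, following the %configure pattern above (just a sketch -- whether this actually lowers the assembler's peak memory would need measuring):

    %configure \
        CXXFLAGS="${CXXFLAGS} -ffunction-sections -save-temps" \
        ...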
For particular cases, you can use "gcc --verbose ...", or even "strace -f -o strace.out -e trace=execve -s 500 gcc ...", to recover command sequences that may be edited according to desire.
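For instance (a sketch; bigfile.cpp is just a placeholder for the offending unit):

    strace -f -o strace.out -e trace=execve -s 500 g++ -O2 -g -c bigfile.cpp
    # strace.out now records the exact cc1plus and as command lines,
    # which can be edited and re-run by hand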
On Wednesday, June 26, 2019, 01:05:13 AM EDT, John Reiser jreiser@bitwagon.com wrote:
Please quantify: What is the byte size of the .s file?
First hint: give the virtual machine enough resources! Either RAM, or "swap" (paging) space.
The .s got up to about 375M before that particular g++ compile process died. The jobs are submitted with the usual suspects: koji or fedpkg. I don't think we have any control over the resources allocated to the vm's or containers the jobs land on (and we shouldn't).
On Wed, 26 Jun 2019 05:33:24 +0000 (UTC) Philip Kovacs via devel devel@lists.fedoraproject.org wrote:
On Wednesday, June 26, 2019, 01:05:13 AM EDT, John Reiser jreiser@bitwagon.com wrote:
Please quantify: What is the byte size of the .s file?
First hint: give the virtual machine enough resources! Either RAM, or "swap" (paging) space.
The .s got up to about 375M before that particular g++ compile process died. The jobs are submitted with the usual suspects: koji or fedpkg. I don't think we have any control over the resources allocated to the vm's or containers the jobs land on (and we shouldn't).
what package is it?
Dan
On Wednesday, June 26, 2019, 02:42:29 AM EDT, Dan Horák dan@danny.cz wrote:

what package is it?
fastbit. This evening I retired it in master since no upstream updates have been issued since 02/2016. https://src.fedoraproject.org/rpms/fastbit
The build problems are very recent -- nothing "real" has changed with this package in years; I just started getting intermittent failures. f30 and f29 are still in the git tree if you want to look at it.
On 6/26/19 3:25 AM, Philip Kovacs via devel wrote:
I am finding that one of my c++ packages has compilation units that generate very large assembly (.s) files -- so large that any attempt to build them in memory (e.g. with -pipe) causes memory exhaustion. The only way I have found to reliably get the build to run to completion is by using -save-temps to force g++ to save the .s assembly files to disk. I also have to remove any (make) parallelism in the builds.
I am doing this:
    %configure \
        CXXFLAGS="${CXXFLAGS} -save-temps" \
        ...
and using make (-j1 implied) instead of %make_build.

Just curious if anyone has a better suggestion here.
You don't need to abandon %make_build and friends for that. You can either set the RPM_BUILD_NCPUS=1 environment variable or define the %_smp_ncpus_max macro to 1 in the build environment, whichever is more convenient.
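For example (just a sketch -- either form should do):

    # in the spec, cap %make_build at one job:
    %global _smp_ncpus_max 1
    ...
    %build
    %configure CXXFLAGS="${CXXFLAGS} -save-temps"
    %make_build

    # or set it in the build environment instead:
    export RPM_BUILD_NCPUS=1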
- Panu -