https://koji.fedoraproject.org/koji/taskinfo?taskID=74840901
How should I interpret this?
Rich.
On Tue, 31 Aug 2021 10:07:38 +0100 "Richard W.M. Jones" rjones@redhat.com wrote:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840901
How should I interpret this?
something is killing the build in progress (like oomd?) and inspection of the builders in question is needed (looks like at least x86_64 + ppc64le + s390x are affected) ...
Dan
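[A quick way to confirm that on a builder is to check the kernel log for OOM-killer activity. A sketch only: the hostname is the one Dan quotes later in the thread, and it assumes shell and journal access to the builder.]

  # check the builder's kernel log for recent OOM kills
  ssh buildvm-s390x-22.s390.fedoraproject.org
  journalctl -k --since "-1 day" | grep -iE 'out of memory|oom-kill'
  # or, if the journal is not persistent on the builder:
  dmesg -T | grep -iE 'out of memory|oom-kill'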
On Tue, 31 Aug 2021 11:13:40 +0200 Dan Horák dan@danny.cz wrote:
On Tue, 31 Aug 2021 10:07:38 +0100 "Richard W.M. Jones" rjones@redhat.com wrote:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840901
How should I interpret this?
something is killing the build in progress (like oomd?) and inspection of the builders in question is needed (looks like at least x86_64 + ppc64le + s390x are affected) ...
from buildvm-s390x-22.s390.fedoraproject.org
[290483.759667] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/kojid.service,task=dnf,pid=1739780,uid=0
[290483.759690] Out of memory: Killed process 1739780 (dnf) total-vm:16787488kB, anon-rss:14531544kB, file-rss:4kB, shmem-rss:0kB, UID:0 pgtables:32650kB oom_score_adj:0
[290486.777808] oom_reaper: reaped process 1739780 (dnf), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
killed dnf, that's not good ...
Dan
On Tue, Aug 31, 2021 at 11:17:18AM +0200, Dan Horák wrote:
On Tue, 31 Aug 2021 11:13:40 +0200 Dan Horák dan@danny.cz wrote:
On Tue, 31 Aug 2021 10:07:38 +0100 "Richard W.M. Jones" rjones@redhat.com wrote:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840901
How should I interpret this?
something is killing the build in progress (like oomd?) and inspection of the builders in question is needed (looks like at least x86_64 + ppc64le + s390x are affected) ...
from buildvm-s390x-22.s390.fedoraproject.org
[290483.759667] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/kojid.service,task=dnf,pid=1739780,uid=0
[290483.759690] Out of memory: Killed process 1739780 (dnf) total-vm:16787488kB, anon-rss:14531544kB, file-rss:4kB, shmem-rss:0kB, UID:0 pgtables:32650kB oom_score_adj:0
[290486.777808] oom_reaper: reaped process 1739780 (dnf), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
killed dnf, that's not good ...
Here's another one, different package, different arch, but same sort of error:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74848368
It's tricky to tell from the log or the OOM report whether a particular package that dnf is trying to install is causing the problem, and if so which package that could be.
However, I see there are other recent f36 builds that have not failed (including glibc), so it cannot be a completely generic dnf problem:
https://koji.fedoraproject.org/koji/builds?tagID=44414
Rich.
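[One way to narrow down which package dnf was handling when it was killed is to pull the logs for the failing task and look at the last dnf activity in root.log. A rough sketch only: the task ID is the one linked above, and the flag spellings are from memory, so check koji download-logs --help.]

  # download the logs for the failed task and its child tasks
  koji download-logs --recurse 74848368
  # then look for the last dnf actions before the kill
  grep -rn -e 'Installing' -e 'Killed' . | tail -n 40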
On Tue, Aug 31, 2021 at 10:07:38AM +0100, Richard W.M. Jones wrote:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840901
How should I interpret this?
Very odd
Looking at the parent task:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840893
and picking the ppc64le arch instead, it gets even weirder. The root.log file:
https://kojipkgs.fedoraproject.org//work/tasks/903/74840903/root.log
contains strace logs:
DEBUG util.py:446: execve("/bin/kernel-install", ["/bin/kernel-install", "-v", "add", "5.14.0-61.fc36.ppc64le", "/lib/modules/5.14.0-61.fc36.ppc64le/vmlinuz"], 0x7fffc420e508 /* 12 vars */) = 0
DEBUG util.py:446: brk(NULL) = 0x13c870000
DEBUG util.py:446: readlink("/proc/self/exe", "/usr/bin/bash", 4096) = 13
DEBUG util.py:446: openat(AT_FDCWD, "/var/tmp/tmp.mock.gzh1qs1v/lib64/nosync.so", O_RDONLY|O_CLOEXEC) = 3
...snip...
What on earth is going on there?
Who would be running strace on a *production* build system host?
Regards, Daniel
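[A quick sanity check, using the root.log URL quoted above, to see how much of that log is strace output: count lines that carry a syscall return value.]

  # count strace-style "syscall(...) = result" lines captured in the mock root.log
  curl -s https://kojipkgs.fedoraproject.org//work/tasks/903/74840903/root.log | grep -c ') = '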
On Tue, Aug 31, 2021 at 10:13 AM Daniel P. Berrangé berrange@redhat.com wrote:
On Tue, Aug 31, 2021 at 10:07:38AM +0100, Richard W.M. Jones wrote:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840901
How should I interpret this?
Very odd
Looking at the parent task:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840893
and picking the ppc64le arch instead, it gets even weirder. The root.log file:
https://kojipkgs.fedoraproject.org//work/tasks/903/74840903/root.log
contains strace logs:
DEBUG util.py:446: execve("/bin/kernel-install", ["/bin/kernel-install", "-v", "add", "5.14.0-61.fc36.ppc64le", "/lib/modules/5.14.0-61.fc36.ppc64le/vmlinuz"], 0x7fffc420e508 /* 12 vars */) = 0
DEBUG util.py:446: brk(NULL) = 0x13c870000
DEBUG util.py:446: readlink("/proc/self/exe", "/usr/bin/bash", 4096) = 13
DEBUG util.py:446: openat(AT_FDCWD, "/var/tmp/tmp.mock.gzh1qs1v/lib64/nosync.so", O_RDONLY|O_CLOEXEC) = 3
...snip...
What on earth is going on there?
Who would be running strace on a *production* build system host?
Apparently us: https://src.fedoraproject.org/rpms/kernel/c/6b7647647610ebe404bd27c768e5df17...
-- 真実はいつも一つ!/ Always, there's only one truth!
On Tue, Aug 31, 2021 at 10:29:27AM -0400, Neal Gompa wrote:
On Tue, Aug 31, 2021 at 10:13 AM Daniel P. Berrangé berrange@redhat.com wrote:
On Tue, Aug 31, 2021 at 10:07:38AM +0100, Richard W.M. Jones wrote:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840901
How should I interpret this?
Very odd
Looking at the parent task:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840893
and picking the ppc64le arch instead, it gets even weirder. The root.log file:
https://kojipkgs.fedoraproject.org//work/tasks/903/74840903/root.log
contains strace logs:
DEBUG util.py:446: execve("/bin/kernel-install", ["/bin/kernel-install", "-v", "add", "5.14.0-61.fc36.ppc64le", "/lib/modules/5.14.0-61.fc36.ppc64le/vmlinuz"], 0x7fffc420e508 /* 12 vars */) = 0
DEBUG util.py:446: brk(NULL) = 0x13c870000
DEBUG util.py:446: readlink("/proc/self/exe", "/usr/bin/bash", 4096) = 13
DEBUG util.py:446: openat(AT_FDCWD, "/var/tmp/tmp.mock.gzh1qs1v/lib64/nosync.so", O_RDONLY|O_CLOEXEC) = 3
...snip...
What on earth is going on there?
Who would be running strace on a *production* build system host?
Apparently us: https://src.fedoraproject.org/rpms/kernel/c/6b7647647610ebe404bd27c768e5df17...
Is this related to the OOM issues in Koji, or something else?
(In fact now I ask that question I wonder if this could somehow be _causing_ the OOM issues ...)
Rich.
On Tue, Aug 31, 2021 at 9:41 AM Richard W.M. Jones rjones@redhat.com wrote:
On Tue, Aug 31, 2021 at 10:29:27AM -0400, Neal Gompa wrote:
On Tue, Aug 31, 2021 at 10:13 AM Daniel P. Berrangé berrange@redhat.com wrote:
On Tue, Aug 31, 2021 at 10:07:38AM +0100, Richard W.M. Jones wrote:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840901
How should I interpret this?
Very odd
Looking at the parent task:
https://koji.fedoraproject.org/koji/taskinfo?taskID=74840893
and picking the ppc64le arch instead, it gets even weirder. The root.log file:
https://kojipkgs.fedoraproject.org//work/tasks/903/74840903/root.log
contains strace logs:
DEBUG util.py:446: execve("/bin/kernel-install", ["/bin/kernel-install", "-v", "add", "5.14.0-61.fc36.ppc64le", "/lib/modules/5.14.0-61.fc36.ppc64le/vmlinuz"], 0x7fffc420e508 /* 12 vars */) = 0
DEBUG util.py:446: brk(NULL) = 0x13c870000
DEBUG util.py:446: readlink("/proc/self/exe", "/usr/bin/bash", 4096) = 13
DEBUG util.py:446: openat(AT_FDCWD, "/var/tmp/tmp.mock.gzh1qs1v/lib64/nosync.so", O_RDONLY|O_CLOEXEC) = 3
...snip...
What on earth is going on there?
Who would be running strace on a *production* build system host?
Apparently us: https://src.fedoraproject.org/rpms/kernel/c/6b7647647610ebe404bd27c768e5df17...
Is this related to the OOM issues in Koji, or something else?
(In fact now I ask that question I wonder if this could somehow be _causing_ the OOM issues ...)
No, this was added to one build at the request of releng because composes are randomly failing in kernel-core %post. As we just call kernel-install, we first tried calling kernel-install -v, but that doesn't give much useful information because it doesn't seem to add any verbosity to the subtasks it runs. The 5.14.0-61 build was done last night to try to gather some more information for them, and the strace call is expected to disappear with the next build. In fact, once they have run the composes and gotten the information they need, 61 can be untagged, and 60 will be the same kernel without the debugging calls to kernel-install. It should only impact the very few packages which have kernel-core as a buildreq.
Justin
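[For context, a rough reconstruction of what that debug build apparently did. The commit URL quoted above is truncated, so this is illustrative only, not the real spec change; the kernel version string is taken from the execve line in the log. Wrapping the scriptlet's kernel-install call in strace puts the trace on stderr, which mock copies into root.log, and that is exactly what shows up in the log quoted earlier.]

  # illustrative only -- wrap the scriptlet's kernel-install call in strace -f
  # so every child process it spawns is traced as well
  /usr/bin/strace -f /bin/kernel-install -v add 5.14.0-61.fc36.ppc64le \
      /lib/modules/5.14.0-61.fc36.ppc64le/vmlinuz

[strace -f follows every child process kernel-install spawns, so the trace output is large; Kevin notes below that this build did end up triggering the OOM on ppc64le.]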
On Tue, Aug 31, 2021 at 10:25:37AM -0500, Justin Forbes wrote:
No, this was added to one build at the request of releng because composes are randomly failing in kernel-core %post. As we just call kernel-install, we first tried calling kernel-install -v, but that doesn't give much useful information because it doesn't seem to add any verbosity to the subtasks it runs. The 5.14.0-61 build was done last night to try to gather some more information for them, and the strace call is expected to disappear with the next build. In fact, once they have run the composes and gotten the information they need, 61 can be untagged, and 60 will be the same kernel without the debugging calls to kernel-install. It should only impact the very few packages which have kernel-core as a buildreq.
Yeah, sorry to have confused anyone here. We were trying to track down this annoying sporadic failure in kernel-core trigger/new-kernel-pkg.
Unfortunately, it caused this OOM issue on ppc64le, so in the end it didn't help us much. I have untagged that kernel and killed the stuck rawhide compose.
I'm going to see if I can get it to happen with scratch livemedia against a sidetag with that kernel tagged in now. :(
kevin
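[The reproduction workflow Kevin describes is roughly the following. A sketch only: the tag names and kernel NVR are illustrative, based on this thread, not copied from an actual command line.]

  # untag the debugging kernel build from rawhide (Kevin already did this)
  koji untag-build f36 kernel-5.14.0-61.fc36
  # create a side tag and put the debug kernel back in there for testing
  fedpkg request-side-tag --base-tag f36-build      # prints e.g. f36-build-side-NNNN
  koji tag-build f36-build-side-NNNN kernel-5.14.0-61.fc36
  koji wait-repo f36-build-side-NNNN --build=kernel-5.14.0-61.fc36
  # then submit a scratch livemedia compose against that side tag's buildroot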