On Sun, 24 Dec 2023 at 15:51, Sam Varshavchik <mrsam@courier-mta.com> wrote:
Stephen Smoogen writes:

> My apologies for bad quoting... email from phone. What version of rpm-build 
> is used, and what are some packages which are rebuilt that show this issue? 
> This may be needed if the core dump is due to something else in the 
> environment, like memory limits, etc.

It's 4.19.1 on FC39, and it's packages that I'm working on. It's glibc 
complaining about a double-free, and not any resource limits. I can get a 
backtrace out of it:

                #1  0x00007f05dd8588ee raise (libc.so.6 + 0x3e8ee)
                #2  0x00007f05dd8408ff abort (libc.so.6 + 0x268ff)
                #3  0x00007f05dd8417d0 __libc_message.cold (libc.so.6 + 0x277d0)
                #4  0x00007f05dd8b47a5 malloc_printerr (libc.so.6 + 0x9a7a5)
                #5  0x00007f05dd8b6a3a _int_free (libc.so.6 + 0x9ca3a)
                #6  0x00007f05dd8b93de free (libc.so.6 + 0x9f3de)
                #7  0x00007f05dda984ec rpmugUid (librpm.so.10 + 0x584ec)
                #8  0x00007f05dda84255 rpmfilesStat (librpm.so.10 + 0x44255)
                #9  0x00007f05dda8438f rpmfiStat (librpm.so.10 + 0x4438f)
                #10 0x00007f05dda84444 rpmfiArchiveWriteHeader (librpm.so.10 + 0x44444)
                #11 0x00007f05dda871c9 iterWriteArchiveNext (librpm.so.10 + 0x471c9)

I am looking at this core dump. I see 32 active execution threads at the 
time this whole thing went kaput, and all the code in rpmug.c is definitely 
not thread-safe. I did not look very hard, so I don't know whether there are 
mutexes higher up the call chain, but the overall behavior -- occasional core 
dumps -- is indicative of thread races.


Thanks. I was wondering whether it was dnf/rpm on the system or dnf/rpm in the chroot, but it sounds like something changed between 4.19.0.1 (which had been on my system since September?) and 4.19.1 (December).

The changelog doesn't say much beyond:
* Tue Dec 12 2023 Michal Domonkos <mdomonko@redhat.com> - 4.19.1-1
- Update to 4.19.1 (https://rpm.org/wiki/Releases/4.19.1)

I forget whether there is a way to pin an rpm version in a mock environment so that it doesn't get updated past 4.19.0. That would let you see whether
a) the problem still happens with that version (possibly indicating that whatever is calling into rpm is broken), or
b) the problem doesn't occur, and it is a change between 4.19.0.1 and 4.19.1.
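
If memory serves, a local mock config that keeps the updates repo from supplying rpm would do the pinning. The sketch below is untested and from memory: the "updates" repo id, the dnf_common_opts/excludepkgs bits, include() taking an absolute path, and the base fedora repo still carrying 4.19.0 are all assumptions, so check it against the mock docs and the templates under /etc/mock before relying on it.

    # ~/.config/mock/f39-rpm-pinned.cfg (hypothetical name)
    # Start from the stock Fedora 39 x86_64 config ...
    include('/etc/mock/fedora-39-x86_64.cfg')

    # ... then stop the updates repo from supplying any rpm subpackage,
    # so the buildroot keeps whatever the base fedora repo carries
    # (4.19.0 at F39 GA). Globs are guesses; widen or narrow as needed.
    config_opts['dnf_common_opts'] += [
        '--setopt=updates.excludepkgs=rpm,rpm-*,python3-rpm',
    ]

Then something like "mock -r ~/.config/mock/f39-rpm-pinned.cfg --rebuild foo.src.rpm" plus a glance at the rpm version in root.log should tell you whether you are in case a) or case b).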
 



--
Stephen Smoogen, Red Hat Automotive
Let us be kind to one another, for most of us are fighting a hard battle. -- Ian MacClaren