Hi,
The elfutils webpage says: "To report bugs: please open a bugzilla report against the elfutils component."
However, it seems the Red Hat bugzilla doesn't have an elfutils component, so I'm reporting it here; hope that's okay.
The attached file causes a huge malloc allocation in elfutils' nm tool, which crashes if you run it under Address Sanitizer.
The reason is likely that nm tries to allocate space for something based on a header value, no matter whether that value makes any sense. A sanity check that rejects the allocation when the file itself is smaller than the supposedly needed memory would avoid that.
Address Sanitizer trace:
==29915==ERROR: AddressSanitizer failed to allocate 0xb18002000 (47647301632) bytes of LargeMmapAllocator: 12
==19508==AddressSanitizer CHECK failed: /var/tmp/portage/sys-devel/gcc-4.9.2/work/gcc-4.9.2/libsanitizer/sanitizer_common/sanitizer_posix.cc:66 "(("unable to mmap" && 0)) != (0)" (0x0, 0x0)
    #0 0x7f1a5001df90 (/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libasan.so.1+0x5cf90)
    #1 0x7f1a500221f3 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) (/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libasan.so.1+0x611f3)
    #2 0x7f1a50027041 (/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libasan.so.1+0x66041)
    #3 0x7f1a4ffddad8 (/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libasan.so.1+0x1cad8)
    #4 0x7f1a5001868f in malloc (/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libasan.so.1+0x5768f)
    #5 0x41a421 in xmalloc /f/elfutils/elfutils-0.163/lib/xmalloc.c:52
    #6 0x4089a4 in show_symbols /f/elfutils/elfutils-0.163/src/nm.c:1212
    #7 0x40ce47 in handle_elf /f/elfutils/elfutils-0.163/src/nm.c:1484
    #8 0x4033a6 in process_file /f/elfutils/elfutils-0.163/src/nm.c:387
    #9 0x4033a6 in main /f/elfutils/elfutils-0.163/src/nm.c:248
    #10 0x7f1a4f2cef9f in __libc_start_main (/lib64/libc.so.6+0x1ff9f)
    #11 0x40438e (/old-ram/elfutils/nm+0x40438e)
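A minimal sketch of the suggested check (the helper is hypothetical, not actual elfutils code): compare the header-derived size against the file size before allocating.

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper: a size derived from an untrusted header field
   can never legitimately exceed the size of the file that supposedly
   contains the data, so reject such values before calling malloc.
   file_size is the actual on-disk size of the input file.  */
static void *
alloc_from_header (uint64_t claimed_size, uint64_t file_size)
{
  if (claimed_size > file_size)
    return NULL;  /* bogus header value, reject early */
  return malloc ((size_t) claimed_size);
}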
On 2015-06-23 18:44, Hanno Böck wrote:
The elfutils webpage says: "To report bugs: please open a bugzilla report against the elfutils component."
However, it seems the Red Hat bugzilla doesn't have an elfutils component, so I'm reporting it here; hope that's okay.
IIRC, to find elfutils, you have to choose Fedora as a product in bugzilla.
The attached file causes a huge malloc allocation in elfutils' nm tool, which crashes if you run it under Address Sanitizer.
The reason is likely that nm tries to allocate space for something based on a header value, no matter whether that value makes any sense. A sanity check that rejects the allocation when the file itself is smaller than the supposedly needed memory would avoid that.
I've reported several similar issues before. Mark replied:
"I believe the "Argument 'size' of function malloc has a fishy (possibly negative) value" in dwarf_begin_elf.c (check_section) is correct, but harmless. We do check the value doesn't actually overflow, the allocation will likely fail, but that is also checked."
https://bugzilla.redhat.com/show_bug.cgi?id=1170810#c6
Specifically about nm -- https://bugzilla.redhat.com/show_bug.cgi?id=1170810#c40 .
On Wed, Jun 24, 2015 at 12:12:45AM +0300, Alexander Cherepanov wrote:
On 2015-06-23 18:44, Hanno Böck wrote:
The elfutils webpage says: "To report bugs: please open a bugzilla report against the elfutils component."
However, it seems the Red Hat bugzilla doesn't have an elfutils component, so I'm reporting it here; hope that's okay.
IIRC, to find elfutils, you have to choose Fedora as a product in bugzilla.
Yes, the link on the webpage should already point to that. I improved it a bit to directly point to the correct component too. https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora&component=elfut...
Maybe we should have a different bug tracker? Historically all bugs were reported in the Red Hat bugzilla, then moved to the Fedora one. And I just happen to also package elfutils for Fedora, so I left it like that.
The attached file causes a huge malloc allocation in elfutils' nm tool, which crashes if you run it under Address Sanitizer.
The reason is likely that nm tries to allocate space for something based on a header value, no matter whether that value makes any sense. A sanity check that rejects the allocation when the file itself is smaller than the supposedly needed memory would avoid that.
I've reported several similar issues before. Mark replied:
"I believe the "Argument 'size' of function malloc has a fishy (possibly negative) value" in dwarf_begin_elf.c (check_section) is correct, but harmless. We do check the value doesn't actually overflow, the allocation will likely fail, but that is also checked."
I am very interested in the results of the gcc sanitizers, valgrind, fuzzers, etc. They have really helped make elfutils much more robust. For 0.163 all known crashers were fixed. So if you are still able to crash elfutils libraries or tools, please do report it.
But in this case, as far as I know, these kinds of malloc argument checks are indeed just noise. We do check the results of malloc everywhere (or should, at least). I might be wrong of course, or be missing something subtle. So please do let me know if you think it is something to fix differently from how we handle it currently.
Thanks,
Mark
On Wed, 24 Jun 2015 10:14:04 +0200 Mark Wielaard mjw@redhat.com wrote:
I am very interested in the results of the gcc sanitizers, valgrind, fuzzers, etc. They have really helped make elfutils much more robust. For 0.163 all known crashers were fixed. So if you are still able to crash elfutils libraries or tools, please do report it.
But in this case, as far as I know, these kinds of malloc argument checks are indeed just noise. We do check the results of malloc everywhere (or should, at least). I might be wrong of course, or be missing something subtle. So please do let me know if you think it is something to fix differently from how we handle it currently.
Ok, I am aware that these things are debatable.
One reason you might want to fix such issues is that they could be used to cause memory exhaustion. E.g. you have a server that processes files and someone sends it specially crafted small files that use up a lot of memory, but not so much that malloc fails.
Therefore imho it makes sense to add some sanity checks. Parsers should never accept any field sizes that are larger than the file itself.
This is probably not so much of an issue in self-contained tools like elfutils. Honestly, the biggest reason I report these is that asan complains about them and it makes fuzzing easier if they get fixed. But it's up to you. (Most other apps where I reported similar things fixed them.)
On Sat, Jun 27, 2015 at 12:45:13PM +0200, Hanno Böck wrote:
One reason you might want to fix such issues is that they could be used to cause memory exhaustion. E.g. you have a server that processes files and someone sends it specially crafted small files that use up a lot of memory, but not so much that malloc fails.
Therefore imho it makes sense to add some sanity checks. Parsers should never accept any field sizes that are larger than the file itself.
This is probably not so much of an issue in self-contained tools like elfutils. Honestly, the biggest reason I report these is that asan complains about them and it makes fuzzing easier if they get fixed. But it's up to you. (Most other apps where I reported similar things fixed them.)
The fix is indeed simple. We just have to move the fetching of the data (and detecting that it is bogus) to before allocating the memory. With that your example gives:
src/nm: bogus.elf: entry size in section 2 `(null)' is not what we expect
src/nm: bogus.elf: INTERNAL ERROR 1207 (0.163): invalid data
nm then exits before trying to allocate any memory.
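Roughly, the shape of the change, as a sketch (elf_getdata and errx are real interfaces; the surrounding function and names are illustrative, not the actual patch):

#include <err.h>
#include <gelf.h>
#include <stdlib.h>

/* Sketch of the reordering: fetch and validate the section data
   first, and only allocate the symbol array once the data is known
   to be sane.  */
static GElf_Sym *
get_symbols_checked (Elf_Scn *scn, size_t nentries, size_t expected_size)
{
  Elf_Data *data = elf_getdata (scn, NULL);
  if (data == NULL || data->d_size != expected_size)
    errx (EXIT_FAILURE, "invalid data");  /* exit before any allocation */

  return malloc (nentries * sizeof (GElf_Sym));  /* now safe to allocate */
}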
Attached patch pushed to master. Hope that helps. Looking forward to more fuzzing results :)
Thanks,
Mark
On 2015-06-24 11:14, Mark Wielaard wrote:
The attached file causes a huge malloc allocation in elfutils' nm tool, which crashes if you run it under Address Sanitizer.
The reason is likely that nm tries to allocate space for something based on a header value, no matter whether that value makes any sense. A sanity check that rejects the allocation when the file itself is smaller than the supposedly needed memory would avoid that.
I've reported several similar issues before. Mark replied:
"I believe the "Argument 'size' of function malloc has a fishy (possibly negative) value" in dwarf_begin_elf.c (check_section) is correct, but harmless. We do check the value doesn't actually overflow, the allocation will likely fail, but that is also checked."
[skip]
But in this case, as far as I know, these kinds of malloc argument checks are indeed just noise. We do check the results of malloc everywhere (or should, at least). I might be wrong of course, or be missing something subtle. So please do let me know if you think it is something to fix differently from how we handle it currently.
gcc doesn't support objects more than half the address space in size -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 . So if you are malloc'ing >2GB on 32-bit platforms you should be concerned.
On Mon, 2015-10-19 at 03:50 +0300, Alexander Cherepanov wrote:
On 2015-06-24 11:14, Mark Wielaard wrote:
But in this case, as far as I know, these kinds of malloc argument checks are indeed just noise. We do check the results of malloc everywhere (or should, at least). I might be wrong of course, or be missing something subtle. So please do let me know if you think it is something to fix differently from how we handle it currently.
gcc doesn't support objects more than half the address space in size -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 . So if you are malloc'ing >2GB on 32-bit platforms you should be concerned.
Urgh. So malloc might return a memory object larger than PTRDIFF_MAX? I had indeed assumed something like that couldn't happen. It makes the size calculations and/or indexing into such a memory object afterwards a little tricky I believe. Since pointer + something > PTRDIFF_MAX seems not well defined. hmmmm.
I think it makes sense to raise this as a bug against glibc. And we probably do have to audit all such suspicious mallocs to make sure we aren't actually doing any pointer calculations using the size (or just reject any allocation > PTRDIFF_MAX).
Thanks,
Mark
On 19.10.2015 11:01, Mark Wielaard wrote:
On Mon, 2015-10-19 at 03:50 +0300, Alexander Cherepanov wrote:
On 2015-06-24 11:14, Mark Wielaard wrote:
But in this case, as far as I know, these kinds of malloc argument checks are indeed just noise. We do check the results of malloc everywhere (or should, at least). I might be wrong of course, or be missing something subtle. So please do let me know if you think it is something to fix differently from how we handle it currently.
gcc doesn't support objects more than half the address space in size -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 . So if you are malloc'ing >2GB on 32-bit platforms you should be concerned.
Urgh. So malloc might return a memory object larger than PTRDIFF_MAX?
Yup.
I had indeed assumed something like that couldn't happen. It makes the size calculations and/or indexing into such a memory object afterwards a little tricky I believe.
True.
Since pointer + something > PTRDIFF_MAX seems not well defined. hmmmm.
pointer + something doesn't have any such limits; it just has to point into the (same) object. That is, according to the C standards. It turned out it's not well-defined in gcc and clang for something > PTRDIFF_MAX (it doesn't matter whether pointer + something > PTRDIFF_MAX). And it's not clear what works and what doesn't. E.g. this:
for (size_t i = 0; i < len; i++) buf[i] = 'A';
seems to work but this:
char *end = buf + len;
for (char *p = buf; p < end; p++)
  *p = 'B';
doesn't work. And those things which work now are not guaranteed to work in the future due to changes in optimization etc.
And things like pointer - pointer > PTRDIFF_MAX are not defined at all. They are UB in the C standards.
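To make the hazard concrete, a self-contained example (assuming a 32-bit target where a 3 GiB malloc can actually succeed):

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  /* Assume a 32-bit target where this 3 GiB allocation succeeds;
     the object is then larger than PTRDIFF_MAX (2 GiB - 1).  */
  size_t len = 3UL * 1024 * 1024 * 1024;
  char *buf = malloc (len);
  if (buf == NULL)
    return 1;

  /* The difference of two pointers more than PTRDIFF_MAX bytes
     apart does not fit in ptrdiff_t: undefined behavior per
     C11 6.5.6p9.  */
  ptrdiff_t d = (buf + len) - buf;
  printf ("%td\n", d);

  free (buf);
  return 0;
}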
I think it makes sense to raise this as a bug against glibc.
You are subscribed to the gcc bug, so you can see the full picture there :-)
And we probably do have to audit all such suspicious mallocs to make sure we aren't actually doing any pointer calculations using the size (or just reject any allocation > PTRDIFF_MAX).
If you are not in a position where you _definitely_ need allocations > PTRDIFF_MAX, I guess the easiest solution is to reject them in wrappers around malloc, mmap, etc. Do that until the compilers are fixed, i.e. only for broken compiler versions, or forever, for all compilers. You are saying that you already assumed it works that way, so you are not losing anything :-)
If you want allocations > PTRDIFF_MAX then you have to fix all cases of pointer - pointer > PTRDIFF_MAX. It affects only arrays of chars; for types with sizeof > 1 it's ok already.
If you want, in addition, to support broken compilers (this includes all(?) existing versions of gcc and clang), you have to check everything touching pointer arithmetic for miscompilations.
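The first option above, rejecting such allocations in a wrapper, could look like this minimal sketch (the wrapper name is hypothetical):

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical wrapper: refuse any request larger than PTRDIFF_MAX so
   that no object ever spans more than half the address space; callers
   treat the refusal exactly like an ordinary malloc failure.  */
static void *
bounded_malloc (size_t size)
{
  if (size > PTRDIFF_MAX)
    return NULL;
  return malloc (size);
}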
On 10/19/2015 02:50 AM, Alexander Cherepanov wrote:
gcc doesn't support objects more than half the address space in size -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 . So if you are malloc'ing >2GB on 32-bit platforms you should be concerned.
This needs to be fixed in GCC. Even if we artificially fail large allocations in malloc, there will be cases where people call mmap or shmat directly. And at least for the latter two, there is an expectation that this works with larger-than-2-GiB mappings for 32-bit processes (to the degree that Red Hat shipped very special 32-bit kernels for a while to support this).
Florian
On 19.10.2015 12:07, Florian Weimer wrote:
On 10/19/2015 02:50 AM, Alexander Cherepanov wrote:
gcc doesn't support objects more than half the address space in size -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 . So if you are malloc'ing >2GB on 32-bit platforms you should be concerned.
This needs to be fixed in GCC. Even if we artificially fail large allocations in malloc, there will be cases where people call mmap or shmat directly. And at least for the latter two, there is an expectation that this works with larger-than-2-GiB mappings for 32-bit processes (to the degree that Red Hat shipped very special 32-bit kernels for a while to support this).
I'm all for fixing it in GCC. It gives more flexibility: you cannot support huge objects in libc when your compiler doesn't support them, but you can choose whether to support them in libc when your compiler does. But I guess it's not easy to fix.
OTOH perhaps the ability to create huge objects in libc should be somehow hidden by default? As evidenced by this thread :-)
On 10/21/2015 10:17 PM, Alexander Cherepanov wrote:
On 19.10.2015 12:07, Florian Weimer wrote:
On 10/19/2015 02:50 AM, Alexander Cherepanov wrote:
gcc doesn't support objects more than half the address space in size -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 . So if you are malloc'ing >2GB on 32-bit platforms you should be concerned.
This needs to be fixed in GCC. Even if we artificially fail large allocations in malloc, there will be cases where people call mmap or shmat directly. And at least for the latter two, there is an expectation that this works with larger-than-2-GiB mappings for 32-bit processes (to the degree that Red Hat shipped very special 32-bit kernels for a while to support this).
I'm all for fixing it in GCC. It gives more flexibility: you cannot support huge objects in libc when your compiler doesn't support them, but you can choose whether to support them in libc when your compiler does. But I guess it's not easy to fix.
OTOH perhaps the ability to create huge objects in libc should be somehow hidden by default? As evidenced by this thread :-)
It's possible to set a virtual address space limit with ulimit. Is this sufficient?
Florian
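For reference, the same cap that "ulimit -v" applies from the shell can be set programmatically with the standard setrlimit interface (the 2 GiB value below is just an example):

#include <stdio.h>
#include <sys/resource.h>

int
main (void)
{
  /* Cap total virtual address space at 2 GiB, the equivalent of
     running "ulimit -v 2097152" in the shell first.  */
  struct rlimit rl = { .rlim_cur = 2UL << 30, .rlim_max = 2UL << 30 };
  if (setrlimit (RLIMIT_AS, &rl) != 0)
    {
      perror ("setrlimit");
      return 1;
    }
  /* From here on, allocations that would push total address-space
     use past the limit fail instead of succeeding.  */
  return 0;
}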
On 2015-10-21 23:18, Florian Weimer wrote:
On 10/21/2015 10:17 PM, Alexander Cherepanov wrote:
On 19.10.2015 12:07, Florian Weimer wrote:
On 10/19/2015 02:50 AM, Alexander Cherepanov wrote:
gcc doesn't support objects more than half the address space in size -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 . So if you are malloc'ing >2GB on 32-bit platforms you should be concerned.
This needs to be fixed in GCC. Even if we artificially fail large allocations in malloc, there will be cases where people call mmap or shmat directly. And at least for the latter two, there is an expectation that this works with larger-than-2-GiB mappings for 32-bit processes (to the degree that Red Hat shipped very special 32-bit kernels for a while to support this).
I'm all for fixing it in GCC. It gives more flexibility: you cannot support huge objects in libc when your compiler doesn't support them, but you can choose whether to support them in libc when your compiler does. But I guess it's not easy to fix.
OTOH perhaps the ability to create huge objects in libc should be somehow hidden by default? As evidenced by this thread :-)
It's possible to set a virtual address space limit with ulimit. Is this sufficient?
Such a limit is overly strict for this problem as it bounds the total size of all allocations. And it would have to be the default on 32-bit distros to be effective, which seems doubtful given its strictness. OTOH it's easy to change for those who need it, and I guess it could be deployed by distros very fast, without waiting for gcc or glibc fixes.
On 10/22/2015 12:47 AM, Alexander Cherepanov wrote:
Such a limit is overly strict for this problem as it bounds the total size of all allocations. And it would have to be the default on 32-bit distros to be effective, which seems doubtful given its strictness. OTOH it's easy to change for those who need it, and I guess it could be deployed by distros very fast, without waiting for gcc or glibc fixes.
Okay, this is a valid argument. I'll have to think about this some more.
Florian