Hi,
This proposal was originally at https://fedorahosted.org/fesco/ticket/1104
(mitr asked me to move the discussion to fedora-devel to get more attention and feedback)
...
The http://fedoraproject.org/wiki/Hardened_Packages page mentions that "FESCo requires some packages to use PIE and relro hardening by default."
It would be great if this list could be expanded to include even more packages that are at comparatively higher risk of being exploited (locally or remotely).
Such packages will typically include various system daemons, network daemons and network enabled applications.
Lots of network daemons are already using PIE and RELRO (e.g. httpd, MariaDB). So a natural question is: why aren't packages in the same "network daemons" class, like PostgreSQL, Dovecot and MongoDB, being hardened?
Some ways to implement this proposal are:
1. Hardening flags should be turned on (by default) for all packages that are at comparatively higher risk of being exploited, or which meet some well-defined criteria (suggestions welcome).
"Packaging Guidelines" say that "Other packages may enable the flags at the maintainer's discretion."
Thinking from a security perspective, I find "Hardening flags can only be disabled for other packages at the maintainer's discretion provided enough justification is given to FESCo" to be more appropriate.
2. An alternate approach is to come up with an expanded list of packages which should be hardened.
Any feedback is welcome!
-- Dhiru
On Fri, Mar 29, 2013 at 10:08:37PM +0530, Dhiru Kholia wrote:
Hi,
This proposal was originally at https://fedorahosted.org/fesco/ticket/1104
(mitr asked me to move the discussion to fedora-devel to get more attention and feedback)
...
The http://fedoraproject.org/wiki/Hardened_Packages page mentions that "FESCo requires some packages to use PIE and relro hardening by default."
It would be great if this list could be expanded to include even more packages that are at comparatively higher risk of being exploited (locally or remotely).
Such packages will typically include various system daemons, network daemons and network enabled applications.
Qemu is surely a good candidate for this. Although it's not network-accessible, it is accessible from the guests that it runs via its huge and ill-specified surface of emulated devices.
- Hardening flags should be turned on (by default) for all packages
that are at comparatively higher risk of being exploited, or which meet some well-defined criteria (suggestions welcome).
Is there somewhere which describes what to do / what flags to enable?
Rich.
On Fri, Mar 29, 2013 at 05:13:33PM +0000, Richard W.M. Jones wrote:
On Fri, Mar 29, 2013 at 10:08:37PM +0530, Dhiru Kholia wrote:
Hi,
This proposal was originally at https://fedorahosted.org/fesco/ticket/1104
(mitr asked me to move the discussion to fedora-devel to get more attention and feedback)
...
The http://fedoraproject.org/wiki/Hardened_Packages page mentions that "FESCo requires some packages to use PIE and relro hardening by default."
It would be great if this list could be expanded to include even more packages that are at comparatively higher risk of being exploited (locally or remotely).
Such packages will typically include various system daemons, network daemons and network enabled applications.
Qemu is surely a good candidate for this. Although it's not network-accessible, it is accessible from the guests that it runs via its huge and ill-specified surface of emulated devices.
I'm running my own modified qemu package [qemu-1.4.0-5.fc20.x86_64] with hardening flags enabled. It seems to be working OK so far ...
Rich.
On 29/03/2013 23:10, Richard W.M. Jones wrote:
Qemu is surely a good candidate for this. Although it's not network-accessible, it is accessible from the guests that it runs via its huge and ill-specified surface of emulated devices.
I'm running my own modified qemu package [qemu-1.4.0-5.fc20.x86_64] with hardening flags enabled. It seems to be working OK so far ...
QEMU's own configure script takes care of enabling PIE and relro, at least on x86/Linux and x86/OpenBSD. Testers are welcome for other architectures!
Paolo
On Fri, Mar 29, 2013 at 10:43 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Fri, Mar 29, 2013 at 10:08:37PM +0530, Dhiru Kholia wrote:
- Hardening flags should be turned on (by default) for all packages
that are at comparatively higher risk of being exploited, or which meet some well-defined criteria (suggestions welcome).
Is there somewhere which describes what to do / what flags to enable?
http://wiki.debian.org/Hardening describes the various hardening flags.
"_hardened_build" rpm spec macro can be used to harden a package.
For an example, see http://pkgs.fedoraproject.org/cgit/clamav.git/tree/clamav.spec
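For illustration, this is typically all the spec file itself needs (on Fedora the macro makes redhat-rpm-config inject the PIE compile flags and the relro/now link flags):

# near the top of the spec file, before %build
%global _hardened_build 1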
-- Dhiru
On Saturday, March 30, 2013 08:54:30 AM Dhiru Kholia wrote:
On Fri, Mar 29, 2013 at 10:43 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Fri, Mar 29, 2013 at 10:08:37PM +0530, Dhiru Kholia wrote:
- Hardening flags should be turned on (by default) for all packages
that are at comparatively higher risk of being exploited, or which meet some well-defined criteria (suggestions welcome).
Is there somewhere which describes what to do / what flags to enable?
http://wiki.debian.org/Hardening describes the various hardening flags.
"_hardened_build" rpm spec macro can be used to harden a package.
For an example, see http://pkgs.fedoraproject.org/cgit/clamav.git/tree/clamav.spec
This flag is overly aggressive. We have a list of programs that need PIE enabled and doing more isn't necessarily constructive.
What would be nice is if autotools gained some macros to detect PIE and RELRO support in gcc, so that it's easy to add them to CFLAGS and LDFLAGS and apply hardening more precisely.
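Something along these lines could work (a sketch only, assuming the AX_CHECK_COMPILE_FLAG macro from the autoconf-archive is available; the HARDEN_* variable names are invented for illustration):

# configure.ac fragment
AX_CHECK_COMPILE_FLAG([-fPIE],
  [HARDEN_CFLAGS="-fPIE"
   HARDEN_LDFLAGS="-pie -Wl,-z,relro,-z,now"],
  [HARDEN_CFLAGS=""
   HARDEN_LDFLAGS=""])
AC_SUBST([HARDEN_CFLAGS])
AC_SUBST([HARDEN_LDFLAGS])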
-Steve
On Tue, Apr 2, 2013 at 9:57 PM, Steve Grubb sgrubb@redhat.com wrote:
On Saturday, March 30, 2013 08:54:30 AM Dhiru Kholia wrote:
"_hardened_build" rpm spec macro can be used to harden a package.
For an example, see http://pkgs.fedoraproject.org/cgit/clamav.git/tree/clamav.spec
This flag is overly aggressive. We have a list of programs that need PIE enabled and doing more isn't necessarily constructive.
Why exactly is it "not necessarily constructive"? If you have hard data, please share :)
Mirek
On Wednesday, April 03, 2013 01:48:17 PM Miloslav Trmač wrote:
On Tue, Apr 2, 2013 at 9:57 PM, Steve Grubb sgrubb@redhat.com wrote:
On Saturday, March 30, 2013 08:54:30 AM Dhiru Kholia wrote:
"_hardened_build" rpm spec macro can be used to harden a package.
For an example, see http://pkgs.fedoraproject.org/cgit/clamav.git/tree/clamav.spec
This flag is overly aggressive. We have a list of programs that need PIE enabled and doing more isn't necessarily constructive.
Why exactly is it "not necessarily constructive"? If you have hard data, please share :)
Because PIE is only supposed to be on long running apps and setuid apps. If it's on everything, it will slow the system down too much, and then you get a knee-jerk reaction to remove it from everything. We want it applied when needed and otherwise not.
Also, the hardened macro adds the "now" directive to the linker. This is needed for PIE apps since there is a table for the indirection, but it also adds additional slowdown to startup. Jakub mentioned pretty much the same thing: too much PIE is not a good thing.
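Whether a binary got both protections is easy to verify after the fact (a quick check, assuming a trivial hello.c; BIND_NOW on top of a GNU_RELRO segment is what "full relro" means):

$ gcc -O2 -fPIE -pie -Wl,-z,relro,-z,now hello.c -o hello
$ readelf -d hello | grep BIND_NOW
$ readelf -l hello | grep GNU_RELRO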
What we want is a balance between fast and secure. That is how the rpm-chksec script is written. It's coded to grade the distribution based on this philosophy.
-Steve
On Wed, Apr 3, 2013 at 2:05 PM, Steve Grubb sgrubb@redhat.com wrote:
On Wednesday, April 03, 2013 01:48:17 PM Miloslav Trmač wrote:
On Tue, Apr 2, 2013 at 9:57 PM, Steve Grubb sgrubb@redhat.com wrote:
On Saturday, March 30, 2013 08:54:30 AM Dhiru Kholia wrote:
"_hardened_build" rpm spec macro can be used to harden a package.
For an example, see http://pkgs.fedoraproject.org/cgit/clamav.git/tree/clamav.spec
This flag is overly aggressive. We have a list of programs that need PIE enabled and doing more isn't necessarily constructive.
Why exactly is it "not necessarily constructive"? If you have hard data, please share :)
Because PIE is only supposed to be on long running apps and setuid apps. If it's on everything, it will slow the system down too much, and then you get a knee-jerk reaction to remove it from everything. We want it applied when needed and otherwise not.
How much does it slow things down? I'm fairly certain you don't have any good data on this point. Dhiru is working out how to best figure this out, FWIW.
I'm willing to agree that PIE on x86 is going to be very slow due to register pressure. However, we should consider revisiting what we want built as PIE. Is Firefox a long running process? It is on my system. Revisiting our current list and trying to understand our needs is never a bad thing to do. Existing architectures are different now than they were when that list was created, no harm comes from talking about it.
On 04/04/2013 04:05, Josh Bressers wrote:
On Wed, Apr 3, 2013 at 2:05 PM, Steve Grubb sgrubb@redhat.com wrote:
On Wednesday, April 03, 2013 01:48:17 PM Miloslav Trmač wrote:
On Tue, Apr 2, 2013 at 9:57 PM, Steve Grubb sgrubb@redhat.com wrote:
On Saturday, March 30, 2013 08:54:30 AM Dhiru Kholia wrote:
"_hardened_build" rpm spec macro can be used to harden a package.
For an example, see http://pkgs.fedoraproject.org/cgit/clamav.git/tree/clamav.spec
This flag is overly aggressive. We have a list of programs that need PIE enabled and doing more isn't necessarily constructive.
Why exactly is it "not necessarily constructive"? If you have hard data, please share :)
Because PIE is only supposed to be on long running apps and setuid apps. If it's on everything, it will slow the system down too much, and then you get a knee-jerk reaction to remove it from everything. We want it applied when needed and otherwise not.
How much does it slow things down? I'm fairly certain you don't have any good data on this point. Dhiru is working out how to best figure this out, FWIW.
I'm willing to agree that PIE on x86 is going to be very slow due to register pressure.
Yes, but not on x86-64 which has %rip-relative addressing. It is probably a wash there.
Also, it is not really _that_ slow. The same register pressure issue exists with PIC. If it were that bad, we would have problems with all the code we run from shared libraries.
Paolo
However, we should consider revisiting what we want built as PIE. Is Firefox a long running process? It is on my system. Revisiting our current list and trying to understand our needs is never a bad thing to do. Existing architectures are different now than they were when that list was created, no harm comes from talking about it.
-- JB
On Thu, Apr 04, 2013 at 09:39:18AM +0200, Paolo Bonzini wrote:
I'm willing to agree that PIE on x86 is going to be very slow due to register pressure.
Yes, but not on x86-64 which has %rip-relative addressing. It is probably a wash there.
It isn't. While the register pressure doesn't increase on x86-64 due to PIC/PIE, and the PIC register setup doesn't require any code, whenever you access data that isn't known at compile time to live in the binary/shared library itself (i.e. mostly static or hidden symbols), PIC/PIE means an extra indirection through the GOT.
Jakub
On 04/04/2013 09:47 AM, Jakub Jelinek wrote:
On Thu, Apr 04, 2013 at 09:39:18AM +0200, Paolo Bonzini wrote:
I'm willing to agree that PIE on x86 is going to be very slow due to register pressure.
Yes, but not on x86-64 which has %rip-relative addressing. It is probably a wash there.
On x86_64, GCC uses %rip-relative addressing even in non-PIC mode.
It isn't, while the register pressure doesn't increase on x86-64 due to PIC/PIE and PIC register setup doesn't require any code, whenever you access data that aren't known at compile time to be in the binary/shared library (i.e. static or hidden mostly), then for PIC/PIE it means an extra indirection through GOT.
For PIE, ld should be able to avoid the indirection for function calls because the function in the binary always takes precedence. (A bit like protected visibility.) It seems this optimization is already implemented.
I think a similar optimization would be possible for access to global variables because ld could compute the final layout of all global variables in the binary itself, just as in the non-PIE case.
On Thu, Apr 04, 2013 at 10:27:31AM +0200, Florian Weimer wrote:
On 04/04/2013 09:47 AM, Jakub Jelinek wrote:
On Thu, Apr 04, 2013 at 09:39:18AM +0200, Paolo Bonzini wrote:
I'm willing to agree that PIE on x86 is going to be very slow due to register pressure.
Yes, but not on x86-64 which has %rip-relative addressing. It is probably a wash there.
On x86_64, GCC uses %rip-relative addressing even in non-PIC mode.
Only rarely, when it is less expensive or a wash. Whenever you need more complex addressing modes, we don't emit %rip addressing for non-pic but have to for pic. Say we emit:

  movzwl local_symbol(%rsi,%rsi), %edx

for non-pic, but have to emit:

  leaq   local_symbol(%rip), %rax
  movzwl (%rax,%rsi,2), %edx

for pic/pie code (and only when the symbol is static/hidden, otherwise there would need to be an extra indirection).
I think a similar optimization would be possible for access to global variables because ld could compute the final layout of all global variables in the binary itself, just as in the non-PIE case.
Nope. The thing is, depending on whether the variable is known to bind locally (for PIC that is essentially static or hidden visibility; for PIE you can add to that global vars defined in the current CU), you either emit code that avoids the indirection (say %rip addressing, GOTOFF etc.), or emit code that does the indirection. There is no linker relaxation that could turn code that uses the indirection (loads from GOT) into code that doesn't (even accepting worse generated code, increased register pressure etc.); we'd need extra relocations, and as there are just too many forms of the instructions on i?86/x86_64, it would be pretty hard. So, PIE is definitely not free, not even on x86_64.
Jakub
On 04/04/2013 10:42 AM, Jakub Jelinek wrote:
I think a similar optimization would be possible for access to global variables because ld could compute the final layout of all global variables in the binary itself, just as in the non-PIE case.
Nope. The thing is, depending on if the variable is known to bind locally (for PIC that is essentially static or hidden visibility, for PIE you can add to that global vars defined in the current CU), you either emit code that avoids the indirection (say %rip addressing, GOTOFF etc.),
Even in PIE mode, it is possible to bind all global variables locally. Even if the variable is defined in a DSO, we can allocate space for it in the main program and arrange for the GOT indirection in the DSO to point there. The DSO would use the indirection, but the main program wouldn't.
It's slightly backwards, but isn't this how variables in DSOs are referenced from position-dependent code?
On Thu, Apr 04, 2013 at 10:59:41AM +0200, Florian Weimer wrote:
On 04/04/2013 10:42 AM, Jakub Jelinek wrote:
I think a similar optimization would be possible for access to global variables because ld could compute the final layout of all global variables in the binary itself, just as in the non-PIE case.
Nope. The thing is, depending on if the variable is known to bind locally (for PIC that is essentially static or hidden visibility, for PIE you can add to that global vars defined in the current CU), you either emit code that avoids the indirection (say %rip addressing, GOTOFF etc.),
Even in PIE mode, it is possible to bind all global variables locally. Even if the variable is defined in a DSO, we can allocate space for it in the main program and arrange for the GOT indirection in the DSO to point there. The DSO would use the indirection, but the main program wouldn't.
It's slightly backwards, but isn't this how variables in DSOs are referenced from position-dependent code?
That requires copy relocations being used even for PIEs, so you'd need to change the whole toolchain for that, and somehow deal with the new dependencies (as in, PIE code with modified GCC would have to be linked with a new linker, otherwise it wouldn't work). Even if you do this, still PIE code won't be as fast as position dependent code, but it will be closer to that. Of course, you'll still be unable to prelink those, so the startup cost will be there in any case, so I hope you aren't suggesting we build ls, grep, sh, thousands of little GUI apps, etc. as PIE.
Jakub
On 04/04/2013 11:16 AM, Jakub Jelinek wrote:
On Thu, Apr 04, 2013 at 10:59:41AM +0200, Florian Weimer wrote:
On 04/04/2013 10:42 AM, Jakub Jelinek wrote:
I think a similar optimization would be possible for access to global variables because ld could compute the final layout of all global variables in the binary itself, just as in the non-PIE case.
Nope. The thing is, depending on if the variable is known to bind locally (for PIC that is essentially static or hidden visibility, for PIE you can add to that global vars defined in the current CU), you either emit code that avoids the indirection (say %rip addressing, GOTOFF etc.),
Even in PIE mode, it is possible to bind all global variables locally. Even if the variable is defined in a DSO, we can allocate space for it in the main program and arrange for the GOT indirection in the DSO to point there. The DSO would use the indirection, but the main program wouldn't.
It's slightly backwards, but isn't this how variables in DSOs are referenced from position-dependent code?
That requires copy relocations being used even for PIEs, so you'd need to change the whole toolchain for that, and somehow deal with the new dependencies (as in, PIE code with modified GCC would have to be linked with a new linker, otherwise it wouldn't work).
Sriraman Tallam has written a GCC patch which does this:
https://gcc.gnu.org/ml/gcc-patches/2014-05/msg01215.html
Related patches to binutils have already been committed.
On Wednesday, April 03, 2013 09:05:18 PM Josh Bressers wrote:
On Wed, Apr 3, 2013 at 2:05 PM, Steve Grubb sgrubb@redhat.com wrote:
On Wednesday, April 03, 2013 01:48:17 PM Miloslav Trmač wrote:
On Tue, Apr 2, 2013 at 9:57 PM, Steve Grubb sgrubb@redhat.com wrote:
On Saturday, March 30, 2013 08:54:30 AM Dhiru Kholia wrote:
"_hardened_build" rpm spec macro can be used to harden a package.
For an example, see http://pkgs.fedoraproject.org/cgit/clamav.git/tree/clamav.spec
This flag is overly aggressive. We have a list of programs that need PIE enabled and doing more isn't necessarily constructive.
Why exactly is it "not necessarily constructive"? If you have hard data, please share :)
Because PIE is only supposed to be on long running apps and setuid apps. If it's on everything, it will slow the system down too much, and then you get a knee-jerk reaction to remove it from everything. We want it applied when needed and otherwise not.
How much does it slow things down? I'm fairly certain you don't have any good data on this point. Dhiru is working out how to best figure this out, FWIW.
I'm willing to agree that PIE on x86 is going to be very slow due to register pressure. However, we should consider revisiting what we want built as PIE. Is Firefox a long running process?
Firefox fits into the category of a parser of untrusted media. Therefore it should be hardened.
It is on my system. Revisiting our current list and trying to understand our needs is never a bad thing to do. Existing architectures are different now than they were when that list was created, no harm comes from talking about it.
I think the list, if enforced, is good enough for our needs. PIE is only part of the issue. There are other things that are more important, in my opinion:
1) Heap randomization is only 14 bits! If PIE is enabled, it has 29 bits of randomization. We need to do something about that.

2) Even though we have only a handful of apps violating NX stack, we have a bunch of apps that mmap writable and executable memory (see the quick check after this list). For example, polkitd running as root has WX memory. Most of KDE does, so does Cinnamon. Part of the problem seems to be libjs. What it's doing is partially compiling, compiling as needed, and optimizing as the script runs. If you look into libjs, you see that it has calls to mprotect to actually solve the problem. However, there is an obvious typo that disables it. So, when you fix that, you find out the actual use of the protections is completely missing. The code is BSD...so maybe the actual use is behind closed doors.

3) We need the -fstack-protector-strong patch for gcc. This effectively doubles the coverage of the stack protector. For example, CVE-2013-0288 would have been stopped by -fstack-protector-strong.

4) We need to get the fortify_source macros into gnulib.
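A quick way to spot the WX mappings mentioned in item 2 (a sketch; assumes polkitd is running and pidof returns a single pid; the permissions field is the second column of the maps file):

$ awk '$2 ~ /wx/' /proc/$(pidof polkitd)/maps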
Last week I was looking at nspr and wondering why fortify_source was not getting used and found that it wrapped functions for "portability". For example, it has PL_strcpy which only wraps strcpy. The problem is the size information is lost by the wrapping so that the fortify macros have nothing to work with. I know this is a common technique, I've seen it a lot. But this idiom defeats a security mechanism.
PIE is a second layer defence. Assuming an attacker has exploited something, it makes ROP harder to do. I'd like to fix some of these other issues that stop attacks at the beginning.
-Steve
On Thu, Apr 04, 2013 at 09:26:34AM -0400, Steve Grubb wrote:
Last week I was looking at nspr and wondering why fortify_source was not getting used and found that it wrapped functions for "portability". For example, it has PL_strcpy which only wraps strcpy. The problem is the size information is lost by the wrapping so that the fortify macros have nothing to work with. I know this is a common technique, I've seen it a lot. But this idiom defeats a security mechanism.
Wrapping memory and string ops (except perhaps wrapping in inline functions) is a terrible idea, not just because of -D_FORTIFY_SOURCE but for many other reasons too: the compiler then can't optimize the calls when they are made with constant arguments (lengths, const string literals, etc.), can't choose the best generated code, can't reason about them from an aliasing or points-to POV, and can't attempt to optimize, say, PL_strcat (str1, "abcde"); PL_strcat (str1, str2); etc. So, whenever somebody comes across such a mess in packages we ship in Fedora, please try to undo it by adding #defines or inline wrappers.
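To illustrate the inline-wrapper fix Jakub suggests, a minimal sketch (hypothetical code, not NSPR's actual source): making the wrapper an always-inline function keeps the real destination buffer visible at every call site, so the -D_FORTIFY_SOURCE machinery can still check it:

#include <string.h>

/* Inlined into each caller, so strcpy sees the caller's buffer and
   _FORTIFY_SOURCE can apply __builtin_object_size checking there. */
static inline __attribute__((always_inline))
char *PL_strcpy(char *dest, const char *src)
{
    if (dest == NULL || src == NULL)
        return NULL;        /* preserve the wrapper's NULL tolerance */
    return strcpy(dest, src);
}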
Jakub
On 04/04/13 at 09:26am, Steve Grubb wrote:
On Wednesday, April 03, 2013 09:05:18 PM Josh Bressers wrote:
On Wed, Apr 3, 2013 at 2:05 PM, Steve Grubb sgrubb@redhat.com wrote:
How much does it (PIE) slow things down? I'm fairly certain you don't have any good data on this point. Dhiru is working out how to best figure this out, FWIW.
I'm willing to agree that PIE on x86 is going to be very slow due to register pressure. However, we should consider revisiting what we want built as PIE. Is Firefox a long running process?
Firefox fits into the category of a parser of untrusted media. Therefore it should be hardened.
FWIW, Ubuntu has been shipping PIE enabled Firefox for years now.
https://bugs.launchpad.net/ubuntu/+source/xulrunner-1.9.1/+bug/507744
I repeated the benchmarks (mentioned in the above bug report) for Firefox 20.0 running on Fedora 18 64-bit.
http://dromaeo.com/?id=193034,193041,193043,193080,193080,193081,193082
First four columns are stock Firefox and last two columns are PIE enabled Firefox.
There are no performance regressions it seems (at least not in the Dromaeo JavaScript performance testing tool).
Upstream Bug (to add support for building Firefox as PIE),
https://bugzilla.mozilla.org/show_bug.cgi?id=857628
-- Dhiru
On Fri, Apr 05, 2013 at 07:31:55PM +0530, Dhiru Kholia wrote:
On 04/04/13 at 09:26am, Steve Grubb wrote:
On Wednesday, April 03, 2013 09:05:18 PM Josh Bressers wrote:
On Wed, Apr 3, 2013 at 2:05 PM, Steve Grubb sgrubb@redhat.com wrote:
How much does it (PIE) slow things down? I'm fairly certain you don't have any good data on this point. Dhiru is working out how to best figure this out, FWIW.
I'm willing to agree that PIE on x86 is going to be very slow due to register pressure. However, we should consider revisiting what we want built as PIE. Is Firefox a long running process?
Firefox fits into the category of a parser of untrusted media. Therefore it should be hardened.
FWIW, Ubuntu has been shipping PIE enabled Firefox for years now.
https://bugs.launchpad.net/ubuntu/+source/xulrunner-1.9.1/+bug/507744
I repeated the benchmarks (mentioned in the above bug report) for Firefox 20.0 running on Fedora 18 64-bit.
Firefox as a benchmark doesn't look like a good idea (and I'm really surprised that we don't compile it as PIE, I thought we'd been doing that for years). The special thing about firefox is that it is a really tiny binary (< 64K of .text) with almost no libraries linked directly (just -lc, -ldl, -lstdc++, -lpthread and their dependencies (-lm, -lgcc_s)), so indeed the relocation processing isn't very expensive (only ~ 130 relocations) before reaching main, and prelink can't make it significantly faster. Firefox is designed to dlopen all of its code from main and later on, something prelink doesn't significantly improve (the only improvement could be if all/some of those dlopened libraries were prelinked (or just prelink -R relocated) to pre-picked addresses; then it could avoid relative relocation processing). Even just starting firefox to show a window performs around 8000 relocations, though everything except the first ~ 130 happens during dlopen.
If you want to benchmark something where it makes a difference, you want to benchmark some program where the binary contains a significant amount of code and which links against lots of shared libraries, or stuff like configure scripts or similar usage scenarios where thousands of small short-running programs are spawned each second and where relocation processing consumes a significant amount of time.
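For what it's worth, glibc's dynamic loader can report relocation counts directly, which helps pick a benchmark candidate (a quick check; any binary works):

$ LD_DEBUG=statistics /bin/true 2>&1 | grep -i reloc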
Jakub
On 04/05/13 at 04:16pm, Jakub Jelinek wrote:
On Fri, Apr 05, 2013 at 07:31:55PM +0530, Dhiru Kholia wrote:
I repeated the benchmarks (mentioned in the above bug report) for Firefox 20.0 running on Fedora 18 64-bit.
Firefox as a benchmark doesn't look like a good idea (and I'm really surprised that we don't compile it as PIE, I thought we'd been doing that for years). The special thing about firefox is that it is a really tiny binary (< 64K of .text) with almost no libraries linked directly (just -lc, -ldl, -lstdc++, -lpthread and their dependencies (-lm, -lgcc_s)), so indeed the relocation processing isn't very expensive (only ~ 130 relocations) before reaching main, and prelink can't make it significantly faster. ...
I see the problem with using Firefox as a benchmark now.
So I started looking for more suitable applications to benchmark and came across Gimp.
$ ldd /usr/bin/gimp-2.8 | wc -l
77
$ du -hs /usr/bin/gimp-2.8
5.8M    /usr/bin/gimp-2.8
Looks good to use as a benchmark?
I ran some benchmarks on various builds of Gimp. Please note that the F18 upstream build has PIE enabled.
The benchmarking steps are described at https://github.com/kholia/gimp-bench
My Results
==========
~ 39.160s ==> upstream my build
~ 39.750s ==> upstream stock build
~ 40.850s ==> "non-PIE" my build
The PIE version turns out to be a bit faster. This seems weird and I can't explain it.
Grant repeated the tests independently and his results are below,
Grant's Results
===============
(No PIE)
[gm@localhost gimp-bench]$ time ./launch.sh
batch command executed successfully

real    2m13.267s
user    1m25.577s
sys     0m46.474s
(PIE)
[gm@localhost gimp-bench]$ time ./launch.sh
batch command executed successfully

real    2m4.328s
user    1m22.506s
sys     0m44.958s
The PIE build is faster than the non-PIE build (again!), it seems.
...
These results are weird and I can't explain them. Do you have any insights on why the PIE build of Gimp is faster than the non-PIE build?
-- Dhiru
On 03/29/2013 09:38 AM, Dhiru Kholia wrote:
Lots of network daemons are already using PIE and RELRO (e.g. httpd, MariaDB). So a natural question is: why aren't packages in the same "network daemons" class, like PostgreSQL, Dovecot and MongoDB, being hardened? Some ways to implement this proposal are:
- Hardening flags should be turned on (by default) for all packages
that are at comparatively higher risk of being exploited, or which meet some well-defined criteria (suggestions welcome).
"Packaging Guidelines" say that "Other packages may enable the flags at the maintainer's discretion."
Thinking from a security perspective, I find "Hardening flags can only be disabled for other packages at the maintainer's discretion provided enough justification is given to FESCo" to be more appropriate.
-fPIE code is larger and takes longer to execute. The cost varies from minimal (< 2%) in many cases to 10% or more for "non-dynamic" arrays on i686. -fPIE for Thumb mode on ARM is particularly painful.
RELRO can cost one extra page of physical RAM per process because the placement of the RELRO region tends to increase fragmentation and decrease sharability.
I suggest that any requirement for increased hardening be restricted to only those programs which execute with elevated privileges. The package maintainer should retain primary discretion for anything which executes with "ordinary" user privileges.
--
CC to the users list because this is an interesting topic in general.
On 29.03.2013 18:48, John Reiser wrote:
Thinking from a security perspective, I find "Hardening flags can only be disabled for other packages at the maintainer's discretion provided enough justification is given to FESCo" to be more appropriate.
-fPIE code is larger and takes longer to execute. The cost varies from minimal (< 2%) in many cases to 10% or more for "non-dynamic" arrays on i686
i686 is becoming more or less dead
A distinction could be made in spec files to harden only the x86_64 binaries in borderline cases, because in the server context i686 is already dead, except for legacy systems which are not relevant for recent Fedora versions.
-fPIE for Thumb mode on ARM is particularly painful.
RELRO can cost one extra page of physical RAM per process because the placement of the RELRO region tends to increase fragmentation and decrease sharability.
I suggest that any requirement for increased hardening be restricted to only those programs which execute with elevated privileges. The package maintainer should retain primary discretion for anything which executes with "ordinary" user privileges
wrong point of view
The question is what data a binary is supposed to get as input. "Ordinary user privileges" do not help you much in the case of local root exploits. Keep also in mind that in the context of network services, or software which typically communicates with foreign network services, a "local exploit" very quickly becomes a "remote exploit".
example:
* a foreign user uploads images / pdf-files to a web-form
* on the server side, imagemagick or poppler libs process the data
* if these libraries are vulnerable to the input file and at the same moment a local root exploit is not fixed on the machine, you are very soon in the situation of a root-exploit
* please do not argue with "but you need this and this AND this"; the experience of the last years shows how creatively attackers act with RANDOM input data
Yes, performance matters. Yes, I am the first who likes optimized binaries, but NOT at the price of weaker security.
On 03/29/2013, Reindl Harald wrote:
-fPIE code is larger and takes longer to execute. The cost varies from minimal (< 2%) in many cases to 10% or more for "non-dynamic" arrays on i686
i686 is becoming more or less dead
A distinction could be made in spec files to harden only the x86_64 binaries in borderline cases, because in the server context i686 is already dead, except for legacy systems which are not relevant for recent Fedora versions.
The usage of i686 user-mode software is *INCREASING*, especially on x86_64 machines which run a 64-bit kernel. The same amount of physical RAM can support several percent more simultaneous 32-bit user-mode processes before paging. 64-bit .text, pointers, and longs are larger. Only a few applications need a 64-bit address space. It will be many years before i686 user mode dies.
[snip]
- please do not argue with "but you need this and this AND this"; the experience of the last years shows how creatively attackers act with RANDOM input data
I'm arguing the total expected benefit (integral over time of estimated exposure times expected prevented loss) versus actual cost (more machines, RAM, heat, [avoided] latency). I'm not convinced that PIE+RELRO is worth it except for a process with elevated privilege or extended lifetime.
Please cite some documented cases where PIE and/or RELRO prevented or delayed an actual loss, or signaled with sufficient warning to be useful. Meanwhile I'm spending more each month to consume more resources because of PIE+RELRO.
On 29.03.2013 23:07, John Reiser wrote:
On 03/29/2013, Reindl Harald wrote:
-fPIE code is larger and takes longer to execute. The cost varies from minimal (< 2%) in many cases to 10% or more for "non-dynamic" arrays on i686
i686 is becoming more or less dead
A distinction could be made in spec files to harden only the x86_64 binaries in borderline cases, because in the server context i686 is already dead, except for legacy systems which are not relevant for recent Fedora versions.
The usage of i686 user-mode software is *INCREASING*, especially on x86_64 machines which run a 64-bit kernel. The same amount of physical RAM can support several percent more simultaneous 32-bit user-mode processes before paging. 64-bit .text, pointers, and longs are larger. Only a few applications need a 64-bit address space. It will be many years before i686 user mode dies.
The machines below were all installed in 2008, five years ago.
These machines have handled load peaks that only a few people ever see in real life, many times over, and I rebuild ANY relevant package with PIE.
Last year we bought a DL380 with 2 x Xeon E5-2640 and 92 GB RAM, plus an additional CPU and 60 GB RAM for the other host, at a price of around 8000 €. And you are going to explain to me that hacks like PAE are growing?
[root@buildserver:~]$ distribute-command.sh "rpm -qa | grep x86_64 | wc -l; rpm -qa | grep i686 | wc -l"
--------------------------------------------------------------------------
896 0
411 0
335 0
279 0
283 0
368 0
217 0
218 0
344 0
342 0
237 0
239 0
399 0
335 0
344 0
895 0
279 0
283 0
368 0
- please do not argue with "but you need this and this AND this"; the experience of the last years shows how creatively attackers act with RANDOM input data
I'm arguing the total expected benefit (integral over time of estimated exposure times expected prevented loss) versus actual cost (more machines, RAM, heat, [avoided] latency). I'm not convinced that PIE+RELRO is worth it except for a process with elevated privilege or extended lifetime.
Please cite some documented cases where PIE and/or RELRO prevented or delayed an actual loss, or signaled with sufficient warning to be useful. Meanwhile I'm spending more each month to consume more resources because of PIE+RELRO
This is a naive approach. You CANNOT measure a failed code execution.
You can only measure a successful intrusion, and even that only if you notice that it happened. Looking at my firewall logs, only a few people out there have the knowledge to notice intrusions on their machines.
On Fri, 2013-03-29 at 10:48 -0700, John Reiser wrote:
-fPIE code is larger and takes longer to execute. The cost varies from minimal (< 2%) in many cases to 10% or more for "non-dynamic" arrays on i686.
Citation needed.
-fPIE for Thumb mode on ARM is particularly painful.
Citation needed.
RELRO can cost one extra page of physical RAM per process because the placement of the RELRO region tends to increase fragmentation and decrease sharability.
Almost true, but wildly misleading.
RELRO adds a class of variables that are "read-only after relocation processing". These are variables that _could not be shared anyway_ since their runtime value depends on where ld.so loads the process, which is randomized. They do have to be mapped to a different page, but that's because you can't map less than a page. And there's no fragmentation cost, because the relro section is mapped immediately after the normal rodata section.
I appreciate the concern for the extra page of dirty data per process (actually per relro'd ELF object in the link map, including DSOs, but let's not split hairs), but if we were concerned about 4k here and there I assure you there are more deserving targets for that wrath than relro.
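The placement described above is visible in the program headers of any relro'd binary (a quick look):

$ readelf -l /usr/bin/ls | grep GNU_RELRO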
- ajax
On 04/01/2013 04:58 AM, Adam Jackson wrote:
I appreciate the concern for the extra page of dirty data per process (actually per relro'd ELF object in the link map, including DSOs, but let's not split hairs), but if we were concerned about 4k here and there I assure you there are more deserving targets for that wrath than relro.
Citation needed.
On 04/01/2013 04:58 AM, Adam Jackson wrote:
On Fri, 2013-03-29 at 10:48 -0700, John Reiser wrote:
-fPIE code is larger and takes longer to execute. The cost varies from minimal (< 2%) in many cases to 10% or more for "non-dynamic" arrays on i686.
Citation needed.
ftp://ftp.inf.ethz.ch/doc/tech-reports/7xx/766.pdf which is cited by the FESCO ticket https://fedorahosted.org/fesco/ticket/1104#comment:11
It's also easy to see the mechanism:

$ cat foo.c
extern int a[];
void foo(int j) { a[j]=j; }
$ gcc -m32 -fPIE -O -S foo.c
$ cat foo.s  # edited for brevity
foo:  # 25 bytes; about 15 cycles (incl. 3*3 cycles data cache fetch latency)
        call    __x86.get_pc_thunk.cx
        addl    $_GLOBAL_OFFSET_TABLE_, %ecx
        movl    4(%esp), %eax
        movl    a@GOT(%ecx), %edx
        movl    %eax, (%edx,%eax,4)
        ret
$ gcc -m32 -O -S foo.c
$ cat foo.s  # edited for brevity
foo:  # 12 bytes; about 6 cycles (incl. 1*3 cycles data cache fetch latency)
        movl    4(%esp), %eax
        movl    %eax, a(,%eax,4)
        ret
$
-fPIE forces an additional level of run-time indirection which often costs around 13 bytes (CALL + ADD + fetch GOT - d32) and 2 to 5 cycles (fetch @GOT and cache latency). Some of the cost might be shared with other nearby uses, but scarcity of registers often inhibits sharing or requires spill code.
-fPIE for Thumb mode on ARM is particularly painful.
Citation needed.
The same code above applies. Thumb mode has no double indexing, so an explicit ADD is required. Registers are still in short supply; HI registers (>=8) have dedicated usage or restricted access. Also, the range of the offset in base_register+offset addressing mode is severely restricted, which often requires more explicit ADDs.
--
John Reiser jreiser@bitwagon.com wrote:
It's also easy to see the mechanism:

$ cat foo.c
extern int a[];
void foo(int j) { a[j]=j; }
$ gcc -m32 -fPIE -O -S foo.c
$ cat foo.s  # edited for brevity
foo:  # 25 bytes; about 15 cycles (incl. 3*3 cycles data cache fetch latency)
        call    __x86.get_pc_thunk.cx
        addl    $_GLOBAL_OFFSET_TABLE_, %ecx
        movl    4(%esp), %eax
        movl    a@GOT(%ecx), %edx
        movl    %eax, (%edx,%eax,4)
        ret
Yes, but... Am I right in thinking that a page containing the above can be shared, but...
$ gcc -m32 -O -S foo.c
$ cat foo.s  # edited for brevity
foo:  # 12 bytes; about 6 cycles (incl. 1*3 cycles data cache fetch latency)
        movl    4(%esp), %eax
        movl    %eax, a(,%eax,4)
        ret
$
... a page containing this cannot because it must be relocated prior to execution?
Admittedly, it is possible that if the address stored by the linker for 'a' is the same as 'a' is loaded at, then the loader might not need to adjust the instruction - but if we randomise the load addresses of various binaries, then that is unlikely to be true.
David
$ gcc -m32 -fPIE -O -S foo.c
$ cat foo.s  # edited for brevity
foo:  # 25 bytes; about 15 cycles (incl. 3*3 cycles data cache fetch latency)
        call    __x86.get_pc_thunk.cx
        addl    $_GLOBAL_OFFSET_TABLE_, %ecx
        movl    4(%esp), %eax
        movl    a@GOT(%ecx), %edx
        movl    %eax, (%edx,%eax,4)
        ret
Yes, but... Am I right in thinking that a page containing the above can be shared, ...
Yes. '_GLOBAL_OFFSET_TABLE_' and 'a@GOT' both are constants whose value is established by the static linker /bin/ld and unchanged for every subsequent execve(). The final relocation of _GLOBAL_OFFSET_TABLE_ is performed during execution by the 'addl' using the execution-time base address returned by __x86.get_pc_thunk.cx. The final relocation for the address of "a[]" is performed at the start of execution by ld-linux changing the value contained in the GOT.
$ gcc -m32 -O -S foo.c
$ cat foo.s  # edited for brevity
foo:  # 12 bytes; about 6 cycles (incl. 1*3 cycles data cache fetch latency)
        movl    4(%esp), %eax
        movl    %eax, a(,%eax,4)
        ret
$
... but a page containing this cannot because it must be relocated prior to execution?
For a main program in normal usage, where 'a' must be defined in the target of execve [else static linking fails], that page _can_ be shared, and is shared. The relocation occurs at static link time. If 'a' remains undefined at the end of static linking, or if 'a' is known to reside in some shared library, then yes, the address must be relocated at the time of execution, and therefore the page cannot be shared.
For a shared library, that code is not even -fPIC, so it won't work on many platforms, although i686 allows it.
Dhiru Kholia wrote:
- Hardening flags should be turned on (by default) for all packages
that are at comparatively higher risk of being exploited, or which meet some well-defined criteria (suggestions welcome).
Such criteria exist and are documented one click away from the page you linked to: http://fedoraproject.org/wiki/Packaging:Guidelines#PIE
The current criteria are:

| If your package meets any of the following criteria you MUST enable
| the PIE compiler flags:
|
| · Your package is long running. This means it's likely to be
|   started and keep running until the machine is rebooted, not start
|   on demand and quit on idle.
|
| · Your package has suid binaries, or binaries with capabilities.
|
| · Your package runs as root.
|
| If your package meets the following criteria you should consider
| enabling the PIE compiler flags:
|
| · Your package accepts/processes untrusted input.
|
| FESCo maintains a list of packages that MUST have PIE turned on.
| Other packages may enable the flags at the maintainer's discretion.
|
| There are some notable disadvantages to enabling PIE that should be
| considered in making the decision:
|
| · Some code does not compile with PIE (or does not function
|   properly).
|
| · You can not use prelink on PIE enabled binaries, resulting in a
|   slower startup time.
Dhiru Kholia wrote:
Such packages will typically include various system daemons, network daemons and network enabled applications.
Lots of network daemons are already using PIE and RELRO (e.g. httpd, MariaDB). So a natural question is: why aren't packages in the same "network daemons" class, like PostgreSQL, Dovecot and MongoDB, being hardened?
Daemons are "long running", and therefore required to be hardened per the first criterion. Network enabled applications match the fourth criterion, so their maintainers "should consider enabling" _hardened_build.
"Packaging Guidelines" say that "Other packages may enable the flags at the maintainer's discretion."
Thinking from a security perspective, I find "Hardening flags can only be disabled for other packages at the maintainer's discretion provided enough justification is given to FESCo" to be more appropriate.
FESCo decided on the criteria after a lot of discussion. As I understood it the listed disadvantages were considered sufficient reason to not make the hardening flags default across the whole distribution.
- An alternate approach is to come up with an expanded list of packages
which should be hardened.
Since FESCo maintains a list, I suppose anyone can propose specific programs to be added to the list, but it seems pointless to explicitly list programs that are already covered by the first three criteria.
Björn Persson
On 03/29/13 at 08:47pm, Björn Persson wrote:
- An alternate approach is to come up with an expanded list of packages
which should be hardened.
Since FESCo maintains a list, I suppose anyone can propose specific programs to be added to the list, but it seems pointless to explicitly list programs that are already covered by the first three criteria.
I agree that it seems pointless (and tedious) to explicitly list programs which are already covered.
However, many packages (like PostgreSQL, Dovecot and MongoDB) meet the criteria but still are not getting hardened. I am not sure about the underlying reasons (oversight / performance concerns / etc.).
What would be a good way to solve this problem in your opinion? (File bugs / Explicitly list such packages / Turn on hardening by default)
It would be great to have some sort of automated method to find if hardening criteria applies to a particular package. Ideas are welcome!
-- Dhiru
On Monday, 01 April 2013 at 12:29 +0530, Dhiru Kholia wrote:
On 03/29/13 at 08:47pm, Björn Persson wrote:
- An alternate approach is to come up with an expanded list of packages
which should be hardened.
Since FESCo maintains a list, I suppose anyone can propose specific programs to be added to the list, but it seems pointless to explicitly list programs that are already covered by the first three criteria.
I agree that it seems pointless (and tedious) to explicitly list programs which are already covered.
However, many packages (like PostgreSQL, Dovecot and MongoDB) meet the criteria but still are not getting hardened. I am not sure about the underlying reasons (oversight / performance concerns / etc.).
What would be a good way to solve this problem in your opinion? (File bugs / Explicitly list such packages / Turn on hardening by default)
I would file bugs, and list those that were checked on a wiki page, along with a link to the bug and a date, and revisit the reason on a regular basis.
It would be great to have some sort of automated method to find if hardening criteria applies to a particular package. Ideas are welcome!
You can take a look at http://people.redhat.com/sgrubb/security/ ; there is a script, rpm-chksec, to verify that.
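If memory serves (check the script's help output to be sure), it can grade a single package or everything installed, something like:

$ ./rpm-chksec dovecot    # report NX/canary/relro/PIE status for one package
$ ./rpm-chksec --all      # grade every installed package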
On 04/01/13 at 10:23am, Michael Scherer wrote:
On Monday, 01 April 2013 at 12:29 +0530, Dhiru Kholia wrote:
What would be a good way to solve this problem in your opinion? (File bugs / Explicitly list such packages / Turn on hardening by default)
I would file bugs, and list those that were checked on a wiki page, along a link to the bug and a date, and revisit the reason on a regular basis.
I have started doing this.
See https://bugzilla.redhat.com/show_bug.cgi?id=947022 for an example.
It would be great to have some sort of automated method to find if hardening criteria applies to a particular package. Ideas are welcome!
You can take a look at http://people.redhat.com/sgrubb/security/ ; there is a script, rpm-chksec, to verify that.
Thanks! I found some neat ideas in rpm-chksec script.
I will incorporate them into https://github.com/kholia/checksec
-- Dhiru
On 04/01/13 at 03:05pm, Dhiru Kholia wrote:
On 04/01/13 at 10:23am, Michael Scherer wrote:
On Monday, 01 April 2013 at 12:29 +0530, Dhiru Kholia wrote:
It would be great to have some sort of automated method to find if hardening criteria applies to a particular package. Ideas are welcome!
You can take a look at http://people.redhat.com/sgrubb/security/ ; there is a script, rpm-chksec, to verify that.
Thanks! I found some neat ideas in rpm-chksec script.
I will incorporate them into https://github.com/kholia/checksec
Here is a list of packages (daemons only) which are (possibly) violating packaging guidelines with respect to hardening.
http://dl.dropbox.com/u/1522424/probable-violations-F19.xls
(Please note that this list was generated by a program and might be buggy!)
-- Dhiru
On Tue, Apr 02, 2013 at 05:51:42PM +0530, Dhiru Kholia wrote:
That shows:
<garbage>
Can you use a non-proprietary format please.
Rich.
On Tue, Apr 2, 2013 at 6:36 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Tue, Apr 02, 2013 at 05:51:42PM +0530, Dhiru Kholia wrote:
That shows:
<garbage>
Can you use a non-proprietary format please.
Can you try "wget http://dl.dropbox.com/u/1522424/probable-violations-F19.xls" ?
This file was generated using open-source tools and it works great in LibreOffice.
I can generate a CSV file if you really want it.
On Tue, Apr 2, 2013 at 6:36 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Tue, Apr 02, 2013 at 05:51:42PM +0530, Dhiru Kholia wrote:
That shows:
<garbage>
Can you use a non-proprietary format please.
http://dl.dropbox.com/u/1522424/probable-violations-F19.csv
On Tue, Apr 02, 2013 at 07:15:29PM +0530, Dhiru Kholia wrote:
On Tue, Apr 2, 2013 at 6:36 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Tue, Apr 02, 2013 at 05:51:42PM +0530, Dhiru Kholia wrote:
That shows:
<garbage>
Can you use a non-proprietary format please.
FWIW, the following command produces much better output:
function display {
  echo "Package: $1 ($2)"
  echo "  Binary: $3 (mode $8 $9 ${10})"
  echo "    NX $4 CANARY $5 RELRO $6 PIE $7"
}
export -f display
csvtool drop 1 probable-violations-F19.csv | csvtool call display - | less
like this:
Package: autodir (autodir-0.99.9-15.fc19.x86_64.rpm)
  Binary: /usr/sbin/autodir (mode 0100755 daemon autodir0)
    NX Enabled CANARY Enabled RELRO Partial PIE Disabled
Although it's not perfect because what you really have is a tree, not a table.
It would be helpful to have packager names alongside each package too.
Rich.
On 04/02/13 at 03:04pm, Richard W.M. Jones wrote:
On Tue, Apr 02, 2013 at 07:15:29PM +0530, Dhiru Kholia wrote:
FWIW, the following command produces much better output:
<snipped>
like this:
Package: autodir (autodir-0.99.9-15.fc19.x86_64.rpm)
  Binary: /usr/sbin/autodir (mode 0100755 daemon autodir0)
    NX Enabled CANARY Enabled RELRO Partial PIE Disabled
Thanks for the tip.
It would be helpful to have packager names alongside each package too.
Great idea!
I have implemented it and you can see the latest data at,
http://dl.dropbox.com/u/1522424/probable-violations-F19.csv
-- Dhiru
Hello,

On Fri, Mar 29, 2013 at 5:38 PM, Dhiru Kholia dhiru.kholia@gmail.com wrote:
The http://fedoraproject.org/wiki/Hardened_Packages page mentions that "FESCo requires some packages to use PIE and relro hardening by default."
It would be great if this list could be expanded to include even more packages that are at comparatively higher risk of being exploited (locally or remotely).
Such packages will typically include various system daemons, network daemons and network enabled applications.
Lots of network daemons are already using PIE and RELRO (e.g. httpd, MariaDB). So a natural question is: why aren't packages in the same "network daemons" class, like PostgreSQL, Dovecot and MongoDB, being hardened?
The more general reference is https://fedoraproject.org/wiki/Packaging:Guidelines?rd=PackagingGuidelines#P..., which (at least in my reading) already covers these cases. The packages should just be fixed to comply.
(Perhaps the wording could be improved - right now the "Other packages may enable the flags at the maintainer's discretion." contradicts the criteria above it.)
- Hardening flags should be turned on (by default) for all packages
that are at comparatively higher risk of being exploited, or which meet some well-defined criteria (suggestions welcome).
It's not only about well-defined criteria (which we perhaps already have), but also about easy-to-check, or ideally easy-to-automate, criteria, so that this wouldn't require manual package maintainer decisions. Does anyone have ideas how to design and implement such automatable criteria?
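One strawman for the automatable part: "ships a service file" is a machine-checkable stand-in for the "long running" criterion (a sketch only; it would miss setuid binaries and flag some on-demand services):

$ rpm -ql dovecot | grep -E '\.service$|^/etc/rc\.d/init\.d/' \
    && echo "ships a service file => PIE candidate"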
"Packaging Guidelines" say that "Other packages may enable the flags at
the maintainer's discretion."
Thinking from a security perspective, I find "Hardening flags can only be disabled for other packages at the maintainer's discretion provided enough justification is given to FESCo" to be more appropriate.
In other words, to enable PIE by default?
(For others - please read the FESCo ticket, it links to 2 papers measuring the performance impact, although they probably don't measure the case we are interested in, with PIE interacting with prelink - and they are all synthetic benchmarks, not measuring actual system performance in real-world use.)
The ~10% overhead on i686 makes this probably not worth it.
The ~3.6% overhead measured on x86_64 seems (with my little compiler background) rather high - what do the compiler developers think? (Again, note that the data we have probably don't measure the relevant case.)
Looking at it from another angle, enabling PIE impacts only code in executables, not in libraries; how much of Fedora's CPU-intensive code actually resides in executables? For image/video processing, I'd expect the vast majority of the "hot" code to actually reside in libraries and thus not be impacted by using PIE for executables; can anyone comment on how performance-relevant applications (e.g. httpd, Java runtimes or, say, Firefox) are structured in this respect - or even better, measure it?
Mirek
Dhiru Kholia wrote:
Any feedback is welcome!
My proposal: build ALL packages in Fedora with not only -fPIE and RELRO, but also -fstack-protector-all (which is not included in the current hardened cflags). Also get rid of prelink which reduces the effectiveness of ASLR. Then drop SELinux which becomes obsolete if the executables cannot be exploited in the first place. (It only papers over the real problem.)
Kevin Kofler
On Sun, Mar 31, 2013 at 01:09:36AM +0100, Kevin Kofler wrote:
Dhiru Kholia wrote:
Any feedback is welcome!
My proposal: build ALL packages in Fedora with not only -fPIE and RELRO, but also -fstack-protector-all (which is not included in the current hardened cflags). Also get rid of prelink which reduces the effectiveness of ASLR. Then drop SELinux which becomes obsolete if the executables cannot be exploited in the first place. (It only papers over the real problem.)
I know you're trolling here, but there are some misconceptions that should be corrected:
(1) -fstack-protector{,-all} doesn't implement full bounds checking for every C object.
(2) SELinux controls what labelled resources a process can access. This covers far more than buffer overflows in C programs. It covers other programming languages, design flaws and implementation 'thinko's of all sorts. I would argue (separate from this) that it's good to define precisely what resources a program can access, rather than the default "access just about everything".
However prelink does reduce the effectiveness of ASLR (a bit). See http://lwn.net/Articles/341440/ and follow-up conversation.
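The randomization that prelink partially pins down is easy to observe: each run below is a fresh process, so with ASLR the addresses differ every time (a minimal demonstration):

$ for i in 1 2; do cat /proc/self/maps | grep '\[stack\]'; done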
Rich.
On Sun, Mar 31, 2013 at 5:11 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Sun, Mar 31, 2013 at 01:09:36AM +0100, Kevin Kofler wrote:
Dhiru Kholia wrote:
Any feedback is welcome!
My proposal: build ALL packages in Fedora with not only -fPIE and RELRO, but also -fstack-protector-all (which is not included in the current hardened cflags). Also get rid of prelink which reduces the effectiveness of ASLR. Then drop SELinux which becomes obsolete if the executables cannot be exploited in the first place. (It only papers over the real problem.)
I know you're trolling here, but there are some misconceptions that should be corrected:
(1) -fstack-protector{,-all} doesn't implement full bounds checking for every C object.
(2) SELinux controls what labelled resources a process can access. This covers far more than buffer overflows in C programs. It covers other programming languages, design flaws and implementation 'thinko's of all sorts. I would argue (separate from this) that it's good to define precisely what resources a program can access, rather than the default "access just about everything".
However prelink does reduce the effectiveness of ASLR (a bit). See http://lwn.net/Articles/341440/ and follow-up conversation.
Probably something has changed in the last few years. I posted the same question, or a related one, some time ago: http://www.redhat.com/archives/rhl-devel-list/2009-July/msg00674.html
On 31.03.2013 21:24, yersinia wrote:
However prelink does reduce the effectiveness of ASLR (a bit). See http://lwn.net/Articles/341440/ and follow-up conversation.
Probably something has changed in the last few years. I posted the same question, or a related one, some time ago: http://www.redhat.com/archives/rhl-devel-list/2009-July/msg00674.html
You pay a security price if you disable prelink, because it also performs address space randomization: http://lwn.net/Articles/190139/
probably you do not understand the difference between the ONE-TIME randomization that prelink does and the randomization at each start that you get without prelink - as the guy who said "You pay a security price if you disable prelink" also did not
This is a good paper, old but interesting anyway, on this subject: http://lists.xen.org/archives/html/xen-devel/2008-10/msg00411.html
Best
On Sun, Mar 31, 2013 at 9:36 PM, Reindl Harald h.reindl@thelounge.net wrote:
On 31.03.2013 21:24, yersinia wrote:
However prelink does reduce the effectiveness of ASLR (a bit). See http://lwn.net/Articles/341440/ and follow-up conversation.
Probably something has changed in recent years. I posted the same question, or a related one, some time ago: http://www.redhat.com/archives/rhl-devel-list/2009-July/msg00674.html
You pay a security price if you disable prelink, because it also performs address space randomization: http://lwn.net/Articles/190139/
probably you do not understand the difference between the ONE-TIME randomization that prelink does and the randomization at each start that you get without prelink - as the guy who said "You pay a security price if you disable prelink" also did not
On 31/03/13 08:11 AM, Richard W.M. Jones wrote:
However prelink does reduce the effectiveness of ASLR (a bit). See http://lwn.net/Articles/341440/ and follow-up conversation.
Ignoring the silly stuff, it does seem that this is Yet Another Reason Prelink Is Bad, and we seem to keep bumping up against those. It does rather seem like we should consider just killing it, at least by default.
On 03.04.2013 00:18, Adam Williamson wrote:
On 31/03/13 08:11 AM, Richard W.M. Jones wrote:
However prelink does reduce the effectiveness of ASLR (a bit). See http://lwn.net/Articles/341440/ and follow-up conversation.
Ignoring the silly stuff, it does seem that this is Yet Another Reason Prelink Is Bad, and we seem to keep bumping up against those. It does rather seem like we should consider just killing it, at least by default.
+1
* i see NO NOTICEABLE performance benefit in any environment
* it does all the things which were a pseudo-reason for "offline updates"
* it makes intrusion detection work only with bad hacks
* it lowers security for zero benefit
On Tue, 2 Apr 2013, Adam Williamson wrote:
On 31/03/13 08:11 AM, Richard W.M. Jones wrote:
However prelink does reduce the effectiveness of ASLR (a bit). See http://lwn.net/Articles/341440/ and follow-up conversation.
Ignoring the silly stuff, it does seem that this is Yet Another Reason Prelink Is Bad, and we seem to keep bumping up against those. It does rather seem like we should consider just killing it, at least by default.
At this point I have the feeling that Prelink is a cargo cult which we keep around to appease the Airplane Gods.
On Tue, Apr 02, 2013 at 05:14:21PM -0600, Stephen Smoogen wrote:
At this point I have the feeling that Prelink is a cargo cult which we keep around to appease the Airplane Gods.
Well, when I brought this up four years ago, we kept it around to appease the GCC development gods. (Or, at least, Jakub Jelinek.)
http://www.redhat.com/archives/rhl-devel-list/2009-July/msg00650.html
On Thu, Apr 04, 2013 at 10:05:31AM -0400, Matthew Miller wrote:
Well, when I brought this up four years ago, we kept it around to appease the GCC development gods. (Or, at least, Jakub Jelinek.)
(In case it wasn't clear, I didn't mean this to be a jab or sarcastic remark in any way; after having coffee I can see that it might be read that way; I didn't mean anything other than that Jakub provided the numbers-in-defense before.)
It does rather seem like we should consider just killing it [prelink], at least by default.
Prelinking shortens the time between execve() and first useful output. A prelinked module reduces time spent in ld-linux, and increases sharing of pages (which reduces time spent in kernel duplicating copy-on-write pages.) The savings are *visible* when invoking an interactive GUI program that has dozens of shared libraries, or when several hundred smaller executables are invoked each second, such as some 'make' clouds, etc.
Some systems want those savings, and are willing to pay with slightly less protection via reduced ASLR. Some administrators compensate by running a full prelink daily, and a partial prelink of "hot" modules (glibc, ...) a few times during the day, even as often as hourly; and with parameters to reduce interference with modules which are not being [re-]prelinked during the current run.
On 03.04.2013 01:50, John Reiser wrote:
It does rather seem like we should consider just killing it [prelink], at least by default.
Prelinking shortens the time between execve() and first useful output
in theory
A prelinked module reduces time spent in ld-linux, and increases sharing of pages (which reduces time spent in kernel duplicating copy-on-write pages.) The savings are *visible* when invoking an interactive GUI program that has dozens of shared libraries, or when several hundred smaller executables are invoked each second, such as some 'make' clouds, etc.
not noticeable compared with the security flaws
Some systems want those savings, and are willing to pay with slightly less protection via reduced ASLR.
then THESE SYSTEMS should install prelink - but it should not be installed AS DEFAULT
Some administrators compensate by running a full prelink daily, and a partial prelink of "hot" modules (glibc, ...) a few times during the day, even as often as hourly; and with parameters to reduce interference with modules which are not being [re-]prelinked during the current run
fine - they should do what they want
but as DEFAULT, anything which defeats ASLR is UNACCEPTABLE these days
On Wed, Apr 03, 2013 at 01:53:27AM +0200, Reindl Harald wrote:
A prelinked module reduces time spent in ld-linux, and increases sharing of pages (which reduces time spent in kernel duplicating copy-on-write pages.) The savings are *visible* when invoking an interactive GUI program that has dozens of shared libraries, or when several hundred smaller executables are invoked each second, such as some 'make' clouds, etc.
not noticeable compared with the security flaws
Security flaws? Security flaws are the bugs that can be exploited; you are clearly overestimating the role of ASLR (especially when on some targets like x86_64 there is a fixed-address syscall+ret instruction mapped into every process anyway); it is just one of many mitigating factors. Shared libraries loaded by a PIE ignore prelink-chosen addresses, so they are fully randomized each time, and network-facing daemons or suid apps should be built that way. But for other binaries, PIE is way too costly (even when, say on x86_64, the PIC register setup is basically free, there is the significant cost of one extra indirection level), and when the binary isn't randomized, you can always return into the binary as opposed to the shared libraries. If you don't care about the speed of execution of any programs, just compile everything with -fsanitize=address (that will be only ~2x slowdown or so).
Jakub
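For readers who haven't used it, here is a minimal sketch of the kind of bug -fsanitize=address traps (the file name is illustrative; assumes GCC with libasan available). Build with "gcc -g -fsanitize=address asan-demo.c -o asan-demo"; running it aborts with a heap-buffer-overflow report pinpointing the bad write:

/* asan-demo.c - deliberate heap overflow for AddressSanitizer */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(8);
    strcpy(buf, "123456789");   /* 10 bytes (incl. NUL) into an 8-byte block */
    free(buf);
    return 0;
}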
Jakub Jelinek jakub@redhat.com writes:
If you don't care about the speed of execution of any programs, just compile everything with -fsanitize=address (that will be only ~ 2x slowdown or so).
A different issue that worries me about PIE is the impact on the available address space in 32-bit builds. For instance, people routinely configure Postgres to allocate a shared-memory area of a couple of GB, so if either the program text or the stack gets moved too much, configurations that used to work will break for lack of enough contiguous free address space. I haven't been able to find anything definitive about the worst-case address space wastage due to ASLR in 32-bit builds; anyone here know?
regards, tom lane
On 04/03/2013 05:38 AM, Tom Lane wrote:
A different issue that worries me about PIE is the impact on the available address space in 32-bit builds
A test program says that the unused space is bounded by 1MiB per module (main program or shared library), and in a 32-bit environment the placement is top-down beginning with the stack. Thus the strategy gives something close to the maximum contiguous unallocated region, where "close" means "up to a gap of 1MiB per library". See my test program in another thread of this topic.
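A back-of-the-envelope consequence of that bound (the module count of 20 here is an illustrative assumption, not a measurement): with at most 1 MiB of gap per loaded module,

\[
W \;\le\; N \times 1\,\text{MiB}, \qquad N = 20 \;\Rightarrow\; W \;\le\; 20\,\text{MiB} \;\ll\; 3\,\text{GiB}
\]

of 32-bit user address space, so the contiguous region available for something like Postgres shared memory shrinks only modestly in the worst case.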
Jakub Jelinek (jakub@redhat.com) said:
On Wed, Apr 03, 2013 at 01:53:27AM +0200, Reindl Harald wrote:
A prelinked module reduces time spent in ld-linux, and increases sharing of pages (which reduces time spent in kernel duplicating copy-on-write pages.) The savings are *visible* when invoking an interactive GUI program that has dozens of shared libraries, or when several hundred smaller executables are invoked each second, such as some 'make' clouds, etc.
not noticeable compared with the security flaws
Security flaws? Security flaws are the bugs that can be exploited; you are clearly overestimating the role of ASLR (especially when on some targets like x86_64 there is a fixed-address syscall+ret instruction mapped into every process anyway); it is just one of many mitigating factors. Shared libraries loaded by a PIE ignore prelink-chosen addresses, so they are fully randomized each time, and network-facing daemons or suid apps should be built that way. But for other binaries, PIE is way too costly (even when, say on x86_64, the PIC register setup is basically free, there is the significant cost of one extra indirection level), and when the binary isn't randomized, you can always return into the binary as opposed to the shared libraries. If you don't care about the speed of execution of any programs, just compile everything with -fsanitize=address (that will be only ~2x slowdown or so).
My concern is simply that prelink was (theoretically) sold as a mechanism to speed up the start of large, complex, GUI programs. Unfortunately, most of the large, complex, GUI programs are the ones that are parsing untrusted content, and therefore make the most sense for PIE compilation.
Bill
On 04.04.2013 20:54, Bill Nottingham wrote:
Jakub Jelinek (jakub@redhat.com) said:
On Wed, Apr 03, 2013 at 01:53:27AM +0200, Reindl Harald wrote:
A prelinked module reduces time spent in ld-linux, and increases sharing of pages (which reduces time spent in kernel duplicating copy-on-write pages.) The savings are *visible* when invoking an interactive GUI program that has dozens of shared libraries, or when several hundred smaller executables are invoked each second, such as some 'make' clouds, etc.
not noticeable compared with the security flaws
Security flaws? Security flaws are the bugs that can be exploited; you are clearly overestimating the role of ASLR (especially when on some targets like x86_64 there is a fixed-address syscall+ret instruction mapped into every process anyway); it is just one of many mitigating factors. Shared libraries loaded by a PIE ignore prelink-chosen addresses, so they are fully randomized each time, and network-facing daemons or suid apps should be built that way. But for other binaries, PIE is way too costly (even when, say on x86_64, the PIC register setup is basically free, there is the significant cost of one extra indirection level), and when the binary isn't randomized, you can always return into the binary as opposed to the shared libraries. If you don't care about the speed of execution of any programs, just compile everything with -fsanitize=address (that will be only ~2x slowdown or so).
My concern is simply that prelink was (theoretically) sold as a mechanism to speed up the start of large, complex, GUI programs. Unfortunately, most of the large, complex, GUI programs are the ones that are parsing untrusted content, and therefore make the most sense for PIE compilation.
exactly that is the point
you do not need any network service or long-living process - google for "CVE poppler" to get a picture about PDF, and you will find the same for nearly any application or commonly used library in the last few years
there is not a single piece of software in this world that was not exploitable in some way - not a single one - and if you find one, nobody cared enough to search for the exploit
On Tue, 02 Apr 2013 16:50:33 -0700 John Reiser jreiser@bitwagon.com wrote:
It does rather seem like we should consider just killing it [prelink], at least by default.
Prelinking shortens the time between execve() and first useful output. A prelinked module reduces time spent in ld-linux, and increases sharing of pages (which reduces time spent in kernel duplicating copy-on-write pages.) The savings are *visible* when invoking an interactive GUI program that has dozens of shared libraries, or when several hundred smaller executables are invoked each second, such as some 'make' clouds, etc.
I'm not so sure they are... perhaps it's time for another round of 'how fast does libreoffice start when prelinked vs not'?
Some systems want those savings, and are willing to pay with slightly less protection via reduced ASLR. Some administrators compensate by running a full prelink daily, and a partial prelink of "hot" modules (glibc, ...) a few times during the day, even as often as hourly; and with parameters to reduce interference with modules which are not being [re-]prelinked during the current run.
Indeed. Also, some administrators remove prelink and do not use it on any of their systems. (Like, say, Fedora Infrastructure, or all my home machines.)
kevin
On 03.04.2013 01:59, Kevin Fenzi wrote:
On Tue, 02 Apr 2013 16:50:33 -0700 John Reiser jreiser@bitwagon.com wrote:
It does rather seem like we should consider just killing it [prelink], at least by default.
Prelinking shortens the time between execve() and first useful output. A prelinked module reduces time spent in ld-linux, and increases sharing of pages (which reduces time spent in kernel duplicating copy-on-write pages.) The savings are *visible* when invoking an interactive GUI program that has dozens of shared libraries, or when several hundred smaller executables are invoked each second, such as some 'make' clouds, etc.
I'm not so sure they are... perhaps it's time for another round of 'how fast does libreoffice start when prelinked vs not'?
after these discussions I tried it again these days, multiple times, without and with "prelink -mRa"
the possible difference is snake oil - on fast machines it does not matter
on small machines, AKA notebooks, the prelink cronjob hurt me over all the years much more than it brought benefits; and the fact that you get updates nearly every day which would make re-prelinking necessary, bring rkhunter to whine, and defeat at least the benefit of any IDS, is a very strong indication to leave it in the repos for people who feel better with snake oil, but not to enforce it as default
On Wed, Apr 3, 2013 at 12:18 AM, Adam Williamson awilliam@redhat.com wrote:
On 31/03/13 08:11 AM, Richard W.M. Jones wrote:
However prelink does reduce the effectiveness of ASLR (a bit). See http://lwn.net/Articles/341440/ and follow-up conversation.
Ignoring the silly stuff, it does seem that this is Yet Another Reason Prelink Is Bad
Is it? The linked comment says the opposite: prelink might interfere with ASLR, but for most programs it doesn't make a difference. Even the latter discussion about local attackers doesn't really apply when any PIE executable automatically means prelink is ignored both for the executable and for any used shared libraries, as Jakub said. Mirek
On Wed, 3 Apr 2013, Miloslav Trmač wrote:
On Wed, Apr 3, 2013 at 12:18 AM, Adam Williamson awilliam@redhat.com wrote: On 31/03/13 08:11 AM, Richard W.M. Jones wrote:
However prelink does reduce the effectiveness of ASLR (a bit). See http://lwn.net/Articles/341440/ and follow-up conversation.
Ignoring the silly stuff, it does seem that this is Yet Another Reason Prelink Is Bad
Is it? The linked comment says the opposite: prelink might interfere with ASLR, but for most programs it doesn't make a difference. Even the latter discussion about local attackers doesn't really apply when any PIE executable automatically means prelink is ignored both for the executable and for any used shared libraries, as Jakub said.
To me, prelink is still evil for breaking FIPS. I've requested a few times that prelink play nicer with FIPS mode, like running prelink -ua during bootup when FIPS mode is on, and running prelink -ua when the prelink package is uninstalled. Neither of these trivial solutions is implemented in the package.
The only argument in favour of prelink is speed. People selecting FIPS have clearly made the decision to favour extra security over speed.
I'm strongly in favour of getting rid of it completely, and letting Moore's Law do its job.
Paul
On Wed, Apr 3, 2013 at 5:19 PM, Paul Wouters pwouters@redhat.com wrote:
To me, prelink is still evil for breaking FIPS. I've requested a few times that prelink plays nicer with FIPS mode, like running prelink -ua during bootup when FIPS mode is on.
https://bugzilla.redhat.com/show_bug.cgi?id=923782 should resolve this concern. Mirek
Richard W.M. Jones wrote:
(1) -fstack-protector{,-all} doesn't implement full bounds checking for every C object.
But it prevents (with probability (256^n-1)/256^n, where n is the size of the canary in bytes, which for n=4 is approximately .99999999976717) exploiting the overflows to change the return address of any C function.
(2) SELinux controls what labelled resources a process can access. This covers far more than buffer overflows in C programs. It covers other programming languages, design flaws and implementation 'thinko's of all sorts. I would argue (separate from this) that it's good to define precisely what resources a program can access, rather than the default "access just about everything".
And I would argue that this amounts to second-guessing/duplicating what the program tries to do in an unmaintainable morass of rules, which even for the targeted policy (which is not even close to covering all programs in Fedora other than as "unconfined") keeps having bugs which need to be fixed every day, even after YEARS of debugging. SELinux just does not scale, it's a centralized database which needs to essentially contain a variant of every program's source code, rewritten in a rule language only few people actually comprehend.
Instead of duplicating the information already contained in the program's source code, the right approach is to ensure the program does not do anything that is NOT part of its source code, which means blocking arbitrary code execution exploits!
Kevin Kofler
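As a minimal sketch of the canary mechanism being discussed (file name and sizes are illustrative): built with "gcc -fstack-protector-all canary-demo.c -o canary-demo", the overflow below clobbers the canary on its way to the saved return address, so the check on function exit aborts the process with "stack smashing detected" instead of returning to an attacker-chosen address:

/* canary-demo.c - stack smash caught by -fstack-protector-all */
#include <string.h>

static void victim(const char *input)
{
    char buf[16];
    strcpy(buf, input);   /* no bounds check: the classic stack overflow */
}

int main(void)
{
    /* far longer than buf: overruns the canary before the return address */
    victim("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
    return 0;
}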
On Sat, Apr 13, 2013 at 08:36:53PM +0200, Kevin Kofler wrote:
Richard W.M. Jones wrote:
(1) -fstack-protector{,-all} doesn't implement full bounds checking for every C object.
But it prevents (with probability (256^n-1)/256^n, where n is the size of the canary in bytes, which for n=4 is approximately .99999999976717) exploiting the overflows to change the return address of any C function.
I said it "doesn't implement full bounds checking for every C object", and I stand by that. I doesn't cover stack objects smaller than some cut-off size, nor any objects in static data or on the heap at all. I do know quite a lot about this, having written the very first bounds checking extension to GCC back in 1994/5:
http://www.doc.ic.ac.uk/~phjk/BoundsChecking.html
(2) SELinux controls what labelled resources a process can access. This covers far more than buffer overflows in C programs. It covers other programming languages, design flaws and implementation 'thinko's of all sorts. I would argue (separate from this) that it's good to define precisely what resources a program can access, rather than the default "access just about everything".
And I would argue that this amounts to second-guessing/duplicating what the program tries to do in an unmaintainable morass of rules, which even for the targeted policy (which is not even close to covering all programs in Fedora other than as "unconfined") keeps having bugs which need to be fixed every day, even after YEARS of debugging. SELinux just does not scale, it's a centralized database which needs to essentially contain a variant of every program's source code, rewritten in a rule language only few people actually comprehend.
That's your opinion. I suggest you take a look at SELinux policies as well as the many new policy management tools.
Instead of duplicating the information already contained in the program's source code, the right approach is to ensure the program does not do anything that is NOT part of its source code, which means blocking arbitrary code execution exploits!
This would be excellent, and projects in this area could make a significant contribution. I suspect that any general code-to-policy translator will hit the Halting Problem, since it seems trivial to write a program which would not be possible to translate, but that doesn't mean it can't be solved for many useful real world cases.
Rich.
Richard W.M. Jones wrote:
I said it "doesn't implement full bounds checking for every C object", and I stand by that. I doesn't cover stack objects smaller than some cut-off size, nor any objects in static data or on the heap at all.
I never claimed it did. I said it prevents overwriting the return address on the stack to execute arbitrary code. That's all it ever claimed to do. But it is sufficient to prevent the majority of arbitrary code execution exploits. And there is no cutoff size with -fstack-protector-all.
Now if you think the protection is not sufficient, then what you actually want is to enable one of the full bounds-checking solutions, though the performance and code size impact of that would be a lot larger. Still, papering over the fact that code is exploitable by duplicating all information about what the code is supposed to do elsewhere (in SELinux policy) does not make sense; it is simply not a maintainable approach.
That's your opinion. I suggest you take a look at SELinux policies as well as the many new policy management tools.
One needs to read pages of documentation to even do the small subset of tweaks intended for an end user. Maintaining the actual policy is something only a handful of people are able to do.
This would be excellent, and projects in this area could make a significant contribution. I suspect that any general code-to-policy translator will hit the Halting Problem, since it seems trivial to write a program which would not be possible to translate, but that doesn't mean it can't be solved for many useful real world cases.
That's exactly why SELinux policy is the wrong representation. It duplicates information of the code without being automatically transformable either way, requiring every change to be made twice.
I repeat: The proper solution is to prevent executing any machine code which is not part of the program's source code. Block arbitrary-code execution exploits and SELinux is just dead weight.
Kevin Kofler
On Sunday, 14 April 2013 at 01:43 +0200, Kevin Kofler wrote:
Richard W.M. Jones wrote:
This would be excellent, and projects in this area could make a significant contribution. I suspect that any general code-to-policy translator will hit the Halting Problem, since it seems trivial to write a program which would not be possible to translate, but that doesn't mean it can't be solved for many useful real world cases.
That's exactly why SELinux policy is the wrong representation. It duplicates information of the code without being automatically transformable either way, requiring every change to be made twice.
That a piece of software says it can open arbitrary files on the filesystem doesn't mean it should. The code is most of the time not adequate for expressing the permissions really needed; that's too low-level. And the Unix user model is not really adequate either.
You could argue the exact same things about Unix permissions ("why does apache require me to modify permissions on ~/public_html when I already expressed in code that it can read them") or the firewall ("why should I open port 80 when I already said in the code of apache that it will use this port").
I repeat: The proper solution is to prevent executing any machine code which is not part of the program's source code. Block arbitrary-code execution exploits and SELinux is just dead weight.
Repeating doesn't make it right. For example, what do you do for JavaScript interpreters (like the ones we can find in web pages, or in PDFs, etc.)? Or LibreOffice macros?
Or the PHP interpreter - whose source code do we take into account: the one of PHP, the one of Apache, or the one of the PHP application (unless someone adds a plugin, of course)?
The whole point of having it in two different places is to have a proper inspection of what it needs to do. That's defence in depth. And security bugs have been fixed due to that inspection, like software leaking file descriptors by error.
Moreover, SELinux does more than "block arbitrary code-execution exploits"; it also allows enforcing access control following security models such as Bell-LaPadula, and it permits proper isolation for software like OpenShift Origin or sVirt.
But you are welcome to convince any upstream directly to invest more time in stuff like seccomp-bpf, as done by Chrome, vsftpd and others, if you think that's the right approach to fix security issues.
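For reference, a minimal sketch of the seccomp-bpf approach mentioned above (assumes Linux 3.5+ kernel headers; the two-syscall whitelist is purely illustrative, and a production filter would also check seccomp_data.arch and handle errors):

/* seccomp-demo.c - after the filter is installed, any syscall other
   than write() and exit_group() kills the process */
#include <stddef.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void)
{
    struct sock_filter filter[] = {
        /* load the syscall number */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        /* allow write() and exit_group(); kill everything else */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 2, 0),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 1, 0),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);      /* required when unprivileged */
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    write(1, "still allowed to write\n", 23);    /* whitelisted */
    syscall(__NR_exit_group, 0);                 /* exit without extra syscalls */
}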
On Sun, Apr 14, 2013 at 01:43:05AM +0200, Kevin Kofler wrote:
I repeat: The proper solution is to prevent executing any machine code which is not part of the program's source code.
You're simply wrong about this. It's trivial to come up with a counter-example, if you're prepared to give it a couple of minutes of thought.
Rich.
On Sun, Apr 14, 2013 at 01:43:05AM +0200, Kevin Kofler wrote:
Richard W.M. Jones wrote:
I said it "doesn't implement full bounds checking for every C object", and I stand by that. I doesn't cover stack objects smaller than some cut-off size, nor any objects in static data or on the heap at all.
I never claimed it did. I said it prevents overwriting the return address on the stack to execute arbitrary code. That's all it ever claimed to do.
What you actually said was:
"build ALL packages in Fedora with not only -fPIE and RELRO, but also -fstack-protector-all (which is not included in the current hardened cflags). Also get rid of prelink which reduces the effectiveness of ASLR. Then drop SELinux which becomes obsolete if the executables cannot be exploited in the first place. (It only papers over the real problem.)"
which I interpret to mean that after using -fstack-protector-all and removing prelink, SELinux would become obsolete because no executable can be exploited.
And there is no cutoff size with -fstack-protector-all.
Not true, there is still a small cutoff size and many types of object not covered -- see Steve Grubb's email.
Rich.
On Monday, April 15, 2013 09:12:57 AM Richard W.M. Jones wrote:
which I interpret to mean that after using -fstack-protector-all and removing prelink, SELinux would become obsolete because no executable can be exploited.
I would say there is a place for SE Linux even if we compiled everything with "all", because FORTIFY_SOURCE coverage is not absolute. For example, about a month ago I ran the following test:
procs=`ls /proc | grep '^[0-9]' | sort -n`
for p in $procs
do
  res=`cat /proc/$p/maps 2>/dev/null | awk '$2 ~ "wx" { print $2 }'`
  if [ x"$res" != "x" ] ; then
    cat /proc/$p/cmdline | awk '{ printf "%-35s\t", $1 }'
    printf "%s\n" "$p"
  fi
done
What this does is display the programs with Writable and Executable memory. All Fedora desktops except Mate have WX memory. (I checked KDE, Gnome, Cinnamon, and Mate.) WX memory is dangerous because the normal exploit pattern is:
1) Allocate executable memory
2) Copy shell code into it
3) Jump to shell code
4) Profit!
The WX memory on virtually all the desktops means Step 1 is already completed. All an attacker needs to do is copy the payload to WX memory and jump to it. SE Linux is the last line of defence. Of course, to be effective, this means that SE Linux has to have policy around the same applications that parse untrusted media, so that when they are exploited it can recognize the abnormal behavior.
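To make "step 1" concrete, a minimal sketch (illustrative, and exactly the sort of request an execmem-style SELinux rule can deny): the process simply asks the kernel for a mapping that is writable and executable at the same time:

/* wx-probe.c - request a W+X anonymous mapping */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        perror("W+X mapping refused");
    else
        printf("W+X mapping granted at %p\n", p);
    return 0;
}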
And there is no cutoff size with -fstack-protector-all.
All really is all. There is no cutoff. (There is a cutoff for the regular stack-protector, but we have a good default.) It does not help for heap-related objects, though. What I would prefer rather than "all" is the "strong" patch. If you have a void function with no local variables, "all" will place a canary even though one is not needed. However, "strong" will not, and that will make the program run a bit faster. The "strong" patch really represents a good balance between speed and covering everything that matters.
And at Infiltrate 2013, one new technique for exploitation that was discussed was pivoting the stack pointer to something like the heap, where you might have executable permissions. I think this was discussed in the context of exploiting ARM systems, but in a post-PC world this will be increasingly important.
Also demonstrated at Infiltrate last week was the next kind of attack, which occurs even when you do things nearly perfectly. Windows 8 has vastly improved exploit countermeasures (there is a presentation at BlackHat 2012). For example, it has guard pages between memory allocations, and they changed heap allocations < 16K (which is the majority of all uses) to be bitmap-based so there is no possibility of heap state attacks. It also randomly assigns blocks so that behavior is non-deterministic. Sounds hard to exploit?
It turns out there is a weakness. The medium-sized allocator has predictable behavior and its memory gets reused. What the attack demonstrated was that an attacker can use the Feng Shui technique to cause the allocation of a structure holding a function pointer to land right beside a vulnerable buffer, so that they can modify the function pointer and then wait for it to get used. Eric also demonstrated that he could do this in the Windows 8 kernel, too. So, watch out for function pointers, too. :-)
-Steve
On 15/04/13 10:10 -0400, Steve Grubb wrote:
I would say there is a place for SE Linux even if we compiled everything with "all", because FORTIFY_SOURCE coverage is not absolute. For example, about a month ago I ran the following test:
procs=`ls /proc | grep '^[0-9]' | sort -n`
for p in $procs
do
  res=`cat /proc/$p/maps 2>/dev/null | awk '$2 ~ "wx" { print $2 }'`
  if [ x"$res" != "x" ] ; then
    cat /proc/$p/cmdline | awk '{ printf "%-35s\t", $1 }'
    printf "%s\n" "$p"
  fi
done
What this does is display the programs with Writable and Executable memory. All Fedora desktops except Mate have WX memory. (I checked KDE, Gnome, Cinnamon, and Mate.)
FWIW, LXDE seems to be fine as well (if polkitd and firefox are not counted).
Steve Grubb wrote:
On Monday, April 15, 2013 09:12:57 AM Richard W.M. Jones wrote:
which I interpret to mean that after using -fstack-protector-all and removing prelink, SELinux would become obsolete because no executable can be exploited.
I would say there is a place for SE Linux even if we compiled everything with "all", because FORTIFY_SOURCE coverage is not absolute. For example, about a month ago I ran the following test:
procs=`ls /proc | grep '^[0-9]' | sort -n`
for p in $procs
do
  res=`cat /proc/$p/maps 2>/dev/null | awk '$2 ~ "wx" { print $2 }'`
  if [ x"$res" != "x" ] ; then
    cat /proc/$p/cmdline | awk '{ printf "%-35s\t", $1 }'
    printf "%s\n" "$p"
  fi
done
Neat. I saved that in a script, then realized I could simplify it. This is nearly equivalent:
$ grep -lE '^[0-9a-f-]+ .wx' /proc/*/maps 2>/dev/null \
    | perl -ne 'm!^(/proc/(\d+))/.*! and printf qq(%5d %s\n), $2, `cat $1/cmdline`'
Sample output on an F18 system running the awesome window manager:
 1836 /usr/lib/firefox/firefox-no-remote-Pdefault
Notice that the NUL-separated arguments aren't shown properly, so filter the result through e.g., | tr '\0' ' '
Adjusted output:
 1836 /usr/lib/firefox/firefox -no-remote -P default
What this does is display the programs with Writable and Executable memory. All Fedora desktops except Mate have WX memory. (I checked KDE, Gnome, Cinnamon, and Mate.) WX memory is dangerous because the normal exploit pattern
On Mon, 2013-04-15 at 09:12 +0100, Richard W.M. Jones wrote:
which I interpret to mean that after using -fstack-protector-all and removing prelink, SELinux would become obsolete because no executable can be exploited.
No; there are plenty of exploits which aren't due to buffer overflows. Particularly in the era of web applications; a lot of people just toss up a Django or Ruby on Rails app, but it's *so* easy in those frameworks to have a bug that allows arbitrary code execution in the context of the service.
SELinux is a good match for these sorts of apps; we just don't have the management tools and documentation to make it easy for web application authors to use.
On 04/13/2013 07:43 PM, Kevin Kofler wrote:
Richard W.M. Jones wrote:
This would be excellent, and projects in this area could make a significant contribution. I suspect that any general code-to-policy translator will hit the Halting Problem, since it seems trivial to write a program which would not be possible to translate, but that doesn't mean it can't be solved for many useful real world cases.
That's exactly why SELinux policy is the wrong representation. It duplicates information of the code without being automatically transformable either way, requiring every change to be made twice.
From the security point of view this is a good thing, because it requires both the programmer's code and the security policy to independently agree to perform every action.
Otherwise, the programmer might write 'if (uid=0) then ...' and the automatic policy generator would obediently generate a rule for that.
I agree that it's tedious, but practical evidence seems to suggest that it's a converging process and we're almost there---'enforcing' SELinux is a viable setting for a majority of deployments.
Przemek Klosowski wrote:
I agree that it's tedious, but practical evidence seems to suggest that it's a converging process and we're almost there---'enforcing' SELinux is a viable setting for a majority of deployments.
I fail to see any kind of convergence. We still have weekly selinux-policy updates with a dozen bugs fixed every week! And new policies keep breaking things that used to work. To me, that's clear failure.
Kevin Kofler
On Saturday, April 13, 2013 08:44:44 PM Richard W.M. Jones wrote:
On Sat, Apr 13, 2013 at 08:36:53PM +0200, Kevin Kofler wrote:
Richard W.M. Jones wrote:
(1) -fstack-protector{,-all} doesn't implement full bounds checking for every C object.
But it prevents (with probability (256^n-1)/256^n, where n is the size of the canary in bytes, which for n=4 is approximately .99999999976717) exploiting the overflows to change the return address of any C function.
I said it "doesn't implement full bounds checking for every C object", and I stand by that.
It doesn't have to. It only places a canary on the stack, without any notion of size. This technique is pretty effective and ruins most functions that could be used for ROP gadgets. If the C object is on the heap, then all you have protecting you from coding mistakes is FORTIFY_SOURCE. It requires size information at compile time, and most of the time it's not available.
It doesn't cover stack objects smaller than some cut-off size,
-fstack-protector-all really is all. The default cutoff in Fedora is 4 bytes, which would cover cases where ints and char[] are interposed, as in some networking code. But more importantly, the default stack-protector only kicks in when the object is a char array. If it's an int array, or something exotic like an array within a struct, it does not kick in. That is what the -fstack-protector-strong patch provides. It's been floating around the internet and is the default for Chrome OS. All the testing I've done shows it catches all stack overflows of all kinds. We really need it integrated with Fedora's gcc.
-Steve
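A hedged illustration of the char-array restriction described above (function names are made up): compiled with plain -fstack-protector, only f_char() receives a canary; with the -strong patch (or -all), f_int() is instrumented as well. Compare by compiling with "gcc -S -fstack-protector ssp-demo.c" and grepping the assembly for __stack_chk_fail:

/* ssp-demo.c - the default ssp guards char arrays only */
#include <string.h>

void f_char(const char *s)
{
    char buf[64];                 /* char array: canary inserted */
    strcpy(buf, s);
}

void f_int(const int *p)
{
    int buf[64];                  /* int array: no canary without -strong/-all */
    memcpy(buf, p, 64 * sizeof(int));
}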
On 04/14/2013 03:34 AM, Steve Grubb wrote:
-fstack-protector-all really is all. The default cutoff in Fedora is 4 bytes, which would cover cases where ints and char[] are interposed, as in some networking code. But more importantly, the default stack-protector only kicks in when the object is a char array. If it's an int array, or something exotic like an array within a struct, it does not kick in. That is what the -fstack-protector-strong patch provides. It's been floating around the internet and is the default for Chrome OS. All the testing I've done shows it catches all stack overflows of all kinds. We really need it integrated with Fedora's gcc.
The basic patch has been committed upstream:
http://gcc.gnu.org/viewcvs/gcc?view=revision&revision=198699
It's still incomplete, though, particularly for C++. Slots for structs returned from functions can be allocated in the caller and are addressable in the callee (as a consequence of the named return value optimization). This means that the calling function should be instrumented with a canary. Han Shen is going to work on a follow-up patch which addresses this gap. Once that additional patch is in, we should consider backporting both patches.
On Saturday, April 13, 2013 08:36:53 PM Kevin Kofler wrote:
(1) -fstack-protector{,-all} doesn't implement full bounds checking for every C object.
But it prevents (with probability (256^n-1)/256^n, where n is the size of the canary in bytes, which for n=4 is approximately .99999999976717) exploiting the overflows to change the return address of any C function.
There is the off chance that an attacker correctly guesses the canary value. :-)
One thing that I found in doing a recent study was that there is a build system, scons, where our defaults are not getting used during compile. For example, the zfs-fuse package uses the scons build system. It did not have PIE, RELRO, stack protector, or FORTIFY_SOURCE anywhere. Anything else that uses scons should be inspected for similar problems.
-Steve
Steve Grubb wrote:
On Saturday, April 13, 2013 08:36:53 PM Kevin Kofler wrote:
But it prevents (with probability (256^n-1)/256^n, where n is the size of the canary in bytes, which for n=4 is approximately .99999999976717) exploiting the overflows to change the return address of any C function.
There is the off chance that an attacker correctly guesses the canary value. :-)
That's exactly why I wrote that it works with probability .99999999976717 (assuming a 32-bit canary), not 1. :-) Of course, the larger you make the canary, the closer to 0 the probability of guessing it will be. And of course, talking about probabilities only makes sense if the way the canary gets generated can be reasonably considered random and uniformly distributed.
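For the record, the arithmetic behind the quoted figure, assuming a uniformly random 32-bit canary (n = 4):

\[
\frac{256^{4}-1}{256^{4}} \;=\; 1 - 2^{-32} \;=\; 1 - \frac{1}{4294967296} \;\approx\; 0.99999999976716936
\]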
One thing that I found in doing a recent study was that there is a build system, scons, where our defaults are not getting used during compile. For example, the zfs-fuse package uses the scons build system. It did not have PIE, RELRO, stack protector, or FORTIFY_SOURCE anywhere. Anything else that uses scons should be inspected for similar problems.
Yes, you need to use something like this: http://pkgs.fedoraproject.org/cgit/mingw-nsis.git/tree/nsis-2.43-rpm-opt.pat... and RPM_LD_FLAGS should also be handled (it currently isn't in that patch).
Kevin Kofler
On 13/04/13 11:36 AM, Kevin Kofler wrote:
And I would argue that this amounts to second-guessing/duplicating what the program tries to do in an unmaintainable morass of rules, which even for the targeted policy (which is not even close to covering all programs in Fedora other than as "unconfined") keeps having bugs which need to be fixed every day, even after YEARS of debugging. SELinux just does not scale,
SELinux keeps having bugs *because* they progressively build out the policies. The coverage of the -targeted policy is now greater than it was a few releases back. If they kept the coverage of the stock policies the same over time there would be almost no new bugs, but instead, they increase the coverage and hence the security it provides progressively with each release. *Some* bugs are associated with files moving or program functionality changing or whatever, but most are just the result of the policies growing: the 'scaling' that you say isn't working.
Adam Williamson wrote:
SELinux keeps having bugs *because* they progressively build out the policies. The coverage of the -targeted policy is now greater than it was a few releases back. If they kept the coverage of the stock policies the same over time there would be almost no new bugs, but instead, they increase the coverage and hence the security it provides progressively with each release. *Some* bugs are associated with files moving or program functionality changing or whatever, but most are just the result of the policies growing: the 'scaling' that you say isn't working.
It isn't working because it's adding hundreds of new policy bugs in every new Fedora release. And coverage is still VERY far from 100% of Fedora.
Kevin Kofler
On Tue, 23 Apr 2013 22:35:41 +0200 Kevin Kofler kevin.kofler@chello.at wrote:
It isn't working because it's adding hundreds of new policy bugs in every new Fedora release.
<citation needed>
Seriously, can you please stop extrapolating from your personal usecase, and think of both the developers and actual users of the technology that /you/ do not need? Thank you.
Note that I am not implying that you are ignoring useful technology, so statements about the effectiveness of SELinux are beside the point; the point is that "useful" is in the eye of the beholder, and I am not one to tell you what is useful for you. I am asking you to return that favor.
--Stijn
Perhaps it is not working because most of the new policies are deployed in enforcing mode and not in permissive? But wasn't permissive born exactly for this?
Best
On 2013/4/23, Kevin Kofler kevin.kofler@chello.at wrote:
Adam Williamson wrote:
SELinux keeps having bugs *because* they progressively build out the policies. The coverage of the -targeted policy is now greater than it was a few releases back. If they kept the coverage of the stock policies the same over time there would be almost no new bugs, but instead, they increase the coverage and hence the security it provides progressively with each release. *Some* bugs are associated with files moving or program functionality changing or whatever, but most are just the result of the policies growing: the 'scaling' that you say isn't working.
It isn't working because it's adding hundreds of new policy bugs in every new Fedora release. And coverage is still VERY far from 100% of Fedora.
Kevin Kofler
As stated in the attachment.
poma
On Mon, Apr 01, 2013 at 09:15:45AM +0200, poma wrote:
As stated in the attachment.
This doesn't seem to have anything to do with hardened builds.
Rich.
On Mon, Apr 01, 2013 at 09:15:45AM +0200, poma wrote:
As stated in the attachment.
For bugs please use Bugzilla (attach the patch there).
Also, in mailing list please send a new email instead of replying to an existing email. Many people on mailing lists use software that'll still show your email as a reply to the original email, despite changing the subject.
On 01.04.2013 20:21, Olav Vitters wrote:
On Mon, Apr 01, 2013 at 09:15:45AM +0200, poma wrote:
As stated in the attachment.
For bugs please use Bugzilla (attach the patch there).
OK.
Also, in mailing list please send a new email instead of replying to an existing email. Many people on mailing lists use software that'll still show your email as a reply to the original email, despite changing the subject.
Lapsus calami. Sorry.
poma
re: Expanding the list of "Hardened Packages"
This proposal was originally at https://fedorahosted.org/fesco/ticket/1104
There is another performance interaction between -fPIE and prelinking. Random placement due to -fPIE on a main program can invalidate the pre-linking of shared libraries including glibc. As a result, costs immediately after execve() can be larger because ld-linux must re-base the library images dynamically inside the current process. If it happens when there are dozens of shared libraries, then the delay can be substantial because the interference is likely to cascade from one library to others. It is not possible to share any page which ld-linux modifies, so the cost is more physical RAM as well as more cycles.
In fact, random placement of vdso (linux-gate.so) causes a similar problem around 7% of the time (with just one shared library) on i686. Here's my analysis from 8 years ago: https://bugzilla.redhat.com/show_bug.cgi?id=162797#c4
--
On 01.04.2013 20:28, John Reiser wrote:
re: Expanding the list of "Hardened Packages"
This proposal was originally at https://fedorahosted.org/fesco/ticket/1104
There is another performance interaction between -fPIE and prelinking. Random placement due to -fPIE on a main program can invalidate the pre-linking of shared libraries including glibc
which is an argument against this idiotic prelinking as default, and not against -fPIE
This proposal was originally at https://fedorahosted.org/fesco/ticket/1104
http://fedoraproject.org/wiki/Hardened_Packages page mentions that "FESCo requires some packages to use PIE and relro hardening by default."
"Position independent executables" use a weak form of ASLR on Fedora-19-Alpha-TC3-i686. The kernel always chooses the region below and *near* the stack. The stack placement is randomized (always, regardless of executable type), but the range for "a position- independent executable" (ET_DYN with 0==PT_LOAD.p_vaddr) is only a small subset of the address space. Experiment suggests that the window is 1MiB (20 bits), but this includes the 12 low-order bits which cannot be changed. Thus the kernel uses only 256 possibilities. See test program below.
Note that "gcc -fPIE" is for compiling. Static linking requires "gcc -pie", else the result has Elf32_Hdr.e_type == ET_EXEC, which is not eligible for ASLR.
$ cat where.c
#include <stdlib.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

char buf[8192];

int main(void)
{
    int const fd = open("/proc/self/maps", O_RDONLY);
    for (;;) {
        size_t len = read(fd, buf, sizeof(buf));
        if (-1 == len) { perror("read"); exit(1); }
        if (0 == len) break;
        write(1, buf, len);
    }
    return 0;
}
$ gcc -m32 -pie -fPIE -g -o where where.c    # -m32 is redundant on real i686
$ readelf --headers ./where | grep Type:
  Type:    DYN (Shared object file)
$ readelf --headers ./where | grep LOAD
  LOAD  0x000000 0x00000000 0x00000000 0x0092c 0x0092c R E 0x1000
  LOAD  0x000ef0 0x00001ef0 0x00001ef0 0x00140 0x02170 RW  0x1000

$ ./where    # on i686 hardware
b750d000-b750e000 rw-p 00000000 00:00 0
b750e000-b76c6000 r-xp 00000000 08:3b 132197   /usr/lib/libc-2.17.so
b76c6000-b76c8000 r--p 001b7000 08:3b 132197   /usr/lib/libc-2.17.so
b76c8000-b76c9000 rw-p 001b9000 08:3b 132197   /usr/lib/libc-2.17.so
b76c9000-b76cc000 rw-p 00000000 00:00 0
b76e5000-b76e6000 rw-p 00000000 00:00 0
b76e6000-b76e7000 r-xp 00000000 00:00 0        [vdso]
b76e7000-b7706000 r-xp 00000000 08:3b 131776   /usr/lib/ld-2.17.so
b7706000-b7707000 r--p 0001e000 08:3b 131776   /usr/lib/ld-2.17.so
b7707000-b7708000 rw-p 0001f000 08:3b 131776   /usr/lib/ld-2.17.so
b7708000-b7709000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b7709000-b770a000 r--p 00000000 08:3b 654566   /home/jreiser/where
b770a000-b770b000 rw-p 00001000 08:3b 654566   /home/jreiser/where
b770b000-b770d000 rw-p 00000000 00:00 0
bfa65000-bfa86000 rw-p 00000000 00:00 0        [stack]

$ for i in 0 1 2 3 4 5 6 7 8 9 0; do ./where | grep where | sed 1q; done
b7749000-b774a000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b77f4000-b77f5000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b7795000-b7796000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b7719000-b771a000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b775f000-b7760000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b7785000-b7786000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b77a3000-b77a4000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b771a000-b771b000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b776f000-b7770000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b77d9000-b77da000 r-xp 00000000 08:3b 654566   /home/jreiser/where
b7768000-b7769000 r-xp 00000000 08:3b 654566   /home/jreiser/where
$
Hello all, the discussion has somewhat died down... If you have a specific proposal for a change in policy, please add it to https://fedorahosted.org/fesco/ticket/1104 ; hard data that demonstrate the impact, if any, in a situation relevant to Fedora (in particular, taking into account prelink as it is deployed by default) would be very welcome but is not a strict requirement.
(This is not intended to cut off the discussion on the mailing list, only to make it clear to FESCo whether there is any proposal for change or whether we are happy enough with the current status.)
Thank you, Mirek
Some more thoughts on possibly hardening more by default - comments and corrections very welcome.
(I'll call "mutating ASLR" a setup where the addresses change frequently, and "static ASLR" a setup where the addresses change only sometimes but differ between systems.)
* Servers that accept outside connections definitely should have mutating ASLR (attackers can make millions of connection attempts and outguess static ASLR). So PIE and prelink unused or ineffective (== current policy).
* Outguessing ASLR is actually not that easy: if you guess wrong, the server very likely crashes. On many servers, that effectively ends the attack (but see https://fedorahosted.org/fpc/ticket/191 )
* Note that this is all made less effective by the practice of having a single daemon process that runs for months - AFAIK openssh is one of the very few exceptions that actually takes full advantage of PIE.
* For unprivileged applications that don't use the network (e.g. evince), mutating ASLR is not really useful. Sure, I can craft a PDF exploit and convince the user to open it, but the user will not open the file a million times for me. Static ASLR is still useful because it prevents creating a single exploit that works on all systems.
* For applications that use the network as clients to mostly trusted servers or servers with limited functionality (e.g. chat, photo albums, IRC clients), mutating ASLR protects against attacks by the server to some extent, but is less necessary than for servers; millions of requests that impact the UI would definitely be noticed by the user. (There might be a million server-originated requests that aren't visible in the UI, but those would also likely crash the client and terminate the attack.)
* For applications like Firefox, which connect to many untrusted parties and are effectively controlled by them (even within a restricted sandbox), mutating ASLR is very useful.
With the current setup, we get "mutating ASLR" when compiled as PIE, and only partial "static ASLR" when not compiled as PIE (because the executable is static).
Based on the above I'm not so concerned about the prelink effect of making ASLR "static" - I'm more worried about the fact that executables are not randomized at all. So, would it perhaps make sense (and would it be possible) to build all executables as PIE, but continue to have two classes of builds, with "hardened builds" randomized on every start and ignoring prelink, and "nonhardened builds" prelinked every 14 days, to get complete "static ASLR" and still reduce startup latency? Mirek
On 04/11/2013 08:19 AM, Miloslav Trmač wrote:
(I'll call "mutating ASLR" a setup where the addresses change frequently, and "static ASLR" a setup where the addresses change only sometimes but differ between systems.)
- Servers that accept outside connections definitely should have mutating ASLR (attackers can make millions of connection attempts and outguess static ASLR). So PIE and prelink unused or ineffective (== current policy).
What does it mean "So PIE and prelink unused or ineffective"? That phrase lacks a verb. Also missing is the reasoning of how the conclusion "... unused or ineffective" is connected to the antecedent "attackers can ... outguess static ASLR". Is it cause-and-effect, or is it a counterexample, or what?
A process that is invoked by xinetd in response to a particular packet, which terminates after serving only one logical connection, and whose executable is built using "gcc -pie -fPIE" and not prelinked, then operates with short-lived, high-frequency, mutating ASLR. That's one case of a "server" process invoked by xinetd.
That same executable can be prelinked twice per hour, or once per hour, or once per day depending on historical frequency, real-time monitoring of logs, etc. Then it operates under mutating ASLR with medium or adapting frequency. That's another case of "server".
If "server" is a whole system which lasts at least one day (tens or hundreds of thousands of processes, or more) then "all executables -pie and -fPIE; and no prelink" is a highest-frequency mutating ASLR. It also has the highest direct cost for performing all that randomized relocation.
What's the point?
On Thu, Apr 11, 2013 at 6:32 PM, John Reiser jreiser@bitwagon.com wrote:
On 04/11/2013 08:19 AM, Miloslav Trmač wrote:
(I'll call "mutating ASLR" a setup where the addresses change frequently, and "static ASLR" a setup where the addresses change only sometimes but differ between systems.)
- Servers that accept outside connections definitely should have mutating ASLR (attackers can make millions of connection attempts and outguess static ASLR). So PIE and prelink unused or ineffective (== current policy).
What does it mean "So PIE and prelink unused or ineffective"? That phrase lacks a verb.
Sorry. "So, let's keep the current policy: a) PIE enabled, b) prelink unused/ineffective for these executables". It's not that prelink is ineffective against attackers, it's that as currently implemented, prelink does nothing when the executable is a PIE, so prelink does not disrupt "mutating ASLR".
A process that is invoked by xinetd in response to a particular packet, which terminates after serving only one logical connection, and whose executable is built using "gcc -pie -fPIE" and not prelinked, then operates with short-lived, high-frequency, mutating ASLR. That's one case of a "server" process invoked by xinetd.
Which of the major and frequently deployed servers actually use xinetd as their execution method? Yes, xinetd is there; AFAIK it's by far not the common case; we usually have a separate long-running daemon (perhaps forking a child for each connection) instead.
If "server" is a whole system which lasts at least one day (tens or hundreds of thousands of processes, or more) then "all executables -pie and -fPIE; and no prelink" is a highest-frequency mutating ASLR. It also has the highest direct cost for performing all that randomized relocation.
Again, with PIE, prelink currently does nothing, so prelink/no prelink does not currently make a difference in this case. Mirek
On Thu, Apr 11, 2013 at 05:19:46PM +0200, Miloslav Trmač wrote:
With the current setup, we get "mutating ASLR" when compiled as PIE,
Surely ... you get "mutating ASLR" only when compiled as PIE *and* the server process restarts itself between each connection or at least on a regular basis (ie. it's a forking or pre-forking server, or the server is started on each connection by inetd/systemd)?
Rich.
Richard W.M. Jones wrote:
On Thu, Apr 11, 2013 at 05:19:46PM +0200, Miloslav Trmač wrote:
With the current setup, we get "mutating ASLR" when compiled as PIE,
Surely ... you get "mutating ASLR" only when compiled as PIE *and* the server process restarts itself between each connection or at least on a regular basis (ie. it's a forking or pre-forking server, or the server is started on each connection by inetd/systemd)?
Or it crashes and gets restarted every time the attacker fails to guess the addresses.
Björn Persson
On 11.04.2013 19:52, Björn Persson wrote:
Richard W.M. Jones wrote:
On Thu, Apr 11, 2013 at 05:19:46PM +0200, Miloslav Trmač wrote:
With the current setup, we get "mutating ASLR" when compiled as PIE,
Surely ... you get "mutating ASLR" only when compiled as PIE *and* the server process restarts itself between each connection or at least on a regular basis (ie. it's a forking or pre-forking server, or the server is started on each connection by inetd/systemd)?
Or it crashes and gets restarted every time the attacker fails to guess the addresses.
which is exactly what ASLR is designed for
On Thu, Apr 11, 2013 at 12:54 PM, Reindl Harald h.reindl@thelounge.net wrote:
which is exactly what ASLR is designed for
It's designed to make certain types of attacks more difficult. It doesn't make them impossible, just much harder.
Here is an example.
When you write a security exploit, you generally have to do things like call into system libraries to do useful things. Generally you have a limited amount of room for your exploit's "payload", so the idea is to leverage what the system can already do. Calling system() would be an example of this. Long ago, before things like ASLR, if you had access to the binary you wanted to attack, you could inspect it to find the address of system(). That address didn't change between runs of the binary, so you could hard-code it into your exploit. With ASLR, every time you run the binary the addresses of various library functions are essentially random (not exactly random, but that's an exercise for the reader to figure out). If your payload needs to call system(), it first needs a way to discover that address; that added step should make it more difficult to exploit a problem. The technology isn't foolproof, of course, but that's a topic for another day.
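To see the effect concretely, here is a minimal sketch (my own illustration, assuming gcc on x86-64 Linux; the file name and build lines are made up):

// aslr-demo.cpp: print a few addresses and compare across runs.
// Build and run twice, first without and then with PIE:
//   g++ aslr-demo.cpp -o demo && ./demo && ./demo
//   g++ -fPIE -pie aslr-demo.cpp -o demo && ./demo && ./demo
// With -pie every line changes between runs; without it, the address
// of our own code stays fixed, which is what an exploit can hard-code.
#include <cstdio>
#include <cstdlib>

static void marker() {}   // any code symbol in this executable

int main()
{
    int on_stack = 0;
    void *on_heap = std::malloc(16);

    std::printf("libc system(): %p\n", reinterpret_cast<void *>(&std::system));
    std::printf("our code:      %p\n", reinterpret_cast<void *>(&marker));
    std::printf("heap:          %p\n", on_heap);
    std::printf("stack:         %p\n", static_cast<void *>(&on_stack));

    std::free(on_heap);
    return 0;
}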
Thanks.
On 12.04.2013 13:44, Josh Bressers wrote:
On Thu, Apr 11, 2013 at 12:54 PM, Reindl Harald h.reindl@thelounge.net wrote:
which is exactly what ASLR is designed for
It's designed to make certain types of attacks more difficult. It doesn't make them impossible, just much harder.
Here is an example.
When you write a security exploit, you generally have to do things like call into system libraries to do useful things. Generally you have a limited amount of room for your exploit's "payload", so the idea is to leverage what the system can already do. Calling system() would be an example of this. Long ago, before things like ASLR, if you had access to the binary you wanted to attack, you could inspect it to find the address of system(). That address didn't change between runs of the binary, so you could hard-code it into your exploit. With ASLR, every time you run the binary the addresses of various library functions are essentially random (not exactly random, but that's an exercise for the reader to figure out). If your payload needs to call system(), it first needs a way to discover that address; that added step should make it more difficult to exploit a problem. The technology isn't foolproof, of course, but that's a topic for another day.
that is nothing new
that is the reason why any application which gets input data from the internet has to use ASLR, and anything which makes ASLR less effective has to be considered a bug
yes, there is performance AND security, but these days security comes first
so much software is written these days with no care for performance, and security is usually not the reason those developers waste resources, so there is no excuse
fix the real performance bugs in the code, but don't compensate for the overall situation with prelink and less security
On Friday, April 12, 2013 06:44:33 AM Josh Bressers wrote:
On Thu, Apr 11, 2013 at 12:54 PM, Reindl Harald h.reindl@thelounge.net wrote:
which is exactly what ASLR is designed for
It's designed to make certain types of attacks more difficult. It doesn't make them impossible, just much harder.
Here is an example.
When you write a security exploit, you generally have to do things like call into system libraries to do useful things. Generally you have a limited amount of room for your exploit's "payload", so the idea is to leverage what the system can already do. Calling system() would be an example of this. Long ago, before things like ASLR, if you had access to the binary you wanted to attack, you could inspect it to find the address of system(). That address didn't change between runs of the binary, so you could hard-code it into your exploit. With ASLR, every time you run the binary the addresses of various library functions are essentially random (not exactly random, but that's an exercise for the reader to figure out).
I would like to point out that a non-PIE 64 bit application will only get 14 bits of randomization of the heap. In my opinion, this must be fixed since this is very predictable. Even jemalloc provides 19 bits of heap randomization - which is not ideal, but is better than our current default.
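One way to observe this is to print where the first heap allocation lands and compare across runs; a minimal sketch (file name and run counts are illustrative):

// heap-base.cpp: print where the first heap allocation lands.
// Build once without and once with -fPIE -pie, run each binary a few
// thousand times, and count the distinct addresses, e.g.:
//   for i in $(seq 2000); do ./heap-base; done | sort -u | wc -l
// The non-PIE build should show only the narrow spread described above.
#include <cstdio>
#include <cstdlib>

int main()
{
    void *p = std::malloc(16);   // an early allocation, near the brk start
    std::printf("%p\n", p);
    std::free(p);
    return 0;
}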
-Steve
On Thu, Apr 11, 2013 at 7:19 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Thu, Apr 11, 2013 at 05:19:46PM +0200, Miloslav Trmač wrote:
With the current setup, we get "mutating ASLR" when compiled as PIE,
Surely ... you get "mutating ASLR" only when compiled as PIE *and* the server process restarts itself between each connection or at least on a regular basis (ie. it's a forking or pre-forking server, or the server is started on each connection by inetd/systemd)?
Yes - actually you need an execve(); merely forking does not change address space layout. Mirek
On Wednesday, April 10, 2013 03:55:46 PM Miloslav Trmač wrote:
Hello all, the discussion has somewhat died down... If you have a specific proposal for a change in policy, please add it to https://fedorahosted.org/fesco/ticket/1104 ; hard data that demonstrate the impact, if any, in a situation relevant to Fedora (in particular, taking into account prelink as it is deployed by default) would be very welcome but is not a strict requirement.
(This is not intended to cut off the discussion on the mailing list, only to make it clear to FESCo whether there is any proposal for change or whether we are happy enough with the current status.)
I don't think there is any need to extend the set of packages that _should_ get hardening. The current guidelines are sufficient. What is happening is that the packages with apps that meet the criteria for hardening are not actually getting it. I have opened dozens of bugs on the "core" packages that matter, but even those bugzillas are still not complete.
Bottom line, we just need more prodding of maintainers that have apps that need hardening based on current guidelines.
-Steve
On Sat, Apr 13, 2013 at 11:33 AM, Steve Grubb wrote:
I don't think there is any need to extend the set of packages that _should_ get hardening. The current guidelines are sufficient. What is happening is that the packages with apps that meet the criteria for hardening are not actually getting it. I have opened dozens of bugs on the "core" packages that matter, but even those bugzillas are still not complete.
Is there a tracker bug? Proven packagers can help
Rahul
On Saturday, April 13, 2013 12:19:42 PM Rahul Sundaram wrote:
On Sat, Apr 13, 2013 at 11:33 AM, Steve Grubb wrote:
I don't think there is any need to extend the set of packages that _should_ get hardening. The current guidelines are sufficient. What is happening is that the packages with apps that meet the criteria for hardening are not actually getting it. I have opened dozens of bugs on the "core" packages that matter, but even those bugzillas are still not complete.
Is there a tracker bug? Proven packagers can help
I have a tracker bug for issues identified on the core set of packages that would be part of a common criteria certification:
https://bugzilla.redhat.com/show_bug.cgi?id=853068
which then shows:
dbus: https://bugzilla.redhat.com/show_bug.cgi?id=853152
NetworkManager: https://bugzilla.redhat.com/show_bug.cgi?id=853199
I have not run the script that checks a distribution on F19 yet, so maybe there are more?
http://people.redhat.com/sgrubb/files/rpm-chksec
To check a typical install and only get the packages that do not meet policy, do this:
./rpm-chksec --all | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g" | egrep -w 'no|PACKAGE'
A small sample on F18:
PACKAGE                  RELRO  PIE  CLASS
abrt-addon-ccpp.x86_64   yes    no   setuid
abrt.x86_64              yes    no   daemon
accountsservice.x86_64   yes    no   daemon
acpid.x86_64             yes    no   daemon
agave.x86_64             no     yes  exec
akonadi.x86_64           yes    no   network-local
alsa-lib.x86_64          yes    no   network-ip
alsa-utils.x86_64        yes    no   network-ip
apg.x86_64               yes    no   daemon
arpwatch.x86_64          yes    no   daemon
But it should be noted that the script does not identify parsers of untrusted media. This would be stuff like: gnash, ooffice, evince, poppler, firefox, konqueror, xchat, wireshark, eog, kmail, evolution, rpm, etc. I don't know how to automate that.
-Steve
On 13.04.2013 19:46, Steve Grubb wrote:
http://people.redhat.com/sgrubb/files/rpm-chksec
To check a typical install and only get the packages that do not meet policy, do this:
./rpm-chksec --all | sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g" | egrep -w 'no|PACKAGE'
A small sample on F18:
PACKAGE                  RELRO  PIE  CLASS
abrt-addon-ccpp.x86_64   yes    no   setuid
abrt.x86_64              yes    no   daemon
accountsservice.x86_64   yes    no   daemon
acpid.x86_64             yes    no   daemon
agave.x86_64             no     yes  exec
akonadi.x86_64           yes    no   network-local
alsa-lib.x86_64          yes    no   network-ip
alsa-utils.x86_64        yes    no   network-ip
apg.x86_64               yes    no   daemon
arpwatch.x86_64          yes    no   daemon
But it should be noted that the script does not identify parsers of untrusted media. This would be stuff like: gnash, ooffice, evince, poppler, firefox, konqueror, xchat, wireshark, eog, kmail, evolution, rpm, etc. I don't know how to automate that
which raises the question again:
wouldn't it be better to build the whole distribution hardened? Experience shows that nearly anything is exploitable in the long run, and performance comes after security.
Performance would be improved far more by developers learning how to avoid wasting resources than by rejecting ANY technique that makes things more secure. Security is a concept of many pieces, and each piece makes the overall system better.
On Sat, Apr 13, 2013 at 7:51 PM, Reindl Harald h.reindl@thelounge.net wrote:
which raises the question again:
wouldn't it be better to build the whole distribution hardened? Experience shows that nearly anything is exploitable in the long run, and performance comes after security.
The logical conclusion from this is to move to a language with automatic memory management. The "top vulnerability" reports for programs written in C/C++ and most other languages are so different that starting a new project that processes untrusted data in C/C++ is becoming indefensible.
We seem to be stuck with C as the lowest common denominator that can be used from any runtime; long-term we _need_ to move away from that, or Linux will gain the reputation of the least-secure OS around.
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening. Mirek
On 15.04.2013 18:48, Miloslav Trmač wrote:
On Sat, Apr 13, 2013 at 7:51 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
which raises the question again: wouldn't it be better to build the whole distribution hardened? Experience shows that nearly anything is exploitable in the long run, and performance comes after security.
The logical conclusion from this is to move to a language with automatic memory management. The "top vulnerability" reports for programs written in C/C++ and most other languages are so different that starting a new project that processes untrusted data in C/C++ is becoming indefensible.
no, that would mean throwing away a lot of code, and a hurried rewrite in whatever other language does not make things secure
We seem to be stuck with C as the lowest common denominator that can be used from any runtime; long-term we _need_ to move away from that, or Linux will gain the reputation of the least-secure OS around.
not really, as the securityfocus lists and the changelogs of many Fedora packages that are not in C/C++ prove; a fool will always implement insecure software. And look at Java applets over the last year!
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening
and that is why existing technologies to make binaries more secure should be used
On Mon, Apr 15, 2013 at 7:40 PM, Reindl Harald h.reindl@thelounge.net wrote:
On 15.04.2013 18:48, Miloslav Trmač wrote:
On Sat, Apr 13, 2013 at 7:51 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
which raises the question again: wouldn't it be better to build the whole distribution hardened? Experience shows that nearly anything is exploitable in the long run, and performance comes after security.
The logical conclusion from this is to move to a language with automatic memory management. The "top vulnerability" reports for programs written in C/C++ and most other languages are so different that starting a new project that processes untrusted data in C/C++ is becoming indefensible.
no, that would mean throwing away a lot of code, and a hurried rewrite in whatever other language does not make things secure
I was not advocating throwing away existing code, merely not continuing to start new projects in C if possible.
We seem to be stuck with C as the lowest common denominator that can be used from any runtime; long-term we _need_ to move away from that, or Linux will gain the reputation of the least-secure OS around.
not really, as the securityfocus lists and the changelogs of many Fedora packages that are not in C/C++ prove; a fool will always implement insecure software. And look at Java applets over the last year!
Sure, moving away from C/C++ does not make programs completely secure; however, on average, C/C++ programs are noticeably less secure (because most vulnerabilities that can happen in higher-level languages can also happen in C, but not the other way around). We all wish for programs to be bug-free, but that's just not what happens in the real world. Mirek
On Mon, 2013-04-15 at 20:17 +0200, Miloslav Trmač wrote:
On Mon, Apr 15, 2013 at 7:40 PM, Reindl Harald h.reindl@thelounge.net wrote:
On 15.04.2013 18:48, Miloslav Trmač wrote:
On Sat, Apr 13, 2013 at 7:51 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
which raises the question again: wouldn't it be better to build the whole distribution hardened? Experience shows that nearly anything is exploitable in the long run, and performance comes after security.
The logical conclusion from this is to move to a language with automatic memory management. The "top vulnerability" reports for programs written in C/C++ and most other languages are so different that starting a new project that processes untrusted data in C/C++ is becoming indefensible.
no, that would mean throwing away a lot of code, and a hurried rewrite in whatever other language does not make things secure
I was not advocating throwing away existing code, merely not continuing to start new projects in C if possible.
We seem to be stuck with C as the lowest common denominator that can be used from any runtime; long-term we _need_ to move away from that, or Linux will gain the reputation of the least-secure OS around.
not really, as the securityfocus lists and the changelogs of many Fedora packages that are not in C/C++ prove; a fool will always implement insecure software. And look at Java applets over the last year!
Sure, moving away from C/C++ does not make programs completely secure; however, on average, C/C++ programs are noticeably less secure (because most vulnerabilities that can happen in higher-level languages can also happen in C, but not the other way around). We all wish for programs to be bug-free, but that's just not what happens in the real world.
Mirek
I believe that you may be right about the vulnerabilities, but the higher-level the language, the more hidden vulnerabilities exist, just perhaps not the same ones. I do not believe that C or C++ is inherently less secure than other languages, nor do I believe that there is some statistical way of proving that. One can write good or bad code in any language. It just happens that C is a "senior" language, and due to that maturity more of its flaws are known. That doesn't mean newer languages are more secure, only that without sufficient exposure, their flaws have not yet been revealed.
Maybe I'm wrong, but given that I won't likely be around by the time these newer languages have become senior, I won't see my statement refuted.
Regards, Les H
From: les <hlhowell@pacbell.net>
Maybe I'm wrong, but given that I won't likely be around by the time these newer languages have become senior, I won't see my statement refuted.
You needn't wait long. Ada has been around for some three decades already. ;-)
-- John Florian
les wrote:
I do not believe that C or C++ are inherently less secure than other languages, nor do I believe that there is some statistical way of proving that fact. One can write good or bad code in all languages.
I believe you are wrong. Some languages are more secure than other languages. Of course an infallible superhuman could write good code in any language, and a fool can write bad code in any language, but a normal human programmer will write better code in a well-designed language than in an ill-designed language.
Björn Persson
On 04/15/2013 08:17 PM, Miloslav Trmač wrote:
Sure, moving away from C/C++ does not make programs completely secure; however, on average, C/C++ programs are noticeably less secure (because most vulnerabilities that can happen in higher-level languages can also happen in C, but not the other way around).
To illustrate this point, here's a fairly concrete example: If you have got a program that is written in a memory-safe language which also provides some form of encapsulation, it is possible to demonstrate convincingly (*) that a software module which provides an encryption/decryption service never leaks the key material. If there is no memory safety, other code in the program could peek at the key bits, and encapsulation is no longer guaranteed. What should be a local property of the module now turns into a global property of the program, making review more difficult.
(*) As soon as cryptography is involved, mathematically rigorous results are the exception.
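To make the unsafe half of that contrast concrete, a deliberately broken C++ sketch (invented names; the stray read is undefined behavior, included purely for illustration):

// keyleak.cpp: without memory safety, code outside the "crypto module"
// can walk straight into the key.  The over-long read below is
// undefined behavior, done on purpose for the demonstration.
#include <cstdio>

struct Session {
    char request[16];   // attacker-influenced data
    char key[16];       // the secret the module tries to encapsulate
};

int main()
{
    Session s = { "ping", "0123456789abcde" };

    // Read 32 bytes from a 16-byte buffer: the tail of the output is
    // the neighboring key, with no language-level boundary in between.
    for (int i = 0; i < 32; i++)
        std::putchar(s.request[i] ? s.request[i] : '.');
    std::putchar('\n');
    return 0;
}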
On Tue, 16 Apr 2013 14:05:39 +0200 Florian Weimer fweimer@redhat.com wrote:
On 04/15/2013 08:17 PM, Miloslav Trmač wrote:
Sure, moving away from C/C++ does not make programs completely secure; however, on average, C/C++ programs are noticeably less secure (because most vulnerabilities that can happen in higher-level languages can also happen in C, but not the other way around).
To illustrate this point, here's a fairly concrete example: If you have got a program that is written in a memory-safe language which also provides some form of encapsulation, it is possible to demonstrate convincingly (*) that a software module which provides an encryption/decryption service never leaks the key material. If there is no memory safety, other code in the program could peek at the key bits, and encapsulation is no longer guaranteed. What should be a local property of the module now turns into a global property of the program, making review more difficult.
(*) As soon as cryptography is involved, mathematically rigorous results are the exception.
Memory-safe languages don't protect against key material being left un-zeroed in pages, nor against side-channel attacks due to non-constant operation timing, power, etc. Sure there is a certain class of problems you aren't going to get in Python that you are in C, but it's not a panacea.
Conrad
Miloslav Trmač wrote:
The logical conclusion from this is to move to a language with automatic memory management. The "top vulnerability" reports for programs written in C/C++ and most other languages so different that starting a new project that processes untrusted data in C/C++ is becoming indefensible.
If by "automatic memory management" you mean garbage collection, then that's not really what we need. Garbage collection has advantages, but what is needed to stop the buffer overflows is bounds checking. The compiler needs to keep track of how big each object is and insert code to check that writes to an array stay within the bounds of the array.
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening.
I recommend Ada. Ada does bounds checking, and is compiled to machine code with performance comparable to C. Only compiler bugs can cause buffer overflows in Ada, unless you're so foolhardy that you disable the bounds checking. Coding in Ada reduces not only security holes but also other bugs, because the language is designed to help the programmer avoid mistakes, and to allow the compiler to catch many mistakes as compile-time errors instead of run-time errors.
Ada doesn't do garbage collection across the whole program, but features such as controlled types, generic data structures and out parameters greatly reduce the need for garbage collection. The double-free problem is also eliminated. (Garbage collection was made optional in Ada so that the language would be suitable for embedded real-time systems, and in practice most compilers don't provide it.)
The disadvantage of Ada is a relative scarcity of libraries. That's not a problem with the language itself but a result of low popularity, and would change with time if more programmers would start using Ada. Help with packaging the libraries that do exist would be welcome.
A free compiler? Yes, we have one in Fedora.
Björn Persson
On 04/15/2013 09:04 PM, Björn Persson wrote:
Miloslav Trmač wrote:
The logical conclusion from this is to move to a language with automatic memory management. The "top vulnerability" reports for programs written in C/C++ and most other languages so different that starting a new project that processes untrusted data in C/C++ is becoming indefensible.
If by "automatic memory management" you mean garbage collection, then that's not really what we need. Garbage collection has advantages, but what is needed to stop the buffer overflows is bounds checking. The compiler needs to keep track of how big each object is and insert code to check that writes to an array stay within the bounds of the array.
There's also the issue of dangling pointers (pointers which point to a memory location which now holds an object of a different type). They can result from misapplied memory management, or from type safety loopholes in the language definition. An example for Ada is here:
http://www.enyo.de/fw/notes/ada-type-safety.html
(See the postscript—this was already known in the Ada 83 days. I still find it remarkable. It's possible to work around this in a GC-based implementation.)
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening.
I recommend Ada. Ada does bounds checking, and is compiled to machine code with performance comparable to C.
Yes, Ada has some nice features. At least there are real arrays, but they are somewhat cumbersome to work with, compared to Java, Python or, well, C pointers. There are two aspects: preservation of array bounds in slices (so that you have to write Table (Table'First + Offset) to access the element Offset of Table, Offset ranging from 0 to Table'Length - 1), and the fact that it is impossible to put an unconstrained array (of arbitrary length) into a constrained object (i.e., you need an indirection).
For many programming tasks, arrays might be at the wrong level of abstraction, but we have a lot of plumbing code which uses them heavily.
Garbage collection support would make it easier to introduce the indirection, but it would require a conservative collector at present, and those we have right now (Boehm-Demers-Weiser and the Go collectors) require a process-global view, touch signal handlers etc., so they do away with one significant Ada advantage (see below).
Only compiler bugs can cause buffer overflows in Ada, unless you're so foolhardy that you disable the bounds checking.
The GNAT run-time is compiled without language-defined checks, and it used to have at least one buffer overflow in the Ada part. Many Ada libraries used to follow GNAT's example and disabled the checks as well, but this has changed during the last few years, it appears. Manual overflow checks are hampered by the fact that -gnato still isn't the default.
Ada doesn't do garbage collection across the whole program, but features such as controlled types, generic data structures and out parameters greatly reduce the need for garbage collection. The double-free problem is also eliminated. (Garbage collection was made optional in Ada so that the language would be suitable for embedded real-time systems, and in practice most compilers don't provide it.)
Controlled types have a fixed overhead which is quite visible with small objects. By default, code for abort deferral is emitted, the vtable pointer takes space, and avoiding unnecessary indirect calls takes some care by the programmer. There's also no well-defined ABI for shared libraries (and adding a subprogram can change the name of existing subprograms).
On the other hand, lack of garbage collection means that it's feasible to have some GNAT-compiled part in a larger program, without the larger program noticing that there's a component not written in C. I sometimes call this "deep embedding support", and only very few language implementations have this property at present. (Even with GNAT, you have to restrict yourself to a language subset.) The list of feasible systems programming languages is much, much longer, but most need global run-time state, threads, signal handler manipulation, have address space layout requirements etc. But that is primarily an implementation issue, not an aspect which is inherent to most languages.
The other aspect is low baseline overhead from the run-time system. We don't want programmers to rewrite working system components in C only to reduce memory usage. This is what happened (or is expected to happen) to some daemons written in Python.
Florian Weimer wrote:
Yes, Ada has some nice features. At least there are real arrays, but they are somewhat cumbersome to work with, compared to Java, Python or, well, C pointers. There are two aspects: preservation of array bounds in slices (so that you have to write Table (Table'First + Offset) to access the element Offset of Table, Offset ranging from 0 to Table'Length - 1)
That array bounds must be preserved becomes obvious when you consider arrays where the index type has a meaning beyond just position in the array. If you have an array Week with a range of Monday..Sunday, and you take the slice Week(Saturday..Sunday) and call it Weekend, then you really don't want Weekend to suddenly have the indexes Monday and Tuesday.
The GNAT run-time is compiled without language-defined checks, and it used to have at least one buffer overflow in the Ada part. Many Ada libraries used to follow GNAT's example and disabled the checks as well, but this has changed during the last few years, it appears. Manual overflow checks are hampered by the fact that -gnato still isn't the default.
Those are things that we can control in Fedora. I don't see why we couldn't compile libgnat with checks enabled if we wanted to – except for the code that performs the checking, I guess.
The RPM macros Gnatmake_optflags and GPRbuild_optflags contain mandatory compiler flags that try to prevent suppression of important checks. Unfortunately they can't override pragmas, but tools to check for dangerous pragmas could be developed. I will add -gnato to the mandatory compiler flags if the FPC decides so.
Controlled types have a fixed overhead which is quite visible with small objects.
Of course there is always some overhead. Do you mean that they have a significantly larger overhead than garbage collectors have?
Björn Persson
On 04/18/2013 01:08 AM, Björn Persson wrote:
Florian Weimer wrote:
Yes, Ada has some nice features. At least there are real arrays, but they are somewhat cumbersome to work with, compared to Java, Python or, well, C pointers. There are two aspects: preservation of array bounds in slices (so that you have to write Table (Table'First + Offset) to access the element Offset of Table, Offset ranging from 0 to Table'Length - 1)
That array bounds must be preserved becomes obvious when you consider arrays where the index type has a meaning beyond just position in the array. If you have an array Week with a range of Monday..Sunday, and you take the slice Week(Saturday..Sunday) and call it Weekend, then you really don't want Weekend to suddenly have the indexes Monday and Tuesday.
Weekdays are a very bad example because it is locale-dependent whether Monday < Sunday or the other way round. So you really can't fit them well into an enumeration type, and Java 8 orders the enumeration values alphabetically by their name, to make that point perfectly clear.
In addition, in Ada, enumeration types can have holes, which makes their use as array indexes particularly suspect. All this suggests to me that arrays over enumeration types are probably better served by associative arrays than by arrays accessed and ordered by some (integer-equivalent) scalar value.
Controlled types have a fixed overhead which is quite visible with small objects.
Of course there is always some overhead. Do you mean that they have a significantly larger overhead than garbage collectors have?
Compared to C++ destructors. Abort deferral takes its toll, and the last time I looked at this, the front end emitted the finalizer call in such a way that an indirect call remained in the generated machine code. (In C++, destructors can be non-virtual.)
On 13 May 2013 11:21, Florian Weimer fweimer@redhat.com wrote:
On 04/18/2013 01:08 AM, Björn Persson wrote:
Florian Weimer wrote:
Yes, Ada has some nice features. At least there are real arrays, but they are somewhat cumbersome to work with, compared to Java, Python or, well, C pointers. There are two aspects: preservation of array bounds in slices (so that you have to write Table (Table'First + Offset) to access the element Offset of Table, Offset ranging from 0 to Table'Length - 1)
That array bounds must be preserved becomes obvious when you consider arrays where the index type has a meaning beyond just position in the array. If you have an array Week with a range of Monday..Sunday, and you take the slice Week(Saturday..Sunday) and call it Weekend, then you really don't want Weekend to suddenly have the indexes Monday and Tuesday.
Weekdays are a very bad example because it is locale-dependent whether Monday < Sunday or the other way round. So you really can't fit them well into an enumeration type, and Java 8 orders the enumeration values alphabetically by their name, to make that point perfectly clear.
I suppose a week actually looks like a ring buffer rather than a linear array. Week(Saturday..Sunday) would make sense in that context, but it'd take someone more familiar with esoteric languages than me to say whether there's any language that provides that (not a feature much in demand I'd think).
On Mon, Apr 15, 2013 at 06:48:32PM +0200, Miloslav Trmač wrote:
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening.
Ada, Eiffel, Go, Coq + OCaml, Erlang, Haskell, CompCert[*], etc. etc.
All these languages are viable. I think that programmers falsely think they cannot choose the most suitable language for the task at hand, but my experience is this is more of a mental barrier than a real problem.
Rich.
[*] Very unfortunately CompCert, a certified correct subset-of-C compiler, is non-free.
On Mon, Apr 15, 2013 at 11:19 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Mon, Apr 15, 2013 at 06:48:32PM +0200, Miloslav Trmač wrote:
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening.
Ada, Eiffel, Go, Coq + OCaml, Erlang, Haskell, CompCert[*], etc. etc.
All these languages are viable.
Perhaps for end-user applications[1], but not for libraries/code reuse/implementing platform interfaces to be usable by applications. How do I call an Eiffel library from Ada and pass it a callback written in Go? And if widely-used libraries are not available, that again makes it less viable to write applications using them. Mirek
[1] To take a random set of examples, how many of these languages have libraries or bindings for (all of) TLS, good i18n, libselinux, readline, D-Bus, GTK?
On Tue, Apr 16, 2013 at 03:12:38PM +0200, Miloslav Trmač wrote:
On Mon, Apr 15, 2013 at 11:19 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Mon, Apr 15, 2013 at 06:48:32PM +0200, Miloslav Trmač wrote:
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening.
Ada, Eiffel, Go, Coq + OCaml, Erlang, Haskell, CompCert[*], etc. etc.
All these languages are viable.
Perhaps for end-user applications[1], but not for libraries/code reuse/implementing platform interfaces to be usable by applications. How do I call an Eiffel library from Ada and pass it a callback written in Go?
The answer (perhaps sadly) is you have to expose a C API. At least OCaml and golang can generate C-compatible shared libraries. Probably Eiffel and Ada too, although I'm not certain on the details.
Passing pointers to objects from one language to another is likely *not* possible however. And it gets hard when you want to mix lots of languages (because GCs won't cooperate with each other).
.Net does this right, although requiring a heavyweight VM to do it is probably not necessary.
[1] To take a random set of examples, how many of these languages have libraries or bindings for (all of) TLS, good i18n, libselinux, readline, D-Bus, GTK?
OCaml has 3/6. Having an easy to use FFI helps a lot here. The OCaml code in libguestfs uses a number of different C APIs, and mostly I've just hand-written snippets of FFI to do it. It's not a lot of code, although not ideal.
Rich.
Richard W.M. Jones wrote:
On Tue, Apr 16, 2013 at 03:12:38PM +0200, Miloslav Trmač wrote:
On Mon, Apr 15, 2013 at 11:19 PM, Richard W.M. Jones rjones@redhat.com wrote:
On Mon, Apr 15, 2013 at 06:48:32PM +0200, Miloslav Trmač wrote:
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening.
Ada, Eiffel, Go, Coq + OCaml, Erlang, Haskell, CompCert[*], etc. etc.
All these languages are viable.
Perhaps for end-user applications[1], but not for libraries/code reuse/implementing platform interfaces to be usable by applications. How do I call an Eiffel library from Ada and pass it a callback written in Go?
The answer (perhaps sadly) is you have to expose a C API. At least OCaml and golang can generate C-compatible shared libraries. Probably Eiffel and Ada too, although I'm not certain on the details.
The Ada standard specifies features for interfacing to C, COBOL and Fortran, both importing and exporting interfaces. GNAT adds support for C++, claiming complete interoperability between Ada tagged types and C++ classes. I'm sure support for other compiled languages could be added if there were a demand, especially if GCC can compile those languages.
Passing pointers to objects from one language to another is likely *not* possible however.
Ada imports and exports C pointers just fine, including pointers to functions and records. Strings typically need to be converted though, as strings aren't null-terminated in Ada.
Of course Ada can't prevent the C code from corrupting the pointers.
Björn Persson
On Thu, Apr 18, 2013 at 01:14:33AM +0200, Björn Persson wrote: [...]
Passing pointers to objects from one language to another is likely *not* possible however.
Ada imports and exports C pointers just fine, including pointers to functions and records.
I should have been clearer. I meant passing pointers between non-C languages is likely not possible.
We also suffer this problem in reverse: passing a pointer between two C libraries is not possible in another non-C language. eg. Passing a libvirt virDomainPtr from (Perl) Sys::Virt to Sys::Guestfs.
Rich.
Once upon a time, Richard W.M. Jones rjones@redhat.com said:
I should have been clearer. I meant passing pointers between non-C languages is likely not possible.
Well, it depends on the language, but it is probably possible with some glue code (not necessarily practical though). For example, giving perl code a pointer isn't particularly useful, since perl doesn't provide for random memory access that way. It would be possible for an XS module to provide access to the memory at the location though.
We also suffer this problem in reverse: passing a pointer between two C libraries is not possible in another non-C language. eg. Passing a libvirt virDomainPtr from (Perl) Sys::Virt to Sys::Guestfs.
Why do you say that? I'm pretty sure you can.
On Thu, Apr 18, 2013 at 10:39:43AM -0500, Chris Adams wrote:
Once upon a time, Richard W.M. Jones rjones@redhat.com said:
We also suffer this problem in reverse: passing a pointer between two C libraries is not possible in another non-C language. eg. Passing a libvirt virDomainPtr from (Perl) Sys::Virt to Sys::Guestfs.
Why do you say that? I'm pretty sure you can.
I shouldn't say "not possible", but not very easy. A virDomainPtr in Sys::Virt is wrapped in some object which is very specific to the internals of Sys::Virt, thus cannot be extracted by Sys::Guestfs. It would require Sys::Virt to have a separate C library to provide this.
One advantage of gobject is it can do this sort of thing, although IME gobject has so many other disadvantages that it's not worth considering for serious bindings.
Rich.
Once upon a time, Richard W.M. Jones rjones@redhat.com said:
I shouldn't say "not possible", but not very easy. A virDomainPtr in Sys::Virt is wrapped in some object which is very specific to the internals of Sys::Virt, thus cannot be extracted by Sys::Guestfs. It would require Sys::Virt to have a separate C library to provide this.
No, it wouldn't require a separate C library, just a way to export it. This has nothing to do with it being a pointer (or the language being perl); something that is internal to one object/library/module/etc. and not exported can't be accessed from another object/library/module/etc.
Whatever you think. But it means you can't just write:
$dom = Sys::Virt->get_domain_by_name ("foo");
$g = Sys::Guestfs->create ();
$g->add_libvirt_domain ($dom);
Rich.
Once upon a time, Richard W.M. Jones rjones@redhat.com said:
Whatever you think. But it means you can't just write:
$dom = Sys::Virt->get_domain_by_name ("foo");
$g = Sys::Guestfs->create ();
$g->add_libvirt_domain ($dom);
And that has nothing to do with what you said, languages being able (or not) to pass pointers. That's the API of the two modules not supporting what you want to do. I'm pretty sure the APIs of the modules could be changed to support this, assuming the underlying libraries support it.
Richard W.M. Jones wrote:
On Thu, Apr 18, 2013 at 01:14:33AM +0200, Björn Persson wrote: [...]
Passing pointers to objects from one language to another is likely *not* possible however.
Ada imports and exports C pointers just fine, including pointers to functions and records.
I should have been clearer. I meant passing pointers between non-C languages is likely not possible.
If you mean some kind of fat pointer that the compiler implements as more than just a memory address, then one compiler would have to know how the other compiler implements those pointers, and the source code would need to contain an instruction to the compiler to use the other compiler's convention. If both compilers are GCC this should be possible. With compilers from different vendors the coordination could be more of a problem.
It seems to me that it's not a matter of pointers so much as the datatypes that the pointers point to. If a type can be represented in both languages, then both objects of that type and pointers to objects can be passed around and used in both languages, as long as at least one of the compilers has appropriate interfacing features. If a type in one language can't be represented in the other language, then objects of that type can't be passed to the other language, and pointers can only be handled as opaque handles to be kept and passed to callbacks.
If you mean purely high-level languages that don't even have pointers on the source code level, then yes, handling pointers in those languages could be problematic. Still, any language that can interface with C and deal with C pointers can also share C pointers with any other language that can interface with C.
Björn Persson
Richard W.M. Jones wrote:
We also suffer this problem in reverse: passing a pointer between two C libraries is not possible in another non-C language. eg. Passing a libvirt virDomainPtr from (Perl) Sys::Virt to Sys::Guestfs.
Well, if the language supports sufficiently large integers and runs in-process, you can abuse an integer as the handle.
"Sufficiently large" generally means sizeof(void*) bytes, i.e. sizeof(void*)<<3 bits, but you can get creative and get away with (sizeof(void*)<<3)-k-bit integers if your pointers are guaranteed to be aligned to 1<<k bits.
Kevin Kofler
On 15/04/13 09:48 AM, Miloslav Trmač wrote:
On Sat, Apr 13, 2013 at 7:51 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
which raises the question again: wouldn't it be better to build the whole distribution hardened? Experience shows that nearly anything is exploitable in the long run, and performance comes after security.
The logical conclusion from this is to move to a language with automatic memory management. The "top vulnerability" reports for programs written in C/C++ and most other languages are so different that starting a new project that processes untrusted data in C/C++ is becoming indefensible.
We seem to be stuck with C as the lowest common denominator that can be used from any runtime; long-term we _need_ to move away from that, or Linux will gain the reputation of the least-secure OS around.
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening.
Can I step in and ask: move *what* exactly?
This is the *Fedora* development list, remember. This thread was a discussion of the security of the Fedora package base as a whole. The Fedora project does not control the development of the code behind 99% of the Fedora package base. "The logical conclusion is to move to a different language" doesn't seem particularly logical at all in context - as a reply to Harald's proposal for build parameters for all Fedora packages - because you're advocating a completely different change, one it is not at all feasible for Fedora to effect in this context.
So you've just pivoted the entire thread, for which congratulations, but this could really have been a separate discussion.
On Wed, Apr 17, 2013 at 1:16 AM, Adam Williamson awilliam@redhat.com wrote:
On 15/04/13 09:48 AM, Miloslav Trmač wrote:
The logical conclusion from this is to move to a language with automatic memory management. The "top vulnerability" reports for programs written in C/C++ and most other languages are so different that starting a new project that processes untrusted data in C/C++ is becoming indefensible.
We seem to be stuck with C as the lowest common denominator that can be used from any runtime; long-term we _need_ to move away from that, or Linux will gain the reputation of the least-secure OS around.
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening.
Can I step in and ask: move *what* exactly?
This is the *Fedora* development list, remember. This thread was a discussion of the security of the Fedora package base as a whole. The Fedora project does not control the development of the code behind 99% of the Fedora package base.
"The ecosystem" :) That's the problem, isn't it - one upstream switching a language doesn't solve anything, and it is likely to add interoperability problems.
Fedora, as a place where contributors to many different upstreams intersect, seems like a fairly good place to discuss such ecosystem changes.
"The logical conclusion is to move to a different language" doesn't seem particularly logical at all in context - as a reply to Harald's proposal for build parameters for all Fedora packages - because you're advocating a completely different change, one it is not at all feasible for Fedora to effect in this context.
Sure, it's not something that Fedora can do quickly. All I want right now is to get the idea in people's minds, and to see if there is some kind of support for it, or an obvious direction that could be encouraged/recommended, especially for new projects.
So you've just pivoted the entire thread, for which congratulations, but this could really have been a separate discussion.
Yes, it was hijacking the thread a little, and I apologize for that. However, in a sense, it is strongly on-topic - we're spending time on PIE and related technologies that are, to simplify, primarily a workaround for using an unsuitable language. That's fine as far as it goes; still, I think it's important to get a wide understanding that the language itself is a problem, and that we shouldn't be locking the ecosystem even more strongly into C/C++ (e.g. by considering "safe" languages inferior because they don't support the same exploit mitigations that the C runtime does). Mirek
Miloslav Trmač wrote:
The logical conclusion from this is to move to a language with automatic memory management. The "top vulnerability" reports for programs written in C/C++ and most other languages are so different that starting a new project that processes untrusted data in C/C++ is becoming indefensible.
We seem to be stuck with C as the lowest common denominator that can be used from any runtime; long-term we _need_ to move away from that, or Linux will gain the reputation of the least-secure OS around.
Now, what to move to? I currently don't see any language/runtime I could recommend, which is in itself rather frightening.
Well, you don't see any such language/runtime because there isn't any. :-)
Moving from C/C++ to a slower language is neither helpful nor necessary. If you really want full bounds checking, it can be added to C/C++ rather than moving to a completely different language, and you'd still get the other benefits of C/C++, in particular, fast native code. It shouldn't be worse for performance than switching to another language: after all, the other languages have to do the bounds checking as well!
But the point of the discussion is to find out how much checking is actually needed for security. In fact, full bounds checking is probably not necessary.
I shall also note that vulnerable C++ code is mainly code which uses C-style data structures for whatever reason (often, interfacing with C-only libraries). If we all wrote pure Qt code, there would be little to no buffer overflow vulnerabilities. It's much harder to overflow a QString by accident than a char *. (The STL is similar there, but IMHO the STL is a horrible runtime library and should be replaced by QtCore or a subset of it. :-) The main issues being overuse of templates (e.g., why is std::string a template?!) and lack of implicit sharing (copy on write). IMHO, C++ is great as a language, but the STL is hurting its adoption. But still, you also don't as easily overflow a std::string by accident as a char *.)
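A minimal sketch of that contrast (the function names are invented):

// A fixed C buffer trusts its caller; a C++ string grows instead.
#include <cstring>
#include <string>

void c_style(const char *input)
{
    char buf[16];
    std::strcpy(buf, input);   // overflows buf once input exceeds 15 chars
}

void cxx_style(const std::string &input)
{
    std::string buf;
    buf += input;              // storage grows to fit; no overflow possible
}

int main()
{
    cxx_style(std::string(100, 'A'));            // fine
    // c_style(std::string(100, 'A').c_str());   // would smash the stack
    return 0;
}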
Kevin Kofler
On 04/23/2013 11:36 PM, Kevin Kofler wrote:
Moving from C/C++ to a slower language is neither helpful nor necessary. If you really want full bounds checking, it can be added to C/C++ rather than moving to a completely different language, and you'd still get the other benefits of C/C++, in particular, fast native code. It shouldn't be worse for performance than switching to another language: after all, the other languages have to do the bounds checking as well!
At least you can avoid pointless discussions with developers if the overhead is unavoidable. 8-/
I shall also note that vulnerable C++ code is mainly code which uses C-style data structures for whatever reason (often, interfacing with C-only libraries). If we all wrote pure Qt code, there would be little to no buffer overflow vulnerabilities. It's much harder to overflow a QString by accident than a char *. (The STL is similar there, but IMHO the STL is a horrible runtime library and should be replaced by QtCore or a subset of it. :-)
The standard container library has some rather strange stuff, like operator[] on array-like containers which doesn't do bounds checking. To get bounds checking, you have to use the at() member function instead.
Furthermore, iterators are assumed to be cheap to copy, so they are usually implemented as bare pointers. It is theoretically possible to make sure that the pointed-to container elements are live or that the iterator does not stray out of bounds, but the debug mode which does that is too slow for production use.
There is some fairly horrible stuff, like std::copy:
http://en.cppreference.com/w/cpp/algorithm/copy
You can pass a std::vector<T>::iterator (say, the result of begin()) as the output iterator, but it's your job to ensure that there's enough space. Just like strcpy, and we all know how well that worked in practice.
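Spelled out as a sketch (the overrun is left commented out so the program stays well-defined):

// std::copy trusts the caller about destination space, just like strcpy.
#include <algorithm>
#include <vector>

int main()
{
    int src[] = { 1, 2, 3, 4, 5 };
    std::vector<int> dest(2);               // room for two elements only

    // Compiles cleanly but overruns dest's storage:
    // std::copy(src, src + 5, dest.begin());   // undefined behavior

    std::copy(src, src + 2, dest.begin());  // fits; the caller did the check
    return 0;
}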
That being said, my recent experience *writing* C++03 code has been rather positive.
While we're dredging up old threads ;) .
On Fri, 10 May, 2013 at 12:29:16 GMT, Florian Weimer wrote:
There is some fairly horrible stuff, like std::copy:
http://en.cppreference.com/w/cpp/algorithm/copy
You can pass a std::vector<T>::iterator (say, the result of begin()) as the output iterator, but it's your job to ensure that there's enough space. Just like strcpy, and we all know how well that worked in practice.
Well, the STL has a solution for that, but the header is, unfortunately, underused IME.
#include <iterator>
std::copy(src.begin(), src.end(), std::back_inserter(dest));
That said, I do wish there were an "InsertIterator" concept or the like which std::copy would require (and probably move the existing std::copy to std::unsafe_copy if it's deemed required still).
--Ben
On 05/17/2013 07:17 AM, Ben Boeckel wrote:
While we're dredging up old threads ;) .
On Fri, 10 May, 2013 at 12:29:16 GMT, Florian Weimer wrote:
There is some fairly horrible stuff, like std::copy:
http://en.cppreference.com/w/cpp/algorithm/copy
You can pass a std::vector<T>::iterator (say, the result of begin()) as the output iterator, but it's your job to ensure that there's enough space. Just like strcpy, and we all know how well that worked in practice.
Well, the STL has a solution for that, but the header is, unfortunately, underused IME.
#include <iterator>
std::copy(src.begin(), src.end(), std::back_inserter(dest));
That said, I do wish there were an "InsertIterator" concept or the like which std::copy would require (and probably move the existing std::copy to std::unsafe_copy if it's deemed required still).
True, and even if std::back_inserter didn't exist, you could roll your own. I guess that's one of the strengths of the standard container library.
But I really dislike the concept of iterators, that is, lightweight pointer-like objects that can be copied cheaply and do not keep, by themselves, the pointed-to data structures alive. Some ABIs even treat classes with just one scalar member as scalars themselves, so that they can be passed in registers, and for a long time, GCC only performed scalar replacement for single-member classes, so the impact of this design decision has been quite pervasive.
There are safer abstractions than iterators. If you try to translate iterators to a memory-safe language, you are forced to combine iterators in pairs, like quarks (sometimes called "ranges"). That might be a better choice for C++ as well, especially if you discourage copying, so that the construction/destruction overhead does not come into play that much.
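A sketch of that pairing (checked_copy is an invented name, not a standard interface):

#include <cassert>
#include <vector>

// Carrying both bounds of the destination makes the "enough space"
// obligation checkable.  Requires random-access iterators so the
// distances are cheap to compute.
template <typename InIt, typename OutIt>
OutIt checked_copy(InIt first, InIt last, OutIt out_first, OutIt out_last)
{
    assert(last - first <= out_last - out_first);
    while (first != last)
        *out_first++ = *first++;
    return out_first;
}

int main()
{
    int a[] = { 1, 2, 3 };
    std::vector<int> dest(3);
    checked_copy(a, a + 3, dest.begin(), dest.end());
    return 0;
}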
On Sat, Apr 13, 2013 at 11:46 AM, Steve Grubb sgrubb@redhat.com wrote:
I have not run the script that checks a distribution on F19 yet, so maybe there are more?
That script reports all .o files (yes, those are sometimes packaged) as "exec no no", with a red "no" in the RELRO column. But RELRO doesn't make any sense for a .o, so perhaps that should be a green "N/A" instead.
Thanks for the script. -- Jerry James http://www.jamezone.org/
On Saturday, April 13, 2013 12:28:04 PM Jerry James wrote:
I have not run the script that checks a distribution on F19 yet, so maybe there are more?
That script reports all .o files (yes, those are sometimes packaged) as "exec no no", with a red "no" in the RELRO column. But RELRO doesn't make any sense for a .o, so perhaps that should be a green "N/A" instead.
Probably. But it has caught a few packages whose maintainers did not even know they were shipping .o files, and they removed them right away. That's a tough one. I can probably fix it to reclassify those not as executables, which would make the triage easier.
-Steve
On Sat, Apr 13, 2013 at 11:16 PM, Steve Grubb sgrubb@redhat.com wrote:
On Saturday, April 13, 2013 12:19:42 PM Rahul Sundaram wrote:
Is there a tracker bug? Proven packagers can help
I have a tracker bug for issues identified on the core set of packages that would be part of a common criteria certification:
I have not run the script that checks a distribution on F19 yet, so maybe there are more?
I have analyzed all F19 packages and have already published a list of packages violating the packaging guidelines.
See http://dl.dropbox.com/u/1522424/probable-violations-F19.csv
(I made some last minute changes which might be buggy. Feedback and corrections are welcome!)
Also note that all this analysis stuff has been *automated*. Additionally, my code works for all RHEL and Fedora versions (and even deb based distributions).
The analysis code doesn't install any packages on the system, is host OS agnostic and is quite fast (scales linearly).
See https://github.com/kholia/checksec (currently only the interactive tools are described in the README, bulk analysis tools are hopefully intuitive enough).
On Sun, Apr 14, 2013 at 12:26 AM, Dhiru Kholia dhiru.kholia@gmail.com wrote:
On Sat, Apr 13, 2013 at 11:16 PM, Steve Grubb sgrubb@redhat.com wrote:
On Saturday, April 13, 2013 12:19:42 PM Rahul Sundaram wrote:
Is there a tracker bug? Proven packagers can help
I have a tracker bug for issues identified on the core set of packages that would be part of a common criteria certification:
I have not run the script that checks a distribution on F19 yet, so maybe there are more?
I have analyzed all F19 packages and have already published a list of packages violating the packaging guidelines.
See http://dl.dropbox.com/u/1522424/probable-violations-F19.csv
(I made some last minute changes which might be buggy. Feedback and corrections are welcome!)
Also note that all this analysis stuff has been *automated*. Additionally, my code works for all RHEL and Fedora versions (and even deb based distributions).
The analysis code doesn't install any packages on the system, is host OS agnostic and is quite fast (scales linearly).
See https://github.com/kholia/checksec (currently only the interactive tools are described in the README, bulk analysis tools are hopefully intuitive enough).
My analysis code combines the original checksec (bash script), rpm-chksec (Steve's script) and Grant's Go port into one Python code base.
I am planning to extend it with more checks and ideas. Your tips are welcome!