On 18 March 2017 at 06:21, Nico Kadel-Garcia <nkadel@gmail.com> wrote:
[..]  
> So here is a kind of contradiction, because my past experience is that such
> binaries are used for so long (6+ years) that they cause silent issues with
> kernel<->user space conflicts, and sooner or later the initial intention turns
> into disaster because no one remembers who built such a binary, or how.

That is also a problem. It's also not really solved by dynamic
libraries. If the ABI for the libraries is changing that much, it's
likely necessary to recompile the base software *anyway*.

As long as binary X uses libc, and libc hides all kernel<->user ABI changes, dynamic linking solves ALL such issues.
Look at Solaris. It is "walking proof" that this approach solves those issues.
All that is necessary is strict control over what happens at the ABI layer, which in recent years has at last become a top priority for key Linux developers.
Many of those people have changed their minds in recent years and become "true believers in ABI stability", just as happened on Solaris and other mature Unices :)
 
> 2) some bootstrapping scenarios, like for example statically linked grub
> binaries
>
> In such cases binaries will be regenerated with every small change: any
> change in non-public ABI/API will be followed by immediate recompilation of
> such binaries, so the risk here is effectively null, and such a limited number
> of binaries should be accepted and carefully maintained.

And... we're to build them without glibc-static.... how?

uClibc is the answer :)
There is no reason why uClibc cannot be used for these few critical binaries :)
Better still .. many Linux distributions took this path years ago, so there is practical proof, which you can touch and smell, that glibc-static is not the only way to generate those binaries.
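
Just to illustrate the idea, here is a rough sketch; the exact compiler wrapper depends on how uClibc-ng (or musl) is packaged on a given system, so treat "musl-gcc" below as a placeholder only:

    # Tiny bootstrap-style helper built statically against an alternative libc,
    # so nothing in it depends on glibc-static or on glibc internals like NSS.
    printf '#include <stdio.h>\nint main(void){puts("static rescue tool");return 0;}\n' > tinyinit.c
    musl-gcc -static -Os -o tinyinit tinyinit.c    # or a uClibc-ng cross gcc
    file tinyinit      # should say "statically linked"
    ldd tinyinit       # should say "not a dynamic executable"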

> The scenario where such special binaries are crafted for the initrd is already
> nullified, because today even the smallest systems have enough memory to use
> regular shared libraries. Simply, no one today needs to fit such initrds on a
> 3.5" floppy disk with 1.44MB of available storage.

Boot media are one of those cases where binary stability is *most* critical.

Look at what other distros are doing. At the moment they pack ALL loadable kernel modules into the initrd.
As long as the HW has 256+MB RAM you can use almost all of this RAM for the initrd. At the end of the initrd stage everything is released.
In other words: still maintaining a special way to craft some binaries for the initrd is a pure waste of time in the context of Fedora, as Fedora never was, and within the next 3-4 years still will not be, a true embedded system platform. That will only happen when nail-sized computers have 256-512MB of RAM.
Even today, HW like the VoCore2 (http://vocore.it/) has only 128MB RAM.
I have 5 of these small cubes at home and they are perfect for a home systems engineer like me :D
3 weeks ago I started organizing on such HW a full install server for all the Linuxes and Solarises which I have at home, and I must say I'm having real fun playing with this HW :)
Look at https://kloczek.wordpress.com/2017/03/08/vocore2-serial-console-equipment/ to see how really small it is :)
I now have a fully working PXE (DNS, DHCP, TFTP, HTTP) server for my Linuxes, and yesterday I started putting all the Solaris AI (Automated Installer) server stuff on the same cube. As AI uses only standard protocols, it is possible to use Linux to remotely install Solaris.
When I finish I will try to write full step-by-step instructions on how to do this.
Such HW is not a speed monster, but it is perfect to put in a glass box labelled "in case of emergency break the glass".
Many years ago a RealAdmin(tm) had a rescue 3.5" disk in his pocket. Later it was replaced by CDs/DVDs, then by a USB stick, but now you can hold in your pocket a "Swiss army knife" with which you can rescue several computers in a DC at the same time, after plugging it into the network and booting each box over PXE in rescue mode :D

Going back to the static glibc subject .. for natural reasons, the bootstrap/initrd point will sooner or later drop out of the list of static libc dependencies.
IIRC Solaris in its own initrd (miniroot) uses only dynamic binaries copied from the regular system.
Sooner or later the same will be true of all full Linux distros.
Crafting those static binaries becomes more and more pointless with the growing amount of RAM, even in the smallest HW systems reachable by a distribution like Fedora.
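
You can check this yourself on any recent Fedora box (a rough sketch, assuming a dracut-generated initramfs; the path and grep pattern may need adjusting):

    # dracut's lsinitrd lists the initramfs content: ordinary dynamically linked
    # binaries plus the shared libraries they need.
    lsinitrd /boot/initramfs-$(uname -r).img | grep -E '(bin|lib64)/' | head -n 20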

> 3) some people are thinking that static linking makes sense from a performance
> or resource consumption perspective.
[..] 
Straw man argument, 5 yard penalty.

The relocation of binaries to different places in the same operating
system is not the problem. Moving it from one operating system or
container to another, *that* is the problem. And running exactly the
version you expect, even if it's inside a heavily modified container
or distinct operating system of some sort is a powerful goal. I don't
run into it as much as I used to, but I certainly used to have to
compile components statically in various environments for use
elsewhere.

I'm betting that sooner or later even here static libraries will be abandoned.
Why? Because with the growing number of CPU cores in such systems, the CPU caches are still the biggest bottleneck.
Simply, such a cache is not made out of rubber.
In the near future it is easy to imagine a virtualization platform running not tens of instances, as it usually does today, but thousands, and maximizing the sharing of memory pages between VMs/partitions/zones/containers will be more and more critical from the CPU cache perspective.
The only solution to this type of challenge will be sharing as many resources as possible between those mini-systems .. with full sharing at the DSO layer as well.
People working today on Docker and other containerization who pin those things to statically linked binaries are shooting themselves in the foot, making those platforms unable to share as much as possible.
No .. by DEFINITION regular distro binaries should be used here as well. If not, those container ships will start sinking after collisions with CPU cache icebergs.
In the future even VM-based containerization, as it may rely underneath on deduplication, may start sharing some physical memory pages mapped into different VMs, minimizing CPU cache misses.
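
A rough way to see that sharing in practice on any Linux box (the libc file name pattern is only an example and differs between glibc versions):

    # Count how many running processes map the very same on-disk libc; all of
    # them can share its read-only text pages, which is exactly what
    # per-container static binaries give up.
    grep -l 'libc[-.]' /proc/[0-9]*/maps 2>/dev/null | wc -l
    # Show the read-only, executable (r-xp) libc text mapping inside one process:
    grep 'r-xp .*libc' /proc/self/maps
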
More than ten years ago, on one of the first versions of Solaris 10, an experiment was done as a POC/test running a few thousand zones.
IIRC it was done on an UltraSPARC 10 workstation. Today's laptops are more powerful than that old HW.
You would be really surprised how many Solaris non-global zones it is possible to pack into a single T5 box with 256-512 CPUs visible in the global zone.
Cloning system volumes at the ZFS layer makes it very easy to share physical memory pages between all those zones, as long as they are not running branded zones (the solution for running old binaries, even Solaris 7 or 8, on top of the latest Solaris 11.3 kernel).
People without such experience really are doing more harm than good for Linux by using static binaries :( .. and it will be obvious within the next few years.
 
>> You seem to be pointing out that NSS is a stability problem. You're quite
>> right. Saying that "NSS is unstable, therefore glibc should be forced
>> to dynamic libraries only" does not follow. The underlying API for
>> nsswitch.conf and NSS does not change that quickly, it's the feature
>> churn for add-ons that are being tied into NSS. For high stability
>> software, *who cared*? You won't use the most recent NSS changes, and
>> if you do, they're quite likely to be available in the glibc static
>> library at the time you compile them. Statically.
>
>
> You are wrong that this is about the messiness/stability of the NSS interface.
> We are talking about internal glibc ABI/API: none of the system/distribution
> binaries should rely on such internal interfaces, which any project maintainer
> has the freedom to change without even noting this in the changelogs.

If they're dinking with it that much, it's likely to lead to changes
in the ABI as well and force recompilation. That level of change is
not normally permitted within a single Fedora release, and packages
for Fedora are rebuilt for new releases.

Again, straw man argument, and another 5 yard penalty.

As I said already: waves of necessary extensions are now coming to the glibc NSS subsystem.
Sorry to say this again, but every day the probability grows that we will reach some new NSS step that will cause the kind of issue which last happened 12+ years ago.
Now really is the best time to prepare to abandon this ship and step onto dry land, so as not to be affected by the coming tsunami.
You may be happy that you will have 5 yards more of this sunny beach .. because nothing bad has happened in the last 10 years, but .. the sea uncovering the beach is only an early symptom of the coming tsunami :)
Remember one of Murphy's laws: "if something can happen, it will happen" .. or the old engineering joke about the automated announcement prerecorded to be played in an emergency situation at an airport: "Everything is under control. Nothing can go wrong (hrr) .. can go wrong (hrr) .. can go wrong (hrr) .. can go wrong .." :)

> Using statically linked binaries creates RISK here if those binaries are
> not refreshed.

Now, *that* is real. However, see above, to wit:

* Packages for Fedora are rebuilt for new releases.

At the moment that risk is nullified by the quite regular full-scale mass rebuilds of Fedora as a distro.
Nevertheless, this part was not about binaries shipped with the distro but about binaries generated by distro consumers (mostly sysadmins).

[..]
> The risk of being exposed to internal NSS ABI changes can very easily be
> nullified by, in such rare cases, statically linking with a libc like uClibc,
> which has no NSS interface.

And if I walk on only my right foot, I can save wear and tear on my
left shoe. Doesn't seem worth the effort. uClibc is a cool idea, but
unlikely to work well for a broad variety of complex software which
has never been compiled or tested with it.

Using uClibc is not just an idea. It is a solution to some well-known glibc issues.
You may like or hate glibc, but the libcs from other Unices hit the NSS issue in the same way many years ago, and they too decided to stop providing a libc.a generated from the regular full-scale system libc.
Look at the OpenIndiana source tree and you will find that Sun, many years ago, separated these binaries out and started maintaining them as a kind of exception, because it simply was not possible to handle them using a libc.a generated from the regular source code of the regular OS libc.
Simply, such a libc is too complex to use in such cases.
Also, the risk of having to maintain such low-level exceptional code in the future because of some big OS change is almost null, as those changes very rarely touch the bootstrap.
In the case of Solaris, the last time it happened was when the OS started moving from UFS to ZFS: with BEs (Boot Environments) created on top of cloned rootfs it started touching the early boot stage procedure, and it was necessary to change a couple of things here.
If Linux decides to fully commit to a similar approach on top of btrfs, then, because some Solaris adaptations have already been done and committed to Grub2, the probability that it will be necessary to do something to prepare Linux for such a scenario is almost zero: it will probably be possible to share the Solaris/ZFS code, or the number of such adaptations will be AlmostNothing(tm).
Playing today with the latest u-boot on my pine64, I just found that even u-boot has merged ZFS support (which is jaw-dropping ..).
I don't think it will happen in the next 5 years, because btrfs in this context is at quite an embryonic stage.

> In the last decade bash has already generated enough CVEs to start thinking
> about moving away from bash as the /bin/sh provider.
> Sun/Oracle already did a careful ksh security review, which bash never had.
> Because ksh is very small and its main goal is to provide /bin/sh, the number
> of future changes will be very well known and limited to cleanups and
> bug fixes.

And we can get C++ programmers to go back to C, to avoid the
function overloading problem.

I'm afraid it's not gonna happen.

I think that you totally missed my point here.
I have no idea how it is possible to pin your C/C++ analogy onto the security risk context.
Please try to read the above one more time. I'm sure that you can understand it ..

> Changing /bin/sh to a real SH interpreter should IMO be a kind of long-term
> Fedora target. It will not be easy and maybe even painful, but minimizing
> /bin/sh dependencies and minimizing the security risk is IMO worth starting
> to think about, with some preparations to be open to such a change in the future.
> The other issue is that bash is not the fastest /bin/sh interpreter :)

It's a lot of work with a profound loss of shell functionality and
features, with a nebulous security benefit. Sorry, but I don't see it
happening anytime in the foreseeable future.

Not that much .. really. Trust me, I've walked this whole path in only a few months, doing more than 50-60% of the whole work while doing many things in parallel.
This is just your impression of how complicated it is, set against what I've personally done.
As a result of that work, most of the necessary fixes have already landed in the source trees of many OSS projects, so part of the work has already been done.

First, as a pre-step, you only need to take care of using #!/bin/sh instead of #!/bin/bash. Most (95%+) of the scripts are overusing bash in the script preamble.
I will try to open a thread about this soon.
Such work can be perfectly scaled across many Fedora developers, even those with low skills, giving them a "something real to fix" opportunity :)
"Build it and they will come .. " :)

A Fedora Debashification Project may also raise general Linux community awareness about the difference between Bash and POSIX SH as script interpreters, where the relation is like that between squares and rectangles. The long-term effects will be only positive ..

Even if the complexity level is now relatively big for Fedora, opening such a project as a low-priority thing would be very fruitful.
Starting the discussion about this ASAP does not mean "move Fedora /bin/sh to POSIX SH in the next few months".
No. The decision about the final push can be made at any time, as a consequence of concluding that "we can remove this stain because the presoaking has gone on long enough .. so now we can put everything in the washing machine" :)

[..]
> In other words all those goals which you mention are not strictly related to
> static linking, and it is possible to do a lot (or much more) to minimize
> such risks without even thinking about providing glibc-static. Static linking

Umm. No. You're mistaking "reduction of library dependencies" for
"discarding static library compilation".

You have been a bit tricked by my digression in this whole discussion into thinking that linker optimization has something to do with static linking in some direct way :)
No .. these are two completely parallel/independent subjects which share only one common part at their roots: lowering risks, lowering the number of issues caught by regression tests, and keeping the distro layer entropy at a healthy/lowest possible level. Nothing more and only this ..

[..]
*Cool*. And you've my sympathies, those kinds of subtle undetected
dependencies can be tricky to resolve.

Thx for the +1 about linker optimisations :)
However I think that we need more voices/votes to push the first pebble down this slope .. anyone else?

kloczek
-- 
Tomasz Kłoczko |  LinkedIn: http://lnkd.in/FXPWxH