Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Can we maybe reduce the default set of packages a bit? In particular the following ones I really don't think should be in our default install:
1. multipathd. On a workstation, uh?? I obviously have no multipath devices configured on my laptop, how would I even? Has anyone? This is a really nasty one: to this day it pulls in udev settle, which is really backwards, and slows down our boot considerably. No current daemon should require udev settle, any daemon that still does is just backwards because it assumes that hardware would guarantee to have shown up at some specific time at boot, though in today's world that's really not how this works: hardware can take any time it wants, and thus instead of "waiting for everything" you can reasonably just wait for the stuff you know you actually need, based on your configuration. systemd-udev-settle.service however is a compat kludge that is supposed to provide "wait for everything", though this is racy and flaky. To say this clearly: anything that still relied on systemd-udev-settle.service 5y ago was bad, but still pulling that in today in 2019, and doing that in a default fedora install is just bad bad bad. This alone costs half the boot time on my system because it just waits for stuff for nothing, and for what? And beyond that, this daemon is really ugly too: it logs at high log levels during boot that it found no configuration and hence nothing to do. Yes, obviously, but that's a reason to shut up and proceed quickly, not to complain loudly about that so that it even appears on the screen (I mean srsly, this is the first thing I saw when I booted from the fedora live media: a log message printed all over the screen that multipathd has no working configuration...).
2. dmraid. Not quite as bad as multipathd as it is more likely to exist on a workstation (still quite exotic though), but also pulls in udev settle and hence should not be in our default boot. Much like multipathd this should be fixed to not require udev settle anymore, and in the absence of that at least not end up in the default fedora boot process, except for those people who actually have dmraid.
3. atd? Do we still need that? Do we have postinst scripts that need this? If so, wouldn't systemd-run be a better approach for those? Isn't it time to make this an RPM people install if they want it?
4. Similar crond. On my fresh install it's only used by "zfs-fuse", which I really wonder why it even is in the default install? And "mdadm" wants this too. (which would be great if it would just use timer units)
5. libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idle? While I am sure it's useful on workstations, why run it all the time, given that only very few users probably actually need it? And if they do, starting it on demand would be much more appropriate. On my freshly installed system it is running all the time even though there are no VMs or anything around.
Ideally, the top 4 wouldn't be installed at all anymore (in case of the first two at least on the systems which do not need them). But if that's not in the cards, it would be great to at least not enable these services anymore in the default boot so that they are only a "systemctl enable" away for people who need them?
I wonder if the first one is rooted in a misconception about systemd's unit condition concept: conditions are extremely lightweight: they just bypass service start-up, that's all. They have no effect on whether dependencies are pulled in beforehand or not, and they are only tested the instant the service is ready to be fork()ed off. This means multipathd.service (which has ConditionPathExists=/etc/multipath.conf) pulls in systemd-udev-settle.service regardless of whether the condition holds or fails...
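To make the semantics concrete, here is a minimal sketch (hypothetical unit and paths, not the shipped multipathd.service) of a unit whose Wants=/After= dependencies get pulled into the boot transaction as soon as the unit is enqueued, while the condition is only evaluated right before the service would be forked off:

  # simplified illustration only, not the real multipathd.service
  [Unit]
  Description=Example daemon that is skipped when its config file is absent
  # evaluated only at the moment the service would actually be started:
  ConditionPathExists=/etc/example.conf
  # pulled into the transaction regardless of the condition above,
  # so udev-settle still runs and delays boot even if the condition fails:
  Wants=systemd-udev-settle.service
  After=systemd-udev-settle.service

  [Service]
  ExecStart=/usr/sbin/exampled

A failing condition saves the fork(), but not the dependencies.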
I guess I should file bz issues about all of the above, but I am not sure against which packages? anaconda? comps (does that still exist)? the individual packages?
It's also my hope that maybe some champion volunteers to track down issues like this and fix them? E.g. keeping udev settle out of the default install alone would be a worthy goal for every release, given that it doubles boot time on typical systems... Anyone up for that?
Lennart
On Tue, 9 Apr 2019 at 12:07, Lennart Poettering mzerqung@0pointer.de wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Can we maybe reduce the default set of packages a bit? In particular the following ones I really don't think should be in our default install:
This is not the first time this has come up and I expect it won't be the last time.
I think the main reason they stick around is that the people who want them gone just show up right after a release, drop a bunch of requests, and then go off to their own busy work. Then they come back a release later, don't see any change and either drop another email detailing things to be dropped OR discouraged that no-one ever listens. The things that do get changed and pulled out (or kept in) do so because people come in and work on scrubbing out the reasons and making sure the replacements are socialized in.
One of the things is that I am not sure any of these items
1. multipathd. On a workstation, uh?? I obviously have no multipath
2. dmraid. Not quite as bad as multipathd as it is more likely to
I think these two are here because of the blivet you mentioned earlier. Advanced partitioning requires them to be there... and there do seem to be people who actually do expect both of those to work on their workstations when it was looked at to be removed in the past.
I do not know whether Silverblue has them, on the other hand.
- atd? Do we still need that? Do we have postinst scripts that need
4. Similar crond. On my fresh install it's only used by "zfs-fuse",
This is more about socializing and teaching the systemd replacements... because most of the systemd advocates and heavy users I have asked aren't sure about how systemd replaces them and go back to cron/atd. I actually think that the replacements seem much better thought out than cruft-ware but.. but I also have little confidence I could get it to work consistently while I can find 10k tutorials on cron.
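For reference, a minimal sketch of the timer-unit equivalent of a daily cron job; the unit names and script path here are made up for illustration:

  # /etc/systemd/system/nightly-cleanup.service
  [Unit]
  Description=Nightly cleanup job

  [Service]
  Type=oneshot
  ExecStart=/usr/local/bin/nightly-cleanup.sh

  # /etc/systemd/system/nightly-cleanup.timer
  [Unit]
  Description=Run the nightly cleanup once a day

  [Timer]
  OnCalendar=daily
  # run missed activations after downtime, roughly what anacron adds to cron
  Persistent=true

  [Install]
  WantedBy=timers.target

It gets switched on with "systemctl enable --now nightly-cleanup.timer", and "systemctl list-timers" shows when it will next fire.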
- libvirtd. Why is this running? Can't we make this socket
I don't know on this. I remember something about containers and flatpaks but .. I don't know.
I wonder if the first one is rooted in a misconception about systemd's unit condition concept: conditions are extremely lightweight: they just
I think it is actually more about what comes up more in the Arch and serverfault pages on how to set up timed jobs. It has to do with tools to make it 'one-liners' and 'convert your cruft cron' or 'this will read your cron and make it cron-d'
As you say below it is about finding champions but those champions have got to feel comfortable that they can answer things. Those people would then be the ones to help shepherd this through.
I guess I should file bz issues about all of the above, but I am not sure against which packages? anaconda? comps (does that still exist)? the individual packages?
It may actually require a larger change that goes through the release process. It would work better to work with the Workstation and/or Silverblue team to get them to champion it themselves as it does meet what they have said they want..
It's also my hope that maybe some champion volunteers to track down issues like this and fix them? E.g. keeping udev settle out of the default install alone would be a worthy goal for every release, given that it doubles boot time on typical systems... Anyone up for that?
Lennart
On Tue, 2019-04-09 at 12:54 -0400, Stephen John Smoogen wrote:
On Tue, 9 Apr 2019 at 12:07, Lennart Poettering mzerqung@0pointer.de wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Can we maybe reduce the default set of packages a bit? In particular the following ones I really don't think should be in our default install:
This is not the first time this has come up and I expect it won't be the last time.
I think the main reason they stick around is that the people who want them gone just show up right after a release, drop a bunch of requests, and then go off to their own busy work. Then they come back a release later, don't see any change and either drop another email detailing things to be dropped OR discouraged that no-one ever listens. The things that do get changed and pulled out (or kept in) do so because people come in and work on scrubbing out the reasons and making sure the replacements are socialized in.
One of the things is that I am not sure any of these items
multipathd. On a workstation, uh?? I obviously have no multipath
dmraid. Not quite as bad as multipathd as it is more likely to
I think these two are here because of the blivet you mentioned earlier. Advanced partitioning requires them to be there... and there do seem to be people who actually do expect both of those to work on their workstations when it was looked at to be removed in the past.
I do not know whether Silverblue has them, on the other hand.
Basically, anything that's part of the install environment is going to be present after a live install. That accounts for both of the above: the installer supports multipath and dmraid storage devices, so the relevant packages are part of the install environment, so they're part of the lives, so they're installed by a live install.
This is kind of a limitation of the live deployment mechanism. In theory a post-install stage could be added to strip things that were only needed at install time, or that we can tell aren't actually needed by the installed system, but this has never been done, though I recall it being discussed at times.
- atd? Do we still need that? Do we have postinst scripts that need
- Similar crond. On my fresh install it's only used by "zfs-fuse",
This is more about socializing and teaching the systemd replacements... because most of the systemd advocates and heavy users I have asked aren't sure about how systemd replaces them and go back to cron/atd. I actually think that the replacements seem much better thought out than cruft-ware but.. but I also have little confidence I could get it to work consistently while I can find 10k tutorials on cron.
To be specific here, 'at' is part of the @standard group. 'chrony' is pulled in several ways. It's part of @standard *if gnome-control-center is being installed*, so effectively it'll be installed with Workstation but not other editions/spins. That sort of implies that there's some functionality in GNOME that depends on chrony; I am not sure what that is, off hand. It's also part of 'anaconda-tools' (so it will be in all live images and all live installs), part of 'server-product' (so it is in Server installs), and part of 'system-tools' (so it'll be in anything that includes that). It's also part of 'workstation-product', so it's really super *definitely* included in Workstation. :P
I think it is reasonable to suggest that there is a general expectation that, on an out of the box *nix system, you can put stuff in crontab and it will work. I like systemd timers, but the system doesn't attempt cron compatibility so far as I'm aware; if we don't install a cron daemon, this won't be the case. (I'm actually slightly interested in whether you wind up with chrony if you do a non-live install of a non-GNOME desktop; it looks to me like you don't, which is I guess notable).
- libvirtd. Why is this running? Can't we make this socket
I don't know on this. I remember something about containers and flatpaks but .. I don't know.
Boxes is a key component of Workstation, and it relies on libvirt. It's in the 'Core Applications' definition of the Workstation tech spec:
https://fedoraproject.org/wiki/Workstation/Technical_Specification#Core_Appl...
On Tue, 2019-04-09 at 10:11 -0700, Adam Williamson wrote:
To be specific here, 'at' is part of the @standard group. 'chrony' is pulled in several ways. It's part of @standard *if gnome-control-center is being installed*, so effectively it'll be installed with Workstation but not other editions/spins. That sort of implies that there's some functionality in GNOME that depends on chrony; I am not sure what that is, off hand. It's also part of 'anaconda-tools' (so it will be in all live images and all live installs), part of 'server-product' (so it is in Server installs), and part of 'system-tools' (so it'll be in anything that includes that). It's also part of 'workstation-product', so it's really super *definitely* included in Workstation. :P
nirik points out that I have been sunk by homonyms here: chrony is an NTP daemon, not a cron daemon. :P
Our 'default' cron daemon is cronie, but that hasn't appeared in comps at all since it was specifically removed by a PR:
https://pagure.io/fedora-comps/pull-request/179
However, I think I know why it's still showing up: 'crontabs' is in @workstation-product and @standard in comps, and crontabs Recommends cronie.
On Di, 09.04.19 10:11, Adam Williamson (adamwill@fedoraproject.org) wrote:
Basically, anything that's part of the install environment is going to be present after a live install. That accounts for both of the above: the installer supports multipath and dmraid storage devices, so the relevant packages are part of the install environment, so they're part of the lives, so they're installed by a live install.
Hmm, but the installed OS is not 100% the same as the livesys, or is it? If not, it should be possible to add a "systemctl disable dmraid.service --root=/path/to/os" somewhere, no?
To be specific here, 'at' is part of the @standard group. 'chrony' is
Yupp, it's very confusing that we have chrony and cronie in our OS and both are installed by default... ;-)
I don't know on this. I remember something about containers and flatpaks but .. I don't know.
Boxes is a key component of Workstation, and it relies on libvirt. It's in the 'Core Applications' definition of the Workstation tech spec:
Hmm, but boxes supposedly uses the user session version of libvirt, no? it doesn't actually use the system service?
I mean, I am even fine if that gets installed by default and is listening on a local IPC socket, but why does it have to run all the time? Activation by socket and exit-on-idle should be fine too.
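As a generic sketch of what socket activation looks like (hypothetical unit names; not how libvirtd is actually packaged today, and the daemon itself would still need to implement exit-on-idle):

  # example.socket
  [Socket]
  ListenStream=/run/example/example.sock

  [Install]
  WantedBy=sockets.target

  # example.service -- only started when the first client connects;
  # if the daemon exits after an idle period, systemd starts it again
  # on the next connection to the socket
  [Unit]
  Description=Example socket-activated daemon

  [Service]
  ExecStart=/usr/sbin/exampled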
Very similar is actually "fwupd", why does that need to run all the time? Seems like something that should be bus activatable, and exit-on-idle, but why run it all the time?
Lennart
-- Lennart Poettering, Berlin
On Tue, 9 Apr 2019 at 19:21, Lennart Poettering mzerqung@0pointer.de wrote:
Very similar is actually "fwupd", why does that need to run all the time? Seems like something that should be bus activatable, and exit-on-idle, but why run it all the time?
It does exit on idle, if you don't have hardware that is tricky to get version numbers from. If we're polling for the list of firmware updates once per day we don't want to cause display flicker or that kind of thing. ThunderBolt and MST are the main offenders here. We're working on it, but it's not super simple.
Richard.
On Di, 09.04.19 19:24, Richard Hughes (hughsient@gmail.com) wrote:
On Tue, 9 Apr 2019 at 19:21, Lennart Poettering mzerqung@0pointer.de wrote:
Very similar is actually "fwupd", why does that need to run all the time? Seems like something that should be bus activatable, and exit-on-idle, but why run it all the time?
It does exit on idle, if you don't have hardware that is tricky to get version numbers from. If we're polling for the list of firmware updates once per day we don't want to cause display flicker or that kind of thing. ThunderBolt and MST are the main offenders here. We're working on it, but it's not super simple.
Hmm? Can you elaborate? Why does fwupd's runtime have something to do with display flickers? Not grokking the connection?
Lennart
-- Lennart Poettering, Berlin
On Tue, 9 Apr 2019 at 19:27, Lennart Poettering mzerqung@0pointer.de wrote:
Hmm? Can you elaborate? Why does fwupd's runtime have something to do with display flickers? Not grokking the connection?
More information in https://github.com/hughsie/fwupd/commit/75b965d01d80d70ae51816acd4d4cafdaf79... -- in the case of MST it's where a fake monitor gets created on the hardware so the chip can wake up enough to tell us the current firmware version, but the fake monitor causes a flicker for a couple of reasons. I guess we could always just cache the last known version in some database somewhere and just assume it's not changed, but if we're not "alive" to see the device removal/insertion event we don't know the true state of the hardware. If you're worried about the startup speed or memory usage, we've been pretty keenly fixing both, but if you have any specific concerns let me know.
Richard.
On Di, 09.04.19 20:12, Richard Hughes (hughsient@gmail.com) wrote:
On Tue, 9 Apr 2019 at 19:27, Lennart Poettering mzerqung@0pointer.de wrote:
Hmm? Can you elaborate? Why does fwupd's runtime have something to do with display flickers? Not grokking the connection?
More information in https://github.com/hughsie/fwupd/commit/75b965d01d80d70ae51816acd4d4cafdaf79... -- in the case of MST it's where a fake monitor gets created on the hardware so the chip can wake up enough to tell us the current firmware version, but the fake monitor causes a flicker for a couple of reasons. I guess we could always just cache the last known version in some database somewhere and just assume it's not changed, but if we're not "alive" to see the device removal/insertion event we don't know the true state of the hardware. If you're worried about the startup speed or memory usage, we've been pretty keenly fixing both, but if you have any specific concerns let me know.
Hmm, but why would you scan for firmware versions during hotplug? why not just when the user triggers a query if there's something to update?
I presume fwupd subscribes to udev events for this; what precisely is it subscribing to?
Lennart
-- Lennart Poettering, Berlin
On Tue, 2019-04-09 at 20:20 +0200, Lennart Poettering wrote:
On Di, 09.04.19 10:11, Adam Williamson (adamwill@fedoraproject.org) wrote:
Basically, anything that's part of the install environment is going to be present after a live install. That accounts for both of the above: the installer supports multipath and dmraid storage devices, so the relevant packages are part of the install environment, so they're part of the lives, so they're installed by a live install.
Hmm, but the installed OS is not 100% the same as the livesys, or is it? If not, it should be possible to add a "systemctl disable dmraid.service --root=/path/to/os" somewhere, no?
It's possible, sure. Every bit of post-install tinkering added to anaconda is another thing anaconda has to maintain and that could go wrong or become stale, is the argument against it, I think. But anyone can send a pull request. :P
On 4/9/19 2:20 PM, Lennart Poettering wrote:
On Di, 09.04.19 10:11, Adam Williamson (adamwill@fedoraproject.org) wrote:
Basically, anything that's part of the install environment is going to be present after a live install. That accounts for both of the above: the installer supports multipath and dmraid storage devices, so the relevant packages are part of the install environment, so they're part of the lives, so they're installed by a live install.
Hmm, but the installed OS is not 100% the same as the livesys, or is it? If not, it should be possible to add a "systemctl disable dmraid.service --root=/path/to/os" somewhere, no?
To be specific here, 'at' is part of the @standard group. 'chrony' is
Yupp, it's very confusing that we have chrony and cronie in our OS and both are installed by default... ;-)
I don't know on this. I remember something about containers and flatpaks but .. I don't know.
Boxes is a key component of Workstation, and it relies on libvirt. It's in the 'Core Applications' definition of the Workstation tech spec:
Hmm, but boxes supposedly uses the user session version of libvirt, no? it doesn't actually use the system service?
You're right it does not explicitly talk to the system libvirtd instance. But boxes implicitly depends on system libvirtd to autostart the 'default' virtual network, which is the preferred networking method. boxes VMs then essentially use a small setuid helper shipped with qemu to use the default virbr0 for unprivileged VMs.
Thanks, Cole
On 4/9/19 1:00 PM, Cole Robinson wrote:
On 4/9/19 2:20 PM, Lennart Poettering wrote:
On Di, 09.04.19 10:11, Adam Williamson (adamwill@fedoraproject.org) wrote:
Basically, anything that's part of the install environment is going to be present after a live install. That accounts for both of the above: the installer supports multipath and dmraid storage devices, so the relevant packages are part of the install environment, so they're part of the lives, so they're installed by a live install.
Hmm, but the installed OS is not 100% the same as the livesys, or is it? If not, it should be possible to add a "systemctl disable dmraid.service --root=/path/to/os" somewhere, no?
To be specific here, 'at' is part of the @standard group. 'chrony' is
Yupp, it's very confusing that we have chrony and cronie in our OS and both are installed by default... ;-)
I don't know on this. I remember something about containers and flatpaks but .. I don't know.
Boxes is a key component of Workstation, and it relies on libvirt. It's in the 'Core Applications' definition of the Workstation tech spec:
Hmm, but boxes supposedly uses the user session version of libvirt, no? it doesn't actually use the system service?
You're right it does not explicitly talk to the system libvirtd instance. But boxes implicitly depends on system libvirtd to autostart the 'default' virtual network, which is the preferred networking method. boxes VMs then essentially use a small setuid helper shipped with qemu to use the default virbr0 for unprivileged VMs.
Thanks, Cole
I've long thought that the virtual network should be its own service: https://bugzilla.redhat.com/show_bug.cgi?id=1597326#c5
but of course I don't have the time to do the work so that doesn't count for much. But this possibly points to another reason to do so.
On Tue, Apr 09, 2019 at 09:14:18PM -0600, Orion Poplawski wrote:
On 4/9/19 1:00 PM, Cole Robinson wrote:
On 4/9/19 2:20 PM, Lennart Poettering wrote:
On Di, 09.04.19 10:11, Adam Williamson (adamwill@fedoraproject.org) wrote:
Basically, anything that's part of the install environment is going to be present after a live install. That accounts for both of the above: the installer supports multipath and dmraid storage devices, so the relevant packages are part of the install environment, so they're part of the lives, so they're installed by a live install.
Hmm, but the installed OS is not 100% the same as the livesys, or is it? If not, it should be possible to add a "systemctl disable dmraid.service --root=/path/to/os" somewhere, no?
To be specific here, 'at' is part of the @standard group. 'chrony' is
Yupp, it's very confusing that we have chrony and cronie in our OS and both are installed by default... ;-)
I don't know on this. I remember something about containers and flatpaks but .. I don't know.
Boxes is a key component of Workstation, and it relies on libvirt. It's in the 'Core Applications' definition of the Workstation tech spec:
Hmm, but boxes supposedly uses the user session version of libvirt, no? it doesn't actually use the system service?
You're right it does not explicitly talk to the system libvirtd instance. But boxes implicitly depends on system libvirtd to autostart the 'default' virtual network, which is the preferred networking method. boxes VMs then essentially use a small setuid helper shipped with qemu to use the default virbr0 for unprivileged VMs.
Thanks, Cole
I've long thought that the virtual network should be its own service: https://bugzilla.redhat.com/show_bug.cgi?id=1597326#c5
but of course I don't have the time to do the work so that doesn't count for much. But this possibly points to another reason to do so.
I'm working on an upstream libvirt re-architecture that will make it into its own daemon. In fact libvirtd will be split into many smaller daemons, each with specific responsibilities.
This won't let us use systemd activation though. Libvirtd can't use activation right now because it needs to be able to auto-start VMs and networks at system boot up. This functionality pre-dates systemd so is handled by libvirtd itself. Eventually it would be ideal if libvirtd can dynamically create systemd units for its resources which need start-on-boot functionality, but that's some significant work to do still.
Regards, Daniel
I don't think we particularly need autostarting vms on the workstation. It would be very nice to get libvirtd activated. I know we've asked for this before...
Le mardi 16 avril 2019 à 14:57 +0000, Matthias Clasen a écrit :
I don't think we particularly need autostarting vms on the workstation. It would be very nice to get libvirtd activated. I know we've asked for this before...
I have autostarted VMs on my other_os workstation and it is very convenient. You set the hypervisor to save VM state on shutdown, and auto-restart all the VMs that were running on the next boot, and you don't need to bother about the hibernation vs sleep vs shutdown nonsense anymore. It just works.
On Tue, Apr 9, 2019 at 8:21 PM Lennart Poettering mzerqung@0pointer.de wrote:
Hmm, but the installed OS is not 100% the same as the livesys, or is it? If not, it should be possible to add a "systemctl disable dmraid.service --root=/path/to/os" somewhere, no?
AFAIK, they are 100% same. There's a hack, check your /etc/rc.d/init.d/livesys /etc/rc.d/init.d/livesys-late They are executed every time during boot, but immediately quit if they detect they're not running on a Live image (using kernel command line). You can see them also here: https://pagure.io/fedora-kickstarts/blob/f30/f/fedora-live-base.ks#_73 https://pagure.io/fedora-kickstarts/blob/f30/f/fedora-live-base.ks#_232 They are ugly, I think, and many improvements could be made there. But some Live image adjustments are possible through them. But those are just runtime changes for Live environment, they don't affect the installed system. If anaconda had a post-install phase where it would make appropriate changes to the installed system (and also ideally removed itself and those livesys scripts), that would be great, yes.
On Mi, 10.04.19 12:49, Kamil Paral (kparal@redhat.com) wrote:
Hmm, but the installed OS is not 100% the same as the livesys, or is it? If not, it should be possible to add a "systemctl disable dmraid.service --root=/path/to/os" somewhere, no?
AFAIK, they are 100% same. There's a hack, check your /etc/rc.d/init.d/livesys /etc/rc.d/init.d/livesys-late They are executed every time during boot, but immediately quit if they detect they're not running on a Live image (using kernel command line). You can see them also here: https://pagure.io/fedora-kickstarts/blob/f30/f/fedora-live-base.ks#_73 https://pagure.io/fedora-kickstarts/blob/f30/f/fedora-live-base.ks#_232 They are ugly, I think, and many improvements could be made there. But some Live image adjustments are possible through them. But those are just runtime changes for Live environment, they don't affect the installed system. If anaconda had a post-install phase where it would make appropriate changes to the installed system (and also ideally removed itself and those livesys scripts), that would be great, yes.
Those scripts, can't we make them part of some RPM btw? I filed a bug about that yesterday:
https://bugzilla.redhat.com/show_bug.cgi?id=1698119
These scripts are a bit annoying since they are the only SysV scripts left really (not even LSB!), have no purposes on an installed system and live outside of any RPM ownership and validation.
Lennart
-- Lennart Poettering, Berlin
On Wed, 2019-04-10 at 12:49 +0200, Kamil Paral wrote:
On Tue, Apr 9, 2019 at 8:21 PM Lennart Poettering mzerqung@0pointer.de wrote:
Hmm, but the installed OS is not 100% the same as the livesys, or is it? If not, it should be possible to add a "systemctl disable dmraid.service --root=/path/to/os" somewhere, no?
AFAIK, they are 100% same. There's a hack, check your /etc/rc.d/init.d/livesys /etc/rc.d/init.d/livesys-late They are executed every time during boot, but immediately quit if they detect they're not running on a Live image (using kernel command line). You can see them also here: https://pagure.io/fedora-kickstarts/blob/f30/f/fedora-live-base.ks#_73 https://pagure.io/fedora-kickstarts/blob/f30/f/fedora-live-base.ks#_232
That's only part of the story. All the other kickstarts can add bits to the scripts, too (by catting to them). The actual script on any given live image is a combination of the bits from live-base and whatever the other kickstarts involved in building that image do to it.
A long time ago I rewrote these as systemd units; the change was rejected...
On Tue, Apr 9, 2019 at 1:11 PM Adam Williamson adamwill@fedoraproject.org wrote:
On Tue, 2019-04-09 at 12:54 -0400, Stephen John Smoogen wrote:
On Tue, 9 Apr 2019 at 12:07, Lennart Poettering mzerqung@0pointer.de wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Can we maybe reduce the default set of packages a bit? In particular the following ones I really don't think should be in our default install:
This is not the first time this has come up and I expect it won't be the last time.
I think the main reason they stick around is that the people who want them gone just show up right after a release, drop a bunch of requests, and then go off to their own busy work. Then they come back a release later, don't see any change and either drop another email detailing things to be dropped OR discouraged that no-one ever listens. The things that do get changed and pulled out (or kept in) do so because people come in and work on scrubbing out the reasons and making sure the replacements are socialized in.
One of the things is that I am not sure any of these items
multipathd. On a workstation, uh?? I obviously have no multipath
dmraid. Not quite as bad as multipathd as it is more likely to
I think these two are here because of the blivet you mentioned earlier. Advanced partitioning requires them to be there... and there do seem to be people who actually do expect both of those to work on their workstations when it was looked at to be removed in the past.
I do not know whether Silverblue has them, on the other hand.
Basically, anything that's part of the install environment is going to be present after a live install. That accounts for both of the above: the installer supports multipath and dmraid storage devices, so the relevant packages are part of the install environment, so they're part of the lives, so they're installed by a live install.
This is kind of a limitation of the live deployment mechanism. In theory a post-install stage could be added to strip things that were only needed at install time, or that we can tell aren't actually needed by the installed system, but this has never been done, though I recall it being discussed at times.
I'd personally like to see some kind of post-install mechanism that could remove unneeded things or apply updates before rebooting into the new environment. That's something that Ubiquity, DrakX, Calamares, and YaST all do, and it's quite nice to have...
- atd? Do we still need that? Do we have postinst scripts that need
- Similar crond. On my fresh install it's only used by "zfs-fuse",
This is more about socializing and teaching the systemd replacements... because most of the systemd advocates and heavy users I have asked aren't sure about how systemd replaces them and go back to cron/atd. I actually think that the replacements seem much better thought out than cruft-ware but.. but I also have little confidence I could get it to work consistently while I can find 10k tutorials on cron.
To be specific here, 'at' is part of the @standard group. 'chrony' is pulled in several ways. It's part of @standard *if gnome-control-center is being installed*, so effectively it'll be installed with Workstation but not other editions/spins. That sort of implies that there's some functionality in GNOME that depends on chrony; I am not sure what that is, off hand. It's also part of 'anaconda-tools' (so it will be in all live images and all live installs), part of 'server-product' (so it is in Server installs), and part of 'system-tools' (so it'll be in anything that includes that). It's also part of 'workstation-product', so it's really super *definitely* included in Workstation. :P
I think it is reasonable to suggest that there is a general expectation that, on an out of the box *nix system, you can put stuff in crontab and it will work. I like systemd timers, but the system doesn't attempt cron compatibility so far as I'm aware; if we don't install a cron daemon, this won't be the case. (I'm actually slightly interested in whether you wind up with chrony if you do a non-live install of a non-GNOME desktop; it looks to me like you don't, which is I guess notable).
'chrony' isn't a cron job thing. That's 'cron' and 'anacron'. 'chrony' is a time server thing, and we should keep that. :)
On Tue, 2019-04-09 at 14:20 -0400, Neal Gompa wrote:
On Tue, Apr 9, 2019 at 1:11 PM Adam Williamson adamwill@fedoraproject.org wrote:
On Tue, 2019-04-09 at 12:54 -0400, Stephen John Smoogen wrote:
On Tue, 9 Apr 2019 at 12:07, Lennart Poettering mzerqung@0pointer.de wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Can we maybe reduce the default set of packages a bit? In particular the following ones I really don't think should be in our default install:
This is not the first time this has come up and I expect it won't be the last time.
I think the main reason they stick around is that the people who want them gone just show up right after a release, drop a bunch of requests, and then go off to their own busy work. Then they come back a release later, don't see any change and either drop another email detailing things to be dropped OR discouraged that no-one ever listens. The things that do get changed and pulled out (or kept in) do so because people come in and work on scrubbing out the reasons and making sure the replacements are socialized in.
One of the things is that I am not sure any of these items
multipathd. On a workstation, uh?? I obviously have no multipath
dmraid. Not quite as bad as multipathd as it is more likely to
I think these two are here because of the blivet you mentioned earlier. Advanced partitioning requires them to be there... and there do seem to be people who actually do expect both of those to work on their workstations when it was looked at to be removed in the past.
I do not know whether Silverblue has them, on the other hand.
Basically, anything that's part of the install environment is going to be present after a live install. That accounts for both of the above: the installer supports multipath and dmraid storage devices, so the relevant packages are part of the install environment, so they're part of the lives, so they're installed by a live install.
This is kind of a limitation of the live deployment mechanism. In theory a post-install stage could be added to strip things that were only needed at install time, or that we can tell aren't actually needed by the installed system, but this has never been done, though I recall it being discussed at times.
I'd personally like to see some kind of post-install mechanism that could remove unneeded things
This is on the technical level doable as long as:
- the RPM & DNF tooling on the image works
- you are only removing stuff
- the "recipe" of what to remove is maintained somewhere and kept current
or apply updates before rebooting into the new environment.
This on the other hand is a bit harder, for a couple of reasons:
- the current live installation does not need networking and the network is (IIRC) generally not set up, so you would need to make sure the network is available at the time you attempt to do this
- unlike the live image, which is always the same and passes quite some testing (both automated and manual) before it is declared fit for GA, update repos can change more or less at random, creating a myriad of combinations with the live image package set, some of them potentially broken (file conflicts, broken deps, failing scriptlets)
- any errors due to updates might be harder to debug in the live environment than on an installed system
- note that this is not the same as a netinst, as you would always be taking a frozen package set and then applying a bunch of ever-changing updates on top of it, vs. installing a full system from the latest packages on a netinst
- while making the system up to date (and potentially more secure), this could make the live installation run quite a bit longer, taking ever longer the older the live image is (the current live image just rsyncs the base filesystem content to the target, with no need to run all the scriptlets and rpm machinery)
That's something that Ubiquity, DrakX, Calamares, and YaST all do, and it's quite nice to have...
It's all possible in principle, but for both remove & update:
- it's extra code, not used on netinst, that needs to be written, maintained & QAed
- will this be the same for all lives, or different per spin vs. Workstation?
- do the SIGs taking care of the Workstation live and the other live spins actually want this?
- atd? Do we still need that? Do we have postinst scripts that need
- Similar crond. On my fresh install it's only used by "zfs-fuse",
This is more about socializing and teaching the systemd replacements... because most of the systemd advocates and heavy users I have asked aren't sure about how systemd replaces them and go back to cron/atd. I actually think that the replacements seem much better thought out than cruft-ware but.. but I also have little confidence I could get it to work consistently while I can find 10k tutorials on cron.
To be specific here, 'at' is part of the @standard group. 'chrony' is pulled in several ways. It's part of @standard *if gnome-control-center is being installed*, so effectively it'll be installed with Workstation but not other editions/spins. That sort of implies that there's some functionality in GNOME that depends on chrony; I am not sure what that is, off hand. It's also part of 'anaconda-tools' (so it will be in all live images and all live installs), part of 'server-product' (so it is in Server installs), and part of 'system-tools' (so it'll be in anything that includes that). It's also part of 'workstation-product', so it's really super *definitely* included in Workstation. :P
I think it is reasonable to suggest that there is a general expectation that, on an out of the box *nix system, you can put stuff in crontab and it will work. I like systemd timers, but the system doesn't attempt cron compatibility so far as I'm aware; if we don't install a cron daemon, this won't be the case. (I'm actually slightly interested in whether you wind up with chrony if you do a non-live install of a non-GNOME desktop; it looks to me like you don't, which is I guess notable).
'chrony' isn't a cron job thing. That's 'cron' and 'anacron'. 'chrony' is a time server thing, and we should keep that. :)
-- 真実はいつも一つ!/ Always, there's only one truth!
On Tue, Apr 9, 2019 at 12:11 PM, Adam Williamson adamwill@fedoraproject.org wrote:
To be specific here, 'at' is part of the @standard group. 'chrony' is pulled in several ways. It's part of @standard *if gnome-control-center is being installed*, so effectively it'll be installed with Workstation but not other editions/spins.
Note that Workstation doesn't include @standard at all.
Le mardi 09 avril 2019 à 10:11 -0700, Adam Williamson a écrit :
To be specific here, 'at' is part of the @standard group. 'chrony' is pulled in several ways. It's part of @standard *if gnome-control- center is being installed*, so effectively it'll be installed with Workstation but not other editions/spins. That sort of implies that there's some functionality in GNOME that depends on chrony;
chrony, mdadm and dmraid are all necessary parts of a workstation setup.
The first one because some virtualisation systems (Windows 10 Hyper-V for example) are braindamaged and will wake up VMs (including GNOME VMs) with a system clock stuck at the time the VM was frozen. So you'd better have a clock-skewing NTP client running in all your VMs. chrony is the best existing one right now IIRC (kudos to the chrony devs, they passed their security audits with style).
The others because RAID does exist on workstations, and the dmraid guys never got around to replicating all of the mdadm functionality (pretty much like networkd stopped at integrating bonding, and never bothered to do teaming, for example).
crond, at, udev settle… that's all down to a lack of good, prominent systemd documentation, which causes third-party projects to kludge, and to give up on things they do not understand.
On Di, 09.04.19 12:54, Stephen John Smoogen (smooge@gmail.com) wrote:
I think these two are here because of the blivet you mentioned earlier. Advanced partitioning requires them to be there... and there do seem to be people who actually do expect both of those to work on their workstations when it was looked at to be removed in the past.
Well, but anaconda makes some changes to the image after copying in the OS, no? It could also do a "systemctl disable mdraid.service --root=/installed/tree" or so if it knows that mdraid is not actually needed...
This is more about socializing and teaching the systemd replacements... because most of the systemd advocates and heavy users I have asked aren't sure about how systemd replaces them and go back to cron/atd. I actually think that the replacements seem much better thought out than cruft-ware but.. but I also have little confidence I could get it to work consistently while I can find 10k tutorials on cron.
Well, we addressed similar cases by placing README files or so in those directories, so that people might notice if they are looking for something there... e.g. /var/log/README is a similar case and /etc/inittab too. I think we can do the same here too and make clear that people need to install cronie first before these things work.
Alternatively, just drop the cron dropin dirs from the base image: if I want to drop a script into those dirs, I should notice if I can't because the dir doesn't actually exist.
Lennart
-- Lennart Poettering, Berlin
On 4/9/2019 11:14 AM, Lennart Poettering wrote:
On Di, 09.04.19 12:54, Stephen John Smoogen (smooge@gmail.com) wrote:
This is more about socializing and teaching the systemd replacements... because most of the systemd advocates and heavy users I have asked aren't sure about how systemd replaces them and go back to cron/atd. I actually think that the replacements seem much better thought out than cruft-ware but.. but I also have little confidence I could get it to work consistently while I can find 10k tutorials on cron.
Well, we addressed similar cases by placing README files or so in those directories, so that people might notice if they are looking for something there... e.g. /var/log/README is a similar case and /etc/inittab too. I think we can do the same here too and make clear that people need to install cronie first before these things work.
Alternatively, just drop the cron dropin dirs from the base image: if I want to drop a script into those dirs, I should notice if I can't because the dir doesn't actually exist.
Is this really worth the effort? cronie in F30 is a 103K package, and a decent chunk of that might be the ChangeLog. crontabs is all of 18K, which is 95% the GPL and the RPM header. It seems like a very small price to pay for something everyone is going to assume will be on any *nix-compatible system of note.
The last thing I'd want to have to deal with is solving for a missing /etc/cron.* because someone forgot to click a checkbox somewhere or didn't call it out in kickstart.
-jc
On Tue, Apr 9, 2019 at 8:40 PM Japheth Cleaver cleaver@terabithia.org wrote:
Is this really worth the effort? cronie in F30 is a 103K package, and a decent chunk of that might be the ChangeLog. crontabs is all of 18K, which is 95% the GPL and the RPM header. It seems like a very small price to pay for something everyone is going to assume will be on any *nix-compatible system of note.
I read, possibly misread, the original comment as being about the number of "unneeded" things in the install, not necessarily the weight of the specific packages. What I think we are hearing from containers, OSTree, etc. is that there is a group of people that wants their systems more minimal with less unnecessary stuff. Some of this is about resource-sizing (RAM, Disk, etc.), some is about update and security footprints, and some of it is about "psychic weight." I realize that we have to make these tradeoffs in some cases, for example, aiui, gnome-keyring is not able to be removed and still have a functional Gnome environment. But this isn't universally the case.
This seems to go back to who is the primary target audience for our Workstation edition and what do they want/expect. Then we can document the changes and socialize them over a few releases so that other users can get to where they want to be. Basically "extra" isn't what no one wants, it's what our defined target doesn't want/expect. I don't expect the tools I use to always be installed by default and I don't think anyone else on the list does either. It also speaks to our spins/labs as ways to take our existing software and reformulate the install to meet different users' needs.
Lastly, taking a position on some of this, for example, removing cron, is a form of opinionation that calls back to our roots of innovating in the OS space. We would be saying, we recognize this is the way we did things X years ago, but there are new ways and processes and we see value in those. If we can't remove these things, then we are being a good distribution by pointing out where solutions that claim to fix something have fallen short so that those upstreams can make decisions about what to do.
The last thing I'd want to have to deal with is solving for a missing /etc/cron.* because someone forgot to click a checkbox somewhere or didn't call it out in kickstart.
Yes, but I also don't want to deal with a security fix in cron when I didn't want it to begin with. Adding software that one user doesn't want because it's assumed other users need it is always a trade-off.
regards,
bex
On Wed, Apr 10, 2019 at 3:22 PM Michael Watters wattersm@watters.ws wrote:
You mean like systemd? ;)
Given the origin of this thread, I tried not to go there. However, now that you've broached it, yes. This :D.
systemd is a lot of things, but it also is the way forward we think that our audience wants. It has the adoption that shows that leaving it out is bad, and in this specific cron example, not using its full feature set is probably bad for our users and is poor of us as a distro.
The more interesting set of cases is what our users want/need to be successful. For example, if we are targeting developers, should we work on replacing some tools like `cat` with `bat` as it is code aware? This would potentially be a HUGE undertaking, but could create an ease of use and experience that shows we understand our market.
regards,
bex
On 4/10/19 7:10 AM, Brian (bex) Exelbierd wrote:
Adding software that one user doesn't want because it's assumed other users need it is always a trade-off.
On 4/10/2019 4:10 AM, Brian (bex) Exelbierd wrote:
On Tue, Apr 9, 2019 at 8:40 PM Japheth Cleaver cleaver@terabithia.org wrote:
Is this really worth the effort? cronie in F30 is a 103K package, and a decent chunk of that might be the ChangeLog. crontabs is all of 18K, which is 95% the GPL and the RPM header. It seems like a very small price to pay for something everyone is going to assume will be on any *nix-compatible system of note.
I read, possibly misread, the original comment as being about the number of "unneeded" things in the install, not necessarily the weight of the specific packages. What I think we are hearing from containers, OSTree, etc. is that there is a group of people that wants their systems more minimal with less unnecessary stuff.
*snip*
This seems to go back to who is the primary target audience for our Workstation edition and what do they want/expect. Then we can document the changes and socialize them over a few releases so that other users can get to where they want to be. Basically "extra" isn't what no one wants, its what our defined target doesn't want/expect.
Although this was originally posted about Workstation, I can virtually guarantee that a solution accepted for implementation would not be "Remove cron in %post," thus this really amounts to removing it as a default install pretty much everywhere. Of course, things like OSTree/atomic, containers, and micro-environments where every byte counts are likely to be bypassing typical installation mechanisms regardless to fine-tune what's delivered -- e.g., removing documentation, etc. in a docker kickstart %post, or re-implementing parts of RPM to begin with.
Reducing the Minimal size is, in general, good, but it's possible to go too far, and I think that's the case with low-level, *nix wide tools like this. I'm reminded of the time someone thought tar needed to go too: https://bugzilla.redhat.com/show_bug.cgi?id=1409920
One might make the case for a removal from @core -- *maybe* -- but definitely not @base.
Lastly, taking a position on some of this, for example, removing cron, is a form of opinionation that calls back to our roots of innovating in the OS space. We would be saying, we recognize this is the way we did things X years ago, but there are new ways and processes and we see value in those. If we can't remove these things, then we are being a good distribution by pointing out where solutions that claim to fix something have fallen short so that those upstreams can make decisions about what to do.
But what, exactly, has cron fallen short in? I'm reminded of that old testimonial tag line for the similarly named, but unrelated, utility cronolog: "cronolog kicks ass in every conceivable way in which a utility like cronolog can kick ass." ( http://web.archive.org/web/20090627031834/http://cronolog.org/ ) There are other launching mechanisms, sure... I've worked on one of them, but that doesn't mean there's anything wrong with cron, or /etc/cron.*/ directories.
More directly, I'm old enough to remember when we were assured that systemd's timers were not "to be in competition here" with cron, and it was promised to "make sure that cron is advertised as a good solution for people who just want to queue a simple cronjob." With all the discussion of ease-of-use and discoverability, removing the "good solution" mechanism for users in favor of something requiring a package install doesn't seem to be a great example of that. https://lists.fedoraproject.org/pipermail/devel/2014-March/196293.html
The last thing I'd want to have to deal with is solving for a missing /etc/cron.* because someone forgot to click a checkbox somewhere or didn't call it out in kickstart.
Yes, but I also don't want to deal with a security fix in cron when I didn't want it to begin with. Adding software the user doesn't want to have it as assumed for other users is always a trade-off.
True, but - as written elsewhere - that can be taken to a logical extreme, both via removal of simple, auditable utilities and shell scripts, and eventual giant replacements.
-jc
On Wed, Apr 10, 2019 at 8:54 PM Japheth Cleaver cleaver@terabithia.org wrote:
Reducing the Minimal size is, in general, good, but it's possible to go too far, and I think that's the case with low-level, *nix wide tools like this. I'm reminded of the time someone thought tar needed to go too: https://bugzilla.redhat.com/show_bug.cgi?id=1409920
My interest, and the way I read the OP was not about minimal size, directly. In this case, it sounds like we have adopted a tool, systemd, that replaces another tool, cron. We can debate the completeness of the replacement, etc, but it is also valid to question why we ship both as default installed.
Extending this, tar is a reasonable thing to install on all systems today. That might not be true tomorrow. I encourage Fedora to be willing to explore these questions and make opinionated decisions.
Lastly, taking a position on some of this, for example, removing cron, is a form of opinionation that calls back to our roots of innovating in the OS space. We would be saying, we recognize this is the way we did things X years ago, but there are new ways and processes and we see value in those. If we can't remove these things, then we are being a good distribution by pointing out where solutions that claim to fix something have fallen short so that those upstreams can make decisions about what to do.
But what, exactly, has cron fallen short in?
In this case, I was trying to communicate that if systemd, which seems to want to replace cron, can't meet all the use cases, we should be reporting those that we find in the distro. That lets the systemd upstream decide if it is in scope or not and make changes as needed.
I was not suggesting cron had a short fall.
The last thing I'd want to have to deal with is solving for a missing /etc/cron.* because someone forgot to click a checkbox somewhere or didn't call it out in kickstart.
Yes, but I also don't want to deal with a security fix in cron when I didn't want it to begin with. Adding software the user doesn't want to have it as assumed for other users is always a trade-off.
True, but - as written elsewhere - that can be taken to a logical extreme, both via removal of simple, auditable utilities and shell scripts, and eventual giant replacements.
I don't consider the evolution of replacements to be inherently bad. They may not meet your use case, but they may meet the use case of the Fedora Workstation core target audience. If we build a workstation that tries to be all things for all people, it is often not great for everyone. I am advocating that we tighten our scope for our deliverables and allow for differentiation in them instead of trying to be all things for all people.
regards,
bex
On Thu, 11 Apr 2019 12:30:11 +0200 "Brian (bex) Exelbierd" bexelbie@redhat.com wrote:
On Wed, Apr 10, 2019 at 8:54 PM Japheth Cleaver cleaver@terabithia.org wrote:
Reducing the Minimal size is, in general, good, but it's possible to go too far, and I think that's the case with low-level, *nix wide tools like this. I'm reminded of the time someone thought tar needed to go too: https://bugzilla.redhat.com/show_bug.cgi?id=1409920
My interest, and the way I read the OP was not about minimal size, directly. In this case, it sounds like we have adopted a tool, systemd, that replaces another tool, cron. We can debate the completeness of the replacement, etc, but it is also valid to question why we ship both as default installed.
Extending this, tar is a reasonable thing to install on all systems today. That might not be true tomorrow. I encourage Fedora to be willing to explore these questions and make opinionated decisions.
Lastly, taking a position on some of this, for example, removing cron, is a form of opinionation that calls back to our roots of innovating in the OS space. We would be saying, we recognize this is the way we did things X years ago, but there are new ways and processes and we see value in those. If we can't remove these things, then we are being a good distribution by pointing out where solutions that claim to fix something have fallen short so that those upstreams can make decisions about what to do.
But what, exactly, has cron fallen short in?
In this case, I was trying to communicate that if systemd, which seems to want to replace cron, can't meet all the use cases, we should be reporting those that we find in the distro. That lets the systemd upstream decide if it is in scope or not and make changes as needed.
From a security standpoint, we'd want the equivalent of cron.allow and cron.deny. We'd need the PAM stack to run just as it did for cron, which means it needs to be privileged and switch to the user account so that it can correctly recreate the user environment, optionally do auditing as needed (just like cronie), and it would need to be MLS (Multi-Level Security) capable.
If it did all these things, I could see ditching cron. I also have not looked into the systemd timers to see if or how much of this it currently does.
-Steve
On Do, 11.04.19 14:19, Steve Grubb (sgrubb@redhat.com) wrote:
But what, exactly, has cron fallen short in?
In this case, I was trying to communicate that if systemd, which seems to want to replace cron, can't meet all the use cases, we should be reporting those that we find in the distro. That lets the systemd upstream decide if it is in scope or not and make changes as needed.
From a security standpoint, we'd want the equivalent of cron.allow and cron.deny. We'd need the PAM stack to run just as it did for cron, which means it needs to be privileged and switch to the user account so that it can correctly recreate the user environment, optionally do auditing as needed (just like cronie), and it would need to be MLS (Multi-Level Security) capable.
If it did all these things, I could see ditching cron. I also have not looked into the systemd timers to see if or how much of this it currently does.
systemd follows a more restrictive approach there: system services and timers can only be installed by root. If the service uses User= it can run code as a non-root user that way. If the service uses PAMName= this can be done through a PAM session if needed.
However, that's intended for system services only (i.e. for services running as users UID < 1000). For regular users (i.e. human ones, those with UID >= 1000), the idea is to install timer units in the per-user instance of the systemd service manager instead. That service manager runs inside a PAM session of the user, and the lifetime is normally bound to the time the user is logged in, meaning that users who are not logged in cannot run stuff. (However, specific users can be marked as "lingering" through a privileged operation, and if so their per-user service manager is started at boot and stays around until shutdown, so that their timers can run outside of the immediate login time of the user.)
This model is supposed to be a bit more restrictive security-wise than traditional cron: it's not privileged code that parses and schedules the timer expressions and then transitions to the target user on each dispatch; it's always the user's own code that is responsible for that.
systemd is not a 1:1 replacement for cron, not by a long shot, and it's not supposed to be. It's supposed to be more restrictive and run the scheduler itself with the same privileges as the user it shall run stuff as. There are differences besides lifecycle and security though. For example, it serializes execution of timer-triggered services, and merges trigger events.
It never was the intention to provide the exact same feature set as cron. And I wouldn't push users who want cron to use systemd timers instead. But I'd say the semantics and security model we expose in systemd timers are actually a better fit for the various housekeeping jobs we ship along with our various RPMs.
Lennart
-- Lennart Poettering, Berlin
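A minimal sketch of that system-level setup (the unit, account, and script names here are made up for illustration): a root-installed timer whose service drops privileges with User=, optionally inside a PAM session via PAMName=.

  # /etc/systemd/system/housekeeping.service (hypothetical names throughout)
  cat > /etc/systemd/system/housekeeping.service <<'EOF'
  [Unit]
  Description=Example housekeeping job run as an unprivileged account
  [Service]
  Type=oneshot
  User=housekeeping
  # PAMName=login would additionally open a PAM session for the job
  ExecStart=/usr/local/bin/housekeeping.sh
  EOF

  cat > /etc/systemd/system/housekeeping.timer <<'EOF'
  [Unit]
  Description=Run the housekeeping job daily
  [Timer]
  OnCalendar=daily
  Persistent=true
  [Install]
  WantedBy=timers.target
  EOF

  systemctl daemon-reload
  systemctl enable --now housekeeping.timer

Only root can install and enable these units, which is exactly the restriction described above.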
On 4/11/19 10:16 AM, Lennart Poettering wrote:
However, that's intended for system services only (i.e. for services running as users UID < 1000). For regular users (i.e. human ones, those with UID >= 1000), the idea is to install timer units in the per-user instance of the systemd service manager instead. That service manager runs inside a PAM session of the user, and the lifetime is normally bound to the time the user is logged in, meaning that users who are not logged in cannot run stuff. (However, specific users can be marked as "lingering" through a privileged operation, and if so their per-user service manager is started at boot and stays around until shutdown, so that their timers can run outside of the immediate login time of the user.)
I run a bunch of background jobs like harvesting podcasts that are released weekly, collecting weather stats for my garden watering system, monitoring my power feed and UPS, collecting ADSB data, etc. I don't think of those as 'system' services, so I run them in my own cron jobs. The system works well because even if my system reboots on a power glitch, or my session crashes, the jobs still run--but in the systemd world it wouldn't work.
I'd like the system jobs to be strictly about the OS infrastructure---both for the 'ideological purity' and because it seems to me that it'd be easier to move them to some sort of cloud environment where I don't manage the underlying OS.
I think you're saying that systemd is designed on an assumption that such jobs are part of system operation, and will have to run as system/privileged jobs or at least be designated as 'lingering', which you say requires system privilege. I would argue that on my own system (which is a majority of systems now) it should be easy to designate low-privilege jobs as lingering: I should get to decide if it's useful for them to run even if I don't have a current login session.
Compare this with Android: the apps can run in background, and it's fine; I implicitly authorized them by owning the device, and installing the app after authenticating to the device and to the app store and maybe to cloud services they depend on. I think the Android model is more relevant in this IoT age than the traditional timesharing, 'kick-me-off-when-I-log-out' mode.
On 4/11/19 5:32 PM, Przemek Klosowski wrote:
I think the Android model is more relevant in this IoT age than the traditional timesharing, 'kick-me-off-when-I-log-out' mode.
I would agree, and observe that even the timesharing model was never really kick-me-off-when-I-log-out. Processes have an owner (username) and run by themselves. Some effects related to the parent-child lifecycle are almost accidental (broken pipes and so on) and easily avoided (nohup, screen). The concept of "login" was just associated with how you entered the system (authentication, ...), and there was no real concept of a "session". The "session" concept mostly came from the graphical interfaces, where many pieces have to collaborate to give the final experience, which started the "session daemons" fashion. Some non-Unix operating systems that were almost useless without a login (and without a graphics card) reinforced the idea that "the machine does something only when somebody is logged in".
Regards.
On Do, 11.04.19 11:32, Przemek Klosowski (przemek.klosowski@nist.gov) wrote:
On 4/11/19 10:16 AM, Lennart Poettering wrote:
However, that's intended for system services only (i.e. for services running as users UID < 1000). For regular users (i.e. human ones, those with UID >= 1000), the idea is to install timer units in the per-user instance of the systemd service manager instead. That service manager runs inside a PAM session of the user, and the lifetime is normally bound to the time the user is logged in, meaning that users who are not logged in cannot run stuff. (However, specific users can be marked as "lingering" through a privileged operation, and if so their per-user service manager is started at boot and stays around until shutdown, so that their timers can run outside of the immediate login time of the user.)
I run a bunch of background jobs like harvesting podcasts that are released weekly, collecting weather stats for my garden watering system, monitoring my power feed and UPS, collecting ADSB data, etc. I don't think of those as 'system' services, so I run them in my own cron jobs. The system works well because even if my system reboots on a power glitch, or my session crashes, the jobs still run--but in the systemd world it wouldn't work.
Why wouldn't they? If you want to run your own stuff independent of you being logged in then do "loginctl set-linger" on your user and it's done.
The logic in systemd is more strict on putting boundaries on resource usage, and thus will by default not allow you to consume resources while you are not logged in. It's really how this always should have been designed. However, we fully acknowledge that there are many uses where the ability to run stuff independently of any login as your own user is fine, but you need to turn on lingering for that (which is privileged), so that this is explicitly marked OK.
But anyway, it's totally fine if you use cron for what you are doing too, if you are more comfortable with it. It's just a question if cron needs to be around on the workstation default install. I mean, all the stuff you list up there, it's not precisely part of the base install either, you installed a number of packages to make that work, and it should be fine if crond is one of them, no?
I think you're saying that systemd is designed on an assumption that such jobs are part of system operation, and will have to run as system/privileged jobs or at least be designated as 'lingering', which you say requires system privilege. I would argue that on my own system (which is a majority of systems now) it should be easy to designate low-privilege jobs as lingering: I should get to decide if it's useful for them to run even if I don't have a current login session.
Privileges are required to turn on lingering for your user. After that your user can consume resources on the system from power-on to power-off without further privileges.
i.e. it's the act of turning on lingering that requires privs. After that's done once you don't have to think about that anymore.
Lennart
-- Lennart Poettering, Berlin
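A rough sketch of that user-level workflow (the actual loginctl verb is enable-linger; the user, unit, and script names here are invented):

  # One-time, privileged via polkit: let this user's manager (and thus its
  # timers) run from boot to shutdown, independent of logins
  loginctl enable-linger przemek

  # Per-user units live under ~/.config/systemd/user/
  mkdir -p ~/.config/systemd/user
  cat > ~/.config/systemd/user/podcast-fetch.service <<'EOF'
  [Unit]
  Description=Fetch this week's podcasts
  [Service]
  Type=oneshot
  ExecStart=%h/bin/fetch-podcasts.sh
  EOF

  cat > ~/.config/systemd/user/podcast-fetch.timer <<'EOF'
  [Unit]
  Description=Fetch podcasts weekly
  [Timer]
  OnCalendar=weekly
  Persistent=true
  [Install]
  WantedBy=timers.target
  EOF

  systemctl --user daemon-reload
  systemctl --user enable --now podcast-fetch.timer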
On Thu, 11 Apr 2019 19:08:38 +0200 Lennart Poettering mzerqung@0pointer.de wrote:
On Do, 11.04.19 11:32, Przemek Klosowski (przemek.klosowski@nist.gov) wrote:
On 4/11/19 10:16 AM, Lennart Poettering wrote:
However, that's intended for system services only (i.e. for services running as users UID < 1000). For regular users (i.e. human ones, those with UID >= 1000), the idea is to install timer units in the per-user instance of the systemd service manager instead. That service manager runs inside a PAM session of the user, and the lifetime is normally bound to the time the user is logged in, meaning that users who are not logged in cannot run stuff. (However, specific users can be marked as "lingering" through a privileged operation, and if so their per-user service manager is started at boot and stays around until shutdown, so that their timers can run outside of the immediate login time of the user.)
I run a bunch of background jobs like harvesting podcasts that are released weekly, collecting weather stats for my garden watering system, monitoring my power feed and UPS, collecting ADSB data, etc. I don't think of those as 'system' services, so I run them in my own cron jobs. The system works well because even if my system reboots on a power glitch, or my session crashes, the jobs still run--but in the systemd world it wouldn't work.
Why wouldn't they? If you want to run your own stuff independent of you being logged in then do "loginctl set-linger" on your user and it's done.
Was this the privileged operation? What privilege does it require? I just ran the command as a non-admin user and saw no errors or prompts for passwords or anything.
-Steve
The logic in systemd is more strict on putting boundaries on resource usage, and thus will by default not allow you to consume resources while you are not logged in. It's really how this always should have been designed. However, we fully acknowledge that there are many uses where the ability to run stuff independently of any login as your own user is fine, but you need to turn on lingering for that (which is privileged), so that this is explicitly marked OK.
But anyway, it's totally fine if you use cron for what you are doing too, if you are more comfortable with it. It's just a question if cron needs to be around on the workstation default install. I mean, all the stuff you list up there, it's not precisely part of the base install either, you installed a number of packages to make that work, and it should be fine if crond is one of them, no?
I think you're saying that systemd is designed on an assumption that such jobs are part of system operation, and will have to run as system/privileged jobs or at least be designated as 'lingering', which you say requires system privilege. I would argue that on my own system (which is a majority of systems now) it should be easy to designate low-privilege jobs as lingering: I should get to decide if it's useful for them to run even if I don't have a current login session.
Privileges are required to turn on lingering for your user. After that your user can consume resources on the system from power-on to power-off without further privileges.
i.e. it's the act of turning on lingering that requires privs. After that's done once you don't have to think about that anymore.
Lennart
-- Lennart Poettering, Berlin
On Fri, 12 Apr 2019 10:01:33 +0200 Dridi Boukelmoune dridi.boukelmoune@gmail.com wrote:
Was this the privileged operation? What privilege does it require? I just run the command as a non-admin user and saw no errors or prompts for passwords or anything.
Are you part of the wheel group
No, this account does not have wheel. That's what I meant by non-sysadmin acct.
and is wheel configured to be password-less in sudo?
Default F29 sudo config.
-Steve
On Do, 11.04.19 20:49, Steve Grubb (sgrubb@redhat.com) wrote:
I run a bunch of background jobs like harvesting podcasts that are released weekly, collecting weather stats for my garden watering system, monitoring my power feed and UPS, collecting ADSB data, etc. I don't think of those as 'system' services, so I run them in my own cron jobs. The system works well because even if my system reboots on a power glitch, or my session crashes, the jobs still run--but in the systemd world it wouldn't work.
Why wouldn't they? If you want to run your own stuff independent of you being logged in then do "loginctl set-linger" on your user and it's done.
Was this the privileged operation? What privilege does it require? I just run the command as a non-admin user and saw no errors or prompts for passwords or anything.
It uses PolicyKit, like most of the system level services that need to authenticate clients. Hence, not sure what your local configuration of PolicyKit is, but iirc it allows users in "wheel" to do a lot of stuff without password. We require "auth_admin_keep" from pk for the action of enabling lingering.
Lennart
-- Lennart Poettering, Berlin
On 4/11/19 1:08 PM, Lennart Poettering wrote:
I run a bunch of background jobs like harvesting podcasts that are released weekly, collecting weather stats for my garden watering system, monitoring my power feed and UPS, collecting ADSB data, etc. I don't think of those as 'system' services, so I run them in my own cron jobs. The system works well because even if my system reboots on a power glitch, or my session crashes, the jobs still run--but in the systemd world it wouldn't work.
Why wouldn't they? If you want to run your own stuff independent of you being logged in then do "loginctl set-linger" on your user and it's done.
The logic in systemd is more strict on putting boundaries on resource usage, and thus will by default not allow you to consume resources while you are not logged in. It's really how this always should have been designed. However, we fully acknowledge that there are many uses where the ability to run stuff independently of any login as your own user is fine, but you need to turn on lingering for that (which is privileged), so that this is explicitly marked OK.
It IS very useful for systemd to prevent resource leaks by killing errant processes (hanging browser, etc.)---however, as we discussed, some processes should not be killed; I know which processes I want to anoint this way, and I take responsibility for their possible misbehavior.
I understand that set-linger disables process harvesting for all processes in the session, though, and I would like to just do it only for SOME processes.
I guess I could create another user for the persistent jobs and set-linger that session... but it gets complicated.
I think it's a better design if I could designate individual processes using my UID as set-linger. I remember talking about remote sessions with long-running jobs, and how the connection timeouts caused by remote clients or network routing prevented their completion. The solution was IIRC running them in tmux/screen. Here also, it'd be nice if I could designate 'my-long-running-service-like-executable' as eligible for lingering.
On Do, 11.04.19 17:08, Przemek Klosowski (przemek.klosowski@nist.gov) wrote:
The logic in systemd is more strict on putting boundaries on resource usage, and thus will by default not allow you to consume resources while you are not logged in. It's really how this always should have been designed. However, we fully acknowledge that there are many uses where the ability to run stuff independently of any login as your own user is fine, but you need to turn on lingering for that (which is privileged), so that this is explicitly marked OK.
It IS very useful for systemd to prevent resource leaks by killing errant processes (hanging browser, etc.)---however, as we discussed, some processes should not be killed; I know which processes I want to anoint this way, and I take responsibility for their possible misbehavior.
I understand that set-linger disables process harvesting for all processes in the session, though, and I would like to just do it only for SOME processes.
If you enable lingering for a user, it's the "systemd --user" instance (i.e. the per-user service manager) that is started at boot and terminated at shutdown (instead of started at first login and terminated at last logout of the user), that's all.
If you then run code as user service (i.e. as a service started and managed by the "systemd --user" instance instead of PID 1) then it is lifecycled (and its processes killed as needed) by the user service manager. And you can configure the way you want killing to behave like you would for any systemd service: with KillMode= in the unit file.
Lennart
-- Lennart Poettering, Berlin
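A small sketch of that knob (hypothetical unit name): a drop-in that keeps a user service's child processes alive when the service itself is stopped.

  mkdir -p ~/.config/systemd/user/adsb-collector.service.d
  cat > ~/.config/systemd/user/adsb-collector.service.d/kill.conf <<'EOF'
  [Service]
  # Only the unit's main process is signalled on stop; other processes in the
  # unit's cgroup are left alone (see systemd.kill(5) for the trade-offs)
  KillMode=process
  EOF
  systemctl --user daemon-reload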
On Fri, 12 Apr 2019 11:21:13 +0200 Lennart Poettering mzerqung@0pointer.de wrote:
On Do, 11.04.19 17:08, Przemek Klosowski (przemek.klosowski@nist.gov) wrote:
The logic in systemd is more strict on putting boundaries on resource usage, and thus will by default not allow you to consume resources while you are not logged in. It's really how this always should have been designed. However, we fully acknowledge that there are many uses where the ability to run stuff independently of any login as your own user is fine, but you need to turn on lingering for that (which is privileged), so that this is explicitly marked OK.
It IS very useful for systemd to prevent resource leaks by killing errant processes (hanging browser, etc.)---however, as we discussed, some processes should not be killed; I know which processes I want to anoint this way, and I take responsibility for their possible misbehavior.
I understand that set-linger disables process harvesting for all processes in the session, though, and I would like to just do it only for SOME processes.
If you enable lingering for a user, it's the "systemd --user" instance (i.e. the per-user service manager) that is started at boot and terminated at shutdown (instead of started at first login and terminated at last logout of the user), that's all.
If you then run code as user service (i.e. as a service started and managed by the "systemd --user" instance instead of PID 1) then it is lifecycled (and its processes killed as needed) by the user service manager. And you can configure the way you want killing to behave like you would for any systemd service: with KillMode= in the unit file.
This doesn't really fit with the security requirements we need. Anything run outside of a user session needs to have an audit session ID and login UID assigned. We also need the ability to know the name of the script that is being run in an audit event.
-Steve
On Sa, 13.04.19 14:03, Steve Grubb (sgrubb@redhat.com) wrote:
If you enable lingering for a user, it's the "systemd --user" instance (i.e. the per-user service manager) that is started at boot and terminated at shutdown (instead of started at first login and terminated at last logout of the user), that's all.
If you then run code as user service (i.e. as a service started and managed by the "systemd --user" instance instead of PID 1) then it is lifecycled (and its processes killed as needed) by the user service manager. And you can configure the way you want killing to behave like you would for any systemd service: with KillMode= in the unit file.
This doesn't really fit with the security requirements we need. Anything run outside of a user session needs to have an audit session ID and login UID assigned.
It has. As mentioned, systemd --user runs as part of a PAM session, hence it acquires its own session ID and loginuid setting as part of that.
We also need to have the ability to know the name of the script that is being run in an audit event.
To my knowledge audit collects the comm name of any process already, no?
Lennart
-- Lennart Poettering, Berlin
On 4/11/2019 8:32 AM, Przemek Klosowski wrote:
On 4/11/19 10:16 AM, Lennart Poettering wrote:
However, that's intended for system services only (i.e. for services running as users UID < 1000). For regular users (i.e. human ones, those with UID >= 1000), the idea is to install timer units in the per-user instance of the systemd service manager instead. That service manager runs inside a PAM session of the user, and the lifetime is normally bound to the time the user is logged in, meaning that users who are not logged in cannot run stuff. (However, specific users can be marked as "lingering" through a privileged operation, and if so their per-user service manager is started at boot and stays around until shutdown, so that their timers can run outside of the immediate login time of the user.)
I think you're saying that systemd is designed on an assumption that such jobs are part of system operation, and will have to run as system/privileged jobs or at least be designated as 'lingering', which you say requires system privilege. I would argue that on my own system (which is a majority of systems now) it should be easy to designate low-privilege jobs as lingering: I should get to decide if it's useful for them to run even if I don't have a current login session.
While we're discussing session-tying for cron-esque background automation, it would be nice to have a resolution for https://bugzilla.redhat.com/show_bug.cgi?id=911766 via https://bugzilla.redhat.com/show_bug.cgi?id=995792#c16
It's unfortunate that it still fills up basic syslog with it: https://access.redhat.com/solutions/1564823
-jc
I'd say that backward compatibility is important and as a Fedora workstation and server user I expect crond to work OOTB. Yes, users can install and enable the service if needed but cron is such an essential part of every system that I see no reason to exclude it.
On 4/11/19 6:30 AM, Brian (bex) Exelbierd wrote:
My interest, and the way I read the OP was not about minimal size, directly. In this case, it sounds like we have adopted a tool, systemd, that replaces another tool, cron. We can debate the completeness of the replacement, etc, but it is also valid to question why we ship both as default installed.
On 4/9/19 2:14 PM, Lennart Poettering wrote:
On Di, 09.04.19 12:54, Stephen John Smoogen (smooge@gmail.com) wrote:
I think these two are here because of the blivet you mentioned earlier. Advanced partitioning requires them to be there... and there do seem to be people who actually expect both of those to work on their workstations; that came up when removing them was looked at in the past.
Well, but anaconda makes some changes to the image after copying in the OS, no? It could also do a "systemctl disable mdraid.service --root=/installed/tree" or so if it knows that mdraid is not actually needed...
The general argument against doing things like that in anaconda is that it will change later and this email thread becomes "Why is anaconda running systemctl disable mdraid.service after installing the OS"
This is more about socializing and teaching the systemd replacements... because most of the systemd advocates and heavy users I have asked aren't sure how systemd replaces them and go back to cron/atd. I actually think the replacements seem much better thought out than the cruft-ware, but I also have little confidence I could get them to work consistently, while I can find 10k tutorials on cron.
Well, we address similar cases by placing README files or so in those directories, so that people might notice if they are looking for something there... i.e. /var/log/README is a similar case and /etc/inittab too. I think we can do the same here too and make clear that people need to install cronie first before these things work.
Alternatively, just drop the cron dropin dirs from the base image: if I want to drop a script into those dirs I should notice if I can't because the dir doesn't actually exist.
Lennart
-- Lennart Poettering, Berlin
On Tue, Apr 09, 2019 at 06:07:09PM +0200, Lennart Poettering wrote:
multipathd [...] And beyond that, this daemon is really ugly too: it logs at high log levels during boot that it found no configuration and hence nothing to do. Yes, obviously, but that's a reason to shut up and proceed quickly, not to complain loudly about that so that it even appears on the scren (I mean srsly, this is the first thing I saw when i booted from the fedora live media: a log message printed all over the screen that multipathd has no working configuration...).
This was supposed to be fixed https://bugzilla.redhat.com/show_bug.cgi?id=1631772. If not, please reopen that bug.
- libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idel?
This was supposed to happen. See https://bugzilla.redhat.com/show_bug.cgi?id=1290357.
I guess I should file bz issues about all of the above, but I am not sure against which packages? anaconda? comps (does that still exist)? the individual packages?
Individual packages. Logging issues obviously belong in the packages. In principle fedora-release gets to decide what is started by default, but it's probably better to start with a bug on the package, because the maintainers know best whether not starting the service by default is possible, and they can file a fedora-release PR.
It's also my hope that maybe some champion volunteers for tracking down issues like this and fixing them? i.e. keeping udev settle out of the default install alone would be a worthy goal for every release, given that it doubles boot time on typical systems... Anyone up for that?
Zbyszek
On Di, 09.04.19 17:09, Zbigniew Jędrzejewski-Szmek (zbyszek@in.waw.pl) wrote:
On Tue, Apr 09, 2019 at 06:07:09PM +0200, Lennart Poettering wrote:
multipathd [...] And beyond that, this daemon is really ugly too: it logs at high log levels during boot that it found no configuration and hence nothing to do. Yes, obviously, but that's a reason to shut up and proceed quickly, not to complain loudly about that so that it even appears on the scren (I mean srsly, this is the first thing I saw when i booted from the fedora live media: a log message printed all over the screen that multipathd has no working configuration...).
This was supposed to be fixed https://bugzilla.redhat.com/show_bug.cgi?id=1631772. If not, please reopen that bug.
Appears to be a problem still. Commented there...
- libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idel?
This was supposed to happen. See https://bugzilla.redhat.com/show_bug.cgi?id=1290357.
But it's what I see here. :-(
Lennart
-- Lennart Poettering, Berlin
On 4/9/19 1:09 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Tue, Apr 09, 2019 at 06:07:09PM +0200, Lennart Poettering wrote:
multipathd [...] And beyond that, this daemon is really ugly too: it logs at high log levels during boot that it found no configuration and hence nothing to do. Yes, obviously, but that's a reason to shut up and proceed quickly, not to complain loudly about that so that it even appears on the scren (I mean srsly, this is the first thing I saw when i booted from the fedora live media: a log message printed all over the screen that multipathd has no working configuration...).
This was supposed to be fixed https://bugzilla.redhat.com/show_bug.cgi?id=1631772. If not, please reopen that bug.
- libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idel?
This was supposed to happen. See https://bugzilla.redhat.com/show_bug.cgi?id=1290357.
This bug (briefly) describes why at present libvirt can't use socket activation: https://bugzilla.redhat.com/show_bug.cgi?id=1326136
Libvirtd has a feature to autostart VMs and other resources at host boot-up, which is useful and used often enough that people get mad when it breaks. We need some new work to make this play with socket activation, maybe move the autostart checking to some separate service that runs once at startup. But with a simple implementation Workstation wouldn't benefit because the installed 'default' network is still set to autostart.
Thanks, Cole
On Di, 09.04.19 14:16, Cole Robinson (crobinso@redhat.com) wrote:
On 4/9/19 1:09 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Tue, Apr 09, 2019 at 06:07:09PM +0200, Lennart Poettering wrote:
multipathd [...] And beyond that, this daemon is really ugly too: it logs at high log levels during boot that it found no configuration and hence nothing to do. Yes, obviously, but that's a reason to shut up and proceed quickly, not to complain loudly about that so that it even appears on the scren (I mean srsly, this is the first thing I saw when i booted from the fedora live media: a log message printed all over the screen that multipathd has no working configuration...).
This was supposed to be fixed https://bugzilla.redhat.com/show_bug.cgi?id=1631772. If not, please reopen that bug.
- libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idel?
This was supposed to happen. See https://bugzilla.redhat.com/show_bug.cgi?id=1290357.
This bug (briefly) describes why at present libvirt can't use socket activation: https://bugzilla.redhat.com/show_bug.cgi?id=1326136
Hmm, so that is a valid request I guess. But why can't libvirt do exit-on-idle? i.e. after it started up, after it noticed that it has no VMs to run, checks that there are no IPC requests pending, can't it just exit then and then rely on socket activation to be started again when it is needed the next time? That way libvirt would start at boot, but not stick around during normal runtime.
Lennart
-- Lennart Poettering, Berlin
On 4/9/19 2:24 PM, Lennart Poettering wrote:
On Di, 09.04.19 14:16, Cole Robinson (crobinso@redhat.com) wrote:
On 4/9/19 1:09 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Tue, Apr 09, 2019 at 06:07:09PM +0200, Lennart Poettering wrote:
multipathd [...] And beyond that, this daemon is really ugly too: it logs at high log levels during boot that it found no configuration and hence nothing to do. Yes, obviously, but that's a reason to shut up and proceed quickly, not to complain loudly about that so that it even appears on the scren (I mean srsly, this is the first thing I saw when i booted from the fedora live media: a log message printed all over the screen that multipathd has no working configuration...).
This was supposed to be fixed https://bugzilla.redhat.com/show_bug.cgi?id=1631772. If not, please reopen that bug.
- libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idel?
This was supposed to happen. See https://bugzilla.redhat.com/show_bug.cgi?id=1290357.
This bug (briefly) describes why at present libvirt can't use socket activation: https://bugzilla.redhat.com/show_bug.cgi?id=1326136
Hmm, so that is a valid request I guess. But why can't libvirt do exit-on-idle? i.e. after it started up, after it noticed that it has no VMs to run, checks that there are no IPC requests pending, can't it just exit then and then rely on socket activation to be started again when it is needed the next time? That way libvirt would start at boot, but not stick around during normal runtime.
I don't know off hand of anything that would prevent it. Libvirt does process events from running qemu VMs, but if there's no API users connected to the daemon then I don't think libvirtd needs to be running; it can handle restart and reconnecting to running VMs. That's essentially the same behavior the session libvirtd instance uses which auto shuts down after 30 seconds if there's no clients IIRC. danpb would know best though, CCd
Thanks, Cole
On Tue, Apr 09, 2019 at 02:55:51PM -0400, Cole Robinson wrote:
On 4/9/19 2:24 PM, Lennart Poettering wrote:
On Di, 09.04.19 14:16, Cole Robinson (crobinso@redhat.com) wrote:
On 4/9/19 1:09 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Tue, Apr 09, 2019 at 06:07:09PM +0200, Lennart Poettering wrote:
multipathd [...] And beyond that, this daemon is really ugly too: it logs at high log levels during boot that it found no configuration and hence nothing to do. Yes, obviously, but that's a reason to shut up and proceed quickly, not to complain loudly about that so that it even appears on the scren (I mean srsly, this is the first thing I saw when i booted from the fedora live media: a log message printed all over the screen that multipathd has no working configuration...).
This was supposed to be fixed https://bugzilla.redhat.com/show_bug.cgi?id=1631772. If not, please reopen that bug.
- libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idel?
This was supposed to happen. See https://bugzilla.redhat.com/show_bug.cgi?id=1290357.
This bug (briefly) describes why at present libvirt can't use socket activation: https://bugzilla.redhat.com/show_bug.cgi?id=1326136
Hmm, so that is a valid request I guess. But why can't libvirt do exit-on-idle? i.e. after it started up, after it noticed that it has no VMs to run, checks that there are no IPC requests pending, can't it just exit then and then rely on socket activation to be started again when it is needed the next time? That way libvirt would start at boot, but not stick around during normal runtime.
I don't know off hand of anything that would prevent it. Libvirt does process events from running qemu VMs, but if there's no API users connected to the daemon then I don't think libvirtd needs to be running; it can handle restart and reconnecting to running VMs. That's essentially the same behavior the session libvirtd instance uses which auto shuts down after 30 seconds if there's no clients IIRC. danpb would know best though, CCd
The reason libvirtd starts on boot is that it needs to perform auto-start of various resources it manages. These resources don't have any associated systemd unit so we can't use systemd for this purpose. Ideally we would enable libvirtd to create systemd units for the resources it manages that need autostart, but that's a significant bit of work. So today we can't use systemd activation.
Regards, Daniel
On Do, 11.04.19 11:18, Daniel P. Berrangé (berrange@redhat.com) wrote:
I don't know off hand of anything that would prevent it. Libvirt does process events from running qemu VMs, but if there's no API users connected to the daemon then I don't think libvirtd needs to be running; it can handle restart and reconnecting to running VMs. That's essentially the same behavior the session libvirtd instance uses which auto shuts down after 30 seconds if there's no clients IIRC. danpb would know best though, CCd
The reason libvirtd starts on boot is that it needs to perform auto-start of various resources it manages. These resources don't have any associated systemd unit so we can't use systemd for this purpose. Ideally we would enable libvirtd to create systemd units for the resources it manages that need autostart, but that's a significant bit of work. So today we can't use systemd activation.
socket activation doesn't mean you need to start libvirt strictly on socket traffic. A common pattern is to start both the socket and the service at boot, but the service exits when idle and then gets restarted when needed. And that's what I'd like to see here: make libvirt socket-activatable, but also start it at boot. This would mean libvirt could start any VMs it wants at boot if they are configured. And if nothing is configured and nothing has an open IPC connection it would just exit, knowing that the instant something wants to talk to it, it would be started again.
Or to say this differently: you can combine activation methods nicely and it's common to do so: activation-on-boot + activation-on-timer + activation-on-socket + activation-on-hardware + activation-on-anything-else can be nicely done for the same service.
Or to say this even differently: service activation in systemd is substantially more powerful than in inetd: it's not exclusively about *on-demand* socket activation, but socket activation is just one form that can neatly be composed with every other trigger you like.
Lennart
-- Lennart Poettering, Berlin
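As a sketch of that composition (generic unit names, not libvirt's actual ones; the daemon and its idle-exit flag are hypothetical), both the socket and the service get enabled, so the service starts at boot, exits when idle, and is re-activated by the socket when the next client shows up:

  cat > /etc/systemd/system/foo.socket <<'EOF'
  [Unit]
  Description=Example activation socket
  [Socket]
  ListenStream=/run/foo.sock
  [Install]
  WantedBy=sockets.target
  EOF

  cat > /etc/systemd/system/foo.service <<'EOF'
  [Unit]
  Description=Example boot- and socket-activated service
  Requires=foo.socket
  [Service]
  # The exit-on-idle logic has to live in the daemon itself
  ExecStart=/usr/bin/foo-daemon --exit-when-idle=30s
  [Install]
  WantedBy=multi-user.target
  EOF

  systemctl daemon-reload
  systemctl enable --now foo.socket foo.service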
On Thu, Apr 11, 2019 at 03:57:30PM +0200, Lennart Poettering wrote:
On Do, 11.04.19 11:18, Daniel P. Berrangé (berrange@redhat.com) wrote:
I don't know off hand of anything that would prevent it. Libvirt does process events from running qemu VMs, but if there's no API users connected to the daemon then I don't think libvirtd needs to be running; it can handle restart and reconnecting to running VMs. That's essentially the same behavior the session libvirtd instance uses which auto shuts down after 30 seconds if there's no clients IIRC. danpb would know best though, CCd
The reason libvirtd starts on boot is that it needs to perform auto-start of various resources it manages. These resources don't have any associated systemd unit so we can't use systemd for this purpose. Ideally we would enable libvirtd to create systemd units for the resources it manages that need autostart, but that's a significant bit of work. So today we can't use systemd activation.
socket activation doesn't mean you need to start libvirt strictly on socket traffic. A common pattern is to start both the socket and the service at boot, but the service exits when idle and then gets restarted when needed. And that's what I'd like to see here: make libvirt socket-activatable, but also start it at boot. This would mean libvirt could start any VMs it wants at boot if they are configured. And if nothing is configured and nothing has an open IPC connection it would just exit, knowing that the instant something wants to talk to it, it would be started again.
Ok, I see what you mean, enabling both the service & socket by default will probably work. We already have support for systemd activation and auto-exiting, for our non-root instance, so in theory we can just change the unit file setup. Will have to think if there are any other edge cases with this in the root instance though as it has much broader functionality.
Regards, Daniel
On Tue, Apr 9, 2019 at 10:07 AM Lennart Poettering mzerqung@0pointer.de wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something).
I haven't seen a single one come across in QA
- multipathd.
I'm pretty sure it gets dragged in by the installer, i.e. the installation environment needs it because installing to multipath is supported. And since it's on the Workstation LiveOS, it just gets copied over along with the installer itself (LiveOS installs use rsync). I wonder if it's reasonable to apply more exclude filtering during rsync to avoid copying things that are needed for the Live OS environment but not on the final installed system. But that's sorta up to the Workstation WG I think.
- dmraid.
Same as above. I'm not sure whether, or when, the dmraid stuff is going to be dropped by anaconda. dmraid has been deprecated for a long time now. The two supported ways of doing software RAID are managed by mdadm and LVM, both of which actually use the md driver in the kernel.
So I think this is a question for the anaconda team.
- Similar crond. On my fresh install it's only used by "zfs-fuse", which I really wonder why it even is in the default install? And "mdadm" wants this too. (which would be great if it would just use timer units)
I think zfs-fuse and glusterfs are dragged in by libvirt, which is present because of GNOME Boxes. I don't know why any of those want crond.
mdadm scrubbing and monitoring depend on crond, which then sends email notifications if the mismatch count != 0; it's archaic these days I guess, but that's how it works.
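For illustration, the periodic scrub part could be expressed as a timer instead of a cron entry; a sketch only, assuming the raid-check helper shipped with mdadm, and ignoring the email-notification side:

  cat > /etc/systemd/system/raid-check.service <<'EOF'
  [Unit]
  Description=Scrub MD RAID arrays
  [Service]
  Type=oneshot
  ExecStart=/usr/sbin/raid-check
  EOF

  cat > /etc/systemd/system/raid-check.timer <<'EOF'
  [Unit]
  Description=Weekly MD RAID scrub
  [Timer]
  OnCalendar=Sun 01:00
  Persistent=true
  [Install]
  WantedBy=timers.target
  EOF

  systemctl enable --now raid-check.timer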
- libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idel? While I am sure it's useful on workstations why run it all the time, given that only very few users probably actually need that, and if they do starting it on demand would be much more appropriate? On my freshly installed system it is running all the time even though there are no VMs or anything around.
Agreed.
Ideally, the top 4 wouldn't be installed at all anymore (in case of the first two at least on the systems which do not need them). But if that's not in the cards, it would be great to at least not enable these services anymore in the default boot so that they are only a "systemctl enable" away for people who need them?
At the least it seems reasonable they can be disabled on the installed system, and enabled for Live OS boot if the installer needs them.
On Tue, Apr 9, 2019 at 8:35 PM Chris Murphy lists@colorremedies.com wrote:
On Tue, Apr 9, 2019 at 10:07 AM Lennart Poettering mzerqung@0pointer.de wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something).
I haven't seen a single one come across in QA
- multipathd.
I'm pretty sure it gets dragged in by the installer, i.e. the installation environment needs it because installing to multipath is supported. And since it's on the Workstation LiveOS, it just gets copied over along with the installer itself (LiveOS installs use rsync). I wonder if it's reasonable to apply more exclude filtering during rsync to avoid copying some things needed for Live OS environment, but not on the final installed system. But that's sorta up to Workstation WG I think.
Not having the rpmdb in sync with what content is on the system is probably not a good idea. I'd advocate for anaconda being able to run dnf to clean up stuff instead.
- dmraid.
Same as above. I'm not sure whether, or when, dmraid stuff is going to be dropped by anaconda. For a long time now dmraid is deprecated. The two supported ways of doing software raid are managed by mdadm and lvm, both of which actually use the md driver in the kernel.
So I think this is a question for the anaconda team.
- Similar crond. On my fresh install it's only used by "zfs-fuse", which I really wonder why it even is in the default install? And "mdadm" wants this too. (which would be great if it would just use timer units)
I think zfs-fuse and glusterfs are dragged in by libvirt, which is present because of GNOME Boxes. I don't know why any of those want crond.
These could be converted to systemd units. There's no reason not to, really...
mdadm scrub and monitoring depends on crond, and then email notifications if mismatch count != 0; it's archaic these days I guess, but that's how it works.
- libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idel? While I am sure it's useful on workstations why run it all the time, given that only very few users probably actually need that, and if they do starting it on demand would be much more appropriate? On my freshly installed system it is running all the time even though there are no VMs or anything around.
Agreed.
Ideally, the top 4 wouldn't be installed at all anymore (in case of the first two at least on the systems which do not need them). But if that's not in the cards, it would be great to at least not enable these services anymore in the default boot so that they are only a "systemctl enable" away for people who need them?
At the least it seems reasonable they can be disabled on the installed system, and enabled for Live OS boot if the installer needs them.
-- Chris Murphy
I'm really interested in the blivet crash, but I can't reproduce it with the latest branched compose. Can you provide us with reproduction steps?
Neal Gompa (ngompa13@gmail.com) wrote (Wed, 10 Apr 2019, 02:59):
On Tue, Apr 9, 2019 at 8:35 PM Chris Murphy lists@colorremedies.com wrote:
On Tue, Apr 9, 2019 at 10:07 AM Lennart Poettering mzerqung@0pointer.de
wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something).
I haven't seen a single one come across in QA
- multipathd.
I'm pretty sure it gets dragged in by the installer, i.e. the installation environment needs it because installing to multipath is supported. And since it's on the Workstation LiveOS, it just gets copied over along with the installer itself (LiveOS installs use rsync). I wonder if it's reasonable to apply more exclude filtering during rsync to avoid copying some things needed for Live OS environment, but not on the final installed system. But that's sorta up to Workstation WG I think.
Not having the rpmdb in sync with what content is on the system is probably not a good idea. I'd advocate for anaconda being able to run dnf to clean up stuff instead.
- dmraid.
Same as above. I'm not sure whether, or when, dmraid stuff is going to be dropped by anaconda. For a long time now dmraid is deprecated. The two supported ways of doing software raid are managed by mdadm and lvm, both of which actually use the md driver in the kernel.
So I think this is a question for the anaconda team.
- Similar crond. On my fresh install it's only used by "zfs-fuse", which I really wonder why it even is in the default install? And "mdadm" wants this too. (which would be great if it would just use timer units)
I think zfs-fuse and glusterfs are dragged in by libvirt, which is present because of GNOME Boxes. I don't know why any of those want crond.
These could be converted to systemd units. There's no reason not to, really...
mdadm scrub and monitoring depends on crond, and then email notifications if mismatch count != 0; it's archaic these days I guess, but that's how it works.
- libvirtd. Why is this running? Can't we make this socket activatable + exit-on-idel? While I am sure it's useful on workstations why run it all the time, given that only very few users probably actually need that, and if they do starting it on demand would be much more appropriate? On my freshly installed system it is running all the time even though there are no VMs or anything around.
Agreed.
Ideally, the top 4 wouldn't be installed at all anymore (in case of the first two at least on the systems which do not need them). But if that's not in the cards, it would be great to at least not enable these services anymore in the default boot so that they are only a "systemctl enable" away for people who need them?
At the least it seems reasonable they can be disabled on the installed system, and enabled for Live OS boot if the installer needs them.
-- Chris Murphy
-- 真実はいつも一つ!/ Always, there's only one truth!
On Mi, 10.04.19 08:16, Julen Landa Alustiza (julen@zokormazo.info) wrote:
I'm really interested in the blivet crash, but I can't reproduce it with the latest branched compose. Can you provide us with reproduction steps?
Urks, I don't remember. I think I created an ESP partition, two ext4 partitions, one encrypted swap and one encrypted btrfs or so, and then changed my mind and flushed it all out with that button in the lower right, and then it died on me. And then I did the same twice again and it died again. But sorry, I didn't record the errors; it wasn't particularly special though...
Lennart
-- Lennart Poettering, Berlin
On Wed, Apr 10, 2019 at 2:35 AM Chris Murphy lists@colorremedies.com wrote:
- multipathd.
I'm pretty sure it gets dragged in by the installer
Nope, multipath seems to be present because of libblockdev and udisks (and perhaps some more), which are in turn required by GNOME:
$ rpm -q --whatrequires device-mapper-multipath
libblockdev-fs-2.21-2.fc30.x86_64
libblockdev-part-2.21-2.fc30.x86_64
libblockdev-mpath-2.21-2.fc30.x86_64
fcoe-utils-1.0.32-6.fc29.x86_64
$ rpm -q --whatrequires libblockdev-fs
udisks2-2.8.2-1.fc30.x86_64
$ rpm -q --whatrequires udisks2
gvfs-1.40.0-1.fc30.x86_64
gnome-disk-utility-3.32.0-1.fc30.x86_64
That was latest F30 Workstation Live.
On 4/10/19 11:57 AM, Kamil Paral wrote:
On Wed, Apr 10, 2019 at 2:35 AM Chris Murphy lists@colorremedies.com wrote:
- multipathd.
I'm pretty sure it gets dragged in by the installer
Nope, multipath seems to be present because of libblockdev and udisks (and perhaps some more), which are in turn required by GNOME:
$ rpm -q --whatrequires device-mapper-multipath
libblockdev-fs-2.21-2.fc30.x86_64
libblockdev-part-2.21-2.fc30.x86_64
That's actually a bug in libblockdev; only the multipath plugin should depend on device-mapper-multipath.
libblockdev-mpath-2.21-2.fc30.x86_64
fcoe-utils-1.0.32-6.fc29.x86_64
On 4/11/19 2:12 AM, Vojtěch Trefný wrote:
That's actually a bug in libblockdev; only the multipath plugin should depend on device-mapper-multipath.
Bug opened.
https://bugzilla.redhat.com/show_bug.cgi?id=1699071
I can make the spec changes if you need help, but I wanted to document this action in case there is something we're missing.
Thanks, michael
On Di, 09.04.19 18:34, Chris Murphy (lists@colorremedies.com) wrote:
Ideally, the top 4 wouldn't be installed at all anymore (in case of the first two at least on the systems which do not need them). But if that's not in the cards, it would be great to at least not enable these services anymore in the default boot so that they are only a "systemctl enable" away for people who need them?
At the least it seems reasonable they can be disabled on the installed system, and enabled for Live OS boot if the installer needs them.
I mean, it could be as easy as this: these services remain off, but the installer does a "systemctl enable --now …" on them as soon as it notices it needs them. That way the livesys and the installed host version can be 100% identical, if that's what people want.
Lennart
-- Lennart Poettering, Berlin
On Tue, Apr 9, 2019 at 12:07 PM Lennart Poettering mzerqung@0pointer.de wrote: [...]
Can we maybe reduce the default set of packages a bit? In particular the following ones I really don't think should be in our default install:
Although somewhat orthogonal to your notes below, overall there's a lot of package entangling in the basic platform underlying the Workstation as well. This is something we should look at if we're to make progress on the CI and Lifecycle objectives -- i.e. being able to produce a basic platform for integration more quickly. I was talking to contyk about this the other day and we are starting to throw some ideas around about that. Again, it doesn't solve all your individual concerns below, but it's at least related. A good portion of the other subthread is really about choices made and how we enable bits properly for something like Workstation, which is also valid but a different effort I think.
[...]
- atd? Do we still need that? Do we have postinst scripts that need this? If so, wouldn't systemd-run be a better approach for those? Isn't it time to make this an RPM people install if they want it?
Interestingly I think Google Chrome needs this when it installs, though it seems nonsensical to me. (Chrome is installed by about 50% of our users given some informal stats, so writing it off would be shooting ourselves in the foot.) That's something the Workstation folks may want to work with them to fix in a more systemd-ish way.
On Thursday, 11 April 2019 at 18:09, Paul Frields wrote:
On Tue, Apr 9, 2019 at 12:07 PM Lennart Poettering mzerqung@0pointer.de wrote:
[...]
[...]
- atd? Do we still need that? Do we have postinst scripts that need this? If so, wouldn't systemd-run be a better approach for those? Isn't it time to make this an RPM people install if they want it?
Interestingly I think Google Chrome needs this when it installs, though it seems nonsensical to me. (Chrome is installed by about 50% of our users given some informal stats, so writing it off would be shooting ourselves in the foot.) That's something the Workstation folks may want to work with them to fix in a more systemd-ish way.
Chrome doesn't require atd explicitly (nor is it pulled in by any of its dependencies).
It does use it in %post to sneak in a cron job to add a repo config file and its GPG key trust behind your back:
service atd start
echo "sh /etc/cron.daily/google-chrome" | at now + 2 minute > /dev/null 2>&1
So, actually not having atd installed won't break Chrome as it will just ignore the 'at' command execution error due to 'exit 0' a few lines below it.
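For comparison, the same "run this once, a couple of minutes from now" effect could be had with a transient systemd timer instead of atd (a sketch only; the unit name is made up here, and the script path is taken from the snippet above):

systemd-run --on-active=2min --unit=google-chrome-repo-once \
    /bin/sh /etc/cron.daily/google-chrome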
Regards, Dominik
On Fr, 12.04.19 11:35, Dominik 'Rathann' Mierzejewski (dominik@greysector.net) wrote:
Interestingly I think Google Chrome needs this when it installs, though it seems nonsensical to me. (Chrome is installed by about 50% of our users given some informal stats, so writing it off would be shooting ourselves in the foot.) That's something the Workstation folks may want to work with them to fix in a more systemd-ish way.
Chrome doesn't require atd explicitly (nor is it pulled in by any of its dependencies).
It does use it in %post to sneak in a cron job to add a repo config file and its GPG key trust behind your back:
service atd start
echo "sh /etc/cron.daily/google-chrome" | at now + 2 minute > /dev/null 2>&1
So, actually not having atd installed won't break Chrome as it will just ignore the 'at' command execution error due to 'exit 0' a few lines below it.
Just out of curiosity, why does a web browser need a daily chrome job?
Lennart
-- Lennart Poettering, Berlin
* Lennart Poettering:
Just out of curiosity, why does a web browser need a daily chrome job?
It uses this to persist itself, so that it is more difficult to remove the Google repository.
I guess we can be lucky that it doesn't do this via /etc/ld.so.preload or a kernel module.
Thanks, Florian
On Fri, Apr 12, 2019 at 01:12:51PM +0200, Lennart Poettering wrote:
Just out of curiosity, why does a web browser need a daily chrome job?
From the script's comment:
# It creates the repository configuration file for package updates, since
# we cannot do this during the google-chrome installation since the repository
# is locked.
#
# This functionality can be controlled by creating the $DEFAULTS_FILE and
# setting "repo_add_once" to "true" or "false" as desired. An empty
# $DEFAULTS_FILE is the same as setting the value to "false".
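For anyone who wants to opt out of that behaviour, the script reads a defaults file before re-adding the repo; on Fedora/RHEL that file appears to be /etc/default/google-chrome (an assumption based on the script comment above, not verified against the current package):

# disable the repo re-add machinery described in the comment above
echo 'repo_add_once=false' | sudo tee -a /etc/default/google-chrome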
Despite this, the complete script (500+ lines!) seems to be in-line copied to the rpm's post-install script (nearly 1200 lines!) too.
I gave up trying to understand everything they do in there years ago. I used to repackage Chrome in earlier days (i.e. make an rpm that extracts the Google binary rpm and builds a new one with pre/post scripts I wrote myself), also because I refused to run Google scripts as root that contained (in those days, among other things) lines like "rm -rf $SOMEVAR". But I became lazy, and you can only spend your time once, so now I decided not to bother anymore.
Cheers,
Le vendredi 12 avril 2019 à 13:12 +0200, Lennart Poettering a écrit :
On Fr, 12.04.19 11:35, Dominik 'Rathann' Mierzejewski ( dominik@greysector.net) wrote:
Interestingly I think Google Chrome needs this when it installs, though it seems nonsensical to me. (Chrome is installed by about 50% of our users given some informal stats, so writing it off would be shooting ourselves in the foot.) That's something the Workstation folks may want to work with them to fix in a more systemd-ish way.
Chrome doesn't require atd explicitly (nor is it pulled in by any of its dependencies).
It does use it in %post to sneak in a cron job to add a repo config file and its GPG key trust behind your back:
service atd start
echo "sh /etc/cron.daily/google-chrome" | at now + 2 minute > /dev/null 2>&1
So, actually not having atd installed won't break Chrome as it will just ignore the 'at' command execution error due to 'exit 0' a few lines below it.
Just out of curiosity, why does a web browser need a daily chrome job?
Probably a check-for-updates or download white/black lists thing.
Regards,
On Fri, Apr 12, 2019, at 7:13 AM, Lennart Poettering wrote:
On Fr, 12.04.19 11:35, Dominik 'Rathann' Mierzejewski (dominik@greysector.net) wrote:
Interestingly I think Google Chrome needs this when it installs, though it seems nonsensical to me. (Chrome is installed by about 50% of our users given some informal stats, so writing it off would be shooting ourselves in the foot.) That's something the Workstation folks may want to work with them to fix in a more systemd-ish way.
Chrome doesn't require atd explicitly (nor is it pulled in by any of its dependencies).
It does use it in %post to sneak in a cron job to add a repo config file and its GPG key trust behind your back:
service atd start
echo "sh /etc/cron.daily/google-chrome" | at now + 2 minute > /dev/null 2>&1
So, actually not having atd installed won't break Chrome as it will just ignore the 'at' command execution error due to 'exit 0' a few lines below it.
Just out of curiosity, why does a web browser need a daily chrome job?
Chrome needs a cron job kept in time with chrony of course!
I am not 100% certain on this but I am pretty sure it's because of RPM and GPG. librpm stores the keys in the rpmdb, and it's not supported to import the keys during a "transaction".
Whereas at least libdnf uses /etc/pki/rpm-gpg and imports the keys before doing anything else; see also https://github.com/rpm-software-management/libdnf/issues/43
But I think Chrome is trying to support multiple librpm-based tools with a single package. The model of "add a repository with GPG key" is not really standardized across rpm-based distributions.
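For reference, the repo file this machinery ends up dropping looks roughly like the sketch below (contents written from memory and may not match the current package exactly):

cat > /etc/yum.repos.d/google-chrome.repo <<'EOF'
[google-chrome]
name=google-chrome
baseurl=https://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub
EOF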
Once upon a time, Dominik 'Rathann' Mierzejewski dominik@greysector.net said:
Chrome doesn't require atd explicitly (nor is it pulled in by any of its dependencies).
That's incorrect. The Google Chrome RPM requires /usr/bin/lsb_release, which is from redhat-lsb-core, and that requires /usr/bin/at.
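A sketch of how to check that chain on an installed system (output omitted; this just follows the claim above and has not been re-verified here):

rpm -q --whatprovides /usr/bin/lsb_release   # -> redhat-lsb-core
rpm -q --requires redhat-lsb-core | grep at  # -> /usr/bin/at among the requires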
On Friday, 12 April 2019 at 14:47, Chris Adams wrote:
Once upon a time, Dominik 'Rathann' Mierzejewski dominik@greysector.net said:
Chrome doesn't require atd explicitly (nor is it pulled in by any of its dependencies).
That's incorrect. The Google Chrome RPM requires /usr/bin/lsb_release, which is from redhat-lsb-core, and that requires /usr/bin/at.
Ah, you're right. I thought atd was in an atd package, but it's actually in at, which I have installed, so I didn't see it in the transaction when installing Chrome. Sorry, my bad.
Regards, Dominik
On Tue, Apr 9, 2019, at 12:07 PM, Lennart Poettering wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Can we maybe reduce the default set of packages a bit?
The dependency chain of libvirtd is just doomed from this perspective. For Fedora Silverblue (and FCOS) we made the intentional decision not to include it by default (though I personally have it package layered).
(Using qemu inside a container without libvirt is also a nice pattern, we use this in https://github.com/coreos/coreos-assembler )
On Thu, Apr 11, 2019 at 12:48:13PM -0400, Colin Walters wrote:
On Tue, Apr 9, 2019, at 12:07 PM, Lennart Poettering wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Can we maybe reduce the default set of packages a bit?
The dependency chain of libvirtd is just doomed from this perspective.
This is a rather sweeping and inaccurate statement IMHO.
The scale of the dependencies you get from installing libvirt varies significantly depending on which libvirt RPMs you choose to install or depend on. There's quite a lot of modularization there if you pick the right sub-RPMs to minimize the install footprint. In addition, some of the footprint you get when installing libvirt is actually coming from QEMU itself.
eg starting from the fedora:30 docker image
* "libvirt-daemon-driver-qemu"
The bare minimum currently needed by the libvirt QEMU driver impl.
56 RPMs / 100 MB
* "libvirt-daemon-kvm"
All functionality usable in combination with libvirt and KVM. Also pulls in the qemu-system-XXXX to match your host arch.
300 RPMs / 430 MB
Of this, 211 RPMs / 300 MB is due to qemu-system-x86 & qemu-img RPMs, rather than libvirt itself. So real libvirt overhead here is only 90 RPMs / 120 MB
* "libvirt-daemon-qemu"
All functionality usable in combination with libvirt and QEMU (any arch emulation). Pulls in every qemu-system-XXX RPM
350 RPMs / 1 GB
(The extra delta here is really coming from QEMU not libvirt itself)
The first libvirt-daemon-driver-qemu RPM should in fact be even smaller than it is, but we have an accidental dependency between two parts of the libvirt codebase. This will be addressed in F31.
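A sketch of one way to reproduce numbers like these yourself (exact counts and sizes will drift over time; --assumeno prints the transaction summary and then aborts without installing anything):

podman run --rm -it registry.fedoraproject.org/fedora:30 \
    dnf install --assumeno libvirt-daemon-driver-qemu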
Regards, Daniel
On Tue, Apr 9, 2019 at 12:08 PM Lennart Poettering mzerqung@0pointer.de wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Ideally, the top 4 wouldn't be installed at all anymore (in case of the first two at least on the systems which do not need them). But if that's not in the cards, it would be great to at least not enable these services anymore in the default boot so that they are only a "systemctl enable" away for people who need them?
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
On Tue, 2019-04-16 at 11:48 -0400, Matthias Clasen wrote:
On Tue, Apr 9, 2019 at 12:08 PM Lennart Poettering mzerqung@0pointer.de wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Ideally, the top 4 wouldn't be installed at all anymore (in case of the first two at least on the systems which do not need them). But if that's not in the cards, it would be great to at least not enable these services anymore in the default boot so that they are only a "systemctl enable" away for people who need them?
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
On Di, 16.04.19 09:06, Adam Williamson (adamwill@fedoraproject.org) wrote:
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved. Do we need this at all now that the kernel can use RDRAND itself?
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway, and we need the kernel's RNG much much earlier already (already because systemd assigns a uuid to each service invocation that derives from kernel RNG, and it does that super early). So, why run a service that is supposed to fill up the entropy pool at a point where we don't need it anymore, and if the kernel can do what it does most likely already on its own?
Isn't it time to kick rngd out of the default install, in particular on the workstation image? Isn't keeping it around just cargo culting?
Lennart
-- Lennart Poettering, Berlin
On Wed, Apr 17, 2019 at 10:38:18AM +0200, Lennart Poettering wrote:
On Di, 16.04.19 09:06, Adam Williamson (adamwill@fedoraproject.org) wrote:
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved. Do we need this at all now that the kernel can use RDRAND itself?
IIUC, RDRAND exists from IvyBridge generation CPUs onwards for Intel or EPYC CPUs for AMD. I've no idea what the story is for non-x86 CPUs & RDRAND equivalent.
Anyway, whether we can rely on RDRAND depends on what we consider the minimum targeted CPU models & architectures. I'm guessing that we do intend Fedora to work correctly on CPUs predating/lacking RDRAND.
KVM guests can have a virtio-rng device provided on any architecture, which feeds from host's /dev/urandom, but it is unfortunately fairly rare for public cloud providers to enable it :-(
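For illustration, the qemu options behind such a virtio-rng device look roughly like this (a minimal sketch; a real guest would of course also get disks, networking and so on):

qemu-system-x86_64 -m 512 -nographic \
    -object rng-random,id=rng0,filename=/dev/urandom \
    -device virtio-rng-pci,rng=rng0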
rngd includes support for the "jitter entropy" source which uses CPU jitter to feed the RNG. At least in RHEL, this is the recommended option when the CPUs lack RDRAND or equivalent and is why rngd is enabled by default there. IIUC it is reading the jitter entropy from the kernel's crypto APIs, optionally applying AES to data, and then feeding it back into the kernel's rng pool.
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway, and we need the kernel's RNG much much earlier already (already because systemd assigns a uuid to each service invocation that derives from kernel RNG, and it does that super early). So, why run a service that is supposed to fill up the entropy pool at a point where we don't need it anymore, and if the kernel can do what it does most likely already on its own?
/dev/random can get depleted after boot. Though the modern recommendation is for apps to use /dev/urandom by default (or the getrandom/getentropy syscalls), some probably still use /dev/random for historical baggage reasons.
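The practical difference is easy to see on a 2019-era kernel (a sketch; the second command only stalls on an entropy-starved machine):

head -c 16 /dev/urandom | xxd   # never blocks once the pool is initialized
head -c 16 /dev/random | xxd    # legacy blocking interface, may stall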
Regards, Daniel
On Wednesday, April 17, 2019 4:38:18 AM EDT Lennart Poettering wrote:
On Di, 16.04.19 09:06, Adam Williamson (adamwill@fedoraproject.org) wrote:
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved. Do we need this at all now that the kernel can use RDRAND itself?
The kernel uses RDRAND/SEED but it does not increment the entropy estimate based on it. Another interesting thing is that TPM chips also have entropy available, but the kernel does not use it. So, if you have a hardware based entropy source such as TPM, you need rngd to move the entropy to the kernel. And it also can mine CPU jitter to create some entropy on its own. And it also supports the NIST beacon if you want that kind of entropy. Rngd greatly helps system recover from low entropy situations.
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway,
I'd really like to see it start much earlier. Any way to make that happen?
and we need the kernel's RNG much much earlier already (already because systemd assigns a uuid to each service invocation that derives from kernel RNG, and it does that super early). So, why run a service that is supposed to fill up the entropy pool at a point where we don't need it anymore, and if the kernel can do what it does most likely already on its own?
The kernel cannot recover quickly when stressed for continued entropy depletion. For example, we are required to be able to supply all guest VM's with entropy from the host. They draw down the entropy pools which need replenishment. The kernel is constantly starved for entropy.
Isn't it time to kick rngd out of the default install, in particular on the workstation image? Isn't keeping it around just cargo culting?
I think you're being harsh without really looking deeply into the problem. If we could set a sysctl to tell the kernel to use a TPM or increment entropy estimate when RDSEED is used, I'd agree we should consider this. And to be honest, it should be running during an anaconda or kickstart install in order to safely setup an encrypted disk. Also, livecd uses are starved for entropy and must use rngd to be responsive and safe. If you have a TPM, the best use you'll get out of it is providing random numbers via rngd. :-)
-Steve
On Mi, 17.04.19 10:55, Steve Grubb (sgrubb@redhat.com) wrote:
On Wednesday, April 17, 2019 4:38:18 AM EDT Lennart Poettering wrote:
On Di, 16.04.19 09:06, Adam Williamson (adamwill@fedoraproject.org) wrote:
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved. Do we need this at all now that the kernel can use RDRAND itself?
The kernel uses RDRAND/SEED but it does not increment the entropy estimate based on it. Another interesting thing is that TPM chips also have entropy
That's not true anymore. There's a kernel compile time option now for that in CONFIG_RANDOM_TRUST_CPU=y. And yes, the Fedora kernel sets that since a while.
available, but the kernel does not use it. So, if you have a hardware based entropy source such as TPM, you need rngd to move the entropy to the kernel. And it also can mine CPU jitter to create some entropy on its own. And it also supports the NIST beacon if you want that kind of entropy. Rngd greatly helps system recover from low entropy situations.
Yeah, all that stuff is stuff the kernel could do better on its own. If the CPU jitter stuff or the TPM stuff is a good idea, then why not add that to the kernel natively, why involve userspace with that? i.e. if the TPM and the CPU jitter stuff can be trusted, then the same thing as for CONFIG_RANDOM_TRUST_CPU=y should be done: pass the random data into the pool directly inside in the kernel.
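Whether a given Fedora kernel was built that way is easy to check (assuming the stock kernel package, which installs its config under /boot):

grep RANDOM_TRUST_CPU /boot/config-"$(uname -r)"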
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway,
I'd really like to see it start much earlier. Any way to make that happen?
Well, no. I mean, the only way you can do that is by turning rngd into its own init system, if you want it to run before the init system.
RNG, and it does that super early). So, why run a service that is supposed to fill up the entropy pool at a point where we don't need it anymore, and if the kernel can do what it does most likely already on its own?
The kernel cannot recover quickly when stressed for continued entropy depletion. For example, we are required to be able to supply all guest VM's with entropy from the host. They draw down the entropy pools which need replenishment. The kernel is constantly starved for entropy.
That's not how the entropy pool works. Once it is full it's full, and it doesn't run empty anymore.
I think you're being harsh without really looking deeply into the problem. If we could set a sysctl to tell the kernel to use a TPM or increment entropy estimate when RDSEED is used, I'd agree we should consider this. And to be
OK, so I guess that point in time is now. Though it's not a sysctl, but a compile time option (see above).
Lennart
-- Lennart Poettering, Berlin
On Wed, 2019-04-17 at 19:36 +0200, Lennart Poettering wrote:
On Mi, 17.04.19 10:55, Steve Grubb (sgrubb@redhat.com) wrote:
On Wednesday, April 17, 2019 4:38:18 AM EDT Lennart Poettering wrote:
On Di, 16.04.19 09:06, Adam Williamson (adamwill@fedoraproject.org) wrote:
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved. Do we need this at all now that the kernel can use RDRAND itself?
The kernel uses RDRAND/SEED but it does not increment the entropy estimate based on it. Another interesting thing is that TPM chips also have entropy
That's not true anymore. There's a kernel compile time option now for that in CONFIG_RANDOM_TRUST_CPU=y. And yes, the Fedora kernel sets that since a while.
available, but the kernel does not use it. So, if you have a hardware based entropy source such as TPM, you need rngd to move the entropy to the kernel. And it also can mine CPU jitter to create some entropy on its own. And it also supports the NIST beacon if you want that kind of entropy. Rngd greatly helps system recover from low entropy situations.
Yeah, all that stuff is stuff the kernel could do better on its own. If the CPU jitter stuff or the TPM stuff is a good idea, then why not add that to the kernel natively, why involve userspace with that? i.e. if the TPM and the CPU jitter stuff can be trusted, then the same thing as for CONFIG_RANDOM_TRUST_CPU=y should be done: pass the random data into the pool directly inside in the kernel.
Big +1, I've been saying this for ages as well ...
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway,
I'd really like to see it start much earlier. Any way to make that happen?
Well, no. I mean, the only way you can do that is by turning rngd into its own init system, if you want it to run before the init system.
RNG, and it does that super early). So, why run a service that is supposed to fill up the entropy pool at a point where we don't need it anymore, and if the kernel can do what it does most likely already on its own?
The kernel cannot recover quickly when stressed for continued entropy depletion. For example, we are required to be able to supply all guest VM's with entropy from the host. They draw down the entropy pools which need replenishment. The kernel is constantly starved for entropy.
That's not how the entropy pool works. Once it is full it's full, and it doesn't run empty anymore.
I think you're being harsh without really looking deeply into the problem. If we could set a sysctl to tell the kernel to use a TPM or increment entropy estimate when RDSEED is used, I'd agree we should consider this. And to be
OK, so I guess that point in time is now. Though it's not a sysctl, but a compile time option (see above).
I concur, I would really like to see rngd become a thing of the past as well. The kernel has all the tools and access needed to reseed itself, *requiring* a racy userspace tool to do the kernel's job is a bit ridiculous.
Simo.
"LP" == Lennart Poettering mzerqung@0pointer.de writes:
LP> That's not true anymore. There's a kernel compile time option now LP> for that in CONFIG_RANDOM_TRUST_CPU=y. And yes, the Fedora kernel LP> sets that since a while.
Isn't this arch-dependent?
config RANDOM_TRUST_CPU
    bool "Trust the CPU manufacturer to initialize Linux's CRNG"
    depends on X86 || S390 || PPC
    default n
Not sure what happens on ARM but I think it would need to be considered.
- J<
On Mi, 17.04.19 13:01, Jason L Tibbitts III (tibbs@math.uh.edu) wrote:
"LP" == Lennart Poettering mzerqung@0pointer.de writes:
LP> That's not true anymore. There's a kernel compile time option now LP> for that in CONFIG_RANDOM_TRUST_CPU=y. And yes, the Fedora kernel LP> sets that since a while.
Isn't this arch-dependent?
Yes it is. But so is rngd afaik? It uses the RDTSC, RDSEED and TPM RNG iiuc and those are either x86 specific (in case of RDTSC/RDSEED) or pretty much so (in case of the TPM RNG).
Lennart
-- Lennart Poettering, Berlin
On 4/17/2019 10:36 AM, Lennart Poettering wrote:
On Mi, 17.04.19 10:55, Steve Grubb (sgrubb@redhat.com) wrote:
On Wednesday, April 17, 2019 4:38:18 AM EDT Lennart Poettering wrote:
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway,
I'd really like to see it start much earlier. Any way to make that happen?
Well, no. I mean, the only way you can do that is by turning rngd into its own init system, if you want it to run before the init system.
This seems like a false dichotomy, no? Surely, things like this are a possibility: https://lists.freedesktop.org/archives/systemd-devel/2010-September/000225.h...
But beyond that, is there really no way to lift this earlier in the boot logic?
-jc
On Mi, 17.04.19 11:29, Japheth Cleaver (cleaver@terabithia.org) wrote:
On 4/17/2019 10:36 AM, Lennart Poettering wrote:
On Mi, 17.04.19 10:55, Steve Grubb (sgrubb@redhat.com) wrote:
On Wednesday, April 17, 2019 4:38:18 AM EDT Lennart Poettering wrote:
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway,
I'd really like to see it start much earlier. Any way to make that happen?
Well, no. I mean, the only way you can do that is by turning rngd into its own init system, if you want it to run before the init system.
This seems like a false dichotomy, no? Surely, things like this are a possibility: https://lists.freedesktop.org/archives/systemd-devel/2010-September/000225.h...
That too means the service gets started after the init system is up, and the init system already requires entropy, so it's pointless.
But beyond that, is there really no way to lift this earlier in the boot logic?
Sure, you can invoke rngd before systemd, in which case it would have to be able to run as PID 1 itself pretty much and then hand over things.
But why do that in userspace at all? the "Trust CPU RNG" kernel compile time option shows that these things are trivial to solve if people just want to. Instead of involving rngd at all, why not add a similar option for the TPM RNG (or any other non-CPU hw rng) and then rngd doesn't do anything useful anymore whatsoever? I mean, to my knowledge all those other RNGs already feed into the pool anyway, they just don't get trusted and thus don't add to the entropy estimate. Fixing that should be quite doable and given that CONFIG_RANDOM_TRUST_CPU exists now it shouldn't be politically too hard to argue for a CONFIG_RANDOM_TRUST_TPM either...
Lennart
-- Lennart Poettering, Berlin
On Thu, 18 Apr 2019 10:22:27 +0200 Lennart Poettering mzerqung@0pointer.de wrote:
On Mi, 17.04.19 11:29, Japheth Cleaver (cleaver@terabithia.org) wrote:
This seems like a false dichotomy, no? Surely, things like this are a possibility: https://lists.freedesktop.org/archives/systemd-devel/2010-September/000225.h...
That too means the service gets started after the init system is up, and the init system already requires entropy, so it's pointless.
On shutdown the existing entropy is stored for use at startup (it is still entropy on restart if an attacker hasn't seen it). So, if init uses that entropy and depletes it, it would be a positive to restore it as soon as possible. But that is the purpose of having the CPRNG be robust. The seeds from last run are used until there is enough new entropy to reseed, and in the meantime the CPRNG provides the 'random' numbers. For most purposes this is more than adequate. If the CPRNG can be cracked with the short stream fed from it to start up the system, it is not robust. And that assumes that an attacker has complete access to that stream.
The main threat from random numbers is that 'password' is a perfectly legitimate random string, so even hardware random number generators can generate easy-to-crack strings of numbers. That is, random doesn't mean cryptographically strong. What we really want are unpredictable, cryptographically strong numbers. And that is also why it is so hard to validate random number generators; if they aren't failing the tests occasionally, they aren't random. But if they are failing the tests occasionally, it might or might not indicate they are not random, and thus predictable. And we might not even be testing the right thing! In any case, a good reason to randomly change seeds to CPRNGs regularly, if possible. Even a vulnerable CPRNG with a long stream required to crack it is robust if reseeded constantly at short intervals with randomness (the strategy of the Linux kernel).
On Do, 18.04.19 09:16, stan (upaitag@zoho.com) wrote:
On Thu, 18 Apr 2019 10:22:27 +0200 Lennart Poettering mzerqung@0pointer.de wrote:
On Mi, 17.04.19 11:29, Japheth Cleaver (cleaver@terabithia.org) wrote:
This seems like a false dichotomy, no? Surely, things like this are a possibility: https://lists.freedesktop.org/archives/systemd-devel/2010-September/000225.h...
That too means the service gets started after the init system is up, and the init system already requires entropy, so it's pointless.
On shutdown the existing entropy is stored for use at startup (it is still entropy on restart if an attacker hasn't seen it). So, if init uses that entropy and depletes it, it would be a positive to restore it as soon as possible.
That is pretty late: it's systemd-random-seed.service that does that, and it runs after /var is mounted writable, which is relatively late in the early-boot phase. Moreover, we don't credit entropy when writing the seed back into the kernel, since it's not safe to do so in the general case, as people frequently deploy the same pre-built image on multiple systems and tend to forget to invalidate the saved seed then. And all images that come up with the same saved seed would have the same entropy pool initially, hence the exercise would be pointless.
There has been work on making this opt-in (https://github.com/systemd/systemd/pull/10621) but this has stalled since. If anyone wants to resurrect that, please do.
However, regardless of whether s-r-s.s credits entropy or not: it runs too late. There are plenty of entropy users running before it that need to wait for the pool to be filled. And we can't really move s-r-s.s earlier.
[And also: the concept of "depleting" the entropy pool is a misconception. This doesn't happen if people use the APIs correctly, i.e. /dev/urandom instead of /dev/random (or their getrandom() equivalents). The kernel documentation calls /dev/random a "legacy interface" for a reason (see http://man7.org/linux/man-pages/man4/urandom.4.html). Once the entropy pool is filled it is filled for good, if /dev/urandom is used.]
(BTW: in case you wonder why we wait for /var being writable before s-r-s.s is run: that's because we need to invalidate the old stored seed when you use it, so that it is never reused again. This means we need to overwrite the seed file when we use it.)
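If anyone wants to see this ordering for themselves, something like the following shows the unit and what it is ordered against (plain systemctl, nothing assumed beyond a stock install):

systemctl cat systemd-random-seed.service
systemctl list-dependencies --after systemd-random-seed.service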
Lennart
-- Lennart Poettering, Berlin
On Thu, Apr 18, 2019 at 10:23 AM Lennart Poettering mzerqung@0pointer.de wrote:
Sure, you can invoke rngd before systemd, in which case it would have to be able to run as PID 1 itself pretty much and then hand over things.
But why do that in userspace at all? the "Trust CPU RNG" kernel compile time option shows that these things are trivial to solve if people just want to. Instead of involving rngd at all, why not add a similar option for the TPM RNG (or any other non-CPU hw rng) and then rngd doesn't do anything useful anymore whatsoever? I mean, to my knowledge all those other RNGs already feed into the pool anyway, they just don't get trusted and thus don't add to the entropy estimate. Fixing that should be quite doable and given that CONFIG_RANDOM_TRUST_CPU exists now it shouldn't be politically too hard to argue for a CONFIG_RANDOM_TRUST_TPM either...
I like the part that this is trivial to solve if people want to. Making people agree is an order of magnitude harder than fixing any code. Nevertheless, without rngd, getrandom() would block in one of the first services started by systemd (if it doesn't block in systemd itself). The kernel option CONFIG_RANDOM_TRUST_CPU is not portable, so you'll need something more for non-x86. What rngd does that the kernel doesn't is a jitter entropy "subsystem" which will feed the kernel with random data, even when the hardware doesn't support something native.
Can the jitter entropy gathering be done by the kernel? It seems yes, via the jitterentropy_rng module. So a combo of CONFIG_RANDOM_TRUST_CPU and jitterentropy_rng may help in simplifying Fedora (if people agree :).
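A sketch of how to poke at that module on a running Fedora system (module name as exposed by the kernel crypto API; nothing here is Fedora-specific configuration):

sudo modprobe jitterentropy_rng
lsmod | grep jitterentropy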
regards, Nikos
On Mi, 24.04.19 12:02, Nikos Mavrogiannopoulos (nmav@redhat.com) wrote:
On Thu, Apr 18, 2019 at 10:23 AM Lennart Poettering mzerqung@0pointer.de wrote:
Sure, you can invoke rngd before systemd, in which case it would have to be able to run as PID 1 itself pretty much and then hand over things.
But why do that in userspace at all? the "Trust CPU RNG" kernel compile time option shows that these things are trivial to solve if people just want to. Instead of involving rngd at all, why not add a similar option for the TPM RNG (or any other non-CPU hw rng) and then rngd doesn't do anything useful anymore whatsoever? I mean, to my knowledge all those other RNGs already feed into the pool anyway, they just don't get trusted and thus don't add to the entropy estimate. Fixing that should be quite doable and given that CONFIG_RANDOM_TRUST_CPU exists now it shouldn't be politically too hard to argue for a CONFIG_RANDOM_TRUST_TPM either...
I like the part that this is trivial to solve if people want to. Making people agree is an order of magnitude harder than fixing any code. Nevertheless, without rngd, getrandom() would block in one of the first services started by systemd (if it doesn't block in systemd itself).
As mentioned before: systemd itself already needs entropy itself (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions), hence rngd doesn't cut it anyway, since it starts after systemd, being a service managed by systemd. If rngd was supposed to fill up the entropy pool at boot, it would have to run as initial PID 1 in the initrd, before systemd, and then hand over to systemd only after the pool is full. But it doesn't, hence rngd is pointless: it runs too late to be useful.
Lennart
-- Lennart Poettering, Berlin
On Wed, Apr 24, 2019 at 12:24 PM Lennart Poettering mzerqung@0pointer.de wrote:
But why do that in userspace at all? the "Trust CPU RNG" kernel compile time option shows that these things are trivial to solve if people just want to. Instead of involving rngd at all, why not add a similar option for the TPM RNG (or any other non-CPU hw rng) and then rngd doesn't do anything useful anymore whatsoever? I mean, to my knowledge all those other RNGs already feed into the pool anyway, they just don't get trusted and thus don't add to the entropy estimate. Fixing that should be quite doable and given that CONFIG_RANDOM_TRUST_CPU exists now it shouldn't be politically too hard to argue for a CONFIG_RANDOM_TRUST_TPM either...
I like the part that this is trivial to solve if people want to. Making people agree is an order of magnitude harder than fixing any code. Nevertheless, without rngd, getrandom() would block in one of the first services started by systemd (if it doesn't block in systemd itself).
As mentioned before: systemd itself already needs entropy itself (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions), hence rngd doesn't cut it anyway, since it starts after systemd, being a service managed by systemd. If rngd was supposed to fill up the entropy pool at boot, it would have to run as initial PID 1 in the initrd, before systemd, and then hand over to systemd only after the pool is full. But it doesn't, hence rngd is pointless: it runs too late to be useful.
The goal of running rngd early was to have the system boot, not necessarily to address systemd's need for random numbers. In that it is successful. I do not disagree that it is not a clean solution.
regards, Nikos
On Mi, 24.04.19 12:37, Nikos Mavrogiannopoulos (nmav@redhat.com) wrote:
As mentioned before: systemd itself already needs entropy itself (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions), hence rngd doesn't cut it anyway, since it starts after systemd, being a service managed by systemd. If rngd was supposed to fill up the entropy pool at boot, it would have to run as initial PID 1 in the initrd, before systemd, and then hand over to systemd only after the pool is full. But it doesn't, hence rngd is pointless: it runs too late to be useful.
The goal of running rngd early was to have the system boot, not necessarily to address systemd's need for random numbers. In that it is successful. I do not disagree that it is not a clean solution.
But how can it be successful? If systemd already needs to wait until the pool is full to get the randomness it needs (and thus blocks system boot-up as a whole) then what's the point in running rngd afterwards? To reach the point where rngd can be run we already need the pool to be full, and hence rngd can't do any good at all anymore, whatsoever.
Lennart
-- Lennart Poettering, Berlin
On Wed, 2019-04-24 at 14:16 +0200, Lennart Poettering wrote:
On Mi, 24.04.19 12:37, Nikos Mavrogiannopoulos (nmav@redhat.com) wrote:
As mentioned before: systemd itself already needs entropy itself (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions), hence rngd doesn't cut it anyway, since it starts after systemd, being a service managed by systemd. If rngd was supposed to fill up the entropy pool at boot, it would have to run as initial PID 1 in the initrd, before systemd, and then hand over to systemd only after the pool is full. But it doesn't, hence rngd is pointless: it runs too late to be useful.
The goal of running rngd early was to have the system boot, not necessarily to address systemd's need for random numbers. In that it is successful. I do not disagree that it is not a clean solution.
But how can it be successful? If systemd already needs to wait until the pool is full to get the randomness it needs (and thus blocks system boot-up as a whole) then what's the point in running rngd afterwards? To reach the point where rngd can be run we already need the pool to be full, and hence rngd can't do any good at all anymore, whatsoever.
What does systemd use to generate these random numbers? Does it directly call getrandom() or does something else?
On Mi, 24.04.19 17:43, Tomas Mraz (tmraz@redhat.com) wrote:
But how can it be successful? If systemd already needs to wait until the pool is full to get the randomness it needs (and thus blocks system boot-up as a whole) then what's the point in running rngd afterwards? To reach the point where rngd can be run we already need the pool to be full, and hence rngd can't do any good at all anymore, whatsoever.
What does systemd use to generate these random numbers? Does it directly call getrandom() or does something else?
Depends.
For the invocation IDs we use getrandom() with default args (i.e. blocking behaviour). Similar for all other cases where we pick 128bit random identifiers (also known as uuids).
For the hashtable seeds we use classic /dev/urandom (i.e. entropy from a possibly non-initialized pool) since it's OK if those seeds are crappy initially, as long as they get better over time, since we reseed if we see too many hash collisions.
We never use /dev/random or GRND_RANDOM.
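As a concrete illustration of those per-invocation IDs, any unit's current one can be read back like this (the unit name is just an example):

systemctl show -p InvocationID systemd-journald.service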
Lennart
-- Lennart Poettering, Berlin
On 4/25/19 5:14 AM, Lennart Poettering wrote:
For the hashtable seeds we use classic /dev/urandom (i.e. entropy from a possibly non-initialized pool) since it's OK if those seeds are crappy initially, as long as they get better over time, since we reseed if we see too many hash collisions.
I thought that hashing would be fine with a completely predictable generator, as long as the sequence itself is not correlated, i.e. it would be OK if the sequence used for hashing was the same on every system.
Of course that particular sequence might lead to collisions, but then another uncorrelated but completely predictable sequence should fix that. In other words, it could be seeded from a constant table like [1,2,3,4,.....], just as well as from /dev/urandom regardless of its entropy.
My point here is that actual entropy of the seeding is irrelevant, at all times---would you agree?
That leaves the invocation IDs---the UUIDs need to be random to be truly Universally Unique, but a limited entropy system is implicitly isolated, so maybe the limited UUIDs could be seen as Universal in its very small Universe. What is the time duration of the original invocation IDs? What are the negative implications of the initial UUIDs being less random than the subsequent ones?
On Do, 25.04.19 13:14, Przemek Klosowski (przemek.klosowski@nist.gov) wrote:
On 4/25/19 5:14 AM, Lennart Poettering wrote:
For the hashtable seeds we use classic /dev/urandom (i.e. entropy from a possibly non-initialized pool) since it's OK if those seeds are crappy initially, as long as they get better over time, since we reseed if we see too many hash collisions.
I thought that hashing would be fine with a completely predictable generator, as long as the sequence itself is not correlated, i.e. it would be OK if the sequence used for hashing was the same on every system.
No, because then I can calculate in advance which hashes the target system uses and thus still trigger the collisions. The seed hence must be hard to guess from the outside, and thus cannot follow a predictable scheme.
My point here is that actual entropy of the seeding is irrelevant, at all times---would you agree?
No, I would not agree.
That leaves the invocation IDs---the UUIDs need to be random to be truly Universally Unique, but a limited entropy system is implicitly isolated, so maybe the limited UUIDs could be seen as Universal in its very small Universe. What is the time duration of the original invocation IDs? What are the negative implications of the initial UUIDs being less random than the subsequent ones?
Invocation IDs are useful for globally pinpointing a specific service invocation. If the UUIDs would stop to be truly random then they'd stop being universally unique and thus stop being useful.
Lennart
-- Lennart Poettering, Berlin
Lennart Poettering wrote:
On Do, 25.04.19 13:14, Przemek Klosowski (przemek.klosowski@nist.gov) wrote:
That leaves the invocation IDs---the UUIDs need to be random to be truly Universally Unique, but a limited entropy system is implicitly isolated, so maybe the limited UUIDs could be seen as Universal in its very small Universe. What is the time duration of the original invocation IDs? What are the negative implications of the initial UUIDs being less random than the subsequent ones?
Invocation IDs are useful for globally pinpointing a specific service invocation. If the UUIDs would stop to be truly random then they'd stop being universally unique and thus stop being useful.
It's perfectly possible for a number to be unique without being random.
As an example, you could hash the machine ID, which is supposedly unique in space, and the system clock, which is unique in time. That makes the hash unique in both space and time. Produce invocation IDs by counting up from that value, or by hashing repeatedly. That way you wouldn't need entropy for invocation IDs at every boot, only during installation.
Such values would of course be somewhat predictable, but according to what you've said in this thread, invocation IDs don't need to be unpredictable. You've only said that you want them unique.
(Of course one needs to be aware that collisions are not impossible, only improbable. That's equally true for hashes and random numbers.)
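A purely illustrative sketch of that scheme in shell (this is emphatically not what systemd does; sha256sum and the machine-id path are just convenient stand-ins):

seed="$(cat /etc/machine-id)$(date +%s%N)"          # unique in space and in time
id=$(printf '%s' "$seed" | sha256sum | cut -c1-32)  # first "invocation ID": 128 bits of the hash
id=$(printf '%s' "$id" | sha256sum | cut -c1-32)    # subsequent IDs by hashing repeatedly
echo "$id"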
Björn Persson
On 4/25/19 6:10 PM, Björn Persson wrote:
It's perfectly possible for a number to be unique without being random. As an example, you could hash the machine ID, which is supposedly unique in space, and the system clock, which is unique in time. That makes the hash unique in both space and time. Produce invocation IDs by counting up from that value, or by hashing repeatedly. That way you wouldn't need entropy for invocation IDs at every boot, only during installation.
Such values would of course be somewhat predictable, but according to what you've said in this thread, invocation IDs don't need to be unpredictable. You've only said that you want them unique.
That is a good point---and by the way, you COULD make the same argument for hashing: one could create another installation-time seed value that will be guaranteed to not leak from the system, and mix it in the hash creation, making the hash unpredictable.
Between those two workarounds, it looks to me like we don't need randomness in ordinary systemd startup at all?
(Of course one needs to be aware that collisions are not impossible, only improbable. That's equally true for hashes and random numbers.)
At the UUID-level bit lengths, the probability is vanishingly small---although one does have to realize that even very small probability events can be realized with enough statistics, like in this recent measurement of xenon-124 radioactive decay with a time constant of over 10^22 years, a trillion times longer than the life of the Universe.
On Wed, 24 Apr 2019 at 06:24, Lennart Poettering mzerqung@0pointer.de wrote:
On Mi, 24.04.19 12:02, Nikos Mavrogiannopoulos (nmav@redhat.com) wrote:
On Thu, Apr 18, 2019 at 10:23 AM Lennart Poettering mzerqung@0pointer.de wrote:
Sure, you can invoke rngd before systemd, in which case it would have to be able to run as PID 1 itself pretty much and then hand over things.
But why do that in userspace at all? the "Trust CPU RNG" kernel compile time option shows that these things are trivial to solve if people just want to. Instead of involving rngd at all, why not add a similar option for the TPM RNG (or any other non-CPU hw rng) and then rngd doesn't do anything useful anymore whatsoever? I mean, to my knowledge all those other RNGs already feed into the pool anyway, they just don't get trusted and thus don't add to the entropy estimate. Fixing that should be quite doable and given that CONFIG_RANDOM_TRUST_CPU exists now it shouldn't be politically too hard to argue for a CONFIG_RANDOM_TRUST_TPM either...
I like the part that this is trivial to solve if people want to. Making people agree is an order of magnitude harder than fixing any code. Nevertheless, without rngd, getrandom() would block in one of the first services started by systemd (if it doesn't block in systemd itself).
As mentioned before: systemd itself already needs entropy itself (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions), hence rngd doesn't cut it anyway, since it starts after systemd, being a service managed by systemd. If rngd was supposed to fill up the entropy pool at boot, it would have to run as initial PID 1 in the initrd, before systemd, and then hand over to systemd only after the pool is full. But it doesn't, hence rngd is pointless: it runs too late to be useful.
useful to systemd and your problems. What people are trying to say is that it is useful to their problems.
There are several solutions to try here:
1. Make something like it run sooner so it helps your problems
2. Add something like it into the kernel (which has been a Sisyphean task from what I can tell)
3. Pull it into systemd so it helps your problems and others.
4. Keep this thread going with everyone talking past each other.
On Mi, 24.04.19 06:40, Stephen John Smoogen (smooge@gmail.com) wrote:
As mentioned before: systemd itself already needs entropy itself (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions), hence rngd doesn't cut it anyway, since it starts after systemd, being a service managed by systemd. If rngd was supposed to fill up the entropy pool at boot, it would have to run as initial PID 1 in the initrd, before systemd, and then hand over to systemd only after the pool is full. But it doesn't, hence rngd is pointless: it runs too late to be useful.
useful to systemd and your problems. What people are trying to say is that it is useful to their problems.
but it can't be. it's logically impossible. let me explain this again:
1. systemd needs entropy to start services and other purposes
2. if the entropy pool is not filled up systemd thus might need to wait for it to fill up, in a blocking fashion. When it blocks for that it won't start any services until it unblocks again.
3. rngd is supposed to fill up the entropy pool, thus allowing systemd to unblock and start the first services
4. rngd runs as regular service however, i.e.
And there you have your ordering cycle:
a. systemd starts before rngd.
b. rngd runs before the entropy pool is full.
c. the entropy pool needs to be full for systemd to start
a before b before c before a before b before c before a? How's that solvable?
So if you want rngd to stay and do something useful, then it needs to be modified to start *before* systemd, in the initrd, before systemd is invoked. i.e. not as regular service, but as kind of an init before the real init.
The current mode is just entirely bogus...
Lennart
-- Lennart Poettering, Berlin
On Wed, 24 Apr 2019 at 08:26, Lennart Poettering mzerqung@0pointer.de wrote:
On Mi, 24.04.19 06:40, Stephen John Smoogen (smooge@gmail.com) wrote:
As mentioned before: systemd itself already needs entropy itself (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions), hence rngd doesn't cut it anyway, since it starts after systemd, being a service managed by systemd. If rngd was supposed to fill up the entropy pool at boot, it would have to run as initial PID 1 in the initrd, before systemd, and then hand over to systemd only after the pool is full. But it doesn't, hence rngd is pointless: it runs too late to be useful.
useful to systemd and your problems. What people are trying to say is
that
it is useful to their problems.
but it can't be. it's logically impossible. let me explain this again:
- systemd needs entropy to start services and other purposes
- if the entropy pool is not filled up systemd thus might need to wait for it to fill up, in a blocking fashion. When it blocks for that it won't start any services until it unblocks again.
- rngd is supposed to fill up the entropy pool, thus allowing systemd to unblock and start the first services
- rngd runs as regular service however, i.e.
And there you have your ordering cycle:
a. systemd starts before rngd.
b. rngd runs before the entropy pool is full.
c. the entropy pool needs to be full for systemd to start
a before b before c before a before b before c before a? How's that solvable?
Again, I am not disagreeing that it isn't important. I am just saying that other people are saying they need it later, and you are coming across as saying to get rid of it completely just because it doesn't meet your needs. Most of them are seeing the system way after your problem and need it fixed then.
Let us look at it as a plumbing issue. We currently have a building with a bunch of pipes with small feeds, and you as the morning janitor come in first of the day to wash the floors and clean things so other people can get to work. To fill your buckets you need a big basin, but instead you have to wait around as the pipes slowly fill up your cleaning bucket. You look around and see that people installed various buckets and pots to act as basins in the rooms they use to wash their hands and faces, but you can't use them as they need to be cleaned first. No one sees your problem because by the time their day starts, you have been in there for hours, got your drip drip going and done your work. The problem here is that the way you have come across is: "Well, I need more water, so we should rip out all the basins until I get one too. Just use this mop-bucket water like I do."
I don't know if that is what you are meaning to say or not. If it isn't then I am just trying to explain why people are 'reacting' versus 'fixing'. Yes the problem needs to be solved sooner in the chain. You need a proper basin to fill up water in. In fact we all need proper plumbing which helps each service we are running. Working out how to get it is what we should be doing but instead we are arguing over who is going to go on strike first to get it.
So if you want rngd to stay and do something useful, then it needs to
be modified to start *before* systemd, in the initrd, before systemd is invoked. i.e. not as regular service, but as kind of an init before the real init.
Which is what I was trying to say but you cut out.
The current mode is just entirely bogus...
Lennart
-- Lennart Poettering, Berlin
On Wed, 2019-04-24 at 14:25 +0200, Lennart Poettering wrote:
On Mi, 24.04.19 06:40, Stephen John Smoogen (smooge@gmail.com) wrote:
As mentioned before: systemd itself already needs entropy itself (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions), hence rngd doesn't cut it anyway, since it starts after systemd, being a service managed by systemd. If rngd was supposed to fill up the entropy pool at boot, it would have to run as initial PID 1 in the initrd, before systemd, and then hand over to systemd only after the pool is full. But it doesn't, hence rngd is pointless: it runs too late to be useful.
Useful to systemd and to your problems, that is. What people are trying to say is that it is useful to their problems.
But it can't be. It's logically impossible. Let me explain this again:
- systemd needs entropy to start services and other purposes
- if the entropy pool is not filled up systemd thus might need to wait for it to fill up, in a blocking fashion. When it blocks for that it won't start any services until it unblocks again.
- rngd is supposed to fill up the entropy pool, thus allowing systemd to unblock and start the first services
- rngd runs as a regular service however, i.e. it is only started by systemd once systemd itself is already up
And there you have your ordering cycle:
a. systemd starts before rngd.
b. rngd runs before the entropy pool is full.
c. the entropy pool needs to be full for systemd to start
a before b before c before a before b before c before a? How's that solvable?
So if you want rngd to stay and do something useful, then it needs to be modified to start *before* systemd, in the initrd, before systemd is invoked, i.e. not as a regular service, but as a kind of init before the real init.
The current mode is just entirely bogus...
This is all based, though, on your expectation that everything uses non-blocking interfaces, right? For anything that *does* use /dev/random or blocking getrandom() - which absolutely does happen, even the docs say it's deprecated - rngd is still useful.
On Mi, 24.04.19 08:27, Adam Williamson (adamwill@fedoraproject.org) wrote:
a. systemd starts before rngd.
b. rngd runs before the entropy pool is full.
c. the entropy pool needs to be full for systemd to start
a before b before c before a before b before c before a? How's that solvable?
So if you want rngd to stay and do something useful, then it needs to be modified to start *before* systemd, in the initrd, before systemd is invoked, i.e. not as a regular service, but as a kind of init before the real init.
The current mode is just entirely bogus...
This is all based, though, on your expectation that everything uses non-blocking interfaces, right? For anything that *does* use /dev/random or blocking getrandom() - which absolutely does happen, even the docs say it's deprecated - rngd is still useful.
Well, the fix for that is probably not to clutter the system with rngd though. Patching /dev/random out, and patching /dev/urandom into those packages shouldn't be that difficult. It's low-hanging fruit. Very low-hanging in fact, you don't get to fix bugs that often by inserting a single character in your sources... ;-)
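For illustration, the kind of one-character change meant here, expressed as a shell one-liner (the file name src/randomness.c is made up; the point is just that "/dev/random" becomes "/dev/urandom" in the affected package's sources):

# hypothetical example file; the actual location differs per package
sed -i 's|"/dev/random"|"/dev/urandom"|g' src/randomness.c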
I mean, how is this ever going to be fixed if not by simply dropping rngd from the default install and then fixing everything popping up? You can't fix these things any other way, it doesn't work in real-life.
Lennart
-- Lennart Poettering, Berlin
Lennart Poettering wrote:
As mentioned before: systemd itself already needs entropy (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions)
Given that access to entropy during early boot is so problematic, hardware-dependent and full of catch-22s, it seems to me that an init system should use the entropy pool only if it really must.
With that in mind, could you explain why the invocation ID and the hash tables need to be cryptographically secure? Why is rand or a simple serial number not good enough? I never heard that lack of a cryptographically secure invocation ID was a big security problem before SystemD.
Björn Persson
On Wed, 24 Apr 2019 at 11:30, Björn Persson <Bjorn@rombobjörn.se> wrote:
Lennart Poettering wrote:
As mentioned before: systemd itself already needs entropy (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions)
Given that access to entropy during early boot is so problematic, hardware-dependent and full of catch-22s, it seems to me that an init system should use the entropy pool only if it really must.
With that in mind, could you explain why the invocation ID and the hash tables need to be cryptographically secure? Why is rand or a simple serial number not good enough? I never heard that lack of a cryptographically secure invocation ID was a big security problem before SystemD.
I expect they have to be because someone pointed out some security hack that can be done without it and no one ever noticed it before (or had no way to fix it before, so we just chalked it up as a 'well, can't fix it, so never mind'). Over the years in this business I have seen a lot of issues with that mantra... they only usually get unearthed again when someone gets a nit because a new tool doesn't have it.
Björn Persson
On Mi, 24.04.19 17:28, Björn Persson (Bjorn@rombobjörn.se) wrote:
Lennart Poettering wrote:
As mentioned before: systemd itself already needs entropy (it assigns a random 128bit id to each service invocation, dubbed the "invocation ID" of it, and it generates the machine ID and seeds its hash table hash functions)
Given that access to entropy during early boot is so problematic, hardware-dependent and full of catch-22s, it seems to me that an init system should use the entropy pool only if it really must.
With that in mind, could you explain why the invocation ID and the hash tables need to be cryptographically secure? Why is rand or a simple serial number not good enough? I never heard that lack of a cryptographically secure invocation ID was a big security problem before systemd.
init systems before systemd had no notion of "service lifecycles", they just didn't care. systemd cares about lifecycles however, and assigns a random 128bit id (aka "uuid") to each service invocation cycle. This can be used to associate logs and resource data of a specific service invocation with each other.
To be suitable for their purpose of being *universally unique*, you need a good random source for them. You cannot just use srand() and start from a fixed or time-based seed, because then every system would always generate the same uuids...
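For a concrete feel of what such ids look like, two commands that exist on a stock Fedora install (shown only as illustration, not as systemd's implementation; systemd-logind.service is just an example unit):

# a fresh random uuid from the kernel, different on every read and every machine
cat /proc/sys/kernel/random/uuid
# the invocation ID systemd assigned to the current run of a service
systemctl show -p InvocationID systemd-logind.service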
Naive hash tables are prone to collision attacks: if you know the hash function used by the hash tables you might be able to trigger a DoS by forcing collisions and thus degrading the assumed O(1) complexity of hash table operations to O(n). See this bug report for how that was exploitable in Perl hash tables, for example:
https://rt.perl.org/Public/Bug/Display.html?id=22371
Because of that, modern hash table implementations (including those in systemd) will use a keyed hash function and pick a random value as seed every now and then, so that clients cannot easily trigger DoS like that. This random seed needs to be of relatively high quality, since if clients could guess the seed the exercise would be pointless. Hence no, srand() from a timer or constant value wouldn't cut it. But do note that systemd doesn't use blocking getrandom() for seeding hash tables, but uses /dev/urandom instead (i.e. is happy with an uninitialized entropy pool), because for the hash table collisions it's fine if we initially don't have the best entropy as long as it gets better over time. That's because the hash tables in systemd will monitor the fill level and rehash with a fresh seed if we hit a threshold.
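As a rough sketch of "pick a random seed from the non-blocking pool" (plain od against /dev/urandom; an illustration only, not systemd's actual code):

#!/bin/sh
# read 8 bytes from the non-blocking pool to use as a keyed-hash seed
seed=$(od -An -tx8 -N8 /dev/urandom | tr -d ' ')
echo "hash seed: $seed"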
Anyway, there are a number of other places systemd needs a bit of entropy, these are just two prominent cases.
We also make use of /proc/sys/kernel/random/boot_id btw, which also needs some entropy.
Lennart
-- Lennart Poettering, Berlin
On Wed, 2019-04-24 at 12:02 +0200, Nikos Mavrogiannopoulos wrote:
Can the jitter entropy gathering be done by the kernel? It seems yes, via the jitterentropy_rng module. So a combo of CONFIG_RANDOM_TRUST_CPU and jitterentropy_rng may help in simplifying Fedora (if people agree :).
This sounds like a useful change, can we make Fedora load this module by default in initrd before systemd starts? Will it help? Or is this module not adding into the entropy estimate as well ?
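A possible sketch of what loading it from the initrd could look like on Fedora (the dracut.conf.d file name is made up; force_drivers both includes the module in the initramfs and loads it early, though this still happens after PID 1 in the initrd starts, so it does not by itself resolve the catch-22 discussed above):

echo 'force_drivers+=" jitterentropy_rng "' > /etc/dracut.conf.d/jitterentropy.conf
dracut -f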
Simo.
On Wednesday, April 17, 2019 1:36:08 PM EDT Lennart Poettering wrote:
On Mi, 17.04.19 10:55, Steve Grubb (sgrubb@redhat.com) wrote:
On Wednesday, April 17, 2019 4:38:18 AM EDT Lennart Poettering wrote:
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved. Do we need this at all now that the kernel can use RDRAND itself?
The kernel uses RDRAND/SEED but it does not increment the entropy estimate based on it. Another interesting thing is that TPM chips also have entropy
That's not true anymore. There's a kernel compile time option now for that in CONFIG_RANDOM_TRUST_CPU=y. And yes, the Fedora kernel sets that since a while.
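A quick way to confirm this on a running Fedora kernel, following the same pattern as the TPM config check quoted later in the thread (the config file path follows Fedora's naming convention):

grep CONFIG_RANDOM_TRUST_CPU= /boot/config-$(uname -r)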
Ah...the devil is in the details. It does not credit entropy. This can easily be tested. systemctl stop rngd. Then open 2 terminal windows. In one terminal start this shell script:
#!/bin/sh
while [ 1 ]
do
    /bin/cat /proc/sys/kernel/random/entropy_avail
    sleep 1
done
Then in another:
cat /dev/random >/dev/null
After a couple of seconds, hit Ctrl-C to kill cat. Watch what happens to the entropy.
I have a Kabylake system idling. It takes 3 minutes for entropy to get back to 3k after stopping the consumer. At that point it's losing about as much as it's gaining. If I start rngd and do the same test, my entropy bounces back to over 3k in less than a second. As it stands today, rngd has a dramatic effect on entropy.
available, but the kernel does not use it. So, if you have a hardware based entropy source such as a TPM, you need rngd to move the entropy to the kernel. And it also can mine CPU jitter to create some entropy on its own. And it also supports the NIST beacon if you want that kind of entropy. Rngd greatly helps the system recover from low entropy situations.
Yeah, all that stuff is stuff the kernel could do better on its own.
Many have tried to convince upstream about this. If anyone here has influence, please try.
If the CPU jitter stuff or the TPM stuff is a good idea, then why not add that to the kernel natively, why involve userspace with that?
I agree. :-)
i.e. if the TPM and the CPU jitter stuff can be trusted, then the same thing as for CONFIG_RANDOM_TRUST_CPU=y should be done: pass the random data into the pool directly inside in the kernel.
And credit entropy!
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway,
I'd really like to see it start much earlier. Any way to make that happen?
Well, no. I mean, the only way you can do that is by turning rngd into its own init system, if you want it to run before the init system.
RNG, and it does that super early). So, why run a service that is supposed to fill up the entropy pool at a point where we don't need it anymore, and if the kernel can do what it does most likely already on its own?
The kernel cannot recover quickly when stressed for continued entropy depletion. For example, we are required to be able to supply all guest VM's with entropy from the host. They draw down the entropy pools which need replenishment. The kernel is constantly starved for entropy.
That's not how the entropy pool works. Once it is full it's full, and it doesn't run empty anymore.
Empirical evidence suggests otherwise. See the test above.
I think you're being harsh without really looking deeply into the problem. If we could set a sysctl to tell the kernel to use a TPM or increment entropy estimate when RDSEED is used, I'd agree we should consider this. And to be
OK, so I guess that point in time is now. Though it's not a sysctl, but a compile time option (see above).
It looks as though it may be controlled as a boot commandline option, too. But that is likely intended to disable the effect it has.
-Steve
On Wed, 2019-04-17 at 15:14 -0400, Steve Grubb wrote:
Many have tried to convince upstream about this. If anyone here has influence, please try.
If upstream is currently resistant, what about turning rngd into a loadable kernel module and then ensuring it is in the initramfs and loaded at kernel boot time?
Would this be a way to show upstream that this works and perhaps allow inclusion later on ?
Simo.
On Mi, 17.04.19 15:25, Simo Sorce (simo@redhat.com) wrote:
On Wed, 2019-04-17 at 15:14 -0400, Steve Grubb wrote:
Many have tried to convince upstream about this. If anyone here has influence, please try.
If upstream is currently resistant, what about turning rngd into a loadable kernel module and then ensuring it is in the initramfs and loaded at kernel boot time?
Would this be a way to show upstream that this works and perhaps allow inclusion later on ?
So apparently the kernel can already do both the RDSEED/RDRAND stuff on its own (and this is turned on in Fedora) and can also credit entropy based on other hwrngs (see other mail). The latter is a bit awkward since it currently requires a kernel cmdline option to enable, and is global for all drivers, though it would probably be wiser to enable it individually for each driver, judging by how much the device is trusted.
(Also note that virtio-rng is something systemd automatically loads if it's not around but the environment would support it, and it appears to credit entropy too.)
Lennart
-- Lennart Poettering, Berlin
On Wed, 17 Apr 2019 15:14:54 -0400 Steve Grubb sgrubb@redhat.com wrote:
Ah...the devil is in the details. It does not credit entropy. This can easily be tested. systemctl stop rngd. Then open 2 terminal windows. In one terminal start this shell script:
#!/bin/sh
while [ 1 ]
do
    /bin/cat /proc/sys/kernel/random/entropy_avail
    sleep 1
done
Then in another:
cat /dev/random >/dev/null
After a couple of seconds, hit Ctrl-C to kill cat. Watch what happens to the entropy.
I have a Kabylake system idling. It takes 3 minutes for entropy to get back to 3k after stopping the consumer. At that point it's losing about as much as it's gaining. If I start rngd and do the same test, my entropy bounces back to over 3k in less than a second. As it stands today, rngd has a dramatic effect on entropy.
I run a daemon that harvests entropy from the atmosphere via an rtl2832 and feeds it into the kernel via /dev/random. And, yes it makes a big difference to feed the entropy pool.
When random.c was rewritten to use chacha instead of the modified mersenne twister (4.xx?), the way entropy was used changed. It used to bleed constantly across from the pool that is /dev/random into /dev/urandom when it was above the threshold set in write_wakeup_threshold. Now, it only checks when the kernel routine for get_random is called and reseeds if enough entropy is present. It always has decremented, and still decrements, the entropy available when it uses some. Under mersenne it used to be possible to set a timer as well, but that went away with chacha. I patch the new random.c to re-enable that feature, so I can reseed chacha at a periodic interval.
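The threshold referred to here is visible (and tunable) through procfs; a small sketch of inspecting it (values naturally vary per kernel and configuration):

cat /proc/sys/kernel/random/write_wakeup_threshold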
The rationale for chacha was that server farms were starving for entropy, and it is considered more robust for low entropy conditions, at least that is what I understand from my reading.
As far as the CPU hardware entropy generators, those are not open source, so it is not possible to determine if they have a backdoor. Research has shown, however, that if any true entropy is fed into a stream with compromised entropy, it results in a stream with better entropy. That is, a system using a compromised hardware generator will have more robust entropy when combined with other sources of entropy. The kernel does this via a hash to smear the mix. An attacker can no longer utilize an attack knowing the bits came from the compromised generator.
Things like the bit bubbler are reasonably cheap (~100 dollars US) and provide enough entropy for a small server farm. Even the rtl2832 (~10 dollars US) provides about 90 Kbytes of entropy per second (the kernel entropy pool is 4096 bits). Not enough for monte carlo simulations, but plenty for a home system or a few servers.
On Mi, 17.04.19 15:14, Steve Grubb (sgrubb@redhat.com) wrote:
#!/bin/sh
while [ 1 ]
do
    /bin/cat /proc/sys/kernel/random/entropy_avail
    sleep 1
done
Then in another:
cat /dev/random >/dev/null
After a couple of seconds, hit Ctrl-C to kill cat. Watch what happens to the entropy.
Well, don't use /dev/random. Use /dev/urandom. The official documentation declares /dev/random a "legacy interface".
http://man7.org/linux/man-pages/man4/random.4.html
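For comparison, a sketch of running the same exercise against the non-blocking interface; on the kernels discussed here the read returns immediately instead of stalling on the entropy estimate:

head -c 1048576 /dev/urandom > /dev/null    # completes immediately, never blocks
cat /proc/sys/kernel/random/entropy_avail   # urandom reads do not hang on this estimate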
Lennart
-- Lennart Poettering, Berlin
On Wed, Apr 17, 2019 at 11:36 AM Lennart Poettering mzerqung@0pointer.de wrote:
Yeah, all that stuff is stuff the kernel could do better on its own. If the CPU jitter stuff or the TPM stuff is a good idea, then why not add that to the kernel natively, why involve userspace with that? i.e. if the TPM and the CPU jitter stuff can be trusted, then the same thing as for CONFIG_RANDOM_TRUST_CPU=y should be done: pass the random data into the pool directly inside in the kernel.
$ grep CONFIG_HW_RANDOM_TPM /boot/config-5.0.6-300.fc30.x86_64
CONFIG_HW_RANDOM_TPM=y
I've got no idea if this is for TPM 1.x or 2.x or both.
Well, no. I mean, the only way you can do that is by turning rngd into its own init system, if you want it to run before the init system.
/usr/lib/systemd/system/rngd.service contains
WantedBy=multi-user.target
I'm gonna guess Steve Grubb is wondering whether it could be wanted by an earlier target, possibly cryptsetup-pre.target? I don't see a service file in the upstream project so this may have been selected by the Fedora packager as a known to work option.
On Mi, 17.04.19 16:05, Chris Murphy (lists@colorremedies.com) wrote:
On Wed, Apr 17, 2019 at 11:36 AM Lennart Poettering mzerqung@0pointer.de wrote:
Yeah, all that stuff is stuff the kernel could do better on its own. If the CPU jitter stuff or the TPM stuff is a good idea, then why not add that to the kernel natively, why involve userspace with that? i.e. if the TPM and the CPU jitter stuff can be trusted, then the same thing as for CONFIG_RANDOM_TRUST_CPU=y should be done: pass the random data into the pool directly inside in the kernel.
$ grep CONFIG_HW_RANDOM_TPM /boot/config-5.0.6-300.fc30.x86_64
CONFIG_HW_RANDOM_TPM=y
So apparently, for a long time now the kernel has actually been able to push data from hwrngs into the kernel pool while crediting entropy:
https://lkml.org/lkml/2018/11/2/193
i.e. it's the "rng_core.default_quality=700" switch on the kernel cmdline.
It sounds like that switch is something that could become a compile-time default that Fedora could just turn on.
Quoting from that mail: "This is better than relying on rng-tools."
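A sketch of turning that switch on persistently on a Fedora install using grubby (the value 700 is simply the one from the lkml post above; whether it is appropriate for a given hwrng is exactly the per-device trust question raised earlier):

grubby --update-kernel=ALL --args="rng_core.default_quality=700"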
/usr/lib/systemd/system/rngd.service contains
WantedBy=multi-user.target
I'm gonna guess Steve Grubb is wondering whether it could be wanted by an earlier target, possibly cryptsetup-pre.target? I don't see a service file in the upstream project so this may have been selected by the Fedora packager as a known to work option.
WantedBy= doesn't really say much about when something is started, just about what wants it started. It's not about ordering, it's about requirement.
If you want to order it early then set DefaultDependencies=no and use Before= some appropriate unit.
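A sketch of the drop-in being described (the file name early.conf is made up; this only changes ordering and, as the next paragraph points out, does nothing about systemd's own early need for entropy):

mkdir -p /etc/systemd/system/rngd.service.d
cat > /etc/systemd/system/rngd.service.d/early.conf <<'EOF'
[Unit]
DefaultDependencies=no
Before=cryptsetup-pre.target sysinit.target
EOF
systemctl daemon-reload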
But this is all pretty much pointless, since PID 1 (systemd) itself already needs entropy, and thus starting this after PID 1 is useless.
Lennart
-- Lennart Poettering, Berlin
On Wed, Apr 17, 2019 at 10:55:58AM -0400, Steve Grubb wrote:
On Wednesday, April 17, 2019 4:38:18 AM EDT Lennart Poettering wrote:
On Di, 16.04.19 09:06, Adam Williamson (adamwill@fedoraproject.org) wrote:
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved. Do we need this at all now that the kernel can use RDRAND itself?
The kernel uses RDRAND/SEED but it does not increment the entropy estimate based on it. Another interesting thing is that TPM chips also have entropy available, but the kernel does not use it. So, if you have a hardware based entropy source such as a TPM, you need rngd to move the entropy to the kernel. And it also can mine CPU jitter to create some entropy on its own. And it also supports the NIST beacon if you want that kind of entropy. Rngd greatly helps the system recover from low entropy situations.
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway,
I'd really like to see it start much earlier. Any way to make that happen?
and we need the kernel's RNG much much earlier already (already because systemd assigns a uuid to each service invocation that derives from kernel RNG, and it does that super early). So, why run a service that is supposed to fill up the entropy pool at a point where we don't need it anymore, and if the kernel can do what it does most likely already on its own?
The kernel cannot recover quickly when stressed for continued entropy depletion. For example, we are required to be able to supply all guest VM's with entropy from the host. They draw down the entropy pools which need replenishment. The kernel is constantly starved for entropy.
The recommendation is for virtio-rng to be backed by /dev/urandom these days, so you won't deplete the host /dev/random anymore.
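For reference, a sketch of what that recommendation looks like on a raw qemu command line (libvirt generates the equivalent from its <rng model='virtio'/> element; the "..." stands for the rest of the VM definition and the rate-limit numbers are arbitrary):

qemu-system-x86_64 ... \
  -object rng-random,filename=/dev/urandom,id=rng0 \
  -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000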
Regards, Daniel
On Wed, 2019-04-17 at 10:55 -0400, Steve Grubb wrote:
On Wednesday, April 17, 2019 4:38:18 AM EDT Lennart Poettering wrote:
On Di, 16.04.19 09:06, Adam Williamson (adamwill@fedoraproject.org) wrote:
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved. Do we need this at all now that the kernel can use RDRAND itself?
The kernel uses RDRAND/SEED but it does not increment the entropy estimate based on it. Another interesting thing is that TPM chips also have entropy available, but the kernel does not use it. So, if you have a hardware based entropy source such as a TPM, you need rngd to move the entropy to the kernel. And it also can mine CPU jitter to create some entropy on its own. And it also supports the NIST beacon if you want that kind of entropy. Rngd greatly helps the system recover from low entropy situations.
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway,
I'd really like to see it start much earlier. Any way to make that happen?
and we need the kernel's RNG much much earlier already (already because systemd assigns a uuid to each service invocation that derives from kernel RNG, and it does that super early). So, why run a service that is supposed to fill up the entropy pool at a point where we don't need it anymore, and if the kernel can do what it does most likely already on its own?
The kernel cannot recover quickly when stressed for continued entropy depletion. For example, we are required to be able to supply all guest VM's with entropy from the host. They draw down the entropy pools which need replenishment. The kernel is constantly starved for entropy.
Isn't it time to kick rngd out of the default install, in particular on the workstation image? Isn't keeping it around just cargo culting?
I think you're being harsh without really looking deeply into the problem. If we could set a sysctl to tell the kernel to use a TPM or increment the entropy estimate when RDSEED is used, I'd agree we should consider this. And to be honest, it should be running during an anaconda or kickstart install in order to safely set up an encrypted disk.
AFAIK, we already have that in place - if there is not enough entropy when storage encryption is being set up, the installation will pause & notify the user to provide more entropy (generally by monkey-bashing keyboard keys).
Also, livecd users are starved for entropy and must use rngd to be responsive and safe. If you have a TPM, the best use you'll get out of it is providing random numbers via rngd. :-)
-Steve
On 4/17/19 4:38 AM, Lennart Poettering wrote:
On Di, 16.04.19 09:06, Adam Williamson (adamwill@fedoraproject.org) wrote:
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved.
Non-developers, true. Developers' workstations, wrong. Just signing a few packages (with java's jarsigner) to test that your code runs fine under those conditions can drop the entropy to near zero, taking a lot of time to finish the signing.
Do we need this at all now that the kernel can use RDRAND itself?
rngd runs as regular system service, hence what's the point of that altogether? I mean, it runs so late during boot, at a point where the entropy pool is full anyway, and we need the kernel's RNG much much earlier already (already because systemd assigns a uuid to each service invocation that derives from kernel RNG, and it does that super early). So, why run a service that is supposed to fill up the entropy pool at a point where we don't need it anymore, and if the kernel can do what it does most likely already on its own?
Isn't it time to kick rngd out of the default install, in particular on the workstation image? Isn't keeping it around just cargo culting?
Lennart
-- Lennart Poettering, Berlin
On Mo, 22.04.19 08:35, Robert Marcano (robert@marcanoonline.com) wrote:
What's the story anyway for rngd? Why would userspace be better at providing entropy to the kernel than the kernel itself? Why do we enable it on desktops at all, such systems should not be entropy-starved.
Non-developers, true. Developers' workstations, wrong. Just signing a few packages (with java's jarsigner) to test that your code runs fine under those conditions can drop the entropy to near zero, taking a lot of time to finish the signing.
Well, "jarsigner" is broken then. It appears to use /dev/random instead of /dev/urandom. if you use the latter, then you can pull out as much randomness as you want, it's not affected by "entropy depletion".
See man page about that:
http://man7.org/linux/man-pages/man4/urandom.4.html
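For completeness, the commonly documented JVM-level workaround, which points the JDK's SecureRandom at urandom (the keystore, jar and alias names are placeholders; the "/./" spelling is the traditional trick to get around the JDK's special-casing of the plain /dev/urandom path):

jarsigner -J-Djava.security.egd=file:/dev/./urandom -keystore my.keystore my.jar mykey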
Lennart
-- Lennart Poettering, Berlin
On Tue, Apr 16, 2019 at 09:06:02AM -0700, Adam Williamson wrote:
On Tue, 2019-04-16 at 11:48 -0400, Matthias Clasen wrote:
On Tue, Apr 9, 2019 at 12:08 PM Lennart Poettering mzerqung@0pointer.de wrote:
Heya,
today I installed the current Fedora 30 Workstation beta on my new laptop. It was a bumpy ride, I must say (the partitioner (blivet?) crashed five times or so on me, always kicking me out of anaconda again, just because I wanted to undo something). But I don't really want to discuss that. What I do want to discuss is this:
Ideally, the top 4 wouldn't be installed at all anymore (in case of the first two at least on the systems which do not need them). But if that's not in the cards, it would be great to at least not enable these services anymore in the default boot so that they are only a "systemctl enable" away for people who need them?
I think all of these are good ideas. "No udev-settle" seems like a nice highlevel goal to shoot for.
Another one I might add: "No stuck stop jobs" - it annoys me every single time when I reboot and something like rngd or conmon holds up my reboot for several minutes for no reason at all.
I've seen the rngd stop thing, hadn't had time to investigate it yet as more urgent fires keep showing up :/
I opened a bug a while back, but it hasn't cropped up since I enabled additional logging: https://bugzilla.redhat.com/show_bug.cgi?id=1690364
Zbyszek