performance boost provided by the BFQ I/O scheduler
by Paolo Valente
Hi,
I'm Paolo, the main developer of the BFQ I/O scheduler.
The switch to the BFQ I/O scheduler by Fedora paves the way for up to
a ~10X throughput boost, and up to a ~400X latency reduction. This
performance improvement concerns I/O workloads generated by multiple
containers that share common storage devices. More generally, it also
applies to workloads generated by multiple groups, VMs or entities of
any kind.
The reason for these apparently impressive numbers is that all other
solutions for controlling I/O severely underutilize storage devices,
typically driving them at only 10% to 20% of their speed.
If so, why have you probably never been warned about such an
impressive waste of resources? Because it is extremely difficult to
guarantee bandwidths and latencies on a loaded drive. So the most
common solution for avoiding starvation, or very high latencies, has
always been to keep storage devices underutilized. When an
underutilized device is hit by the I/O of some container/group/VM, it
is likely to serve this I/O very quickly, because it is unlikely to be
already busy serving other I/O. If the I/O demand grows, one simply
adds more drives, so as to keep utilization low. And when this stops
scaling, one buys faster drives.
More clever solutions do exist, based on I/O throttling. But,
depending on the workload, these solutions may end up forcibly
lowering utilization to about the same values reached with the
approach above.
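To make "throttling" concrete: these solutions impose static
per-group bandwidth caps, e.g. through the cgroup v2 io.max
interface. A minimal sketch (the group name and the 8:0 major:minor
device numbers are illustrative only):

    # Assumes the io controller is enabled for this cgroup subtree.
    mkdir -p /sys/fs/cgroup/mygroup
    # Cap this group's reads on device 8:0 (e.g. sda) to 10 MB/s:
    echo "8:0 rbps=10485760" > /sys/fs/cgroup/mygroup/io.max

The cap holds even when the device is otherwise idle, which is
exactly why throttling can leave so much bandwidth on the table.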
In contrast, BFQ is smart enough to keep drives highly utilized with
every workload. So, using, e.g., only one drive, BFQ can satisfy an
I/O demand that would require from 5 to 10 drives with the other
solutions.
If you want to take advantage of this performance boost in Fedora
CoreOS, I'm willing to help with every step.
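For reference, switching a device to BFQ is a one-liner; a minimal
sketch (the device name is illustrative, and the udev rule is just
one assumed way to make the change persistent):

    # The active scheduler is shown in brackets:
    cat /sys/block/sda/queue/scheduler
    # Switch the device to BFQ at runtime:
    echo bfq > /sys/block/sda/queue/scheduler
    # A persistent variant via a udev rule (path/match are assumptions):
    #   /etc/udev/rules.d/60-ioscheduler.rules
    #   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="bfq"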
Thanks,
Paolo
Reinstall of CoreOS after adding disk bombs
by Shivaram Mysore
Hello,
I used mmcblk0 as the initial boot disk and got CoreOS installed and
fully operational. I then added a disk, sda. When I reinstalled
CoreOS on mmcblk0 (via PXE, with a full wipe), boot dropped into the
emergency shell:
Generating "/run/initramfs/rdsosreport.txt"
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or
/boot
after mounting them and attach it to a bug report.
Possible culprit segment (full report attached):
[  OK  ] Started dracut initqueue hook.
[   19.087502] audit: type=1130 audit(1569089886.588:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd'
[   19.087975] systemd[1]: Reached target Remote File Systems (Pre).
[  OK  ] Reached target Remote File Systems (Pre).
[   19.122542] systemd[1]: Reached target Remote File Systems.
[  OK  ] Reached target Remote File Systems.
[   19.139331] systemd[1]: Starting dracut pre-mount hook...
         Starting dracut pre-mount hook...
[   19.187689] systemd[1]: Started dracut pre-mount hook.
[  OK  ] Started dracut pre-mount hook.
[   19.201538] audit: type=1130 audit(1569089886.702:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd'
[   19.204533] systemd[1]: Starting File System Check on /dev/disk/by-label/root...
         Starting File System Check on /dev/disk/by-label/root...
[   19.260857] systemd-fsck[863]: /usr/sbin/fsck.xfs: XFS file system.
[   19.270410] systemd[1]: Started File System Check on /dev/disk/by-label/root.
[  OK  ] Started File System Check on /dev/disk/by-label/root.
[   19.291614] systemd[1]: Mounting /sysroot...
         Mounting /sysroot...
[   19.296078] audit: type=1130 audit(1569089886.789:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-r'
[   19.771792] SGI XFS with ACLs, security attributes, scrub, no debug enabled
[   19.788579] XFS (sda4): Mounting V5 Filesystem
[   19.813813] XFS (sda4): Ending clean mount
[   20.125540] systemd[1]: Mounted /sysroot.
[  OK  ] Mounted /sysroot.
[   20.137017] systemd[1]: Condition check resulted in Remount /sysroot read-write for Ignition being skipped.
[   20.147072] systemd[1]: Starting OSTree Prepare OS/...
         Starting OSTree Prepare OS/...
[   20.155340] ostree-prepare-root[885]: ostree-prepare-root: Couldn't find specified OSTree root '/sysroot//ostree/boot.1/fedora-corey
[   19.202077] ostree-prepare-root[885]: ostree-prepare-root: Couldn't find specified OSTree root '/sysroot//ostree/boot.1/fedora-coreos/b2601a3ea2062ef1y
[   20.201741] systemd[1]: ostree-prepare-root.service: Main process exited, code=exited, status=1/FAILURE
[   20.212566] systemd[1]: ostree-prepare-root.service: Failed with result 'exit-code'.
[   20.221776] systemd[1]: Failed to start OSTree Prepare OS/.
[FAILED] Failed to start OSTree Prepare OS/.
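Note: in the log above, the filesystem check and the /sysroot mount
ran against sda4 (on the newly added disk) rather than a partition on
mmcblk0, so /dev/disk/by-label/root most likely resolved to an old
root filesystem still present on sda. A quick way to confirm from the
emergency shell (device names come from the log; the wipefs step is
only an assumed fix, and it destroys the data on sda4):

    # List which block devices carry a filesystem labeled "root":
    lsblk -o NAME,LABEL,FSTYPE
    # If sda4 still carries an old "root" label, clearing its
    # signatures removes the duplicate-label ambiguity:
    wipefs -a /dev/sda4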
Let me know if you need any other info.
Re: performance boost provided by the BFQ I/O scheduler
by Paolo Valente
> On 4 Oct 2019, at 19:14, Ed Vielmetti <ed(a)packet.com> wrote:
>
> Paolo, I am looking forward to seeing this work go forward!
>
Very glad to hear that, thanks! :)
I have done a lot of benchmarking already, and put my results in an
article [1]. I should soon publish two new articles comparing BFQ
against the new Facebook I/O controllers.
I think the next developments now also depend on users/companies
starting to schedule container I/O workloads with BFQ (as well as any
kind of multi-client workload); a concrete example follows the link
below.
[1] https://www.linaro.org/blog/io-bandwidth-management-for-production-qualit...
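To make that concrete: with BFQ the natural per-container knob is the
cgroup I/O weight. A minimal sketch, assuming a systemd-managed
container unit (the unit name and the weight value are hypothetical):

    # Give one container double the default I/O weight (default 100):
    systemctl set-property --runtime libpod-mycontainer.scope IOWeight=200
    # Or set it when the container starts:
    podman run --blkio-weight 200 <image>

Unlike a throttling cap, a weight only shapes sharing under
contention; an otherwise idle device serves whoever asks.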
> I suspect that combined with having the relevant fs on a tmpfs in RAM that this may make some things very, very fast.
>
That's rather intriguing. I hope someone will try such a combination
for their real workloads and report back.
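For anyone tempted to try it: the combination is just a tmpfs mount
for the hot data plus BFQ on the backing device. A minimal sketch
(the paths and size are illustrative):

    # Keep the hot filesystem in RAM:
    mount -t tmpfs -o size=2g tmpfs /var/lib/hot-data
    # Let BFQ schedule whatever I/O still reaches the drive:
    echo bfq > /sys/block/sda/queue/scheduler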
Thanks,
Paolo
> On Fri, Oct 4, 2019 at 1:08 PM Paolo Valente <paolo.valente(a)linaro.org> wrote:
>
>
> > On 4 Oct 2019, at 17:32, Dusty Mabe <dusty(a)dustymabe.com> wrote:
> >
> >
> >
> > On 10/4/19 11:06 AM, Paolo Valente wrote:
> >> Hi, I'm Paolo, the main developer of the BFQ I/O scheduler.
> >
> > Hi Paolo!
> >
>
> Hi
>
> >> [...]
> >
> > It looks like the original request for this was made to Fedora in [1]
> > and applied to F31+.
> >
> > Fedora CoreOS uses the same systemd from Fedora so unless we explicitly decide
> > against it we'll be using what Fedora does. I don't see any reason to differ here.
> >
> > Paolo, does that match your understanding?
> >
>
> Yep. The issue I wanted to address with this topic is that maybe few
> people know about the 10X throughput they can get, with BFQ, for
> container workloads. And now that they know it, maybe they still
> don't know how to enable this boost (fortunately, it is extremely
> easy). So I'm mainly saying: "hey, here I am to help!" :)
>
> Thanks,
> Paolo
>
> > Dusty
> >
> >
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1738828
> > [2] https://github.com/systemd/systemd/pull/13321#issuecomment-522700152
> _______________________________________________
> CoreOS mailing list -- coreos(a)lists.fedoraproject.org
> To unsubscribe send an email to coreos-leave(a)lists.fedoraproject.org
> Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives: https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org
Fedora CoreOS Meeting Minutes 2019-10-02
by Dusty Mabe
Minutes: https://meetbot.fedoraproject.org/fedora-meeting-1/2019-10-02/fedora_core...
Minutes (text): https://meetbot.fedoraproject.org/fedora-meeting-1/2019-10-02/fedora_core...
Log: https://meetbot.fedoraproject.org/fedora-meeting-1/2019-10-02/fedora_core...
========================================
#fedora-meeting-1: fedora_coreos_meeting
========================================
Meeting started by dustymabe at 16:30:25 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/fedora-meeting-1/2019-10-02/fedora_core...
Meeting summary
---------------
* roll call (dustymabe, 16:30:34)
* Action items from last meeting (dustymabe, 16:34:25)
* FCOS as Kubernetes / OKD node (dustymabe, 16:39:27)
* LINK: https://github.com/coreos/fedora-coreos-tracker/issues/93
(dustymabe, 16:39:32)
* Encryption: All disks are belong to us (dustymabe, 16:48:16)
* LINK: https://github.com/coreos/fedora-coreos-tracker/issues/287
(dustymabe, 16:48:21)
* TL;DR: we'd like encrypted disks in RHCOS, so there will be some
supporting work in FCOS. FCOS could consider adopting encrypted
disk support as well (dustymabe, 16:53:18)
* LINK:
https://github.com/coreos/fedora-coreos-tracker/issues/94#issuecomment-51...
(ajeddeloh, 16:57:16)
* for FCOS we want disk encryption. There are a few ways to achieve
that goal, and we prefer leaving the existing partition structure in
place and replacing it on boot if the user asks for encryption
(dustymabe, 17:07:50)
* CI: Prow integration (dustymabe, 17:08:37)
* LINK: https://github.com/coreos/fedora-coreos-tracker/issues/263
(dustymabe, 17:08:44)
* open floor (dustymabe, 17:19:12)
* we added architecture information to our image filenames that are
output by our build system
https://github.com/coreos/fedora-coreos-tracker/issues/264
(dustymabe, 17:20:21)
* jlebon got a kernel patch accepted upstream to help us support
selinux with ignition/dracut:
https://lore.kernel.org/selinux/20190912133007.27545-1-jlebon@redhat.com/...
(dustymabe, 17:28:41)
Meeting ended at 17:29:51 UTC.
Action Items
------------
Action Items, by person
-----------------------
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* dustymabe (80)
* ajeddeloh (51)
* walters (27)
* darkmuggle (20)
* zodbot (20)
* strigazi (19)
* jlebon (11)
* bgilbert (8)
* jbrooks (2)
* kaeso[m] (1)
* slowrie (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot