using fstrim to save some space in cloud images
by Dusty Mabe
Hey all,
Have we considered running fstrim against our cloud image filesystems
before we package them up? I wrote a small script to do it (inside a
container) at [1]. Looks like we can save ~28M of sparse on-disk size
(and ~18M on the compressed xz image):
[root@f22 xzimg]# ls -lh ./
total 1.3G
-rw-r--r--. 1 root root 3.0G May 22 00:13 orig.raw
-rw-r--r--. 1 root root 147M May 22 00:13 orig.raw.xz
-rw-r--r--. 1 root root 3.0G Jun 17 16:52 trimmed.raw
-rw-r--r--. 1 root root 129M Jun 17 17:21 trimmed.raw.xz
[root@f22 xzimg]# du -sh ./*
517M ./orig.raw
147M ./orig.raw.xz
489M ./trimmed.raw
129M ./trimmed.raw.xz
[1] - https://gist.github.com/dustymabe/ad4be48c948c2e601b85
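For the curious, a minimal sketch of the approach (the loop-device
handling and partition layout here are assumptions; see the actual
script at [1] for the details):
LOOPDEV=$(losetup -fP --show orig.raw)   # attach image, scan partitions
mount "${LOOPDEV}p1" /mnt                # assumes the rootfs is partition 1
fstrim -v /mnt                           # discard unused blocks so they sparse/compress away
umount /mnt
losetup -d "$LOOPDEV"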
atomic, kubernetes, etc on non x86_64
by Dennis Gilmore
Hi all,
Last night I had some time to myself, so I decided to look at what it
would take to get Atomic running on ARM. I had to tweak some of the
JSON files; the hardcoded ref in them is not flexible at all:
- "ref": "fedora-atomic/rawhide/x86_64/docker-host",
+ "ref": "fedora-atomic/rawhide/armhfp/docker-host",
Neither is the hardcoded package list:
- "grub2", "grub2-efi", "ostree-grub2",
- "efibootmgr", "shim",
+ "extlinux-bootloader",
The packages in every other part of our deliverables are dealt with
using comps, with yum/dnf skipping over missing things. That made me
curious about how supporting Atomic on multiple arches was envisioned,
as it seems to be designed around a single-arch silo.
However, once I got past that, I discovered that atomic and kubernetes
both had "ExclusiveArch: x86_64" in their spec files (violating the
packaging guidelines in the process), but they actually build just fine
for all the primary arches and are installable on ARM at least. I was
able to make an Atomic repo in the end. I plan to throw together a
kickstart and attempt an install as soon as I can.
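For reference, a sketch of the guidelines-compliant spec fix, in the
same diff style as above (the surrounding spec context is assumed; only
the restriction itself matters). Since the packages build everywhere,
the line can simply be dropped:
- ExclusiveArch: x86_64
If a package genuinely fails to build on a particular arch, the
guidelines call for listing that arch with ExcludeArch (plus a tracking
bug) rather than whitelisting x86_64.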
What will it take to fix the packaging and get people on board for
supporting the greater world? Could it be something we work on with
someone like https://www.scaleway.com/, who have ARM-based cloud
servers today?
Dennis
Atomic Host and the kernel
by Josh Boyer
Hi All,
I'm emailing my questions on the topic here, as this seems to be the
best Fedora-focused place to discuss Atomic Host and kernel
interaction. If that isn't the case, please point me to where you
believe that is.
I have two basic questions around the interaction of Atomic Host and
the kernel. The first is fairly straightforward: is there anything
Atomic Host or the atomic toolset needs that the kernel does not
provide today? Missing features, bugs that have been hit but not
fixed, etc. I believe the answer is likely no, given that atomic is
off and running fine and leverages hardlinks, but I thought I would
ask.
The second question is a bit more involved. Atomic provides the nice
ability to roll back the entire OS tree. However, that requires an
atomic image to be spun for every instance of that tree. That,
naturally, means that whenever a new Atomic Host instance is spun, it
will use whatever kernel happens to be the latest in the Fedora
release it is built from. This means one cannot leverage the side
effect of being able to update the kernel independently of userspace.
(Which is also handy from a testing perspective when it comes to
kernels and regressions.)
To my understanding, the only way to provide such testing would be to
create Atomic Host images that deviate from the official images only
in that they provide a new kernel. Then one could use the standard
atomic tools to test and roll back _only_ the kernel if a problem is
detected. While this is certainly possible, I'm not sure it is
something the Cloud SIG (or whoever) is really interested in doing. On
the kernel side, we could provide such images built on our own, but
I'm not sure the effort or duplication of tooling/infrastructure is
worthwhile overall, particularly when non-atomic Rawhide continues to
be flexible enough for these purposes.
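To make that concrete, here is a hypothetical flow using the standard
ostree tooling; the 'kernel-test' ref and 'fedora-atomic' remote names
are made up for illustration:
# rebase onto a tree that differs from the official one only by kernel
rpm-ostree rebase fedora-atomic:fedora-atomic/rawhide/x86_64/kernel-test
systemctl reboot
# if the new kernel regresses, the previous deployment is still on disk
rpm-ostree rollback
systemctl reboot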
With a two-week image release timeframe, though, being able to use
different kernels might be a good idea. Does anyone have thoughts on
this topic and how such testing might be accomplished? The only other
idea I had was to spin the Atomic Host images with the last 3 kernels
in them, but I am not sure whether choosing between them at boot is
currently possible with multiple kernels installed.
Thanks in advance.
josh
[DISCUSS] Making Atomic the cloud edition
by Joe Brockmeier
Hi all,
For the folks who were at Flock last week, this is a recap of the
discussion we had and what I recall as the general agreement in the
room. If my memory has failed me, please add or correct as necessary.
For folks who weren't at Flock (or were, but not in the cloud working
group meeting), this is a brief recap of what we discussed and what is
proposed - but *not* decided. I would like to reach a decision /
consensus, so let's discuss here, and I'll ask the working group
members to explicitly +1 (or not) within 72 hours. Absent any hard
-1s, better proposals, etc., I'd like to close the discussion within
that timeframe so we can move on to discussing with FESCo and the
other groups (Websites, marketing) we'll need to sync with.
Given that a great deal of interesting work is going into the Fedora
Atomic host, we'd like to make Atomic the main deliverable/focus for the
Cloud Working Group and Cloud edition.
However, we know that Atomic doesn't fit well in the standard Fedora
six-month cycle, so we'd further propose making the two-week releases
the default deliverable - and working on appropriate testing so that
users of Fedora Atomic can expect that their containers and Kubernetes
orchestration won't break, but also will not need to care whether the
underlying release is based on F23, Rawhide, etc.
This is going to require a lot of work to be done on testing so we can
ensure that we're not breaking anything and containers "just work" on
Atomic as users follow the updates on the 2-week cycle.
This will, I believe, need to go to FESCo and we'll have to put in some
serious cycles on documentation and work on marketing this. It's also
worth noting that this will mean very frequent releases and marketing
touchpoints as opposed to just alpha, beta, and final releases every six
months.
We also will continue to do the base cloud image - that won't go away -
but it won't be the focus of the working group or its marketing.
Finally, we also discussed that the host is only part of the larger
effort - we also need to pour some attention into improving the Docker
image, making it smaller and a better option.
Thoughts, comments, flames? Did I miss anything?
(Apologies if this is not the most coherent summary - I'm typing this
from the floor of LinuxCon North America and not able to give this the
amount of revision I would usually give for something of this
importance. However, time is a factor as the decision is required to
move forward on other items.)
Thanks,
jzb
--
Joe Brockmeier | Community Team, OSAS
jzb(a)redhat.com | http://community.redhat.com/
Twitter: @jzb | http://dissociatedpress.net/
Local DNSSEC resolver & Containers
by P J P
Hello,
-> https://lists.fedoraproject.org/pipermail/cloud/2015-January/004867.html
As per the previous discussion above, I was able to use an iptables(8) DNAT rule to divert DNS traffic from Docker containers to a DNSSEC resolver on the host at 127.0.0.1:53.
Please see:
-> https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Resolver#Docker_...
One needs to enable local 'lo' routing via the 'docker0' bridge and add the DNAT rule to divert DNS requests to the local resolver. The above configuration is working well on F22 with Docker version 1.6.0, build 9d26a07/1.6.0.
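For reference, a minimal sketch of that host-side setup; the exact
rules on the wiki page may differ:
# allow traffic bridged via docker0 to be routed to 127.0.0.1
sysctl -w net.ipv4.conf.docker0.route_localnet=1
# divert container DNS queries to the local DNSSEC resolver
iptables -t nat -A PREROUTING -i docker0 -p udp --dport 53 -j DNAT --to-destination 127.0.0.1:53
iptables -t nat -A PREROUTING -i docker0 -p tcp --dport 53 -j DNAT --to-destination 127.0.0.1:53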
I'd like to hear any comments/suggestions/inputs about this, because when the local DNSSEC feature goes live (F23), such configuration would need to be added on the host so that container applications can take full advantage of the DNSSEC resolver.
IMO, the Docker daemon is best suited to make the required configuration changes on the host: first, it already adds a few iptables(8) rules on the host; and second, it checks the host's name-server settings in '/etc/resolv.conf' and copies the non-localhost (127.0.0.1) servers to the container. When localhost (127.0.0.1) is the only name server on the host, it defaults to using the Google public DNS servers inside containers. It should be fairly straightforward for the Docker daemon to enable local 'lo' routing and add the DNAT rule upon detecting '127.0.0.1' as the name server on the host.
Your comments/suggestions/inputs are most welcome.
Thank you.
---
Regards
-P J P
http://feedmug.com
Notes/Minutes from the Flock Meeting
by Brian Exelbierd
Hi All,
At Flock I was asked to take notes during the meeting. The following
represents my attempt to follow the conversation and provide some
logical flow. I did not record the names of speakers, partly because I
don't know everyone, and partly because I didn't think of it. I
strongly encourage replies with questions (to suss out details I may
have glossed over) and continuations of these conversations, where not
already started.
Please see/edit the notes here:
https://fedoraproject.org/wiki/Cloud_SIG_Meeting_-_Flock_2015_-_14_August...
Thank you.
regards,
bex