Custom-built tree with new kernel
by Dusty Mabe
Please test it out so that we can get the new kernel karma'd as soon as possible:
ostree remote add --set=gpg-verify=false kerneltest https://dustymabe.fedorapeople.org/repo/
rpm-ostree rebase kerneltest:fedora-atomic/25/x86_64/docker-host
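If you just want to try the tree and then get back to where you were, the
usual rpm-ostree workflow applies (a quick sketch; none of this is specific
to the test repo above):
# inspect the deployments; the kerneltest tree shows up as the pending one
rpm-ostree status
# reboot into the new deployment and confirm the kernel version
systemctl reboot
uname -r
# when you are done testing, switch back to the previous deployment
rpm-ostree rollback
systemctl reboot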
This tree currently has the following changes from current stable:
Changed:
kernel 4.9.4-201.fc25 -> 4.9.5-200.fc25
kernel-core 4.9.4-201.fc25 -> 4.9.5-200.fc25
kernel-modules 4.9.4-201.fc25 -> 4.9.5-200.fc25
kubernetes 1.4.7-1.fc25 -> 1.5.2-2.fc25
kubernetes-client 1.4.7-1.fc25 -> 1.5.2-2.fc25
kubernetes-master 1.4.7-1.fc25 -> 1.5.2-2.fc25
kubernetes-node 1.4.7-1.fc25 -> 1.5.2-2.fc25
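If you want to double-check that delta on your own machine, rpm-ostree can
produce it for you: with no arguments, rpm-ostree db diff compares the booted
deployment against the pending one (standard rpm-ostree behaviour, though the
exact output format varies a bit between versions):
# run after the rebase but before rebooting to see what the new tree changes
rpm-ostree db diff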
I'll post a link to the Bodhi update once it is submitted.
Dusty
Re: [atomic-devel] F25: older versions of cockpit in atomic host
by Dusty Mabe
On 01/17/2017 11:26 AM, Stef Walter wrote:
>
> Indeed. We'll be releasing Cockpit 129 shortly, which will include an
> Obsoletes directive in the RPM and should fix this issue. Sorry for the
> breakage.
No worries. Glad I saw it; we plan to detect this sort of thing
in the future.
Dusty
F25: older versions of cockpit in atomic host
by Dusty Mabe
I noticed today, when comparing two trees (one from Dec 25 and the
latest one), that the version of cockpit went backwards.
!cockpit-bridge-126-1.fc25.x86_64
=cockpit-bridge-120-1.fc25.x86_64
!cockpit-docker-126-1.fc25.x86_64
=cockpit-docker-120-1.fc25.x86_64
!cockpit-networkmanager-126-1.fc25.noarch
=cockpit-networkmanager-120-1.fc25.noarch
!cockpit-ostree-126-1.fc25.x86_64
=cockpit-ostree-120-1.fc25.x86_64
!cockpit-shell-126-1.fc25.noarch
=cockpit-shell-120-1.fc25.noarch
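For anyone who wants to reproduce this kind of comparison, something along
these lines should work, assuming both commits are still present in the local
repo (the commit checksums below are placeholders, and the diff output format
depends on the rpm-ostree version):
# list the commits on the F25 docker-host ref in the system repo
ostree log --repo=/ostree/repo fedora-atomic/25/x86_64/docker-host
# diff the package sets of the Dec 25 commit and the latest commit
rpm-ostree db diff --repo=/ostree/repo OLD_COMMIT NEW_COMMIT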
This is because cockpit renamed cockpit-shell to cockpit-system [1],
but we are still explicitly including cockpit-shell in our manifest
[2]. As a result, the only cockpit version the compose could find that
provided both cockpit-networkmanager and cockpit-shell was the one from
the Fedora 25 GA release (not updates).
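You can see the rename from the repo metadata as well; a quick check with dnf
(assuming the standard 'fedora' and 'updates' repos are enabled on an F25 box)
shows that updates only carries the renamed subpackage:
# list every available build of cockpit-shell; only the F25 GA build (120) shows up
dnf repoquery --showduplicates cockpit-shell
# the updates repo ships the renamed subpackage instead
dnf repoquery --showduplicates cockpit-system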
I'll open an issue so we can come up with ideas on how to prevent this
type of thing from going unnoticed in the future.
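One rough idea for catching it automatically (just a sketch; the repo path and
commit checksums are placeholders, and the 'Downgraded:' section header assumes
a reasonably recent rpm-ostree):
# fail a compose check if any package went backwards between two commits
if rpm-ostree db diff --repo=/srv/repo OLD_COMMIT NEW_COMMIT | grep -q '^Downgraded:'; then
    echo "package downgrade detected between composes" >&2
    exit 1
fi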
Dusty
[1] https://github.com/cockpit-project/cockpit/commit/e13e193ec19708f6488c32b...
[2] https://pagure.io/fedora-atomic/blob/f25/f/fedora-atomic-docker-host.json...
[atomic-wg] Issue #185 `November 21 ISO is not bootable`
by Pagure
jberkus reported a new issue against the project: `atomic-wg` that you are following:
I'm currently testing the November 21 ISOs by installing them on my MinnowBoard cluster. It seems that these ISOs are not bootable; after getting through most of the install, they fail with the following error:
failed to write boot loader configuration
Once the new ISOs are available, we need to retest and see if this is a general issue.
To reply, visit the link below or just reply to this email
https://pagure.io/atomic-wg/issue/185
Hardware for FOSP
by Michael Scherer
Hi,
so, as people asked me on IRC, it seems there was a lack of communication
regarding the hardware for the FOSP project
(https://pagure.io/atomic-wg/issue/153).
Due to internal changes (on top of regular changes) on the other side of
the RH firewall, acquiring the servers and allocating the budget took
much longer than planned; the funds were finally allocated last week.
So we started working on a hardware quote right away, and for various
reasons we settled on the following, to be shared between the
requirements of my team (OSAS) and Fedora Cloud:
- 1 6U Supermicro MicroBlade chassis
(https://www.supermicro.com/products/MicroBlade/index.cfm); it can hold 28
blade servers.
So to fill the chassis, we selected:
- 2 * MBI-6418A-T5H, each with 4 nodes, each node having 4G of RAM and a
small disk (a 128G or 64G SSD, depending on price)
- 2 * MBI-6118D-T4, each with 4 disks of 1T (or 2T)
- 16 * MBI-6128R-T2, each with 128G of RAM and an SSD (256G if possible)
leaving 8 free slots for later.
The first 4 blades are to be used by OSAS to host various small
services and backups. The rest is for Fedora; the 16 MBI-6128R-T2 blades
would each have at least 128G of RAM, a 256G SSD, 2 Xeons with 18 cores
each, and 1G connectivity. I tried to optimize so as not to have too many
unused resources, but that is still going to be a lot of resources.
We plan to host this in the new space we are going to have near Raleigh, in
the community cage. I do not have a public page to point to yet, since we
are still working on hosting public pages and a website about it, but
people can ping me if they want more information.
For the deployment itself, I had emergencies (server, laptop) that
prevented me from working on it further for now.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS