F24 Talking Points for Fedora Server: Request for Info
by Remy DeCausemaker
Server WG members,
The marketing team has been putting together general talking points
for our upcoming F24 release, and sourcing input from working groups.
For the most part, this has been quite effective, but our section for
Fedora Server is a bit lacking:
https://fedoraproject.org/wiki/Fedora_24_talking_points#Fedora_Server
If some of the folks here want to add talking points to the
wiki page above, that would be super helpful. If you'd rather not
edit the wiki directly, you can reply here with any
information/bullet points, and I would be happy to add them to the
page myself.
Thank you for your time, and we look forward to helping promote Fedora Server :)
--RemyD.
--
Remy DeCausemaker
Fedora Community Lead & Council
<decause(a)redhat.com>
https://whatcanidoforfedora.org
Agenda for Server SIG Meeting (2016-03-15)
by Stephen Gallagher
We've had two discussions on the list this past week that I think were very
good, but we should probably hash out a decision on them at the weekly meeting.
1) Available Environments on the Server DVD (and what do we call them).
Quick recap: thanks to new changes in the compose process, the
selectable environments in the DVD installer are no longer automatically created
from comps.xml, so we can choose whether we want the DVD to offer anything
besides the basic Server Edition. In particular, the most important question on
the table is whether we want to offer a "Minimal" environment (and if so,
whether we want to rename it so that it's clear that it isn't actually Fedora
Server).
2) Default guided partitioning scheme
Chris Murphy raised some excellent questions about how we might lay out the
default partitioning scheme of Fedora Server. Currently, the default layout is
pretty much what it was in the pre-Fedora.next period: basically three
partitions, a / (root) partition of up to 50GB, a swap partition sized
according to available RAM, and a /home partition taking all the remaining space. This
is a layout that makes a lot of sense for a Workstation installation, but
perhaps not as much for a Server installation.
The current proposal on this would be:
Modify the InstallClass in the F24 Server productimg package so that the default
installation would create a / (root) partition with a minimum of 2GiB and a
maximum of ??? GiB (TBD in meeting), a swap partition using the usual
calculation, and then reserve the remainder of the space for future use and
modification by tools such as docker-storage-setup and Cockpit.
(Also up for discussion: whether /var should be a separate partition by default)
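For concreteness, here is a rough sketch of what that InstallClass change might
look like. This is illustrative only: the class name, import paths, and keyword
arguments are assumptions modelled on the existing Fedora install classes, the
15 GiB root cap is just a placeholder for the "???" above, and a real patch
would need to match the anaconda/blivet APIs actually shipped in F24.

    # Illustrative sketch only -- names, import paths, and the 15 GiB cap are
    # assumptions; the real maximum for / is still TBD in the meeting.
    from blivet.partspec import PartSpec
    from blivet.platform import platform
    from blivet.devicelibs import swap
    from blivet.size import Size
    from pyanaconda.installclasses.fedora import FedoraBaseInstallClass
    from pyanaconda.kickstart import getAvailableDiskSpace


    class FedoraServerInstallClass(FedoraBaseInstallClass):
        name = "Fedora Server"

        def setDefaultPartitioning(self, storage):
            # / gets at least 2 GiB and at most the agreed-upon cap; no /home.
            autorequests = [PartSpec(mountpoint="/",
                                     fstype=storage.defaultFSType,
                                     size=Size("2GiB"),
                                     maxSize=Size("15GiB"),  # placeholder for ???
                                     grow=True, lv=True, encrypted=True)]

            # /boot and friends, as the platform requires.
            bootreqs = platform.setDefaultPartitioning()
            if bootreqs:
                autorequests.extend(bootreqs)

            # The usual swap calculation.
            disk_space = getAvailableDiskSpace(storage)
            swp = swap.swapSuggestion(disk_space=disk_space)
            autorequests.append(PartSpec(fstype="swap", size=swp, grow=False,
                                         lv=True, encrypted=True))

            # Everything else stays unallocated in the VG, free for
            # docker-storage-setup, Cockpit, or the administrator later on.
            storage.autoPartitionRequests = autorequests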
Passing the torch
by Stephen Gallagher
Today, during the Server SIG weekly meeting, I announced that I plan to step
down from the Server Working Group (and by extension as the FESCo liaison),
effective at 1600 UTC on Tuesday March 29th, or following the Fedora 24 Alpha Go
decision, whichever comes later.
This was a difficult decision for me, but a couple of factors went into it. One
was burnout: I've been giving a lot of myself to this project, and it has been
draining. The other was that my tasks at Red Hat have shifted away from the base
platform and into the PaaS world, so I can justify spending less of my paid time
working on Fedora Server directly.
So for a time at least, I am going to step away and watch from a distance. To
that end, we will want to start looking for
1) Someone new to occupy my former seat on the WG
2) A WG member to take over duties as chair and FESCo liaison
If you are reading this and know of a good candidate to fill this vacancy,
please put them in touch with us.
Fedora Server partition tweaking
by Chris Murphy
Most Server product users probably use custom storage configurations
in the installer. But that might be a wrong guess.
This is the default auto partitioning for a 2TB drive in Server:
https://drive.google.com/open?id=0B_2Asp8DGjJ9b0ViMUpPZkFoTW8
/home is obviously quite huge, which is optimal for something like a
file server but suboptimal for VMs, databases, and containers.
While a Btrfs layout (I can't help myself) solves the unknown space
and use case question without involving the user, it perhaps has
other suboptimal aspects.
But the Cloud Atomic ISO offers a solution. Its autopartitioning layout
reserves a large pile of free extents in the VG, without associating them
with any LV. On first boot, docker-storage-setup turns those free
extents into a thin pool LV. The Server product could instead just leave
them unallocated, and the user could allocate that space however they
want post-install.
Documentation could suggest system-storage-manager or blivet-gui as
alternatives to the LVM tools for allocating that space.
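To make the post-install step concrete, here is a minimal sketch (an editorial
illustration, not part of the proposal) of how an administrator or a tool could
claim those free extents later. It assumes an LVM-based install with free
extents left in the VG; the VG and LV names are made up.

    # Minimal sketch: claim the free extents left in the VG after install.
    # VG/LV names are invented for illustration; run as root.
    import subprocess

    VG = "fedora-server"   # assumed volume group name

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # Option 1: give all remaining extents to a general-purpose data LV.
    run("lvcreate", "-l", "100%FREE", "-n", "data", VG)
    run("mkfs.xfs", "/dev/%s/data" % VG)

    # Option 2 (roughly what docker-storage-setup does on Atomic): turn the
    # free extents into a thin pool for container storage instead.
    # run("lvcreate", "-l", "90%FREE", "--thinpool", "docker-pool", VG)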
Comments?
--
Chris Murphy
RE: Fedora Server partition tweaking
by Mark Dean
This looks more like a desktop partitioning scheme. Most servers will need a larger /var (especially /var/log) and /opt, depending on the type of server, so I like to see those broken out as separate partitions. /home can be very small on servers. I like to see a 2GB /boot. So to keep it simple, make /home 50GB and / 1.95TB. That way a runaway app under /opt or excessive logging under /var doesn't fill up the root partition. I use a separate /data partition for databases, user shares, or other stored content.
My preference, as you bring out, is to leave large amounts of unused space so the specific server needs can be met once the base file system is set, since it really is a case-by-case decision as to the "optimal" partitioning layout.
Environments on the Server DVD: do we want minimal?
by Adam Williamson
Hi, folks!
So we noticed today (actually I noticed earlier, but forgot) a couple
of issues with the available 'environments' (package sets) on the
F24/Rawhide Server DVDs.
Most obviously, the actual 'Fedora Server' environment (server-product-
environment) doesn't show up as an available selection; you only get
the 'Infrastructure Server' and 'Web Server' environments. This
*obviously* isn't what we want. I've sent a pungi-fedora PR which
should get Server back:
https://pagure.io/pungi-fedora/pull-request/12
That's straightforward. However, there's another question: what
environments apart from Server, if any, do we want to show?
In F23 we showed these environments in this order (comps name in
brackets):
Minimal Install (minimal-environment)
Fedora Server (server-product-environment)
Web Server (web-server-environment)
Infrastructure Server (infrastructure-server-environment)
As I understand it, this wasn't really because we wanted to show those
environments, but because under the F23 compose process, the list would
always contain any environment for which all packages happened to be
available from the current package sources.
It seems that for F24+, the list is kinda 'baked in' at compose time,
so we have the opportunity to list exactly the environments we want,
and only those environments.
For now my PR would make the DVD list only Fedora Server as an
available environment. However, it's not clear whether or not we should
include Minimal.
QA actually has a specific use case for minimal on the Server DVD:
openQA. We have a bunch of generic installer tests which just need to
run the installer and do...something, like install to a specific
filesystem or with a specific disk layout. Up to now, our ideal setup
for these tests has been to run them on the Server DVD image with the
'minimal' environment. That means all the packages come from the
install image (no network traffic to download them) and a small package
set is installed (saves time).
If the Server DVD doesn't have minimal, we can't have this ideal
environment any more. We'd have to pick either the Server DVD with the Server
environment - which avoids network traffic, but installs nearly twice
as many packages as minimal - or the Everything boot.iso with the minimal
environment - which installs the smallest package set, but re-downloads
all the packages over the network for every test (and there are about 50
tests).
Still, that's a fairly artificial use case, so we're resigned to losing
minimal from the DVD if no-one wants it for other reasons.
WDYT?
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
Summary of Presentation on Fedora.next and the Modularity Initiative
by Máirín Duffy
Hi,
Here's a summary of the Fedora.next and the Modularity Initiative presentation Langdon gave today. It's mostly a log copy/paste job, but I tried to follow the threading of the conversation and organize it a bit differently than a straight chronological log would have done.
---
= Modularization =
The idea that we can start to work with modules as a coarser-grained unit than a package. This does not mean "packages" go away, but rather that we start to think about things at a higher level. This allows us to recognize some of the changes in the development world and the packaging world, and also allows tradeoffs between them as we go.
== Core Module ==
We want to consider a "core" module that is useful for physical, VM, and container image installs.. the key being "binaries available but not all installed in all cases". But we also want to consider a few simple modules that are basically just "applications"..
== Example Modules ==
Simple module examples being considered:
* httpd - in Fedora, but with a desire for updates on its own cycle
* howdoi - a little Python app not in Fedora, but fun and useful, and it changes a lot
== Motivations / Goals ==
Look at puppet as an example (which is in my forthcoming blog post).. important to fedora .. but countless problems about maintaining the "correct" version of ruby for it (and I am sure many other examples)..
1) allow software to operate at a different pace from the OS
one of the main goals here is to allow things to operate at a separate pace from the core OS.. in other words.. less "a distro is released at point x" and more "an os is released at point x, and a bunch of apps are released on their own lifecycles"..
2) allow an application to have an independent lifecycle, including its dependencies, from other applications and the OS but only branch on deps when necessary
another motivation is "quality" a la rings.. but that isn't "quite" right.. as in the quality of a beta isn't "bad" it just isn't ready yet.. the "quality" of puppet isn't bad.. it just hasn't updated to latest ruby yet.. and the latest python isn't "bad" just the rest of the distro isn't ready for it yet.. so .. these things need their own lifecycles.. modularity is attempting to support that
3) make an explicit promise about whether a dep is for an app or meant to be shared
another motivation is the "i included this package for my app to work, but i am only supporting it for my app" case.. but now a few apps use it.. and now we have another lifecycle problem.. does app-a or app-b get to decide on an update? what if we could declare a dep as a shared one or a private-to-app one..
4) move testing from implicit every package to explicit exposed public apis of a module
Make testing happen at the module level rather than at the individual-package or whole-distro level. right now we implicitly promise an api/abi for every rpm for the lifecycle of the release.. but we don't really mean it. i propose moving to more explicit, limited api/abis at a less granular level
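To make motivations 2-4 a bit more tangible, here is a purely hypothetical
sketch of the kind of per-module metadata being talked about. Nothing like this
format exists yet; every field name below is invented for illustration only.

    # Hypothetical module metadata -- all field names are invented to
    # illustrate the motivations above, not an actual proposed format.
    httpd_module = {
        "name": "httpd",
        "stream": "2.4",                       # its own lifecycle, decoupled from the OS release
        "built_against": ["core-f23", "core-f24"],  # cores it is built and tested on
        "api": ["httpd", "mod_ssl"],           # the explicit, tested public surface (motivation 4)
        "deps": {
            "shared": ["openssl"],             # promised to other modules too (motivation 3)
            "private": ["apr-util"],           # bundled only for this module's own use
        },
    }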
= Questions =
== Q: Module == role? ==
* roles are possibly one implementation or a subset of modules
* roles, as i understand them, are about getting something "running".. a role may require / use one or more modules. but modules are more about the logical set of things.. rather than the use of them..
== Q: Would the core be the same for each use case or would it be tuned specifically via configs / etc for each case? ==
* the "module" would be the same... but the installation on demand would be different
* the core is a set of packages, which may or may not all be installed for each usecase. for example, containers don't need a kernel. but the kernel is probably going to be in the core module
* a module is just the software, the specific config is layered on top
* If you want to build something and call it Fedora, the foundational pieces would come from that repo (core)
== Q: So with different lifecycles, this implies that some modules are incompatible with others, right? ==
* yes
* that is a great question.. i think interactions between them might be "not supported" (in a sense).. but i think the long term goal is that they can coexist without hurting each other.. aside from deps...as in httpd-24-module deps core-f23 & core-f24 .. but not core-f22
* not trying to go into implementation, but something like SCL's?
** scls are another good example of a "solution" that was never a first-order citizen.. just a way to solve a problem
** in other words.. we have lots of things that deal with this lifecycle symptom.. but not at the core (sorry to reuse the word) of the distro
== Q: how do we avoid balkanization and having to maintain 15 different versions of the same package? and, importantly, keeping things around that have known security issues? (how do we even maintain 2 versions in Fedora, given that currently you cannot have more than one version per package in the repos?) ==
* That's an open question and one that can *only* lead to complicated implementation discussions. Let's hold that for later. Or I guess we can discuss the high-level maintenance burden issues.
* righto.. that is a problem.. but one we currently have.. i think it is just not as explicit.. we need to still foster a community working together... but, enforcement via fiat seems to cause workarounds not solutions
* figuring out an implementation for that is part of the immediate work to do #todo
* I'd suggest that each module should probably have its own SIG maintaining it. And if the SIG falls into a lack of maintenance, the module is retired.
** SIG is too big a construct
** something smaller than SIG
** maintainers/co-maintainers, like packages probably
* People who need it for their module are responsible. If a version maintained to a high standard by someone else exists, great. If not, don't stop them from doing it, as long as the bar for security responsibility is met
== Q: are modules "supported" (as in security) by Fedora itself? ==
* I'd like to have levels, ranging from "very centrally important -- critpath" to "gets security attention and help" to "copr-like edge" to "third party"
== Q: so there is a strong incentive to Fedora to keep the modules it supports on the same schedule? at least until/if tooling makes it a non-issue? ==
* Yes, very much so. Although we would at least have the ability to have differing schedules in special cases where we want it --- if Fedora atomic host wants to move kube+docker faster, then that would be possible if we have a distinct "container support" module - But still keep the core on the same schedule.
== Q: It seems like another requirement (or perhaps consequence) is "more maintainers" or more interested parties? ==
* A high-level hope is to move the focus of a lot of contributor activity away from working on individual tiny dep rpms to larger chunks.
== Q: Do we need a new packaging system to improve the app -> module in Fedora path? ==
* Maybe new tooling, but there's some work underway to help with automating the creation of RPMs
** I am thinking of delivery, the problem with rpms is that we'll end up with app-a requiring foo-1.1 and app-b requiring foo-1.2, and if they are both built with standard rpms the 2 modules will conflict
*** In my ideal world, I'd like to see this handled with RPMs and groups/metapackages. So we don't lose the flexibility we have with all the little pieces.
**** so you won't be able to install both app-a and app-b unless you allow relocation/embedding of packages
***** No: figuring out how to cleanly parallel-install deps is a key point.
****** I am not against it, that's why I was thinking a new module-level packaging system may be required, or maybe we simply build a big RPM with the contents of all the RPMs used to generate the module, a meta-rpm of sorts - that exposes only the deps you want to make public, and not every single package, but that would require some relocation/smarts
****** i really don't want a new packaging method.. i would much rather embrace all the existing ones
****** it doesn't matter what packaging tool you use underneath, but you have to deliver a module and be able to coherently introspect it - call it a meta-package, you need info on all files, where they go, how to verify if they are still what was delivered, etc...
******* ahh i see what you are saying.. in that sense.. yes.., but.. this "packaging" would not actually package the binary like rpm does.. we are imagining a split channel.. metadata that maps to binaries through two channels
******* You might run into the issue that some "modules" have pieces that don't need to be installed (like the base distribution module), whereas on the app spectrum it's more likely all the packages in the module need to be installed.
**** Indeed. So we may need to be able to package those bundled dependencies in a path that belongs to the app that wants them, and not have them share the system path.
***** langdon's app-to-module path. If a dependency is brought in purely to support one particular app, and the maintainer has no real interest in the dependency itself, then it may actually be just _fine_ to support multiple instances of that. Mark those dependencies as belonging just to the app. And if somebody wants to maintain the dependency as an independent thing in its own right, then it belongs in some module of its own, not in the app module.
******* that is kinda what i am thinking (marking as "for that app")
**** but that is longer term.. for now.. we are just playing with modules as rpm repos.. the simplest case.. so a repo ~= module.. with some extra metadata and some client side tooling
**** make dealing with libraries and dependent modules less time consuming, freeing effort for applications people really care about
== Q: not to be awkward, but has anyone actually convincingly defined a 'core / apps' boundary yet? /me tends to find one person's core is another person's app. which is GTK+? ==
* We've been kind of actively avoiding talking about the desktop stack, because it's the hardest problem. It's one of the reasons we came to Server SIG first: the headless case is a lot more direct.
** fine, which is python-requests ?
* i think this has been a huge problem.. so we are somewhat taking the approach of "pull all the things out we can, and what is left is the core"
* GTK+ is "stacks" :)
== Q: Should there be levels of importance inside modules? ==
* ie, the 'cockpit' module would have cockpit itself as critical, but 'libfoo' that's pulled in for something as much less so...
* i dunno. if cockpit doesn't function without libfoo, then it's important. if it does, why is it in the cockpit module?
* They care that it works *for their app* They may not care at all if it works for anyone else's app
* well, it may only need some small part of libfoo and not care about anything else it does... so it could be upgraded and as long as that small part works, great
* oh, yes. but that doesn't mean we need to mark importance levels - importance might have been the wrong word...
* that is very much what i mean.. essentially a module only tests its deps for the api it needs.. if a dep becomes popular enough, it might get its own api test
* it's bundling in the sense of being declarative about your deps.. but not in the sense that an end system owner can't update the deps directly.. even if it may break an app on their machine. but module a and module b might both contain package C, at differing versions. And yes, bundling/relocation are on the table here, with tooling to reduce the problems inherent there.
== Q: so how do modules map? or is that up to the maintainer(s)? ie, if there's fedora22/23/24 core modules, would there be just one httpd module that has 3 versions depending on which core? ==
* hopefully up to the maintainer.. but in the short term, 1-1 .. moving very quickly to maintainer choice
* hopes no one points out having more than 1 core on one system ;)
* ideally a module can have a single source base that works on multiple versions of fedora: Rather than having separate sources for fc22, and fc23, and fc24 you instead have sources for your module that get built on fc22, fc23, fc24.
** right, so there might be httpd22 httpd24 and they would be built against all the various core/modules that they have as deps...
** one source, many binaries
** write once, run everywhere! but the other way too
** as in .. core source may offer f22, f23, f24 apis .. even though it is only one source.. but i think that is a ways away - yes definitely a ways off
== Q: do we have a plan on where to start and where we'll see the first artifacts? ==
* We are updating the wiki pages right now (wiki/modularization), renewing the objective in a couple weeks, and some preliminary module attempts are getting written up now..
* also posting some blog posts.. actually doing commblog.. as it seemed the more appropriate venue
** yeah I was just saying to jzb we could do a user-focused magazine post pointing to your commblog posts, or just link it in 5tftw
* nirik thinks this all sounds awesome. The devil is always in the details tho.
* Right, the major intention of this meeting was to get a feel for "Does this sound like something we want to do?" vs "This is a terrible idea, go home and leave us alone"
== Q: So is Fedora the first distro to try this with such a broad target (ie: not just "make everything into a container")? ==
* i think so.. and i think 'make it all a container' is not gonna work..
== Q: nirik wonders about some kind of rpm namespacing to support this, but thats way too into the details weeds ==
* langdon mutters relocatable rpms..
== Q: We talked about automatically building the modules for each base, but is there value in trying to avoid doing that entirely when possible? ==
* For example, if we have modules with no shared deps, it makes sense to have the same exact module just tested atop the F24 and F25 cores.
* i think it is too early to say about optimizing builds.. i think we just "build it all" until we can tell we can short circuit.. i know sct had some thoughts on where we can do that... but i would punt for now
** Right. The main thing with builds is that I would like automation to try all builds and tell us what breaks;
== Q: so which WGs/SIGs are we going to ask to spearhead this? ==
* part of the objective-renewal for the council is to ask base and e&s to make some changes to org and goals.. and then they would spearhead it
== Q: what about FTBFS (fail to build from source) against some base but not another? ==
* it may be ok if new packages fail to build on old cores, but the build system should tell us it fails and reject the build until we waive the older builds. i.e. accepting failures should always require a manual choice; the tools should tell us exactly what fails by default (see the sketch below).
** yeah, and at that point you could decide that you don't want to keep supporting that old core or fix things so it works or whatever.
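A purely hypothetical sketch of that "build everything, report what breaks,
waive explicitly" automation follows; none of this is an existing tool, and the
module/core names and the build() stand-in are invented for illustration.

    # Hypothetical build-matrix automation: try every module on every core,
    # report failures, and never accept them without an explicit waiver.
    cores = ["core-f22", "core-f23", "core-f24"]
    modules = ["httpd-2.4", "howdoi"]

    def build(module, core):
        # Stand-in for a real build-system call; pretend one combination fails.
        return not (module == "httpd-2.4" and core == "core-f22")

    failures = [(m, c) for m in modules for c in cores if not build(m, c)]

    for m, c in failures:
        print("FTBFS: %s on %s -- needs a fix or an explicit waiver" % (m, c))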
== Q: as an end user, how do I know that my system is safe from the vulnerability fixed in sec-patch? ==
* a point i am not sure was clearly made.. we don't want to ship anything that hasn't passed testing as a module.. so for example, app-a builds successfully w/ dep-d.sec-patch but app-b does not.. so app-a ships an update.. but app-b does not..
** so app-b is still vulnerable to dep-d.sec-patch, but.. we need client tooling to allow 1) disable app-b 2) force patch app-b 3) ignore w/ risk
** app-b w/ risk seems like an ok idea on inward facing apps.. like .. what we also are supporting here is the end user system owner making explicit judgements about what risks they want to take..
== Q: from a UX pov, i think overall this is going to be a big win for users, who always want new hotness but can't without an OS update. i'm hoping that making it so you can build against older cores would make it easier to make new apps available for older cores? ==
* Probably, but as Time Passes, more and more stuff will likely end up bundled.
* So it will result in an increasing maintenance burden over time. But with automation, maybe not too much, and modules likely will grow over time if they support older cores I bet.
* supporting against the versions of Fedora is already really insane ... eg: without automation Cockpit would be impossible - supporting against N modules * M cores will take the already insane state to the next level
** if only it was a requirement before pushing *any* update to *any* package
*** but CI fixes ALL THE THINGS!
**** adamw looks forward to seeing the container truck fleet and helicopters turn up to deliver the CI cluster
** i guess my concern is that we are forcing a tree of modules and making a graph impossible. if fragmentation happens at each/many levels of the tree, there is no hope for tools and projects that try to make the result usable, integrated, etc. so there needs to be a balance found, and perhaps very strict CI would be that balance
*** If we can get to a place where the API/ABI boundary is really what we care about rather than the package versions, I think it gets more straightforward
**** indeed but in reality you have things constantly deprecating and then removing API, a good example is docker dropping things out of the API (with fair warning mostly) from version to version
***** sgallagh: it is a good example of how people do things unfortunately
*** i guess i hope for automation to help with this.. as in, module owners have deps suggested to them when they work for them.. so the automation is actually trying to minimize the number of versions of deps
** i guess if we allow modules to require certain packages or versions of API, and as a result make them incompatible with other versions, then we'll at least model the issue quite nicely, and force fixes rather than ignore them
*** if we are constantly scanning for where modules' deps overlap, and testing versions for simplification, i think this will be better than you think.. software is much more sophisticated in recent years.. with much better backward/forward compat in libraries..
**** yes it may have been worse ... but from working on Cockpit ... it's pretty bad, people are constantly pushing broken stuff into Fedora, from systemd, to docker, to kubernetes, to NetworkManager, to lvm - it's hard to think of a high level API that hasn't regressed at some point over the last couple years
***** right.. but we block that from coming in.. as in all the modules fail with new dep.. so no one picks it up
***** platform-api problem there.. i think that will be a harder case.. sorta..
****** it's 2016 ... to me "API" means something remotable, like DBus, REST, spawning a process, etc
******* yeah.. i just mean the libraries that you use.. local or remote
******** right, so you're right that this is a requirement of modularization. it should be a hard requirement as in CI, otherwise we'd be building something that will get out of hand rapidly
********* i almost think it is a by product.. no module is updated unless it passes its tests.. so if you push a broken shared library.. no module will pick it up
********** currently most of the tests are done by things that integrate other things, the libraries and modules themselves don't do near the amount of testing required to make this work
*********** yeah... glossing over the current lack of tests ;) but, as with all things software these days, need to start fixing recipes not cakes
* I'd like to eventually get to the point where the OS core and modules can be mixed and matched between Fedora and CentOS/RHEL
** im not following - frankenlinux?
*** no, because the modules are big, clear delineation points, rather than organs all sewn together. mattdm remembers something about voltron.
**** but i'm thinking the cores for centos vs fedora etc would be different? so how could you voltron modules
***** bring along the appropriate runtime / bundling
***** This is easier in container world.... with docker, use the fedora or centos base image, with xdg-app, use the centos or fedora runtimes
***** the other approach is rebuild, as people were talking about for f23/f24/f25 -- could be centos too
** four basic kinds of modules: base system; system services (e.g. logging); infrastructure (web, database, etc.); and then actual applications
** so by a module spanning fedora / centos you don't literally mean the same module, but the functionality / definition of the module
*** I am partly Dreaming Big, here, not suggesting an immediate target, but I do want it kept in mind as we design
*** with container approaches, maybe literally the same binary
*** something like that, yes. it might be easier to think of it as "the source code for a module can be rebuilt on top of fedora and centos cores" and then with containers, it might map to the same binaries too. but again, that is all an eventuality we may not even try
*** although, if we can, it means we really did disconnect the os from the apps.. and honestly, for many apps built on scripting languages.. i bet running on multiple cores is not that hard even today
*** Or even running on golang and other bundle-the-world languages
* from an end user perspective it is just, you have all current versions of an app avail for either cent or fedora.. but.. an app will start to have to seriously think about LTS style versions.. because the end user will start to care about their version stickiness more than the target OS. and if a patch comes out.. inkscape gets tested with the patch before the patch is shipped.. so.. inkscape may have the patch but openoffice may not.. because oo failed the new sec patch test...
** could also help with the 'can you duplicate this bug against the latest upstream git snapshot?' problem... :) sure, let me install that module, yes, here's the error.
** Probably not in the first blush, but tools to hook up a git repo and do automated nightly builds would be fantastic too
** we can also show to the app/module provider exactly what the end user is using when the bug was filed.. including force-patched fixes..
*** what is force-patched fix?
**** that is my idea from above.. where oo failed the rebuild but the end user decided they wanted the security patch enough to force it on the module
***** I think we'll have to come back to that one. I have concerns, but they're too low-level for this discussion
* aiming big: forget nightly. do it on each commit!
** Probably unachievable for huge projects like kernel or glibc
*** the kernel already has CI that does builds and testing on every commit. unfortunately, the kernel also supports so much diverse hardware that it is impossible to test all those CI builds on all the hardware, so bugs still happen a lot.
* but.. suffice to say.. an end user can report exactly what binaries are in use in a particular module, you know that the module has been approved for use with those binaries (or you are warned), and so bug reports are rarely "what the h*@&*@k did they have installed besides my app"
* https://wiki.gnome.org/Projects/SandboxedApps/NightlyBuilds
= To-Dos =
* a REQUIREMENT of modularity is significant tooling change/automation/improvement
* we need to have automated rebuilds of modules.. we need basic testing of a module on security updates automatically.. we need to build all that out for modules.. to a much greater degree than we do with rpms.. because we don't want to update a module on the end system if it hasn't been tested as a unit
* we also need to improve the "app -> module in fedora" path.. like really lower the barrier (of effort, not quality) to an app developer maintaining their own module..
* We may need to be able to package those bundled dependencies in a path that belongs to the app that wants them, and not have them share the system path.
* make dealing with libraries and dependent modules less time consuming, freeing effort for applications people really care about
* I'd say a prerequisite for this is going to be Continuous Deployment-based testing of modules right from the start
* to simo's point (sorta) earlier.. i would like the language packaging mechanisms to be more directly supported.. reducing app dev effort, simplifying packaging, not creating a new packaging system, relying on app language unbundling, etc
Server SIG Weekly Meeting Minutes (2016-03-08) - Modularization
by Stephen Gallagher
=========================================================
#fedora-meeting-1: Server SIG Weekly Meeting (2016-03-08)
=========================================================
Meeting started by sgallagh at 16:00:31 UTC. The full logs are available
at
https://meetbot.fedoraproject.org/fedora-meeting-1/2016-03-08/serversig.2...
.
Meeting summary
---------------
* roll call (sgallagh, 16:00:31)
* Agenda (sgallagh, 16:05:05)
* Agenda Topic: Modularization (sgallagh, 16:05:05)
* Modularization (sgallagh, 16:06:26)
* Guest Speaker: Langdon White (sgallagh, 16:06:26)
* a REQUIREMENT of modularity is significant tooling
change/automation/improvement (sgallagh, 16:27:40)
* Motivation 1) allow software to operate at a different pace from OS
(sgallagh, 16:40:11)
* Automation is the linchpin for all of this (sgallagh, 16:56:57)
* Motivation 2) allow an application to have an independent lifecycle,
including its dependencies, from other applications and the OS but
only branch on deps when necessary (sgallagh, 16:59:11)
* Motivation 3) make an explicit promise about whether a dep is for an
app or meant to be shared (sgallagh, 16:59:19)
* Motivation 4) move testing from implicit every package to explicit
exposed public apis of a module (sgallagh, 16:59:31)
Meeting ended at 17:29:47 UTC.
Action Items
------------
Action Items, by person
-----------------------
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* langdon (121)
* sgallagh (110)
* mattdm (49)
* stefw (48)
* mizmo (48)
* jwb (26)
* nirik (25)
* simo (24)
* zodbot (21)
* sctw (12)
* bconoboy (11)
* jds2001 (11)
* adamw (9)
* danofsatx (5)
* jzb (4)
* masta (1)
* mhayden (0)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
Can't attend the 2016-03-08 meeting
by Major Hayden
Hey folks,
We're participating in the OpenStack Bug Smash this week at the office and I will not be able to make it in for the weekly meeting today. I will check the logs as soon as I can!
--
Major Hayden
Presentation on Fedora.next and the Modularity Initiative
by Stephen Gallagher
Tomorrow at 1600 UTC/1100 EST/1700 CET during the regularly-scheduled Server SIG
meeting in #fedora-meeting-1 on Freenode, Langdon White will be giving a brief
presentation and then a question-and-answer session around the planned
modularity efforts coming in Fedora.
This will be a high-level overview and brainstorming about what modularity means
to Fedora in general and the Server Edition in particular. To be clear, this is
an initial introduction to what will undoubtedly become a massive
cross-distribution effort. I encourage anyone who is interested in the Next Big
Thing to join us for this meeting (and of course the IRC logs will be published
afterwards).
I'll also note that this is an informational session, not a technical design
review: most of what will be covered will be at a 50,000 foot view. As moderator
of the discussion, I will try to keep us focused on goals rather than
implementation details for this meeting; there will be many more discussions
around implementation in the coming months.