So, in the past we have always had a policy of packaging the applications we deploy and are upstream for as rpms and getting them into fedora/epel.
There were a number of good reasons for this:
* We deployed everything on vm's using rpm.
* Other users that wanted to reproduce our infrastructure could use the rpms.
* It made sure that the thing passed review and built on various Fedoras with the versions of the things it depends on there.
However, nowadays we have a number of new apps that are deployed in openshift and aren't using rpms, but pip or s2i or other things. For these, packaging them up as rpms becomes a burden with not many gains. ;(
So, I was thinking we should codify a new policy. (To avoid confusion for application authors and others).
Something like:
Applications in Fedora Infrastructure may be deployed via non rpm methods (as long as they obey licensing guidelines ( https://fedoraproject.org/wiki/Infrastructure_Licensing )). For those applications, creating and maintaining an rpm is optional.
Thoughts?
kevin
On Mon, 23 May 2022 at 16:52, Kevin Fenzi kevin@scrye.com wrote:
So, in the past we have always had a policy of packaging the applications we deploy and are upstream for as rpms and getting them into fedora/epel.
There were a number of good reasons for this:
- We deployed everything on vm's using rpm.
- Other users that wanted to reproduce our infrastructure could use the rpms.
- It made sure that the thing passed review and built on various Fedoras with the versions of the things it depends on there.
However, nowadays we have a number of new apps that are deployed in openshift and aren't using rpms, but pip or s2i or other things. For these, packaging them up as rpms becomes a burden with not many gains. ;(
So, I was thinking we should codify a new policy. (To avoid confusion for application authors and others).
Something like:
Applications in Fedora Infrastructure may be deployed via non rpm methods (as long as they obey licensing guidelines ( https://fedoraproject.org/wiki/Infrastructure_Licensing )). For those applications, creating and maintaining an rpm is optional.
How about:
Applications in Fedora Infrastructure need to be deployed in an auditable and repeatable way. These methods need to allow someone to determine which software was installed, when it was installed, and what it was meant to be done (example: rpms or podman build scripts for containers). The goal is to be kind to our future selves at 2 am who need to figure out why a critical application is broken and how to rebuild and redeploy as needed.
Thoughts?
kevin
Something like:
Applications in Fedora Infrastructure may be deployed via non rpm methods (as long as they obey licensing guidelines ( https://fedoraproject.org/wiki/Infrastructure_Licensing )). For those applications, creating and maintaining an rpm is optional.
How about:
Applications in Fedora Infrastructure need to be deployed in an auditable and repeatable way. These methods need to allow someone to determine which software was installed, when it was installed, and what it was meant to be done (example: rpms or podman build scripts for containers). The goal is to be kind to our future selves at 2 am who need to figure out why a critical application is broken and how to rebuild and redeploy as needed.
Yeah that seems sensible (although I'm not sure of the wording of "what it was meant to be done", but I think I get it). This would satisfy apps built with s2i as long as they are pinning their dependencies with something like poetry or pipenv. We are currently standardizing on poetry, but any would do as long as deps are pinned.
For s2i based apps, I see two ways of ensuring repeatability, one being stricter but more transparent than the other:
1. have the buildconfig track a production branch upstream, and rely on the build log to know which exact commit was built
2. have the buildconfig specify the commit hash, and change the buildconfig each time we want to deploy a new prod version
Option 2 is more transparent because the commit to build is a var in ansible, but it means updating ansible each time we want to make a prod deployment. The workflow for option 1 is simpler because it's just a start-build, but we'll need the logs to know which commit in the prod branch was actually built, and it may be cumbersome to dig up if something goes wrong.
Any preference? As a dev I would be happy with either; both are still infinitely easier than building RPMs. Option 1 being easier for devs, my lazy self leans towards it, but Option 2 is fine as well. Is there another option that I did not think of?
Aurélien
On Tue, May 24, 2022 at 06:24:10PM +0200, Aurelien Bompard wrote:
Something like:
Applications in Fedora Infrastructure may be deployed via non rpm methods (as long as they obey licensing guidelines ( https://fedoraproject.org/wiki/Infrastructure_Licensing )). For those applications, creating and maintaining an rpm is optional.
How about:
Applications in Fedora Infrastructure need to be deployed in an auditable and repeatable way. These methods need to allow someone to determine which software was installed, when it was installed, and what it was meant to be done (example: rpms or podman build scripts for containers). The goal is to be kind to our future selves at 2 am who need to figure out why a critical application is broken and how to rebuild and redeploy as needed.
Indeed! Although 'repeatable' has always been a bit mushy...
Yeah that seems sensible (although I'm not sure of the wording of "what it was meant to be done", but I think I get it).
Yeah, could be re-worded a bit/made more verbose. Perhaps:
All deployments of software in Fedora Infrastructure MUST:
* Be under an acceptable license ( https://fedoraproject.org/wiki/Infrastructure_Licensing )
* Be auditable (what versions of what things are in the deployment, for example: a koji task for an rpm build or an openshift build log for a pod)
* Be downgradable (allow rollback to a previous working version, for example with an rpm downgrade or an openshift pod rollout of an old version)
This would satisfy apps built with s2i as long as they are pinning their dependencies with something like poetry or pipenv. We are currently standardizing on poetry, but any would do as long as deps are pinned.
As a package maintainer... I LOATHE pinning. ;( but I understand why in this case it makes things nicer... I don't know that we really need to pin everything as long as we have logs of what was used. Then if an upgrade causes breakage, we could just go back to a previous build, or _then_ pin something at a specific version to avoid a bug.
For s2i based apps, I see two ways of ensuring repeatability, one being stricter but more transparent than the other:
1. have the buildconfig track a production branch upstream, and rely on the build log to know which exact commit was built
2. have the buildconfig specify the commit hash, and change the buildconfig each time we want to deploy a new prod version
Option 2 is more transparent because the commit to build is a var in ansible, but it means updating ansible each time we want to make a prod deployment. The workflow for option 1 is simpler because it's just a start-build, but we'll need the logs to know which commit in the prod branch was actually built, and it may be cumbersome to dig up if something goes wrong.
Any preference? As a dev I would be happy with either; both are still infinitely easier than building RPMs. Option 1 being easier for devs, my lazy self leans towards it, but Option 2 is fine as well. Is there another option that I did not think of?
I'm fine with either. It might be we should decide this on an app by app basis? i.e., some not very important app, or some new app that's building all the time, could use method 1, and something that's really important and established could use method 2. Method 2 is much nicer for freezes, if we decide anything openshift-wise freezes. ;)
On Tue, May 24, 2022 at 01:15:20PM -0400, Ben Cotton wrote:
On Mon, May 23, 2022 at 6:17 PM Stephen Smoogen ssmoogen@redhat.com wrote:
Applications in Fedora Infrastructure need to be deployed in an auditable and repeatable way. These methods need to allow someone to determine which software was installed, when it was installed, and what it was meant to be done (example: rpms or podman build scripts for containers). The goal is to be kind to our future selves at 2 am who need to figure out why a critical application is broken and how to rebuild and redeploy as needed.
I like this approach. I don't think there's real value in requiring that everything be packaged as an RPM, but we do want to make sure we can re-deploy correctly.
What are the implications for pinning requirements here? Should we require that each application require specific versions of dependencies? I don't love that idea, but I love even less the idea of a stealthy change to a package turning our infrastructure into a cryptocurrency rig.
Well, I don't think that's too likely... more likely a new build would cause some breakage due to a dep upgrading. As long as we record what deps were used, it should be easy to then pin things or roll back to a previous version until it's fixed.
In fact, pinning could result in a greater chance of security issues... if you pin everything and miss security updates. I'd prefer pinning just the bare minimum we need to...
kevin
As a package maintainer... I LOATHE pinning. ;(
Let me rephrase that and please tell me if I'm correctly representing your thoughts. You loathe somebody else deciding which dependencies you must use. That's fair, it's a distro packager's hell. However in this case I think it's pretty different: we control both the pinning and the packaging (well, image building).
In a way, using RPMs does not guarantee reproducibility either: if my app depends on libA-X.Y and it works when I build it, but then libA's maintainer decides to update to X.Z and it breaks my app when I rebuild the image, then having an RPM of my app does not help. To ensure reproducibility we need the versions of the RPMs used at build time, and that's pretty similar to the versions that pip would have pulled at image build time.

So, in our case, I suppose storing the list of all versions used would suffice. Or even better: let's store the images themselves and version them. Can the internal OpenShift registry reliably do that? Do we need to switch to something external (quay.io?) to reduce the chance of everything failing at the same time? Then it does not matter whether we track a branch or a commit, we can roll back to the code that was used before by using the previous image. Provided there was no DB schema upgrade, but that's another can of worms.
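To make "storing the list of all versions used" concrete, here's a minimal sketch (purely hypothetical, nothing like this is in our playbooks today) of something the image build could run to dump every installed distribution into the build log:

    # Minimal sketch (hypothetical): dump the exact versions present in the
    # image at build time, so the build log tells us what to pin or roll
    # back to later.
    import importlib.metadata
    import sys

    def dump_installed_versions(out=sys.stdout):
        """Write one 'name==version' line per installed Python distribution."""
        dists = sorted(
            importlib.metadata.distributions(),
            key=lambda d: (d.metadata["Name"] or "").lower(),
        )
        for dist in dists:
            out.write(f"{dist.metadata['Name']}=={dist.version}\n")

    if __name__ == "__main__":
        dump_installed_versions()

That output in the build log would be enough to pin things after the fact, or to rebuild the previous working set if an upgrade breaks something.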
Aurélien
On Tue, 24 May 2022 at 18:55, Aurelien Bompard abompard@fedoraproject.org wrote:
As a package maintainer... I LOATHE pinning. ;(
Let me rephrase that and please tell me if I'm correctly representing your thoughts. You loathe somebody else deciding which dependencies you must use. That's fair, it's a distro packager's hell. However in this case I think it's pretty different: we control both the pinning and the packaging (well, image building).
In a way, using RPMs does not guarantee reproducibility either: if my app depends on libA-X.Y and it works when I build it, but then libA's maintainer decides to update to X.Z and it breaks my app when I rebuild the image, then having an RPM of my app does not help. To ensure reproducibility we need the versions of the RPMs used at build time, and that's pretty similar to the versions that pip would have pulled at image build time. So, in our case, I suppose storing the list of all versions used would suffice. Or even better: let's store the images themselves and version them. Can the internal OpenShift registry reliably do that? Do we need to switch to something external (quay.io?) to reduce the chance of everything failing at the same time? Then it does not matter whether we track a branch or a commit, we can rollback to the code that was used before by using the previous image. Provided there was no DB schema upgrade, but that's another can of worms.
Yes, I want to be clear: I wasn't asking for pinning so much as a record of which packages were used. Most of the time, even 70% accuracy is better than 0%... [and here is the opening for various stories about when it is worse :)].
On Wed, May 25, 2022 at 12:54:57AM +0200, Aurelien Bompard wrote:
As a package maintainer... I LOATHE pinning. ;(
Let me rephrase that and please tell me if I'm correctly representing your thoughts. You loathe somebody else deciding which dependencies you must use. That's fair, it's a distro packager's hell.
Yeah, in a distro you need to integrate the thing with all the other things and come up with versions that they can all use/share. If something is pinned to a specific version it often indeed causes doom, because all the other things have moved on or haven't yet and you have to reconcile that with them.
However in this case I think it's pretty different: we control both the pinning and the packaging (well, image building).
well, sure, but it makes it hard for someone else to package it if they want to. :)
In a way, using RPMs does not guarantee reproducibility either: if my app depends on libA-X.Y and it works when I build it, but then libA's maintainer decides to update to X.Z and it breaks my app when I rebuild the image, then having an RPM of my app does not help. To ensure reproducibility we need the versions of the RPMs used at build time, and that's pretty similar to the versions that pip would have pulled at image build time.
Indeed.
So, in our case, I suppose storing the list of all versions used would suffice. Or even better: let's store the images themselves and version them. Can the internal OpenShift registry reliably do that? Do we need to switch to something external (quay.io?) to reduce the chance of everything failing at the same time? Then it does not matter whether we track a branch or a commit, we can rollback to the code that was used before by using the previous image. Provided there was no DB schema upgrade, but that's another can of worms.
Yeah, saving images sounds good to me for the openshift case. As long as we can roll out an old version we know was working, I think that's just fine. I would prefer not to depend on any external providers for that if we can at all avoid it.
So, perhaps we need a little tinkering here... take some app... and confirm we can successfully roll out an older version. If we can do that, I think we are fine, pinning or not pinning. At least for the purposes of our deployment. I think that does make it harder for people to package our things, but I suppose that's really an upstream decision on how tightly to pin.
kevin
On Tue, May 24, 2022 at 06:24:10PM +0200, Aurelien Bompard wrote:
Something like:
Applications in Fedora Infrastructure may be deployed via non rpm methods (as long as they obey licensing guidelines ( https://fedoraproject.org/wiki/Infrastructure_Licensing )). For those applications, creating and maintaining an rpm is optional.
How about:
Applications in Fedora Infrastructure need to be deployed in an auditable and repeatable way. These methods need to allow someone to determine which software was installed, when it was installed, and what it was meant to be done (example: rpms or podman build scripts for containers). The goal is to be kind to our future selves at 2 am who need to figure out why a critical application is broken and how to rebuild and redeploy as needed.
Yeah that seems sensible (although I'm not sure of the wording of "what it was meant to be done", but I think I get it). This would satisfy apps built with s2i as long as they are pinning their dependencies with something like poetry or pipenv. We are currently standardizing on poetry, but any would do as long as deps are pinned.
For s2i based apps, I see two ways of ensuring repeatability, one being stricter but more transparent than the other:
1. have the buildconfig track a production branch upstream, and rely on the build log to know which exact commit was built
2. have the buildconfig specify the commit hash, and change the buildconfig each time we want to deploy a new prod version
A third option which we've used in a few of our apps is to have a specific branch (or branches) for our deployment. That branch can then have commits dedicated to our deployment, such as support for s2i, which isn't needed in the 'main' branch. We could do things like the version pinning in that branch as well, making it easy/easier for people to package the application while still helping us ensure the reproducibility we want in openshift.
Pierre
On Mon, May 23, 2022 at 6:17 PM Stephen Smoogen ssmoogen@redhat.com wrote:
Applications in Fedora Infrastructure need to be deployed in an auditable and repeatable way. These methods need to allow someone to determine which software was installed, when it was installed, and what it was meant to be done (example: rpms or podman build scripts for containers). The goal is to be kind to our future selves at 2 am who need to figure out why a critical application is broken and how to rebuild and redeploy as needed.
I like this approach. I don't think there's real value in requiring that everything be packaged as an RPM, but we do want to make sure we can re-deploy correctly.
What are the implications for pinning requirements here? Should we require that each application require specific versions of dependencies? I don't love that idea, but I love even less the idea of a stealthy change to a package turning our infrastructure into a cryptocurrency rig.
On 23. 05. 22 at 20:57, Kevin Fenzi wrote:
However, nowadays we have a number of new apps that are deployed in openshift and aren't using rpms, but pip or s2i or other things.
Regarding pip applications - we have pyp2rpm and pyp2spec which can convert to rpm easily. And we have
https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/
with about 70k packages.
Miroslav
On Mon, May 30, 2022 at 11:20 PM Miroslav Suchý msuchy@redhat.com wrote:
On 23. 05. 22 at 20:57, Kevin Fenzi wrote:
However, nowadays we have a number of new apps that are deployed in openshift and aren't using rpms, but pip or s2i or other things.
Regarding pip applications - we have pyp2rpm and pyp2spec which can convert to rpm easily. And we have
https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/
with about 70k packages.
The bigger problem is that those applications are *not* able to easily be deployed outside of Fedora infrastructure. One consequence of OpenShift based deployments is that it's become almost too easy to assume nobody else would ever want to run that code. Noggin and Bodhi both have had a number of code changes in the past couple of years that have made them increasingly difficult to use outside of Fedora deployments in their default code state. I've given up on Bodhi, but I'm still trying to get Noggin into good shape.
I've been able to evaluate those issues by packaging them as RPMs, because RPM packaging forces a total decoupling of development, deployment, and configuration. None of that is true with our container based deployments. They're not discoverable, and if you can find them, they're not independently useful.
Because of this, it becomes hard for community growth around these projects.
-- 真実はいつも一つ!/ Always, there's only one truth!
The bigger problem is that those applications are *not* able to easily be deployed outside of Fedora infrastructure. One consequence of OpenShift based deployments is that it's become almost too easy to assume nobody else would ever want to run that code.
Because of this, it becomes hard for community growth around these
projects.
I think that's a fair point.
I'm still trying to get Noggin into good shape.
I'm interested in which parts of the Noggin source code make it hard to deploy outside Fedora. In my opinion those are bugs, because we do want others to be able to deploy it. Do you have some pointers? I'll easily admit we may have gone the easier route by assuming what our infra looks like in a few places, so I'm happy to fix that.
I've been able to evaluate those issues by packaging them as RPMs, because RPM packaging forces a total decoupling of development, deployment, and configuration. None of that is true with our container based deployments. They're not discoverable, and if you can find them, they're not independently useful.
Hmm, I'm not sure I agree. Containers can be decoupled from the deployment and configuration, can they not? That's how people use all those generic containers on DockerHub, no? It's probably extra effort to make our containers runnable in different infra, but I'm pretty sure there's also some work involved with making our RPMs buildable/runnable in other distros, no? So I'm not convinced RPMs are inherently better than containers at that.
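To illustrate the kind of decoupling I mean, here's a minimal sketch (hypothetical setting names, not actual Noggin code) of reading deployment-specific configuration from the environment, so the same image can run inside or outside our infra:

    # Minimal sketch (hypothetical names, not actual Noggin code): read
    # deployment-specific settings from environment variables so the same
    # container image is usable outside Fedora infra.
    import os
    from dataclasses import dataclass

    @dataclass
    class AppConfig:
        database_url: str
        ipa_server: str
        debug: bool

    def config_from_env() -> AppConfig:
        """Build the application configuration from the environment."""
        return AppConfig(
            database_url=os.environ.get("APP_DATABASE_URL", "sqlite:///app.db"),
            ipa_server=os.environ.get("APP_IPA_SERVER", "ipa.example.com"),
            debug=os.environ.get("APP_DEBUG", "false").lower() == "true",
        )

The deployment (ansible/openshift) then only has to set those variables, and the image itself stays infra-agnostic.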
Aurélien
On Mon, 30 May 2022 at 21:26, Neal Gompa ngompa13@gmail.com wrote:
On Mon, May 30, 2022 at 11:20 PM Miroslav Suchý msuchy@redhat.com wrote:
On 23. 05. 22 at 20:57, Kevin Fenzi wrote:
However, nowadays we have a number of new apps that are deployed in openshift and aren't using rpms, but pip or s2i or other things.
Regarding pip applications - we have pyp2rpm and pyp2spec which can
convert to rpm easily. And we have
https://copr.fedorainfracloud.org/coprs/g/copr/PyPI/
with about 70k packages.
The bigger problem is that those applications are *not* able to easily be deployed outside of Fedora infrastructure. One consequence of OpenShift based deployments is that it's become almost too easy to assume nobody else would ever want to run that code. Noggin and Bodhi both have had a number of code changes in the past couple of years that have made them increasingly difficult to use outside of Fedora deployments in their default code state. I've given up on Bodhi, but I'm still trying to get Noggin into good shape.
I've been able to evaluate those issues by packaging them as RPMs, because RPM packaging forces a total decoupling of development, deployment, and configuration. None of that is true with our container based deployments. They're not discoverable, and if you can find them, they're not independently useful.
Because of this, it becomes hard for community growth around these projects.
I think we should be clearer about whether community growth is expected. 10 to 15 years ago, we tried very hard to grow community support to help build things, but found that the requirements of what was wanted in the applications always took more time and energy than volunteers had. Over the last 10 years, we have had less and less time to do this community growth work.
Fedora Infrastructure's primary mission is to provide a working build system and produce N thousand deliverables every day (be this individually built RPMs, containers, spins, isos, raw images, etc.). We are to do this while also maintaining the core community features which allow for mail, badges, and outsourced tools to work as well as possible. The amount of resources it would take to have people do the marketing and community support needed to make generic apps, or even to build an infrastructure community, is about 2x what we have, and that's IF we only had to keep the current applications going. However, as soon as you hire someone in this space, you find that 3-4 new projects which were waiting to be resourced become high priority. And the timeframes to delivery are shortened, as it is no longer acceptable to have a tool in development for N years before initial deployment like Bodhi and some others had. Instead you usually go from prototype to full production in 1 to 1.5 release cycles (6 to 9 months).
In order to meet this, we usually have to bake all the 'business rules' of the time into the application, the idea being that when the rules change, the developer will likely just rewrite it from scratch to meet them. [One of the lessons learned from when we did 'long form development' was that a lot of code was rewritten every time a new framework which helped solve something better was found.]
Finally, I think trying to decouple deployment, development, and configuration is going against the tide. "Infrastructure as Code" sounds great but it generally means that those 3 things are tied together deeply in every application and tool written these days. Like the tide, this is a cyclical pattern in the industry and it will roll out and leave a pile of bad example flotsam that few will remember the next time the tide comes in.