Hi everyone!
The Websites & Apps team is currently working on rewriting all major Fedora websites (such as getfedora, spins & alt) and I believe this is a good opportunity to revisit the current deployment workflow and try to make it simpler.
Currently, websites are being built in Openshift, with a cronjob running every hour that fetches code from git, builds it, then saves it to an NFS share. That same NFS share is also mounted on sundries01, which exposes it through rsync for proxies to sync it, then serve it to the world.
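For reference, the proxy side of that is essentially a periodic rsync pull, roughly along these lines (hostname, rsync module, and paths here are illustrative placeholders, not our exact configuration):

  # Rough sketch of the current proxy-side sync: pull the pre-built content
  # exposed by sundries over rsync into the local docroot the proxy serves.
  rsync -avSH --delete \
      rsync://sundries01.example.fedoraproject.org/fedora-websites/getfedora.org/ \
      /srv/web/getfedora.org/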
I have a few solutions in mind to replace that, and I would like your input on them.
A) Full Openshift: We can build and deploy the websites directly in Openshift, and serve them from there, just like silverblue.fp-o. While this is probably the most straightforward solution, it has one major downside: if Openshift is unavailable for any reason, our major websites also become unavailable. I believe this is why we are still using our proxies to host them, as such an outage is unlikely to hit every single one of them at the same time.
B) Same as before, with a twist: We build on Openshift, but instead of going through NFS and sundries with rsync, we store the websites on the S3 storage provided by Openshift, then we sync the proxies using `s3cmd sync`.
C) Same as B, but with an external builder: We already build the new websites on GitLab CI, and since the S3 gateway is accessible from the outside, we could just push the build artifacts to S3 directly from GitLab CI, then sync the proxies from it.
D) Keep using the Openshift->Sundries->proxies workflow
E) Your solution here.
We could also improve B and C by adding fedora-messaging to the mix to trigger a proxy resync as soon as a new build is available instead of doing so every hour.
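To make B and C a bit more concrete, here is a rough sketch of what the two sync legs could look like with s3cmd. The endpoint, bucket name, and paths below are made-up placeholders, not an actual configuration:

  # Build side (Openshift job for B, or GitLab CI for C): push the freshly
  # built site to the S3 bucket exposed by the Openshift S3 gateway.
  s3cmd --host=s3.example.fedoraproject.org \
        --host-bucket='%(bucket)s.s3.example.fedoraproject.org' \
        sync --delete-removed ./public/ s3://fedora-websites/getfedora/

  # Proxy side (hourly cron, or a fedora-messaging trigger): pull only what
  # changed into the docroot that httpd already serves.
  s3cmd --host=s3.example.fedoraproject.org \
        --host-bucket='%(bucket)s.s3.example.fedoraproject.org' \
        sync --delete-removed s3://fedora-websites/getfedora/ /srv/web/getfedora.org/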
What do you all think?
-darknao
What is the likelihood of Openshift going down? A would be the best solution if it's stable enough.
copperi
My vote is for C, if we can reduce the number of steps required to implement service redundancy. Deployment to S3 provides out-of-the-box access to the public domain and is straightforward.
Regards, Ahmed Al-meleh Fedora QA Contributor
I like C too. Currently, when something breaks on the websites (the most common issue is outdated content), the websites team needs to reach out to infra to understand what's happening and ask them to check the build logs. Using Openshift is not a widespread skill, and it can be a bit difficult to debug anything when you don't know where to look, how to trigger a new build, or just don't have any access.
With C, we are offloading that task to the website team, who are then able to use the tools they know to deploy and solve any issues related to the build process that may arise.
Now, the real question is, are we going to allow that? Giving access to such S3 storage to a third party (I'm talking about GitLab here, since the S3 access key will be stored on their platform) is a potential security concern. If this key gets stolen, it basically gives direct access to our proxies.
I feel like I've somewhat answered my own question, but I would love your opinion on this :)
-darknao
On Fri, Nov 25, 2022 at 01:07:50PM +0100, darknao wrote:
I like C too. Currently, when something breaks on the websites (the most common issue is outdated content), the websites team needs to reach out to infra to understand what's happening and ask them to check the build logs. Using Openshift is not a widespread skill, and it can be a bit difficult to debug anything when you don't know where to look, how to trigger a new build, or just don't have any access.
Yeah, although it works the other way too... if we move to gitlab it would need someone who understands that setup to debug and fix.
With C, we are offloading that task to the website team, who are then able to use the tools they know to deploy and solve any issues related to the build process that may arise.
Sure, but we are then saying that there would be someone available to fix things for... all the time we are still running things there. ;)
Now, the real question is, are we going to allow that? Giving access to such S3 storage to a third party (I'm talking about GitLab here, since the S3 access key will be stored on their platform) is a potential security concern.
Sure, but we should hopefully be able to make sure nothing else would be accessible to that key.
If this key gets stolen, it basically gives direct access to our proxies.
True, we would need to make sure it was as secure as we could make it.
kevin
On Thu, Nov 24, 2022 at 05:28:22PM -0000, Francois Andrieu wrote:
Hi everyone!
Hello.
The Websites & Apps team is currently working on rewriting all major Fedora websites (such as getfedora, spins & alt) and I believe this is a good opportunity to revisit the current deployment workflow and try to make it simpler.
Indeed. I have often thought about this workflow. ;)
Currently, websites are being built in Openshift, with a cronjob running every hour that fetches code from git, builds it, then saves it to an NFS share. That same NFS share is also mounted on sundries01, which exposes it through rsync for proxies to sync it, then serve it to the world.
I have a few solutions in mind to replace that, and I would like your input on them.
A) Full Openshift: We can build and deploy the websites directly in Openshift, and serve them from there, just like silverblue.fp-o. While this is probably the most straightforward solution, it has one major downside: if Openshift is unavailable for any reason, our major websites also become unavailable. I believe this is why we are still using our proxies to host them, as such an outage is unlikely to hit every single one of them at the same time.
Yep. That and also proxies are usually 'closer' to the end user and serving static content, so it's much faster network wise. ie, say a user in germany would just hit a german proxy and get the content fast, while if we moved it to openshift they would have to transit all the way over to the us and back to get that content.
B) Same as before, with a twist: We build on Openshift, but instead of going through NFS and sundries with rsync, we store the websites on the S3 storage provided by Openshift, then we sync the proxies using `s3cmd sync`.
I think this is definitely an improvement over the current setup.
C) Same as B, but with an external builder: We already build the new websites on GitLab CI, and since the S3 gateway is accessible from the outside, we could just push the build artifacts to S3 directly from GitLab CI, then sync the proxies from it.
Or just pull from gitlab directly? Or does it expose that data in a way we could sync from?
D) Keep using the Openshift->Sundries->proxies workflow
I'd like to move to something better once we decide what's worth trying. :)
E) Your solution here.
Some more I have thought on:
E) a twist on A. We build and serve in openshift, but we stick cloudfront in front of it. This would solve the speed problems, but still would have the openshift down issue.
F) (this is a fun one :) How about looking into FCOS or RHEL for edge? In this model we would install ostree-based VMs in the places we have proxies now, and we would build the web content as an ostree ref and pull it from our registry (or quay.io). I think this would be fun, but probably overkill/too much effort for just static content; still, I thought I would throw it out there.
We could also improve B and C by adding fedora-messaging to the mix to trigger a proxy resync as soon as a new build is available instead of doing so every hour.
+1 (although we should make sure not to thundering herd the endpoint if all proxies decided to sync at the same instant).
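One simple way to avoid that, assuming the proxies end up running a small resync script: add a random delay before hitting the gateway, for example (bucket and paths are placeholders):

  #!/bin/bash
  # Hypothetical proxy resync script, run from cron or on a "new build"
  # fedora-messaging notification. The random sleep spreads the proxies out
  # so they don't all hammer the S3 gateway at the same instant.
  sleep "$((RANDOM % 120))"
  s3cmd sync --delete-removed s3://fedora-websites/getfedora/ /srv/web/getfedora.org/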
What do you all think?
I like B... but possibly could be talked into C.
The thing I don't like about C is that it has less visibility: if there was a problem, it would require someone from websites to fix it, rather than possibly being something that anyone with access to Openshift could fix.
kevin
On 2022-11-28 01:32, Kevin Fenzi wrote:
Some more I have thought on:
E) a twist on A. We build and serve in openshift, but we stick cloudfront in front of it. This would solve the speed problems, but still would have the openshift down issue.
I'm not familiar with Cloudfront, so I can't really comment on that one.
F) (this is a fun one :) How about looking into FCOS or RHEL for edge? In this model we would install ostree-based VMs in the places we have proxies now, and we would build the web content as an ostree ref and pull it from our registry (or quay.io). I think this would be fun, but probably overkill/too much effort for just static content; still, I thought I would throw it out there.
That does sound a bit overkill, yes :) I'm no expert on the subject, but I believe applying a new ostree ref requires a reboot to use it, right? I was considering something similar in the past: build the websites into a container image, then pull & extract the content on the proxies. One downside to that is that every build creates a new image (or at least a new layer) that the proxies will have to pull every time. With rsync (or s3 sync) we only download what actually changes and save bandwidth in the long run.
I like B... but possibly could be talked into C.
The thing I don't like about C is that it has less visibility: if there was a problem, it would require someone from websites to fix it, rather than possibly being something that anyone with access to Openshift could fix.
That's a valid point.
I think I'll go with B as a starting point. We could always build from there at a later time, as we see fit.
-darknao
On Mon, Nov 28, 2022 at 02:24:14PM +0100, darknao wrote:
On 2022-11-28 01:32, Kevin Fenzi wrote:
Some more I have thought on:
E) a twist on A. We build and serve in openshift, but we stick cloudfront in front of it. This would solve the speed problems, but still would have the openshift down issue.
I'm not familiar with Cloudfront, so I can't really comment on that one.
F) (this is a fun one :) How about looking into FCOS or RHEL for edge? In this model we would install ostree-based VMs in the places we have proxies now, and we would build the web content as an ostree ref and pull it from our registry (or quay.io). I think this would be fun, but probably overkill/too much effort for just static content; still, I thought I would throw it out there.
That does sound a bit overkill, yes :) I'm no expert on the subject, but I believe applying a new ostree ref requires a reboot to use it, right?
yep.
I was considering something similar in the past: build the websites into a container image, then pull & extract the content on the proxies. One downside to that is that every build creates a new image (or at least a new layer) that the proxies will have to pull every time. With rsync (or s3 sync) we only download what actually changes and save bandwidth in the long run.
Yeah, and if we run in a pod or an ostree setup, we have to duplicate the proxy stack (i.e., httpd, varnish, etc.), whereas if we just copy the content we can leverage the existing software stack.
I like B... but possibly could be talked into C.
The thing I don't like about C is that it has less visibility: if there was a problem, it would require someone from websites to fix it, rather than possibly being something that anyone with access to Openshift could fix.
That's a valid point.
I think I'll go with B as a starting point. We could always build from there at a later time, as we see fit.
Yeah, that sounds good. We can always adjust, and B will still be a nice improvement, especially if we get it to sync on message (although we should be sure to have some manual way to force a 'sync now' too).
kevin
On Sun, Nov 27, 2022 at 5:32 PM Kevin Fenzi kevin@scrye.com wrote:
C) Same as B, but with an external builder: We already build the new websites on GitLab CI, and since the S3 gateway is accessible from the outside, we could just push the build artifacts to S3 directly from GitLab CI, then sync the proxies from it.
Or just pull from gitlab directly? Or does it expose that data in a way we could sync from?
Gitlab CI supports downloadable artifacts. A tarball seems most appropriate here but I often build RPMs myself. :-D
Here's a simple CI script that builds and tars the website up: https://gitlab.com/tchollingsworth/fedora-websites/-/blob/6787d86f6e81c849df...
And a stable link by branch and commit to the tarball it generated: https://gitlab.com/api/v4/projects/tchollingsworth%2Ffedora-websites/jobs/ar... https://gitlab.com/api/v4/projects/tchollingsworth%2Ffedora-websites/jobs/ar...
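For what it's worth, pulling the latest artifacts for a branch goes through GitLab's jobs/artifacts API, so a fetch from our side could look roughly like this (branch and job name are just illustrative):

  # Hypothetical fetch of the newest successful job's artifacts for a branch
  # via the GitLab API; it returns a zip archive of whatever the job saved.
  # Branch and job name are placeholders.
  curl -L -o artifacts.zip \
    "https://gitlab.com/api/v4/projects/tchollingsworth%2Ffedora-websites/jobs/artifacts/main/download?job=build"
  unzip -o artifacts.zip -d /tmp/fedora-websites-artifacts/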
We could also improve B and C by adding fedora-messaging to the mix to trigger a proxy resync as soon as a new build is available instead of doing so every hour.
+1 (although we should make sure not to thundering herd the endpoint if all proxies decided to sync at the same instant).
Gitlab can call a webhook when the build is completed, which could trigger (a bus message that fires off) a sync. I see CentOS developed webhooks that forward commits, issues, etc. to the fedora-messaging bus; I'm not sure if they were ever put into production or if they include CI successes and failures, but maybe half the work needed for this is already done.
Yeah, although it works the other way too... if we move to gitlab it would need someone who understands that setup to debug and fix.
I spent more time messing around with stuff like rolling back to Fedora 34 because the flask-assets dependency it needed was retired in the current release than I did with anything that was gitlab's fault. ;-)
On Mon, Nov 28, 2022 at 6:24 AM darknao darknao@fedoraproject.org wrote:
One downside to that is every build creates a new image (or at least a new layer), that proxies will have to pull every time. With rsync (or s3 sync) we only download what actually changes and save bandwidth in the long run.
Maybe my build isn't working all the way, but the tarball above is 11MB gzipped and 30MB uncompressed, so it's probably not a big burden, especially if you only build when someone pushes.