About JS framework
by Pierre-Yves Chibon
Good Morning Everyone,
Our infrastructure is mostly a python shop, meaning almost all our apps are
written in python and most use wsgi.
However, in python we are using a number of frameworks:
* flask for most
* pyramid for some of the biggest (bodhi, FAS3)
* Django (askbot, Hyperkitty)
* TurboGears2 (fedora-packages)
* aiohttp (python3, async app: mdapi)
While this sometimes makes things difficult, these are fairly standard frameworks
and most of our developers are able to help on all of them.
However, as I see us starting to look at JS for some of our apps (fedora-hubs,
wartaa...), I wonder if we could start the discussion early about the different
frameworks and eventually see if we can unify around one.
This would also allow those of us not familiar with any JS framework to look at
the recommended one instead of picking one up semi-randomly.
So, does anyone have experience with one or more JS frameworks? Is there one
you would recommend? Why?
Thanks for your inputs,
Pierre
Infrastructure and release engineering Documentation
by Mark O'Brien
Hi All,
As some of you may be aware, there has been chatter about centralising the
documentation for infrastructure and release engineering in one place.
A possible solution to this is to move all of these under a new section on
docs.fedoraproject.org called something like Infrastructure and Release
Engineering (very original I know).
We could then move suitable docs from their current locations to the new
central point. Each doc should be updated before moving, and the old document
should be updated to contain only a link to the new doc, to avoid ending up
with multiple versions of a doc (https://xkcd.com/927/).
The following links contain the bulk of the documentation:
https://fedora-infra-docs.readthedocs.io/en/latest/
https://docs.pagure.org/releng/
https://fedoraproject.org/wiki/Infrastructure
These would be the suggested first steps, with other docs to follow if/when
these are completed.
As I say, this is just one possible solution, so as always all feedback and
suggestions are encouraged and welcome.
Thanks,
Mark
Fedora 35 Beta Freeze now in effect
by Kevin Fenzi
Greetings.
We are now in the infrastructure freeze leading up to the Fedora 35
Beta release. This is a pre-release freeze.
We do this to ensure that our infrastructure is stable and ready to
release the Fedora 35 Beta when it's available.
You can see a list of hosts that do not freeze by checking out the
ansible repo and running the freezelist script:
git clone https://infrastructure.fedoraproject.org/infra/ansible.git
ansible/scripts/freezelist -i inventory
Any host listed as 'freezes' is frozen until 2021-09-14 (or later if the
release slips). Frozen hosts should have no changes made to them without
a sign-off on the change from at least two sysadmin-main or rel-eng
members, along with (in most cases) a patch of the exact change to be
made sent to this list, or a pull request for review.
Thanks,
Kevin
Openshift 4 SOP PR review
by David Kirwan
Hi all,
We have put together a number of SOPs related to OpenShift 4 installation
and configuration on Fedora Infra, and we are hoping to get some feedback!
If you get a minute, please check the following:
https://pagure.io/infra-docs-fpo/pull-request/8
--
David Kirwan
Software Engineer
Community Platform Engineering @ Red Hat
T: +(353) 86-8624108 IM: @dkirwan
Freeze break request: disable fedmsg-irc bots in most channels
by Kevin Fenzi
Currently, the fedmsg-irc role connects a number of IRC bots to
libera.chat IRC (usually two per channel: one for production and one for
staging). These bots then relay fedmsgs that match a specific regex to
the channel they are in.
As part of setting up matrix rooms and bridging them to IRC, we have
realized how noisy these bots tend to be. In almost all cases the
ticket/message/event is already sent to the people who care about it via
email or a personal FMN notification, so the message in the channel is just noise.
Additionally, these fedmsg-irc bots don't have a ton of flexibility in
what they match on: for example, if you say 'perhaps we should add a
badge for this' in a ticket, they will then forever notify the
#fedora-badges channel about any changes to that ticket.
On top of that, fedmsg-irc is python2 and tied to fedmsg (not
fedora-messaging). If we can get rid of all uses of it, that saves us
some replacement cost.
So, I would like to drop all these bots, except for #fedora-fedmsg and
#fedora-fedmsg-stg. I personally think those channels are useful: you
can watch the bus flow, tell things about its health, and see overall
changes you might not otherwise notice. They might be replaceable by a
simpler bot someday.
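To make that "simpler bot" idea a little more concrete, here is a minimal,
purely illustrative sketch in Python. It assumes fedora-messaging's
api.consume() callback interface and the Message .topic/.summary attributes;
the channel-to-regex mapping and the send_to_channel() helper are made up for
the example, and the actual IRC/Matrix delivery is left out entirely.

import re

import fedora_messaging.api

# Hypothetical mapping of channels to the message topics they care about.
TOPIC_FILTERS = {
    "#fedora-fedmsg": re.compile(r".*"),                      # firehose: everything
    "#fedora-releng": re.compile(r".*\.(pungi|compose)\..*"), # made-up example pattern
}

def send_to_channel(channel, text):
    # Placeholder for the real IRC/Matrix delivery code.
    print(f"{channel}: {text}")

def relay(message):
    # Callback invoked by fedora-messaging for each message on the bus;
    # forward a short summary to every channel whose regex matches the topic.
    for channel, pattern in TOPIC_FILTERS.items():
        if pattern.match(message.topic):
            send_to_channel(channel, f"{message.topic}: {message.summary}")

if __name__ == "__main__":
    # Consume forever, using the queue/bindings from the fedora-messaging
    # configuration file.
    fedora_messaging.api.consume(relay)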
Of course there may be some use case I am not seeing here, so if you
_DO_ still want these messages in your channel/room, please tell us!
And if there's some other set of messages that would be of use, that would
be good to know too, so we can take it into account when/if we work on
a matrix bot.
Affected channels:
#fedora-admin
#fedora-commops
#fedora-python
#fedora-releng
#fedora-latam
#fedora-g11n
#ipsilon
#pagure
#fedora-design
#fedora-docs
#fedora-websites
#fedora-mktg
#fedora-modularity-bots
#fedora-diversity
#fedora-magazine
#fedora-rust
#rit-foss
#fedora-workstation
#koji
#fedora-join
#fedora-neuro
#fedora-badges
#centos-ci
#fedora-podcast
https://pagure.io/fedora-infra/ansible/pull-request/811 is the PR with
the changes.
kevin
(retroactive) Freeze break: Proxy adjustments
by Kevin Fenzi
Yesterday we were having lots of issues with proxy01/10 in IAD2.
They would stop processing connections. Restarting httpd seemed to clear
it up for a while, then it would get stuck again.
My current theory is that we were hitting the limit of 900 clients for
some reason, and httpd wasn't processing connections correctly once it got
to that point.
So, I increased that limit to 1500 and also set up an SSL session cache
(which httpd was complaining we didn't have). Since then,
proxy01/10 have been running OK with those changes.
I'd like to push this out to the other proxies now as well, as some of
them have been alerting from time to time and it could be this same
issue.
I already pushed this commit because I wanted 01/10 to be in sync/in
git.
+1's to push it to the rest of the proxies?
commit 313674646df60fc0e8342eff26094f694105cf76
Author: Kevin Fenzi <kevin@scrye.com>
Date: Tue Sep 21 16:19:14 2021 -0700
proxies: increase max workers
Also add a ssl connection cache.
These changes are live on proxy01/10 and seem to have made them stable
again. Will look at pushing to the rest tomorrow.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
diff --git a/inventory/group_vars/proxies b/inventory/group_vars/proxies
index c04531a57..5b0a25fee 100644
--- a/inventory/group_vars/proxies
+++ b/inventory/group_vars/proxies
@@ -7,7 +7,7 @@ num_cpus: 6
# This is used in the httpd.conf to determine the value for serverlimit and
# maxrequestworkers. On 8gb proxies, 900 seems fine. But on 4gb proxies, this
# should be lowered in the host vars for that proxy.
-maxrequestworkers: 900
+maxrequestworkers: 1500
tcp_ports: [
# For apache, generally.
diff --git a/roles/httpd/proxy/templates/httpd.conf.j2 b/roles/httpd/proxy/templates/httpd.conf.j2
index 00947131f..5b1e0debf 100644
--- a/roles/httpd/proxy/templates/httpd.conf.j2
+++ b/roles/httpd/proxy/templates/httpd.conf.j2
@@ -773,3 +773,5 @@ EnableSendfile on
# Configure a location for OCSP stapling
SSLStaplingCache shmcb:/tmp/stapling_cache(128000)
+SSLSessionCache shmcb:/run/httpd/sslcache(10240000)
+SSLSessionCacheTimeout 600
kevin