Collaboration Servers!
by Mike McGrath
collab1.fedoraproject.org is up and running. Yahoo! So what's missing?
Well, it doesn't actually do anything yet. Plans for it include:
1) gobby (it's AMAZING)
2) pastebin or something like it (also amazing)
3) mailman.
So who wants to set up what? Luke, you'd mentioned you might be able to
get gobby up sometime this week or next. Is that still the case? If so,
I'll open a ticket and assign it to you.
My only request for pastebin is that we use something that has an
upstream, and that we don't modify it other than to create a template for
that good ol' Fedora look and feel.
If someone wants to do a part (paulo, you'd mentioned some interest), just
let me know. These types of servers also fall under "tools", so you'll
need to be part of the sysadmin-tools FIG.
-Mike
Fedora Search
by Mike McGrath
We need a Fedora search engine, especially for docs. Options:
1) Do we run our own?
2) Do we use Google?
I love option 2; it's easy. But it is non-OSS, so there are moral issues at
stake here. (Though I've not used Google exclusively to search through our
sites; it may suck at it, who knows :)
So, thoughts? Who has deployed their own search engines? I've used htdig
in the past.
-Mike
Re: Spam Mail and Fedora-docs subscribers
by Mike McGrath
On Thu, 31 Jan 2008, John Babich wrote:
> Fedora Infrastructure Team:
>
> For the past several weeks, the Fedora Docs list has been hit hard (at
> least 500+) with spam e-mails. They get rejected, but, I, as an owner
> of the list, am getting flooded with these rejected notices, making my
> job harder.
>
> I also notice an increase in unsubscribe notices, which may be unrelated.
>
> Can you please check this out and let me know if there is anything
> that can be done to tighten up the filters?
>
> I know there are at least 3 other list owners of Fedora-Docs-list who
> are being affected.
>
> Best Regards,
>
Yes, there is. If someone forwards me the password for admin access to the
list, I can set it up and then show you guys how to maintain/tweak it.
-Mike
Postgres vacuuming update
by Toshio Kuratomi
Alright team, now that we're able to vacuum all of our postgres tables
no matter how big, it's time for us to set up a plan for vacuuming all of
our tables on a regular basis. That way, when we do get a chance to
perform a vacuum full on the tables, they won't fill back up with dead
tuples and allocated free space.
For those who have sudo access on db2, I've written a little program,
vacstat.py, that can help analyze our needs. The first command that
people should know about is:
sudo -u postgres vacstat.py schema
When run in this mode, vacstat will attempt to get a list of all
databases and the tables in those databases. It will compare those
lists against the copy from a previous run. If they are the same, vacstat
will be happy. If they differ, vacstat will save the new data in a file
and print a message asking you to set up a vacuum policy for the new
table.
I've set up vacstat to run from cron on db2 in this mode. If we get
email from vacstat telling us that there's a new database or table,
we'll need to make sure to enter those databases or tables in our vacuum
script and then follow the directions to let vacstat know we're aware of
the new tables.
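For anyone curious, the schema check boils down to something like the sketch
below (this is a simplified illustration, not the actual vacstat.py code; the
state-file path and the exact queries are just placeholders):

# Simplified sketch of the schema-check idea -- not the real vacstat.py.
# Assumes psycopg2 and a hypothetical state file under /var/lib/vacstat.
import os
import pickle
import psycopg2

STATE = '/var/lib/vacstat/schema.pickle'

def list_tables():
    # Build {database: set of user tables} for every connectable database.
    schema = {}
    conn = psycopg2.connect(dbname='postgres', user='postgres')
    cur = conn.cursor()
    cur.execute("SELECT datname FROM pg_database WHERE datallowconn")
    for (db,) in cur.fetchall():
        dbconn = psycopg2.connect(dbname=db, user='postgres')
        dbcur = dbconn.cursor()
        dbcur.execute("SELECT tablename FROM pg_tables WHERE schemaname"
                      " NOT IN ('pg_catalog', 'information_schema')")
        schema[db] = set(t for (t,) in dbcur.fetchall())
        dbconn.close()
    conn.close()
    return schema

def check():
    new = list_tables()
    old = pickle.load(open(STATE, 'rb')) if os.path.exists(STATE) else {}
    if new != old:
        pickle.dump(new, open(STATE, 'wb'))
        print("New databases/tables found -- set up a vacuum policy for them.")

check()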
The next major mode is:
sudo -u postgres vacstat.py stattuple-start --database DBNAME
In this mode, vacstat will vacuum the database and then take samples of
how dirty the tables have gotten over the course of a day. vacstat is
currently set to take a sample immediately after vacuuming, after one
hour, after six hours, and after a day. The information is recorded
into a pickle file under /var/lib/vacstat on db2. Once we have that
information we can use it to see how quickly the dead tuples accumulate
in the table and from that come up with a plan on how frequently to
vacuum on a table-by-table basis.
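Each sample is conceptually very simple -- roughly the sketch below (again,
an illustration rather than the code verbatim; it assumes psycopg2 and that
the pgstattuple contrib functions are installed in the target database, and
the file name in the example is made up):

# Sketch of taking one sample and appending it to the pickle file.
# Assumes psycopg2 and the pgstattuple contrib module.
import pickle
import time
import psycopg2

def sample(dbname, table, path):
    conn = psycopg2.connect(dbname=dbname, user='postgres')
    cur = conn.cursor()
    cur.execute("SELECT dead_tuple_count, dead_tuple_percent, free_percent"
                " FROM pgstattuple(%s)", (table,))
    dead_count, dead_pct, free_pct = cur.fetchone()
    conn.close()
    try:
        samples = pickle.load(open(path, 'rb'))
    except (IOError, EOFError):
        samples = []
    samples.append((time.time(), dead_count, dead_pct, free_pct))
    pickle.dump(samples, open(path, 'wb'))

# e.g. sample('koji', 'rpmfiles', '/var/lib/vacstat/koji.rpmfiles.pickle')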
I'm currently testing the latter functionality with a run on some of our
smaller databases. Once that's working, I'll be collecting stats for all
of our databases.
-Toshio
Koji vacuuming
by Toshio Kuratomi
We've been having some issues vacuuming the huge rpmfiles table in koji
for over a month. After some help from Devrim GÜNDÜZ of Command Prompt
optimizing our postgres server, we were finally able to complete that
task (and the whole server is running much better as well).
Here's some preliminary information about the vacuuming. I'll have more
later today -- a combination of a script I'm writing to help us evaluate
which tables need frequent vacuuming, and more exact timing from a second
run of this vacuum process (to see if it will be markedly faster when
run on an already vacuumed database).
Approximate vacuum runtime: 14 hours
Before Vacuum
=============
koji=# select * from pgstattuple('rpmfiles');
table_len | 20169555968
tuple_count | 99381945
tuple_len | 14163528564
tuple_percent | 70.22
dead_tuple_count | 5036605
dead_tuple_len | 741444680
dead_tuple_percent | 3.68
free_space | 4460801412
free_percent | 22.12
After Vacuum
============
table_len | 20214169600
tuple_count | 99690347
tuple_len | 14206464600
tuple_percent | 70.28
dead_tuple_count | 0
dead_tuple_len | 0
dead_tuple_percent | 0
free_space | 5211934688
free_percent | 25.78
Notes
=====
The vacuum succeeded in clearing out all of the dead tuples that had
accumulated in the database, which is what vacuuming is supposed to do.
(Dead tuples are old rows that have either been deleted or updated.)
One thing that was interesting to me was that the free space (space
formerly occupied by dead tuples, which the database cannot return to the
operating system without physically reordering the data on disk, but can
reuse for new rows) increased by more than what was moved in from the
dead tuples. This means that not every new row created in the table is
drawn from the free space. We'll probably want to either perform a vacuum
full of the table or dump and reload it when we have the ability to take
an extended outage.
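To put numbers on that: free_space went from 4,460,801,412 to 5,211,934,688
bytes, an increase of 751,133,276 bytes, while the reclaimed dead tuples only
accounted for 741,444,680 bytes -- and that is despite tuple_count growing by
roughly 308,000 rows between the two pgstattuple runs.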
Log of the vacuum run
=====================
* Note: Devrim is taking a look at this to see if there are any further
optimizations we can perform on the db server.
koji=# vacuum verbose rpmfiles;
INFO: vacuuming "public.rpmfiles"
INFO: index "rpmfiles_by_rpm_id" now contains 99401971 row versions in
464395 pages
DETAIL: 0 index row versions were removed.
126139 index pages have been deleted, 126139 are currently reusable.
CPU 9.82s/10.98u sec elapsed 2720.37 sec.
INFO: index "rpmfiles_by_filename" now contains 99444842 row versions
in 2162981 pages
DETAIL: 0 index row versions were removed.
320028 index pages have been deleted, 320028 are currently reusable.
CPU 39.41s/14.81u sec elapsed 20121.68 sec.
INFO: index "rpmfiles_pkey" now contains 99595304 row versions in
2451380 pages
DETAIL: 0 index row versions were removed.
345115 index pages have been deleted, 297534 are currently reusable.
CPU 47.37s/18.01u sec elapsed 22697.21 sec.
INFO: "rpmfiles": removed 5036605 row versions in 95692 pages
DETAIL: CPU 5.19s/0.65u sec elapsed 946.13 sec.
INFO: "rpmfiles": found 5036605 removable, 99399621 nonremovable row
versions in 2462392 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 30000388 unused item pointers.
0 pages are entirely empty.
CPU 111.12s/46.87u sec elapsed 48686.50 sec.
INFO: vacuuming "pg_toast.pg_toast_396022"
INFO: index "pg_toast_396022_index" now contains 0 row versions in 1 pages
DETAIL: 0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.02 sec.
INFO: "pg_toast_396022": found 0 removable, 0 nonremovable row versions
in 0 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.05 sec.
-Toshio
xen1 outage
by Mike McGrath
FYI all, we're working on xen1's issues. They seem to be iSCSI related, but
it's still not quite clear. Basically, the last thing we see in the logs is
an issue with iSCSI, then the box reboots. We're seeing similar log
issues around the same time on our other boxes... but they don't reboot.
See:
https://fedorahosted.org/fedora-infrastructure/ticket/334
-Mike
Problem with @fedoraproject.org mail address
by Marcin Zajączkowski
Hi,
If this list is not suitable for my question, please let me know how to
reach the Postmaster.
I gave my email address in the fedoraproject.org domain to a few people,
and recently I was informed that mails sent to it bounce with this error
message:
<<< 550 SPF Error: Please see
http://spf.pobox.com/why.html?sender=some-strange-account%40wsisiz.edu.pl...
554 5.0.0 Service unavailable
It looks like there is a problem with SPF.
209.132.177.92 (mx1-phx.redhat.com) is not allowed to send mails from the
wsisiz.edu.pl domain (which is true), and the message is bounced by the
destination email server (where mail for my @fedoraproject.org address is
redirected). It's probably not possible to receive any email sent from a
domain with SPF enabled.
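To illustrate (this is only a sketch; I'm assuming the pyspf module here, and
the IP and sender are the ones from the bounce above), the check the
destination server performs is roughly:

# Rough sketch of the SPF check the destination server performs.
# Assumes the pyspf module; IP and sender are taken from the bounce above.
import spf

result, explanation = spf.check2(
    i='209.132.177.92',                      # mx1-phx.redhat.com, the relay
    s='some-strange-account@wsisiz.edu.pl',  # original envelope sender
    h='mx1-phx.redhat.com')

# result is presumably 'fail', since the relay is not listed in the
# wsisiz.edu.pl SPF record.
print(result, explanation)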
Is there a way to make it work?
I'm not an SPF specialist, but I don't have problems with mails
redirected from @users.sf.net.
Regards
Marcin
Introduction to the Infrastructure Group
by Baldwin Sung
Hi,
I've been a sysadmin for over 10 years. I started working with UNIX
back in '97 and have never looked back since. I would rate my
Linux skill level as advanced.
Thanks,
Baldwin
Meeting Log - 2008-01-24
by Ricky Zhou
15:01 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Who's here
15:01 * lmacken
15:02 * iWolf
15:02 * abadger1999 here
15:02 * EvilBob
15:02 * nirik is in the peanut gallery
15:03 -!- kital [n=Joerg_Si@fedora/kital] has quit Remote closed the connection
15:04 -!- kital [n=Joerg_Si(a)port-87-234-46-98.static.qsc.de] has joined #fedora-meeting
15:04 < mmcgrath> alrighty, lets get started
15:04 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Tickets
15:04 < mmcgrath> .tiny https://fedorahosted.org/fedora-infrastructure/query?status=new&status=as...
15:04 < zodbot> mmcgrath: http://tinyurl.com/2hyyz6
15:05 < f13> howdy
15:05 < mmcgrath> .ticket 347
15:05 < zodbot> mmcgrath: #347 (Set localtime on all our servers to UTC) - Fedora Infrastructure - Trac - https://hosted.fedoraproject.org/projects/fedora-infrastructure/ticket/347
15:05 * yingbull here too, btw.
15:05 < mmcgrath> So we've got servers all over the place now, it'd probably be wise to set them all to a single time and the only one that makes sense is UTC.
15:05 < mmcgrath> I'm generall for this change though worry about exactly how / when it will happen.
15:05 < mmcgrath> Anyone against / for or have other comments about this?
15:06 < iWolf> I think its a wise move, especially as the servers get more spread out.
15:06 < f13> doing that during the outage makes sense
15:06 < f13> if that hasn't already been suggested
15:07 < lmacken> +1
15:07 < mmcgrath> its only an outage for the db and the buildsys, none of the other servers are covered.
15:07 < mmcgrath> which isn't to say it can't be done, just not something we've looked at.
15:07 < f13> mmcgrath: true. And if we approve this change, we should probably give more than a few hours notice to cron job owners
15:07 -!- ChitleshGoorah [n=chitlesh(a)77.206.229.202] has joined #fedora-meeting
15:08 < mmcgrath> I'll send a note to the list and say we'll do it in a week.
15:08 < mmcgrath> I can't think of any cron jobs that would really cause any harm by this, maybe koji gc.
15:08 < mmcgrath> anyone have anything else on that?
15:09 < yingbull> sounds good to me.
15:09 < mmcgrath> .tiny 192
15:09 < zodbot> mmcgrath: Error: '192' is not a valid url.
15:09 < mmcgrath> .ticket 192
15:09 < zodbot> mmcgrath: #192 (Netapp low on free space) - Fedora Infrastructure - Trac - https://hosted.fedoraproject.org/projects/fedora-infrastructure/ticket/192
15:09 < mmcgrath> So thats supposed to get done tonight if all goes well.
15:10 < mmcgrath> It takes about 4 hours for the actual rsync to complete.
15:10 < mmcgrath> This should be a pretty straight forward change. During the outage though we'll be making some changes to the db as well.
15:10 < mmcgrath> abadger1999: will you be around for that?
15:10 < abadger1999> Yep.
15:10 < mmcgrath> solid.
15:11 < mmcgrath> anyone have any questions on that?
15:11 * dgilmore is here
15:11 < mmcgrath> dgilmore: word
15:12 < mmcgrath> .ticket 270
15:12 < zodbot> mmcgrath: #270 (Fedora Wiki allows editing raw HTML) - Fedora Infrastructure - Trac - https://hosted.fedoraproject.org/projects/fedora-infrastructure/ticket/270
15:12 < mmcgrath> paulobanon: ping
15:12 < mmcgrath> ricky: ping
15:12 < mmcgrath> any word on that?
15:12 < mmcgrath> this seems like it will be easier to do then we thought since the docs guys aren't really using it.
15:12 < dgilmore> i say pull it
15:12 < mmcgrath> <nod>
15:13 < mmcgrath> paulobanon and ricky were supposed to be working on that but I don't see them
15:13 < mmcgrath> we can go back to that.
15:14 < mmcgrath> .ticket 302
15:14 < zodbot> mmcgrath: #302 (Moin patches) - Fedora Infrastructure - Trac - https://hosted.fedoraproject.org/projects/fedora-infrastructure/ticket/302
15:14 * mmcgrath tries to summon MrBawb
15:14 < dgilmore> ive not heard anything on this for quite some time
15:15 < mmcgrath> well its kind of been on hold for the Moin 1.6 upgrade.
15:15 < mmcgrath> I never heard back from the packager about it. We'll probably have to put somethig in EPEL.
15:15 < dgilmore> we may need to just do it
15:16 < mmcgrath> err we'll probably have to put something in the infrastructure repo.
15:16 < mmcgrath> alrighty, thats the rest of the tickets.
15:16 < f13> mmcgrath: can you ping me 1 hour before the outage to disable all the builders?
15:16 -!- giallu [n=giallu(a)81-174-9-209.dynamic.ngi.it] has joined #fedora-meeting
15:16 < mmcgrath> f13: sure, or I can if you're not around.
15:17 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Schedule
15:17 < mmcgrath> http://fedoraproject.org/wiki/Infrastructure/Schedule
15:17 < f13> mmcgrath: we'll have a higher chance of all the builds being done by the outage if we do that first, then we can disable koji itself, and once we bring it back up it'll pick up the submitted jobs
15:17 < mmcgrath> <nod>
15:17 -!- wolfy [n=lonewolf@fedora/wolfy] has joined #fedora-meeting
15:18 < mmcgrath> Nothing new on corporate sponsors though I've been talking with ctylor a bit which could be fruitful.
15:18 < mmcgrath> I need to get back to our german colo, we should have a block of 16 IP's coming our way.
15:18 < mmcgrath> I've added a bit more in terms of architectural documentation, nothing ground breaking though.
15:18 < mmcgrath> No new SOP's this week.
15:19 < mmcgrath> And no new sponsors though yingbull is now in web, we're just making sure he's all setup
15:19 < mmcgrath> yingbull: how's all that going?
15:19 < dgilmore> i need to put together some proposals
15:19 < mmcgrath> dgilmore: for what?
15:19 < yingbull> yingbull: I think I should be good now. I'm task oriented: throw me something to start with and I'll run with it.
15:20 < dgilmore> one to OSUOSL for hosting and one to Dell for hardware. to have a primary mirror for all secondary arches
15:20 < yingbull> err, mmcgrath. bah.
15:20 * yingbull needs more coffee today.
15:20 < mmcgrath> yingbull: k, meet with me after the meeting, I've got a few tasks.
15:20 -!- JSchmitt [n=s4504kr@fedora/JSchmitt] has quit "Konversation terminated!"
15:20 < yingbull> sounds good.
15:21 < mmcgrath> dgilmore: I've been probing a bit about secondary arch stuff. We need to find a mirror with both Inet1 and Inet2 right?
15:21 < dgilmore> mmcgrath: ideally yes which osuosl has
15:22 < mmcgrath> dgilmore: excellent, let me know if you want some help with that.
15:22 < dgilmore> mmcgrath: mostly its just time. Ive informally spoken with them and need to put something formal forward
15:23 < mmcgrath> <nod>
15:23 < mmcgrath> thats good to hear.
15:23 < mmcgrath> dgilmore: anything else on that? if not I'll open the floor.
15:23 * ricky appears.
15:23 < mmcgrath> ricky: howdy.
15:23 < EvilBob> If an outage happens and you are working to get things back up, feel free to ping me to handle incoming questions about what is going on and to change channel topics so our contributors know.
15:24 < EvilBob> I figure it is the least I can do
15:24 < mmcgrath> EvilBob: sure thing, I got real cranky last night.
15:24 < mmcgrath> ricky: whats the word on ticket 270?
15:24 < mmcgrath> .ticket 270
15:24 < zodbot> mmcgrath: #270 (Fedora Wiki allows editing raw HTML) - Fedora Infrastructure - Trac - https://hosted.fedoraproject.org/projects/fedora-infrastructure/ticket/270
15:25 < dgilmore> mmcgrath: nothing else. just wanting to let people know im working on it
15:25 < mmcgrath> <nod>
15:25 < ricky> mmcgrath: Nothing yet, sorry - I've been pretty swamped, but I guess I'll have to concentrate more on FAS2 for the next while.
15:26 < mmcgrath> yeah, and FAS2 is the priority.
15:26 < mmcgrath> alrighty
15:26 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Open Floor
15:26 < mmcgrath> anyone have anything specific they'd like to discuss?
15:27 * mmcgrath doesn't have anything
15:28 < mmcgrath> alrighty. we'll close th emeeting in 30
15:28 < mmcgrath> 10
15:29 -!- mmcgrath changed the topic of #fedora-meeting to: Meeting closed
My introduction
by Amitakhya Phukan
Hello!
I am Amit from India. I have recently joined the fedora-infrastructure
mailing list and hope to have an exciting time here. Presently, I am
working with Red Hat as a Language Maintainer for Assamese and am
involved with the localization of RHEL, Fedora, Mozilla and GNOME. I hope
to contribute to the Fedora infrastructure and in the process also learn a
lot from you guys.
Cheers and regards,
Amitakhya Phukan.