hosting git conversion of Fedora CVS tree on fedora infrastructure?
by Lennert Buytenhek
Hi,
For a while now, I've been maintaining a git conversion of the
Fedora CVS tree, pulling in a copy of the CVS tree via rsync, and
running some local scripts to convert that to git, incrementally
updating the git tree as commits are made to the CVS tree.
(For more background info, see here:)
https://www.redhat.com/archives/fedora-devel-list/2007-November/msg00561....
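(Purely as an illustration of the approach, not the actual conversion
scripts -- the rsync module, paths, and authors file below are
placeholders -- the incremental update amounts to something like this,
re-run after each rsync pass:)
# Illustrative sketch only; names and paths are made up.
rsync -aH --delete rsync://cvs.example.org/fedora-cvs/ /srv/fedora-cvs/
# git cvsimport is incremental: re-running it picks up only the new commits.
git cvsimport -a -v -d /srv/fedora-cvs -A /srv/authors.txt \
    -C /srv/fedora-git/rpms.git rpms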
I think most of the issues with the conversion have been worked
out, and I'd like to make this available to the World in some way.
I was wondering whether it makes sense to host something like this
on Fedora infrastructure.
Note that this is _not_ a proposal to replace CVS by git.
The git tree is currently a read-only (slave) version of the CVS
tree, and I expect it to stay that way for some time. But even though
Fedora isn't switching VCSes at this point, I think it would still
make sense to have git/hg/random-other-VCS conversions of the Fedora
CVS tree publicly available, for a number of reasons:
- Give package maintainers the option of working with their favorite
VCS for local development (while continuing to use CVS when
committing things upstream.)
- All the advantages of other version control systems over CVS, e.g.:
- Give people the opportunity to pull a local copy of the entire
tree or parts of the tree for local browsing of packages and
their history without having to go through the server (CVS
doesn't support this, although you _could_ just rsync the
entire CVS tree to your local machine...)
- Allow stacking commits, reverting commits, merging commits,
splitting commits, reordering commits, etc., before the changes
are pushed into the CVS tree and become final.
- Allow easy maintaining of local branches of packages.
What would be needed to host this on Fedora infrastructure:
- Some disk space. The size of the converted git tree is about 725
megabytes after packing, but for experimenting it would be good to
have a bit more space available, say, 10G or so.
- Open ports. For browsing the git tree via the web, port 80 access
would be needed, and for allowing people to clone the tree over
git://, port 9418 access would be useful (see the sketch below).
- Read-only access to the ,v files in the CVS tree, say, over NFS.
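(And, just as a sketch of what the serving side could look like --
hostnames and paths here are made up, and the real setup would of course
go through the usual config review:)
# Anonymous clone access over git:// (port 9418):
git daemon --export-all --base-path=/srv/fedora-git --detach --syslog
# Web browsing on port 80 would be gitweb (or similar) behind Apache.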
Ideas?
thanks,
Lennert
Meeting Log - 2007-11-29
by Ricky Zhou
15:00 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Time for a hootinanny
15:00 < mmcgrath> Who's here?
15:00 < abadger1999> I'm at the hootinanny of course!
15:00 * iWolf is here
15:00 * loupgaroublond is bored in class, so he's kibitzing the hootinanny
15:00 < mmcgrath> iWolf: welcome
15:00 -!- MrBawb [i=abob(a)guppy.drown.org] has joined #fedora-meeting
15:00 < mmcgrath> loupgaroublond: :)
15:00 < iWolf> mmcgrath: Thanks!
15:01 -!- Sopwith [n=elliot(a)little-black-box.vmware.com] has joined #fedora-meeting
15:01 -!- giarc_w [i=hidden-u(a)gnat.asiscan.com] has joined #fedora-meeting
15:01 < abadger1999> Hey Sopwith!
15:01 * lmacken is here
15:01 * nirik is in the rabble seats.
15:01 * skvidal is
15:01 < mmcgrath> ricky: ping
15:01 < mmcgrath> paulobanon: ping
15:01 < mmcgrath> dgilmore: ping
15:01 < mmcgrath> jima: ping
15:01 * dgilmore is here
15:01 < mmcgrath> warren: ping
15:01 < warren> pong
15:02 < f13> mmcgrath: pong.
15:02 < mmcgrath> anyone else I forgot: ping
15:02 < mmcgrath> Welcome everyone!
15:02 < mmcgrath> This is the first meeting we've really had since the F8 launch, yippee.
15:03 < mmcgrath> I'd like to take some time and solidify our goals for the F9 launch.
15:03 < f13> beer harder.
15:03 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Fedora 9 - an infrastructure preview.
15:03 < warren> f13, Stacy took all the beer with him.
15:03 < mmcgrath> For those unfamiliar with the thread its:
15:03 < mmcgrath> .tiny https://www.redhat.com/archives/fedora-infrastructure-list/2007-November/...
15:03 < mmcgrathbot> mmcgrath: http://tinyurl.com/yu2jju
15:04 < mmcgrath> I saw some people add some things to that list that they'd like to see, here's a preview as it stands right now... (this might take a bit)
15:04 -!- JSchmitt [n=s4504kr(a)p54B11157.dip0.t-ipconnect.de] has joined #fedora-meeting
15:04 < Sopwith> /query abadger1999
15:04 < Sopwith> (sry)
15:05 < abadger1999> heh
15:05 < loupgaroublond> i can add a bit of advice to #2
15:05 < mmcgrath> Remove all FC6 boxes, separate test infrastructure, finalize backup solution, system hardening and cleanup, further system replication, new torrent server, new collaboration servers, move hosted out of PHX...
15:05 * warren wants to work on hosted.
15:05 < mmcgrath> FAS2 implementation, better ystems integration (bodhi, FAS, pkgdb, koji, etc), less focus this time around on new systems...
15:05 * jima gets out of a meeting and stumbles into another one
15:06 < mmcgrath> SSL auth against apps, better integration with projects like OLPC and CC.
15:06 < mmcgrath> Ok, thats what I've got.
15:06 < mmcgrath> Anyone see anything obvious that I've missed?
15:06 < dgilmore> nope
15:06 < mmcgrath> we can always add stuff but these changes alone will keep us might busy.
15:07 < dgilmore> thats alot of work :)
15:07 < mmcgrath> dgilmore: no doubt.
15:08 < mmcgrath> If there's no objections I'll get these bits added to the F9 milestone.
15:08 -!- JSchmitt [n=s4504kr@fedora/JSchmitt] has quit Client Quit
15:08 -!- jmtaylor [n=jason(a)c-76-112-119-170.hsd1.mi.comcast.net] has joined #fedora-meeting
15:08 < mmcgrath> All in all F8 launch went very well except for the switch failing which took PHX offline.
15:08 < mmcgrath> For F9 we should be able to completely mitigate any issues in PHX related to distribution.
15:08 < mmcgrath> Even for the F8 launch we did not rely on download.fedora.redhat.com as it was not in the mirror list or public list.
15:09 < jima> are we going to relocate the services, or mirror them?
15:09 < dgilmore> do we have a list of phx based things that are ok if they go down
15:09 < mmcgrath> In fact the only thing that stopped us after the switch went down was mirrormanager.
15:09 < dgilmore> and then mirror everything else out
15:09 < mmcgrath> dgilmore: we don't but we need that, I'm actually working on getting our services divided up into 4 sections.
15:09 < mmcgrath> 1) Buildsystem, 2) distribution 3) support and 4) value-added.
15:10 < mmcgrath> 3 would be like, the website for example and 4) would be hosted.fedoraproject.org or fedorapeople.org
15:10 < skvidal> what's stuff ike FAS?
15:10 < mmcgrath> jima: we're actually just going to create duplicates in other colo's.
15:10 < mmcgrath> skvidal: support.
15:10 < dgilmore> skvidal: critical
15:10 < warren> FAS is needed by multiple parts...
15:10 < skvidal> mmcgrath: okay
15:10 < skvidal> that's what I figured
15:11 < mmcgrath> but each of those sections will also have subsections we can consider critical.
15:11 < jima> mirrors/mm are distribution?
15:11 < mmcgrath> jima: well, the mirrorlist app in particular was the part that failed, had we just had it installed in tummy.com our users wouldn't have even noticed the outage.
15:11 < mmcgrath> its designed PERFECTLY for that purpose actually.
15:11 < warren> in two parts
15:12 < warren> management and serving
15:12 -!- clarkbw [i=clarkbw@nat/redhat/x-fe2f50bfb8c9efc5] has joined #fedora-meeting
15:12 < mmcgrath> warren: yeah, and we can handle the management portion being down for a while, the serving piece needs to be HA, and since it keeps its own cache on each app server, if we lose access to the primary DB it'll continue working.
15:13 < mmcgrath> One of the bigger projects this time around is going to be FAS2.
15:13 * paulobanon is here but going home now
15:13 < mmcgrath> its actually been very close for a long time, but we started looking at migration too late in the F8 process where the dev's and releng really need to have a solid environment.
15:13 < jima> yeah
15:14 < f13> we appreciated that (:
15:14 < mmcgrath> :)
15:14 < dgilmore> mmcgrath: are we going to replicate ldap across a few datacantres
15:14 < dgilmore> centres
15:14 < mmcgrath> dgilmore: we will to at least two places, though for the initial rollout we're going to duplicate what we have now.
15:14 < mmcgrath> with each box downloading their own copy of the stuff.
15:15 < mmcgrath> once we're solid with the ldap infrastructure we'll start looking to migrating actual shell acounts to use ldap directly.
15:15 * skvidal cringes
15:15 < skvidal> really?
15:15 < skvidal> it's been my experience that the reliability of nss_ldap is not-so-fantastic
15:15 < dgilmore> mmcgrath: ok the FDS guys just announced the first beta of FDS 1.1 and RDS 8
15:15 < warren> mmcgrath, might it be more reliable to keep our replicated-like setup?
15:16 < dgilmore> skvidal: ive used it for 3 or 4 years now
15:16 < mmcgrath> skvidal: yeah, if the infrastructure is solid enough we really should. We have a lot of sub issues that ocme about from our current setup (people trying to ssh in and not having accounts right away then getting locked out by deny hosts, etc)
15:16 < skvidal> dgilmore: so have I and it full of hurk - esp if you turn on nscd
15:16 < abadger1999> replicated is a pain though... (one-two hour sync time, etc)
15:16 < dgilmore> skvidal: yeah i never use ncsd
15:16 < mmcgrath> If we find that our ldap setup is not reliable enough to use then we'll continue using what we have, but in the meantime this "syncs at the top of the hour stuff" is not good form.
15:17 < skvidal> so do it more often?
15:17 < mmcgrath> its confusing to people and once we actually have LDAP up and running, there's a viable solution to it.
15:17 < skvidal> it's not like the data is heavy
15:17 < dgilmore> abadger1999: the fds replication is quick
15:17 < mmcgrath> skvidal: thats an option as well. I'd prefer not to have a sync if we don't have to, we'll have to discuss this again once FAS2 ships so we can look at it.
15:17 < skvidal> well, we're syncing _something_ no matter what
15:18 < abadger1999> dgilmore: Sorry. I was talking about our current "replicated-like setup"
15:18 < skvidal> the issue is how transparent fixing it is b/t the two situations
15:18 < dgilmore> abadger1999: :) ok
15:18 -!- kital [n=Joerg_Si@fedora/kital] has joined #fedora-meeting
15:18 < mmcgrath> Yep.
15:18 < mmcgrath> anywho, that will be something we need to talk about when the time comes.
15:18 < skvidal> nod
15:18 < f13> how much of the freeipa stuff would apply?
15:19 < mmcgrath> In the meantime ricky is point man on FAS2.
15:19 < dgilmore> f13: not sure
15:19 < f13> and could we get their help in using us as a real world usage case?
15:19 < mmcgrath> f13: on initial rollout none, but prior to other changes we can certainly look at it.
15:19 < skvidal> f13: freeipa is NOT ready yet
15:19 < mmcgrath> IIRC FAS2 is very close to having OpenID support ready as well.
15:19 < f13> skvidal: yet.
15:19 < skvidal> at least from what Iv'e seen so far
15:19 < mmcgrath> f13: can you give us a quick rundown of this release? It seems that F9 alpha is very very close, when would be your prefered time frame for the switchover to FAS2?
15:20 < f13> F9 alpha should be pretty light in infrastructure. It's not a full freeze, just a nonblocking releng freeze
15:20 < f13> it'll be mostly business as usual.
15:20 < f13> We could do FAS2 between alpha and beta if you need that much time.
15:20 * skvidal has to jet - back after a while
15:20 < mmcgrath> skvidal: solid
15:21 < mmcgrath> f13: thats what I was thinking as well. Probably hugging just after the alpha.
15:21 < jwb> FAS2?
15:21 < mmcgrath> jwb: Fedora Account System 2.
15:21 < jwb> apparently i'm totally missing where this is being discussed
15:21 -!- stick [n=stick(a)cpe-069-134-113-166.nc.res.rr.com] has joined #fedora-meeting
15:22 -!- loupgaroublond [n=loupgaro(a)dijk249.athome232.wau.nl] has quit "class is over early"
15:22 < mmcgrath> we're discussing it here now, and in #fedora-admin or https://hosted.fedoraproject.org/projects/fas2/
15:22 < mmcgrath> really though there's not much more to say on it for the meeting though :)
15:22 < mmcgrath> Anyone have anything they'd like to discuss for the F9 release (either in process or in new features they'd like to have) ?
15:23 < mmcgrath> Ok, we'll move on then.
15:23 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Serverbeach
15:23 * jwb once again confuses -meeting with -devel because of xchat tab reordering
15:24 < mmcgrath> As many of you know we've had 5 serverbeach servers given to us and we're working out some specifics but they should be ready soon.
15:24 < dgilmore> mmcgrath: how have we done with xen bridges
15:24 < mmcgrath> We've got it setup so that we can do xen guests on them though its in a fairly complex way. It will be included in the kickstart SOP I'm writing.
15:24 < dgilmore> mmcgrath: warren had said they dont allow it
15:24 < warren> they wont allow the normal one
15:24 < warren> normal way
15:24 < mmcgrath> dgilmore: its not that they do or don't allow it, its that in their network setup it doesn't work.
15:25 < warren> mmcgrath, you did it with private bridging + NAT?
15:25 < dgilmore> mmcgrath: fun
15:25 < warren> mmcgrath, so you already found a solution?
15:25 < warren> don't need me?
15:25 * warren is sad.
15:26 < mmcgrath> For example, serverbeach2 has an IP address of 64.34.163.95 and when we requested additional IP addresses we were given 64.34.195.12.
15:26 < mmcgrath> warren: sorry :) I do think we have a good solution though.
15:26 < warren> mmcgrath, it is standard procedure for them to give IP's on a different subnet.
15:26 < mmcgrath> the problem with the 64.34.195.0/24 network is that it does not have a default gateway. Each ip given to is is given a static route through 64.34.163.95.
15:26 * ricky is here (darn you, DST!)
15:26 < warren> mmcgrath, I don't mind that you figured out your own solution, but I knew how to do it way back and I feel like I've been ignored.
15:26 -!- sankarshan [n=sankarsh@fedora/sankarshan] has quit Read error: 110 (Connection timed out)
15:27 < mmcgrath> So in order to give 64.34.195.12 access to the internet we have to give our xen dom0 an IP in that range
15:27 < mmcgrath> warren: we just didn't want to have to maintain a natpool for each host. this way is a two cli method and then there's no further upkeep required, even when we add additional machines.
15:28 < warren> mmcgrath, I had to do all of this months ago so I was fully aware of what was needed. I could have given you sample configurations and had it up and running in less than an hour with reproducible documentation.
15:28 < mmcgrath> sorry, didn't mean to ignore you, we were just looking at the nat solution as last resort.
15:28 < warren> mmcgrath, uh, that's exactly what I did.
15:28 < mmcgrath> warren: you used nat?
15:28 < MrBawb> why not use proxy arp and local routing?
15:28 < warren> mmcgrath, well, didn't hear your full explanation yet.
15:29 < warren> mmcgrath, you didn't even give me a chance to setup a demo on one unused box.
15:29 -!- sankarshan [i=sankarsh@fedora/sankarshan] has joined #fedora-meeting
15:29 < mmcgrath> warren: sorry, we were just busy with stuff. The solution we came up with is:
15:29 < mmcgrath> ifconfig eth0:1 64.34.195.13 netmask 255.255.255.0; route add -net 64.34.195.0/24 gw 64.34.195.13 eth0
15:29 < warren> mmcgrath, essentially my solution has each IP on dom0 but each guest behaves as if it owns that IP, without their network seeing the fake IP's.
15:29 < mmcgrath> is that the same as yours?
15:30 < warren> err... fake MAC's
15:30 < warren> mmcgrath, wait, how do the xen guests use that interfacE?
15:31 < warren> you're only so far describing how it works on dom0
15:31 < mmcgrath> don't need to, the dom0 does it. The xen guests are still bridged, they're just contacting that IP as their gateway, the dom0 OS itself handles the routing and the static routing in place on the routers at SB handle the rest. Since its transparent to the xen guests, we can just add more without having to make any changes to the dom0
15:32 < mmcgrath> which makes it handy because the documentation is exactly the same for creating a domU inside serverbeach or inside PHX.
15:32 < dgilmore> mmcgrath: sounds good
15:32 < warren> mmcgrath, is there a xen host I can ssh into to see how it is setup?
15:32 < warren> or is this documented?
15:32 < warren> OK, this setup is different
15:32 < mmcgrath> serverbeach2. I'm still writing the documentation.
15:33 < mmcgrath> serverbeach2.fedoraproject.org that is.
15:33 < warren> mmcgrath, still, I don't appreciate being ignored when I had a similar solution to this.
15:33 -!- mdomsch [n=Matt_Dom(a)70.124.62.55] has quit "Leaving"
15:34 < mmcgrath> warren: sorry, I had just assumed you were using nat which was something we knew would work but not something we wanted to have to maintain, it'd make documentation and troubleshooting more difficult.
15:34 < mmcgrath> warren: serverbeach2.fedoraproject.org is up as is its domU at 64.34.195.12
15:35 < mmcgrath> though its domU guest isn't in puppet yet :-/
15:35 < mmcgrath> ok, lets move on for now.
15:35 < warren> I was even loudly suggesting serverbeach a year ago for this same purpose, knowing the limitations we would face, and when we finally do it I'm kept out of the design.
15:35 < warren> ok, move on
15:35 < dgilmore> warren: lets move on
15:36 < mmcgrath> The other server we're still working on getting is the new Dell in our Frankfurt colo.
15:36 < mmcgrath> Its ordered (has been for over 2 months now)
15:36 < mmcgrath> but its been lost in Dell for a little bit, I think thats been figured out just as of yesterday though I haven't heard an ETA on delivery.
15:37 < dgilmore> mmcgrath: we have half a rack there?
15:37 < mmcgrath> I'm hoping to find funding to fill the frankfurt colo for our expected growth.
15:37 < mmcgrath> dgilmore: yeah, half rack.
15:37 < mmcgrath> and donated remote hands.
15:37 < mmcgrath> so thats going to be a solid location for us I believe.
15:37 < mmcgrath> Anywho, thats where we are expanding at present.
15:37 < mmcgrath> Speaking of expansion.......
15:38 < jima> be a shame if a bladecenter fell into that half rack ;)
15:38 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Koji share
15:38 < mmcgrath> f13: ping
15:39 < f13> yep
15:39 < mmcgrath> f13: did the last GC ever run?
15:39 < mmcgrath> we're at 91%
15:39 < f13> mmcgrath: I haven't heard from mikem sadly, silly paternaty leave :/
15:39 < f13> I'll ping him via email to see if we can get a response outof him
15:40 < mmcgrath> f13: lets just say that the next gc takes us from 90% to 91%. What steps are we going to take?
15:40 < f13> (I had asked for the information on how to do it myself but I didn't get anything)
15:40 < mmcgrath> We can hope that people won't be coding and building as much over the holidays but that sounds foolish to me ;)
15:40 * warren gets more done during holidays
15:41 < f13> mmcgrath: We can start taking durastic measures like removing all the fe7-merge stuff that isn't latest in an active tag
15:41 < f13> er wait
15:41 < f13> hrm.
15:41 < f13> mmcgrath: did you get the whatever submitted to oracle yet?
15:42 < mmcgrath> f13: I still don't have my access back, spevack is working on it.
15:42 < mmcgrath> spevack: ^^^ BTW :)
15:42 < mmcgrath> f13: I seriously doubt that we'll have a solution ready by the end of the year.
15:42 < warren> spevack, unless you enjoy the silence of the buildsystem. =)
15:42 < mmcgrath> Its going to have to be done on or two the koji share at some point.
15:44 < f13> wha?
15:44 < mmcgrath> f13: if it gets to the point where the koji share fills up and we can't build anymore do we have options to further purge stuff, even if it results in the loss of tagged packages?
15:44 < f13> We might be able to prune more signed packages, things that have already been shipped and could be re-created in koji, the signed copy doesn't need to continue to exist.
15:44 < f13> yes, we have some more pruning options.
15:44 < mmcgrath> k.
15:45 < mmcgrath> I'm going to hope for the best but it might come to that, the koji share grows, it grows!
15:45 < f13> yeah
15:45 < warren> mmcgrath, setup big red buttons
15:45 * mmcgrath could use many big red buttons.
15:45 < mmcgrath> ok, we'll move to the next thing.
15:46 < jima> which is...? :)
15:46 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Tickets
15:46 < mmcgrath> .tiny https://hosted.fedoraproject.org/projects/fedora-infrastructure/query?sta...
15:46 < mmcgrathbot> mmcgrath: http://tinyurl.com/yth34b
15:46 < mmcgrath> .ticket 154
15:46 < mmcgrathbot> mmcgrath: #154 (DNS) - Fedora Infrastructure - Trac
15:46 < mmcgrath> The DNS stuff is coming along (this is the vpn.fedoraproject.org stuff) we need to add a few more hosts, its basically blocking on a rebuild of cvs-int.
15:46 < mmcgrath> .ticket 192
15:46 < mmcgrathbot> mmcgrath: #192 (Netapp low on free space) - Fedora Infrastructure - Trac
15:47 < mmcgrath> We already talked about that.
15:47 < mmcgrath> .ticket 222
15:47 < mmcgrathbot> mmcgrath: #222 (sysctl on the proxy servers) - Fedora Infrastructure - Trac
15:47 -!- JSchmitt [n=s4504kr(a)p54B11157.dip0.t-ipconnect.de] has joined #fedora-meeting
15:47 -!- kwizart [n=kwizart@fedora/kwizart] has joined #fedora-meeting
15:47 < mmcgrath> It seems the sysctl setup we have now works well, I'm going to commit those changes to puppet soon and make it part of our default setup for externally facing hosts.
15:47 < mmcgrath> So thats it on those.
15:47 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Open Floor
15:47 < f13> mmcgrath: I just found some more space to clear off the koji store
15:48 < mmcgrath> f13: clear it!
15:48 < f13> I just did
15:48 < mmcgrath> Anyone have anything else they'd like to discuss for this meeting?
15:48 < f13> what is the cacti link again?
15:48 -!- GeroldKa [n=GeroldKa@fedora/geroldka] has joined #fedora-meeting
15:48 * warren throws cacti at f13
15:48 < mmcgrath> .tiny https://admin.fedoraproject.org/cacti/graph.php?action=view&rra_id=all&lo...
15:48 < mmcgrathbot> mmcgrath: http://tinyurl.com/25pnep
15:49 < jwb> what is the point of .tiny??
15:49 < jima> makes it easier to copy/paste, i imagine
15:49 < ricky> It'd be cool to integrate with .ticket :)
15:49 < mmcgrath> f13: am I correct in assuming that the GC stuff that has been implemented is going to be an on-going ad-hoc thing? There's not a monthly job or anything?
15:50 < f13> mmcgrath: it's supposed to be an automated job that runs daily or weekly
15:50 < mmcgrath> jwb: what jima said, some people are in terminals (like me) big url's get annoying.
15:50 -!- knurd is now known as knurd_afk
15:50 < f13> throwing stuff in trash as it goes, emptying out stuff that's been in trash for a timeout period
15:50 -!- GeroldKa [n=GeroldKa@fedora/geroldka] has quit Client Quit
15:50 < mmcgrath> ahh, I'm mistaken then.
15:50 < mmcgrath> keep me in the loop if you get ahold of Mike.
15:50 < f13> so we'd just see continual small amounts cleaned, and an overall slower growth
15:50 < f13> nod
15:51 < J5> hey guys, abadger1999 suggested I let you guys know what I am working on. Let me know if there is a break where I can start babbling
15:51 < f13> we've got 235G free now
15:51 < jwb> mmcgrath, gnome terminal can open links
15:51 < jwb> anyway, i'll shut up
15:51 * mmcgrath doesn't always have these meetings in X.
15:51 < mmcgrath> J5: you can start babbling now. Open floor.
15:51 -!- G__ [n=njones@wikipedia/NigelJ] has joined #fedora-meeting
15:52 -!- JSchmitt [n=s4504kr@fedora/JSchmitt] has quit "Konversation terminated!"
15:52 < J5> ok, so I just put up a wiki page for reference http://wiki.fedoraproject.org/MyFedora
15:52 < mmcgrath> http://fedoraproject.org/wiki/MyFedora
15:52 * mmcgrath had troubles with wiki.fp.o
15:53 -!- sankarshan [i=sankarsh@fedora/sankarshan] has quit Read error: 110 (Connection timed out)
15:53 < J5> I am working on integration of all of fedora's resources to make our developer community more efficent
15:53 < dgilmore> J5: :)
15:53 < mmcgrath> J5: you're talking about number 10) in our F9 target - https://www.redhat.com/archives/fedora-infrastructure-list/2007-November/...
15:53 < warren> J5, sounds like beginning to tie all the pieces together like Ubuntu has done with launchpad?
15:54 < J5> this includes talking to all of you guys as well as throwing ideas out and implementing some of theose ideas
15:54 < J5> warren: I'm thinking way beyond launchpad but that is a start
15:54 < jwb> J5, we should call your effort futopia
15:54 < abadger1999> mmcgrath: #10 definitely has a place in this but J5's ideas go beyond it too.
15:54 < mmcgrath> J5: for developers or for users as well.
15:54 < J5> mmcgrath: developers first then users
15:55 < J5> I want to integrate upstream also
15:55 * f13 spots a big glaring problem (:
15:55 < mmcgrath> f13: ?
15:55 < f13> J5: just an fyi and something to think about, THe packages that are 'released' with say F9, don't show up in the bodhi list
15:56 < J5> http://fedorapeople.org/~johnp/fedora_package_maint.pdf is the large overview piece
15:56 < mmcgrath> bah, thats implementation.
15:56 -!- sankarshan [n=sankarsh@fedora/sankarshan] has joined #fedora-meeting
15:56 < f13> mmcgrath: sure, that's why its an fyi
15:56 < J5> f13: that is taken into account - just show the last build :)
15:56 < J5> the idea is that information comes from a number of sources but is consolidated into common views
15:57 < mmcgrath> J5: well I, for one, am generally for this idea, If you're interested in moving forward with it, please create a ticket https://hosted.fedoraproject.org/projects/fedora-infrastructure/ with the links and information you just sent us. That way we can keep the discussion in one spot.
15:57 < ricky> Sounds very cool :)
15:57 -!- sankarshan [n=sankarsh@fedora/sankarshan] has quit Read error: 104 (Connection reset by peer)
15:57 < J5> will do
15:58 < mmcgrath> Ok, anyone have anything else they'd like to discuss?
15:58 < lmacken> J5: this also encompasses the 'amber' project that rnorwood has been working on
15:58 < mmcgrath> J5: FYI, you can also do http://johnp.fedorapeople.org/fedora_package_maint.pdf
15:59 < mmcgrath> Ok, anyone else have anything to bring up, we're almost out of time.
15:59 < J5> so I see myself as a facilitator here. There are a lot of projects with a lot of steam and I don't want to slow them down. So we have this one integration point where we can also discuss common goals.
16:00 < abadger1999> mmcgrath: FYI: I think that scratch builds are going to start being used in reviews and such.
16:00 < mmcgrath> J5: lets chat about this after the meeting in #fedora-admin if you have a moment.
16:00 < mmcgrath> abadger1999: Ugh, can we *not* do that until January?
16:00 < J5> sounds good
16:00 < f13> mmcgrath: 238G free now
16:01 < warren> Let's not do that until we have more storage
16:01 < warren> we're near critical now
16:01 < f13> abadger1999: "start"? scratch buidls have been used in revies for a while, it' sjust not well advertised
16:01 < f13> scratch buidls do automatically get pruned though
16:01 < mmcgrath> f13: you freed up 44G earlier.
16:01 < abadger1999> f13: yeah -- I mean "much more widely publicized"
16:01 < f13> mmcgrath: yes, I found some trees in rel-eng/ that didn't need to be there anymore.
16:01 < f13> currently scratch/ is only taking up 5 g
16:01 < f13> 5.7 to be precise.
16:02 < mmcgrath> f13: weren't a bunch of them recently purged though?
16:02 < f13> mmcgrath: scratch builds are autopruned by internal koji processes
16:02 < mmcgrath> Though I suppose we can purge them again.
16:02 < mmcgrath> f13: ahh, k. that I didn't know. Whats the algorithm?
16:02 < f13> not entirely sure
16:03 < f13> I think they're kept for a week
16:03 < mbonnet> mmcgrath: anything older than 30 days
16:03 < mbonnet> but we can crank that down if we want
16:03 -!- G [n=njones@wikipedia/NigelJ] has quit Connection timed out
16:03 < mmcgrath> abadger1999: I don't know if there's anything we can do about it but at this point I'm generally against encouraging anyone to build anything on our buildsystem.
16:03 < f13> ah 30
16:03 < abadger1999> heh :-)
16:04 < mmcgrath> we'll leave it at 30 for now if its only 5G, but its good to know we can crank that down.
16:04 < abadger1999> mbonnet: Maybe we should watch the size of scratch and if it grows, turn it down? I think a week is reasonable for scratch builds (people should be copying the files out to other storage if they want it longer.)
16:04 < spevack> mmcgrath, warren: i'm on it
16:05 < mmcgrath> spevack: rock, thanks.
16:05 < mmcgrath> ok, we can discuss the rest of this stuff in #fedora-admin, we're running over :)
16:05 < mmcgrath> anyone else have anything to discuss?
16:05 < mmcgrath> if not I'll close in 30
16:05 < mmcgrath> 15
16:05 < mbonnet> oh, no, I take it back, it is 7 days
16:06 < mbonnet> it's in /etc/cron.d/koji-scratch-cleanup
16:06 -!- MrBawb [i=abob(a)guppy.drown.org] has left #fedora-meeting []
16:06 < mbonnet> on koji.fp.o
16:06 < mmcgrath> mbonnet: good to know, thanks :)
16:06 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Meeting Closed
16:06 < f13> mbonnet: makes sense, I only see builds as far back as the 21st
16:06 < spevack> mmcgrath: i've given my "approval" to helpdesk. Why they needed it is beyond me :)
16:06 -!- bklemm [n=3745xyz(a)c-71-201-246-251.hsd1.il.comcast.net] has quit "ircII EPIC4-2.6 -- Are we there yet?"
16:06 < mmcgrath> spevack: I want to know why it was removed in the first place, it came at a terrible time :(
16:06 * spevack wishes he could just sign something that says "if mmcgrath asks for something, give it to him" :)
16:06 < mmcgrath> Anywho, thank everyone for coming!
16:06 < ricky> Thanks a lot!
16:07 * mmcgrath requires a Fabergé egg.
16:07 < spevack> mmcgrath: let's get it reinstated, and then we'll find out why it disappeared
16:07 < ricky> By the way, with the DST change, I won't be able to make the first 10 minutes or so for most meetings :(
16:07 < ricky> Whoops, I mean 20 minutes :(
16:08 -!- mmcgrath changed the topic of #fedora-meeting to: Channel is used by various Fedora groups and committees for their regular meetings | Note that meetings often get logged | For questions about using Fedora please ask in #fedora | See http://fedoraproject.org/wiki/Communicate/FedoraMeetingChannel for meeting schedule
TG Apps and caching
by Toshio Kuratomi
Hey all, good news! We've finally got caching of static data on our
TurboGears applications running. This builds on the mod_cache work that
paulobanon did earlier to have Apache cache the images, stylesheets,
javascript, and other non-dynamic files served by our TurboGears apps.
So far, we have the following programs/directories cached:
bodhi:
admin.fp.o/updates/static
admin.fp.o/updates/tg_widgets/turbogears.widgets
packagedb:
admin.fp.o/pkgdb/static
admin.fp.o/pkgdb/tg_js
mirrormanager:
admin.fp.o/mirrormanager/static
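A quick way to spot-check that one of the paths above is actually being
served from the cache (illustrative only -- the file name is just an
example, and the exact headers you see depend on how mod_cache/mod_expires
are configured):
curl -sI https://admin.fedoraproject.org/pkgdb/static/css/layout.css \
    | grep -iE '^(cache-control|expires|age):'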
If there are other directories of purely static data in your application
that you'd like us to set up caching of (glezos, I'm looking at you ;-),
please get in touch with me or file a ticket:
https://hosted.fedoraproject.org/projects/fedora-infrastructure/newticket
-Toshio
Restart TG apps for high mem-usage
by Toshio Kuratomi
Here's a short script to check the TG apps we run via supervisor for
excessive memory usage and restart them if necessary. We could run this
from cron on alternating hours on each app server. Does this seem like a
good or bad idea to people?
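(For illustration only -- not necessarily the script in question -- it
amounts to something along these lines; the app names and the 500MB limit
are just examples:)
#!/bin/bash
# Illustrative sketch: restart supervisor-managed TG apps whose resident
# memory exceeds a threshold.  App names and limit are examples only.
LIMIT_KB=$((500 * 1024))
for app in bodhi pkgdb mirrormanager; do
    # supervisorctl status output looks like: "name RUNNING pid 1234, uptime ..."
    pid=$(supervisorctl status "$app" | awk '{gsub(",", "", $4); print $4}')
    case "$pid" in (''|*[!0-9]*) continue ;; esac
    rss=$(ps -o rss= -p "$pid")
    if [ "${rss:-0}" -gt "$LIMIT_KB" ]; then
        logger -t tg-memcheck "restarting $app (RSS ${rss} kB > ${LIMIT_KB} kB)"
        supervisorctl restart "$app"
    fi
done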
-Toshio
Updated hostedreposync proposal
by Toshio Kuratomi
Since opening up creation of hosted sites to people other than Jesse,
we've had to straighten out a few steps here and there that the people
we want managing the site have not been able to perform on their own.
One of those is the script that rsyncs the source repositories from
cvs-int to lockbox (where they are stored on the netapp).
hostedreposync, needs to be updated with the names of the new hosted
repositories in order to sync them.
I'm attaching a new hostedreposync that would rsync the whole
$SCM/hosted tree from cvs-int to the netapp instead of cherry picking
the individual repositories. This allows us to stop editing the
hostedreposync script every time a new hosted repository is added, but it
does have some differences from the current script:
1) More repositories are pulled over. I count 39 repositories in the
hosted trees on cvs-int that aren't listed in the current hostedreposync
script and will now be pulled over.
2) Old repositories will be deleted from the netapp. These repositories
are present on the netapp but not on cvs-int and would be erased with
the new script:
pungi.bak (there is a pungi repo)
mock (probably replaced by mock.git)
func (probably replaced by func.git)
surfr
Comments?
-Toshio
#!/bin/bash
HGRSYNC="cvs-int.fedora.redhat.com/hgrepos/hosted"
GITRSYNC="cvs-int.fedora.redhat.com/gitrepos/hosted"
SVNRSYNC="cvs-int.fedora.redhat.com/svnrepos/hosted"
BZRRSYNC='cvs-int.fedora.redhat.com/bzrrepos/hosted'
# One off -- should be merged on the new hosted boxes
GITRELENG="../fedora/releng"
# Sync the hg repos
rsync -aH --delete --delete-after rsync://${HGRSYNC}/ /netapp/app/scm/hg
# Sync the git repos
rsync -aH --delete --delete-after rsync://${GITRSYNC}/ /netapp/app/scm/git
# Sync the svn repos
rsync -aH --delete --delete-after rsync://${SVNRSYNC}/ /netapp/app/scm/svn
# Sync bzr repos
rsync -aH --delete --delete-after rsync://${BZRRSYNC}/ /netapp/app/scm/bzr
rsync -aH --delete --delete-after rsync://${GITRSYNC}/${GITRELENG} /netapp/app/scm/git
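(As a hypothetical pre-flight check, not part of the script itself: running
the new sync with -n first shows which repositories the --delete behaviour
would remove from the netapp before anything is actually touched, e.g. for
the git tree:)
rsync -aHvn --delete --delete-after \
    rsync://cvs-int.fedora.redhat.com/gitrepos/hosted/ /netapp/app/scm/git \
    | grep '^deleting '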
Our Web Apps and SSL
by Toshio Kuratomi
I've had this in the back of my mind for a while but only looked at it
yesterday. I think we have a potential problem with the way kojiweb is
using SSL. To a lesser extent it affects our TurboGears apps as well.
= Koji =
Kojiweb uses SSL to authenticate the client. This is fine. Kojiweb
then stores a session cookie on the client's machine so the client
doesn't have to go through the auth mechanism on every transaction.
This is also fine. However, kojiweb does not require that this cookie
be sent back to the server via SSL; if you initially hit koji via a
non-SSL connection, only the authentication itself uses SSL, and koji sends
the session cookie over an unencrypted connection. This leaves koji
open to packet sniffing and man-in-the-middle attacks.
To prevent this we should be doing two things:
1) Set the session cookie's secure flag to True
2) Once logged in, return the user to an https URL rather than http.
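(One way to sanity-check both points once they're fixed -- the hostname and
path here are only placeholders, and a real login also needs the client
certificate, but the idea is that the session cookie's Set-Cookie header
should carry "secure" and a plain-http hit should bounce to https:)
curl -sI 'https://koji.fedoraproject.org/koji/' | grep -i '^set-cookie'
curl -sI 'http://koji.fedoraproject.org/koji/'  | grep -i '^location'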
= TurboGears =
Our TurboGears apps are all running behind
https://admin.fedoraproject.org so they have to use an SSL link in order
to pull up content. However, the plain http link is active; it just
redirects to the SSL page. This means that if you log in and then
explicitly request a plain http URL the session cookie will be returned
to the server over an unencrypted connection. This is not too bad as
the TG servers should be set up to return https links (so someone would
have to actually change the URL to http after logging in), but it is a hole.
I sent an email last month to say that we'd be upgrading to TG-1.0.3 to
close this hole but dropped the ball on actually doing the upgrade.
I'll be doing that today; please let me know if you experience any
strange problems with your web application and we'll try to work out if
it's TG-1.0.3 related.
-Toshio
Re: manifests/nodes xen6.fedora.phx.redhat.com.pp,1.3,1.4
by Seth Vidal
On Mon, 2007-11-26 at 23:17 -0700, Seth Vidal wrote:
> Author: skvidal
>
> Update of /cvs/puppet/manifests/nodes
> In directory puppet1.fedora.phx.redhat.com:/home/fedora/skvidal/manifests/nodes
>
> Modified Files:
> xen6.fedora.phx.redhat.com.pp
> Log Message:
> make xen6 stop eating the iptable allow to make it's /u01/bacula export work
>
this change was to keep bacula from getting fatal errors b/c
the /u01/bacula export off of xen6 was not being allowed to be mounted
from xen5.
-sv
[Fwd: mirrors.fedoraproject.org/mirrorlist: HTTP Error 500: Internal Server Error]
by Seth Vidal
Forwarding this report of mirrorlist failures; also, maybe we should fix that
contact email address from sysadmin-devel(a)rh.c to admin(a)fp.o
-sv
-------- Forwarded Message --------
From: Ralf Corsepius <rc040203(a)freenet.de>
To: sysadmin-devel(a)redhat.com
Cc: seth vidal <skvidal(a)fedoraproject.org>
Subject: mirrors.fedoraproject.org/mirrorlist: HTTP Error 500: Internal
Server Error
Date: Tue, 27 Nov 2007 05:51:38 +0100
Accessing http://mirrors.fedoraproject.org/mirrorlist returns this:
<snip>
Internal Server Error
The server encountered an internal error or misconfiguration and was
unable to complete your request.
Please contact the server administrator, sysadmin-devel(a)redhat.com and
inform them of the time the error occurred, and anything you might have
done that may have caused the error.
More information about this error may be available in the server error
log.
________________________________________________________________________
Apache/2.2.6 (Fedora) Server at app3.fedora.phx.redhat.com Port 80
</snip>
With http://mirrors.fedoraproject.org/mirrorlist being the default and
central bottleneck in Fedora configurations, this probably affects all
Fedora users world-wide who are trying to use yum:
# yum update
...
Could not retrieve mirrorlist http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-8&arch=i386 error was
[Errno 14] HTTP Error 500: Internal Server Error
Error: Cannot retrieve repository metadata (repomd.xml) for repository: fedora. Please verify its path and try again
# repoquery -q --whatrequires 'perl(File::NCopy)'
Could not retrieve mirrorlist http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-8&arch=i386 error was
[Errno 14] HTTP Error 500: Internal Server Error
Cannot retrieve repository metadata (repomd.xml) for repository: fedora. Please verify its path and try again
# date -u
Tue Nov 27 04:51:27 UTC 2007
Ralf
Log analyzer improvements, ticket #226
by Michael Yingbull
Hi all,
I'm following up from ticket #226, which is tracking improvements to the log
analyzer system.
This would be what analyzes the logs on lockbox, which is the syslog host
for infrastructure machines:
https://hosted.fedoraproject.org/projects/fedora-infrastructure/ticket/226
I wanted to capture what we wanted the new analyzer to do.
Main feedback I had from discussion in #fedora-admin was a need for more
signal, less noise:
the current 'analyzed' logs were too verbose and had too much cruft.
Did I capture that requirement?
Are there other requirements besides improving the presentation?
Anything else that people feel they need from the log analyzer that they
aren't getting?
Currently Epylog is used - I did some looking around, and I'm not seeing
anything that looks like it's any better.
If someone knows another open source log analyzer they think would be much
better, I'd like to hear about it.
Otherwise, my plan is to continue with Epylog, reconfigure it... and, if
really needed, patch it and contribute upstream.
Thanks all, hope everyone is having a good weekend.
Cheers,
Michael