On 5/25/07, Dennis Gilmore dennis@ausil.us wrote:
Once upon a time Friday 25 May 2007, Mike McGrath wrote:
Over the last couple of weeks we've been using puppet to distribute static content across some of our application servers and proxy servers.
Static content might include the new static webpage or an application like our accounts system.
This has proved to be a bit of an issue. Puppet wasn't really designed for this, so it puts a noticeable load on the boxes while running and makes runs take longer. Puppet works for this, but it's now managing thousands of files and initial deploys take a long time :) In the past we'd discussed moving some things (like TurboGears apps) around using rpms. We can do that with tg pretty easily. But what about other static content, images, things like that?
This needs to be scriptable from start to finish. Here are the options as I see them:
- Straight nfs mount (boo)
- nfs mount with a cron job to copy the files
- recursive wget to an http store somewhere
- rsync via ssh keys or an rsync server (I'm currently leaning towards this; rough sketch below)
- Figure out how to make puppet more efficient with large numbers of files.
We've got a whole pool of sysadmins on this list. How do you deal with these issues in your current environments?
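For the rsync-over-ssh option above, here is a minimal sketch of what a push script could look like. The host names and the /srv/web/static paths are placeholders, not the real infrastructure layout, and it assumes ssh keys are already distributed:

#!/usr/bin/env python
# Minimal sketch of the "rsync via ssh keys" option: push a static content
# tree from one master box out to each app/proxy server.
# Hosts and paths below are placeholders, not the real infrastructure layout.
import subprocess
import sys

SOURCE = "/srv/web/static/"   # trailing slash: sync the contents, not the dir
DEST = "/srv/web/static/"
HOSTS = ["app1.example.org", "app2.example.org", "proxy1.example.org"]

def push(host):
    # -a preserves perms/times, -z compresses, --delete drops stale files
    return subprocess.call(["rsync", "-az", "--delete", "-e", "ssh",
                            SOURCE, "%s:%s" % (host, DEST)])

failed = [h for h in HOSTS if push(h) != 0]
if failed:
    sys.stderr.write("rsync failed for: %s\n" % ", ".join(failed))
    sys.exit(1)

Run from cron or kicked off by a puppet Exec on the content master, something like this keeps puppet itself out of the per-file bookkeeping.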
How about using CVS and scripting a checkout of the content? I would say either that or rsync. Since a lot of it, like the accounts system, is already in CVS, why not use that?
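A minimal sketch of scripting such a checkout, assuming the content lives in a CVS module; the CVSROOT, module name, and target directory below are invented for illustration. It does an initial checkout if there is no working copy yet and a cvs update otherwise:

#!/usr/bin/env python
# Sketch of a scripted CVS checkout/update of static content.
# CVSROOT, module name, and target directory are invented for the example.
import os
import subprocess
import sys

CVSROOT = ":ext:cvsuser@cvs.example.org:/cvs/web"
MODULE = "web-static"
TARGET = "/srv/web"

workdir = os.path.join(TARGET, MODULE)
if not os.path.isdir(os.path.join(workdir, "CVS")):
    # no working copy yet: do the initial checkout under TARGET
    rc = subprocess.call(["cvs", "-d", CVSROOT, "checkout", MODULE], cwd=TARGET)
else:
    # working copy exists: just update it (-d picks up new dirs, -P prunes empty ones)
    rc = subprocess.call(["cvs", "-q", "update", "-dP"], cwd=workdir)
sys.exit(rc)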
I was going to suggest SVN, but it's roughly the same thing. Have the content checked in and automate a checkout. You can even check out only the things tagged for that machine if you want. I think you could use Puppet to kick off the checkout.
stahnma
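For the per-machine tag idea, a rough sketch assuming a Subversion layout with a tags/<hostname> directory per box (the repository URL and layout are made up here). Puppet or cron could simply run this on each machine:

#!/usr/bin/env python
# Sketch of checking out only the content tagged for this machine from SVN.
# The repository URL and the tags/<hostname> layout are assumptions, not the
# real repository structure.
import os
import socket
import subprocess
import sys

REPO = "https://svn.example.org/web-content"
TARGET = "/srv/web/static"

url = "%s/tags/%s" % (REPO, socket.gethostname())
if os.path.isdir(os.path.join(TARGET, ".svn")):
    # working copy exists: point it at this host's tag (switch also updates it)
    cmd = ["svn", "switch", url, TARGET]
else:
    cmd = ["svn", "checkout", url, TARGET]
sys.exit(subprocess.call(cmd))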
Dennis