Re: About the recent invasion
by Henrique Junior
Thank you, Paul.
I'll spread the announcement.
Henrique "LonelySpooky" Junior
________________________________
"In a world without walls and fences, who needs windows and gates?!"
----- Original Message ----
> From: Paul W. Frields <stickster(a)gmail.com>
> To: Fedora Infrastructure <fedora-infrastructure-list(a)redhat.com>
> Sent: Friday, September 12, 2008 13:59:43
> Subject: Re: About the recent invasion
>
> On Fri, 2008-09-12 at 09:40 -0700, Henrique Junior wrote:
> > Hello, guys,
> > I'm sorry if this list is not the right place to post this question,
> > but I can't figure out a better place.
> > As a Fedora ambassador (in Brazil) I've been asked by a lot of people
> > about the recent invasion of our servers. The question I was asked
> > yesterday was “how did it happen?”
> > I'd like to explain here exactly what happened, to make our users
> > more comfortable and confident.
> > Please excuse my bad English.
>
> Hello Henrique. You can refer to the following announcement for the
> most recent update:
> http://www.redhat.com/archives/fedora-announce-list/2008-August/msg00012....
>
> This is an ongoing investigation, and we'll provide another update as
> soon as more information is available.
>
> --
> Paul W. Frields
> gpg fingerprint: 3DA6 A0AC 6D58 FEC4 0233 5906 ACDB C937 BD11 3717
> http://paul.frields.org/ - - http://pfrields.fedorapeople.org/
> irc.freenode.net: stickster @ #fedora-docs, #fedora-devel, #fredlug
Meeting Log - 2008-09-11
by Ricky Zhou
20:00 -!- dgilmore changed the topic of #fedora-meeting to: infra meeting prep
20:00 * abadger1999 stretches
20:00 < dgilmore> hey all meetingtime
20:00 * ValHolla walks in
20:00 < SmootherFrOgZ> hello guys
20:00 < abadger1999> Hey ValHolla!
20:01 < abadger1999> Hello SmootherFrOgZ
20:01 < brothers> EHLO
20:01 < ValHolla> Hello
20:01 * wakko666 lurks.
20:01 < dgilmore> we will start with https://fedorahosted.org/fedora-infrastructure/query?status=new&status=as...
20:01 < dgilmore> the tickets :)
20:01 * fchiulli is observing
20:01 * mmcgrath notes G said he won't be around and sends his best.
20:01 < dgilmore> .ticket 753
20:01 < zodbot> dgilmore: #753 (Mini-freeze for Fedora 10 beta) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/753
20:01 * ricky
20:01 * nokia3510 says hello
20:02 < dgilmore> we need to plan infra freeze for F-10 release
20:02 < dgilmore> and a mini one for beta
20:02 < dgilmore> f13: when is beta. what would you like frozen?
20:04 < dgilmore> mmcgrath: thanks :)
20:04 < f13> mmcgrath write up a chart that showed what would be frozen and what wouldn't. THe beta freeze is supposed to be today
20:04 -!- skvidal [n=nnnnnnsk@fedora/skvidal] has joined #fedora-meeting
20:04 < mmcgrath> f13: ah, k. I wasn't sure about the dates. given the fas issues that are going on, mind if we start the freeze tomorrow?
20:05 < dgilmore> f13: i have a koji outage scheduled for tomorrow
20:05 < dgilmore> i guess we should have talked about this last week
20:05 -!- kital [n=Joerg_Si@fedora/kital] has quit Remote closed the connection
20:06 < mmcgrath> yeah, its the first time we've really done a beta freeze, I wasn't sure when it was going to start either.
20:06 < mmcgrath> f13: so next time we'll be more careful, mind if we start the freeze on the 13th?
20:06 < f13> this beta is looking like a rolling wreck anyway, so sure, why not
20:06 < mmcgrath> hahahah
20:06 < dgilmore> f13: :)
20:06 < abadger1999> As long as it's rolling :-)
20:06 < mmcgrath> on the f13th
20:06 < f13> so it goes.
20:07 < dgilmore> so that means all changes need to be sent to infra list first and acked by one other?
20:07 < mmcgrath> yeah. I'll send a note out after the meeting similar to the last one. I should update the release SOP as well.
20:08 < dgilmore> do we roll from mini freeze to full freeze? or do we unfrezze once beta is out and frezze again for test and final?
20:09 < dgilmore> lets talk this over on the list
20:09 < dgilmore> .ticket 395
20:09 < zodbot> dgilmore: #395 (Audio Streaming of Fedora Board Conference Calls) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/395
20:09 < dgilmore> jcollie: ping
20:09 < jcollie> dgilmore: ping
20:09 < dgilmore> any update
20:09 < f13> I'm fine with unfreeze/refreeze later
20:09 < SmootherFrOgZ> dgilmore: mail says on Saturnday 13 for the outage
20:09 < dgilmore> SmootherFrOgZ: 1:00am UTC which is 8pm friday my time
20:10 < jcollie> dgilmore: nope, been busy with other stuff :(
20:10 < dgilmore> jcollie: :) it happens
20:10 -!- cassmodiah [n=cass(a)p54AB3DC0.dip.t-dialin.net] has quit Remote closed the connection
20:10 < SmootherFrOgZ> dgilmore: k
20:10 < dgilmore> .ticket 446
20:10 < zodbot> dgilmore: #446 (Possibility to add external links on spins page) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/446
20:10 < dgilmore> this is me
20:10 < dgilmore> i fail
20:10 < dgilmore> .ticket 740
20:11 < zodbot> dgilmore: #740 (Loaning out system time to OLPC participants) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/740
20:11 < dgilmore> so I got from OLPC what it is they wanted
20:11 < dgilmore> as i suspected its a machine where people can log in. have cvs, koji, mock, etc access
20:12 < dgilmore> so they can do test builds. package management etc
20:12 -!- kital [n=Joerg_Si@fedora/kital] has joined #fedora-meeting
20:13 < dgilmore> there is quite a few people at OLPC who don't use fedora and are not familiar with fedora and how it works
20:13 < dgilmore> so part of what they want is also help to train people on how to work with fedora. how to maintain packages.
20:14 < dgilmore> mock is packaged and works on debian
20:14 < dgilmore> koji is not packaged
20:15 < dgilmore> so does anyone have ideas on what we can do to help
20:15 < nokia3510> I don't get it why they need a machine in this case.
20:15 < f13> I really don't understand why they can't supply this themselves
20:15 < f13> it's not very hard to put up a RHEL or Fedora host and install those packages
20:16 < f13> or is the thing they really want help with account management, in the form of FAS logins?
20:16 < dgilmore> nokia3510: so that people who dont use fedora can get access to it
20:16 < dgilmore> f13: some of it is fas
20:16 < dgilmore> and account management
20:17 < dgilmore> When i was there i was pushing to get fas in place. and i think it will happen over time
20:17 < f13> so tell them if they host a quadg5 system, we'll let them use fas to log in on it (:
20:18 < ValHolla> The reply from OLPC sounds to me like they are looking for a "tutor" ... someone to teach the non Fedora folks
20:18 < dgilmore> some of it is trying to get it closer to fedora. so that fedora developers could help train olpc's developers in fedora
20:18 < dgilmore> f13: :)
20:18 < f13> er, why does it matter where the box lives for that?
20:18 * f13 is seriously confused by the folks over there.
20:19 < dgilmore> it really doesnt
20:19 < dgilmore> they currently have a box they give out accounts on
20:19 < f13> and really, all they have to do is *ask*
20:19 < f13> but we've seen evidence of where that just doesn't happen
20:19 < dgilmore> but there are concerns its not as secure as it could be
20:19 * ValHolla wonders if it wouldn't be a good Idea to create #fedora-OLPC and try to have some of us in there for the OLPC to ask questions to
20:20 < dgilmore> ValHolla: it exists already
20:20 < dgilmore> well #fedora-olpc
20:21 < dgilmore> they are doing a better job of asking for help, than has been the case in the past
20:21 < ValHolla> dgilmore: ok, well a specific channel the OLPC folks know they can go to for a quick answer/tutorial on what they are doing?
20:21 < ValHolla> if they get stuck using mock, koji etc...
20:21 < wakko666> ValHolla: like #fedora, or #koji, or #fedora-admin?
20:21 < dgilmore> ValHolla: they usually come to me and I point them in the right direction.
20:22 < dgilmore> #fedora-devel
20:22 < dgilmore> there are quite a few olpc folks in #fedora-devel
20:22 < ValHolla> yes like those.... but there are so many different places to go.
20:22 < dgilmore> perhaps what we can do is help them with kickstart files, FAS help, etc
20:23 < dgilmore> help get an environment setup where they could rebuild the box that is used by people to test builds and play with things on a regular basis
20:24 -!- stickster_afk is now known as stickster
20:24 < dgilmore> help them to understand fas
20:24 < ValHolla> roughly how large an image do they use? will it all fit on a single dvd?
20:24 < dgilmore> and help get it implemented
20:24 < ValHolla> make them a "Live CD on steroids"
20:24 < dgilmore> ValHolla: image for what?
20:25 < ValHolla> kickstart image
20:25 * ValHolla is throwing solaris terms around ... sorry
20:25 < dgilmore> ValHolla: well it would need to be a developer spin of sorts
20:26 < nokia3510> I'd ask where Fedora-related stuff ends and strict OLPC particularities appear
20:26 < dgilmore> having fedora-packager gcc rpm-build etc installed
20:26 < nokia3510> because it seems to be a big overlap
20:26 -!- warren [n=warren@redhat/wombat/warren] has quit "Leaving"
20:26 < dgilmore> nokia3510: there are a few things in OLPC that can not be in fedora
20:26 < dgilmore> but most of it really should be built in fedora
20:27 -!- TopoMorto [n=TopoMort(a)151.61.153.118] has quit "Sto andando via"
20:27 < dgilmore> anyways. lets move on
20:27 < nokia3510> ok
20:28 < dgilmore> feel free to make comments in the ticket with ideas
20:28 -!- dgilmore changed the topic of #fedora-meeting to: Fedora Infrastructure - Open Floor
20:28 < dgilmore> anyone have anything they would like to bring up?
20:29 < SmootherFrOgZ> what about hosting request ?
20:29 < dgilmore> SmootherFrOgZ: which hosting request?
20:29 -!- LetoTo1 [n=paul(a)76-10-173-74.dsl.teksavvy.com] has joined #fedora-meeting
20:29 < dgilmore> SmootherFrOgZ: OLPC's?
20:29 < SmootherFrOgZ> nope
20:30 < SmootherFrOgZ> seth ask few weeks ago, that fp.o was looking for more hosting place
20:30 < dgilmore> skvidal: ^^^
20:30 < skvidal> we're always looking for more hosting locations
20:31 < skvidal> we found some new space w/ibiblio
20:31 < skvidal> very kindly of them
20:31 < ricky> So torrent server there?
20:31 -!- dwmw2_gone is now known as dwmw2_HIO
20:31 < skvidal> ricky: xen host there
20:31 < skvidal> torrent will be on it
20:31 < SmootherFrOgZ> :), skvidal can you draft me details about what fp.o is looking for
20:31 < ricky> Niice
20:31 < skvidal> SmootherFrOgZ: mmcgrath has a better background in what's needed
20:31 < skvidal> ricky: iirc the box has been ordered
20:32 < SmootherFrOgZ> actually, our hosting sevice need more details to reply on that
20:32 < SmootherFrOgZ> service*
20:32 < ricky> Coolnsess
20:32 < skvidal> SmootherFrOgZ: but the gist is 1-2U +power +network
20:32 < ricky> **Coolness
20:32 < skvidal> SmootherFrOgZ: ideally, a serial console and the ability to power off/on the machine
20:32 < skvidal> SmootherFrOgZ: though we're willing to work with various amount of other things
20:32 < dgilmore> SmootherFrOgZ: or an easy way to get it handled
20:32 < ricky> Non-blocked ports are usually good
20:33 < SmootherFrOgZ> k
20:33 < skvidal> ricky: +1 :)
20:33 < dgilmore> disk is good to
20:33 < skvidal> dgilmore: well, normally we can provide the box
20:33 < skvidal> if we can get the space
20:33 < dgilmore> skvidal: true.
20:34 < dgilmore> we have some hosting that is rack space+power+network. we provide servers. we also have some where servers are provided
20:34 < dgilmore> depends on whats on offere
20:34 < dgilmore> offer
20:35 < skvidal> nod
20:35 < skvidal> SmootherFrOgZ: does that help?
20:36 -!- JSchmitt [n=s4504kr@fedora/JSchmitt] has quit "Konversation terminated!"
20:36 < SmootherFrOgZ> yep, thanks
20:37 < dgilmore> moving on
20:37 < abadger1999> I'm going to be at a memorial service tomorrow. Out all day. I'll probably be *very* spotty on the weekend as well with all the relatives in town.
20:37 -!- stickster is now known as stickster_afk
20:37 -!- stickster_afk is now known as stickster
20:37 < dgilmore> abadger1999: :) ok
20:37 * dgilmore will be updating koji to 1.2.6
20:38 -!- bzbot is now known as buggbot
20:38 < ricky> Any thoughts about upgrading postgres on db2?
20:38 < dgilmore> so we will need to look and see if we start hitting bugs.
20:38 < dgilmore> abadger1999: ^
20:38 < dgilmore> ricky: im for it.
20:38 < abadger1999> I'd like to do it ASAP.
20:38 < SmootherFrOgZ> :)
20:38 < skvidal> abadger1999: :(
20:38 < abadger1999> mmcgrath: Thoughts on when we can do that?
20:38 < dgilmore> from my use of 8.3 its much better than 8.1
20:38 < ricky> No rush or anything, just wondering when it can go down (did we ever end up scheduling an outage for it?)
20:39 < ricky> dgilmore: That's really good to hear :-)
20:39 < mmcgrath> abadger1999: just need someone to do it. I can try to do it next week sometime
20:39 < SmootherFrOgZ> mmcgrath: i can give you a hand on that
20:39 < abadger1999> mmcgrath: k. I can help with the data manipulation. It's just dumping the data on one box and loading on another.
20:40 -!- mbacovsk_ [n=mbacovsk@nat/redhat/x-574ae97d3509588a] has joined #fedora-meeting
20:40 < mmcgrath> SmootherFrOgZ: thanks. I'm hoping it will be pretty straight forward but I'll make sure you can be around the night of.
20:40 < mmcgrath> we have a staging box now too so hopefully that will make things nice and clean :)
20:40 < ricky> abadger1999: Does it necessarily need to be on different boxes?
20:40 < abadger1999> ricky: Could be the same box even.
20:40 < nokia3510> can I help too ?
20:40 < ricky> Actually, I guess it could be good to test the postgres 8.3 manifest as well
20:40 < abadger1999> It's easier for reverting if they're separate boxes.
20:40 < ricky> And there's waay less data to transfer each time, I forgot
20:41 < SmootherFrOgZ> mmcgrath: no pb
20:41 < ricky> mmcgrath: So how will staging play into this?
20:42 < mmcgrath> ricky: we'll run it there first, if it works fine we'll run it in production
20:42 < ricky> Cool.
20:43 < ricky> Also, any update on the mod_wsgi mirrorlist?
20:43 < ricky> Is it close to ready?
20:43 < abadger1999> I think mdomsch just wanted people to test his new code.
20:43 -!- greenlion [n=greenlio(a)93-80-108-19.broadband.corbina.ru] has quit Remote closed the connection
20:44 < abadger1999> lmacken: If you're still around, did you get a chance to do that?
20:44 < ricky> Ah, so it's fully running on staging right now?
20:44 < ricky> Anybody have a URL handy?
20:44 < ricky> Ah, I guess http://mirrors.stg.fedoraproject.org/mirrorlist
20:45 < lmacken> abadger1999: nope, I haven't had time to do anything other than a basic sanity check on it
20:45 -!- mbacovsk__ [n=mbacovsk(a)okr2fw.topnet.cz] has quit Read error: 104 (Connection reset by peer)
20:46 < abadger1999> ricky: In the same vein, I heard you switched trac over to mod_wsgi?
20:46 < ricky> I haven't yet, actually
20:46 < abadger1999> Okay.
20:46 < ricky> I only did moin, but I want to look into trac soon
20:46 < abadger1999> Cool.
20:46 < dgilmore> mmcgrath: want to give an update on staging?
20:46 < ricky> I'll hopefully have a bit to look at it tonight or tomorrow night
20:46 < abadger1999> I want to convert the bzr web viewer to loggerhead... the new version is a wsgi app so when trac switches I try to deploy that.
20:47 < mmcgrath> dgilmore: sure, I can give a quick one.
20:47 < ricky> Nice
20:47 < abadger1999> (And then the code will actually be maintained upstream :-)
20:47 < mmcgrath> right now we have an app1.stg, app2.stg, proxy1.stg and db1.stg. The staging environment can't contact any production hardware except for /accounts/ and the infrastructure repo.
20:47 < mmcgrath> Matt is giving it a try for some actual work, I'm going to continue documenting it and will hold training at some point in the future (separate from the puppet training)
20:48 < dgilmore> :)
20:48 < lmacken> excellent
20:48 -!- mbacovsk__ [n=mbacovsk(a)okr2fw.topnet.cz] has joined #fedora-meeting
20:48 < dgilmore> when is puppet training again?
20:48 < mmcgrath> haven't schedule it yet.
20:48 < mmcgrath> its coming though
20:48 < dgilmore> ok
20:49 < dgilmore> If somoene wants to help with some things. I wantto design a way to load balance koji
20:49 < SmootherFrOgZ> dgilmore: go on
20:50 < ricky> As in kojihub?
20:50 * ricky wonders what kind of locking would need to happen
20:50 < dgilmore> ricky: locking is all in the db
20:50 < ricky> Oh, cool.
20:51 < dgilmore> SmootherFrOgZ: either setting up haproxy, pound, or something like that to do load balancing
20:51 -!- warren [n=warren@redhat/wombat/warren] has joined #fedora-meeting
20:51 < ricky> Oh, so it's 100% doing the balancing (as in, the code doesn't really need to change?)
20:51 < dgilmore> so we can do pretty much all maintainence without koji going down. coping with spikes. taking a box off to test something etc
20:52 < ricky> So would we build a separate proxy box just for koji and put it in front of koji1 and koji2 then?
20:52 < dgilmore> ricky: yeah.
20:52 < SmootherFrOgZ> dgilmore: interesting, haproxy is pretty good
20:52 < dgilmore> ricky: or pound would need two boxes
20:53 < ricky> So both pound and haproxy would handle SSL stuff easily, right?
20:53 < dgilmore> ricky: it should. we would need to test it first.
20:53 < ricky> Yeah.
20:54 < SmootherFrOgZ> yeah haproxy handle that well
20:54 < ricky> I wonder if we'd want koji + a builder in staging
20:54 < dgilmore> ricky: we should.
20:54 < ricky> But that sounds really complicated to setup
20:54 < dgilmore> i do my testing of koji on sparc.koji.fedoraproject.org
20:55 < dgilmore> mostly i want to remove the spof in the buildsys
20:55 < dgilmore> so i dont know how well haproxy would help
20:55 < ricky> Hm.
20:55 < dgilmore> since it would replace the spof
20:56 < dgilmore> pound having two boxes. one active, one passive would do it
20:56 < dgilmore> but it needs thought and testing
20:56 -!- wfp [n=wfp5p(a)viridian.itc.Virginia.EDU] has quit "Leaving"
20:56 < dgilmore> also something that I need help with is dogtag
20:57 < dgilmore> getting it to build in mock. so we can look at migrating the CA to it
20:57 < dgilmore> thats going to need alot of testing.
20:57 < ricky> Yeah, I'll try to ping you about that when I have some more itme
20:57 < ricky> **time
20:57 < mmcgrath> dgilmore: if we're going to load balance it we should probably just put them behind the hardware firewall we have at RH, if not maybe just do heartbeat.
20:57 < dgilmore> when we switch we have to do some funkyness to migrate from the existing CA
20:58 < dgilmore> mmcgrath: id be ok with either
20:58 < mmcgrath> I'm not sure we want to put koji behind our proxy farm thats shared with the other webservers, and doing a dedicated frontend would take our koji server from 1 to 4 (2 koji, 2 proxy) which we could do but I think its probably easier just to use the balancer thats there.
20:58 < dgilmore> mmcgrath: though id kinda prefer load balancing
20:58 * ValHolla wonders if you use pound the setup could be used for more then just koji
20:59 -!- mccann [n=jmccann(a)66.187.234.199] has quit "See ya"
20:59 < dgilmore> ValHolla: we try and keep buildsys seperate from everything else
20:59 < ricky> I guess we don't have very much control over the load balancer between fedoraproject.org and proxy1/2?
20:59 < mmcgrath> dgilmore: yeah, use the front end load balancer then. we'll have two backend servers and the balancer on front.
20:59 < mmcgrath> ricky: yeah its all by request
20:59 < dgilmore> mmcgrath: works for me
21:00 < mmcgrath> really though its been pretty solid so I haven't complained about it.
21:00 < ValHolla> digilmore: yeah I guess that is a good idea:/
21:00 -!- ldimaggi___ [n=ldimaggi@nat/redhat/x-6a2313ae23860d09] has quit "Leaving"
21:00 < dgilmore> so thats some things for thought
21:00 < dgilmore> Anyone have anything else to talk about?
21:00 -!- mbacovsk_ [n=mbacovsk@nat/redhat/x-574ae97d3509588a] has quit Connection timed out
21:00 < dgilmore> if not ill wrap up in 30
21:01 < dgilmore> 20
21:01 < dgilmore> 10
21:01 -!- brothers [n=brothers(a)66-234-43-147.nyc.cable.nyct.net] has left #fedora-meeting []
21:01 < dgilmore> --meeting wrap-- thanks all
Intrusion Detection System
by Luke Macken
Hey all,
A couple of weeks ago I did an initial deployment of an Intrusion
Detection System in our infrastructure. It utilizes the prelude stack,
and is currently powered by auditd and prelude-lml events. Audit gives
us a ridiculous amount of power with regard to monitoring
everything that happens on a system. Prelude-lml, out of the box
using its pcre plugin, is able to watch a large variety of service
logs, including many things we are running (asterisk, mod_security,
nagios, cacti, PAM, postfix, sendmail, selinux, shadowutils, sshd,
sudo). Prewikka is the web-based frontend
(https://admin.fedoraproject.org/prewikka).
I created a new 'prelude' puppet module that contains the
configuration for audit, auditsp-plugins, libprelude,
prelude-manager, prewikka, prelude-correlator, and prelude-lml.
Turning a node/servergroup into a sensor entails adding the
following to your class definition: 'include prelude::sensor::audisp'
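For anyone new to the module, here is a minimal sketch of what that looks
like in a servergroup manifest; the class name is hypothetical, and the real
class layout in our puppet repo may differ:

    # Minimal sketch with a hypothetical class name: pulling in the
    # audisp sensor class turns every host in the group into a sensor.
    class sensor_example {
        include prelude::sensor::audisp
    }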
My initial deployment entailed setting up the prelude-manager
and correlator on a single box, and hooking up a single sensor
(bastion).
So, we're now at the point where we can fine tune our audit rules
before we further deploy this infrastructure.
Some things we want to consider:
- Creating specific security policies for each servergroup
- Defining what files/directories/activities we want to monitor on
which machines (see the sketch below)
- What events do we want to escalate?
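As a starting point for that discussion, here is a hedged sketch of how a
per-servergroup watch list could be expressed in puppet; the file resource,
paths, and key names are purely illustrative and are not what the prelude
module currently ships:

    # Illustrative only: a per-servergroup audit watch list managed by
    # puppet; the paths and key names are examples, not current policy.
    service { 'auditd':
        ensure => running,
    }

    file { '/etc/audit/audit.rules':
        owner   => 'root',
        group   => 'root',
        mode    => '0600',
        # -w watches a path, -p selects the accesses to record (w=write,
        # a=attribute change), -k tags matching events with a search key.
        content => "-w /etc/passwd -p wa -k identity\n-w /etc/httpd/conf/ -p wa -k httpd-config\n",
        notify  => Service['auditd'],
    }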
I opened an infrastructure ticket to track this deployment here:
https://fedorahosted.org/fedora-infrastructure/ticket/833
Suggestions, comments, and ideas are welcome.
Cheers,
luke
SELinux status update
by Luke Macken
Over the past few months, I've been working closely with Dan Walsh and
Mike McGrath to solidify our SELinux deployment. We're not yet at the
point where we can flip every system into enforcing mode, but we're
getting close.
We're at the point now where we can pretty much do everything we need to
do via our puppet configuration, and we've created a handful of
constructs that can be used to configure various aspects of SELinux, for
example:
== Setting custom context
semanage_fcontext { '/var/tmp/l10n-data(/.*)?':
    type => 'httpd_sys_content_t'
}
== Toggling booleans
selinux_bool { 'httpd_can_network_connect_db': bool => 'on' }
== Allowing ports
semanage_port { '8081-8089': type => 'http_port_t', proto => 'tcp' }
== Deploying custom policy
semodule { 'fedora': }
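To show how these fit together, here is a rough sketch of a servergroup
class using all four constructs; the class name is made up, and the values
are just the examples from above:

    # Rough sketch only: the class name is hypothetical, the values are
    # the examples listed above.
    class selinux_example {
        include selinux

        # Label the l10n data so httpd is allowed to serve it
        semanage_fcontext { '/var/tmp/l10n-data(/.*)?':
            type => 'httpd_sys_content_t',
        }

        # Let httpd talk to the database over the network
        selinux_bool { 'httpd_can_network_connect_db':
            bool => 'on',
        }

        # Open the extra application ports to httpd
        semanage_port { '8081-8089':
            type  => 'http_port_t',
            proto => 'tcp',
        }

        # Load our custom policy module
        semodule { 'fedora': }
    }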
I created a custom 'fedora' selinux module that is loaded on all systems
(that are configured with 'include selinux'). This module exists to fix
various issues specific to our environment, and to cover up minor
annoyances such as leaky file descriptors.
So, now it's just a matter of hunting down the existing issues, and
fixing them in puppet or in the SELinux policy. I've been keeping our
infrastructure ahead of the RHEL5 selinux-policy, as Dan has fixed a lot
of our issues in his rpms.
I threw together a basic SOP for our SELinux configuration here:
https://fedoraproject.org/wiki/Infrastructure/SOP/SELinux
You can keep up to date on our SELinux deployment status here:
https://fedorahosted.org/fedora-infrastructure/ticket/230
Cheers,
luke
Removal of old projects from fedorahosted.
by susmit shannigrahi
Hi,
This is with respect to ticket #714[1].
As explained by mmcgrath, Fedora has a policy of removing _any_ hosted
projects that have not been altered or updated in the last six months.
Here is the list of projects that fall into this category; they
will soon be removed.
--------------------------------------------------------------------------------------------------------------------
These directories have not been altered for 6 months
/srv/svn/hardlink group: svnhardlink
/srv/svn/package-jitsu group: svnpackage-jitsu
/srv/svn/repoview group: svnrepoview
/srv/svn/ols group: svnols
/srv/svn/setarch group: svnsetarch
/srv/svn/authd group: svnauthd
/srv/svn/system-config-keyboard.old group: svnsystem-config-keyboard
/srv/hg/camelus group: hgcamelus
/srv/hg/passwd group: hgpasswd
/srv/hg/bodhi.old group: gitbodhi
/srv/hg/guest-account group: hgguest-account
/srv/hg/timeconfig group: hgtimeconfig
/srv/hg/LHCP group: hgLHCP
/srv/hg/virt-manager group: hgvirt-manager
/srv/hg/pam-redhat group: hgpam-redhat
/srv/hg/tmpwatch group: hgtmpwatch
/srv/git/splatbind.git group: gitsplatbind
/srv/git/system-config-securitylevel.git group: gitsystem-config-securitylevel
If you have any updates with respect to this, or don't want some of these
projects to be removed, please let us know.
Thanks.
[1]https://fedorahosted.org/fedora-infrastructure/ticket/714
--
Regards,
Susmit.
=============================================
ssh
0x86DD170A
http://www.fedoraproject.org/wiki/user:susmit
=============================================
Environments Doc
by Mike McGrath
So I'm slowly getting more architecture docs put together. This is now in
our repo:
http://mmcgrath.fedorapeople.org/Environments.pdf
When we're in a freeze or a pre-freeze, here are the rules:
If a host is listed in the $FREEZE_TYPE list, then it's frozen.
You'll notice that, for example, app[1-5] are listed in both the normal
full freeze as well as the pre-release freeze. That's because we have
applications that exist in each environment. Until we move those services
somewhere else, those servers are frozen during pre-freezes.
The actual environment names are:
* Buildsystem
* Distribution
* Support
* Virtualization
* Staging
* Testing
* Value Added
-Mike
More puppet training!
by Mike McGrath
So I'm going to hold a couple more training seminars for Puppet in
Fedora's Infrastructure. I was hoping you guys could also throw some
questions together so I make sure I don't miss anything.
-Mike
Last week
by Mike McGrath
Strange week last week: many of you noticed a bunch of nagios outages, so I
thought I'd send a roundup of what happened.
1) The big one was what seems to be a corrupt database table. For some
reason running a vacuum on a table (which was only 66M large) was taking a
long time, and even after it finished the disks would thrash for
sometimes 10 minutes afterwards. This caused outages of lots of our systems,
like the account system, on which other systems depend. The job was
hourly, so that's why it kept happening.
We were able to reproduce this on another host and never quite figured out
what was going on, but a dump, drop, and restore fixed the issue. So far we
haven't had time to revisit what was going on; it just hasn't
happened since.
2) Strange network issues towards the end of the week. It seems our
round-trip time to ServerBeach went up, causing nagios to flag some hosts
as dead. I've also not yet had time to look into this. The network seems
fine and I don't think we're seeing any functional issues from it, but it
was different.
3) pkgdb's home page started taking longer to load, causing our balancer to
start flagging it as dead and throw 503's. We only recently moved
it to haproxy, so this could be normal behavior that we just hadn't seen.
I've moved the allowed response time for the front page up to 5 seconds from 2.
-Mike