Network setup of builders

Bohuslav Kabrda bkabrda at redhat.com
Tue Jul 2 08:37:09 UTC 2013


----- Original Message -----
> On Thu, 27 Jun 2013 10:58:09 +0200
> Miroslav Suchý <msuchy at redhat.com> wrote:
> 
> > On 06/26/2013 10:03 PM, seth vidal wrote:
> > > Which is the point. That really only lets you build from the blessed
> > > locations. It's an extra bit of red tape to cut through. Why would
> > > we want to restrict our users in that way?
> > 
> > If you (and others) really want this feature, what about you
> > providing a URL to that repo? It would be synced to copr and then
> > the builder would use the synced repo.
> 
> As Rex pointed out - the point of this is to lower barriers - not put
> more up.
>  

IMO having the ability to get build deps from outside repos is very important for Copr. How we want to achieve that is a different question.
Syncing a repo may be very expensive. Think of a scenario with a 100 GB repo, of which Copr builds only use 1 MB worth of packages (a corner case, yes, but still...). Generally, I think both approaches have their downsides, but I'm currently more convinced that letting builders download the packages is the way to go.
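
To illustrate what I mean by "letting builders download the packages",
something along these lines in the per-chroot mock config should be
enough (the "external-deps" repo id and URL below are just made up for
the example, not anything we ship today):

# sketch of a builder chroot config with one user-supplied repo added
config_opts['root'] = 'fedora-19-x86_64-copr'
config_opts['target_arch'] = 'x86_64'
config_opts['chroot_setup_cmd'] = 'install @buildsys-build'

config_opts['yum.conf'] = """
[main]
cachedir=/var/cache/yum
debuglevel=1

[fedora]
name=fedora
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-19&arch=x86_64
failovermethod=priority

[external-deps]
name=external build deps (URL pasted by the user)
baseurl=http://example.com/my-project/repo/
enabled=1
gpgcheck=0
"""

The builder then resolves build deps from the external repo directly,
and we never have to mirror the whole 100 GB on our side.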

> > > Have you uploaded a 1GB file to a website from a home network
> > > connection? It's AWFUL. Not to mention we'd need to store this on
> > > the
> > 
> > Yes. I have even uploaded 100 GB.
> I'm glad you have such good network access. I do not. A lot of the
> world does not.
> 
> 
> If you want to add in functionality to upload files I won't block
> it - but I do not want that at the cost of removing remote file
> downloads. If you lobby to remove remote file downloads on builders
> you're going to be creating a lot of problems for yourself.
> 
> Please don't make us start having to get approval on individual patches.
> 

So IMHO this is more about separating the building process into multiple steps:
making the SRPM available -> providing the SRPM to the backend -> building the SRPM
And by "making the SRPM available" I mean in any way possible - upload it somewhere and just point to it, or upload it to the frontend, which will then place it somewhere where the backend can pick it up. We can handle both cases easily.

> 
> > 
> > > -fe system and the whole goal was to have a separation between -fe
> > > and -be so they didn't need to communicate in anyway but,
> > > ultimately REST
> > 
> > Why is there such a requirement?
> 
> B/c it makes sense - b/c it cleanly separates the FE and BE?
> 
> 

I think that separating the functionality into two communicating entities will pay off in the long run. It allows us to swap out one part without changing the way the other parts work.
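
As a rough illustration of what I mean (the endpoint and JSON fields
here are invented for the example, not the real API): if the backend
only ever talks to the frontend over HTTP/JSON, either side can be
rewritten as long as the contract stays the same.

# hypothetical backend polling loop - it only knows the frontend's URL
# and the JSON contract, nothing about its internals
import time
import requests

FRONTEND_URL = "https://copr-fe.example.org"   # made-up URL

def poll_for_builds():
    while True:
        resp = requests.get(FRONTEND_URL + "/backend/pending-builds/",
                            timeout=30)
        resp.raise_for_status()
        for build in resp.json().get("builds", []):
            spawn_builder_and_build(build)    # hand off to a builder VM
        time.sleep(60)

def spawn_builder_and_build(build):
    # placeholder for: spawn a cloud instance, run mock there, report back
    print("would build", build["srpm_url"])

The frontend never needs to know how the builders are spawned, and the
backend never needs to touch the frontend's database.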

>  
> > > Here's how I would use this:
> > >
> > > 1. construct my spec file(s) locally or over ssh.
> > > 2. build srpms on a system <out_in_the_world>
> > > 3. put srpms in a webaccessible location
> > > 4. paste the srpm urls into the box
> > > 5. wait for builds
> > > 6. win
> > 
> > On the other hand I always build srpms on a local system (which is
> > usually behind NAT or a restrictive firewall). So *I* prefer upload
> > rather than scp somewhere and pasting a URL. Yes, uploading over http
> > has a little overhead, but it is just multiplied by some constant, so
> > it is not interesting from a complexity POV.
> 
> And yet it means we have to be able to house and cope with those on the
> frontend and write them out. I'd really rather they not ever be there,
> specifically for the security and integrity of our own systems.
> 
> > 
> > 
> > > Our existing private cloud infrastructure has some space - but not
> > > infinite space. The answer I was given was - if this is useful
> > > we'll be
> > 
> > Can we know the order of "some space"? Is it MB, GB, TB or PB?
> 
> We have about 400 GB allocated right now and we could probably add
> another 600 GB before we need to look for more space for cinder volume
> servers.
> So let's call it about 1 TB at the moment.
> 
> 
> > 
> > > Off the top of my head I can think of a couple of trivial ways to
> > > verify that the places we're pulling from are repos and not
> > > random websites - whitelisting them through.
> > 
> > You mean on the iptables level? I would not call that trivial.
> 
> Actually I mean on the private cloud security groups level.
> Iptables on the builder would be meaningless - anyone who can install a
> package into the chroot is effectively root. rpm pkgs are installed as
> root and any root user can walk out of a chroot - counting on iptables
> on the buildsystem is unsafe - we'd need to do it with security groups
> in the cloud system.
> 
> > 
> > Adamant? Nope. I'm just playing devil's advocate.
> 
> Please don't. You'd be much more help just by helping rather than
> making me and others defend decisions and plans we made before.
> 
> > I'm trying to look on COPR from different angles and I'm thinking
> > loudly. I would rather be prepared for scenario, which will never
> > happen than experience issue for which we will not be prepared.
> 
> I'd rather we work on the specific items that are left to do, get this
> available to all fedora packagers and adapt as we need to.
> 
> Perfect is the enemy of the good.
> 
> > Really? That differs from my expectations. I do not want to encourage
> > people to build crap (by allowing them to).
> > Maybe it is time to recap what the primary target audience of Copr
> > is. At least how I see it, because I have never seen that written or
> > discussed:
> 
> COPR IS FOR ANY PACKAGE NO MATTER THE QUALITY.
> 
> I put that in caps to make sure it was not missed.
> 
> It is absolutely mandatory that we are not imposing packaging policies
> or rules on anything that goes in coprs.
> 
> Exceptions to this rule include:
>  1. packages which cannot build (by default of course)
>  2. packages which are not legal to be built, i.e. don't have a
>  valid/acceptable license tag.
> 
> 
> otherwise ANYTHING goes.
> 

+1. If we try to enforce any sort of packaging guidelines, we will end up where Fedora currently is with all the guidelines and stuff (which is what we don't want). Let's keep Copr free; if users want to build crap, then let them.

> > 
> > 1) various upstream projects (e.g. OpenShift, Katello, etc. - but even
> > small one-man projects), which want to build their nightlies in a
> > reproducible manner.
> 
> Sure.
> 
> > 2) projects which consist of a lot of packages and it would take a
> > long time for them to go through the Package Review process (either
> > because there are simply too many packages or because they are e.g.
> > bundling libraries and it takes some time to fix the code to follow
> > Fedora guidelines) and they want to offer their releases right now.
> 
> Sure.
> 
> > 3) Projects which want to offer releases for a platform other than
> > Fedora/EPEL, e.g. Suse, but I would also count Software Collections
> > and Secondary Architectures here.
> 
> Sure - provided we have it.
> 
> > 4) Repos which should be private for some reason (embargoed builds
> > which should be tested)
> 
> sure.
> 
> 
> > 
> > Do I perceive it correctly? Feel free to correct me or add what I'm
> > missing.
> 
> Okay
> 
> 5. Crap.
> 

Agreed on all 5.

> 
> Let a thousand flowers bloom and we'll pick the best ones later. The
> crap gets mowed under for compost.
> 
> -sv

-- 
Regards,
Bohuslav "Slavek" Kabrda.

