mock 0.2
by seth vidal
Hey all,
I've put up a new mock release - mock 0.2.
A number of bugs have been fixed and the config format has changed.
Most importantly, it should now work nicely for a user with any uid or
gid, as long as they're in the mock group. It no longer requires uid
500, gid 500.
http://linux.duke.edu/~skvidal/mock/
A binary rpm is built for rawhide i386.
Let me know what breaks for you.
-sv
srpm and buildreqs
by seth vidal
Going through this a few more times as I work on some bits inside the
buildsystem.
We're given an srpm - we don't know where it was made, on what arch,
nothing - so we cannot trust the buildreqs it provides.
If we're inside the chroot and on the arch we want to build on then
running:
rpm -Uvh /path/to/our/srpm
rpmbuild -bs --nodeps /path/to/the/generated/spec
should result in an srpm for us that will have valid buildreqs.
So if we grab the requires from that srpm, we'll have a pretty good
idea of what we'll need to install to build the package.
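For instance, a rough sketch of pulling those requires out of the
regenerated srpm with rpm-python (the path is made up):

import os
import rpm

srpm = '/path/to/the/regenerated.src.rpm'  # hypothetical path

ts = rpm.TransactionSet()
# don't choke on unsigned local packages
ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)

fd = os.open(srpm, os.O_RDONLY)
hdr = ts.hdrFromFdno(fd)
os.close(fd)

# the requires of an srpm are its buildrequires
for req in hdr[rpm.RPMTAG_REQUIRENAME]:
    print req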
is that correct/accurate/etc?
-sv
Netgear WG311 card
by Michael Waite
Anyone got an rpm for this card?
I am trying to get FC3 (rawhide) online for a summer intern who is here
this summer.
Or, is there a card that is known to work with rawhide?
Thanks.
------Mike
--
Michael Waite
978-943-9042
mwaite(a)redhat.com
10 Technology Park Drive
Westford, MA 01876
Learn, Network and Experience Open Source.
Red Hat Summit, New Orleans 2005
http://www.redhat.com/promo/summit/
Mock/Mach
by seth vidal
In order to make the world a bit simpler, I've been working on a fork
of mach that simplifies the feature set dramatically. Right now it does
the things we need and only those things.
It does:
1. makes a chroot
2. installs the srpm and remakes it from the generated spec to get the
buildreqs right.
3. installs the buildreqs.
4. rebuilds the srpm into binary rpms.
5. returns logs and whatnot intelligently.
6. does all of this as quickly as possible.
It doesn't do any of the spec file parsing or build order sorting that
mach does. It only deals with srpms to build from and it only deals with
them one at a time.
Right now I'm calling it 'mock' because it's a fake or lesser version of
mach. You can see the packages and what not I've got so far here:
http://linux.duke.edu/~skvidal/mock/
Steps to run it:
1. make sure you're a member of the 'mock' group.
2. mock -r name-of-chroot (look in /etc/mock for names) /path/to/srpm
That's it - it should tell you where to look for the resulting packages
or the logs.
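For example, with a made-up chroot name and srpm:

mock -r fedora-development-i386 /tmp/foo-1.0-1.src.rpm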
I'll be checking it into fedora cvs shortly and then working on
integrating it with the new buildsystem code that dcbw has been putting
together.
If all goes as I hope then we'll no longer need the common nfs share for
writing out resulting packages/logs. We can just ship the packages from
the build host back over the wire to the queuing host via the xml-rpc
connection already in place. If that all works then we'll be able to
have buildhosts virtually anywhere (within reason, of course).
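For instance, a rough sketch of what shipping a file back over the
xml-rpc connection could look like (the method name, url, and paths are
all made up, and it assumes get_file is registered with a
SimpleXMLRPCServer on the build host):

import xmlrpclib

# build host side: hand the raw bytes back wrapped in a Binary
# object so they survive the xml-rpc trip
def get_file(path):
    f = open(path, 'rb')
    data = f.read()
    f.close()
    return xmlrpclib.Binary(data)

# queuing host side: fetch the package and write it out locally
server = xmlrpclib.ServerProxy('http://buildhost.example.com:8888')
blob = server.get_file('/var/lib/mock/result/foo-1.0-1.i386.rpm')
out = open('foo-1.0-1.i386.rpm', 'wb')
out.write(blob.data)
out.close()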
Everything seems to 'work' in my tests - I'm sure there are bugs but I'm
equally sure that y'all will tell me all about them.
-sv
mock and other plans
by seth vidal
I checked mock into fedora cvs /cvs/extras.
Here's what I think we should work on doing:
1. move the automation2 directory out of extras-buildsys-temp and into
its own module.
2. remove the extras-buildsys-temp dirs from cvs
3. set up makefiles and specfiles for the automation2 code dcbw has been
doing
4. get mock and the automation2 code together and work out the rest of
the bits
5. deploy it for building and figure out where the bugs lie. :)
What Dan and I have been discussing is how to make it easier for the
main queuing agent and the build hosts to be on completely separate
networks and still work.
There's still ground to cover but it's getting closer. I think the items
left to look at are:
1. thread out the archwelder servers so they don't stall (see the
sketch after this list)
2. xmlrpc ssl auth using .fedora.cert files
3. xmlrpc client from the make build side
4. download of resulting packages/logs from the archwelder servers to
the queuing agent.
5. monitoring and status information from the queuer for
updates/notices/etc.
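On item 1, a rough sketch of what threading out an xml-rpc server looks
like, so one slow request can't stall the rest (the port and the status
method are made up):

import SocketServer
from SimpleXMLRPCServer import SimpleXMLRPCServer

class ThreadedXMLRPCServer(SocketServer.ThreadingMixIn, SimpleXMLRPCServer):
    # each request gets handled in its own thread, so a long call
    # from one client can't stall the others
    pass

def status():
    # hypothetical method the queuer would poll
    return 'building'

server = ThreadedXMLRPCServer(('', 8888))
server.register_function(status)
server.serve_forever()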
anyone else want to pitch in?
-sv
mach and disttag
by Ignacio Vazquez-Abrams
So the CVS stuff handles %dist properly, which is good. Unfortunately
the build system doesn't, which is not-so-good. Rather than leave the
job half-done, I've come up with a patch that should fix it, which I've
attached. It can be removed when the disttag changes go into
redhat-rpm-config, but until then there's this.
--
Ignacio Vazquez-Abrams <ivazquez(a)ivazquez.net>
http://fedora.ivazquez.net/
gpg --keyserver hkp://subkeys.pgp.net --recv-key 38028b72
buildsys info updates
by seth vidal
Hi,
Had a short conference call with Dan and Jeremy today. The gist was
working on the bits I mentioned to this list a few days ago.
Dan is going to work on the config bits and on changing around the
classes a bit, to make it clearer what bits do what.
We also 'decided' that it would be worthwhile for the queuer to run an
xmlrpc server for 'make build' to communicate with.
It will store the list of things to be built in a db of some kind.
The process would be something like:
- The user runs 'make build TARGET=development'. This runs an xml-rpc
  client program which uses ~/.fedora.cert to connect and auth to the
  queuer. It submits the build request and exits.
- The queuer takes the list of packages to build and farms them out to
  the buildhosts. Right now the archwelders are xml-rpc servers that
  the queuer connects to in order to tell them what to do (run, die,
  logs, status, etc). Dan mentioned some interest in making them
  polling daemons instead - discussion of this is welcome.
- The archwelder finishes, the queuer gets notified of the results, and:
  - updates the queuer db/list with the status
  - sends notices/info to the user who requested the build
  - moves the files around as it needs to for the repositories
That's most of what we talked about.
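To make the 'make build' client side concrete, here's a rough sketch of
what it could look like (the url and the enqueue method are made up,
and the real thing would auth with ~/.fedora.cert over ssl via a custom
transport, which isn't shown here):

import os
import sys
import xmlrpclib

# hypothetical queuer url; the real one would be https with
# certificate auth
queuer = xmlrpclib.ServerProxy('http://queuer.example.com:8887')

package = sys.argv[1]  # e.g. 'foo'
target = sys.argv[2]   # e.g. 'development'
print queuer.enqueue(os.environ['USER'], package, target)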
Ideas:
- making it so the buildhosts and the queuer don't have to have
immediate access to the same file space (currently via nfs)
- (along with the above) making it so the archwelders/buildhosts can be
anywhere in the world for building packages.
Dan, Jeremy, feel free to fill in anything I missed or said wrong.
-sv
buildsystem stuff
by seth vidal
Hey folks,
I've been doing a lot of tests today and I have some good news to
report.
1. the xml-rpc communication is working pretty well. We can spawn builds
out to hosts other than the queuer and get feedback on what broke in
the build and/or why.
2. I've cleaned up the code, and on my set of 2 systems (x86_64 and ppc)
I can build for all 3 architectures without having to run anything manually.
You can see the code here:
http://linux.duke.edu/~skvidal/misc/buildsys/
Gist of how it is used:
The queuer runs, figures out what needs to be built by getting the list
from the /cvs/extras/common/tobuild file. It preps for the build by:
- checking out the tag from cvs
- making all the necessary dirs (name/v-r/a)
- making the srpm
Then it dispatches the build to the right archwelder classes. These
classes build the packages and put the files into the right places if
they succeed, or output the logs if they fail.
After the build runs, the queuer notifies the person who requested the
build of success or failure. In the event of a failure on any one
architecture, the other builds are stopped and the logs are reported.
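A rough sketch of the prep step described above (the commands and
layout are approximations, not the real code):

import os

def prep_build(name, version, release, tag):
    # check out the tag from cvs
    os.system('cvs -d /cvs/extras export -r %s %s' % (tag, name))
    # make all the necessary dirs (name/v-r/a)
    for arch in ('i386', 'x86_64', 'ppc'):
        os.makedirs(os.path.join(name, '%s-%s' % (version, release), arch))
    # make the srpm from the checkout
    os.system('make -C %s srpm' % name)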
That's the short version of how it works - a list of todos is at the
top of the two important files. I've got a few more tests to do and then
I'll probably make the 'tobuild' file open to anyone with cvs commit
access so you can run 'make build' to request your own builds.
Let me know what you think, even if I've wasted my time.
-sv
build system glue scripts and requirements
by seth vidal
hi folks,
So right now I think things with mach and yum are working for building
fedora extras. The packages _seem_ like they're coming out right and
things seem functional enough. The second part, which I need help and
input on, is the glue scripts and requirements for automatically
triggering builds for packagers.
The questions I have:
1. if this is meant to run on the Red Hat boxes in the PHX coloc, what
does that infrastructure look like? What features does it have? Can we
assume all the build boxes have access to the cvs tree? Do we need to
worry about pushing srpms around?
2. How do folks want packagers to send notices about builds? Just a
cvs tag? A webpage? A gpg-signed email with specific content? A custom
xmpp/jabber-client to send a custom message to a listening build client
across an xmpp infrastructure? :)
3. What things am I missing or not understanding about what is needed
from the build system? The requirements I've been working under
are/were:
- self hosting on Fedora Core
- not crazy
What else do I need to think about?
4. Who else is interested in working on this and getting things
progressing more? The yum changes to mach are just a hackjob to get a
problem solved for the short term. However, I'd like to continue down
this general line of development. So <buffy>Where do we go from
here?</buffy>
Thanks,
-sv