On Sun, Jul 05, 2009 at 04:37:02PM +0200, Till Maas wrote:
> On Sun July 5 2009, Richard W.M. Jones wrote:
> > There's been lots of previous discussion of this silly idea of
> > patching generated code. You end up carrying enormous patches
> > containing just line number changes that often can't be applied
> > upstream, and can't be carried forward to new upstream releases --
> > what on earth use is that? And still no one has explained coherently
> > why the sky will fall if we patch configure.ac and Makefile.am and
> > just rerun autoconf/automake in the specfile.
> There is also a third alternative: patch configure.ac and
> Makefile.am, send the patches upstream, then run autoconf/automake
> once to get a patch against the upstream tarball, and use this patch
> inside the spec. The patch in the spec may still be big, but it does
> not hurt anyone AFAICS.
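
Concretely, that workflow amounts to something like this shell sketch
(foo-1.0 and the patch names are purely illustrative):

  # Unpack two pristine copies of the upstream tarball.
  tar xzf foo-1.0.tar.gz
  cp -a foo-1.0 foo-1.0.orig

  # Apply the real fix to the autotools inputs.
  cd foo-1.0
  patch -p1 < ../fix-configure-ac.patch  # touches configure.ac / Makefile.am
  autoreconf --install                   # regenerate configure, Makefile.in, ...
  cd ..

  # Diff against the pristine tree to get a single patch that applies
  # to the upstream tarball, generated files included.
  diff -urN foo-1.0.orig foo-1.0 > foo-1.0-build.patch
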
But WHY!??!!!
Why is it bad to patch configure.ac and rerun the autotools stuff? I
do this all the time and it doesn't fail, even when we upgrade
autotools mid-release.
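
For comparison, the spec-file version of this is tiny. A minimal
sketch, assuming a hypothetical package foo whose fix lives in Patch0:

  # Patch against configure.ac / Makefile.am, also sent upstream.
  Patch0: foo-fix-build.patch

  BuildRequires: autoconf automake libtool

  %prep
  %setup -q
  %patch0 -p1
  # Regenerate configure and the Makefile.ins from the patched inputs.
  autoreconf --force --install

  %build
  %configure
  make %{?_smp_mflags}

The same autoreconf run keeps working across new upstream releases,
because the patch applies to the hand-written files rather than to
generated output.
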
Please, someone, explain why this is bad. It's totally stupid to go to
all this extra effort and carry huge patches against what are
essentially binary files, unless there's a really _really_ good reason
for it.
Rich.
--
Richard Jones, Emerging Technologies, Red Hat
http://et.redhat.com/~rjones
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
http://et.redhat.com/~rjones/virt-df/