Hi,
I'm wondering if option 4 from the following article
Linux Journal: "Argument list too long": Beyond Arguments and Limitations http://www.linuxjournal.com/article.php?sid=6060
could be done for the Red Hat kernel (or the vanilla kernel) and if not, what the objections are.
Linux Journal suggests increasing MAX_ARG_PAGES from 32 to 64 pages in include/linux/binfmts.h:
/*
 * MAX_ARG_PAGES defines the number of pages allocated for arguments
 * and envelope for the new program. 32 should suffice, this gives
 * a maximum env+arg of 128kB w/4KB pages!
 */
#define MAX_ARG_PAGES 32
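A quick way to see the resulting ceiling from userspace (the doc glob below is just an illustration; it only fails if it actually expands past the limit):

    # 32 pages * 4KB = 128KB for argv[] plus the environment:
    $ getconf ARG_MAX
    131072

    # A glob that expands beyond that fails at exec time:
    $ /bin/ls /usr/share/doc/*/*
    bash: /bin/ls: Argument list too long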
I'm not in favor of increasing it to 64 per se, but at least to something higher than 32 (as I've already run into this limit on several occasions where the only solution was to work around it in a bad way).
Maybe make it system-dependent or even dynamically changeable.
Kind regards, -- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [Any errors in spelling, tact or fact are transmission errors]
On Mon, 8 Sep 2003, Dag Wieers wrote:
Linux Journal suggests increasing MAX_ARG_PAGES from 32 to 64 pages in include/linux/binfmts.h:
... after which you need to recompile lots of your userspace programs, because they know about the kernel limit and refuse to call exec(2) when you have more than 32 pages full of arguments. ;)
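The constant leaks into userspace via the exported kernel headers; something like the following is what those programs were compiled against (exact spacing may vary by kernel version):

    $ grep ARG_MAX /usr/include/linux/limits.h
    #define ARG_MAX       131072	/* # bytes of args + environ for exec() */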
On Sun, 7 Sep 2003, Rik van Riel wrote:
On Mon, 8 Sep 2003, Dag Wieers wrote:
Linux Journal suggests increasing MAX_ARG_PAGES from 32 to 64 pages in include/linux/binfmts.h:
... after which you need to recompile lots of your userspace programs, because they know about the kernel limit and refuse to call exec(2) when you have more than 32 pages full of arguments. ;)
I'd leave the recompiling up to Red Hat too ;)
-- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [Any errors in spelling, tact or fact are transmission errors]
On Mon, Sep 08, 2003 at 05:12:25AM +0200, Dag Wieers wrote:
I'm wondering if option 4 from the following article
Linux Journal: "Argument list too long": Beyond Arguments and Limitations http://www.linuxjournal.com/article.php?sid=6060
could be done for the Red Hat kernel (or the vanilla kernel) and if not, what the objections are.
The main objection from Red Hat's standpoint is that it changes the semantics of a kernel interface in a way that really forks from the vanilla kernel -- it would be really painful if programs you built on a Red Hat system magically quit working when you put an upstream kernel on the system.
So this should really be an lkml discussion. That shouldn't be construed as a suggestion that everyone go spam lkml, of course...
michaelkjohnson
"He that composes himself is wiser than he that composes a book." Linux Application Development -- Ben Franklin http://people.redhat.com/johnsonm/lad/
On Mon, 8 Sep 2003, Dag Wieers wrote:
Hi,
I'm wondering if option 4 from the following article
Linux Journal: "Argument list too long": Beyond Arguments and Limitations http://www.linuxjournal.com/article.php?sid=6060
could be done for the Red Hat kernel (or the vanilla kernel) and if not, what the objections are.
Linux Journal suggests increasing MAX_ARG_PAGES from 32 to 64 pages in include/linux/binfmts.h:
/*
 * MAX_ARG_PAGES defines the number of pages allocated for arguments
 * and envelope for the new program. 32 should suffice, this gives
 * a maximum env+arg of 128kB w/4KB pages!
 */
#define MAX_ARG_PAGES 32
I'm not in favor of increasing it to 64 per se, but at least to something higher than 32 (as I've already run into this limit on several occasions where the only solution was to work around it in a bad way).
Maybe make it system-dependent or even dynamically changeable.
man xargs
On Tue, 9 Sep 2003, Mike A. Harris wrote:
On Mon, 8 Sep 2003, Dag Wieers wrote:
I'm not in favor of increasing it to 64 per se, but at least to something higher than 32 (as I've already run into this limit on several occasions where the only solution was to work around it in a bad way).
man xargs
Seems plausible but impractical in some situations. E.g. you're re-signing thousands of files, too many for the argument list. Doing it one by one would force you to enter your passphrase a thousand times.
You could split it up like the document says, but even that will make it harder (there's no guarantee that [a-n] will not be too long). Or you could start writing an expect script that enters your passphrase... Sure.
But the easiest solution IMO would be to increase the size so that it doesn't occur that often in practice.
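Concretely, the scenario in question looks something like this (repository path is illustrative):

    # The glob overflows the 128KB limit at exec time:
    $ rpm --resign /srv/repo/RPMS/*.rpm
    bash: /bin/rpm: Argument list too long

    # xargs splits the list into several rpm invocations -- and each
    # fresh rpm prompts for the signing passphrase all over again:
    $ find /srv/repo/RPMS -name '*.rpm' | xargs rpm --resign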
Kind regards, -- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [Any errors in spelling, tact or fact are transmission errors]
On Thu, 11 Sep 2003 17:41:35 +0200 (CEST) Dag Wieers dag@wieers.com wrote:
On Tue, 9 Sep 2003, Mike A. Harris wrote:
On Mon, 8 Sep 2003, Dag Wieers wrote:
I'm not in favor of increasing it to 64 per se, but at least to something higher than 32 (as I've already run into this limit on several occasions where the only solution was to work around it in a bad way).
man xargs
Seems plausible but impractical in some situations. E.g. you're re-signing thousands of files, too many for the argument list. Doing it one by one would force you to enter your passphrase a thousand times.
You could split it up like the document says, but even that will make it harder (there's no guarantee that [a-n] will not be too long). Or you could start writing an expect script that enters your passphrase... Sure.
But the easiest solution IMO would be to increase the size so that it doesn't occur that often in practice.
Dag,
You just don't hear many people complaining about this as a problem. xargs packs as many arguments as will fit into each invocation of the command (and the -s option lets you cap the command-line size). This limit isn't reached very often in practice, and no matter what value you pick, someone will find a command that exceeds the limit.
It usually doesn't take much effort to organize things so that this just isn't an issue. Perhaps if you posted a real script where you are having problems, suggestions could be made on alternative methods.
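A sketch of that usage, with an illustrative size cap:

    # xargs fills each command line up to the cap (default: as much
    # as fits), then runs the command again with the next batch:
    find . -type f | xargs -s 65536 md5sum > checksums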
Cheers, Sean
On Thu, 11 Sep 2003, Sean Estabrooks wrote:
Hey Sean,
You just don't hear many people complaining about this as a problem. xargs packs as many arguments as will fit into each invocation of the command (and the -s option lets you cap the command-line size). This limit isn't reached very often in practice, and no matter what value you pick, someone will find a command that exceeds the limit.
You say the limit isn't reached very often in practice; I must beg to differ.
The argument size is a balance between memory usage and practical considerations. Computers are becoming more powerful and people can do more powerful things. So such balances shift from time to time as computing power increases, and I think it is time to revisit this one too. (And yes, it should not be a Red Hat thing.)
Here's the link again:
Linux Journal: "Argument list too long": Beyond Arguments and Limitations http://www.linuxjournal.com/article.php?sid=6060
Read the article and especially the comments. xargs was already mentioned.
It usually doesn't take much effort to organize things so that this just isn't an issue. Perhaps if you posted a real script where you are having problems, suggestions could be made on alternative methods.
Sure, you can always organize things so that this just isn't an issue. But if working around something takes considerably more time than just executing the command, there's something wrong IMO.
I just gave you an example where it isn't very practical to use xargs because you have to type your passphrase multiple times. Automating that, however, would make it even more ugly, and all that because I can only have 128KB in arguments. You see how silly computers can be ;)
-- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [Any errors in spelling, tact or fact are transmission errors]
On Thu, 11 Sep 2003 18:23:15 +0200 (CEST) Dag Wieers dag@wieers.com wrote:
Here's the link again:
Linux Journal: "Argument list too long": Beyond Arguments and Limitations http://www.linuxjournal.com/article.php?sid=6060
Read the article and especially the comments. xargs was already mentioned.
yup, read it
It usually doesn't take much effort to organize things so that this just isn't an issue. Perhaps if you posted a real script where you are having problems, suggestions could be made on alternative methods.
Sure, you can always organize things so that this just isn't an issue. But if working around something takes considerably more time than just executing the command, there's something wrong IMO.
The command line is for people of a certain type who can work these little issues out for themselves. If it's a recurring problem you simply write a script to deal with it... Tada... *nix already has everything you need... you just have to be willing to use it ;o)
I just gave you an example where it isn't very practical to use xargs because you have to type your passphrase multiple times. Automating that, however, would make it even more ugly, and all that because I can only have
Yeah, you gave a theoretical example. Do you have one from your own experience that actually matters to you? I've been doing this a lot of years and I can count on one hand the number of times this has been an issue for me.
128KB in arguments. You see how silly computers can be ;)
lol... well you got me there.
Cheers, Sean
On Thu, Sep 11, 2003 at 06:23:15PM +0200, Dag Wieers wrote:
You say the limit isn't reached very often in practice; I must beg to differ.
The argument size is a balance between memory usage and practical considerations. Computers are becoming more powerful and people can do more powerful things. So such balances shift from time to time as computing power increases, and I think it is time to revisit this one too. (And yes, it should not be a Red Hat thing.)
There are two competing tenets of Unix philosophy that have long bumped heads:
1. Design programs to be connected to other programs.
and 2. Optimize for the common case.
If you look at the approach taken in the Linux kernel, arbitrary limits are removed whenever (a) it can be done without vastly complicating the code, and (b) the common case (i.e., "fast path") remains fast.
So if you can figure out how to keep execve() fast for the common case, but handle a gigabyte arg list (and environment, while you are at it!), without DoS, great.
Back in the days of MS-DOS, there were several toolkits, like MKS, that provided versions of the standard UNIX tools. Since DOS limited the arglist to some small number of characters (127 or 255, I can't recall), the arg-handling routines allowed one to specify an argument in the form @pathname, which would insert arguments from the file at the specified location in the arglist.
For the exceptional cases that you discuss, this could be achieved in userland in several ways. The simplest, which only works for non-set[ug]id dynamically linked executables (i.e., the vast majority), is to wrap up the required loader magic (e.g., LD_PRELOAD=/usr/lib/hugearglist.so) in a shell script, and invoke rpm as
hugearglist rpm ...
[Alternatively, put it in /etc/ld.so.preload, and you don't need the hugearglist bit, but more caution is required.]
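A minimal sketch of the wrapper script, assuming the hypothetical preload library (which would do the actual arglist expansion, and is the hard part) exists at that path:

    #!/bin/sh
    # hugearglist: run the given command with the hypothetical
    # arglist-expanding library preloaded; "$@" is the real command.
    LD_PRELOAD=/usr/lib/hugearglist.so exec "$@"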
More difficult, but still workable, is to write hugearglist in C and have it load the program directly. A bit of ELF magic is involved there.
One feature that one would want today is the option to separate entries with '\0', à la "find -print0" and "xargs -0".
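For example, the NUL-separated variants keep filenames with embedded whitespace intact:

    find /srv/repo -name '*.rpm' -print0 | xargs -0 rpm --addsign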
Regards,
Bill Rugolsky
On Thu, 11 Sep 2003, Bill Rugolsky Jr. wrote:
If you look at the approach taken in the Linux kernel, arbitrary limits are removed whenever (a) it can be done without vastly complicating the code, and (b) the common case (i.e., "fast path") remains fast.
And if you look at how limits have evolved with computing power, you see that upper limits are increased over time. Sure, when computers only had 640KB, a 128KB argument space was a bad idea. No arguing there.
So if you can figure out how to keep execve() fast for the common case, but handle a gigabyte arg list (and environment, while you are at it!), without DoS, great.
It's not that I'm forcing anyone to use the whole argument space. And it's not that I'm arguing to make it 1GB either.
I don't see why processing 1GB of arguments once would be slower than processing 100KB of arguments ten times. I'd even wildly guess the latter case is slower than the former.
PS: I removed the whole DOS explanation because I don't think it's relevant to the current situation. Unless you want to go back to the 255-byte argument space ;)
-- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [Any errors in spelling, tact or fact are transmission errors]
On Thu, Sep 11, 2003 at 11:01:20PM +0200, Dag Wieers wrote:
It's not that I'm forcing anyone to use the whole argument space. And it's not that I'm arguing to make it 1GB either.
I don't see why processing 1GB of arguments once would be slower than processing 100KB of arguments ten times. I'd even wildly guess the latter case is slower than the former.
Wrong question. The question is whether adding support for very large, or arbitrarily large (hence swappable) arglist+environment makes the common case (i.e., 1 page) significantly slower, or otherwise negatively impacts the kernel (e.g., resource starvation). We won't know until someone implements it. If you are interested in pursuing this, and seeing it done the "right" way, see this post by Jamie Lokier from Mar 2000, along with the surrounding thread:
http://www.ussg.iu.edu/hypermail/linux/kernel/0003.0/0887.html
Regards,
Bill Rugolsky
On Thu, 11 Sep 2003, Bill Rugolsky Jr. wrote:
On Thu, Sep 11, 2003 at 11:01:20PM +0200, Dag Wieers wrote:
It's not that I'm forcing anyone to use the whole argument space. And it's not that I'm arguing to make it 1GB either.
Read this again.
I don't see why processing 1GB of arguments once would be slower than processing 100KB of arguments ten times. I'd even wildly guess the latter case is slower than the former.
Wrong question. The question is whether adding support for very large, or arbitrarily large (hence swappable) arglist+environment makes the common case (i.e., 1 page) significantly slower, or otherwise negatively impacts the kernel (e.g., resource starvation). We won't know until someone implements it. If you are interested in pursuing this, and seeing it done the "right" way, see this post by Jamie Lokier from Mar 2000, along with the surrounding thread:
http://www.ussg.iu.edu/hypermail/linux/kernel/0003.0/0887.html
Well, 1GB is something very different from what I would propose. Replace 1GB with 256KB and '10 times 100KB' with '4 times 64KB' and you're closer to home.
But if Jamie Lokier doesn't see any reason to have a limit (!), I rest my case.
Thanks ! -- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [Any errors in spelling, tact or fact are transmission errors]
On Fri, 12 Sep 2003 00:13:07 +0200 (CEST) Dag Wieers dag@wieers.com wrote:
On Thu, 11 Sep 2003, Bill Rugolsky Jr. wrote:
On Thu, Sep 11, 2003 at 11:01:20PM +0200, Dag Wieers wrote:
It's not that I'm forcing anyone to use the whole argument space. And it's not that I'm arguing to make it 1GB either.
Read this again.
I don't see why processing 1GB of arguments once would be slower than processing 100KB of arguments ten times. I'd even wildly guess the latter case is slower than the former.
Wrong question. The question is whether adding support for very large, or arbitrarily large (hence swappable) arglist+environment makes the common case (i.e., 1 page) significantly slower, or otherwise negatively impacts the kernel (e.g., resource starvation). We won't know until someone implements it. If you are interested in pursuing this, and seeing it done the "right" way, see this post by Jamie Lokier from Mar 2000, along with the surrounding thread:
http://www.ussg.iu.edu/hypermail/linux/kernel/0003.0/0887.html
Well, 1GB is something very different from what I would propose. Replace 1GB with 256KB and '10 times 100KB' with '4 times 64KB' and you're closer to home.
But if Jamie Lokier doesn't see any reason to have a limit (!), I rest my case.
Maybe Jamie has had a change of heart since he hasn't implemented it in the three and a half years since that was written ;o)
Cheers, Sean
On Thu, 11 Sep 2003, Dag Wieers wrote:
I just gave you an example where it isn't very practical to use xargs because you have to type your passphrase multiple times. Automating that, however, would make it even more ugly, and all that because I can only have 128KB in arguments. You see how silly computers can be ;)
That isn't an argument in favour of increasing the commandline size; it's an argument in favour of filing a bug report / enhancement request against rpm or whatever application to allow it to read filenames from a text file instead.
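Until an application grows such an option, the nearest shell-level approximation is to keep the list in a file and feed it to xargs on stdin (paths illustrative) -- though xargs still batches, so the per-invocation passphrase prompt remains:

    find /srv/repo -name '*.rpm' > /tmp/pkglist
    xargs rpm --addsign < /tmp/pkglist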
On Sep 13, 2003, "Mike A. Harris" mharris@redhat.com wrote:
enhancement request against rpm or whatever application to allow it to read filenames from a text file instead.
And making sure the app opens the file in LFS (64-bit) mode. After all, 4GB of arguments will soon no longer be enough :-P :-D
On Thu, 11 Sep 2003, Sean Estabrooks wrote:
I'm not in favor of increasing it to 64 per se, but at least to something higher than 32 (as I've already run into this limit on several occasions where the only solution was to work around it in a bad way).
man xargs
Seems plausible but impractical in some situations. E.g. you're re-signing thousands of files, too many for the argument list. Doing it one by one would force you to enter your passphrase a thousand times.
You could split it up like the document says, but even that will make it harder (there's no guarantee that [a-n] will not be too long). Or you could start writing an expect script that enters your passphrase... Sure.
But the easiest solution IMO would be to increase the size so that it doesn't occur that often in practice.
Dag,
You just don't hear many people complaining about this as a problem. xargs packs as many arguments as will fit into each invocation of the command (and the -s option lets you cap the command-line size). This limit isn't reached very often in practice, and no matter what value you pick, someone will find a command that exceeds the limit.
It usually doesn't take much effort to organize things so that this just isn't an issue. Perhaps if you posted a real script where you are having problems, suggestions could be made on alternative methods.
That's basically the bottom line. The number of cases in which xargs is even needed is pretty small, and the number of cases in which there are other problems due to this is also very small. The number of problems that would be created by increasing the commandline size is much larger in scope than the number of problems that would be solved by it, and it still would not solve the problem unless the commandline were made infinite in length.
A proper solution in the case of rpm signing of files, would be to use a file list stored in a text file instead of the commandline. If rpm does not already allow filenames to be read from a file for gpg signing, and this was indeed a _real_ problem for someone (not just hypothetical), I'm sure jbj would accept a patch to rpm to allow it to read from a file the names of all packages needing signing.
Other software likely would too (if it doesn't already). There's almost always more than one way to skin a cat, and when there isn't, may the source be with you.
"Dag" == Dag Wieers dag@wieers.com writes:
man xargs
Dag> Seems plausible but impractical in some situations. E.g. you're re-signing
Dag> thousands of files, too many for the argument list. Doing it one by one
Dag> would force you to enter your passphrase a thousand times.
Occasionally you have to change a program to let you specify a list of arguments in some other way. For instance, this came up for libtool with very large numbers of object files.
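The traditional workaround in that territory is incremental archiving; a sketch:

    # 'ar r' adds or replaces members, so successive xargs batches
    # accumulate into one archive without one huge command line:
    find . -name '*.o' -print0 | xargs -0 ar r libbig.a
    ranlib libbig.a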
Tom
On 11 Sep 2003, Tom Tromey wrote:
"Dag" == Dag Wieers dag@wieers.com writes:
man xargs
Dag> Seems plausible but impractical in some situations. E.g. you're re-signing
Dag> thousands of files, too many for the argument list. Doing it one by one
Dag> would force you to enter your passphrase a thousand times.
Occasionally you have to change a program to let you specify a list of arguments in some other way. For instance, this came up for libtool with very large numbers of object files.
Ok, then I'm on the right list after all ;)
Please, Red Hat, change rpm so that I can sign packages without giving them as arguments and without having to enter my passphrase for each (set of) packages.
This seems really silly to me, however, and this was just an example. Fact is that many scripts suffer from this, and as usual people will only notice when the script fails (with potential data loss, manual correction and other horrible things). </scare-tactics>
-- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [Any errors in spelling, tact or fact are transmission errors]
On Thu, 11 Sep 2003 18:30:49 +0200 (CEST) Dag Wieers dag@wieers.com wrote:
On 11 Sep 2003, Tom Tromey wrote:
> "Dag" == Dag Wieers dag@wieers.com writes:
man xargs
Dag> Seems plausible but impractical in some situations. E.g. you're re-signing
Dag> thousands of files, too many for the argument list. Doing it one by one
Dag> would force you to enter your passphrase a thousand times.
Occasionally you have to change a program to let you specify a list of arguments in some other way. For instance, this came up for libtool with very large numbers of object files.
Ok, then I'm on the right list after all ;)
Please, Red Hat, change rpm so that I can sign packages without giving them as arguments and without having to enter my passphrase for each (set of) packages.
This seems really silly to me, however, and this was just an example. Fact is that many scripts suffer from this, and as usual people will only notice when the script fails (with potential data loss, manual correction and other horrible things). </scare-tactics>
Dag,
But this is _exactly_ the point that goes against your own argument. If you accept that there has to be some upper limit, then a properly written script will always have to guard against such overflows, no matter what limit you pick.
Cheers, Sean
On Thu, 11 Sep 2003, Sean Estabrooks wrote:
On Thu, 11 Sep 2003 18:30:49 +0200 (CEST) Dag Wieers dag@wieers.com wrote:
On 11 Sep 2003, Tom Tromey wrote:
>> "Dag" == Dag Wieers dag@wieers.com writes:
Ok, then I'm on the right list after all ;)
Please, Red Hat, change rpm so that I can sign packages without giving them as arguments and without having to enter my passphrase for each (set of) packages.
This seems really silly to me, however, and this was just an example. Fact is that many scripts suffer from this, and as usual people will only notice when the script fails (with potential data loss, manual correction and other horrible things). </scare-tactics>
But this is _exactly_ the point that goes against your own argument. If you accept that there has to be some upper limit, then a properly written script will always have to guard against such overflows, no matter what limit you pick.
I'm not arguing that if you increase the limit, there's no limit anymore. (Doh!) The only thing I'm trying to say is that maybe after 10 years (or when was this 128KB limit added?) memory isn't much of a problem anymore, and for the common case 128KB could easily become 256KB without anyone noticing (and without causing any more problems).
And as you say most people wouldn't have this problem; well, increasing it to 256KB will not cause any extra problems either. It's not that you are obliged to fill the 256KB or anything. No overhead.
So, since you haven't had any problems with it, why are you against it?
-- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [Any errors in spelling, tact or fact are transmission errors]
On Thu, 11 Sep 2003 22:55:08 +0200 (CEST) Dag Wieers dag@wieers.com wrote:
But this is _exactly_ the point that goes against your own argument. If you accept that there has to be some upper limit, then a properly written script will always have to guard against such overflows, no matter what limit you pick.
I'm not arguing that if you increase the limit, there's no limit anymore. (Doh!) The only thing I'm trying to say is that maybe after 10 years (or when was this 128KB limit added?) memory isn't much of a problem anymore, and for the common case 128KB could easily become 256KB without anyone noticing (and without causing any more problems).
And as you say most people wouldn't have this problem; well, increasing it to 256KB will not cause any extra problems either. It's not that you are obliged to fill the 256KB or anything. No overhead.
So, since you haven't had any problems with it, why are you against it?
Hey Dag,
I'm not exactly against it, but here are my reservations:
- there isn't any dire or pressing need
- it would mask bugs in broken scripts instead of fixing them
- scripts that run on Red Hat shouldn't break on other systems (i.e. this would be a Linus change, not a RH change)
It's not even clear that it would be a good thing if done by the kernel developers, but I promise not to riot in the streets if they decide to make the bump ;o)
Really, I'm just lobbying for people to get more comfortable with the solutions provided in the article you quoted. Now... I'm about to run out of buffer space for this argument ;o)
Cheers, Sean.
On Thu, 11 Sep 2003, Dag Wieers wrote:
Ok, then I'm on the right list after all ;)
Please, Red Hat, change rpm so that I can sign packages without giving them as arguments and without having to enter my passphrase for each (set of) packages.
This seems really silly to me, however, and this was just an example. Fact is that many scripts suffer from this, and as usual people will only notice when the script fails (with potential data loss, manual correction and other horrible things). </scare-tactics>
But this is _exactly_ the point that goes against your own argument. If you accept that there has to be some upper limit, then a properly written script will always have to guard against such overflows, no matter what limit you pick.
I'm not arguing that if you increase the limit, there's no limit anymore. (Doh!) The only thing I'm trying to say is that maybe after 10 years (or when was this 128KB limit added?) memory isn't much of a problem anymore, and for the common case 128KB could easily become 256KB without anyone noticing (and without causing any more problems).
There's one definite immediate problem it would cause: if it did actually work for someone, then their script would break on systems that didn't have the 256KB limit, and the problem would be worse.
And as you say most people wouldn't have this problem; well, increasing it to 256KB will not cause any extra problems either. It's not that you are obliged to fill the 256KB or anything. No overhead.
So, since you haven't had any problems with it, why are you against it?
Probably because it is the wrong solution to a rare, non-general-case problem, one that has ramifications affecting everyone and that breaks compatibility with virtually every piece of software out there.
On Thu, 11 Sep 2003, Dag Wieers wrote:
On Tue, 9 Sep 2003, Mike A. Harris wrote:
On Mon, 8 Sep 2003, Dag Wieers wrote:
I'm not in favor of increasing it to 64 per se, but at least to something higher than 32 (as I've already run into this limit on several occasions where the only solution was to work around it in a bad way).
man xargs
Seems plausible but impractical in some situations. E.g. you're re-signing thousands of files, too many for the argument list. Doing it one by one would force you to enter your passphrase a thousand times.
xargs isn't called once per argument, so you wouldn't do it 1000 times. A few times perhaps. Red Hat signs tonnes of RPM packages, and we never seem to have a problem with this. ;o)
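Easy to check: echo runs once per xargs batch, so counting its output lines counts invocations (names generated purely for illustration):

    seq 1 10000 | sed 's/.*/pkg-&.rpm/' | xargs echo | wc -l

At roughly a dozen bytes per name, ten thousand names come out to only a few batches.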
You could split it up like the document says, but even that will make it harder (there's no guarantee that [a-n] will not be too long). Or you could start writing an expect script that enters your passphrase... Sure.
But the easiest solution IMO would be to increase the size so that it doesn't occur that often in practice.
And break a lot of software in the process.
On Sat, 13 Sep 2003, Mike A. Harris wrote:
On Thu, 11 Sep 2003, Dag Wieers wrote:
On Tue, 9 Sep 2003, Mike A. Harris wrote:
On Mon, 8 Sep 2003, Dag Wieers wrote:
I'm not in favor of increasing it to 64 per se, but at least to something higher than 32 (as I've already run into this limit on several occasions where the only solution was to work around it in a bad way).
man xargs
Seems plausible but impractical in some situations. E.g. you're re-signing thousands of files, too many for the argument list. Doing it one by one would force you to enter your passphrase a thousand times.
xargs isn't called once per argument, so you wouldn't do it 1000 times. A few times perhaps. Red Hat signs tonnes of RPM packages, and we never seem to have a problem with this. ;o)
You people aren't using a passphrase? ;)
-- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [Any errors in spelling, tact or fact are transmission errors]
On Sat, 13 Sep 2003, Dag Wieers wrote:
Seems plausible but impractical in some situations. E.g. you're re-signing thousands of files, too many for the argument list. Doing it one by one would force you to enter your passphrase a thousand times.
xargs isn't called once per argument, so you wouldn't do it 1000 times. A few times perhaps. Red Hat signs tonnes of RPM packages, and we never seem to have a problem with this. ;o)
You people aren't using a passphrase? ;)
Of course there is a passphrase.