I know that the ext3 fs can handle files larger than 2GiB, so why does sendmail quit relaying messages when its /var/log/sm-mta hits that threshold?
On Tue, 20 Feb 2007 07:31:17 -0700, Ashley M. Kirchner wrote
I know that the ext3 fs can handle files larger than 2GiB, so why does sendmail quit relaying messages when its /var/log/sm-mta hits that threshold?
What is your MAX_MESSAGE_SIZE in your sendmail.cf / sendmail.mc?
After some digging I found out that it's sysklogd that stops at the 2 GiB limit. 32-bit, anyone? So, is there a reason why this is done? And do I have to rebuild the sysklogd rpm to enable large file support?
Or should I not bother and install syslog-ng instead? Does that support files larger than 2 GiB?
On Tue, 20 Feb 2007, Ashley M. Kirchner wrote:
After some digging I found out that it's sysklogd that stops at the 2 GiB limit. 32-bit, anyone? So, is there a reason why this is done? And do I have to rebuild the sysklogd rpm to enable large file support?
Why on earth do you have logs that are larger than 2GiB? Is logrotate working correctly?
--
21:50:04 up 2 days, 9:07, 0 users, load average: 0.92, 0.37, 0.18
---------------------------------------------------------
Lic. Martín Marqués      |   SELECT 'mmarques' ||
Centro de Telemática     |     '@' || 'unl.edu.ar';
Universidad Nacional     |   DBA, Programador,
      del Litoral        |   Administrador
---------------------------------------------------------
Martin Marques wrote:
Why on earth do you have logs that are larger than 2GiB? Is logrotate working correctly?
We rotate ALL logs once a month, and yes, it's rotating fine. Except that we have enough mail traffic to push the log past that 2GiB limit. I really don't want to have to rotate it more often and end up with multiple files for the same month. All of our archives are done per month (so far one DVD per month, though that's getting close to full).
On Tue, 2007-02-20 at 13:23 -0700, Ashley M. Kirchner wrote:
Martin Marques wrote:
Why on earth do you have logs that are larger than 2GiB? Is logrotate working correctly?
We rotate ALL logs once a month, and yes, it's rotating fine. Except that we have enough mail traffic to push the log past that 2GiB limit. I really don't want to have to rotate it more often and end up with multiple files for the same month. All of our archives are done per month (so far one DVD per month, though that's getting close to full).
----
I use RHEL/CentOS for servers and they rotate syslog(s) every week, not every month. I'm surprised that Fedora would do differently than RHEL.
Craig
Craig White wrote:
I use RHEL/CentOS for servers and they rotate syslog(s) every week, not every month. I'm surprised that Fedora would do differently than RHEL.
I didn't say that it's Fedora's default rotation, did I? I said WE rotate once a month. In other words, WE adjust our rotation configuration to do it once a month, and WE keep 12 months' worth of logs.
However, this isn't about the rotation frequency. I can hit 2GiB in a week easily, but that's not the problem here. This is about the *limitation* that sysklogd has, and the question as to WHY. There's, in my mind, absolutely no reason for this limitation, and yet it seems to have escaped everyone, probably because everyone rotates weekly.
On Tue, 2007-02-20 at 15:47 -0700, Ashley M. Kirchner wrote:
Craig White wrote:
I use RHEL/CentOS for servers and they rotate syslog(s) every week, not every month. I'm surprised that Fedora would do differently than RHEL.
I didn't say that it's Fedora's default rotation, did I? I said WE rotate once a month. In other words, WE adjust our rotation configuration to do it once a month, and WE keep 12 months' worth of logs.
However, this isn't about the rotation frequency. I can hit 2GiB in a week easily, but that's not the problem here. This is about the *limitation* that sysklogd has, and the question as to WHY. There's, in my mind, absolutely no reason for this limitation, and yet it seems to have escaped everyone, probably because everyone rotates weekly.
----
as Jim pointed out, the 2GB file limitation comes from storing file offsets in a signed 32-bit integer on a 32-bit OS. Moreover, performance deteriorates when syslog has to append to the end of very large log files, so it makes more sense to rotate more frequently and probably automatically tar/gzip the results. Whatever amount you retain, the style of retention (compressed or uncompressed), and the frequency of rotation are clearly under your control.
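To see where that number comes from, here's a minimal check -- nothing sysklogd-specific, just a sketch showing that off_t is only 32 bits on a 32-bit build unless large file support is asked for:

    /* lfs_check.c -- sketch: how big is off_t?
     * On a 32-bit box:
     *   gcc lfs_check.c && ./a.out                        -> 4 bytes (max offset 2^31-1)
     *   gcc -D_FILE_OFFSET_BITS=64 lfs_check.c && ./a.out -> 8 bytes
     */
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        printf("sizeof(off_t) = %u bytes\n", (unsigned)sizeof(off_t));
        return 0;
    }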
Craig
I didn't say that it's Fedora's default rotation, did I? I said WE rotate once a month. In other words, WE adjust our rotation configuration to do it once a month, and WE keep 12 months' worth of logs.
However, this isn't about the rotation frequency. I can hit 2GiB in a week easily, but that's not the problem here. This is about the *limitation* that sysklogd has, and the question as to WHY. There's, in my mind, absolutely no reason for this limitation, and yet it seems to have escaped everyone, probably because everyone rotates weekly.
If I follow this discussion: http://www.mail-archive.com/devel@kannel.org/msg06223.html
you may be able to recompile sysklogd and have it get past the 2G limit. It might be easier to move the rotation frequency back to one week, and then glom the rotated files together once a month via cron.
If recompiling sysklogd is the answer, a bug should probably be filed?
Chris
Chris Mohler wrote:
If I follow this discussion: http://www.mail-archive.com/devel@kannel.org/msg06223.html
you may be able to recompile sysklogd and have it get past the 2G limit. It might be easier to move the rotation frequency back to one week, and then glom the rotated files together once a month via cron.
That was easier said than done. Once I recompiled sysklogd and installed the new binary, restarted and all that jazz, when I tried to start sendmail back up (with the >2 GiB file in place) I got this:
> service sendmail start
Starting sendmail: 451 4.0.0 cannot open /var/log/sm-mta: File too large
If I'm not mistaken, those error codes are sendmail's. *sigh*
Now I have to write another script just to work in conjunction with logrotate and rotate twice a month. At the end of the month, take both files and cat them together and then zip it up. This is just stupid in my opinion.
On Tue, 2007-02-20 at 20:05 -0700, Ashley M. Kirchner wrote:
Chris Mohler wrote:
If I follow this discussion: http://www.mail-archive.com/devel@kannel.org/msg06223.html
you may be able to recompile sysklogd and have it get past the 2G limit. It might be easier to move the rotation frequency back to one week, and then glom the rotated files together once a month via cron.
That was easier said than done. Once I recompiled sysklogd and
installed the new binary, restarted and all that jazz, when I tried to start sendmail back up (with the >2 GiB file in place) I got this:
> service sendmail start
Starting sendmail: 451 4.0.0 cannot open /var/log/sm-mta: File too large

If I'm not mistaken, those error codes are sendmail's. *sigh*

Now I have to write another script just to work in conjunction with
logrotate and rotate twice a month. At the end of the month, take both files and cat them together and then zip it up. This is just stupid in my opinion.
----
a script samurai would welcome the challenge.
Come to think of it, I don't recall ever seeing savvy administrators whining over a new scripting challenge, especially something that is this easy.
Craig
Craig White wrote:
Come to think of it, I don't recall ever seeing savvy administrators whining over a new scripting challenge, especially something that is this easy.
It wasn't meant as a whine in the sense that it can't be done or that I had to do it, but more a complaint about having to do it *because* of someone else's decision to implement that limitation. I'm being forced to do one of two things: either rotate logs more often and write a wrapper around the rotation, or rebuild syslog (and whatever other utilities are needed to make this work). Neither would be necessary if that limit weren't implemented. Again, I don't see any reason why it still exists.
Ashley M. Kirchner wrote:
Craig White wrote:
Come to think of it, I don't recall ever seeing savvy administrators whining over a new scripting challenge, especially something that is this easy.
It wasn't meant as a whine in the sense that it can't be done or that I had to do it, but more a complaint about having to do it *because* of someone else's decision to implement that limitation. I'm being forced to do one of two things: either rotate logs more often and write a wrapper around the rotation, or rebuild syslog (and whatever other utilities are needed to make this work). Neither would be necessary if that limit weren't implemented. Again, I don't see any reason why it still exists.
It isn't that someone decided to implement a limitation, but that they didn't program around a limitation of 32-bit processors. The limit is imposed by using the standard GNU libc as compiled by gcc on 32-bit processors. Considering that a 1G hard drive was a large drive at the time it was implemented, it was not unreasonable to accept a 2G size limit. The fact that system memory and hard drive sizes progressed much faster than processor word size has led to ways of handling larger files on 32-bit processors, but you have to use them. It tends to be more than just changing a function call. It also tends to make the program run slower, because you need more CPU instructions to do the same thing.
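Here is roughly what that looks like by hand -- a sketch using glibc's explicit LFS interfaces, not sysklogd's actual code (the log path is just the one from this thread):

    /* Sketch: explicit 64-bit file interfaces on a 32-bit system,
     * usable without rebuilding everything with -D_FILE_OFFSET_BITS=64. */
    #define _LARGEFILE64_SOURCE
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Without O_LARGEFILE, open() of a >2 GiB file fails with
         * EOVERFLOW, and writes past 2 GiB fail with EFBIG. */
        int fd = open("/var/log/sm-mta", O_WRONLY | O_APPEND | O_LARGEFILE);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        off64_t end = lseek64(fd, 0, SEEK_END);  /* 64-bit offset */
        printf("log is %lld bytes\n", (long long)end);
        close(fd);
        return 0;
    }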
I don't believe the same programs, compiled on a 64-bit system, have the same problem. When you are using the 64-bit version, the sizes used are 64-bit instead of 32-bit, without changing anything in the program. This is part of the reason people are moving to 64-bit processors and the 64-bit versions of Linux.
Mikkel
Mikkel L. Ellertson wrote:
It isn't that someone decided to implement a limitation, but that they didn't program around a limitation of 32 bit processors.
Somebody decided on the size of an off_t back then.
The limit is imposed by using the standard GNU libc as compiled by gcc on 32-bit processors. Considering that a 1G hard drive was a large drive at the time it was implemented, it was not unreasonable to accept a 2G size limit.
I think it was unreasonable for a year or two of probably unmeasurably small performance gain to force an ugly workaround to be needed for the rest of the life of 32-bit systems.
The fact that system memory and hard drive sizes progressed much faster than processor word size has led to ways of handling larger files on 32-bit processors, but you have to use them. It tends to be more than just changing a function call.
Preprocessor macros do all the grunt work, and pretty much every program has made the change by now. You just run into one that didn't bother every now and then.
It also tends to make the program run slower, because you need more cpu instructions to do the same thing.
But compared to anything involving a disk access, a couple of CPU cycles aren't going to make a difference.
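For example, built with "gcc -D_FILE_OFFSET_BITS=64", perfectly ordinary stdio code handles big files with no source changes -- the macros quietly map fopen to fopen64, off_t to off64_t, and so on. A sketch, with a made-up filename:

    /* Sketch: unchanged source; large file support comes entirely
     * from the compile flag:  gcc -D_FILE_OFFSET_BITS=64 append.c */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("big.log", "a");   /* transparently fopen64 */
        if (!f) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "another log line\n");
        /* ftello() returns off_t, now 64 bits wide, so offsets past
         * 2 GiB are reported correctly. */
        printf("now at offset %lld\n", (long long)ftello(f));
        fclose(f);
        return 0;
    }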
On 2/21/07, Les Mikesell <lesmikesell@gmail.com> wrote:
Mikkel L. Ellertson wrote:
It isn't that someone decided to implement a limitation, but that they didn't program around a limitation of 32 bit processors.
Somebody decided on the size of an off_t back then.
The limit is imposed by using the standard GNU libc as compiled by gcc on 32-bit processors. Considering that a 1G hard drive was a large drive at the time it was implemented, it was not unreasonable to accept a 2G size limit.
I think it was unreasonable for a year or two of probably unmeasurably small performance gain to force an ugly workaround to be needed for the rest of the life of 32-bit systems.
Then what is reasonable? Make an off_t 64 bits long? Why stop there? Sure, a 2^63 byte file sounds huge (admittedly it is: 8, uh... exabytes), but remember that it wasn't that long ago that a 2^31 byte file sounded enormous. Just some food for thought.
There are many instances where people in this industry have been rather short-sighted. A famous Bill Gates quote comes to mind. And then there was the "Y2K bug," even though it never amounted to much in reality. We are not in the habit of programming for growth. Whether we should seems easily answerable as "yes." The "how much?" question is much more debatable.
That said, with where we are, efforts should be made to ensure that all programs can deal with things like > 2 (or 4) GB files. Things are quickly progressing toward 64-bit and even under 32-bit we can easily have files much bigger than that. The longer this "legacy" code hangs around the more painful it will be to fix it later.
Jonathan
Jonathan Berry wrote:
It isn't that someone decided to implement a limitation, but that they didn't program around a limitation of 32 bit processors.
Somebody decided on the size of an off_t back then.
The limit is imposed by using the standard GNU libc as compiled by gcc on 32-bit processors. Considering that a 1G hard drive was a large drive at the time it was implemented, it was not unreasonable to accept a 2G size limit.
I think it was unreasonable for a year or two of probably unmeasurably small performance gain to force an ugly workaround to be needed for the rest of the life of 32-bit systems.
Then what is reasonable? Make an off_t 64 bits long?
That's pretty obvious in retrospect, but I suppose everyone thought we'd have had 64-bit ints long ago and it would happen naturally. Who could have guessed that Windows binary backwards compatibility would be a requirement and that it would take so long to produce it?
Why stop there? Sure, a 2^63 byte file sounds huge (admittedly it is: 8, uh... exabytes), but remember that it wasn't that long ago that a 2^31 byte file sounded enormous. Just some food for thought.
There are lots of things in human terms where 32 bits aren't enough. Not so many with 64 bits. Maybe we'll start counting in smaller units or something.
There are many instances where people in this industry have been rather short-sighted. A famous Bill Gates quote comes to mind. And then there was the "Y2K bug," even though it never amounted to much in reality. We are not in the habit of programming for growth.
Until Y2K proved otherwise, we were in the habit of assuming that the shortcuts made in programming would be replaced by something better before they did any harm. The reason Y2K didn't cause real problems was that every company where it could have done harm spent an enormous amount of time and money checking and fixing things ahead of time. I suspect we'll actually see more problems next month when everyone's Outlook appointments are off by an hour from the DST move.
That said, with where we are, efforts should be made to ensure that all programs can deal with things like > 2 (or 4) GB files. Things are quickly progressing toward 64-bit and even under 32-bit we can easily have files much bigger than that. The longer this "legacy" code hangs around the more painful it will be to fix it later.
What's done is done, and backwards compatibility is a good and necessary thing, but it means that every program needs to be rebuilt in a way that invokes the macros for large file support. And not all of them have been yet, as this thread demonstrates.
On Thu, 2007-02-22 at 01:02 -0600, Les Mikesell wrote:
I suspect we'll actually see more problems next month when everyone's Outlook appointments are off by an hour from the DST move.
----
I suspect that if this turns out to be an issue for a bunch of people, they will get a glimpse of proprietary software looking for an opportunity to sell an upgrade. C'est la vie.
Craig
On Thu, 2007-02-22 at 08:04 -0700, Craig White wrote:
On Thu, 2007-02-22 at 01:02 -0600, Les Mikesell wrote:
I suspect we'll actually see more problems next month when everyone's Outlook appointments are off by an hour from the DST move.
I suspect that if this turns out to be an issue for a bunch of people, they will get a glimpse of proprietary software looking for an opportunity to sell an upgrade. C'est la vie.
Or build the binary yourself and include the compiler flags that enable large file support, e.g. "gcc -D_FILE_OFFSET_BITS=64 ...". Or add the following to one of the main include files:
#define _LARGEFILE_SOURCE 1
#define _LARGEFILE64_SOURCE 1
and rebuild. I used the first approach. Works fine.
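A cheap way to be sure the flag actually took effect, instead of finding out at 2 GiB: a compile-time check that breaks the build if off_t is still 32 bits. A generic C trick, nothing from the sysklogd sources:

    /* Sketch: the negative array size is a compile error whenever
     * sizeof(off_t) != 8, i.e. when large file support is off. */
    #include <sys/types.h>

    typedef char assert_lfs_enabled[sizeof(off_t) == 8 ? 1 : -1];

    int main(void)
    {
        return 0;  /* nothing at run time; the check happens at compile time */
    }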
----------------------------------------------------------------------
- Rick Stevens, Senior Systems Engineer      rstevens@vitalstream.com -
- VitalStream, Inc.                        http://www.vitalstream.com -
-                                                                     -
- I don't suffer from insanity...I enjoy every minute of it!          -
----------------------------------------------------------------------
Ashley M. Kirchner wrote:
That was easier said than done. Once I recompiled sysklogd and installed the new binary, restarted and all that jazz, when I tried to start sendmail back up (with the >2 GiB file in place) I got this:
service sendmail start
Starting sendmail: 451 4.0.0 cannot open /var/log/sm-mta: File too large
If I'm not mistaken, those error codes are sendmail's. *sigh*
Now I have to write another script just to work in conjunction with logrotate and rotate twice a month. At the end of the month, take both files and cat them together and then zip it up. This is just stupid in my opinion.
Well, the problem has more to do with how libc is written, and with the default 32-bit size of types like int and off_t in C on a 32-bit machine. It is not an easy fix. I leave it to someone else to explain the performance hit you would take trying to change it everywhere.
Now, as far as getting logrotate to rotate the mail logs more often: remove /var/log/maillog from /etc/logrotate.d/syslog and create your own entry just for that log file. You may want to consider rotating based on size, or rotating daily/weekly, and using the dateext option so rotated logs are stamped with the date they were created. If you go this route, don't forget to increase the rotate count in your maillog rule.
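Something along these lines, say -- an illustrative sketch only, not a drop-in config; the size threshold, rotate count, and postrotate command all need adjusting to the local setup:

    # /etc/logrotate.d/maillog -- illustrative sketch only.
    # "size" overrides the weekly/monthly schedule: the log is rotated
    # whenever it grows past the threshold, here just under 2 GiB.
    # "dateext" stamps rotations with a date instead of .1, .2, ...
    /var/log/maillog {
        size 1900M
        rotate 24
        dateext
        compress
        missingok
        postrotate
            /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
        endscript
    }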
I used to have a call logging program that would rotate logs daily, with a month/day extension, and then a monthly backup that would bundle the daily logs into a monthly archive and remove them. If you are going to archive the logs, it can be handy to be able to grab the logs for a specific day. I know, you can use grep to pull out matching dates, but if the logs are large, having smaller files to work with helps.
Mikkel
On Tuesday 20 February 2007, Craig White wrote:
On Tue, 2007-02-20 at 13:23 -0700, Ashley M. Kirchner wrote:
Martin Marques wrote:
Why on earth do you have logs that are larger than 2GiB? Is logrotate working correctly?
We rotate ALL logs once a month, and yes, it's rotating fine. Except that we have enough mail traffic to push the log past that 2GiB limit. I really don't want to have to rotate it more often and end up with multiple files for the same month. All of our archives are done per month (so far one DVD per month, though that's getting close to full).
I use RHEL/CentOS for servers and they rotate syslog(s) every week, not every month. I'm surprised that Fedora would do differently than RHEL.
Craig
They don't; it's weekly. I think he has been playing with his scripts.
On Tue, 2007-02-20 at 13:23 -0700, Ashley M. Kirchner wrote:
Except that we have enough mail traffic to push the log past that 2GiB limit.
Just for curiosity's sake, with that amount of data logging, are you keeping copies of all messages in their entirety?
Tim wrote:
Just for curiosity's sake, with that amount of data logging, are you keeping copies of all messages in their entirety?
Look in your /var/log/sm-mta; that is what's archived every month. /var/mail/ gets backed up every night, but not archived. So if your question relates to me being able to recover a lost e-mail from someone's INBOX, then no, I'm not keeping entire messages. However, I *can* extract the contents of the message from the log file and give that to the user. And for our clients, that's good enough.
Martin Marques wrote:
After some digging I found out that it's sysklogd that stops at the 2 GiB limit. 32-bit, anyone? So, is there a reason why this is done? And do I have to rebuild the sysklogd rpm to enable large file support?
Why on earth do you have logs that are larger than 2GiB? Is logrotate working correctly?
A better question is why there is still anything with a 2 gig file size limit. Or why was there ever one in Linux, given that Unix should already have been going through the pain of conversion by the time Linux distributions were being built?
Les Mikesell wrote:
A better question is why there is still anything with a 2 gig file size limit. Or why was there ever one in Linux, given that Unix should already have been going through the pain of conversion by the time Linux distributions were being built?
When Linux distributions were first being built, the filesystem had a 64 MB limit. That's for the entire filesystem...
As for why there are still 2 gig limits -- for one thing, if you're going to memory map a file, and use memory operations to read and write the file, and you're using a 32 bit computer, then the 2 gig limit comes with the territory. Memory-mapping files is a very useful technique, and using 64-bit file accesses is inherently much slower on a 32 bit processor (and it matters with memory-mapped files).
The other main reason (and the one I suspect applies here) is that it's not considered worth the complexity: not worth paying the real price of extra complexity to be able to handle large files to get the theoretical benefit of having log files over 2 GB on 32 bit systems.
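To make the memory-mapping point concrete: a 32-bit process only has about 3 GiB of usable address space, so mapping a whole multi-gigabyte file in one piece simply cannot work, however off_t is sized. A sketch (build with -D_FILE_OFFSET_BITS=64 so fstat() itself doesn't choke on the big file):

    /* Sketch: whole-file mmap() ties file size to address space. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* On 32-bit, size_t is 32 bits wide: a mapping bigger than the
         * address space cannot succeed and fails with ENOMEM. */
        void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            perror("mmap");
        else
            munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }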
James.
James Wilkinson wrote:
A better question is why there is still anything with a 2 gig file size limit. Or why was there ever one in Linux, given that Unix should already have been going through the pain of conversion by the time Linux distributions were being built?
When Linux distributions were first being built, the filesystem had a 64 MB limit. That's for the entire filesystem...
I would hope everyone involved knew that would be a temporary limitation, and that imposing any particular filesystem's limits on the kernel would be short-sighted. Now 2 gigs is a dollar's worth of disk space.
As for why there are still 2 gig limits -- for one thing, if you're going to memory map a file, and use memory operations to read and write the file, and you're using a 32 bit computer, then the 2 gig limit comes with the territory. Memory-mapping files is a very useful technique, and using 64-bit file accesses is inherently much slower on a 32 bit processor (and it matters with memory-mapped files).
But that shouldn't limit your file size - and doesn't anymore for nearly everything.
The other main reason (and the one I suspect applies here) is that it's not considered worth the complexity: not worth paying the real price of extra complexity to be able to handle large files to get the theoretical benefit of having log files over 2 GB on 32 bit systems.
The benefit isn't theoretical if you have more than 2 gigs of data. I thought the default now was to compile with large file support, and had been for some time. Does that mean someone is still intentionally imposing tiny limits, or that parts of the system haven't been rebuilt for ages? I've had another program or two croak when hitting this no-longer-relevant limit.