When I ran "yum update", when it started to install the updates it said something like cpu 0 overheating cpu 1 overheating. Then my screen went black for a second and the next thing I new is I was at the login screen.
Having problems like this in Linux is unacceptable. That could have caused major damage. Things like this need to be worked out. Linux needs to focus more on the here and now and not keep pushing for the newest technology.
Don't get me wrong, I think Linux is a gem. I like the GPL, but I don't like how companies can use people's software and make a profit. The people who make the software should be getting paid if anyone is getting paid.
Once again, things like this are unacceptable. I am afraid to use Fedora 6 now. I am also having problems after a clean installation.
Kind Regards, Mike
On 12/12/06, Mike Chalmers mikechalmers70@gmail.com wrote:
When I ran "yum update", when it started to install the updates it said something like cpu 0 overheating cpu 1 overheating. Then my screen went black for a second and the next thing I new is I was at the login screen.
Having problems like this in Linux is unacceptable. That could have caused major damage. Things like this need to be worked out. Linux needs to focus more on the here and now and not keep pushing for the newest technology.
Don't get me wrong I think Linux is a gem. I like the GPL but I don't like how company's can use people's software and make profit. The people that make the software should be getting paid if anyone is getting paid.
Once again, things like this are unacceptable. I am afraid to use Fedora 6 now. I alos, have am having problems after a clean installation.
Kind Regards, Mike
Have you considered that maybe your CPU really was overheating and Linux didn't have much to do with it?
Something similar happened to me recently, after a kernel upgrade. I thought it was the new kernel; it turned out that the CPU was overheating. I cleaned out the dust and put in a new CPU fan with new thermal gel. Now I'm running cooler than before.
Mike Chalmers wrote:
When I ran "yum update", when it started to install the updates it said something like cpu 0 overheating cpu 1 overheating. Then my screen went black for a second and the next thing I new is I was at the login screen.
Ouch.
If the CPU is overheating, then that's a hardware problem. Check your fans, check BIOS settings to see what the temperature really is, check that your heatsinks are properly attached. And check that your backups are good.
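If you would rather not reboot just to watch the BIOS readout, 2.6-era kernels usually expose the same sensor through ACPI. A minimal sketch, assuming the ACPI thermal driver is loaded; the zone name (THRM here) varies by board, and the figure shown is only an example:

    $ cat /proc/acpi/thermal_zone/THRM/temperature
    temperature:             47 C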
What sort of system is it -- laptop? Desktop? How old is it?
Having problems like this in Linux is unacceptable.
Quite. But blaming Linux is also unacceptable. If software can do this, then that's a hardware fault -- hardware should not react like this *whatever* software does. (For one thing, how do you know that this wouldn't happen if you were running a malicious Java applet?)
Once again, things like this are unacceptable. I am afraid to use Fedora 6 now. I am also having problems after a clean installation.
That's misplaced fear -- fear your *hardware*.
Hope this helps,
James.
On 12/11/06, James Wilkinson fedora@aprilcottage.co.uk wrote:
[...]
I like Linux, a lot. I mean a lot. I like what it stands for (besides the corporations). But my CPU has never overheated. I am pretty sure that it is not my hardware. It could be a bug in Linux. The kernel could be sending incorrect frequencies to the hardware or something like that.
Kind Regards, Preston
On 12/12/06, Mike Chalmers mikechalmers70@gmail.com wrote:
[...]
My CPU had never overheated before then either. I too was pretty sure it was not my hardware, but after I failed to find confirmation that it was a software problem, I checked my hardware. Have you?
Get it to overheat, fast reboot, and check what the BIOS says. If it really is Linux, use an older kernel version where the overheating didn't occur.
Frankly, in the end I was just happy that Linux alerted me to the problem.
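For the older-kernel test, a rough sketch, assuming the previous kernel package was never removed (the version strings below are examples only):

    $ rpm -q kernel
    kernel-2.6.18-1.2849.fc6
    kernel-2.6.18-1.2868.fc6

Reboot and pick the older entry at the GRUB menu, or make it the default in /boot/grub/grub.conf.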
On Tue, 12 Dec 2006, Arthur Pemberton wrote:
[...]
Recently installed FC6 on an old P-III 500MHz. At first, it would reliably do thermal shutdowns during the dependency resolution step. Also did shutdowns during memtest86.
I swapped the CPU package from one of the two slots to the other (tried to get airflow across the package back and probably shook out some dust) and it's been fine ever since.
On 12/11/06, Matthew Saltzman mjs@ces.clemson.edu wrote:
[...]
I will try dusting it off. I still don't know why it never happened before. Matthew, what kind of fan did you use? Price is not a problem for the fan, because that is important. But not too high. Does anyone recommend thermoelectric cooling?
What is a reliable shutdown? I don't think the screen going black and then returning me to the login screen is a reliable shutdown, is it?
I don't think letting it overheat again is a good idea. It could break it. It is an Intel P4 HT 3.0 GHz processor. I can't afford to lose that. If I did run the update again and the CPU overheated, wouldn't that mean it was Linux?
Kind Regards, Preston
On Mon, 11 Dec 2006 13:05:48 -0500 "Mike Chalmers" mikechalmers70@gmail.com wrote:
When I ran "yum update", when it started to install the updates it said something like cpu 0 overheating cpu 1 overheating. Then my
That means your system reported via ACPI that an overheat situation was occurring. There are a couple of reasons this can happen. The first is that your PC is broken, and the fact that "yum update" triggered it is perhaps indicative of this. The second is that the ACPI code in the BIOS is buggy and the warning is bogus.
There are multiple bands of overheating on a PC, and properly designed and built systems will shut down without OS intervention when the temperature hits a hazardous level. This is done because the user may be running a legacy environment (e.g. booting Win95 to play a Win/XP-incompatible game), and also because the OS may have crashed beforehand. Since errors such as memory errors may occur before the risk of damage, the OS is actually quite likely to have crashed by that point.
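Where the ACPI thermal driver exposes them, you can read the bands your particular firmware defines. A sketch, with an assumed zone name and example values:

    $ cat /proc/acpi/thermal_zone/THRM/trip_points
    critical (S5):           99 C
    passive:                 85 C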
Don't get me wrong, I think Linux is a gem. I like the GPL, but I don't like how companies can use people's software and make a profit. The people who make the software should be getting paid if anyone is getting paid.
We are 8)
Alan
Alan wrote:
precisely, I suspect, because vendors didn't want people to be able to write software overclock tools, intentionally or otherwise.
Wouldn't BIOSes that have this feature fall into this category? Or is that a different set of "people" that write software to overclock CPUs?
I like Linux, a lot. I mean a lot. I like what it stands for (besides the corporations). But my CPU has never overheated. I am pretty sure that it is not my hardware. It could be a bug in Linux. The kernel could be sending incorrect frequencies to the hardware or something like that.
Two problems with that hypothesis:
1. We take care to load the precise speed ranges in the BIOS firmware.
2. There is no "overclock" setting in the CPU speed controls. You can't tell the machine to run faster than intended (to many people's disappointment). There is no "11" on the knob.
So the tuning knobs in the kernel are not only carefully checked, but don't support the ability to overclock the CPU - precisely, I suspect, because vendors didn't want people to be able to write software overclock tools, intentionally or otherwise.
Alan
On Mon, 11 Dec 2006 12:01:11 -0700 "Ashley M. Kirchner" ashley@pcraft.com wrote:
Alan wrote:
precisely, I suspect, because vendors didn't want people to be able to write software overclock tools, intentionally or otherwise.
Wouldn't BIOSes that have this feature fall into this category? Or is that a different set of "people" that write software to overclock CPUs?
The BIOS setup deals with the actual master clocks for the board (which are entirely board-dependent, and nobody else touches them). They can set up overclocking. The general-purpose interfaces for speed control (the ones Linux and Windows use) on the CPU don't allow that to be done. Probably if you overclock the CPU by, say, 20%, all your other speeds are 20% higher than they should be, but that may well depend on the board.
Alan
Alan wrote:
The BIOS setup deals with the actual master clocks for the board (which are entirely board-dependent, and nobody else touches them). They can set up overclocking.
Aha. Gotcha. Thanks for the insight.
On Mon, 11 Dec 2006, Mike Chalmers wrote:
On 12/11/06, Matthew Saltzman mjs@ces.clemson.edu wrote:
[...]
I will try dusting it off. I still don't know why it never happened before. Matthew, what kind of fan did you use? Price is not a problem for the fan, because that is important. But not too high. Does anyone recommend thermoelectric cooling?
This machine has a case fan at the end of a plastic tunnel that draws air past the CPU package slots. It doesn't have a heat-sink fan.
What is a reliable shutdown? I don't think the screen going black and then returning me to the login screen is a reliable shutdown, is it?
"Reliable" in the sense that it was not intermittent. My BIOS has a setting that will force power off when thermal violations are detected. That's what was doing the shutdown.
I don't think letting it overheat again is a good idea. It could break it. It is an Intel P4 HT 3.0 GHz processor. I can't afford to lose that. If I did run the update again and the CPU overheated, wouldn't that mean it was Linux?
Linux may make your CPU work hard, but it shouldn't cause it to overheat if the hardware is otherwise OK. See the other posts in this thread.
Kind Regards, Preston
On 12/11/06, Matthew Saltzman mjs@ces.clemson.edu wrote:
[...]
I have decided to dust my system. I have also decided to get this CPU fan, http://www.quietpcusa.com/acb/showdetl.cfm?&DID=8&Product_ID=266&... , and a new power supply. I am also going to upgrade my case fans.
Thank you for the help.
On 12/12/06, Mike Chalmers mikechalmers70@gmail.com wrote:
[...]
I don't think letting it overheat again is a good idea. It could break it. It is an Intel P4 HT 3.0 GHz processor. I can't afford to lose that. If I did run the update again and the CPU overheated, wouldn't that mean it was Linux?
You seem not to be listening to us. Have you bothered to check what the BIOS says? Yum update is just making heavy use of your CPU; it is _incapable_ of causing the problem you described by itself.
On 12/12/06, Mike Chalmers mikechalmers70@gmail.com wrote:
[...]
I have decided to dust my system. I have also decided to get this CPU fan, http://www.quietpcusa.com/acb/showdetl.cfm?&DID=8&Product_ID=266&... , and a new power supply. I am also going to upgrade my case fans.
Thank you for the help.
The description of that fan doesn't seem to detail its air flow.
On 12/11/06, Arthur Pemberton pemboa@gmail.com wrote:
[...]
You seem not to be listening to us. Have you bothered to check what the BIOS says? Yum update is just making heavy use of your CPU; it is _incapable_ of causing the problem you described by itself.
I'm listening; that is why I decided to upgrade my hardware. I am not going to overheat my CPU again just so that I can check the temperatures in the BIOS, though.
Once again thank you all for the help.
Mike Chalmers wrote:
I like Linux, a lot. I mean a lot. I like what it stands for (besides the corporations). But my CPU has never overheated. I am pretty sure that it is not my hardware. It could be a bug in Linux. The kernel could be sending incorrect frequencies to the hardware or something like that.
Hummmm.... Something like that happened to me years ago... Let me think. Oh, right.....
Every morning I would get up to go to work. I would go down to the garage, start my car and drive away. Then, one day, a new Citgo gas station opened up very near my home. On my way home that evening I stopped and filled up. The next morning I went to start my car and it refused to start. It tried and tried. Must be that new Citgo gas... bet it was contaminated, had water in it, or whatever. I mean, it was a new station, so something must have been wrong.
I'm not much into cars, but my neighbor was, and he came over when he saw I was having trouble. He popped the hood, looked down at my battery cables, gave them a few smacks with a hammer and said "try again". The car started just fine. I looked at the guy and said "this never happened before". He looked me straight in the eye and said, "just because something worked just fine yesterday doesn't mean it will work the next day".
James Wilkinson wrote:
Mike Chalmers wrote:
...
Once again, things like this are unacceptable. I am afraid to use Fedora 6 now. I am also having problems after a clean installation.
That's misplaced fear -- fear your *hardware*.
Unless there is a problem with, say, lm_sensors misreading the CPU temperature register and freaking out.
Not impossible, I should think.
Greg
On 12/11/06, Greg Trounson gregt@maths.otago.ac.nz wrote:
[...]
Unless there is a problem with, say, lm_sensors misreading the CPU temperature register and freaking out.
Not impossible, I should think.
Greg
I do not think that lm_sensors is involved in that process.
Arthur Pemberton wrote:
[...]
I do not think that lm_sensors is involved in that process.
Apologies, I should have said acpid.
To add further credibility to the dodgy-software theory, though: what hardware/firmware protection mechanism do you know of that, upon detecting a high CPU temperature, dumps the user back to a login screen?
Returning to the login screen could be either a reboot or X being killed, neither of which a sane BIOS would do in a CPU overheat event. Every BIOS I've ever seen shuts the machine down, which makes more sense if it's overheating.
Greg
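One way to tell after the fact which of the two happened is to check the logs. A sketch, assuming a stock FC6 setup with syslog writing to /var/log/messages:

    $ grep -i 'temperature\|thermal' /var/log/messages   # kernel thermal warnings
    $ last -x | head                                     # any shutdown/reboot records?
    $ ls -l /var/log/Xorg.0.log.old                      # left behind when X restarts

If X simply died and respawned, the system logs will show no reboot at all.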
Linux may make your CPU work hard, but it shouldn't cause it to overheat if the hardware is otherwise OK. See the other posts in this thread.
And there is the key.
Even if you were to run your processor at 100% usage 24/7, if the hardware is properly cooled, it will never overheat. The answer is never going to be "let's stop the CPU from reaching 100% usage, because that will CAUSE overheating". Overheating is caused by poor thermal solutions.
Software is not responsible for poor thermal solutions.
Things I have discovered:
1> A dust-caked heatsink on a Northwood or Prescott CPU can make the temperature under stress jump from 55 Celsius to as much as 76 Celsius, causing the BIOS to kick in and prevent overheating (the result being different on different motherboards).
2> Simply adding cheap case fans does bring the temperature down a little and adds a lot more noise, but most importantly it DOES NOT prevent CPU overheating. No matter how good the air circulating around the fan/heatsink, an improperly functioning fan/heatsink (i.e. one covered with dust) will simply have to be dusted or replaced.
3> Zalman! I've played with Thermaltake, Coolermaster and Zalman CPU fan/heatsinks and it always seems to me the Zalmans are not only the coolest, but the quietest as well. For point 1 above, where I was getting a 55 Celsius stressed temperature for a new CPU and 76 Celsius after a year or so in the dust, no matter how much I stress that machine now I cannot get the CPU temp above 43 Celsius, after fitting an AU$80 Zalman. Expensive, I know, but the results are SO worth it.
My experience in a nutshell after 9 years of business - take it or leave it.
Regards, Ed.
On 12/11/06, Edward Dekkers edward@tdcs.com.au wrote:
[...]
Thanks.
On Mon, 2006-12-11 at 13:05 -0500, Mike Chalmers wrote:
When I ran "yum update", when it started to install the updates it said something like cpu 0 overheating cpu 1 overheating. Then my screen went black for a second and the next thing I new is I was at the login screen.
I've noticed that yum seems terribly CPU-intensive. Much more so than other things which I'd expect to be doing even more work than the databasing that yum does (working out package dependencies).
Of course, it'd help if Linux wasn't so dependency mad. I've seen some damn peculiar ones (like KDE being dependent on having htdig installed). I can understand applications being dependent on standard system files, but it shouldn't go the other way around.
Ed Greshko wrote:
Mike Chalmers wrote:
I like Linux, a lot. I mean a lot. I like what it stands for (besides the corporations). But my CPU has never overheated. I am pretty sure that it is not my hardware. It could be a bug in Linux. The kernel could be sending incorrect frequencies to the hardware or something like that.
Hummmm.... Something like that happened to me years ago... Let me think. Oh, right.....
[snip]
It is a well-known fact that faulty software can overheat CPUs. You might investigate GIMPS, which has a test program intended specifically to ascertain whether it can run on a machine or will in fact cause it to overheat. If he somehow got Linux or some part of it into a tight loop due to a software defect, it might very well have caused the overheating.
Don't toss this aside lightly. Give it due consideration. It is likely not a fault of Linux, but don't just disregard this.
Mike
On Tue, 2006-12-12 at 07:58 +0800, Ed Greshko wrote:
I'm not much into cars, but my neighbor was, and he came over when he saw I was having trouble. He popped the hood, looked down at my battery cables, gave them a few smacks with a hammer and said "try again". The car started just fine. I looked at the guy and said "this never happened before". He looked me straight in the eye and said, "just because something worked just fine yesterday doesn't mean it will work the next day".
Reminds me of that story doing the rounds about Sony changing the Windows error messages for their VAIO laptops to use haikus. One of them being, "Yesterday it worked, today it does not. Windows is like that."
On Mon, 2006-12-11 at 20:15 -0600, Mike McCarty wrote:
It is a well-known fact that faulty software can overheat CPUs.
I was thinking about something like this the other day. I'm just finishing off putting a PC together for a friend, and have been using it to stress-test it. What I'd like is an automated stress-testing tool: not one that tries to break it, but one that tests that things that should be doable are, without having to manually check the serial ports, parallel ports, video ports, that the CPU can do all its math and get the answers right, etc. There's memtest86+ (though I've heard some detractors say it can give false results), but I haven't heard of anything to test other parts of the system.
On Mon, 2006-12-11 at 13:48 -0500, Mike Chalmers wrote:
If I did run update again and the cpu overheated, wouldn't that mean it was Linux?
Nope. You could find something other than yum running on Linux to stress your system to its limits just as much. The software makes use of what's there. If what's there has its own deficiencies (inadequate heatsink, dusty heatsink, heatsink fan running too slow, etc.), that's where the problem lies.
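If you want to demonstrate that without yum, you can load the CPU deliberately and watch the temperature. A sketch, assuming the ACPI thermal zone is readable (the zone name varies); start one busy loop per logical CPU:

    $ yes > /dev/null &                                    # first busy loop
    $ yes > /dev/null &                                    # second, for the other logical CPU
    $ watch cat /proc/acpi/thermal_zone/THRM/temperature   # watch it climb
    $ kill %1 %2                                           # stop the loops when done

If the temperature runs away under that load, the cooling is inadequate no matter what program produced the load.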
On 12/12/06, Tim ignored_mailbox@yahoo.com.au wrote:
[...]
I've noticed that yum seems terribly CPU-intensive. Much more so than other things which I'd expect to be doing even more work than the databasing that yum does (working out package dependencies).
It also does a lot of text processing, i.e. XML parsing.
Of course, it'd help if Linux wasn't so dependency mad.
Best solution I've seen yet.
I've seen some damn peculiar ones (like KDE being dependent on having htdig installed).
htdig does have system files (libraries).
I can understand applications being dependent on standard system files, but it shouldn't go the other way around.
On 12/11/06, Arthur Pemberton pemboa@gmail.com wrote:
[...]
My CPU just overheated again while I was using Linux. This time I was browsing the web. I am back in Windows to post this. It does not overheat in Windows. I am not saying Windows is better just because of this, and I am not saying that Linux causes CPUs to overheat more easily.
As I said earlier, I am in the process of getting a Zalman CNPS9500-LED Aero Flower Cooler Copper. I am going to take apart my computer and dust it thoroughly, which it needs. I am also getting a new power supply. So I will see if that works. Can't use Linux until then, though.
Mike
On 12/12/06, Mike Chalmers mikechalmers70@gmail.com wrote:
[...]
My CPU just overheated again while I was using Linux. This time I was browsing the web. I am back in Windows to post this. It does not overheat in Windows. [...]
I'm somewhat curious as to how you know this. I'm not certain that there are such warning systems built in... Of course, I haven't used Windows heavily in ages, so I really don't know what's going on these days.
On 12/11/06, Arthur Pemberton pemboa@gmail.com wrote:
[...]
My CPU just overheated again while I was using Linux. This time I was browsing the web. I am back in Windows to post this. It does not overheat in Windows. [...]
I'm somewhat curious as to how you know this. I'm not certain that there are such warning systems built in... Of course, I haven't used Windows heavily in ages, so I really don't know what's going on these days.
Well, in Windows it works fine (Windows stays on). In Linux I had the terminal open while I was browsing, and my computer beeped. I wondered what had happened, so I looked at the terminal to see if there was anything going on. And it said something like "CPU 0 temperature high" (or something like that), "CPU 1 temperature high". It kept repeating them. I was able to successfully reboot and log into Windows. Sorry I can't remember exactly what the terminal read.
Kind Regards, Mike
On Mon, 2006-12-11 at 21:41 -0500, Mike Chalmers wrote:
My CPU just overheated again while I was using Linux. This time I was browsing the web. I am back in Windows to post this. It does not overheat in Windows.
My guess would be that your system is on the verge of overheating, and there just happens to be a bit more of a system load, overall, on your installation of Linux than on Windows (other things happening in the background, as well).
The other guess might be that the warning is bogus. It's not overheating, but the threshold for the sensor is wrong.
On 12/11/06, Tim ignored_mailbox@yahoo.com.au wrote:
[...]
I believe that too.
Tim:
I've noticed that yum seems terribly CPU-intensive. Much more so than other things which I'd expect to be doing even more work than the databasing that yum does (working out package dependencies).
Arthur Pemberton:
It also does a lot of text processing, i.e. XML parsing.
I thought there was supposed to be a move towards SQL instead of XML that would improve things? Or so I seem to recall reading quite some time ago.
Of course, it'd help if Linux wasn't so dependency mad.
Best solution I've seen yet.
I've seen some damn peculiar ones (like KDE being dependent on having htdig installed).
htdig does have system files (libraries).
Why? It's an application. It's an independent application. You shouldn't have to install htdig unless you actually want to use htdig.
It reminds me of other stupidities I saw when doing a minimal installation for a headless server, without X, that installed various graphics libraries. What was I going to use to see them?
Sure, I can imagine that if I was going to install Apache, and draw pie charts, there might be some use for them. But let such applications draw them in as a dependency.
-- I need a new wheel for my car.
-- Sure, but it comes with a caravan...
-- I don't want a caravan!
-- You don't have to use it, you can just leave it parked in your garage.
-- I don't have the space.
-- You could get a bigger plot of land...
-- I don't want to. Can I get rid of the caravan?
-- Yes, but you'd also lose the new wheel.
Who came up with these dependency ideas? Goofy?
On Mon, 2006-12-11 at 21:58 -0500, Mike Chalmers wrote:
In Linux I had the terminal open while I was browsing, and my computer beeped. I wondered what had happened, so I looked at the terminal to see if there was anything going on. And it said something like "CPU 0 temperature high" (or something like that), "CPU 1 temperature high".
What is it that tells you that information? I wonder if it's the infamous lm_sensors, with its uncalibrated (*) sensor readings. On one of my systems I have some things with negative temperatures, and others well over boiling point. They're wrong, of course.
* It's not really its fault; there isn't any reliable way to calibrate the readings. You'd have to get voltage probes out and work out that when some rail reading says 4.6 for the sensor, but it's actually 5.1, a conversion needs to be done on the figures. Likewise with temperatures and other readings.
My advice would be to turn off lm_sensors, and turn on temperature warnings in your BIOS. At least the manufacturer ought to know how to read the sensors in the board that they built. I've got at least a couple of boxes set up that way: they beep if they get too warm, and will shut down if they get too hot. They also protect themselves even if the OS has crashed.
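For what it's worth, lm_sensors can correct a reading once you do know the true figure, via compute lines in /etc/sensors.conf. A sketch only; the chip name and the 5-degree offset are invented for illustration:

    chip "w83627hf-*"
        label temp2 "CPU Temp"
        # raw-to-real expression, then real-to-raw; @ is the raw reading
        compute temp2 @+5, @-5

Even then, the BIOS thresholds remain the numbers the board maker actually calibrated.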
On 12/12/06, Tim ignored_mailbox@yahoo.com.au wrote:
Tim:
I've noticed that yum seems terribly CPU-intensive. Much more so than other things which I'd expect to be doing even more work than the databasing that yum does (working out package dependencies).
Arthur Pemberton:
It also does a lot of text processing, i.e. XML parsing.
I thought there was supposed to be a move towards SQL instead of XML that would improve things? Or so I seem to recall reading quite some time ago.
Well, the repo data is still in XML.
Of course, it'd help if Linux wasn't so dependency mad.
Best solution I've seen yet.
I've seen some damn peculiar ones (like KDE being dependent on having htdig installed).
htdig does have system files (libraries).
Why? It's an application. It's an independent application. You shouldn't have to install htdig unless you actually want to use htdig.
Well, apparently kdebase needs the libraries. Or rather, according to the description of htdig: "ht://Dig is also used by KDE to search KDE's HTML documentation."
It reminds me of other stupidities I saw when doing a minimal installation for a headless server, without X, that installed various graphics libraries. What was I going to use to see them?
Did you install the system-config tools - that's what comes to mind immediately.
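You can ask rpm directly which installed packages claim to need a given package. A sketch; note that --whatrequires only matches the exact package name or its provides, so it can miss indirect chains:

    $ rpm -q --whatrequires htdig     # which installed packages require htdig?
    $ rpm -e --test htdig             # what would break if it were removed?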
On Tue, 2006-12-12 at 10:43 +0900, Edward Dekkers wrote:
[...]
Good enough to save... Do you think you'll get a commission from Zalman? ;-)
Regards, Les H
On 12/11/06, Tim ignored_mailbox@yahoo.com.au wrote:
[...]
I have an Intel D875PBZ. I have looked through the BIOS but have never thought to look for temperature warnings. I will have to check on that. Thanks.
On Mon, 2006-12-11 at 20:15 -0600, Mike McCarty wrote:
Ed Greshko wrote:
Mike Chalmers wrote:
I like Linux, a lot. I mean a lot. I like what it stands for (besides the corporations). But my CPU has never overheated. I am pretty sure that it is not my hardware. It could be a bug in Linux. The kernel could be sending incorrect frequencies to the hardware or something like that.
Hummmm.... Something like that happened to me years ago... Let me think. Oh, right.....
[snip]
It is a well-known fact that faulty software can overheat CPUs.
[snip]
I do believe that I can cause some processors to overheat; however, that software would not do anything useful. Once you start accessing memory, dealing with page faults and taking interrupts in the "real" environment, that kind of cycle-intensive processing sort of goes out the window. Yes, processors are heat-sensitive and can fail due to heat, but the new coolers are very good, and the testing process ensures that the weak ones probably don't make it into the field. Believe me when I say the tests, which bypass many of the protections built in, stress the processors quite heavily; I do know what I am talking about. A modern processor can sustain a hundred amps or so of instantaneous current, which is made possible by the multiple bypassing, board construction, path design and modern power supply design. In the test environment they are not as fully cooled and protected as they are in your PC, so if they fail, that is where and when. You can check out some of the board designs in Teradyne's public materials (you don't get details, but you can see the complexity of connecting to over 1000 pins via a zero-insertion-force socket that must operate thousands of times reliably on a factory floor).
The GIMPS test program, I would suspect, verifies such things as look-ahead, queue length, call/return overhead and so forth to give a good "frames per second presentation" versus a temperature test of the processor. Flops (floating-point operations per second) is the processing number important to graphics processing, and the hardware design will determine such things as how best to optimize the algorithms for performance. That is one aspect of real-time benchmarking. I can write the same 3-level nested loop at least 9 different ways. Some ways are most efficient on AMD systems, some are better on Intel, and some fit PowerPC better. It depends on lots of things outside the processor as well: memory speed, cache size and clock frequency, to name some of the better-known ones.
As to the statement that faulty software could cause it: perhaps, but it would be a real fluke, because as I said, lots of things get in the way in a real, normal system. Such a software bug would have to disable lots of hardware besides the processor to create that kind of havoc.
Maybe I should write a book....
Regards, Les H
As I said earlier, I am in the process of getting a Zalman CNPS9500-LED Aero Flower Cooler Copper. I am going to take apart my computer and dust it thoroughly, which it needs. I am also getting a new power supply. So I will see if that works. Can't use Linux until then, though.
That's the one I was talking about.
I couldn't remember the model of it but you just jogged my memory.
Seriously, if that doesn't fix it for you then I don't know what will. Excellent choice.
Make sure to use the (included) thermal grease. A customer of mine who also bought one "forgot", and returned the unit to me claiming it was faulty. After I fitted it for him (and charged appropriately), he called up later and said he was over the moon with it.
Oh, and make sure you point it the right way. It will in fact fit with the air flow in either direction, but make sure it jets out the back of the PC. Another common mistake.
Regards, Ed.
Tim wrote:
Sure, I can imagine that if I was going to install Apache, and draw pie charts, there might be some use for them.
What's htdig got to do with pie charts? It's a search engine for HTML files. KDE uses it to search HTML files, rather than re-implementing that code on their own.
But let such applications draw them in as a dependency.
Right... that's what happened. KDE's help system includes a search function. It uses htdig. htdig was drawn in as a dependency of KDE.
[...]
That's a silly analogy. You're installing KDE, which requires htdig to search the KDE documentation. The space you're complaining about is trivial compared to the rest of the KDE platform. If KDE didn't pull in htdig, they'd have to re-implement the functionality, and there's no reason to believe it'd save a significant amount of space.
Car analogy: you want a car, and you're complaining because it requires brakes. It's a PITA to replace brake pads, you say, but if you want the car to stop, that's what you're gonna do.
Who came up with these dependency ideas? Goofy?
People who understood how software functions created dependency tracking. Don't lose sleep over it; it's a good thing.
Greg Trounson wrote:
To add further credibility to the dodgy software theory, though: what hardware/firmware protection mechanisms do you know of that, upon detecting high CPU temperature, dump the user back to a login screen?
Returning to the login screen could be either a reboot or killing X, neither of which a sane BIOS would do in a CPU overheat event. Every BIOS I've ever seen shuts the machine down, which makes more sense if it's overheating.
It's more likely that the BIOS isn't able to slow the CPU sufficiently, and X or the desktop session segfaulted. Random shit tends to happen when CPUs get too hot, most of it causing applications to crash.
Mike McCarty wrote:
It is a well-known fact that faulty software can overheat CPUs.
No, it isn't. It is, however, a well-known fact that CPUs generate more heat when they're executing instructions than they do when they're idle. If the CPU overheats when it's busy, then the cooling system has failed and needs to be replaced. It is never a software fault. Software (possibly in the BIOS) *may* be able to slow the CPU down if it exceeds some defined temperature threshold, but that's a cross-your-fingers type of safeguard against a failed cooling system.
On Mon, 2006-12-11 at 20:03 -0800, Les wrote:
As to the statement that faulty software could cause it, perhaps, but it would be a real fluke, because as I said lots of things get in the way in a real normal system. Such a software bug would have to disable lots of hardware besides the processor to create that kind of havoc.
I'd say it's more likely that bad software might expose a flaw in a system, such as cooling that was never going to work adequately when the CPU was running at full steam, than that it would actually be able to cook it.
It's probably a fair bet that there are a lot of CPU coolers that are just inadequate.
Tim:
Sure, I can imagine that if I was going to install Apache, and draw pie charts, that there might be some use for them.
Gordon Messmer:
What's htdig got to do with pie charts?
Nothing, it was part of another conversation: A minimal, headless, X-less, server installation installing graphical library files.
But let such applications draw them in as a dependency.
Right... that's what happened. KDE's help system includes a search function. It uses htdig. htdig was drawn in as a dependency of KDE.
Fair enough, in that situation. Doesn't explain some others, though.
-- I need a new wheel for my car.
-- Sure, but it comes with a caravan...
-- I don't want a caravan!
-- You don't have to use it, you can just leave it parked in your garage.
-- I don't have the space.
-- You could get a bigger plot of land...
-- I don't want to. Can I get rid of the caravan?
-- Yes, but you'd also lose the new wheel.
That's a silly analogy. You're installing KDE, which requires htdig to search the KDE documentation.
Only if you're harping on about KDE. In the other case, there was no sane reason for installing some of what got included on a bare bones install. The silliness of the situation was precisely the point (alleged requirements, that really are not required).
The space you're complaining about is trivial compared to the rest of the KDE platform.
Ignoring KDE, for the moment, it all adds up. When this programmer decides that his program depends on another 20 meg program, and that programmer decides that his program depends on another 50 meg program, it all snowballs. Other distros can manage barebone installations on systems with really small drives and RAM requirements, a fraction of what Fedora considered to be a barebones system.
Even then, why is it a requirement to be able to search the documentation for KDE? Perhaps I mightn't want to search the documents. Perhaps I mightn't even want to install the documentation. Don't make what should be optional extras into unavoidable dependencies. KDE and Gnome are shockingly huge behemoths.
Car analogy: you want a car, and you're complaining because it requires brakes. It's a PITA to replace brake pads, you say, but if you want the car to stop, that's what you're gonna do.
Spurious argument. Buy a car, don't want a radio, don't need it, for anything, but we won't let you buy it without it. That's what I'm annoyed at: BOGUS requirements.
Les wrote:
On Mon, 2006-12-11 at 20:15 -0600, Mike McCarty wrote:
Ed Greshko wrote:
Mike Chalmers wrote:
I like Linux, a lot. I mean a lot. I like what it stands for (besides the corporations). But my CPU has never overheated. I am pretty sure that it is not my hardware. It could be a bug in Linux. The kernel could be sending incorrect frequencies to the hardware or something like that.
Hummmm.... Something like that happened to me years ago... Let me think. Oh, right.....
[snip]
It is a well-known fact that faulty software can overheat CPUs.
[snip]
I do believe that I can cause some processors to overheat, however, that software would not do anything useful. Once you start accessing memory,
Yes, it does. It searches for large prime numbers. That program has found several of the largest known primes in the last few years.
[snip]
The GIMPS testing program, I would suspect, verifies such things as look-ahead, queue length, call/return overhead and so forth to give a good "frames per second" presentation, rather than being a temperature test of the processor.
Actually, it doesn't. The point of it is simply to run as intensively as possible, and see whether it can complete. If the CPU overheats, then it doesn't work properly, that's all. It doesn't read any sensors.
[snip]
As to the statement that faulty software could cause it, perhaps, but it would be a real fluke, because as I said lots of things get in the way in a real normal system. Such a software bug would have to disable lots of hardware besides the processor to create that kind of havoc.
Eh? All it has to do is be constantly ready to run. If the process never blocks for I/O, then it would keep the CPU pretty busy.
Yes there would be breaks when other processes get time, possibly, due to virtual misses. But that is by no means guaranteed.
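To make that concrete, even a deliberately useless sketch like this one never blocks, and will hold one CPU at essentially 100% until it is killed:

/* Never blocks, never sleeps: pegs one core until killed. */
int main(void)
{
    for (;;)
        ;   /* spin forever */
}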
Mike
Gordon Messmer wrote:
Mike McCarty wrote:
It is a well-known fact that faulty software can overheat CPUs.
No, it isn't. It is, however, a well-known fact that CPUs generate more heat when they're executing instructions than they do when they're idle. If the CPU overheats when it's busy, then the cooling system has failed
What? You didn't actually read what I wrote, or you wouldn't write this. I did not write "Faulty software can overheat CPUs which have properly designed cooling which can handle worst case."
and needs to be replaced. It is never a software fault. Software
No, just cheaply designed. There are MANY processors which cannot run GIMPS, for example, because they have "good enough for normal case" cooling designs.
(possibly in the BIOS) *may* be able to slow the CPU down if it exceeds some defined temperature threshold, but that's a cross-your-fingers type of safeguard against a failed cooling system.
You completely missed my point. Dismissing this as "well, your hardware shouldn't do that" is
S-T-U-P-I-D
because, even though it means the hardware is marginal at best, it is a hint that
THERE MAY BE A DEFECT IN THE SOFTWARE.
And that should not be ignored.
My point wasn't that his or any CPU *should* be overheated by bad software. My point was that hints that there might be a defect in the software should not be ignored.
Please engage your reading ability before responding next time. You might actually respond to what was written.
Mike
Mike McCarty wrote:
Gordon Messmer wrote:
Mike McCarty wrote:
It is a well-known fact that faulty software can overheat CPUs.
No, it isn't. It is, however, a well-known fact that CPUs generate more heat when they're executing instructions than they do when they're idle. If the CPU overheats when it's busy, then the cooling system has failed
What? You didn't actually read what I wrote, or you wouldn't write this.
Sorry, that was uncalled for. The explanation is correct, but the tone is too strong. I apologize.
[snip]
Please engage your reading ability before responding next time. You might actually respond to what was written.
And this was snippy. Sorry.
I stand by the rest of the message. We should not ignore hints that the software has defects in it, even if "the hardware shouldn't do that".
Mike
Mike McCarty wrote:
No, just cheaply designed. There are MANY processors which cannot run GIMPS, for example, because they have "good enough for normal case" cooling designs.
Change the above to read "MANY systems" instead of "MANY processors". As you have noted, it is the bad cooling design that causes the problem...not the processor.
(possibly in the BIOS) *may* be able to slow the CPU down if it exceeds some defined temperature threshold, but that's a cross-your-fingers type of safeguard against a failed cooling system.
You completely missed my point. Dismissing this as "well, your hardware shouldn't do that" is
S-T-U-P-I-D
because, even though it means the hardware is marginal at best, it is a hint that
THERE MAY BE A DEFECT IN THE SOFTWARE.
So, you are saying that if I run the GIMPS torture test and my system overheats then there "may" be a defect in the GIMPS software?
And that should not be ignored.
My point wasn't that his or any CPU *should* be overheated by bad software. My point was that hints that there might be a defect in the software should not be ignored.
I think (hope) you mean *could*.
FWIW, I have a Dual Xeon 2.80GHz system with 2GB of RAM. It runs my DNS server, web server, etc. I also run VMware and normally have at least 2 virtual machines running: one an XP VM, the other some variant of Linux. At times I run a RHELv4 team with 4 systems plus the XP. During the daytime I limit GIMPS to running on 2 CPUs (hyper-threading enabled) and a nice value of 19. At night, a cron job runs and gives GIMPS all 4 CPUs and a negative nice value. On some summer days, since my wife is rather slim, I don't turn on the A/C until the room temp is above 30. My system is on 24/7. Never had a heat issue. But, I do check my fans and dust out the system on a regular basis. Need to do that when you live in Taipei and you have 3 house cats.
Sorry, I don't buy into your theory that overheating may be hinting at "software with defects". IMHO, if that were the case the virus writers/hackers of the world would be putting out "defective" code with the goal of burning up everyone's systems. :-)
Ed Greshko wrote:
Mike McCarty wrote:
No, just cheaply designed. There are MANY processors which cannot run GIMPS, for example, because they have "good enough for normal case" cooling designs.
Change the above to read "MANY systems" instead of "MANY processors". As
Point taken. The CPU itself is not the problem.
you have noted, it is the bad cooling design that causes the problem...not the processor.
[snip]
THERE MAY BE A DEFECT IN THE SOFTWARE.
So, you are saying that if I run the GIMPS torture test and my system overheats then there "may" be a defect in the GIMPS software?
Sorry, I guess I didn't make the exact context clear here. The OP claims that his machine seems to overheat when running Linux, but not Windows XP. Some seem to be saying "Well, you have an underdesigned machine. The problem isn't Linux. It's your box."
My point is that there may actually be some defect in Linux which is eating lots of CPU on his machine, and this possibility should be investigated, not cast aside with "well, your system cooling is just underdesigned, get a new one".
[snip]
- At night, a cron job runs and gives GIMPS all 4 CPUs and a negative
nice value. On some summer days, since my wife is rather slim, I don't turn on the A/C until the room temp is above 30. My system is on 24/7. Never
I hope you mean 30 degrees centigrade. I'd hate to think you turned on the A/C when the room temp was 32 degrees Fahrenheit. No wonder your machines don't overheat!
had a heat issue. But, I do check my fans and dust out the system on a regular basis. Need to do that when you live in Taipei and you have 3 house cats.
You need to do that if you have three cats no matter where you live.
Sorry, I don't buy into your theory that overheating may be hinting at "software with defects". IMHO, if that were the case the virus writers/hackers of the world would be putting out "defective" code with the goal of burning up everyone's systems. :-)
Eh...
Why should Linux be using more CPU than Windows?
Mike
On Tue, 2006-12-12 at 01:37 -0600, Mike McCarty wrote:
Les wrote:
On Mon, 2006-12-11 at 20:15 -0600, Mike McCarty wrote:
Ed Greshko wrote:
Mike Chalmers wrote:
I like Linux, a lot. I mean a lot. I like what it stands for (besides the corporations). But my CPU has never overheated. I am pretty sure that it is not my hardware. It could be a bug in Linux. The kernel could be sending incorrect frequencies to the hardware or something like that.
Hummmm.... Something like that happened to me years ago... Let me think. Oh, right.....
[snip]
It is a well-known fact that faulty software can overheat CPUs.
[snip]
I do believe that I can cause some processors to overheat, however, that software would not do anything useful. Once you start accessing memory,
Yes, it does. It searches for large prime numbers. That program has found several of the largest known primes in the last few years.
[snip]
The GIMPS testing program, I would suspect, verifies such things as look-ahead, queue length, call/return overhead and so forth to give a good "frames per second" presentation, rather than being a temperature test of the processor.
Actually, it doesn't. The point of it is simply to run as intensively as possible, and see whether it can complete. If the CPU overheats, then it doesn't work properly, that's all. It doesn't read any sensors.
[snip]
As to the statement that faulty software could cause it, perhaps, but it would be a real fluke, because as I said lots of things get in the way in a real normal system. Such a software bug would have to disable lots of hardware besides the processor to create that kind of havoc.
Eh? All it has to do is be constantly ready to run. If the process never blocks for I/O, then it would keep the CPU pretty busy.
Yes there would be breaks when other processes get time, possibly, due to virtual misses. But that is by no means guaranteed.
Mike
Hi, Mike,

There is a lot about Linux I don't know. There is a lot of everything I don't know, but when it comes to ICs, including processors and how they work, there are not too many people I have to take a back seat to. Not as a designer, but as a test engineer with over 20 years experience and as a programmer with over 30 years experience.

The most stressful program that you can write for a processor accesses only about 6 or 8 words of memory, and does that same act repeatedly with no branching. One is called the push loop: a register is pushed onto the stack using the push instruction, and this is followed by an instruction to decrement the program counter. Then the program counter is pointed at the push instruction and started. The system then pushes the register, and decrements the PC to point back at the push instruction. In effect the program is push: jump push. Similar microcode exists in some processors where a compare can be done to set up the flags, and a jump-on-non-zero back to it will enter an infinite loop. But these are special conditions, and require setting up the situation to that effect, and either one, if interrupted, has the option of a return to the following address, which will break the loop. They are tricks. But even these won't overheat a modern processor.

What you need is a special instruction sequence that cycles all register contents, like a checkerboard and inverse checkerboard, and does it rapidly. I won't even mention the required sequence because it is really hateful code. But it can be done, and a really artful programmer can probably even lock a processor with on-chip cache into an infinite internal loop doing that, which, coupled with turning off certain other processes and hardware, could do the trick. But today a reasonably efficient motherboard with reasonable cooling will handle even this dastardly trick for quite a while. And again, the thermal sensors will sense the condition and shut it down. There are some other hardware secrets that can prevent this type of action as well.
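Purely as a loose illustration of the checkerboard idea (this is nothing like a real test-floor pattern, and you must build it with -O0 or the compiler will simply delete the loop):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t a = 0xAAAAAAAAAAAAAAAAULL;   /* checkerboard */
    uint64_t b = ~a;                      /* inverse checkerboard */

    /* Flip every bit of both values on every instruction, touching
     * no memory inside the loop.  Build with -O0, or the compiler
     * will remove the whole thing. */
    for (uint64_t i = 0; i < (1ULL << 32); i++) {
        a ^= b;
        b ^= a;
        a ^= b;    /* XOR swap: a and b trade places each pass */
    }
    printf("%llx\n", (unsigned long long)(a ^ b));
    return 0;
}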
These new processors are energy hogs, and some of that is reduced by the new lower voltages, which drop the power curve in a linear fashion. Design cell size also will reduce transistor power and capacitive load, which is the real power hog in processor design. And modern design attempts to pattern the registers in such a way that the thermal cycle is managed as well as the electrical cycle. But still, when you swap a million transistors from one state to another in picoseconds, you have a tremendous parasitic power exchange, and thermal effects that result from that relative to the real estate and thermal constant of the packaging. It's a real handful. Intel, for all the flak they get, has done their homework on this stuff, and the task goes up geometrically with each new generation of component.
Doesn't answer your heat question, I know, but if the motherboard is designed correctly and the cooling is designed correctly, and they worked once, then the changes that typically cause problems are dust and dirt, mechanical failure (the fan, or loss of heat conductivity when the thermal compound dries up and falls away from the two mating surfaces of the heat sink), or something physically breaking, like the wires to the fan.
While software can affect the thermal load, the design criteria take that into account when the thermal package is designed, so under normal conditions, heating will not shut things down.
As to the prime number algorithm, it ranges over many pages of memory and really is not an intensive program for calculations. The hardest program is a Gaussian filter on a 3D array, because of the local nature of the operations and the number of operations done. Another good indicator of some of the effects is the response graphs given for FFTW (the Fastest Fourier Transform in the West). Check out how the flops fall off once the cache size is exceeded. I have considerable experience with FFTs as well.
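You can see the shape of that falloff with something as crude as this sketch (the sizes and pass counts are arbitrary choices, just to show the knee in the curve):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* Stream over arrays of growing size, keeping the total work
     * roughly constant; ns/element jumps once the working set
     * spills out of each level of cache. */
    for (size_t n = 1 << 10; n <= (1 << 23); n <<= 2) {
        double *a = malloc(n * sizeof *a);
        if (!a)
            return 1;
        for (size_t i = 0; i < n; i++)
            a[i] = 1.0;

        size_t passes = (1 << 26) / n;
        double sum = 0.0;
        clock_t t0 = clock();
        for (size_t p = 0; p < passes; p++)
            for (size_t i = 0; i < n; i++)
                sum += a[i];
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("%9lu doubles: %6.2f ns/element (sum=%g)\n",
               (unsigned long)n, 1e9 * secs / ((double)n * passes), sum);
        free(a);
    }
    return 0;
}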
Regards, Les H
Mike McCarty wrote:
So, you are saying that if I run the GIMPS torture test and my system overheats then there "may" be a defect in the GIMPS software?
Sorry, I guess I didn't make the exact context clear here. The OP claims that his machine seems to overheat when running Linux, but not Windows XP. Some seem to be saying "Well, you have an underdesigned machine. The problem isn't Linux. It's your box."
My point is that there may actually be some defect in Linux which is eating lots of CPU on his machine, and this possibility should be investigated, not cast aside with "well, your system cooling is just underdesigned, get a new one".
If it were the case that Linux had such a defect don't you think it would have been discovered by others? I doubt his hardware is so unique that it only affects his system.
I hope you mean 30 degrees centigrade. I'd hate to think you turned on the A/C when the room temp was 32 degrees Fahrenheit. No wonder your machines don't overheat!
Yes, 30C. Isn't the whole world metric?
Eh...
Why should Linux be using more CPU than Windows?
I don't know that it does or doesn't. But the question is too vague to even contemplate. Maybe in a multi-tasking environment Linux is more efficient in its scheduling. Maybe a Linux system spends less time in an idle/wait state while I/O is going on. What services/tasks are being performed when running Windows vs. Linux?
All I know is, there isn't some hidden defect in "Linux" causing every system running it to use more electricity and overheat. Then again, maybe Linux is a conspiracy cooked up by the oil companies to increase demand for their products. :-)
Mike Chalmers wrote:
My CPU just overheated again while I was using Linux. This time I was browsing the web. I am back in Windows to post this. It does not overheat in Windows. I am not saying Windows is better just because of this, and I am not saying that Linux causes CPUs to overheat more easily.
I missed the start of this thread, but I used to have heat problems on my laptop, even when seemingly not doing anything significant. I got "CPU modulated" messages on the console, like I think you are.
To cut a long story short:
* the base problem was my laptop - the CPU heatsink was clogged with dust. Once I took it to pieces and cleaned it, it was OK. What other people here have said about thermal problems being the underlying *cause* and software just triggering *symptoms* is true.
* it was often triggered by anacron running in the background, triggering the updatedb script (which is quite HD- and CPU-intensive). The same thing could probably happen with yum-updatesd too.
Tim
On Tue, 2006-12-12 at 03:39 -0600, Mike McCarty wrote:
Sorry, I guess I didn't make the exact context clear here. The OP claims that his machine seems to overheat when running Linux, but not Windows XP. Some seem to be saying "Well, you have an underdesigned machine. The problem isn't Linux. It's your box."
My point is that there may actually be some defect in Linux which is eating lots of CPU on his machine, and this possibility should be investigated, not cast aside with "well, your system cooling is just underdesigned, get a new one".
Actually, I'd go along with the last argument (improve the cooling). If your box is overheating, whether it's due to strange conditions or not, the fact that it *can* overheat is the important issue.
Therefore there's a chance that it can overheat during normal operating conditions, if one of your normal operating conditions makes strong use of the CPU. So you DO want to improve your cooling.
All that is presuming that it really is overheating, and not just that the over-temperature alarm, itself, is at fault.
Ed Greshko wrote:
I wrote
Why should Linux be using more CPU than Windows?
I don't know that it does or doesn't. But the question is too vague to even contemplate. Maybe in a multi-tasking environment Linux is more efficient
You apparently already know all the answers. Of course you don't need more data.
in its scheduling. Maybe a Linux system spends less time in an idle/wait state while I/O is going on. What services/tasks are being performed when running Windows vs. Linux?
All I know is, there isn't some hidden defect in "Linux" causing every
Who said anything about "hidden defect"?
Every defect has a first time that it gets discovered. As software gradually matures, the "easy" defects get taken care of, and the more elusive ones remain.
system running it to use more electricity and over heating systems. Then again, maybe Linux is conspiracy cooked up by the oil companies to increase demand for their products. :-)
This isn't worth responding to. The whole message has the appearance of a joke.
I suggest that he actually capture some information, and the world comes to pieces. Where I come from letting prejudice guide behavior is not considered good practice.
I suggest he run top or something similar, and actually *measure* CPU utilization. If it is within normal operation (as I suspect it is) then that's the end of the story.
If it isn't, then it's worth further investigation.
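To be concrete, here is essentially the whole measurement I am asking for, as a rough sketch of what top does anyway: sample the aggregate "cpu" line of /proc/stat twice and difference the counters:

#include <stdio.h>
#include <unistd.h>

/* Read the aggregate "cpu" line of /proc/stat.  The first fields,
 * per proc(5), are user nice system idle iowait irq softirq,
 * all in jiffies. */
static int sample(long long *busy, long long *total)
{
    long long v[7] = {0};
    FILE *f = fopen("/proc/stat", "r");
    int n;

    if (!f)
        return -1;
    n = fscanf(f, "cpu %lld %lld %lld %lld %lld %lld %lld",
               &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6]);
    fclose(f);
    if (n != 7)
        return -1;
    *total = v[0] + v[1] + v[2] + v[3] + v[4] + v[5] + v[6];
    *busy  = *total - v[3] - v[4];   /* everything but idle and iowait */
    return 0;
}

int main(void)
{
    long long b0, t0, b1, t1;

    if (sample(&b0, &t0))
        return 1;
    sleep(5);
    if (sample(&b1, &t1))
        return 1;
    printf("CPU busy over 5s: %.1f%%\n",
           100.0 * (double)(b1 - b0) / (double)(t1 - t0));
    return 0;
}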
Suggesting actually *collecting information* rather than *acting on knee-jerk prejudice* is usually considered prudent, where I come from.
But then, I come from a background where, if software fails, one loses customers, because they don't have a religious attachment to using it. People care about their reputations, and try to produce high-quality stuff, because they know the customers will vote with their feet.
Mike
Tim wrote:
Actually, I'd go along with the last argument (improve the cooling). If your box is overheating, whether it's due to strange conditions or not, the fact that it *can* overheat is the important issue.
Therefore there's a chance that it can overheat during normal operating conditions, if one of your normal operating conditions makes strong use of the CPU. So you DO want to improve your cooling.
All that is presuming that it really is overheating, and not just that the over-temperature alarm, itself, is at fault.
I give up. Linux is a religion. Linus is the Pope. No one can question it. Heretics are vilified and run out of town on a rail.
Here we have a chance to take 5 minutes to run top, and see what the CPU utilization is. It would literally take five minutes.
I am 99.44% sure that nothing would turn up. But what would it cost?
It costs actually considering the possibility that Linux might have one line of code which is not perfect.
That's a priori impossible. Because the Pope is never wrong, and the Holy Religion cannot be questioned. I am anathema.
So, an opportunity possibly to improve Linux passes us by because we can't take the time to run top once.
I'm reminded of the famous line supposedly spoken by a German commander when the invasion of Normandy began, and he couldn't inform German High Command: "Wir werden den Krieg verlieren, weil der Fuehrer einige Schlaftabletten genommen hat, und nicht zu stoeren ist." ("We are going to lose the war, because the Fuehrer has taken some sleeping pills and is not to be disturbed.")
It's futile to suggest actually to collect any data on Linux behavior if there is any threat it might reveal a wart on its nose, because the suggestion will be rejected out of hand.
Mike
This might be useful in terms of test tools. Cluster people care a lot about getting their systems reliable (if you had 512 PC's you would too) and there is some good info in that posting about stress tools including va-ctcs/cerberus and the like.
http://www.beowulf.org/archive/2006-October/016669.html
For I/O stressing take a look at
http://samba.org/ftp/tridge/dbench/
and the related tbench tool for network stress.
Alan wrote:
It is a well-known fact that faulty software can overheat CPUs.
Not for a PC it isn't. I'm not sure where you got that idea from. Things like "halt and catch fire" are urban legend.
Yes, for the PC. The fact that many PCs are designed with marginal, at best, cooling is no secret at all. They are not designed to run with maximum CPU utilization 24/7. The GIMPS program, for example, is known to cause machines to overheat, and comes with a test for that particular effect. Many machines cannot participate in the GIMPS because of that. I suggest you research this well-known phenomenon.
The HCF instruction is an old joke. I've been working with computers since 1969 or so, when I wrote machine language for the IBM 1401, and it was known then.
Mike
On Tue, 12 Dec 2006 06:38:15 -0600, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Yes, for the PC. The fact that many PCs are designed with marginal, at best, cooling is no secret at all. They are not designed to run with maximum CPU utilization 24/7. The GIMPS program, for example, is known to cause machines to overheat, and comes with a test for that particular effect. Many machines cannot participate in the GIMPS because of that. I suggest you research this well-known phenomenon.
How is this not a hardware problem? How would you propose that some random piece of software that needs to do a CPU intensive task avoid this problem? Why should any software developer feel compelled to write bad code to cover up bad hardware design?
Mike
Mike McCarty wrote:
I don't know that it does or doesn't. But the question is too vague to even contemplate. Maybe in a multi-tasking environment Linux is more efficient
You apparently already know all the answers. Of course you don't need more data.
I'm sorry... What part of "I don't know" translates to "you apparently already know all the answers"?
in its scheduling. Maybe a Linux system spends less time in an idle/wait state while I/O is going on. What services/tasks are being performed when running Windows vs. Linux?
All I know is, there isn't some hidden defect in "Linux" causing every
Who said anything about "hidden defect"?
OK, then let me rephrase to say "unknown defect". I can see how one may interpret "hidden" to mean it was done intentionally.
Every defect has a first time that it gets discovered. As software gradually matures, the "easy" defects get taken care of, and the more elusive ones remain.
And the elusive ones are "hidden" from view. But if one has no empirical evidence to suggest there is an unknown defect it is pure idle speculation.
system running it to use more electricity and overheat. Then again, maybe Linux is a conspiracy cooked up by the oil companies to increase demand for their products. :-)
This isn't worth responding to. The whole message has the appearance of a joke.
Indeed, your sarcasm detection ability is not impaired. You will note the smiley face which should have given you the clue that indeed no response was expected.
I suggest that he actually capture some information, and the world comes to pieces. Where I come from letting prejudice guide behavior is not considered good practice.
You are detecting prejudice where none exists.
I suggest he run top or something similar, and actually *measure* CPU utilization. If it is within normal operation (as I suspect it is) then that's the end of the story.
Define "normal".
If it isn't, then it's worth further investigation.
Suggesting actually *collecting information* rather than *acting on knee-jerk prejudice* is usually considered prudent, where I come from.
And where does "common sense" factor into the equation?
But then, I come from a background where, if software fails, one loses customers, because they don't have a religious attachment to using it. People care about their reputations, and try to produce high-quality stuff, because they know the customers will vote with their feet.
A bit OT, but one then wonders how Microsoft has done so well over the years. How many BSODs have you suffered?
I honestly have no idea where you are coming from. If the situation were reversed I would still be saying the same thing. One aspect of the O/S + applications is stressing the user's hardware more than the other. This is pushing the user's marginal hardware to the thermal breaking point. I couldn't give a (excuse the expression) rat's ass which O/S + application is putting his hardware over the threshold.
Occam's razor states that the explanation of any phenomenon should make as few assumptions as possible, eliminating, or "shaving off", those that make no difference in the observable predictions of the explanatory hypothesis or theory. In short, when given two equally valid explanations for a phenomenon, one should embrace the less complicated formulation.
So, claiming that a potential defect in some software is causing the overheating is just too complex to make sense. This is especially true when we are talking about a single occurrence. The less complicated answer is that his cooling is substandard.
FWIW, running mprime on my system in Taipei makes for a great foot warmer during the winter months. Homes here are generally not heated....kind of like Florida.
On Tue, 12 Dec 2006 01:42:43 -0600, Mike McCarty wrote:
You completely missed my point. Dismissing this as "well, your hardware shouldn't do that" is
S-T-U-P-I-D
because, even though it means the hardware is marginal at best, it is a hint that
THERE MAY BE A DEFECT IN THE SOFTWARE.
What the hell are you going on about?
One of the soak tests I used to do on PCs that I shipped out to people was running them at full CPU utilisation for 24 hours. If your machine gets too hot because it is doing something CPU intensive... then your machine is faulty. I don't know whether Linux is being too picky about the temperature and Windows isn't... but you are simply talking out of your arse here. Saying that software which fully utilises the CPU has a DEFECT is witless drivelling of the worst kind.
And that should not be ignored.
My point wasn't that his or any CPU *should* be overheated by bad software. My point was that hints that there might be a defect in the software should not be ignored.
Yes... they should... because you have no idea what you are talking about. Now if you want to discuss why you are seeing the warning on Linux and not on Windows, I suggest you start being more sensible.
On Tue, 12 Dec 2006 05:29:33 -0600, Mike McCarty wrote:
I give up. Linux is a religion. Linus is the Pope. No one can question it. Heretics are vilified and run out of town on a rail.
On the subject of READING OTHER PEOPLES' POSTS... perhaps you should do that first. This list is full of people discussing problems with Fedora.
Here we have a chance to take 5 minutes to run top, and see what the CPU utilization is. It would literally take five minutes.
I am 99.44% sure that nothing would turn up. But what would it cost?
It costs actually considering the possibility that Linux might have one line of code which is not perfect.
That's a priori impossible. Because the Pope is never wrong, and the Holy Religion cannot be questioned. I am anathema.
So, an opportunity possibly to improve Linux passes us by because we can't take the time to run top once.
I'm reminded of the famous line supposedly spoken by a German commander when the invasion of Normandy began, and he couldn't inform German High Command: "Wir werden den Krieg verlieren, weil der Fuehrer einige Schlaftabletten genommen hat, und nicht zu stoeren ist." ("We are going to lose the war, because the Fuehrer has taken some sleeping pills and is not to be disturbed.")
[snip rest of drivel and pseudo-intellectual nonsense]
It's futile to suggest actually to collect any data on Linux behavior if there is any threat it might reveal a wart on its nose, because the suggestion will be rejected out of hand.
You are an idiot. These people have tried to be reasonable with you and discuss what is wrong with your system. All you've done is make a colossal arse of yourself. Congratulations... especially on using your real name. It'll make for interesting googling by your workmates in a few years.
Mike McCarty wrote:
I give up. Linux is a religion. Linus is the Pope. No one can question it. Heretics are vilified and run out of town on a rail.
Here we have a chance to take 5 minutes to run top, and see what the CPU utilization is. It would literally take five minutes.
No, it isn't. In fact, I personally think that Linux is pretty awful. It's less awful than some alternatives, but awful all the same.
It costs actually considering the possibility that Linux might have one line of code which is not perfect.
Linux has lots of imperfect lines of code. We all know that. That's not the reason that we're trying to convince you that the problem is cooling.
It's futile to suggest actually to collect any data on Linux behavior if there is any threat it might reveal a wart on its nose, because the suggestion will be rejected out of hand.
There's no point in collecting data. The OP already told us that the system overheated when he was running an application that uses a lot of CPU time. Beyond that, Linux does lots of things that Windows doesn't, and will use more CPU resources. Off the top of my head: all the default cron jobs (possibly run at another time of day by anacron), and beagled on the Gnome desktop.
Looking around for software culprits will only distract the OP from the problem: the CPU isn't sufficiently cooled. That needs to be corrected.
We get that GIMPS has an application to test for poor cooling, already. You know what? That doesn't mean that "it's a well-known fact that faulty software can overheat CPUs". At best, it means that the GIMPS team doesn't want to be blamed when a system that's already failing goes tits up. If your system can't cool the CPU under sustained utilization, then it's only a matter of time until it fails. *Something* is going to make that CPU overheat.
What, exactly, do you suggest that software applications do? Should they be designed to sleep() enough to let the CPU get a breather? People don't buy a 2GHz CPU to get 1GHz of processing. Most systems already throttle back if they detect a cooling failure, but failing that... the CPU could fry. It happens. It's never the software's fault.
Overheating is always a failure of the cooling system.
I don't mean to be rude, but have you considered that Linux is not a religion, and that there is no conspiracy to suppress the wisdom that you bring, but that you're simply wrong on this one point?
Tim wrote:
Gordon Messmer:
What's htdig got to do with pie charts?
Nothing, it was part of another conversation: A minimal, headless, X-less, server installation installing graphical library files.
Oh. Sorry, I missed some connection. To address that, then:
# rpm -q --whatrequires `rpm -q --provides libpng` | grep -v '^no '
cups-libs-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups-libs` | grep -v '^no '
cups-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups` | grep -v '^no '
redhat-lsb-3.0-8.EL
So, there you go. "libpng" is needed by cups. "cups" is needed for LSB conformance. That's why you have graphics libraries on a headless server.
Only if you're harping on about KDE. In the other case, there was no sane reason for installing some of what got included on a bare bones install. The silliness of the situation was precisely the point (alleged requirements, that really are not required).
Some of these things are required to conform to some standard or other expectation of functionality, but by and large, they're components without which things just won't *work*.
I'm not sure if you're familiar with the way that linking works, but for the most part, a library which provides functionality that's optional to an application stops being optional as soon as the software is compiled. Once it's built with support for that library, the binary requires it to function. If the library is missing, the linker can't complete its task and the application can't initialize.
If you're into minimalism, you can go the LFS route, or maybe even Gentoo. Most of us, I think, prefer stable systems that offer predictable results. A bare installation is still somewhere in the neighborhood of six or seven hundred megs (of which around two hundred is manual pages, documentation, and string translations, vastly outsizing the "optional" dependencies), which is hardly unreasonable.
I understand that the source of some dependencies isn't obvious, but if it really eats you up, I suggest that you take the time to find out why they're required. The situation is hardly as dire as you seem to think.
Ignoring KDE, for the moment, it all adds up. When this programmer decides that his program depends on another 20 meg program, and that programmer decides that his program depends on another 50 meg program, it all snowballs.
Well, you have these alternative scenarios:
* The developer decides that he doesn't want his application to support whatever function. Users who aren't worried about two cents worth of disk space will probably appreciate the functionality more. ($80 / 260,000MB * 50MB == 1.6 cents)
* The developer implements the features himself. It takes him additional time, which means that bugs in the whole application take longer to fix, and new features take longer to come out. There's no guarantee that his implementation is any smaller than the original.
Other distros can manage barebone installations on systems with really small drives and RAM requirements, a fraction of what Fedora considered to be a barebones system.
It's not widely advertised, but you can do an installation with only the "core" group, rather than "Base", and come out with something around 300MB, IIRC. Fedora *probably* lets you customize things more than you believe.
Even then, why is it a requirement to be able to search the documentation for KDE? Perhaps I mightn't want to search the documents. Perhaps I mightn't even want to install the documentation. Don't make what should be optional extras into unavoidable dependencies.
Breaking the documentation (and documentation reader) out into its own package results in more maintenance for the Fedora team, and more work for users, either the ones who want it or the ones who don't. Requiring a 3MB dependency (Around 1/10 cent of disk) to avoid that extra work is a reasonable compromise, I think.
On Tue, 2006-12-12 at 12:58 +1030, Tim wrote:
On Mon, 2006-12-11 at 20:15 -0600, Mike McCarty wrote:
It is a well-known fact that faulty software can overheat CPUs.
I was thinking about something like this the other day. I'm just finishing off putting a PC together for a friend, and have been using it to stress test it. What I'd like is an automated stress testing tool, not one that tries to break it, but one that tests that the things that should be do-able are, without having to manually check the serial ports, parallel ports, video ports, that the CPU can do all its math and get the answers right, etc. There's memtest86+ (though I've heard some detractors say it can give false results), but I haven't heard of anything to test other parts of the system.
-- (Currently testing FC5, but still running FC4, if that's important.)
Don't send private replies to my address, the mailbox is ignored. I read messages from the public lists.
This disk has a system Burn-In test on it. There are also other tests as well.
http://www.ultimatebootcd.com/
FWIW, my computer died (well, shut down) on the weekend due to overheating. A can of Dust-Off and a few clouds of dust later, and the computer is back up and running 24/7.
Roo wrote:
On Tue, 12 Dec 2006 05:29:33 -0600, Mike McCarty wrote:
[snip]
It's futile to suggest actually to collect any data on Linux behavior if there is any threat it might reveal a wart on its nose, because the suggestion will be rejected out of hand.
You are an idiot. These people have tried to be reasonable with you and discuss what is wrong with your system. All you've done is make a colossal arse of yourself. Congratulations... especially on using your real name. It'll make for interesting googling by your workmates in a few years.
There is nothing wrong with my system. You are addressing the wrong person on that point. I am not the OP.
One (1) guy has had a problem with his machine overheating when running Linux, but seemingly not when running Windows.
I suggested that rather than brush this aside, we collect information which might, just might, albeit with low probability, reveal an as yet unknown defect in Linux.
Rather than anyone else saying "Well, it *is* highly unlikely, but it's worth a shot just to try it when we can." all here have pooh-poohed this idea. It would take 5 minutes to verify that Linux isn't doing something weird. Then the idea could be discarded. It is interesting, however, that no one seems to think that even finding out is worth the effort when it is essentially without cost.
Where I come from (telecomm industry), we always try to garner all information when we have a chance, if we can do so safely and with little cost.
One could argue that only one machine is having a problem, so it is extremely unlikely that Linux is the problem, but rather the machine. That argument is correct. One could also argue that this unique case is important because it may be the result of a defect in Linux which occurs only very infrequently, so it is important to find out its cause if so. That argument is also correct. They are not opposed to each other. The deciding factor is, IMO, the cost of collecting the information.
The absolute most probable thing we would find out is that nothing unusual is going on. But no one even wants to find out, even when it costs essentially nothing to do so.
If thinking we should take every opportunity to investigate unusual behavior for possible defects is being an "idiot", then every industry in the world which considers availability and reliability in software to be important is full of idiots. This includes telecomm, aviation, and power systems at least. And these idiots are the ones who guarantee that your telephone provides you with dialtone when you pick it up, and airplanes don't crash when you fly on them, and your lights come on when you flip the switch.
I'll side with the idiots on this one.
I'd like to take every possible opportunity to improve Linux when it comes. Even if the likelihood is very low, if the cost is nothing, then why not collect the information and know that everything is fine?
Mike
Roo wrote:
On Tue, 12 Dec 2006 01:42:43 -0600, Mike McCarty wrote:
You completely missed my point. Dismissing this as "well, your hardware shouldn't do that" is
S-T-U-P-I-D
because, even though it means the hardware is marginal at best, it is a hint that
THERE MAY BE A DEFECT IN THE SOFTWARE.
What the hell are you going on about?
See my other reply to you.
One of the soak tests I used to do on PCs that I shipped out to people was running them at full CPU utilisation for 24 hours. If your machine gets
Fine. His machine has, at best, marginal cooling. In fact, he may have clogged fins on his heatsinks. I haven't argued against that.
too hot because it is doing something CPU intensive... then your machine is faulty. I don't know whether Linux is being too picky about the
I haven't argued against that.
temperature and Windows isn't... but you are simply talking out of your arse here. Saying that software which fully utilises the CPU has a DEFECT is witless drivelling of the worst kind.
It is not, because Linux is not intended to utilize the CPU like that. My machine normally runs 4% to 8% utilization, most of that being in X.
Saying that about an arbitrary piece of software would be silly.
My suggestion is simply to find out whether his CPU is being consumed. If it is, then we can find out what is consuming it. If that is some application he runs which we know is CPU intensive, then that's what we expected.
But if it is something which is not supposed to be CPU intensive, we can investigate that.
What I suggest is finding out.
And that should not be ignored.
My point wasn't that his or any CPU *should* be overheated by bad software. My point was that hints that there might be a defect in the software should not be ignored.
Yes... they should... because you have no idea what you are talking about. Now if you want to discuss why you are seeing the warning on Linux and not on Windows, I suggest you start being more sensible.
I'll repeat this: I'm having no problem with my machine.
Next, I'll point out that what I've been suggesting is exactly that: Let's find out why the machine is overheating. The first thing I'd like to check is whether some application or kernel thread is consuming an unexpectedly large amount of CPU.
Yes, he has a problem getting rid of heat. But why can't we at least see whether X or some other software on his system has gone crazy? Likely not, but if so, then we have a chance to find out what it is and why it's crazy.
Also, he needs to see why his machine can't get rid of the heat.
Mike
Mike Wohlgemuth wrote:
On Tue, 12 Dec 2006 06:38:15 -0600, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Yes, for the PC. The fact that many PCs are designed with marginal, at best, cooling is no secret at all. They are not designed to run with maximum CPU utilization 24/7. The GIMPS program, for example, is known to cause machines to overheat, and comes with a test for that particular effect. Many machines cannot participate in the GIMPS because of that. I suggest you research this well-known phenomenon.
How is this not a hardware problem? How would you propose that some random piece of software that needs to do a CPU intensive task avoid this problem? Why should any software developer feel compelled to write bad code to cover up bad hardware design?
Of course it's a hardware problem. I haven't said otherwise.
I'd just like to find out what this "random" piece of software is. If we find that xinetd or X or clockapplet is consuming 94.6% of his CPU, is this not something worth knowing? OTOH, if we find that he is running GIMPS, well, we know that's going to heat up his system, and we can all forget about it.
Then he needs to clean out his case. If that doesn't work, then he just has a marginal design, which is not uncommon.
Mike
On 12/12/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Mike Wohlgemuth wrote:
On Tue, 12 Dec 2006 06:38:15 -0600, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Yes, for the PC. The fact that many PCs are designed with marginal, at best, cooling is no secret at all. They are not designed to run with maximum CPU utilization 24/7. The GIMPS program, for example, is known to cause machines to overheat, and comes with a test for that particular effect. Many machines cannot participate in the GIMPS because of that. I suggest you research this well-known phenomenon.
How is this not a hardware problem? How would you propose that some random piece of software that needs to do a CPU intensive task avoid this problem? Why should any software developer feel compelled to write bad code to cover up bad hardware design?
Of course it's a hardware problem. I haven't said otherwise.
I'd just like to find out what this "random" piece of software is. If we find that xinetd or X or clockapplet is consuming 94.6% of his CPU, is this not something worth knowing? OTOH, if we find that he is running GIMPS, well, we know that's going to heat up his system, and we can all forget about it.
Then he needs to clean out his case. If that doesn't work, then he just has a marginal design, which is not uncommon.
Mike
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!
Yes, the problem was caused by hardware. I cleaned out my computer, and it was pretty bad. Besides the cooling, the hardware is good. I am also waiting on some new hardware: a CPU fan and power supply, case fans and a new case. That should fix this problem.
The question is why it doesn't do this on Windows when I run CPU-intensive apps. I believe it was close to overheating in Windows, though.
On 12/13/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Roo wrote:
On Tue, 12 Dec 2006 05:29:33 -0600, Mike McCarty wrote:
[snip]
It's futile to suggest actually to collect any data on Linux behavior if there is any threat it might reveal a wart on its nose, because the suggestion will be rejected out of hand.
You are an idiot. These people have tried to be reasonable with you and discuss what is wrong with your system. All you've done is make a colossal arse of yourself. Congratulations... especially on using your real name. It'll make for interesting googling by your workmates in a few years.
There is nothing wrong with my system. You are addressing the wrong person on that point. I am not the OP.
You and the OP have the same first name, which is causing some ID problems.
One (1) guy has had a problem with his machine overheating when running Linux, but seemingly not when running Windows.
Not at all surprising; Windows doesn't always fully recognize the full capability of a CPU to begin with... but that is off topic.
I suggested that rather than brush this aside, we collect information which might, just might, albeit with low probability, reveal an as yet unknown defect in Linux.
Suggest a test case (or cases), and I will try it out on my test machine.
Rather than anyone else saying "Well, it *is* highly unlikely, but it's worth a shot just to try it when we can." all here have pooh-poohed this idea. It would take 5 minutes to verify that Linux isn't doing something weird. Then the idea could be discarded. It is interesting, however, that no one seems to think that even finding out is worth the effort when it is essentially without cost.
I'm willing to give this a try if you can tell me what to try.
Where I come from (telecomm industry), we always try to garner all information when we have a chance, if we can do so safely and with little cost.
Just to be clear, the OP has stated that he himself will not allow it to happen to him again for the sake of finding out further details.
One could argue that only one machine is having a problem, so it is extremely unlikely that Linux is the problem, but rather the machine. That argument is correct. One could also argue that this unique case is important because it may be the result of a defect in Linux which occurs only very infrequently, so it is important to find out its cause if so. That argument is also correct. They are not opposed to each other. The deciding factor is, IMO, the cost of collecting the information.
I am willing. The OP is not.
The absolute most probable thing we would find out is that nothing unusual is going on. But no one even wants to find out, even when it costs essentially nothing to do so.
Well, the thing is, running yum isn't an unusual thing; we run yum all the time. So I guess most of us, having run yum so frequently, feel very confident in the prognosis - as I'm sure you know, very little of the code behind yum is even capable of triggering such a problem by itself. But as you've stated... not impossible.
If thinking we should take every opportunity to investigate unusual behavior for possible defects is being an "idiot", then every industry in the world which considers availability and reliability in software to be important is full of idiots. This includes telecomm, aviation, and power systems at least. And these idiots are the ones who guarantee that your telephone provides you with dialtone when you pick it up, and airplanes don't crash when you fly on them, and your lights come on when you flip the switch.
I'll side with the idiots on this one.
I don't think your approach is idiotic, just seemingly unnecessary in this case, but again, feel free to suggest how I can get the data you require.
I'd like to take every possible opportunity to improve Linux when it comes. Even if the likelihood is very low, if the cost is nothing, then why not collect the information and know that everything is fine?
No reason not to.
Mike
You may also be interested to know that in another thread some dude has claimed that FC6 destroyed his LCD's ability to report its capability to accept digital input... somehow that seemed more relevant when I began typing.
Peace
I really shouldn't be joining in this thread again. It's obvious Mike can't quite get the hang of a simple concept...
Mike McCarty wrote:
One could argue that only one machine is having a problem, so it is extremely unlikely that Linux is the problem, but rather the machine. That argument is correct.
But guess what? You're the *only* person in this thread to make that argument. It's a "straw man" argument. Stop arguing against it.
One could also argue that this unique case is important because it may be the result of a defect in Linux which occurs only very infrequently, so it is important to find out its cause if so.
If, as appears *highly* probable, the system really was overheating, *whatever* Linux does it logically cannot be responsible.
Hardware should not overheat whatever software does, so whatever Linux is doing, the hardware should still not overheat. If it was happening in Windows, Windows would not be responsible. It *cannot* be a software bug!
There is, however, a related argument you could make. That it is important to find out exactly what is happening so that whoever is appropriate can stop it happening again. But for this we need to know such things as:
* where did the error messages come from;
* how hot does that processor actually get;
* are any temperature probes correctly configured.
But these are different questions, and we'd want to gather different data. The question "what was going on" is relatively unimportant -- we know the Original Poster was running yum, and we *know* that stresses the processor.
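For the temperature question, even something as crude as this would do (the /proc path is a guess -- ACPI zone names differ from machine to machine, and the lm_sensors tools are the more usual way to get at this):

#include <stdio.h>

int main(void)
{
    /* Adjust the zone name (THRM, THM0, TZ00, ...) to whatever
     * ls /proc/acpi/thermal_zone/ shows on the machine. */
    const char *path = "/proc/acpi/thermal_zone/THRM/temperature";
    char line[128];
    FILE *f = fopen(path, "r");

    if (!f) {
        perror(path);
        return 1;
    }
    if (fgets(line, sizeof line, f))
        fputs(line, stdout);   /* e.g. "temperature:  47 C" */
    fclose(f);
    return 0;
}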
If thinking we should take every opportunity to investigate unusual behavior for possible defects is being an "idiot", then every industry in the world which considers availability and reliability in software to be important is full of idiots. This includes telecomm, aviation, and power systems at least.
These industries have the sense to know that reliable software is dependent on reliable hardware. If the Original Poster's hardware is not reliable, crashing software is expected. Indeed, fly-by-wire aeroplanes are designed around the possibility that the computers will fail, and usually have multiple redundant "voting" systems to identify and neutralise rogue systems.
James.
On 12/13/06, James Wilkinson fedora@aprilcottage.co.uk wrote:
There is, however, a related argument you could make. That it is important to find out exactly what is happening so that whoever is appropriate can stop it happening again. But for this we need to know such things as:
- where did the error messages come from;
From BIOS to Kernel
- how hot does that processor actually get;
OP didn't think to check that
- are any temperature probes correctly configured.
Non-issue in this case, since lm_sensors was not involved.
Roo wrote:
Saying that software which fully utilises the CPU has a DEFECT is witless drivelling of the worst kind.
Mike McCarty wrote:
It is not, because Linux is not intended to utilize the CPU like that.
That is an extraordinary claim. Absolutely extraordinary.
Are you *seriously* suggesting that Linux computers are not meant to run processor-intensive code?
My machine normally runs 4% to 8% utilization, most of that being in X.
That is completely irrelevant.
Yes, in normal desktop usage most computers will have very low utilisation most of the time. That's because people buy computers to run programs. Some of these (media encoding, 3D games, scientific applications, compilers etc) take a lot of processor power. So it's a good idea that they should have as much processor time as they can use, which means the rest of the system should try not to take up too much of it.
Are you seriously suggesting that people should not do 3D games, media encoding, scientific applications or compiling on Linux?
James.
On Tue, Dec 12, 2006 at 04:07:53PM -0500, Mike Chalmers wrote:
The question is why it doesn't do this on Windows when I run CPU-intensive apps. I believe it was close to overheating in Windows, though.
I think Linux is simply more efficient at keeping the CPU absolutely busy -- a good thing when the hardware doesn't have some problem.
Gordon Messmer wrote:
Well, I had more-or-less convinced myself not to try to respond to this anymore. But I guess I'm weak. Here I am writing another message.
Mike McCarty wrote:
I give up. Linux is a religion. Linus is the Pope. No one can question it. Heretics are vilified and run out of town on a rail.
Here we have a chance to take 5 minutes to run top, and see what the CPU utilization is. It would literally take five minutes.
No, it isn't. In fact, I personally think that Linux is pretty awful. It's less awful than some alternatives, but awful all the same.
Well, it's not a religion to you. It's not a religion to all. But it is a religion to some, seemingly.
We agree on the assessment of Linux.
The cost is actually considering the possibility that Linux might have one line of code which is not perfect.
Linux has lots of imperfect lines of code. We all know that. That's not the reason that we're trying to convince you that the problem is cooling.
I don't need convincing. I know he has a cooling problem. I simply suggest that if Linux is causing his machine to overheat, we should take five minutes to ask him to run top and see what, if any, process is eating a lot of CPU. Here's the message I sent which started the controversy:
---------------------------------------------------------------
It is a well-known fact that faulty software can overheat CPUs. You might investigate the GIMPS, which has a test program intended specifically to ascertain whether it can run on a machine, or will in fact cause it to overheat. If he somehow got Linux or some part of it into a tight loop due to a software defect, it might very well have caused the overheating.
Don't toss this aside lightly. Give it due consideration. It is likely not a fault of Linux, but don't just disregard this.
---------------------------------------------------------------
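(For anyone who hasn't met it: GIMPS is the Great Internet Mersenne Prime Search, and its Linux client, mprime, has a dedicated torture-test mode. Assuming you have it installed, something like

$ mprime -t

is about the heaviest sustained CPU load you can legitimately apply, which is what makes it a useful cooling test.)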
Nowhere do I state that he doesn't have a hardware problem. He does. I know that. The hardware problem may be one he can correct, by cleaning air ports, or freeing a stuck fan, or cleaning out lint from the fins of a heatsink. It may be one which he *cannot* fix because his cooling is underdesigned.
I happen to own a machine which cannot run GIMPS. It overheats. It is underdesigned. What can I say?
However, it behooves us to find out what is causing his CPU to heat up. It may be that he is running GIMPS or something like that which is known to eat CPUs. Almost surely it is not a defect in Linux or X or other parts of the system which have run away. But *something* is happening.
On my machine, I run 4% to 10% utilization, usually mostly X.
It's futile to suggest actually collecting any data on Linux behavior if there is any threat it might reveal a wart on its nose, because the suggestion will be rejected out of hand.
There's no point in collecting data. The OP already told us that the system overheated when he was running an application that uses a lot of CPU time. Beyond that, Linux does lots of things that Windows doesn't,
AAAK! I missed that post, or missed that line of a post. Because that is the only thing I've been promoting. Find out what is causing the heat.
I was sure it would not be Linux or something distributed with it, but thought we ought at least to find out what the source of the heat was.
Ok, then we have already covered that.
Sorry.
and will use more CPU resources. Off the top of my head: all the default cron jobs (possibly run at another time of day by anacron), and beagled on the Gnome desktop.
Yes. I ran some benchmarks on my machine when I first got it. I tried running the Dhrystone benchmark under MSDOS (with DJGPP), Linux, and Windows. Not surprisingly, it ran fastest with MSDOS, Windows came in an almost undetectable second, and Linux was some ways behind.
Looking around for software culprits will only distract the OP from the problem: the CPU isn't sufficiently cooled. That needs to be corrected.
Well, if he already knows the culprit, then that has been done.
[snip]
Overheating is always a failure of the cooling system.
I never said otherwise. Of course it is.
I don't mean to be rude, but have you considered that Linux is not a religion, and that there is no conspiracy to suppress the wisdom that you bring, but that you're simply wrong on this one point?
See above.
Mike
On 12/13/06, Mike Chalmers mikechalmers70@gmail.com wrote:
Yes, the problem was caused by hardware. I cleaned out my computer, and it was pretty bad. Besides the cooling, the hardware is good. I am also waiting on some new hardware: a CPU fan and power supply, case fans, and a new case. That should fix this problem.
The question is why it doesn't do this on Windows when I run CPU-intensive apps. I believe it was close to overheating in Windows, though.
This is working on the assumption that Windows is capable of telling that such a thing is occurring without the use of third-party software. I'm also guessing you're not doing anything CPU-intensive on Windows.
Yes, the problem was caused by hardware. I cleaned out my computer, and it was pretty bad. Besides the cooling, the hardware is good. I am also waiting on some new hardware: a CPU fan and power supply, case fans, and a new case. That should fix this problem.
The question is why it doesn't do this on Windows when I run CPU-intensive apps. I believe it was close to overheating in Windows, though.
Are you sure Windows will tell you that the CPU is being slowed down? If Windows does not have the correct hooks to tell you it is happening, then the only way you can tell is if you know how long something is supposed to take, and it takes longer than expected (by a fair margin).
Linux did not notify you of this slowdown on older kernels, so it would not surprise me if Windows did not notify you of it happening either. If so, the only way to know is with a known CPU-intensive timing test that runs longer than several minutes (it usually takes 1-5 minutes to get hot enough to declock).
I believe the thing slowing down the CPU is entirely within the CPU itself: it is fully automatic and not under software control from the OS; at best the OS can get notified of it happening.
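(A rough-and-ready way to check for that, just a sketch and nothing distribution-specific, is to time a fixed chunk of pure CPU work several times in a row:

$ time dd if=/dev/zero bs=1M count=512 2>/dev/null | md5sum

Hashing half a gigabyte of zeros is almost pure CPU work. If the wall-clock time creeps up noticeably on the later runs, once the chip has warmed up, the CPU is probably being declocked.)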
Roger
James Wilkinson wrote:
I really shouldn't be joining in this thread again. It's obvious Mike can't quite get the hang of a simple concept...
The simple concept which I missed (according to Gordon Messmer) is that the software which is consuming the CPU is already known. I missed that.
What I've been proposing is to find what piece of software was consuming the CPU. Somehow, I missed the fact that we *already* know what piece of software is doing so, and it is normal operation for that.
I have never claimed that he doesn't have a hardware problem with heat.
[snip]
If, as appears *highly* probable, the system really was overheating, *whatever* Linux does it logically cannot be responsible.
That is not an a priori fact. In this case, everyone but me seemingly knew that the culprit had already been identified, which put what I wrote off base. Sorry about that.
Hardware should not overheat whatever software does, so whatever Linux
I never claimed that it should.
is doing, the hardware should still not overheat. If it was happening in Windows, Windows would not be responsible. It *cannot* be a software bug!
It can be a symptom of a software defect, which was my only claim.
There is, however, a related argument you could make. That it is important to find out exactly what is happening so that whoever is appropriate can stop it happening again. But for this we need to know such things as:
- where did the error messages come from;
- how hot does that processor actually get;
- are any temperature probes correctly configured.
But these are different questions, and we'd want to gather different data. The question "what was going on" is relatively unimportant -- we know the Original Poster was running yum, and we *know* that stresses the processor.
But, unfortunately, I missed that point, and caused this big furor.
For which I apologize.
If thinking we should take every opportunity to investigate unusual behavior for possible defects is being an "idiot", then every industry in the world which considers availability and reliability in software to be important is full of idiots. This includes telecomm, aviation, and power systems at least.
These industries have the sense to know that reliable software is dependent on reliable hardware. If the Original Poster's hardware is not reliable, crashing software is expected. Indeed, fly-by-wire aeroplanes are designed around the possibility that the computers will fail, and usually have multiple redundant "voting" systems to identify and neutralise rogue systems.
Re-read that in light of the fact that I missed that someone had already identified the software culprit. That was the only point I ever was trying to make: That we should identify the software eating the CPU, and ascertain whether that was considered to be normal behavior for that software.
That I missed this fact, is an oversight on my part, for which I apologize.
Mike
Arthur Pemberton wrote:
On 12/13/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Roo wrote:
On Tue, 12 Dec 2006 05:29:33 -0600, Mike McCarty wrote:
[snip]
You are an idiot. These people have tried to be reasonable with you and discuss what is wrong with your system. All you've done is make a colossal arse of yourself. Congratulations... especially on using your real name. It'll make for interesting googling by your workmates in a few years.
There is nothing wrong with my system. You are addressing the wrong person on that point. I am not the OP.
You and the OP have the same first name, that is causing some ID problems
Fair enough.
[snip]
I'm willing to give this a try if you can tell me what to try.
Wonderful attitude. But you seemingly don't have the problem.
Where I come from (telecomm industry), we always try to garner all information when we have a chance, if we can do so safely and with little cost.
...
The absolute most probable thing we would find out is that nothing unusual is going on. But no one even wants to find out, even when it costs essentially nothing to do so.
Well, the thing is, running yum isn't an unusual thing; we run yum all the time. So I guess most of us, having run yum so frequently, feel very confident in the prognosis - as I'm sure you know, very little of the code behind yum is even capable of triggering such a problem by itself. But as you've stated... not impossible.
This is the point I missed, and I apologize for that. I missed the fact that the software which caused high CPU utilization had been identified, and that this was a known and accepted part of its behavior.
My point, and the furor I've caused, have been because I missed that fact. I have been insisting that we find what software was eating the CPU, and see whether that was expected behavior.
That has already been done, and I missed that. I apologize.
If thinking we should take every opportunity to investigate unusual behavior for possible defects is being an "idiot", then every industry in the world which considers availability and reliability in software to be important is full of idiots. This includes telecomm, aviation, and power systems at least. And these idiots are the ones who guarantee that your telephone provides you with dialtone when you pick it up, and airplanes don't crash when you fly on them, and your lights come on when you flip the switch.
I'll side with the idiots on this one.
I don't think your approach is idiotic, just seemingly unnecessary in this case, but again, feel free to suggest how I can get the data you require.
We already have it. Sorry for the fuss.
...
You may also be interested to know that in another thread some dude has claimed that FC6 destroyed his LCD's ability to report its capability to accept digital input... somehow that seemed more relevant when I began typing.
I've seen that thread. I find it difficult to believe that FC6 actually did any damage. Monitors since the EGA came out have been protected against that. Furthermore, it's the flyback which is prone to death. He seems to have an interface problem, or so he claims.
Mike
James Wilkinson wrote:
Roo wrote:
Saying that software which fully utilises the CPU has a DEFECT is witless drivelling of the worst kind.
Mike McCarty wrote:
It is not, because Linux is not intended to utilize the CPU like that.
That is an extraordinary claim. Absolutely extraordinary.
It seems perfectly natural to me.
Are you *seriously* suggesting that Linux computers are not meant to run processor-intensive code?
Read what I wrote very carefully.
I said that *LINUX* is not intended to utilize the CPU like that. I said nothing about applications which might be written for Linux. I said that the OS itself is not supposed to be CPU intensive.
That seems perfectly natural and unremarkable to me.
My machine normally runs 4% to 8% utilization, most of that being in X.
That is completely irrelevant.
No, it is not, because it shows that Linux itself is not CPU intensive. I stated above that Linux was not intended to be CPU intensive, and I provided evidence that it actually is not CPU intensive.
[snip]
Are you seriously suggesting that people should not do 3D games, media encoding, scientific applications or compiling on Linux?
I seriously said what I said, but not what you read into it.
Mike
Mike Chalmers wrote:
On 12/12/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
[...]
Of course it's a hardware problem. I haven't said otherwise.
I'd just like to find out what this "random" piece of software is. If we find that xinetd or X or clockapplet is consuming 94.6% of his
See below.
CPU, is this not something worth knowing? OTOH, if we find that he is running GIMPS, well, we know that's going to heat up his system, and we can all forget about it.
Then he needs to clean out his case. If that doesn't work, then he just has a marginal design, which is not uncommon.
Yes, the problem was caused by hardware. I cleaned out my computer, and it was pretty bad. Besides the cooling, the hardware is good. I am also waiting on some new hardware: a CPU fan and power supply, case fans, and a new case. That should fix this problem.
The question is why it doesn't do this on Windows when I run CPU-intensive apps. I believe it was close to overheating in Windows, though.
Well, that I dunno. However, to find out (and to beat a very dead horse) run top.
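(To spare the horse: one snapshot in batch mode is enough, assuming the stock procps top:

$ top -b -n 1 | head -20

The biggest CPU consumers are listed first, so you can see at a glance whether it's yum, X, or something else eating the processor.)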
And, I am sorry for missing the fact that some had already identified the software which is CPU intensive. I apologize for the oversight.
Mike
On 12/13/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
James Wilkinson wrote:
Roo wrote:
Saying that software which fully utilises the CPU has a DEFECT is witless drivelling of the worst kind.
Mike McCarty wrote:
It is not, because Linux is not intended to utilize the CPU like that.
That is an extraordinary claim. Absolutely extraordinary.
It seems perfectly natural to me.
Are you *seriously* suggesting that Linux computers are not meant to run processor-intensive code?
Read what I wrote very carefully.
I said that *LINUX* is not intended to utilize the CPU like that. I said nothing about applications which might be written for Linux. I said that the OS itself is not supposed to be CPU intensive.
That seems perfectly natural and unremarkable to me.
Apparent disambiguation issues here between Linux as an OS and Linux as the kernel.
On 12/13/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Arthur Pemberton wrote:
On 12/13/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
[snip]
I don't think your approach is idiotic, just seemingly unnecessary in this case, but again, feel free to suggest how I can get the data you require.
We already have it. Sorry for the fuss.
Apology accepted upon receipt of my symbolic Heineken in the mail.
You may also be interested to know that in another thread some dude has claimed that FC6 destroyed his LCD's ability to report its capability to accept digital input... somehow that seemed more relevant when I began typing.
I've seen that thread. I find it difficult to believe that FC6 actually did any damage. Monitors since the EGA came out have been protected against that. Furthermore, it's the flyback which is prone to death. He seems to have an interface problem, or so he claims.
Mike
He has also, supposedly, unsubscribed from the list.
James Wilkinson:
- are any temperature probes correctly configured.
Arthur Pemberton:
Non-issue in this case, since lm_sensors was not involved.
Do we know that? I don't believe he said what was warning him about the overheating. It could have been an alarm triggered by it.
Tim:
Overheating is always a failure of the cooling system.
Mike McCarty:
I never said otherwise. Of course it is.
Well, you were certainly arguing in that direction with my post that you got your knickers in a twist over. I'd argued that whatever the reason for the overheating (faulty, or normal use of the computer), the cooling needed fixing. You were *only* being concerned about what the computer was doing at the time.
It doesn't matter *why*, at that stage. If the computer is overheating, the computer *is* overheating. *That* has to be addressed, and the computer has to be built so it doesn't overheat, even if run at 100% continuously.
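(If you want to test that claim, one crude sketch, and only if you trust your cooling enough to try it, is to pin the CPU and watch what happens:

$ yes > /dev/null &    # start one of these per CPU/core
$ # watch the temperature however your box reports it (BIOS, lm_sensors, ACPI)
$ kill %1              # then clean up the background job

A machine with adequate cooling should be able to sit at 100% like that indefinitely.)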
Arthur Pemberton wrote:
On 12/13/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Arthur Pemberton wrote:
You may also be interested to know that in another thread some dude has claimed that FC6 destroyed his LCD's ability to report its capability to accept digital input... somehow that seemed more relevant when I began typing.
I've seen that thread. I find it difficult to believe that FC6 actually did any damage. Monitors since the EGA came out have been protected against that. Furthermore, it's the flyback which is prone to death. He seems to have an interface problem, or so he claims.
Mike
He has also, supposedly, unsubscribed from the list.
I thought that was weird.
He was getting what I thought were some good suggestions.
Mike
On 12/12/06, Tim ignored_mailbox@yahoo.com.au wrote:
James Wilkinson:
- are any temperature probes correctly configured.
Arthur Pemberton:
Non-issue in this case, since lm_sensors was not involved.
Do we know that? I don't believe he said what was warning him about the overheating. It could have been an alarm triggered by it.
As I have stated since the beginning, this happened to me very recently. This warning comes from the kernel. Only after the problem did I set up lm_sensors.
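(Anyone who wants to confirm where such a warning came from on their own box can search the kernel log; this assumes a stock setup where dmesg still holds the messages:

$ dmesg | grep -i 'temperature\|throttl'

Anything that turns up there was emitted by the kernel itself, independent of lm_sensors or any userspace monitor.)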
Gordon Messmer:
What's htdig got to do with pie charts?
Tim:
Nothing, it was part of another conversation: A minimal, headless, X-less, server installation installing graphical library files.
Gordon Messmer:
Oh. Sorry, I missed some connection. To address that, then:
# rpm -q --whatrequires `rpm -q --provides libpng` | grep -v '^no '
cups-libs-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups-libs` | grep -v '^no '
cups-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups` | grep -v '^no '
redhat-lsb-3.0-8.EL
So, there you go. "libpng" is needed by cups. "cups" is needed for LSB conformance. That's why you have graphics libraries on a headless server.
But CUPS isn't *needed* on a PC. Sure, you might want it if you're printing. But there's going to be a plethora of boxes that don't need to print. A headless HTTP server, or mail server, or news server, etc., just being some of them. They won't need to print, or be printed to.
Requiring CUPS is a bogus requirement. Maybe CUPS should be a requirement if you're including printing support, but it shouldn't be, otherwise.
CUPS, being just one example of this mentality. We could "require" BIND, because Linux does need to resolve hostnames, but we don't (don't require *it* as the solution).
Some people, and I don't mean you, but those putting together what they think is a minimal install list, have a strange idea about what minimal and required actually mean.
But disregarding minimalism, there's still plenty of situations where a rather extensive installation won't need various things considered to be "required", but actually aren't. And that bloats out installations to the point that we needlessly have to get multi-gigabyte hard drives to do moderately basic installations.
Tim wrote:
Tim:
Overheating is always a failure of the cooling system.
Mike McCarty:
I never said otherwise. Of course it is.
Well, you were certainly arguing in that direction with my post that you got your knickers in a twist over.
It may have seemed so to you, but I was not. I won't suggest that you re-read my posts, because that would be boring. But my only point along the way was to find out whether his CPU was overheating when running Linux because X had gone crazy, or a kernel thread had gone bonkers. I was suggesting that after finding what, if any, unexpected software behavior was taking place, we then try to find the hardware problem.
Somehow, lack of diligence, overzealous delete key, or what, I missed the fact that yum was running and that it is known that yum puts a stress on the CPU. So, I went off on a tangent, since nobody seemed to care why the CPU was being eaten.
I apologize again.
I'd argued that whatever the reason
for the overheating (faulty, or normal use of the computer), the cooling needed fixing. You were *only* being concerned about what the computer was doing at the time.
I was concerned that (as it seemed to me) no one was concerned with why the software load was causing heat rise in the CPU. The fact that the heat rose /very much/ is a hardware problem. That it rose /at all/ is an indicator that some software is using inordinate amounts of CPU.
And I was concerned that nobody seemed to care. All they wanted to do was clean out the case and make the machine run.
It doesn't matter *why*, at that stage. If the computer is overheating, the computer *is* overheating. *That* has to be addressed, and the computer has to be built so it doesn't overheat, even if run at 100% continuously.
It does matter why if the reason the machine temp is rising is a kernel thread gone amok.
Certainly, if possible, any physical problems with the machine also need to be addressed. If the machine is just underdesigned, then there is not much which can be done, otherwise the stuck fan needs replacement, or the dirt needs removal, or etc.
Mike
On 12/12/06, Tim ignored_mailbox@yahoo.com.au wrote:
Gordon Messmer:
What's htdig got to do with pie charts?
Tim:
Nothing, it was part of another conversation: A minimal, headless, X-less, server installation installing graphical library files.
Gordon Messmer:
Oh. Sorry, I missed some connection. To address that, then:
# rpm -q --whatrequires `rpm -q --provides libpng` | grep -v '^no '
cups-libs-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups-libs` | grep -v '^no '
cups-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups` | grep -v '^no '
redhat-lsb-3.0-8.EL
So, there you go. "libpng" is needed by cups. "cups" is needed for LSB conformance. That's why you have graphics libraries on a headless server.
But CUPS isn't *needed* on a PC. Sure, you might want it if you're printing. But there's going to be a plethora of boxes that don't need to print. A headless HTTP server, or mail server, or news server, etc., just being some of them. They won't need to print, or be printed to.
Requiring CUPS is a bogus requirement. Maybe CUPS should be a requirement if you're including printing support, but it shouldn't be, otherwise.
CUPS, being just one example of this mentality. We could "require" BIND, because Linux does need to resolve hostnames, but we don't (don't require *it* as the solution).
Some people, and I don't mean you, but those putting together what they think is a minimal install list, have a strange idea about what minimal and required actually mean.
But disregarding minimalism, there's still plenty of situations where a rather extensive installation won't need various things considered to be "required", but actually aren't. And that bloats out installations to the point that we needlessly have to get multi-gigabyte hard drives to do moderately basic installations.
To the best of my memory, the Fedora devs are aware of this issue, and _do_ address any such thing of significant size that can be addressed.
That's going on (no such threads currently active) over on the fedora-devel list. If you have such a situation that can and should be addressed, please email the dev list about it, and/or create a bugzilla report. I am fairly certain that it will be addressed.
Peace.
On Wed, 13 Dec 2006 10:42:54 +1030 Tim ignored_mailbox@yahoo.com.au wrote:
But disregarding minimalism, there's still plenty of situations where a rather extensive installation won't need various things considered to be "required", but actually aren't.
For grins, try to remove "cdrecord" on a fedora system. If I have no CDRW drive, you'd think I could live without it, but the dependencies say it also has to remove lots of bits of gnome and even decides it has to remove compiz (I was trying to get rid of excess baggage in a Xen guest OS when I discovered this one).
Bet you never expected CD recording software was required to make your windows wiggle :-).
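(For what it's worth, you can preview that cascade without actually removing anything; rpm's test mode just reports what would break:

# rpm -e --test cdrecord

That prints every package whose dependencies would fail. "yum remove cdrecord" should show the full transitive set, at the cost of a scarier prompt.)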
Tim wrote:
Gordon Messmer:
What's htdig got to do with pie charts?
Tim:
Nothing, it was part of another conversation: A minimal, headless, X-less, server installation installing graphical library files.
Gordon Messmer:
Oh. Sorry, I missed some connection. To address that, then:
# rpm -q --whatrequires `rpm -q --provides libpng` | grep -v '^no '
cups-libs-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups-libs` | grep -v '^no '
cups-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups` | grep -v '^no '
redhat-lsb-3.0-8.EL
So, there you go. "libpng" is needed by cups. "cups" is needed for LSB conformance. That's why you have graphics libraries on a headless server.
But CUPS isn't *needed* on a PC. Sure, you might want it if you're printing. But there's going to be a plethora of boxes that don't need to print. A headless HTTP server, or mail server, or news server, etc., just being some of them. They won't need to print, or be printed to.
CUPS isn't necessary to print, either. It is a convenient solution, but others exist.
Requiring CUPS is a bogus requirement. Maybe CUPS should be a requirement if you're including printing support, but it shouldn't be, otherwise.
Possibly. Other print solutions exist.
CUPS, being just one example of this mentality. We could "require" BIND, because Linux does need to resolve hostnames, but we don't (don't require *it* as the solution).
Exactly. OTOH, trying to make everything work with every possible print driver is not necessarily a good goal.
Some people, and I don't mean you, but those putting together what they think is a minimal install list, have a strange idea about what minimal and required actually mean.
I suppose a "minimal required system" would be the kernel, the init RAM disc, and tmpfs for /tmp. Not a very usable system. But when you go beyond this, then you get into "minimal required to do <x>" where <x> is some desired function. Everyone seems to have a different set of <x> to put into there. I don't know of any objective means to ascertain what <x> must contain.
But disregarding minimalism, there's still plenty of situations where a rather extensive installation won't need various things considered to be "required", but actually aren't. And that bloats out installations to the point that we needlessly have to get multi-gigabyte hard drives to do moderately basic installations.
I was amazed when I installed FC2. I didn't think I selected all that much to install. It was about 7 Gig.
All systems seem enormously bloated to me these days. But I started with computers when 4K of RAM was considered a lot.
Mike
Arthur Pemberton wrote:
On 12/13/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Arthur Pemberton wrote:
On 12/13/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
[snip]
I don't think your approach is idiotic, just seemingly unnecessary in this case, but again, feel free to suggest how I can get the data you require.
We already have it. Sorry for the fuss.
Apology accepted upon receipt of my symbolic Heineken in the mail.
You may also be interested to know that in another thread some dude has claimed that FC6 destroyed his LCD's ability to report its capability to accept digital input... somehow that seemed more relevant when I began typing.
I've seen that thread. I find it difficult to believe that FC6 actually did any damage. Monitors since the EGA came out have been protected against that. Furthermore, it's the flyback which is prone to death. He seems to have an interface problem, or so he claims.
Mike
He has also, supposedly, unsubscribed from the list.
I think it was on the monitor with the DVI problem where the marbles were picked up in midplay. It could be this thread too! I deleted quite a bit of the postings.
If I read correctly, the OP for this thread is a regular here and has posted before. He just has a problem with overheating or Linux programs being overly stringent or getting the wrong information from the system. It could be a physical limitation on the system.
Jim
On Tue, 2006-12-12 at 18:35 -0600, Mike McCarty wrote:
Tim wrote:
Gordon Messmer:
What's htdig got to do with pie charts?
Tim:
Nothing, it was part of another conversation: A minimal, headless, X-less, server installation installing graphical library files.
Gordon Messmer:
Oh. Sorry, I missed some connection. To address that, then:
# rpm -q --whatrequires `rpm -q --provides libpng` | grep -v '^no '
cups-libs-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups-libs` | grep -v '^no '
cups-1.1.22-0.rc1.9.11
# rpm -q --whatrequires `rpm -q --provides cups` | grep -v '^no '
redhat-lsb-3.0-8.EL
So, there you go. "libpng" is needed by cups. "cups" is needed for LSB conformance. That's why you have graphics libraries on a headless server.
But CUPS isn't *needed* on a PC. Sure, you might want it if you're printing. But there's going to be a plethora of boxes that don't need to print. A headless HTTP server, or mail server, or news server, etc., just being some of them. They won't need to print, or be printed to.
CUPS isn't necessary to print, either. It is a convenient solution, but others exist.
Requiring CUPS is a bogus requirement. Maybe CUPS should be a requirement if you're including printing support, but it shouldn't be, otherwise.
Possibly. Other print solutions exist.
CUPS, being just one example of this mentality. We could "require" BIND, because Linux does need to resolve hostnames, but we don't (don't require *it* as the solution).
Exactly. OTOH, trying to make everything work with every possible print driver is not necessarily a good goal.
Some people, and I don't mean you, but those putting together what they think is a minimal install list, have a strange idea about what minimal and required actually mean.
I suppose a "minimal required system" would be the kernel, the init RAM disc, and tmpfs for /tmp. Not a very usable system. But when you go beyond this, then you get into "minimal required to do <x>" where <x> is some desired function. Everyone seems to have a different set of <x> to put into there. I don't know of any objective means to ascertain what <x> must contain.
But disregarding minimalism, there's still plenty of situations where a rather extensive installation won't need various things considered to be "required", but actually aren't. And that bloats out installations to the point that we needlessly have to get multi-gigabyte hard drives to do moderately basic installations.
I was amazed when I installed FC2. I didn't think I selected all that much to install. It was about 7 Gig.
All systems seem enormously bloated to me these days. But I started with computers when 4K of RAM was considered a lot.
Mike
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);} This message made from 100% recycled bits. You have found the bank of Larn. I can explain it for you, but I can't understand it for you. I speak only for myself, and I am unanimous in that!
If you can't do it with 12bits and 4K what's the point. If you haven't used paper tape, how do you realize what a program really is?
Les wrote:
On Tue, 2006-12-12 at 18:35 -0600, Mike McCarty wrote:
I was amazed when I installed FC2. I didn't think I selected all that much to install. It was about 7 Gig.
All systems seem enormously bloated to me these days. But I started with computers when 4K of RAM was considered a lot.
If you can't do it with 12bits and 4K what's the point. If you haven't used paper tape, how do you realize what a program really is?
I can't remember anymore what the word size of the IBM 1401 was. I recall we had 4K words of memory. I *think* it was 12 bits, because we used punch cards, which have 12 rows on them. So I think 12 bits was right. And it was true core, little magnetic doughnuts. We had no assembler and certainly no compiler for that machine. We did machine language.
Mike
Mike McCarty wrote:
Les wrote:
On Tue, 2006-12-12 at 18:35 -0600, Mike McCarty wrote:
I was amazed when I installed FC2. I didn't think I selected all that much to install. It was about 7 Gig.
All systems seem enormously bloated to me these days. But I started with computers when 4K of RAM was considered a lot.
If you can't do it with 12bits and 4K what's the point. If you haven't used paper tape, how do you realize what a program really is?
I can't remember anymore what the word size of the IBM 1401 was. I recall we had 4K words of memory. I *think* it was 12 bits, because we used punch cards, which have 12 rows on them. So I think 12 bits was right. And it was true core, little magnetic doughnuts. We had no assembler and certainly no compiler for that machine. We did machine language.
Mike
Is that the sound of a dinosaur I hear roaring in the distance ;-) *just kidding, couldn't resist*
Tim:
Some people, and I don't mean you, but those putting together what they think is a minimal install list, have a strange idea about what minimal and required actually mean.
Mike McCarty:
I suppose a "minimal required system" would be the kernel, the init RAM disc, and tmpfs for /tmp. Not a very usable system. But when you go beyond this, then you get into "minimal required to do <x>" where <x> is some desired function. Everyone seems to have a different set of <x> to put into there. I don't know of any objective means to ascertain what <x> must contain.
To me, minimal means just that... "minimal." It boots, no more, no less. We have package selection to add to that, where wanted. Start with minimal, add X, or add webserving, and so on.
Tim:
But disregarding minimalism, there's still plenty of situations where a rather extensive installation won't need various things considered to be "required", but actually aren't.
Tom Horsley:
For grins, try to remove "cdrecord" on a fedora system. If I have no CDRW drive, you'd think I could live without it, but the dependencies say it also has to remove lots of bits of gnome and even decides it has to remove compiz (I was trying to get rid of excess baggage in a Xen guest OS when I discovered this one).
I've not tried that one, but yes, I've tried to remove a few things that had some strange cascade effects. Please, let's have the OS separate from user interfaces and from applications and from other things...
Bet you never expected CD recording software was required to make your windows wiggle :-).
;-)
On Tue, 2006-12-12 at 18:05 -0600, Arthur Pemberton wrote:
As I have stated since the beginning, this happened to me very recently. This warning comes from the kernel. Only after the problem did I set up lm_sensors.
I'm curious about how it's generated. Does the CPU say it's too hot, via a flag? Or is a variable's value counted and converted to a temperature?
Arthur Pemberton:
He has also, supposedly, unsubscribed from the list.
Mike McCarty:
I thought that was weird.
He was getting what I thought were some good suggestions.
Same here. Doctor shopping, perhaps? (Didn't like the answers, wanted something else.) I could almost have protested a "Linux burns up PCs troll", but it didn't have enough hallmarks for that.
As someone else, I've forgotten who, said: the last thing you did with something before it broke isn't necessarily the thing that broke it. My background is in engineering AND servicing. I've seen weird faults, and spontaneous failures. I'm usually quite good at diagnosing things that were not causes of failures, because it's just not sensible that they were. The original poster's assertion about the cause just doesn't make sense.
On 12/14/06, Tim ignored_mailbox@yahoo.com.au wrote:
On Tue, 2006-12-12 at 18:05 -0600, Arthur Pemberton wrote:
As I have stated since the beginning, this happened to me very recently. This warning comes from the kernel. Only after the problem did I set up lm_sensors.
I'm curious about how it's generated. Does the CPU say it's too hot, via a flag? Or is a variable's value counted and converted to a temperature?
I'd assume the former... the term ACPI comes to mind upon reading your question. But I lack much detail on the topic.
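(On many 2.6-era systems you can at least see what ACPI thinks the temperature is, assuming the platform exposes a thermal zone:

$ cat /proc/acpi/thermal_zone/THRM/temperature

The zone name varies by BIOS; THRM is just a common one. But as I said, the details are beyond me.)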
On Tue, 2006-12-12 at 16:07 -0500, Mike Chalmers wrote:
On 12/12/06, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Mike Wohlgemuth wrote:
On Tue, 12 Dec 2006 06:38:15 -0600, Mike McCarty Mike.McCarty@sbcglobal.net wrote:
Yes, the problem was caused by hardware. I cleaned out my computer, and it was pretty bad. Besides the cooling, the hardware is good. I am also waiting on some new hardware: a CPU fan and power supply, case fans, and a new case. That should fix this problem.
The question is why it doesn't do this on Windows when I run CPU-intensive apps. I believe it was close to overheating in Windows, though.
Efficiency.
This is an issue that has cropped up from time to time since I started using Linux. The way Linux uses hardware is much more efficient than Windows. This is why some processes will work much better on Linux than Windows and why you can do more with less hardware.
In the past I read an article about actual testing that showed the difference running Linux made on hardware. It was also a great way to find poor hardware choices.
Here is a link on hardware issues.
http://www.drsdigitalimaging.com/PDF/Ultra%208%20Framing%20Camera%202.pdf
It would be interesting to see the actual wait times that the processor has under the various loads. An intensive application (media conversion) may still leave the processor waiting for hardware.
Maybe someone has some actual data on this.
Hadders wrote:
Mike McCarty wrote:
Les wrote:
On Tue, 2006-12-12 at 18:35 -0600, Mike McCarty wrote:
I was amazed when I installed FC2. I didn't think I selected all that much to install. It was about 7 Gig.
All systems seem enormously bloated to me these days. But I started with computers when 4K of RAM was considered a lot.
If you can't do it with 12bits and 4K what's the point. If you haven't used paper tape, how do you realize what a program really is?
I can't remember anymore what the word size of the IBM 1401 was. I recall we had 4K words of memory. I *think* it was 12 bits, because we used punch cards, which have 12 rows on them. So I think 12 bits was right. And it was true core, little magnetic doughnuts. We had no assembler and certainly no compiler for that machine. We did machine language.
Mike
Is that the sound of a dinosaur I hear roaring in the distance ;-) *just kidding, couldn't resist*
Better start running; I *love* small furry mammals for lunch :-)
Mike
Hadders wrote:
Is that the sound of a dinosaur I hear roaring in the distance ;-) *just kidding, couldn't resist*
I'm surprised Gene Heskitt hasn't chimed in yet...
"Ones? You had Ones? We just had Zeroes in my day!"
(With apologies to Scott and Dilbert.)
Mike
On Wednesday 13 December 2006 23:18, Mike McCarty wrote:
Hadders wrote:
Is that the sound of a dinosaur I hear roaring in the distance ;-) *just kidding, couldn't resist*
I'm surprised Gene Heskitt hasn't chimed in yet...
Well, as dinosaurs go, I guess I sorta qualify. But now if I could just get folks to spell my name right... I've certainly done the dinosaur roar a time or two in my time, occasionally even getting someone's attention whom I might not otherwise have. Funny part is, the last time I did that, to the outgoing Chief at a TV station I'd been sent to babysit while the new owners were getting all their ducks in a row, it must have taken me 10 minutes to run down. When I was done, he said no one had ever explained it that way before, and then asked me if he could ask why the hell wasn't I teaching someplace?
You see, here it seems I'm the eternal newbie.
There I was in my element and could back it up with references if need be.
The subject in that instance was the proper termination of a cable. Any cable would have sufficed, but in this case it was a video cable he had double-terminated after I'd removed it on a previous visit. Because the cable lengths were accidentally correct, the tee off the middle of it was essentially black and white, as the color subcarrier stuff was nicely nulled out by the mis-termination-generated echo.
"Ones? You had Ones? We just had Zeroes in my day!"
Nuh uh, all we had was naughts back then, and sometimes it took quite a few of them to enumerate something. Kernels of corn in a wagonload comes to mind. Doug, rest his soul, had yet to give us the answer of 42 by at least 4 decades.
Gene Heskett wrote:
On Wednesday 13 December 2006 23:18, Mike McCarty wrote:
Hadders wrote:
Is that the sound of a dinosaur I hear roaring in the distance ;-) *just kidding, couldn't resist*
I'm surprised Gene Heskitt hasn't chimed in yet...
Well, as dinosaurs go, I guess I sorta qualify. But now if I could just get folks to spell my name right... I've certainly done the dinosaur
Sorry 'bout that. Of course, I meant Jean Heskett :-)
[...]
Mike
On Thu, 2006-12-14 at 06:06 +0000, Arthur Pemberton wrote:
On 12/14/06, Tim ignored_mailbox@yahoo.com.au wrote:
On Tue, 2006-12-12 at 18:05 -0600, Arthur Pemberton wrote:
As I have stated since the beginning, this happened to me very recently. This warning comes from the kernel. Only after the problem did I set up lm_sensors.
I'm curious about how it's generated. Does the CPU say it's too hot, via a flag? Or is a variable's value counted and converted to a temperature?
I'd assume the former... the term ACPI comes to mind upon reading your question. But I lack much detail on the topic.
The CPU has a watchdog A/D converter internally, with one or more thermal diodes. The transfer curves are processor- and technology-dependent, so the system has to know some "stuff" to set up calibration for what is read. I don't know the software side of it, sorry.
Regards, Les H
On Thursday 14 December 2006 01:52, Mike McCarty wrote:
Gene Heskett wrote:
On Wednesday 13 December 2006 23:18, Mike McCarty wrote:
Hadders wrote:
Is that the sound of a dinosaur I hear roaring in the distance ;-) *just kidding, couldn't resist*
I'm surprised Gene Heskitt hasn't chimed in yet...
Well, as dinosaur's go, I guess I sorta qualify. But now if I could just get folks to spell my name right... I've certainly done the dinosaur
Sorry 'bout that. Of course, I meant Jean Heskett :-)
Who's she?
It makes the CPU work indeed...
yum-updatesd takes up 100% CPU all the time, though the process runs at a low priority, since it gives away CPU time if other processes need it.
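(That's easy enough to check with procps ps, assuming the daemon is running; the NI column shows its niceness and %CPU its actual usage:

$ ps -o pid,ni,pcpu,comm -C yum-updatesd

A positive nice value there means it yields the processor to anything running at normal priority.)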
But if you set your BIOS to overclock, yum would indeed try to use 100% at the overclock rate you chose. In this light, you need to watch out when running yum on an overclocked processor. It may indeed smoke your CPU! Make sure you know what you are doing when overclocking!
But that's what you always need to do, so don't blame yum.