Which info retrieved by sdparm or hdparm would indicate that the drive is reaching its EOL?
I'm getting a lot of these errors:
[12152.032068] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[12152.032082] ata4.00: failed command: READ DMA
[12152.032101] ata4.00: cmd c8/00:08:27:14:3e/00:00:00:00:00/e1 tag 0 dma 4096 in
[12152.032101]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[12152.032108] ata4.00: status: { DRDY }
[12152.032123] ata4: hard resetting link
[12152.592066] ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[12152.601327] ata4.00: configured for UDMA/100
[12152.601344] ata4.00: device reported invalid CHS sector 0
[12152.601370] sd 3:0:0:0: [sdc]
[12152.601375] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[12152.601381] sd 3:0:0:0: [sdc]
[12152.601385] Sense Key : Aborted Command [current] [descriptor]
[12152.601393] Descriptor sense data with sense descriptors (in hex):
[12152.601397]         72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
[12152.601416]         00 00 00 00
[12152.601427] sd 3:0:0:0: [sdc]
[12152.601432] Add. Sense: No additional sense information
[12152.601439] sd 3:0:0:0: [sdc] CDB:
[12152.601442] Read(10): 28 00 01 3e 14 27 00 00 08 00
[12152.601460] end_request: I/O error, dev sdc, sector 20845607
[12152.601506] ata4: EH complete
On Sat, 24 Nov 2012 01:34:59 -0700 JD jd1008@gmail.com wrote:
Which info retrieved by sdparm or hdparm would indicate that the drive is reaching its EOL?
You want the SMART health test; see man smartctl. I suspect --health is what you are looking for?
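For example (a sketch only, not from the original message; the device name is just an illustration):

# smartctl -H /dev/sdc           # quick overall-health verdict
# smartctl -t long /dev/sdc      # kick off an extended offline self-test
# smartctl -l selftest /dev/sdc  # read the self-test log once it has finished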
On 11/24/2012 04:13 AM, Alan Cox wrote:
On Sat, 24 Nov 2012 01:34:59 -0700 JD jd1008@gmail.com wrote:
Which info retrieved by sdparm or hdparm would indicate that the drive is reaching its EOL?
You want the SMART health test; see man smartctl. I suspect --health is what you are looking for?
Interesting! Running:

# smartctl -H /dev/sdc
smartctl 5.43 2012-06-30 r3573 [i686-linux-3.6.6-1.fc16.i686] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Yet, running smartctl -x shows a few pre-failure values. To wit:
ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR--  117   099   006    -    131606848
  5 Reallocated_Sector_Ct   PO--CK  100   100   036    -    3
  7 Seek_Error_Rate         POSR--  083   060   030    -    209089036
 10 Spin_Retry_Count        PO--C-  100   100   097    -    0
I am having serious problems with disk writes because the disks are going to standby, which the OS (F16) is not handling correctly: it reports write failures, especially during shutdown, where it is unable to write the journal to the disk and reports journal commit failures:

messages:Nov 20 20:51:25 localhost kernel: [46916.385586] journal commit I/O error
messages:Nov 20 20:51:25 localhost kernel: [46916.385998] journal commit I/O error
messages:Nov 20 21:14:03 localhost kernel: [48274.947764] journal commit I/O error
messages-20121118:Nov 16 23:42:05 localhost kernel: [ 4543.391840] journal commit I/O error
Yet this does not happen if I shut down after doing ls -R /<mount point> on the sleeping drives: that wakes them up, and then I can shut down with no more journal write errors.
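(A sketch of the same wake-up without walking the filesystem, assuming /dev/sdc is the sleeping drive: a single raw read such as

# dd if=/dev/sdc of=/dev/null bs=512 count=1

is enough to spin the drive back up.)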
So, I have been looking for ways to prevent the drives from going to sleep. The only thing I have found is the -s option in the hdparm manpage, but it has the "VERY DANGEROUS" phrase tacked onto it, without any explanation of what the danger actually is.
But I took a chance. I first woke up the drive by writing a small junk file to a dir in one of its mounted partitions, then ran:
# hdparm -C /dev/sdc
/dev/sdc:
 drive state is: active/idle

# hdparm -s 0 /dev/sdc

/dev/sdc:
 spin-up: setting power-up in standby to 0 (off)
 HDIO_DRIVE_CMD(powerup_in_standby) failed: Input/output error
I tried it a few times and am still getting the I/O error, but the rest of normal operation is OK, as long as the disk is not in standby when a write command is issued.
On 24.11.2012 16:36, JD wrote:
So, I have been looking for ways to prevent the drives from going to sleep. The only thing I have found is the -s option in the hdparm manpage, but it has the "VERY DANGEROUS" phrase tacked onto it, without any explanation of what the danger actually is.
On my permanently running machines, for years now, in /etc/rc.d/rc.local:

/sbin/hdparm -B 255 /dev/sda
/sbin/hdparm -B 255 /dev/sdb
/sbin/hdparm -B 255 /dev/sdc
/sbin/hdparm -B 255 /dev/sdd
/sbin/hdparm -S 0 /dev/sda
/sbin/hdparm -S 0 /dev/sdb
/sbin/hdparm -S 0 /dev/sdc
/sbin/hdparm -S 0 /dev/sdd
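A quick way to confirm the settings took effect (a sketch only, with the same device names assumed):

# hdparm -B /dev/sda    # with no value, reads back the APM level; 255 means APM is disabled
# hdparm -C /dev/sda    # reports the current power state (active/idle, standby, sleeping)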
Nothing is worse for disks than spin-up/spin-down, and on a modern operating system it is unlikely that they would not be woken up very often anyway.
Generally: if I get any messages like yours, I throw the disk away and put in a new one. These days, the price of a disk compared with the importance of reliable data makes that a clear decision.
On 11/24/2012 10:36 AM, JD wrote:
On 11/24/2012 04:13 AM, Alan Cox wrote:
On Sat, 24 Nov 2012 01:34:59 -0700 JD jd1008@gmail.com wrote:
Which info retrieved by sdparm or hdparm would indicate that the drive is reaching its EOL?
You want the SMART health test; see man smartctl. I suspect --health is what you are looking for?
Interesting! Running:

# smartctl -H /dev/sdc
smartctl 5.43 2012-06-30 r3573 [i686-linux-3.6.6-1.fc16.i686] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Yet, running smartctl -x shows a few pre-failure values. To wit:
ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR--  117   099   006    -    131606848
  5 Reallocated_Sector_Ct   PO--CK  100   100   036    -    3
  7 Seek_Error_Rate         POSR--  083   060   030    -    209089036
 10 Spin_Retry_Count        PO--C-  100   100   097    -    0
I am having serious problems with disk writes because the disks are going to standby, which the OS (F16) is not handling correctly: it reports write failures, especially during shutdown, where it is unable to write the journal to the disk and reports journal commit failures:

messages:Nov 20 20:51:25 localhost kernel: [46916.385586] journal commit I/O error
messages:Nov 20 20:51:25 localhost kernel: [46916.385998] journal commit I/O error
messages:Nov 20 21:14:03 localhost kernel: [48274.947764] journal commit I/O error
messages-20121118:Nov 16 23:42:05 localhost kernel: [ 4543.391840] journal commit I/O error
Yet this does not happen if I shut down after doing ls -R /<mount point> on the sleeping drives: that wakes them up, and then I can shut down with no more journal write errors.
So, I have been looking for ways to prevent the drives from going to sleep. The only thing I have found is the -s option in the hdparm manpage, but it has the "VERY DANGEROUS" phrase tacked onto it, without any explanation of what the danger actually is.
But I took a chance. I first woke up the drive by writing a small junk file to a dir in one of its mounted partitions, then ran:
# hdparm -C /dev/sdc
/dev/sdc:
 drive state is: active/idle

# hdparm -s 0 /dev/sdc

/dev/sdc:
 spin-up: setting power-up in standby to 0 (off)
 HDIO_DRIVE_CMD(powerup_in_standby) failed: Input/output error
I tried it a few times and am still getting the I/O error, but the rest of normal operation is OK, as long as the disk is not in standby when a write command is issued.
I have a similar situation, but after updating to the 3.6.7 kernel it seems to have resolved itself. On an odd note, nothing shows up in /var/log/messages, so I backed up just in case.
On 11/24/2012 09:36 AM, JD wrote:
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Yet, running smartctl -x shows a few pre-failure values. To wit:
ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR--  117   099   006    -    131606848
  5 Reallocated_Sector_Ct   PO--CK  100   100   036    -    3
  7 Seek_Error_Rate         POSR--  083   060   030    -    209089036
 10 Spin_Retry_Count        PO--C-  100   100   097    -    0
No, nothing there is indicating a risk of imminent failure. That "Pre-fail" label is just telling you what it would mean if there were something other than "-" in the "WHEN_FAILED" column. The only thing there that is at all indicative of something bad happening is the three reallocated sectors, and that's only a problem if that number continues to grow. (Vibration or electrical disturbances can cause a sector or two to get marked as bad, and that doesn't mean that the drive is in danger of catastrophic failure.)

An attribute of interest that you didn't report is #197, "Current Pending Sector". That would indicate sectors that are currently unreadable and will be reallocated the next time they are written. Any such sectors would cause an I/O error if the OS should attempt to read them.
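(For reference, a sketch only with the device name assumed: that single attribute can be pulled out and watched over time with something like

# smartctl -A /dev/sdc | grep -i Current_Pending_Sector

)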
On 11/24/2012 05:39 PM, Robert Nichols wrote:
On 11/24/2012 09:36 AM, JD wrote:
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Yet, running smartctl -x shows a few pre-failure values. To wit:
ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR--  117   099   006    -    131606848
  5 Reallocated_Sector_Ct   PO--CK  100   100   036    -    3
  7 Seek_Error_Rate         POSR--  083   060   030    -    209089036
 10 Spin_Retry_Count        PO--C-  100   100   097    -    0
No, nothing there is indicating a risk of imminent failure. That "Pre-fail" label is just telling you what it would mean if there were something other than "-" in the "WHEN_FAILED" column. The only thing there that is at all indicative of something bad happening is the three reallocated sectors, and that's only a problem if that number continues to grow. (Vibration or electrical disturbances can cause a sector or two to get marked as bad, and that doesn't mean that the drive is in danger of catastrophic failure.)

An attribute of interest that you didn't report is #197, "Current Pending Sector". That would indicate sectors that are currently unreadable and will be reallocated the next time they are written. Any such sectors would cause an I/O error if the OS should attempt to read them.
Here are the values for #197:

ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE

For sdb:
  1 Raw_Read_Error_Rate     POSR--  117   099   006    -    131606848
197 Current_Pending_Sector  -O--C-  100   100   000    -    0

For sdc:
  1 Raw_Read_Error_Rate     POSR--  114   069   006    -    80733495
197 Current_Pending_Sector  -O--C-  100   100   000    -    0
If the raw read error values are so high, and the normalized value for the raw read error rate exceeds the worst-case value, does that mean the drive is dying or near death?
On 11/24/2012 10:27 PM, JD wrote:
Here are the values for #197:

ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE

For sdb:
  1 Raw_Read_Error_Rate     POSR--  117   099   006    -    131606848
197 Current_Pending_Sector  -O--C-  100   100   000    -    0

For sdc:
  1 Raw_Read_Error_Rate     POSR--  114   069   006    -    80733495
197 Current_Pending_Sector  -O--C-  100   100   000    -    0
If the raw read error values are so high, and the normalized value for the raw read error rate exceeds the worst-case value, does that mean the drive is dying or near death?
Not at all. First, suspiciously high "raw" numbers can't always be taken at face value. Seagate in particular likes to pack more than one number into that variable, frequently the total number of operations in addition to the error count, so you have to trust the normalized values, or perhaps go Googling for info on that raw value for your particular drive model. Second, for the normalized values, higher is better. A failure is indicated by a normalized value that is at or below the threshold.
None of the SMART attribute data you have posted indicates any serious problem with the drives.
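To illustrate the "at or below the threshold" rule, a rough sketch only (the column positions assume the usual smartctl attribute table, where VALUE is column 4 and THRESH is column 6; the device name is just an example):

# smartctl -A /dev/sdc | awk '$4 ~ /^[0-9]+$/ && $6 ~ /^[0-9]+$/ && $4+0 <= $6+0 && $6+0 > 0'

That prints only attributes whose normalized VALUE has dropped to or below its THRESH, i.e. the ones SMART itself would flag as failed; an empty result is a good sign.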
On 11/25/2012 09:27 AM, Robert Nichols wrote:
On 11/24/2012 10:27 PM, JD wrote:
Here are the values for #197:

ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE

For sdb:
  1 Raw_Read_Error_Rate     POSR--  117   099   006    -    131606848
197 Current_Pending_Sector  -O--C-  100   100   000    -    0

For sdc:
  1 Raw_Read_Error_Rate     POSR--  114   069   006    -    80733495
197 Current_Pending_Sector  -O--C-  100   100   000    -    0
If the raw read error values are so high, and the normalized value for the raw read error rate exceeds the worst-case value, does that mean the drive is dying or near death?
Not at all. First, suspiciously high "raw" numbers can't always be taken at face value. Seagate in particular likes to pack more than one number into that variable, frequently the total number of operations in addition to the error count, so you have to trust the normalized values, or perhaps go Googling for info on that raw value for your particular drive model. Second, for the normalized values, higher is better. A failure is indicated by a normalized value that is at or below the threshold.
None of the SMART attribute data you have posted indicates any serious problem with the drives.
Thank you, Robert! I truly appreciate your help, as I was about to plunk down a few hundred bucks on the Seagate NS-series 3TB drives to replace my drives.
Best regards,
JD