Hardware or software failure? [RAID5]

I have a RAID5 array consisting of four Samsung SpinPoint F4 hard drives, configured to spin down automatically after 20 minutes. It is a Linux software RAID.
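
For reference, a 20-minute spin-down like this is usually set with hdparm's standby timer; the line below is only an illustrative sketch (it may not be how my box actually does it, and note the hdparm firmware warning in the smartctl output further down before using it on these particular drives):

# Illustration only: -S 240 means 240 * 5 s = 1200 s = 20 minutes of idle
# time before the drive spins down; /dev/sde is just the example device.
hdparm -S 240 /dev/sde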

Today I noticed that accessing the network storage makes the file manager hang, both on Windows (Explorer) and on Linux (Caja). So I took a look at the RAID (normally I get an email if something is wrong).

Now I am investigating the problem, and the question arises: hardware or software? The hard drive in question is /dev/sde, which is flooding dmesg with the following:

[  514.321832] ata5.00: configured for UDMA/133
[  514.321849] sd 4:0:0:0: [sde] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  514.321853] sd 4:0:0:0: [sde] Sense Key : Medium Error [current] [descriptor]
[  514.321856] sd 4:0:0:0: [sde] Add. Sense: Unrecovered read error - auto reallocate failed
[  514.321858] sd 4:0:0:0: [sde] CDB: 
[  514.321859] Read(10): 28 00 d7 02 00 38 00 00 10 00
[  514.321867] blk_update_request: I/O error, dev sde, sector 3607232568
[  514.321898] ata5: EH complete
[  514.785181] raid5_end_read_request: 22 callbacks suppressed
[  514.785198] md/raid:md0: read error corrected (8 sectors at 3607230520 on sde1)
[  514.785204] md/raid:md0: read error corrected (8 sectors at 3607230528 on sde1)
[  519.849195] ata5.00: exception Emask 0x0 SAct 0x3 SErr 0x0 action 0x0
[  519.849201] ata5.00: irq_stat 0x40000008
[  519.849204] ata5.00: failed command: READ FPDMA QUEUED
[  519.849209] ata5.00: cmd 60/40:00:10:02:02/05:00:d7:00:00/40 tag 0 ncq 688128 in
         res 41/40:00:60:06:02/00:00:d7:00:00/40 Emask 0x409 (media error) <F>
[  519.849212] ata5.00: status: { DRDY ERR }
[  519.849214] ata5.00: error: { UNC }
[  519.861716] ata5.00: configured for UDMA/133
[  519.861806] sd 4:0:0:0: [sde] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  519.861811] sd 4:0:0:0: [sde] Sense Key : Medium Error [current] [descriptor]
[  519.861814] sd 4:0:0:0: [sde] Add. Sense: Unrecovered read error - auto reallocate failed
[  519.861816] sd 4:0:0:0: [sde] CDB: 
[  519.861818] Read(10): 28 00 d7 02 02 10 00 05 40 00
[  519.861826] blk_update_request: I/O error, dev sde, sector 3607234144
[  519.861874] ata5: EH complete
[  525.035364] ata5.00: exception Emask 0x0 SAct 0x18 SErr 0x0 action 0x0
[  525.035369] ata5.00: irq_stat 0x40000008
[  525.035373] ata5.00: failed command: READ FPDMA QUEUED
[  525.035378] ata5.00: cmd 60/80:18:60:06:02/00:00:d7:00:00/40 tag 3 ncq 65536 in
         res 41/40:00:60:06:02/00:00:d7:00:00/40 Emask 0x409 (media error) <F>
[  525.035381] ata5.00: status: { DRDY ERR }
[  525.035382] ata5.00: error: { UNC }
[  525.047886] ata5.00: configured for UDMA/133
[  525.047907] sd 4:0:0:0: [sde] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  525.047910] sd 4:0:0:0: [sde] Sense Key : Medium Error [current] [descriptor]
[  525.047914] sd 4:0:0:0: [sde] Add. Sense: Unrecovered read error - auto reallocate failed
[  525.047916] sd 4:0:0:0: [sde] CDB: 
[  525.047918] Read(10): 28 00 d7 02 06 60 00 00 80 00
[  525.047925] blk_update_request: I/O error, dev sde, sector 3607234144
[  525.047962] ata5: EH complete
[  525.072001] md: md0: resync done.
[  525.541340] RAID conf printout:
[  525.541346]  --- level:5 rd:4 wd:4
[  525.541349]  disk 0, o:1, dev:sdb1
[  525.541350]  disk 1, o:1, dev:sdc1
[  525.541352]  disk 2, o:1, dev:sdd1
[  525.541354]  disk 3, o:1, dev:sde1
[  525.611488] md/raid:md0: read error corrected (8 sectors at 3607232096 on sde1)
[  525.611507] md/raid:md0: read error corrected (8 sectors at 3607232104 on sde1)
[  525.611511] md/raid:md0: read error corrected (8 sectors at 3607232112 on sde1)
[  525.611515] md/raid:md0: read error corrected (8 sectors at 3607232120 on sde1)
[  525.611518] md/raid:md0: read error corrected (8 sectors at 3607232128 on sde1)
[  525.611522] md/raid:md0: read error corrected (8 sectors at 3607232136 on sde1)
[  525.611525] md/raid:md0: read error corrected (8 sectors at 3607232144 on sde1)
[  525.611528] md/raid:md0: read error corrected (8 sectors at 3607232152 on sde1)
[  525.611531] md/raid:md0: read error corrected (8 sectors at 3607232160 on sde1)
[  525.611534] md/raid:md0: read error corrected (8 sectors at 3607232168 on sde1)

SMART shows no errors, but reading the SMART data is incredibly slow:

smartctl 6.3 2014-07-26 r3976 [x86_64-linux-3.19.2-gentoo] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     SAMSUNG SpinPoint F4 EG (AF)
Device Model:     SAMSUNG HD204UI
Serial Number:    S2HGJ1AZ902089
LU WWN Device Id: 5 0024e9 00401ea85
Firmware Version: 1AQ10001
User Capacity:    2.000.398.934.016 bytes [2,00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    5400 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 6
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Wed Jan 13 10:39:37 2016 CET

==> WARNING: Using smartmontools or hdparm with this
drive may result in data loss due to a firmware bug.
****** THIS DRIVE MAY OR MAY NOT BE AFFECTED! ******
Buggy and fixed firmware report same version number!
See the following web pages for details:
http://knowledge.seagate.com/articles/en_US/FAQ/223571en
http://www.smartmontools.org/wiki/SamsungF4EGBadBlocks

SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                    was never started.
                    Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                    without error or no self-test has ever 
                    been run.
Total time to complete Offline 
data collection:        (21180) seconds.
Offline data collection
capabilities:            (0x5b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    No Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                    General Purpose Logging supported.
Short self-test routine 
recommended polling time:    (   2) minutes.
Extended self-test routine
recommended polling time:    ( 353) minutes.
SCT capabilities:          (0x003f) SCT Status supported.
                    SCT Error Recovery Control supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       416
  2 Throughput_Performance  0x0026   252   252   000    Old_age   Always       -       0
  3 Spin_Up_Time            0x0023   068   045   025    Pre-fail  Always       -       9953
  4 Start_Stop_Count        0x0032   093   093   000    Old_age   Always       -       7671
  5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   252   252   051    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0024   252   252   015    Old_age   Offline      -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       2791
 10 Spin_Retry_Count        0x0032   252   252   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       1
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       669
181 Program_Fail_Cnt_Total  0x0022   099   099   000    Old_age   Always       -       24623703
191 G-Sense_Error_Rate      0x0022   100   100   000    Old_age   Always       -       6024
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0002   064   060   000    Old_age   Always       -       25 (Min/Max 13/41)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       23
198 Offline_Uncorrectable   0x0030   252   252   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age   Always       -       81
223 Load_Retry_Count        0x0032   100   100   000    Old_age   Always       -       1
225 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       7679

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has ever been run
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Completed [00% left] (0-65535)
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

But /proc/mdstat also shows:

Personalities : [linear] [raid1] [raid10] [raid6] [raid5] [raid4] 
md0 : active raid5 sdb1[0] sdc1[1] sdd1[2] sde1[4]
      5860147200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>

A few minutes ago it was still resyncing, which finished with this result:

[  525.047962] ata5: EH complete
[  525.072001] md: md0: resync done.
[  525.541340] RAID conf printout:
[  525.541346]  --- level:5 rd:4 wd:4
[  525.541349]  disk 0, o:1, dev:sdb1
[  525.541350]  disk 1, o:1, dev:sdc1
[  525.541352]  disk 2, o:1, dev:sdd1
[  525.541354]  disk 3, o:1, dev:sde1

But even after the resync, this still keeps coming in:

[  525.611488] md/raid:md0: read error corrected (8 sectors at 3607232096 on sde1)
[  525.611507] md/raid:md0: read error corrected (8 sectors at 3607232104 on sde1)
[  525.611511] md/raid:md0: read error corrected (8 sectors at 3607232112 on sde1)
[  525.611515] md/raid:md0: read error corrected (8 sectors at 3607232120 on sde1)
[  525.611518] md/raid:md0: read error corrected (8 sectors at 3607232128 on sde1)
[  525.611522] md/raid:md0: read error corrected (8 sectors at 3607232136 on sde1)
[  525.611525] md/raid:md0: read error corrected (8 sectors at 3607232144 on sde1)
[  525.611528] md/raid:md0: read error corrected (8 sectors at 3607232152 on sde1)
[  525.611531] md/raid:md0: read error corrected (8 sectors at 3607232160 on sde1)
[  525.611534] md/raid:md0: read error corrected (8 sectors at 3607232168 on sde1)
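
For reference, the scrub that produces these messages can also be started by hand; a minimal sketch, assuming the array is md0 as shown in /proc/mdstat above:

# Trigger a manual check of the whole array and watch its progress.
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat                      # shows the running check and its progress
cat /sys/block/md0/md/mismatch_cnt    # parity mismatches found so far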

I cannot figure out what the problem is. Can anyone help me work it out?

    
by Nidhoegger 13.01.2016 / 10:33

1 answer

I would suspect the drive itself, particularly because it always seems to be throwing up the same sectors on the same drive, and they are all physically very close together. In fact, looking at them closely, they are completely sequential. I have seen nearly dead drives before that were making a horrible noise and still showed perfect SMART results, so it is not impossible for SMART to miss this.
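
Just to illustrate the "completely sequential" point, here is a quick sketch (assuming the dmesg excerpt above has been saved to a hypothetical file raid.log) that prints the gap between consecutive corrected chunks:

# Pull the start sector of every "read error corrected" line and print the
# difference between neighbours; within each burst it is a steady 8 sectors
# (4 KiB with 512-byte sectors), i.e. one contiguous stretch of the disk.
grep -o 'at [0-9]* on sde1' raid.log | awk '{print $2}' \
    | awk 'NR > 1 {print $1 - prev} {prev = $1}'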

Personally, I would be inclined to back up my data and test the drive by running a full read test. I would not trust that the error has gone away, especially since you are using RAID5, which only tolerates one drive failing at a time (and if you are forced to replace the drive and rebuild the array, then depending on the amount of data you could be waiting a day or more).
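
A full read test could look something like this (only a sketch; both commands are read-only, but double-check the device name before running anything):

# Read-only surface scan with badblocks (no -w, so nothing is written):
badblocks -sv /dev/sde
# Or simply read the whole drive sequentially and see whether it errors out:
dd if=/dev/sde of=/dev/null bs=1M status=progress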

Either way, now is the time to make sure all your backups are up to date and, if you get the chance, to run a proper test on the drive. The SMART values are great for a quick overview, but I think you would benefit from further diagnostics. There probably is not a serious failure at this stage, but one could be developing.
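
A drive self-test is one such diagnostic; for example (keeping in mind the firmware-bug warning smartctl itself printed about this model):

# Start an extended (long) self-test; it runs inside the drive and takes
# roughly the 353 minutes quoted in the SMART output above.
smartctl -t long /dev/sde
# Once it has finished, check the result in the self-test log:
smartctl -l selftest /dev/sde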

    
by 13.01.2016 / 10:51