Degraded array in software RAID


We have a problem with a degraded software RAID array. After a recent yum update (CentOS Linux 7, 1708) on our Linux cluster, a message was sent to root indicating that a DegradedArray event was detected on the md devices /dev/md/swap|root|boot. After running some commands and asking a few colleagues, we concluded that sdb is no longer associated with sdc, possibly as a result of the update. The partitioning on sdb is not the same as on sdc. Running mdadm -E … suggests the association should be: root md125 sdc3 sdb5; swap md126 sdc1 sdb2; and boot md127 sdc2 sdb3. We tried swapping the hard drives, but that did not help. We are aware of the answer given to a similar question at ( link ). Would that answer still apply even though the sdb and sdc partitions differ? Below is the output of: MESSAGE_1 - the message sent to root; MESSAGE_2 - the output of mdadm --detail /dev/md125 ; mdadm --detail /dev/md126 ; mdadm --detail /dev/md127 ; MESSAGE_3 - the output of fdisk -l and lsscsi .
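For reference, a small helper of our own (a sketch, not part of mdadm) that lists which md devices /proc/mdstat currently reports as degraded, i.e. whose member-status field (such as [_U] or [U_]) contains an underscore for a missing member:

```shell
# Hypothetical helper: read /proc/mdstat-style text on stdin and print
# the name of each md device whose status field shows a missing member.
degraded_arrays() {
  awk '/^md[0-9]+ :/ { dev = $1 }          # remember the current array name
       /\[[U_]*_[U_]*\]/ { print dev }'    # status like [_U] means degraded
}

# Example: check the live system
# degraded_arrays < /proc/mdstat
```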

Thanks in advance for your time and effort.

MESSAGE_1

From root@jtainer_lab_01.mdanderson.edu  Tue Oct 17 17:17:07 2017
Return-Path: <root@jtainer_lab_01.mdanderson.edu>
X-Original-To: root
Delivered-To: root@jtainer_lab_01.mdanderson.edu
Received: by jtainer_lab_01.mdanderson.edu (Postfix, from userid 0)
id 7979A5272BF; Tue, 17 Oct 2017 17:17:07 -0500 (CDT)
From: mdadm monitoring <root@jtainer_lab_01.mdanderson.edu>
To: root@jtainer_lab_01.mdanderson.edu
Subject: DegradedArray event on /dev/md/swap:jtainer_lab_01.mdanderson.edu
Message-Id: <20171017221707.7979A5272BF@jtainer_lab_01.mdanderson.edu>
Date: Tue, 17 Oct 2017 17:17:07 -0500 (CDT)
This is an automatically generated mail message from mdadm
running on jtainer_lab_01.mdanderson.edu
A DegradedArray event had been detected on md device /dev/md/swap.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md125 : active raid1 sdc3[1]
142701440 blocks super 1.2 [2/1] [_U]
bitmap: 2/2 pages [8KB], 65536KB chunk

md126 : active raid1 sdc1[1]
32767872 blocks super 1.2 [2/1] [_U]

md127 : active raid1 sdc2[1]
204736 blocks super 1.0 [2/1] [_U]

unused devices: <none>

From root@jtainer_lab_01.mdanderson.edu  Tue Oct 17 17:17:07 2017
Return-Path: <root@jtainer_lab_01.mdanderson.edu>
X-Original-To: root
Delivered-To: root@jtainer_lab_01.mdanderson.edu
Received: by jtainer_lab_01.mdanderson.edu (Postfix, from userid 0)
id 8E07B52BD73; Tue, 17 Oct 2017 17:17:07 -0500 (CDT)
From: mdadm monitoring <root@jtainer_lab_01.mdanderson.edu>
To: root@jtainer_lab_01.mdanderson.edu
Subject: DegradedArray event on /dev/md/boot:jtainer_lab_01.mdanderson.edu
Message-Id: <20171017221707.8E07B52BD73@jtainer_lab_01.mdanderson.edu>
Date: Tue, 17 Oct 2017 17:17:07 -0500 (CDT)
This is an automatically generated mail message from mdadm
running on jtainer_lab_01.mdanderson.edu
A DegradedArray event had been detected on md device /dev/md/boot.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md125 : active raid1 sdc3[1]
142701440 blocks super 1.2 [2/1] [_U]
bitmap: 2/2 pages [8KB], 65536KB chunk

md126 : active raid1 sdc1[1]
32767872 blocks super 1.2 [2/1] [_U]

md127 : active raid1 sdc2[1]
204736 blocks super 1.0 [2/1] [_U]

unused devices: <none>

From root@jtainer_lab_01.mdanderson.edu  Tue Oct 17 17:17:08 2017
Return-Path: <root@jtainer_lab_01.mdanderson.edu>
X-Original-To: root
Delivered-To: root@jtainer_lab_01.mdanderson.edu
Received: by jtainer_lab_01.mdanderson.edu (Postfix, from userid 0)
id 830EB52BD72; Tue, 17 Oct 2017 17:17:07 -0500 (CDT)
From: mdadm monitoring <root@jtainer_lab_01.mdanderson.edu>
To: root@jtainer_lab_01.mdanderson.edu
Subject: DegradedArray event on /dev/md/root:jtainer_lab_01.mdanderson.edu
Message-Id: <20171017221707.830EB52BD72@jtainer_lab_01.mdanderson.edu>
Date: Tue, 17 Oct 2017 17:17:07 -0500 (CDT)
This is an automatically generated mail message from mdadm
running on jtainer_lab_01.mdanderson.edu
A DegradedArray event had been detected on md device /dev/md/root.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md125 : active raid1 sdc3[1]
142701440 blocks super 1.2 [2/1] [_U]
bitmap: 2/2 pages [8KB], 65536KB chunk

md126 : active raid1 sdc1[1]
32767872 blocks super 1.2 [2/1] [_U]

md127 : active raid1 sdc2[1]
204736 blocks super 1.0 [2/1] [_U]

unused devices: <none>

MESSAGE_2 (output of "mdadm --detail /dev/md125"; "mdadm --detail /dev/md126"; "mdadm --detail /dev/md127")

/dev/md125:
Version : 1.2
Creation Time : Tue Jan 10 06:49:38 2017
Raid Level : raid1
Array Size : 142701440 (136.09 GiB 146.13 GB)
Used Dev Size : 142701440 (136.09 GiB 146.13 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Wed Oct 18 12:44:11 2017
State : clean, degraded 
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : unknown

 Name : localhost:root
 UUID : efa7c05e:02986ab9:5e588700:f0c62cb2
 Events : 4631277

 Number   Major   Minor   RaidDevice State
   -       0        0        0      removed
   1       8       35        1      active sync   /dev/sdc3



/dev/md126:
Version : 1.2
Creation Time : Tue Jan 10 06:49:33 2017
Raid Level : raid1
Array Size : 32767872 (31.25 GiB 33.55 GB)
Used Dev Size : 32767872 (31.25 GiB 33.55 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Sun Oct 15 01:01:03 2017
State : clean, degraded 
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : unknown

 Name : localhost:swap
 UUID : 3774f886:17152e89:79d96cdb:ce258389
 Events : 99

 Number   Major   Minor   RaidDevice State
   -       0        0        0      removed
   1       8       33        1      active sync   /dev/sdc1



/dev/md127:
Version : 1.0
Creation Time : Tue Jan 10 06:50:40 2017
Raid Level : raid1
Array Size : 204736 (199.94 MiB 209.65 MB)
Used Dev Size : 204736 (199.94 MiB 209.65 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Wed Oct 18 12:29:03 2017
State : clean, degraded 
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : unknown

Name : localhost:boot
UUID : ae97ea74:2a96c6db:c2dd6443:be74b4e8
Events : 732

Number   Major   Minor   RaidDevice State
   -       0        0        0      removed
   1       8       34        1      active sync   /dev/sdc2

MESSAGE_3 (output of "fdisk -l" and "lsscsi")

root@jtainer_lab_01:~$ fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental
phase. Use at your own discretion.

Disk /dev/sda: 28001.6 GB, 28001576157184 bytes, 54690578432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 59269F14-5508-4597-97D4-B5E68F5B4826


#         Start          End    Size  Type            Name
1         2048  54690578398   25.5T  Linux filesyste  

Disk /dev/sdb: 180.0 GB, 180045766656 bytes, 351651888 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ac099

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048        6143        2048   83  Linux
/dev/sdb2            6144    65574911    32784384   fd  Linux raid autodetect
/dev/sdb3   *    65574912    65984511      204800   fd  Linux raid autodetect
/dev/sdb4        65984512   351651887   142833688    5  Extended
/dev/sdb5        65986560   351651839   142832640   fd  Linux raid autodetect

Disk /dev/sdc: 180.0 GB, 180045766656 bytes, 351651888 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0002bda2

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    65570815    32784384   fd  Linux raid autodetect
/dev/sdc2   *    65570816    65980415      204800   fd  Linux raid autodetect
/dev/sdc3        65980416   351647743   142833664   fd  Linux raid autodetect

Disk /dev/md127: 209 MB, 209649664 bytes, 409472 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md126: 33.6 GB, 33554300928 bytes, 65535744 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md125: 146.1 GB, 146126274560 bytes, 285402880 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

root@jtainer_lab_01:~$ lsscsi
[0:2:0:0]    disk    LSI      MR9271-8i        3.46  /dev/sda 
[5:0:0:0]    disk    ATA      INTEL SSDSC2BF18 LWDi  /dev/sdb 
[6:0:0:0]    disk    ATA      INTEL SSDSC2BF18 LWDi  /dev/sdc
    
asked by abacolla 03.11.2017 / 16:02

1 answer


All you need to do is add the partitions on sdb back to their respective arrays, for example:

mdadm --re-add /dev/md125 /dev/sdb5

After that, you will see the resync progress in the output of cat /proc/mdstat . It makes sense to delay restoring/resyncing the next array until the resync of the previous one has finished.
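Spelled out for the mapping given in the question (a sketch only: the device names are taken from the question, so confirm each member with mdadm -E before running, as root):

```shell
# Recover all three arrays in sequence, waiting for each resync to finish.
# If --re-add is refused (stale or mismatched superblock, which can happen
# when the partition layouts differ), fall back to `mdadm --add` instead.
mdadm --re-add /dev/md126 /dev/sdb2                    # swap
while grep -q resync /proc/mdstat; do sleep 60; done   # wait for resync
mdadm --re-add /dev/md127 /dev/sdb3                    # boot
while grep -q resync /proc/mdstat; do sleep 60; done
mdadm --re-add /dev/md125 /dev/sdb5                    # root
```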

answered 04.11.2017 / 12:01
