I am running a server on Ubuntu 16.04. Data storage on this system uses LVM on top of a software RAID 6, while the operating system is installed on a separate RAID 1, also with LVM.
The RAID 6 consists of 7 partitions on 7 disks.
After a growing number of S.M.A.R.T. errors on one of these disks, I decided to swap that disk for a new one before the array became degraded.
I ran sudo mdadm /dev/md2 --fail /dev/sdd4
, followed by sudo mdadm /dev/md2 --remove /dev/sdd4
, before swapping the disks.
After the next boot everything seemed to be fine, so I started partitioning the new disk. I ran sudo parted --list
to adapt the partitioning to match the other disks.
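For illustration, one way to copy the layout of an intact member onto the new disk would be sgdisk (this is only a sketch of the idea, not what I actually typed; I worked with parted, and /dev/sdc as template and /dev/sdd as the new disk are just placeholder names):
sudo sgdisk --replicate=/dev/sdd /dev/sdc   # copy the GPT layout of sdc onto the new sdd
sudo sgdisk --randomize-guids /dev/sdd      # give the copied table new, unique GUIDs
sudo partprobe /dev/sdd                     # make the kernel re-read the partition table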
At this point a strange problem occurred and parted had trouble accessing one of the old disks. I noticed that another drive dropped out of the array, and a few seconds later yet another one. The array failed. I was shocked and shut the system down to prevent further damage.
Later I tried to start the system again and got strange failures like these:
ata2: irq_stat 0x00000040, connection status changed
ata2: SError: { CommWake DevExch }
I only had an emergency console at that point, so I booted a live Linux to inspect the problem. I had read that I could safely run mdadm --assemble --scan
to try to fix the array, but it stayed in a curious state, so I just removed the array from mdadm.conf
and fstab
.
The RAID is now shown as a raid0 with 7 spare drives, but the drives seem to have been working fine in the remaining RAID 1 arrays over the last few hours.
I'm not sure what I should do now. I expect to lose all the data, but I also hope there is a chance to rescue at least part of it. I do have a backup, but only a partial one, because it was a 19 TB array.
State before swapping the disks
chris@uranus:~$ sudo mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Thu Aug 6 00:45:41 2015
Raid Level : raid6
Array Size : 18723832320 (17856.44 GiB 19173.20 GB)
Used Dev Size : 3744766464 (3571.29 GiB 3834.64 GB)
Raid Devices : 7
Total Devices : 7
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Jul 13 17:39:04 2018
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : uranus:2 (local to host uranus)
UUID : 607914eb:666e2a46:b2e43557:02cc2983
Events : 2738806
Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
2 8 52 2 active sync /dev/sdd4
6 8 1 3 active sync /dev/sda1
5 8 68 4 active sync /dev/sde4
8 8 97 5 active sync /dev/sdg1
7 8 81 6 active sync /dev/sdf1
Current state
chris@uranus:/$ sudo mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Raid Level : raid0
Total Devices : 6
Persistence : Superblock is persistent
State : inactive
Name : uranus:2 (local to host uranus)
UUID : 607914eb:666e2a46:b2e43557:02cc2983
Events : 2739360
Number Major Minor RaidDevice
- 8 1 - /dev/sda1
- 8 20 - /dev/sdb4
- 8 36 - /dev/sdc4
- 8 68 - /dev/sde4
- 8 81 - /dev/sdf1
- 8 97 - /dev/sdg1
To clarify things
6 of the drives are not defective; the 7th had errors, but I have already swapped it for a new one. After replacing that faulty drive, the SMART data is good for all drives. There are no errors, bad blocks, pending, uncorrectable or reallocated sectors.
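Roughly how I checked those SMART values on each member drive (a sketch; the exact attribute names depend on the drive model):
sudo smartctl -H /dev/sda                                        # overall health self-assessment
sudo smartctl -A /dev/sda | grep -Ei 'realloc|pending|uncorrect' # the counters mentioned above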
My last --detail
output shows only 6 drives because I have not added the new drive to the existing array yet.
The RAID the operating system is based on was basically a 3 + 1 setup on the same 7 disks, but on its own partitions. When I removed /dev/sdd, the spare drive took its place, so it now consists of 3 partitions without a spare.
There are also boot partitions on 3 of these disks and swap partitions in a RAID 1 on 2 of these disks.
The problem is that mdadm now shows this array as a raid0 with 7 spares, as cat /proc/mdstat
shows, and I need to bring it back to its original raid6 configuration with 6 of 7 drives, in its degraded state. There seems to be a problem with the configuration, but I haven't changed anything there. Afterwards, and only if I manage to restore the array, I would add the swapped-in 7th partition back to the array to get back to my original 7-drive raid6.
If I read the man page correctly, mdadm --assemble --scan
restores the array information based on the configuration or /proc/mdstat
, but these already seem to be wrong.
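What I am considering, but have not run yet, is to stop the inactive device and assemble it explicitly from the six remaining members, roughly like this (only a sketch; --force overrides the event-count check and I understand it should be a last resort):
sudo mdadm --stop /dev/md2
sudo mdadm --assemble /dev/md2 /dev/sda1 /dev/sdb4 /dev/sdc4 /dev/sde4 /dev/sdf1 /dev/sdg1
# if that refuses because the event counts of the members differ:
# sudo mdadm --assemble --force /dev/md2 /dev/sda1 /dev/sdb4 /dev/sdc4 /dev/sde4 /dev/sdf1 /dev/sdg1
Only after the array is back in its degraded raid6 state would I re-add the partition on the replacement disk, e.g. sudo mdadm /dev/md2 --add /dev/sdd4.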
Some more output
cat /proc/mdstat
- now
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : inactive sdg1[8](S) sdf1[7](S) sdb4[0](S) sda1[6](S) sde4[5](S) sdc4[1](S)
22468633600 blocks super 1.2
md129 : active raid1 sdb3[0] sde3[4] sdc3[1]
146353024 blocks super 1.2 [3/3] [UUU]
md128 : active raid1 sdb2[0] sde2[4](S) sdc2[1]
15616896 blocks super 1.2 [2/2] [UU]
unused devices: <none>
cat /etc/mdadm/mdadm.conf
- now
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md128 metadata=1.2 UUID=6813258b:250929d6:8a1e9d34:422a9fbd name=uranus:128
spares=1
ARRAY /dev/md129 metadata=1.2 UUID=ab06d13f:a70de5a6:c83a9383:b1beb84c name=uranus:129
ARRAY /dev/md2 metadata=1.2 UUID=607914eb:666e2a46:b2e43557:02cc2983 name=uranus:2
# This file was auto-generated on Mon, 10 Aug 2015 18:09:47 +0200
# by mkconf $Id$
#ARRAY /dev/md/128 metadata=1.2 UUID=6813258b:250929d6:8a1e9d34:422a9fbd name=uranus:128
# spares=2
#ARRAY /dev/md/129 metadata=1.2 UUID=ab06d13f:a70de5a6:c83a9383:b1beb84c name=uranus:129
# spares=1
#ARRAY /dev/md/2 metadata=1.2 UUID=607914eb:666e2a46:b2e43557:02cc2983 name=uranus:2
cat /etc/fstab
- now
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/vgSystem-vRoot / ext4 errors=remount-ro 0 1
# swap was on /dev/md128 during installation
UUID=5a5b997d-9e94-4391-955f-a2b9a3f63820 none swap sw 0 0
#/dev/vgData/vData /srv ext4 defaults 0 0
#10.10.0.15:/srv/BackupsUranusAutomatic/data /mnt/mars/uranus/automatic/data nfs clientaddr=10.10.0.10,vers=4,noatime,addr=10.10.0.15,noauto 0 0
#10.10.0.15:/srv/BackupsUranusAutomatic/media /mnt/mars/uranus/automatic/media nfs clientaddr=10.10.0.10,vers=4,noatime,addr=10.10.0.15,noauto 0 0
#/srv/shares/Videos/Ungesichert/Videorecorder /srv/vdr/video bind bind 0 0
#/dev/sdh1 /mnt/usbdisk ntfs noatime,noauto 0 0
Disks and partitions - before the problem occurred
Medium /dev/sda: 3,7 TiB, 4000787030016 Bytes, 7814037168 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Typ der Medienbezeichnung: gpt
Medienkennung: 98C35BD3-BFBC-4A4B-AEC9-6D4AFB775AF4
Gerät Start Ende Sektoren Größe Typ
/dev/sda1 2048 7489808383 7489806336 3,5T Linux RAID
/dev/sda2 7489808384 7791525887 301717504 143,9G Linux RAID
Medium /dev/sdb: 3,7 TiB, 4000787030016 Bytes, 7814037168 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Typ der Medienbezeichnung: gpt
Medienkennung: 49102EF7-9FA2-4990-8C30-6C5B463B917E
Gerät Start Ende Sektoren Größe Typ
/dev/sdb1 2048 20479 18432 9M BIOS boot
/dev/sdb2 20480 31270911 31250432 14,9G Linux RAID
/dev/sdb3 31270912 324239359 292968448 139,7G Linux RAID
/dev/sdb4 324239360 7814035455 7489796096 3,5T Linux RAID
Medium /dev/sdc: 3,7 TiB, 4000787030016 Bytes, 7814037168 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Typ der Medienbezeichnung: gpt
Medienkennung: 6A037D00-F252-4CA0-8D68-430734BCA765
Gerät Start Ende Sektoren Größe Typ
/dev/sdc1 2048 20479 18432 9M BIOS boot
/dev/sdc2 20480 31270911 31250432 14,9G Linux RAID
/dev/sdc3 31270912 324239359 292968448 139,7G Linux RAID
/dev/sdc4 324239360 7814035455 7489796096 3,5T Linux RAID
Medium /dev/sdd: 3,7 TiB, 4000787030016 Bytes, 7814037168 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Typ der Medienbezeichnung: gpt
Medienkennung: EADC29D6-C2E9-4AC8-B1B2-F01A5233467C
Gerät Start Ende Sektoren Größe Typ
/dev/sdd1 2048 20479 18432 9M BIOS boot
/dev/sdd2 20480 31270911 31250432 14,9G Linux RAID
/dev/sdd3 31270912 324239359 292968448 139,7G Linux RAID
/dev/sdd4 324239360 7814035455 7489796096 3,5T Linux RAID
Medium /dev/sde: 3,7 TiB, 4000787030016 Bytes, 7814037168 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Typ der Medienbezeichnung: gpt
Medienkennung: 3D7EBBFD-C00D-4503-8BF1-A71534F643E1
Gerät Start Ende Sektoren Größe Typ
/dev/sde1 2048 20479 18432 9M Linux filesystem
/dev/sde2 20480 31270911 31250432 14,9G Linux filesystem
/dev/sde3 31270912 324239359 292968448 139,7G Linux filesystem
/dev/sde4 324239360 7814035455 7489796096 3,5T Linux filesystem
Medium /dev/sdf: 3,7 TiB, 4000787030016 Bytes, 7814037168 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Typ der Medienbezeichnung: gpt
Medienkennung: FCA42FC2-C5E9-45B6-9C18-F103C552993D
Gerät Start Ende Sektoren Größe Typ
/dev/sdf1 2048 7489824767 7489822720 3,5T Linux RAID
/dev/sdf2 7489824768 7791525887 301701120 143,9G Linux RAID
Medium /dev/sdg: 3,7 TiB, 4000787030016 Bytes, 7814037168 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Typ der Medienbezeichnung: gpt
Medienkennung: 8FF8C4CC-6788-47D7-8264-8FA6EF912555
Gerät Start Ende Sektoren Größe Typ
/dev/sdg1 2048 7489824767 7489822720 3,5T Linux RAID
/dev/sdg2 7489824768 7791525887 301701120 143,9G Linux RAID
Medium /dev/md2: 17,4 TiB, 19173204295680 Bytes, 37447664640 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 524288 Bytes / 2621440 Bytes
Medium /dev/md128: 14,9 GiB, 15991701504 Bytes, 31233792 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Medium /dev/md129: 139,6 GiB, 149865496576 Bytes, 292706048 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Medium /dev/mapper/vgSystem-vRoot: 74,5 GiB, 79997960192 Bytes, 156246016 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Medium /dev/mapper/vgData-vData: 17,4 TiB, 19173199577088 Bytes, 37447655424 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 524288 Bytes / 2621440 Bytes
Medium /dev/mapper/vgSystem-testBtrfs: 5 GiB, 5368709120 Bytes, 10485760 Sektoren
Einheiten: sectors von 1 * 512 = 512 Bytes
Sektorengröße (logisch/physisch): 512 Bytes / 4096 Bytes
I/O Größe (minimal/optimal): 4096 Bytes / 4096 Bytes
Disks, partitions, RAID devices and volumes - before the problem occurred
NAME UUID FSTYPE MOUNTPOINT LABEL SIZE
sda 3,7T
├─sda1 607914eb-666e-2a46-b2e4-355702cc2983 linux_raid_member uranus:2 3,5T
│ └─md2 OTNyDe-fNAP-aLzy-Uwat-yYVH-E11D-d1LyzH LVM2_member 17,4T
│ └─vgData-vData a9b3d18d-e45f-4d0f-ab3d-9fe8bfa42157 ext4 /srv data 17,4T
└─sda2 143,9G
sdb 3,7T
├─sdb1 9M
├─sdb2 6813258b-2509-29d6-8a1e-9d34422a9fbd linux_raid_member uranus:128 14,9G
│ └─md128 5a5b997d-9e94-4391-955f-a2b9a3f63820 swap [SWAP] 14,9G
├─sdb3 ab06d13f-a70d-e5a6-c83a-9383b1beb84c linux_raid_member uranus:129 139,7G
│ └─md129 7QXSVM-dauj-RUQ1-uoQp-IamT-TTZo-slzArT LVM2_member 139,6G
│ ├─vgSystem-vRoot fb4bfbb3-de6c-47ef-b237-27af04fa2f4c ext4 / root 74,5G
│ └─vgSystem-testBtrfs 27bbab4c-3c9f-4743-83ac-61e8b41f2bd3 btrfs 5G
└─sdb4 607914eb-666e-2a46-b2e4-355702cc2983 linux_raid_member uranus:2 3,5T
└─md2 OTNyDe-fNAP-aLzy-Uwat-yYVH-E11D-d1LyzH LVM2_member 17,4T
└─vgData-vData a9b3d18d-e45f-4d0f-ab3d-9fe8bfa42157 ext4 /srv data 17,4T
sdc 3,7T
├─sdc1 9M
├─sdc2 6813258b-2509-29d6-8a1e-9d34422a9fbd linux_raid_member uranus:128 14,9G
│ └─md128 5a5b997d-9e94-4391-955f-a2b9a3f63820 swap [SWAP] 14,9G
├─sdc3 ab06d13f-a70d-e5a6-c83a-9383b1beb84c linux_raid_member uranus:129 139,7G
│ └─md129 7QXSVM-dauj-RUQ1-uoQp-IamT-TTZo-slzArT LVM2_member 139,6G
│ ├─vgSystem-vRoot fb4bfbb3-de6c-47ef-b237-27af04fa2f4c ext4 / root 74,5G
│ └─vgSystem-testBtrfs 27bbab4c-3c9f-4743-83ac-61e8b41f2bd3 btrfs 5G
└─sdc4 607914eb-666e-2a46-b2e4-355702cc2983 linux_raid_member uranus:2 3,5T
└─md2 OTNyDe-fNAP-aLzy-Uwat-yYVH-E11D-d1LyzH LVM2_member 17,4T
└─vgData-vData a9b3d18d-e45f-4d0f-ab3d-9fe8bfa42157 ext4 /srv data 17,4T
sdd 3,7T
├─sdd1 9M
├─sdd2 6813258b-2509-29d6-8a1e-9d34422a9fbd linux_raid_member uranus:128 14,9G
│ └─md128 5a5b997d-9e94-4391-955f-a2b9a3f63820 swap [SWAP] 14,9G
├─sdd3 ab06d13f-a70d-e5a6-c83a-9383b1beb84c linux_raid_member uranus:129 139,7G
│ └─md129 7QXSVM-dauj-RUQ1-uoQp-IamT-TTZo-slzArT LVM2_member 139,6G
│ ├─vgSystem-vRoot fb4bfbb3-de6c-47ef-b237-27af04fa2f4c ext4 / root 74,5G
│ └─vgSystem-testBtrfs 27bbab4c-3c9f-4743-83ac-61e8b41f2bd3 btrfs 5G
└─sdd4 607914eb-666e-2a46-b2e4-355702cc2983 linux_raid_member uranus:2 3,5T
└─md2 OTNyDe-fNAP-aLzy-Uwat-yYVH-E11D-d1LyzH LVM2_member 17,4T
└─vgData-vData a9b3d18d-e45f-4d0f-ab3d-9fe8bfa42157 ext4 /srv data 17,4T
sde 3,7T
├─sde1 9M
├─sde2 6813258b-2509-29d6-8a1e-9d34422a9fbd linux_raid_member uranus:128 14,9G
│ └─md128 5a5b997d-9e94-4391-955f-a2b9a3f63820 swap [SWAP] 14,9G
├─sde3 ab06d13f-a70d-e5a6-c83a-9383b1beb84c linux_raid_member uranus:129 139,7G
│ └─md129 7QXSVM-dauj-RUQ1-uoQp-IamT-TTZo-slzArT LVM2_member 139,6G
│ ├─vgSystem-vRoot fb4bfbb3-de6c-47ef-b237-27af04fa2f4c ext4 / root 74,5G
│ └─vgSystem-testBtrfs 27bbab4c-3c9f-4743-83ac-61e8b41f2bd3 btrfs 5G
└─sde4 607914eb-666e-2a46-b2e4-355702cc2983 linux_raid_member uranus:2 3,5T
└─md2 OTNyDe-fNAP-aLzy-Uwat-yYVH-E11D-d1LyzH LVM2_member 17,4T
└─vgData-vData a9b3d18d-e45f-4d0f-ab3d-9fe8bfa42157 ext4 /srv data 17,4T
sdf 3,7T
├─sdf1 607914eb-666e-2a46-b2e4-355702cc2983 linux_raid_member uranus:2 3,5T
│ └─md2 OTNyDe-fNAP-aLzy-Uwat-yYVH-E11D-d1LyzH LVM2_member 17,4T
│ └─vgData-vData a9b3d18d-e45f-4d0f-ab3d-9fe8bfa42157 ext4 /srv data 17,4T
└─sdf2 143,9G
sdg 3,7T
├─sdg1 607914eb-666e-2a46-b2e4-355702cc2983 linux_raid_member uranus:2 3,5T
│ └─md2 OTNyDe-fNAP-aLzy-Uwat-yYVH-E11D-d1LyzH LVM2_member 17,4T
│ └─vgData-vData a9b3d18d-e45f-4d0f-ab3d-9fe8bfa42157 ext4 /srv data 17,4T
└─sdg2
Superblocks of the array member devices
mdadm --examine /dev/sd<array-member-harddrives>
- now
There are only 6 drives because the 7th, 'new' drive has not been added to the array yet.
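To compare the members at a glance, the relevant fields can be pulled out of the --examine output with a small loop (a sketch only; the full outputs follow below):
for d in /dev/sda1 /dev/sdb4 /dev/sdc4 /dev/sde4 /dev/sdf1 /dev/sdg1; do
    echo "== $d =="
    sudo mdadm --examine "$d" | grep -E 'Update Time|Events|Device Role|Array State'
done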
chris@uranus:/$ sudo mdadm --examine /dev/sda1
[sudo] Passwort für chris:
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 607914eb:666e2a46:b2e43557:02cc2983
Name : uranus:2 (local to host uranus)
Creation Time : Thu Aug 6 00:45:41 2015
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 7489544192 (3571.29 GiB 3834.65 GB)
Array Size : 18723832320 (17856.44 GiB 19173.20 GB)
Used Dev Size : 7489532928 (3571.29 GiB 3834.64 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=11264 sectors
State : active
Device UUID : 49c6404e:ee9509ba:c980942a:1db9cf3c
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 13 22:34:48 2018
Checksum : aae603a7 - correct
Events : 2739360
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AA.AAAA ('A' == active, '.' == missing, 'R' == replacing)
chris@uranus:/$ sudo mdadm --examine /dev/sdb4
/dev/sdb4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 607914eb:666e2a46:b2e43557:02cc2983
Name : uranus:2 (local to host uranus)
Creation Time : Thu Aug 6 00:45:41 2015
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 7489533952 (3571.29 GiB 3834.64 GB)
Array Size : 18723832320 (17856.44 GiB 19173.20 GB)
Used Dev Size : 7489532928 (3571.29 GiB 3834.64 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=1024 sectors
State : clean
Device UUID : 61d97294:3ce7cd84:7bb4d5f1:d301c842
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 13 22:42:15 2018
Checksum : 890fbe3d - correct
Events : 2739385
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AA..A.. ('A' == active, '.' == missing, 'R' == replacing)
chris@uranus:/$ sudo mdadm --examine /dev/sdc4
/dev/sdc4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 607914eb:666e2a46:b2e43557:02cc2983
Name : uranus:2 (local to host uranus)
Creation Time : Thu Aug 6 00:45:41 2015
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 7489533952 (3571.29 GiB 3834.64 GB)
Array Size : 18723832320 (17856.44 GiB 19173.20 GB)
Used Dev Size : 7489532928 (3571.29 GiB 3834.64 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=1024 sectors
State : clean
Device UUID : ee70c4ab:5b65dae7:df3a78f0:e8bdcead
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 13 22:42:15 2018
Checksum : 6d171664 - correct
Events : 2739385
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AA..A.. ('A' == active, '.' == missing, 'R' == replacing)
chris@uranus:/$ sudo mdadm --examine /dev/sde4
/dev/sde4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 607914eb:666e2a46:b2e43557:02cc2983
Name : uranus:2 (local to host uranus)
Creation Time : Thu Aug 6 00:45:41 2015
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 7489533952 (3571.29 GiB 3834.64 GB)
Array Size : 18723832320 (17856.44 GiB 19173.20 GB)
Used Dev Size : 7489532928 (3571.29 GiB 3834.64 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=1024 sectors
State : clean
Device UUID : 6ce5311f:084ded8e:ba3d4e06:43e38c67
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 13 22:42:15 2018
Checksum : 572b9ac7 - correct
Events : 2739385
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AA..A.. ('A' == active, '.' == missing, 'R' == replacing)
chris@uranus:/$ sudo mdadm --examine /dev/sdf1
/dev/sdf1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 607914eb:666e2a46:b2e43557:02cc2983
Name : uranus:2 (local to host uranus)
Creation Time : Thu Aug 6 00:45:41 2015
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 7489560576 (3571.30 GiB 3834.66 GB)
Array Size : 18723832320 (17856.44 GiB 19173.20 GB)
Used Dev Size : 7489532928 (3571.29 GiB 3834.64 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=27648 sectors
State : clean
Device UUID : 7c4fbe19:d63eced4:1b40cf79:e759fe4b
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 13 22:36:17 2018
Checksum : ef93d641 - correct
Events : 2739381
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 6
Array State : AA..A.A ('A' == active, '.' == missing, 'R' == replacing)
chris@uranus:/$ sudo mdadm --examine /dev/sdg1
/dev/sdg1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 607914eb:666e2a46:b2e43557:02cc2983
Name : uranus:2 (local to host uranus)
Creation Time : Thu Aug 6 00:45:41 2015
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 7489560576 (3571.30 GiB 3834.66 GB)
Array Size : 18723832320 (17856.44 GiB 19173.20 GB)
Used Dev Size : 7489532928 (3571.29 GiB 3834.64 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=27648 sectors
State : clean
Device UUID : 36d9dffc:27699128:e84f87e7:38960357
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jul 13 22:35:47 2018
Checksum : 9f34d651 - correct
Events : 2739377
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 5
Array State : AA..AAA ('A' == active, '.' == missing, 'R' == replacing)