I know this is old, but these steps may be useful to people.
How do you add disks to a RAID-0 array?
Environment:
- CentOS 7 (kernel: 3.10.0-327.22.2.el7.x86_64)
- mdadm v3.4 - 28th January 2016
- First 3 disks of 10 GB each
- Fourth disk also 10 GB
Initial setup:
$ sudo mdadm --create --verbose /dev/md0 --level=0 --name=DB_RAID2 --raid-devices=3 /dev/xvdh /dev/xvdi /dev/xvdj
$ sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 5 14:25:10 2017
Raid Level : raid0
Array Size : 31432704 (29.98 GiB 32.19 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Sep 5 14:25:10 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : temp:DB_RAID2 (local to host temp)
UUID : e8780813:5adbe875:ffb0ab8a:05f1352d
Events : 0
Number Major Minor RaidDevice State
0 202 112 0 active sync /dev/xvdh
1 202 128 1 active sync /dev/xvdi
2 202 144 2 active sync /dev/xvdj
$ sudo mkfs -t ext4 /dev/md0
$ sudo mount /dev/md0 /mnt/test
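At this point it is worth confirming the array state and the mounted size before growing anything; with three 10 GB disks the filesystem should show roughly 30G, as the df output further down also does:
$ cat /proc/mdstat
$ df -h /mnt/test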
Adding a disk to the RAID-0 in a single step (does not work):
$ sudo mdadm --grow /dev/md0 --raid-devices=4 --add /dev/xvdk
mdadm: level of /dev/md0 changed to raid4
mdadm: added /dev/xvdk
mdadm: Failed to initiate reshape!
This probably fails due to this bug.
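If you already ran the one-step command above, the array is most likely left at raid4 with /dev/xvdk attached but not reshaped. Check with mdadm -D and, assuming the new disk is still only a spare, it can be pulled back out before continuing with the steps below:
$ sudo mdadm -D /dev/md0
$ sudo mdadm /dev/md0 --remove /dev/xvdk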
Step 1: Convert to RAID-4:
$ sudo mdadm --grow --level 4 /dev/md0
mdadm: level of /dev/md0 changed to raid4
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 xvdj[2] xvdi[1] xvdh[0]
31432704 blocks super 1.2 level 4, 512k chunk, algorithm 5 [4/3] [UUU_]
unused devices: <none>
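The level change is only a metadata update, so the command should return immediately; the new level can also be confirmed from the array details, for example:
$ sudo mdadm -D /dev/md0 | grep 'Raid Level'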
Step 2: Add a disk:
$ sudo mdadm --manage /dev/md0 --add /dev/xvdk
mdadm: added /dev/xvdk
Wait until it recovers:
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 xvdk[4] xvdj[2] xvdi[1] xvdh[0]
31432704 blocks super 1.2 level 4, 512k chunk, algorithm 5 [4/3] [UUU_]
[=>...................] recovery = 8.5% (893572/10477568) finish=3.5min speed=44678K/sec
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 xvdk[4] xvdj[2] xvdi[1] xvdh[0]
31432704 blocks super 1.2 level 4, 512k chunk, algorithm 5 [4/4] [UUUU]
unused devices: <none>
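Instead of re-running cat by hand, the recovery here (and the reshape in the next step) can be watched continuously, for example:
$ watch -n 5 cat /proc/mdstat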
Step 3: Convert back to RAID-0:
$ sudo mdadm --grow --level 0 --raid-devices=4 /dev/md0
$
Wait until the reshape finishes:
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 xvdk[4] xvdj[2] xvdi[1] xvdh[0]
31432704 blocks super 1.2 level 4, 512k chunk, algorithm 5 [5/4] [UUUU_]
[===>.................] reshape = 16.2% (1702156/10477568) finish=6.1min speed=23912K/sec
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid0 xvdk[4] xvdj[2] xvdi[1] xvdh[0]
41910272 blocks super 1.2 512k chunks
Step 4: Resize the filesystem:
$ sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 5 14:25:10 2017
Raid Level : raid0
Array Size : 41910272 (39.97 GiB 42.92 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue Sep 5 14:55:46 2017
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : temp:DB_RAID2 (local to host temp)
UUID : e8780813:5adbe875:ffb0ab8a:05f1352d
Events : 107
Number Major Minor RaidDevice State
0 202 112 0 active sync /dev/xvdh
1 202 128 1 active sync /dev/xvdi
2 202 144 2 active sync /dev/xvdj
4 202 160 3 active sync /dev/xvdk
$ df -h
/dev/md0 30G 45M 28G 1% /mnt/test
The actual resize, and after resizing:
$ sudo resize2fs /dev/md0
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/md0 is mounted on /mnt/test; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 5
The filesystem on /dev/md0 is now 10477568 blocks long.
$ df -h /dev/md0
Filesystem Size Used Avail Use% Mounted on
/dev/md0 40G 48M 38G 1% /mnt/test
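For reference, the whole procedure can also be wrapped in a small script. This is only a sketch under the assumptions of this example (array /dev/md0, new disk /dev/xvdk, ext4 on top, run as root); it simply polls /proc/mdstat between steps:

#!/bin/bash
# Sketch of the procedure above: grow a RAID-0 md array by one disk
# via a temporary RAID-4 step. Run as root.
# Assumptions: the array is /dev/md0 (RAID-0, 3 devices), the new disk
# is /dev/xvdk, and the filesystem on the array is ext4.
set -euo pipefail

MD=/dev/md0
NEW_DISK=/dev/xvdk

wait_idle() {
    # Give the kernel a moment to start the recovery/reshape, then poll
    # /proc/mdstat until no recovery or reshape is reported any more.
    sleep 5
    while grep -Eq 'recovery|reshape' /proc/mdstat; do
        sleep 10
    done
}

# Step 1: convert RAID-0 to (degraded) RAID-4; this is a metadata-only change.
mdadm --grow --level 4 "$MD"

# Step 2: add the new disk and wait for it to be rebuilt into the array.
mdadm --manage "$MD" --add "$NEW_DISK"
wait_idle

# Step 3: convert back to RAID-0 across all four devices and wait for reshape.
mdadm --grow --level 0 --raid-devices=4 "$MD"
wait_idle

# Step 4: grow the ext4 filesystem online to use the new capacity.
resize2fs "$MD"

The polling loop is there because the --add and --grow commands return before the recovery/reshape has finished, as the /proc/mdstat outputs above show.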