I have a big problem: my Ubuntu Server won't boot after a reboot. The Hetzner data-center staff started checking the system and found this on sda:
-----------------%<-----------------
5 Reallocated_Sector_Ct 0x0033 001 001 036 Pre-fail Always FAILING_NOW 4095
-----------------%<-----------------
There are 3 × 1.5 TB HDDs in RAID5, and the boot loader was on sda.
They insisted on replacing sda with a new HDD, so I agreed.
After that they came back with more news: sdc is failing too...
Everything worked fine until 7:30 AM. Then the server froze, and after the reboot two HDDs are failing... a bit strange, but there is no time to think about that.
They put the corrupted sda back, so that "I can restore some data...", but the problem is that I am stuck.
I tried mounting sdb from the live rescue system and installing GRUB, but I get these errors:
root@rescue ~ # grub-install /dev/sdb
/usr/sbin/grub-probe: error: cannot find a device for /boot/grub (is
/dev mounted?).
root@rescue ~ # mkdir /media/sdb
root@rescue ~ # mount /dev/sdb /media/sdb
mount: you must specify the filesystem type
root@rescue ~ # mount /dev/sdb /media/sdb -t auto
mount: you must specify the filesystem type
root@rescue ~ # mount /dev/sdb /media/sdb -t ext3
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
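The mount fails because /dev/sdb is not a filesystem: per the fdisk output below, it carries two partitions of type `fd` (Linux raid autodetect), so the filesystems live inside md arrays, not on the raw disk. A minimal recovery sketch for the rescue system follows; the device names (sda2, sdb2, md1) are assumptions taken from the fdisk/mdstat output in this post and may differ:

```shell
# Scan for md arrays and try to assemble them from whatever members
# are still present:
mdadm --assemble --scan

# If the data array stays inactive, stop it and force-assemble it
# from the surviving members (--force accepts a slightly out-of-date
# member, so treat the result as read-only recovery material):
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2

# Mount the assembled array read-only, instead of the raw disk:
mkdir -p /media/md1
mount -o ro /dev/md1 /media/md1
```

If the assembly succeeds, copy the data off before touching GRUB or the failing disks any further.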
Can anyone please help me get the Ubuntu server to boot, or somehow restore data from any of the disks...
Extra information:
root@rescue ~ # ll /dev/sd*
brw-rw---T 1 root disk 8, 0 Nov 18 14:04 /dev/sda
brw-rw---T 1 root disk 8, 1 Nov 18 14:06 /dev/sda1
brw-rw---T 1 root disk 8, 2 Nov 18 14:06 /dev/sda2
brw-rw---T 1 root disk 8, 16 Nov 18 14:04 /dev/sdb
brw-rw---T 1 root disk 8, 17 Nov 18 14:04 /dev/sdb1
brw-rw---T 1 root disk 8, 18 Nov 18 14:04 /dev/sdb2
brw-rw---T 1 root disk 8, 32 Nov 18 14:06 /dev/sdc
root@rescue ~ # df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 12G 3.0M 12G 1% /
udev 12G 0 12G 0% /dev
188.40.24.212:/nfs 1.4T 592G 722G 46% /root/.oldroot/nfs
aufs 12G 3.0M 12G 1% /
tmpfs 2.4G 264K 2.4G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 4.8G 0 4.8G 0% /run/shm
root@rescue ~ # ll /dev/loo*
brw-rw---T 1 root disk 7, 0 Nov 18 14:04 /dev/loop0
brw-rw---T 1 root disk 7, 1 Nov 18 14:04 /dev/loop1
brw-rw---T 1 root disk 7, 2 Nov 18 14:04 /dev/loop2
brw-rw---T 1 root disk 7, 3 Nov 18 14:04 /dev/loop3
brw-rw---T 1 root disk 7, 4 Nov 18 14:04 /dev/loop4
brw-rw---T 1 root disk 7, 5 Nov 18 14:04 /dev/loop5
brw-rw---T 1 root disk 7, 6 Nov 18 14:04 /dev/loop6
brw-rw---T 1 root disk 7, 7 Nov 18 14:04 /dev/loop7
crw------T 1 root root 10, 237 Nov 18 14:04 /dev/loop-control
root@rescue ~ # fdisk -l
Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cdbda
Device Boot Start End Blocks Id System
/dev/sdb1 3906 1060289 528192 fd Linux raid autodetect
/dev/sdb2 1060290 2930272064 1464605887+ fd Linux raid autodetect
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c5b77
Device Boot Start End Blocks Id System
/dev/sda1 3906 1060289 528192 fd Linux raid autodetect
/dev/sda2 1060290 2930272064 1464605887+ fd Linux raid autodetect
Disk /dev/md0: 540 MB, 540803072 bytes
2 heads, 4 sectors/track, 132032 cylinders, total 1056256 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
root@rescue ~ # cat /proc/mdstat
Personalities : [raid1]
md1 : inactive sda2[0](S)
1464605760 blocks
md0 : active raid1 sda1[0]
528128 blocks [3/1] [U__]
unused devices: <none>
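This mdstat output also explains the earlier `grub-install` failure: md1 is inactive with sda2 sitting as a spare `(S)`, so the installed system's `/boot/grub` is not visible from the rescue environment. Once the arrays are assembled and mountable, GRUB can be reinstalled from a chroot. A hedged sketch, assuming /dev/md1 carries the root filesystem and /dev/md0 carries /boot (which matches the ~528 MB size of md0, but is not confirmed in the post):

```shell
# Mount the assembled arrays and bind the virtual filesystems that
# grub-install needs inside the chroot:
mount /dev/md1 /mnt
mount /dev/md0 /mnt/boot
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys

# Reinstall the boot loader onto the surviving disk and regenerate
# the GRUB configuration from inside the installed system:
chroot /mnt grub-install /dev/sdb
chroot /mnt update-grub
```

Running `grub-install` against the bare rescue system, as in the transcript above, fails precisely because those mounts are missing.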