RAID fails to rebuild after reboot

Asked by WonderStivi

This is my first question on Launchpad, so please feel free to ask for any additional information you need to solve this problem. I'll try to explain it as accurately as possible.

I have 5 disks in my computer, running Ubuntu 9.04. One of them runs the OS (and is, as far as I know, not part of the problem), while the remaining four are set up as 2x2 RAID0, with LVM combining the two RAID arrays into one big volume.

To elaborate a bit:
I have 2x750GB disks making up md0, built with mdadm.
I have 2x500GB disks making up md1, built with mdadm.
LVM then joins these two RAID arrays into one big volume totalling 2.27TB (see the creation sketch below).
The LVM volume then fails to mount because the RAID arrays are faulty. It is set up to mount by UUID in fstab.
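
For context, a 2x2 RAID0 + LVM setup like this is typically created along the following lines. This is only a rough sketch: the device names match the outputs further down in this thread, while the lvcreate options and the filesystem type are assumptions rather than the exact commands used here.

# Two RAID0 arrays built with mdadm
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc5 /dev/sdd5
sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdb1 /dev/sde1

# LVM on top: one physical volume per array, one volume group, one big logical volume
sudo pvcreate /dev/md0 /dev/md1
sudo vgcreate lvmg /dev/md0 /dev/md1
sudo lvcreate -l 100%FREE -n storage lvmg
sudo mkfs.ext3 /dev/lvmg/storage   # filesystem type is an assumption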

After a reboot, the md devices fail to assemble. I have to run mdadm --stop --scan and then reassemble the md devices before I can bring up the LVM volume. What bothers me is that this worked flawlessly in 8.10; the problem did not appear until I upgraded. I did, however, do a full reinstall of 9.04, not a direct upgrade from 8.10.

After stopping the devices, reassembling the arrays and reactivating the LVM volume (roughly as sketched below), all the data is intact.
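
Concretely, the recovery after each reboot looks roughly like this. The device names are taken from the outputs later in this thread; the mount point is only an example, since the real mount comes from fstab:

# Stop the incomplete md_d* devices that appear after boot
sudo mdadm --stop --scan

# Reassemble both arrays from their member partitions
sudo mdadm --assemble /dev/md0 /dev/sdc5 /dev/sdd5
sudo mdadm --assemble /dev/md1 /dev/sdb1 /dev/sde1

# Reactivate the volume group and mount the logical volume
sudo vgchange -ay lvmg
sudo mount /dev/lvmg/storage /mnt/storage   # example mount point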

I have not seen any bug reports describing similar problems. Most of the bugs regarding RAID are related to booting the OS from RAID arrays.

Hopefully someone can help me solve this.

dmesg log provided on pastebin: http://pastebin.com/f28825e
uname -a: Linux Base 2.6.28-11-generic #42-Ubuntu SMP Fri Apr 17 01:57:59 UTC 2009 i686 GNU/Linux

If any other information is needed, please let me know.

Question information

Language: English
Status: Solved
For: Ubuntu lvm2
Solved by: WonderStivi
WonderStivi (stian-sigbjornsen) said :
#1

Here's the information I get running "cat /proc/mdstat" after a reboot:

stian@Base:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md_d0 : inactive sdc5[0](S)
      732571904 blocks

md_d1 : inactive sde1[1](S)
      488383936 blocks

unused devices: <none>
stian@Base:~$

WonderStivi (stian-sigbjornsen) said :
#2

Here's the output of "fdisk -l", though this is after the arrays have been rebuilt:
stian@Base:~$ sudo fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000ae72d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *            1         976     7839688+  83  Linux
/dev/sda2              977        9729    70308472+   5  Extended
/dev/sda5              977        1231     2048256   82  Linux swap / Solaris
/dev/sda6             1232        9729    68260153+  83  Linux

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf68f9d3f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1                1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sdc: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00003f67

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1                1       91201   732572001    5  Extended
/dev/sdc5                1       91201   732571969+  fd  Linux raid autodetect

Disk /dev/sdd: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf640f640

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1                1       91201   732572001    5  Extended
/dev/sdd5                1       91201   732571969+  fd  Linux raid autodetect

Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x186e3350

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1                1       60801   488384001   fd  Linux raid autodetect

Disk /dev/md1: 1000.2 GB, 1000210300928 bytes
2 heads, 4 sectors/track, 244191968 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 1500.3 GB, 1500307259392 bytes
2 heads, 4 sectors/track, 366285952 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table
stian@Base:~$

WonderStivi (stian-sigbjornsen) said :
#3

After doing "mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf", it seems the RAID arrays are working fine after a reboot. However, the LVM volume still has to be activated and mounted manually. This is shown below, run just after a reboot:

stian@Base:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdc5[0] sdd5[1]
      1465143808 blocks 64k chunks

md1 : active raid0 sdb1[0] sde1[1]
      976767872 blocks 64k chunks

unused devices: <none>
stian@Base:~$ sudo pvscan
[sudo] password for stian:
  PV /dev/md1 VG lvmg lvm2 [931.52 GB / 0 free]
  PV /dev/md0 VG lvmg lvm2 [1.36 TB / 0 free]
  Total: 2 [2.27 TB] / in use: 2 [2.27 TB] / in no VG: 0 [0 ]
stian@Base:~$ sudo vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "lvmg" using metadata type lvm2
stian@Base:~$ sudo lvscan
  ACTIVE '/dev/lvmg/storage' [2.27 TB] inherit
stian@Base:~$
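
At this point the remaining manual step is essentially just activating the volume group and mounting the logical volume, something like the following (the mount point is an example; the real one is whatever fstab points at):

sudo vgchange -ay lvmg                      # activate the volume group
sudo mount /dev/lvmg/storage /mnt/storage   # example mount point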

WonderStivi (stian-sigbjornsen) said :
#4

I solved this eventually:

1. Updated /etc/mdadm/mdadm.conf:

mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf

2. Edited /etc/lvm/lvm.conf:

# By default, LVM2 will ignore devices used as components of
# software RAID (md) devices by looking for md superblocks.
# 1 enables; 0 disables.
md_component_detection = 0

3. Changed /etc/fstab to mount /dev/lvmg/storage instead of the UUID (see the sketch below).
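
For reference, the resulting configuration ends up looking roughly like this. The ARRAY lines are in the format produced by mdadm --examine --scan but with placeholder UUIDs, and the fstab mount point, filesystem type and options are examples rather than copies of the real files:

# /etc/mdadm/mdadm.conf -- ARRAY lines appended by the command in step 1 (UUIDs are placeholders)
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

# /etc/lvm/lvm.conf -- inside the devices { } section
md_component_detection = 0

# /etc/fstab -- mount by device path instead of UUID (mount point and fs type are examples)
/dev/lvmg/storage  /mnt/storage  ext3  defaults  0  2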

Vitaliy Kulikov (slonua) said :
#5

So, GRUB 1.98 works perfectly.
You can use the deb from here: https://edge.launchpad.net/~ricotz/+archive/unstable

Also, you can run the following command to use the default settings:

$ sudo dpkg-reconfigure grub-pc