Ubuntu server upgraded to Edgy, problem with md RAID

Asked by fedorowp

I upgraded an old computer, which acts as a lightweight Internet server and firewall/NAT box for the network behind it, to Edgy Eft. The system is not running the X Window System.

The md RAID-1 in the system is made up of two partitions, one on a SCSI drive, /dev/sda1, and one on an IDE drive, /dev/hda1.

On the reboot after the upgrade, and every reboot since, the RAID starts without the SCSI drive.

I can add it back just fine after the system is up with: mdadm /dev/md0 --add /dev/sda1
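For reference, the array state before and after the re-add can be checked with the standard mdadm/proc interfaces (generic commands, nothing specific to this box):

---
cat /proc/mdstat
sudo mdadm --detail /dev/md0
---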

The other issue I am having, because it could be related, is that the second Ethernet NIC in the computer, an Intel EtherExpress Pro/100, usually does not bring up its interface, eth1, until after I unload and reload the device driver. Occasionally it comes up just as it should, though.
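For the record, the unload/reload sequence is the usual modprobe dance, assuming the card is bound to the e100 driver (the older eepro100 module also served this hardware):

---
sudo modprobe -r e100
sudo modprobe e100
sudo ifup eth1
---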

---
uname -r
---
2.6.17-10-386
---

---
Message mdadm e-mails me about the degraded RAID:
---
From <email address hidden> Sun Jan 7 17:11:46 2007
X-Original-To: root
From: mdadm monitoring <email address hidden>
To: <email address hidden>
Subject: DegradedArray event on /dev/md0:hostname
Date: Sun, 7 Jan 2007 17:11:45 -0500 (EST)

This is an automatically generated mail message from mdadm
running on hostname

A DegradedArray event had been detected on md device /dev/md0.

Faithfully yours, etc.
---

---
Kernel section from /boot/grub/menu.lst:
---
title Ubuntu, kernel 2.6.17-10-386
root (hd1,0)
kernel /boot/vmlinuz-2.6.17-10-386 root=/dev/md0 ro splash
initrd /boot/initrd.img-2.6.17-10-386
savedefault
boot
---

---
/boot/grub/device.map
---
(hd0) /dev/hda
(hd1) /dev/sda
---

---
/etc/fstab
---
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/md0 / reiserfs notail,noatime 0 1
# /dev/sda5 -- converted during upgrade to edgy
UUID=57b4f0d8-9b38-41b9-ac4f-595045c5a85b none swap sw 0 0
/dev/hdc /media/cdrom0 udf,iso9660 ro,user,noauto 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto 0 0
---

---
After the SCSI drive is added back:
sudo mdadm --detail /dev/md0
---
/dev/md0:
        Version : 00.90.03
  Creation Time : Sat Jun 25 16:32:10 2005
     Raid Level : raid1
     Array Size : 8787456 (8.38 GiB 9.00 GB)
    Device Size : 8787456 (8.38 GiB 9.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jan 7 19:18:50 2007
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : ea95427b:c203a295:146ca717:76a30548
         Events : 0.12369600

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       3        1        1      active sync   /dev/hda1
---

Question information

Language: English
Status: Solved
For: Ubuntu
Solved by: fedorowp
David Morris (dave-greenacre) said (#1):

Don't you need an mdadm.conf file that maps the drives to md0?

I think it's in either /etc or /etc/mdadm. What are the contents of this file, if it exists?
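(For reference, on Edgy the file is /etc/mdadm/mdadm.conf, and an ARRAY line for a running array can be regenerated with mdadm's standard scan mode; check the output before appending it:)

---
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
---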

fedorowp (fedorowp) said (#2, best answer):

I solved the problem based on information from this bug report: https://launchpad.net/ubuntu/+source/hw-detect/+bug/40075

What I did was (exact commands sketched below):
1. Added aic7xxx to /etc/initramfs-tools/modules
2. Ran dpkg-reconfigure linux-image-`uname -r`
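A minimal sketch of those two steps as shell commands (dpkg-reconfigure rebuilds the initramfs, so the aic7xxx SCSI driver is available before the array is assembled at boot):

---
# 1. Have initramfs-tools include the Adaptec SCSI driver
echo aic7xxx | sudo tee -a /etc/initramfs-tools/modules

# 2. Rebuild the initramfs for the running kernel
sudo dpkg-reconfigure linux-image-`uname -r`
---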

Shouldn't Ubuntu Edgy automatically load the necessary module, like previous versions did?

---

/etc/mdadm/mdadm.conf:
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=ea95427b:c203a295:146ca717:76a30548
   spares=2

fedorowp (fedorowp) said (#3):

I should also add that fixing the md RAID startup issue also seems to have fixed the Intel EtherExpress Pro/100 not working until I unloaded and reloaded the module.

Perhaps there is a timing issue in the boot process that is triggered when the RAID is degraded?