boot with degraded raid5 always triggers initramfs prompt

Asked by dov aharonson on 2012-07-29

I have a remote Ubuntu 12.04 machine with 5 hard drives. The Ubuntu system and swap are installed on /dev/sde, and in addition I have a software RAID5 over the sda1, sdb1, sdc1 and sdd1 partitions; the RAID5 array is used as a physical volume for LVM.

The system is a headless remote system that must restart/boot automatically without any user input.

I tried to simulate a disk failure on the RAID5 by pulling one drive out of the system while it was running.
Everything looks OK: I can continue to use the RAID5 array, it is reported as degraded, and mdadm sent the event-triggered email.

BUT - when I shut it down and try to boot, the system detects the degraded RAID5 array and drops into the initramfs prompt, where I need to respond manually and type exit in order to let the boot continue. As I said, this is a remote unit that must boot/restart automatically.

I already tried the following:
1. I modified /etc/initramfs-tools/conf.d/mdadm to contain the line: BOOT_DEGRADED=true
2. Just to make sure, I also ran sudo dpkg-reconfigure mdadm and enabled the boot-degraded option through that tool as well.
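A detail that is easy to miss, and a possible cause here (an assumption, not confirmed for this exact setup): changes under /etc/initramfs-tools/conf.d/ only take effect after the initramfs image is rebuilt. A minimal sketch, assuming the stock Ubuntu 12.04 paths:

```shell
# Make sure the flag is really in the initramfs mdadm conf
# (path is the stock Ubuntu one; adjust if your layout differs)
echo 'BOOT_DEGRADED=true' | sudo tee /etc/initramfs-tools/conf.d/mdadm

# Rebuild the initramfs so the setting is actually baked into the
# boot image -- editing the file alone has no effect on the next boot
sudo update-initramfs -u
```

dpkg-reconfigure mdadm normally triggers this rebuild itself, but running update-initramfs -u explicitly rules that step out as the problem.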

Is this a bug in Ubuntu, or am I doing something wrong?
If it is a bug, is there a workaround for this issue until it is solved?
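One possible workaround, if the conf.d route keeps failing (hedged: this relies on the bootdegraded kernel parameter honored by Ubuntu's mdadm initramfs scripts, which I have not verified on this particular install), is to pass the flag on the kernel command line via GRUB:

```shell
# In /etc/default/grub, add bootdegraded=true to the default kernel
# command line, for example:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash bootdegraded=true"

# Then regenerate grub.cfg so the parameter is applied on the next boot
sudo update-grub
```

With the parameter on the command line, the initramfs should assemble and boot from the degraded array without waiting for input at the prompt.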

I am relatively new to both Linux and Ubuntu, so I really need your help to resolve this problem.
I appreciate your time and your help.

Question information

Language: English
Project: Ubuntu util-linux
Assignee: None

I suggest you report a bug.
