I recently installed CentOS 5 on a server with a Promise PDC20621 SATA RAID card in it (according to lspci). This particular card, of course, is a fakeraid device, meaning that the physical card is nothing more than a regular SATA controller, and the vendor provides drivers that emulate RAID functionality in software. This is supposed to be useful for Windows users who don’t have a native software RAID service available, but it is kind of useless for Linux, since most distros provide md for creating a software RAID device.
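If you want to check what your own machine has, lspci is the quickest way; the grep pattern below is just a convenience, and the exact device string will vary from system to system:

lspci | grep -i raid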
When trying to create a new software RAID array, I would get a bunch of errors about the devices being busy, like this:
[root@www ~]# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: Cannot open /dev/sda1: Device or resource busy
mdadm: Cannot open /dev/sdb1: Device or resource busy
mdadm: Cannot open /dev/sdc1: Device or resource busy
mdadm: Cannot open /dev/sdd1: Device or resource busy
lsof didn’t show any processes using these files, and it took a little while to figure out that ‘dmraid’ was the culprit. dmraid is the Linux driver for fakeraid controllers like the Promise FastTrak and nVidia on-board SATA controllers. From what I could tell, it is loaded from the initrd and automatically attaches itself to any partitions of type ‘fd’ (Linux raid autodetect).
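If you want to confirm that dmraid is what’s holding your partitions, a couple of commands will show it (assuming the dmraid and device-mapper userland tools are installed, which they are on a stock CentOS 5 box):

# List any fakeraid sets dmraid has discovered on the disks
dmraid -r
# List the device-mapper mappings currently sitting on top of the partitions
dmsetup ls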
After a few hours of googling for answers, I had become pretty familiar with the topic. Many of the search results were from people trying to get dmraid working for these devices before it was stable and widely included in distros.
Unfortunately, the default CentOS 5 install builds the dmraid drivers into the initrd, and there was no obvious way to stop them from taking control of the drives. I looked for an argument to pass to the kernel to disable dmraid support, but couldn’t find anything. A few of the posts and emails that I came across on the subject suggested removing the ‘dmraid’ package, and a few people appeared to have some success with that. But when I tried a ‘yum erase dmraid’ on my box, it wanted to remove the kernel along with it, which would probably be bad.
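You can see for yourself why yum insists on dragging the kernel into it; on my box the dependency chain appeared to run through mkinitrd, which requires dmraid:

# Show which installed packages require dmraid
rpm -q --whatrequires dmraid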
After a little more searching, I found that mkinitrd has an option to rebuild the initrd without dmraid support. There was an upgrade available for my kernel, so I did a ‘yum update’ to install the new one, which also left me the old kernel to fall back on if this didn’t work. Once the new kernel was running, I installed the ‘kernel-devel’ and ‘kernel-headers’ packages to pull down some necessary headers, then ran this command to create a new initrd without the troublesome dmraid drivers:
mkinitrd --omit-dmraid /boot/NO_DMRAID_initrd-2.6.18-8.1.6.el5.img 2.6.18-8.1.6.el5
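On CentOS 5 the initrd is just a gzipped cpio archive, so it’s easy to double-check that the new image really leaves the dmraid pieces out (the filename matches the one created above):

# Should print nothing if dmraid was omitted successfully
zcat /boot/NO_DMRAID_initrd-2.6.18-8.1.6.el5.img | cpio -t 2>/dev/null | grep -i dmraid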
Then I simply changed /etc/grub.conf to add an option pointing to my new initrd. My /etc/grub.conf now looks like this:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
## My new non-dmraid boot option
title CentOS (2.6.18-8.1.6.el5) WITHOUT DMRAID GARBAGE
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.1.6.el5 ro root=/dev/hda1
        initrd /NO_DMRAID_initrd-2.6.18-8.1.6.el5.img
## The regular option
title CentOS (2.6.18-8.1.6.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.1.6.el5 ro root=/dev/hda1
        initrd /initrd-2.6.18-8.1.6.el5.img
## My working backup option
title CentOS (2.6.18-8.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/hda1
        initrd /initrd-2.6.18-8.el5.img
I then rebooted into the first option, and it no longer loaded all of the dmraid junk. I can now access the partitions without the ‘resource busy’ problem and create a software RAID array like I’m used to.
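For completeness, here is the same mdadm command from before, which now succeeds, plus a quick way to watch the array build:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# Confirm all four members are active and watch the initial resync progress
cat /proc/mdstat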