I have a system at home with a pair of 640 GB drives that are getting full. The drives are configured in a Linux software RAID1 array. Replacing them with a pair of 2 TB drives is a pretty easy (although lengthy) process.
First, we need to verify that our RAID system is running correctly. Take a look at /proc/mdstat to verify that both drives are online:
root@host:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb3[1] sda3[0]
      622339200 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
      192640 blocks [2/2] [UU]
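If you want more detail than /proc/mdstat offers, mdadm itself can report the state of each array. This is just an extra sanity check, not a required step:

root@host:~# mdadm --detail /dev/md0
root@host:~# mdadm --detail /dev/md1

Both arrays should report a state of "clean" with two active devices before you pull a drive.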
After verifying that the system was healthy, I powered the machine off, removed the second hard drive, and replaced it with the new drive. Upon starting it back up, Ubuntu noticed that the RAID array was running in degraded mode, and I had to answer yes at the console to have it continue booting.
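If you would rather not pull a live array member, you can mark the drive's partitions as failed and remove them from the arrays before powering off. I didn't do this, but something like the following should work:

root@host:~# mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
root@host:~# mdadm --manage /dev/md1 --fail /dev/sdb3 --remove /dev/sdb3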
Once the machine was running, I logged in and created a similar partition structure on the new drive using the fdisk command. On my system, I have a small 200 MB partition for /boot as /dev/sdb1, a 1 GB swap partition, and then the rest of the drive as one big block for the root partition. I copied the partition layout that was on /dev/sda, but made the final partition take up the entire rest of the drive. Make sure to set the first partition as bootable. The partitions on /dev/sdb now look like this:
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          24      192748+  fd  Linux raid autodetect
/dev/sdb2              25         146      979965   82  Linux swap / Solaris
/dev/sdb3             147      243201  1952339287+  fd  Linux raid autodetect
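Rather than recreating the partitions by hand in fdisk, you can copy the layout from the old drive with sfdisk (this works for MBR partition tables like these). You would still need to delete and recreate the last partition afterwards so it fills the larger drive:

root@host:~# sfdisk -d /dev/sda | sfdisk /dev/sdb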
After the partitions match up, I can add the new partitions on /dev/sdb into the RAID arrays:
root@host:~# mdadm --manage /dev/md0 --add /dev/sdb1
root@host:~# mdadm --manage /dev/md1 --add /dev/sdb3
And watch the status of the rebuild using:
watch cat /proc/mdstat
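If the rebuild is crawling along, the kernel's RAID rebuild speed limits can be raised. These live in /proc/sys/dev/raid/ and are expressed in KB/s per device; the right values depend on your drives, so treat these numbers as a rough guess:

root@host:~# echo 50000 > /proc/sys/dev/raid/speed_limit_min
root@host:~# echo 200000 > /proc/sys/dev/raid/speed_limit_max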
After that was done, I installed grub onto the new /dev/sdb:
root@host:~# grub
grub> device (hd1) /dev/sdb
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
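On Ubuntu, the grub-install wrapper should accomplish the same thing in one step. This system is old enough that it is running GRUB legacy; I haven't tried this with grub2:

root@host:~# grub-install /dev/sdb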
Finally, reboot once and make sure that everything works as expected.
The system is now running with one old drive and one new drive. The next step is to repeat the whole process with the other old drive: power down, replace /dev/sda with the second new drive, partition it, and re-add it to the RAID arrays. The steps are the same as above, except performed against /dev/sda. I also had to change my BIOS to boot from the second drive.
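For reference, the second pass looks like this; the same commands as before, just aimed at /dev/sda and the first BIOS drive (hd0):

root@host:~# mdadm --manage /dev/md0 --add /dev/sda1
root@host:~# mdadm --manage /dev/md1 --add /dev/sda3
root@host:~# grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit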
Once both new drives are installed and working with the RAID array, the final part of the process is to increase the size of the file system to the full size of the new drives. I first had to disable the ext3 file system journal so that the online resize wouldn't run out of memory.
Edit /etc/fstab and change the file system type for the root partition to ext2, and add "noload" to the mount options, which disables the file system journal. My /etc/fstab looks like this:
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# /dev/md1
UUID=b165f4be-a705-4479-b830-b0c6ee77b730 /     ext2 noload,relatime,errors=remount-ro 0 1
# /dev/md0
UUID=30430fe0-919d-4ea6-b3c2-5e3564344917 /boot ext3 relatime 0 2
# /dev/sda5
UUID=94b03944-d215-4882-b0e0-dab3a8c50480 none  swap sw       0 0
# /dev/sdb5
UUID=ebb381ae-e1bb-4918-94ec-c26e388bb539 none  swap sw       0 0
You then have to run `mkinitramfs` to rebuild the initramfs so that the root file system is mounted with the new options:
root@host:~# mkinitramfs -o /boot/initrd.img-2.6.31-20-generic 2.6.31-20-generic
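The kernel version is hardcoded above; substituting `$(uname -r)` avoids typos if you're running a different kernel:

root@host:~# mkinitramfs -o /boot/initrd.img-$(uname -r) $(uname -r)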
Reboot so the system comes up with journaling disabled on the root partition. Now you can actually grow the RAID device to the full size of the new partitions:
root@host:~# mdadm --grow /dev/md1 --size=max
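You can confirm the array picked up the new size before touching the file system; the Array Size line should now reflect roughly 2 TB:

root@host:~# mdadm --detail /dev/md1 | grep 'Array Size'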
And then resize the file system:
root@host:~# resize2fs /dev/md1
resize2fs 1.41.9 (22-Aug-2009)
Filesystem at /dev/md1 is mounted on /; on-line resizing required
old desc_blocks = 38, new_desc_blocks = 117
Performing an on-line resize of /dev/md1 to 488084800 (4k) blocks.
The system is now up and running with larger drives :). Change /etc/fstab back, rebuild the initramfs, and reboot one more time to re-enable journaling, and the upgrade is officially done.
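To spell that out: switch the root entry in /etc/fstab back to ext3, drop the "noload" option, and then run the same commands as before:

root@host:~# mkinitramfs -o /boot/initrd.img-2.6.31-20-generic 2.6.31-20-generic
root@host:~# reboot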