Growing RAID5 sets in Ubuntu

January 10, 2007 By: webmaster Category: Technology, Tutorials, Ubuntu

Growing RAID5 sets under Linux by adding new disks on the fly has been possible for some time now. However, the kernel that ships by default with Ubuntu 6.10 does not appear to contain all the support needed to do this. So if you need to do this, as I did, you first have to boot a newer kernel than the one available. I picked 2.6.19 and compiled it using these instructions. That took about five commands in total, so I won't repeat those instructions here. Note, however, that I had to specifically add support for both RAID5 and my various SATA cards to the kernel configuration (the make menuconfig part of the instructions). Your mileage may vary.
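
For orientation, a vanilla kernel build boils down to roughly the following. Treat it as a sketch rather than a substitute for the linked instructions, and substitute whatever source directory and version you are actually using:

# cd /usr/src/linux-2.6.19
# make menuconfig
# make
# make modules_install
# make install

The make menuconfig step is where RAID5 support and the drivers for your SATA controllers have to be enabled, and depending on how you boot you may also need to build an initramfs and add a boot loader entry for the new kernel before rebooting into it.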


Note up front that the procedure is somewhat new, and you must either be willing to lose every bit of data on the disk array you are playing around with or have made a recent backup. There are absolutely no guarantees here.
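
It is also worth confirming that the array is healthy before you start; reshaping a degraded array is asking for trouble. mdadm will tell you:

# mdadm --detail /dev/md0

Look for a state of clean and no failed devices before going any further.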

You know that your current kernel is too old if you get error messages like these when you try to grow your array:

# mdadm --grow /dev/md0 --raid-disks=4
mdadm: Cannot set device size/shape for /dev/md0: Invalid argument

And maybe also:

# mdadm --grow /dev/md0 --raid-disks=4
mdadm: /dev/md0: Cannot get array details from sysfs
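
You can check which kernel you are currently running with:

# uname -r

Stock Ubuntu 6.10 comes with a kernel from the 2.6.17 series.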

Once running the newer kernel, however, this is how I did it. The current setup is one RAID5 array with 6 disks.

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]

md0 : active raid5 sda[0] sdf[5] sde[4] sdd[3] sdc[2] sdb[1]
      1465180480 blocks level 5, 32k chunk, algorithm 2 [6/6] [UUUUUU]

Adding a disk to the array will cause it to be tagged as a spare, only to be used if one of the existing disks fails:

# mdadm --add /dev/md0 /dev/sdg
mdadm: added /dev/sdg
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]

md0 : active raid5 sdg[6](S) sda[0] sdf[5] sde[4] sdd[3] sdc[2] sdb[1]
      1465180480 blocks level 5, 32k chunk, algorithm 2 [6/6] [UUUUUU]

And now for the critical part: telling the RAID set to convert the spare into a working member of the array. This is done with the --grow argument, specifying how many disks the array will consist of after the insertion. Since we had 6 disks to begin with, the new number is 7.

# mdadm --grow /dev/md0 --raid-disks=7
mdadm: Need to backup 960K of critical section..
mdadm: ... critical section passed.
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]

md0 : active raid5 sdg[6] sda[0] sdf[5] sde[4] sdd[3] sdc[2] sdb[1]
      1465180480 blocks super 0.91 level 5, 32k chunk, algorithm 2 [7/7] [UUUUUUU]
      [>....................]  reshape =  0.0% (49804/293036096) finish=489.2min speed=9960K/sec


Piece of cake, and as you can see the array has started to incorporate the new disk. This takes a long time, since all the existing data has to be rewritten across the old disks and the new one.
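
If you want to keep an eye on the reshape, watching /proc/mdstat is enough:

# watch cat /proc/mdstat

The md driver also throttles rebuilds and reshapes; the limits live in /proc/sys/dev/raid/speed_limit_min and speed_limit_max, and raising the minimum can speed things up at the cost of making the machine sluggish for other I/O.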

After that is done, we need to expand the actual file system to fill the new, larger device we've just extended. How to do this depends greatly on what file system you have, of course. I'll show one ext2/ext3 method here, but check out the LVM HOWTO for information about how to deal with other types.

Note that there is a way of resizing an ext2/ext3 file system without unmounting it, but we'll do it the safe way here:

# umount /dev/md0
# ext2resize /dev/md0
ext2resize v1.1.19 - 2001/03/18 for EXT2FS 0.5b
# mount /dev/md0


ext2resize also takes forever on a large file system, on the order of several hours, so be patient.
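
For the record, the online route mentioned above is resize2fs run against the mounted file system (older e2fsprogs shipped a separate ext2online tool for this). It assumes a reasonably new kernel and e2fsprogs and a file system created with resize support, so treat it as a sketch:

# resize2fs /dev/md0

Without a size argument it simply grows the file system to fill the device. Again, no guarantees.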

There, that's it.


 
