We aim to provide in-house support to our colleagues and engineers for any sort of technical help. You can raise your technical queries or browse for solutions to problems through our groups. We provide driver, OS, and utility support to make your life easier.

Our Mission

Transform HCL Infosystems LTD into a services and solution-centric organization

Friday, April 9, 2010

Linux Software RAID10





Creating RAID arrays in Linux during installation is an easy task with Disk Druid or any similar graphical installer. It's best to keep your root filesystem out of both RAID and LVM for easier management and recovery.









Linux RAID and Hardware



I've seen a lot of confusion about Linux RAID, so let's clear that up. Linux software RAID has nothing to do with hardware RAID controllers. You don't need an add-on controller, and you don't need the onboard controllers that come on most motherboards. In fact, the lower-end PCI controllers and virtually all the onboard controllers are not true hardware controllers at all, but software-assisted, or fake RAID. There is no advantage to using these, and many disadvantages. If you have these, make sure they are disabled.

Ordinary PC motherboards support up to six SATA drives, and PCI SATA controllers provide an easy way to add more. Don't forget to scale up your power and cooling as you add drives.

If you're using PATA disks, use only one per IDE controller. If you have both a master and a slave on a single controller, performance will suffer, and a failure of either disk risks taking down the controller and the other disk with it.







GRUB Follies



GRUB Legacy's (v. 0.9x) lack of RAID support is why we have to jump through hoops just to boot the darned thing. Watch out for your distribution's default boot configuration: GRUB must be installed to the MBRs of at least the first two drives in your RAID1 array if you want the system to boot after a drive failure. Most likely your installer only puts it on the MBR of the drive that is first in the BIOS boot order, so you'll need to install it manually on the second disk.

First open the GRUB command shell. This example installs it to /dev/sdb, which GRUB sees as hd1 because it is the second disk on the system:





root@uberpc ~# grub

GNU GRUB version 0.97 (640K lower / 3072K upper memory)



Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename.



grub> root (hd1,0)

Filesystem type is ext2fs, partition type 0xfd



grub> setup (hd1)

Checking if "/boot/grub/stage1" exists... yes

Checking if "/boot/grub/stage2" exists... yes

Checking if "/boot/grub/e2fs_stage1_5" exists... yes

Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 17 sectors are embedded. succeeded

Running "install /boot/grub/stage1 (hd1) (hd1)1+17 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded

Done.



You can do this for every disk in your RAID1 array. /boot/grub/menu.lst should have a default entry that looks something like this:



title Ubuntu 7.10, kernel 2.6.22-14-generic, default

root (hd0,0)

kernel /boot/vmlinuz-2.6.22-14-generic root=/dev/md0 ro

initrd /boot/initrd.img-2.6.22-14-generic



Let's say hd0,0 is really /dev/sda1. If this disk fails, the next drive in line becomes hd0,0, so you only need this single default entry.

GRUB sees PATA drives first, SATA drives second. Let's say you have two PATA disks and two SATA disks. GRUB numbers them this way:



/dev/hda = hd0

/dev/hdb = hd1

/dev/sda = hd2

/dev/sdb = hd3

If you have one of each, /dev/hda=hd0, and /dev/sda=hd1. The safe way to test your boot setup is to power off your system and disconnect your drives one at a time.
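
GRUB Legacy records this mapping in /boot/grub/device.map, so checking that file is a quicker way to confirm how it numbers your drives than guessing. A sketch of what it might contain for the two-PATA, two-SATA example above (the exact contents depend on your installer):

# cat /boot/grub/device.map
(hd0)   /dev/hda
(hd1)   /dev/hdb
(hd2)   /dev/sda
(hd3)   /dev/sdb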







Managing Linux RAID With mdadm



There are still a lot of howtos on the Web that teach the old raidtools commands and the raidtab file. Don't use these. They still work, but the mdadm command does more and is easier to use.







Creating and Testing New Arrays



Use this command to create a new array:







# mdadm -v --create /dev/md1 --level=raid10 --raid-devices=2 /dev/hda2 /dev/sda2
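
Once the new array finishes syncing, it still needs a filesystem, a mount point, and (on most systems) an entry in mdadm's configuration file so it is assembled by name at boot. A minimal sketch, assuming ext3 and a /raid10 mount point (both are just examples) and a distribution that reads /etc/mdadm.conf (Debian and Ubuntu use /etc/mdadm/mdadm.conf instead):

# mkfs.ext3 /dev/md1
# mkdir /raid10
# mount /dev/md1 /raid10
# mdadm --detail --scan >> /etc/mdadm.conf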



You may want to have a hot spare. This is a partitioned hard disk that is connected but sits unused until an active drive fails; then mdadm (if it is running in daemon mode; see the Monitoring section) automatically rebuilds the array onto the hot spare. This example includes one hot spare:







# mdadm -v --create /dev/md1 --level=raid10 --raid-devices=2 --spare-devices=1 /dev/hda2 /dev/sda2 /dev/sdb2





You can test this by "failing" and removing a partition manually:







# mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2





Then run some querying commands to see what happens.
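
For example, one way to watch the results (output will vary with your hardware) is to check the array state, then add the "failed" partition back and watch it rebuild:

# cat /proc/mdstat
# mdadm --detail /dev/md1
# mdadm /dev/md1 --add /dev/sda2
# watch cat /proc/mdstat

If a hot spare is configured, you should see the rebuild start on the spare as soon as the device is failed.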

When you have more than one array, they can share a hot spare. You should already have some lines in /etc/mdadm.conf that list your arrays. All you do is create a spare group by adding a spare-group line under each ARRAY entry, like this:







ARRAY /dev/md0 level=raid1 num-devices=2 UUID=004e8ffd:05c50a71:a20c924c:166190b6
   spare-group=share1
ARRAY /dev/md1 level=raid10 num-devices=2 UUID=38480e56:71173beb:2e3a9d03:2fa3175d
   spare-group=share1





View the status of all RAID arrays on the system:







$ cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

md1 : active raid10 hda2[0] sda2[1]

      6201024 blocks 2 near-copies [2/2] [UU]



md0 : active raid1 hda1[0] sda1[1]

      3076352 blocks [2/2] [UU]



The "personalities" line tells you what RAID levels the kernel supports. In this example you see two separate arrays: md1 and md0, that are both active, their names and BIOS order, and the size and RAID type of each one. 2/2 means two of two devices are in use, and UU means two up devices.



You can get detailed information on individual arrays:







# mdadm --detail /dev/md0

Is this partition part of a RAID array? This displays the contents of the md superblock, which marks it as a member of a RAID array:





# mdadm --examine /dev/hda1

You can also use wildcards, like mdadm --examine /dev/hda*.







Monitoring



mdadm itself can run in daemon mode and send you email when an active disk fails, when a spare fails, or when it detects a degraded array. Degraded means a new array that has not yet been populated with all of its disks, or an array with a failed disk:





# mdadm --monitor --scan --mail=shiroy.p@hcl.in --delay=2400 /dev/md0





Your distribution may start the mdadm daemon automatically, so you won't need to run this command. Kubuntu controls it with /etc/init.d/mdadm, /etc/default/mdadm, and /etc/mdadm/mdadm.conf, so all you need to do is add your email address to /etc/mdadm/mdadm.conf.
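
In the configuration file the address goes on a MAILADDR line, and it's worth confirming that alert mail actually gets through. One way to do that (a test run, not a permanent setting):

MAILADDR shiroy.p@hcl.in

# mdadm --monitor --scan --oneshot --test

The --test option generates a TestMessage alert for every array mdadm finds, and --oneshot makes it check once and exit instead of staying resident.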







Starting, Stopping, and Deleting RAID



Your Linux distribution should start your arrays automatically at boot, and mdadm starts them at creation.

This command starts an array manually:





# mdadm -A /dev/md0
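
-A is the short form of --assemble. With only the array name, mdadm looks up the member devices in mdadm.conf; if the array isn't listed there, you can name the members yourself or let mdadm scan for them. A sketch of both forms, using the md0 members from the earlier examples:

# mdadm --assemble /dev/md0 /dev/hda1 /dev/sda1
# mdadm --assemble --scan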



This command stops it:





# mdadm --stop /dev/md0





You'll need to unmount all filesystems on the array before you can stop it.

To remove devices from an array, they must first be failed. You can fail a healthy device manually:





# mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2



If you're removing a healthy device and want to use it for something else, or just want to wipe everything out and start over, you have to zero out the superblock on each device or it will continue to think it belongs to a RAID array:





# mdadm --zero-superblock /dev/sda2
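
To dismantle an array completely, repeat that for every member: unmount it, stop the array, then zero each superblock. A sketch for the md1 example above (substitute your own partitions):

# umount /dev/md1
# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/hda2
# mdadm --zero-superblock /dev/sda2
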

Adding Devices

You can add disks to a live array with this command:





# mdadm /dev/md1 --add /dev/sdc2





This will take some time to rebuild, just like when you create a new array.
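
You can watch the rebuild as it runs, for example:

# watch cat /proc/mdstat

While the array resyncs, the status line shows a progress bar, a percentage, and an estimated time to completion.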

That wraps up our tour of RAID 10 and mdadm.







Resources



man mdadm

Serial ATA (SATA) for Linux

GRUB manual

BAARF: Battle Against Any RAID Five

Basic RAID





