chrispitude

Tips for creating a RAID-optimized XFS partition


Hi all,

I've been meaning to repartition my 3ware 320GBx4 RAID5 array. I finally have a couple extra loose drives to back everything up, and I'm ready to repartition.

My question is, how do I steer mkfs.xfs to create a RAID-optimized XFS partition? My understanding is that for a 4-drive RAID5 array with a 64k stripe size, I would issue:

mkfs.xfs -d su=64k,sw=3 ...

Do I need to do anything special to make sure the partitions start on some kind of boundary? Would it mess things up if I created the XFS filesystem on a partition that isn't stripe-aligned? Also, the largest partition on the array is 900GB and will store mostly large files (>1 MB). Are there any special XFS switches that should be used for this partition?

Thanks!

- Chris


XFS tends to store large files well by default.

mkfs.xfs -d su=64k,sw=3

Change the stripe width to 4, as you have a 4-drive array:

mkfs.xfs -d su=64k,sw=4

sw must be a multiple of su

Frank



Hi Frank,

Thanks for your post. This is a RAID5 array, so the data effectively spans only 3 of the drives. Shouldn't that be taken into account?

- Chris

With raid5, the data spans all of the drives, as does the parity information. Raid3/4 are where the parity information is stored on a single drive.

http://www.storagereview.com/guide2000/ref...ngleLevel5.html
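To see the rotation concretely, here's a minimal sketch of which drive holds the parity chunk for each stripe; the left-symmetric layout is an assumption, since different controllers rotate parity differently:

```shell
# Sketch: parity placement per stripe in a 4-drive RAID5
# (left-symmetric layout assumed; real controllers may differ)
drives=4
for stripe in 0 1 2 3; do
  # parity walks backward one drive per stripe, wrapping around
  parity=$(( (drives - 1 - stripe % drives + drives) % drives ))
  echo "stripe $stripe: parity on drive $parity"
done
```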

Frank

Hi Frank,

I agree, RAID5 parity is interleaved across the physical drives. Logically, however, each full stripe on an N-drive RAID5 array carries only N-1 stripe units of data; the remaining unit holds parity.

- Chris


No, Raid 5 uses "N" stripes. Raid 3/4 uses N-1. Also, 64 is not a multiple of 3. You will have to use 4.

Frank


Hi all,

After some research, I now have my answers.

First, creating partitions on 64k stripe boundaries is pretty easy. Use 'parted' and set sectors as the working unit. The default unit is cylinders, and with the usual 255 heads * 63 sectors/track = 16065 sectors/cylinder geometry, it's nearly impossible to align to a 64k boundary using cylinders.
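A quick way to see the mismatch, assuming 512-byte sectors (so 64 KiB = 128 sectors):

```shell
# One CHS cylinder = 255 heads * 63 sectors/track = 16065 sectors,
# while a 64 KiB stripe unit is 128 sectors (at 512 B/sector).
cyl_sectors=$((255 * 63))
echo "$cyl_sectors"            # 16065 sectors per cylinder
echo $((cyl_sectors % 128))    # 65 -> cylinder boundaries miss 64 KiB boundaries
```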

Below I print the partition table in GiB, then in sectors:

(parted) unit s
(parted) unit GiB
(parted) print

Model: AMCC 9500S-4LP DISK (scsi)
Disk /dev/sda: 894GiB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start    End      Size     Type     File system  Flags
 1      0.00GiB  7.81GiB  7.81GiB  primary  ext3         boot
 2      7.81GiB  9.77GiB  1.95GiB  primary  linux-swap
 3      10.0GiB  30.0GiB  20.0GiB  primary
 4      30.0GiB  894GiB   864GiB   primary  reiserfs

(parted) unit s
(parted) print

Model: AMCC 9500S-4LP DISK (scsi)
Disk /dev/sda: 1874933759s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start      End          Size         Type     File system  Flags
 1      63s        16386299s    16386237s    primary  ext3         boot
 2      16386300s  20482874s    4096575s     primary  linux-swap
 3      20971520s  62914559s    41943040s    primary
 4      62914560s  1874933759s  1812019200s  primary  reiserfs

(parted)

For now, I am just concerned about partitions 3 and 4. I will repartition 1 and 2 the next time I reinstall the operating system.

I want partition 3 to start 10GiB into the drive. Each sector is 512 bytes, so partition 3 should start precisely at sector 10GiB * (1024 MiB/GiB) * (1024 KiB/MiB) * (2 sectors/KiB) = 20971520. That is the start value for partition 3. Partition 3 is 20GiB in size, so I calculate the start of partition 4 the same way and end partition 3 immediately before it. I probably could have done this by setting the units to GiB, MiB, or KiB instead, but I felt more comfortable working in sectors for now. When I reinstall partitions 1 and 2, I'll try MiB next.
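A quick sanity check of that arithmetic, and of whether the start sectors from the table above fall on 64 KiB (128-sector) boundaries:

```shell
# 10 GiB expressed in 512-byte sectors
start3=$((10 * 1024 * 1024 * 2))
echo "$start3"                     # 20971520
# a start sector is 64 KiB-aligned when it's divisible by 128
for s in 63 20971520 62914560; do
  [ $((s % 128)) -eq 0 ] && echo "$s: aligned" || echo "$s: not aligned"
done
```

Note that the old partition 1 start of sector 63 (the classic cylinder-based default) is not aligned, while the new starts for partitions 3 and 4 are.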

For optimal XFS filesystem creation, I settled on the following commands:

mkfs.xfs -f -L /vmware -d su=64k,sw=3 /dev/sda3
mkfs.xfs -f -L /storage -d su=64k,sw=3 /dev/sda4

All the RAID5 XFS examples I found indicated that the stripe width should be (N-1) for RAID5 arrays with N drives. This makes intuitive sense to me, since the stripes will wrap back to the original drive every (N-1) stripes.
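In script form, using this thread's numbers (a sketch, not general advice for other array geometries):

```shell
# XFS stripe parameters for an N-drive RAID5 with a 64 KiB chunk size
drives=4
su_kb=64
sw=$((drives - 1))           # N-1 data-bearing chunks per RAID5 stripe
swidth_kb=$((sw * su_kb))    # full data stripe width in KiB
echo "mkfs.xfs -d su=${su_kb}k,sw=${sw}"
echo "full data stripe = ${swidth_kb} KiB"
```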

In the end I benchmarked xfs and reiserfs, and decided to go with reiserfs as an experiment. Unfortunately reiserfs has no switches to optimize it for a RAID array, but we'll see how it goes. It seems a little slower in absolute raw throughput, but faster in file manipulation (archiving, copying, etc.).

- Chris

