
Linux storage benchmark results


It is a shame that Storage Review ignores Linux, but I can imagine their testing workload is already more than full. To pick up the slack, I will happily provide some numbers of my own :D

Test setup: Linux kernel 2.4.21-pre3 running on an SMP AMD64 box with 8GB of memory. The IDE disk is a WD1200JB (8MB cache) on an AMD 8111 controller. The SCSI disks are four Seagate Cheetah 15k.3 36GB ST336753LC units on an Adaptec 39320D PCI-X controller. Unfortunately this bus is configured as U160 instead of U320 because of cabling quality problems. Shameful, I know. In the U320 configuration, the SCSI RAID can push over 300MB/s to the disks. Yeehaw.
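
If you want to confirm what your HBA actually negotiated, the aic79xx driver (which handles the 39320D) reports it under /proc. A quick check might look like the following; the host number 0 is an assumption, so check what actually appears in that directory:

    # List the hosts the driver registered, then dump the negotiated
    # per-target transfer rates (U160 vs U320 shows up here).
    ls /proc/scsi/aic79xx/
    cat /proc/scsi/aic79xx/0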

Stumbling back to the point, I needed to benchmark this machine to calculate the expected improvement over my previous database server, a 2-way Pentium III with Seagate 10k.6 storage. I chose to use tiobench, a threaded I/O benchmark. tiobench measures sequential and random read and write performance with a large number of concurrent processes. I used 32 processes for this benchmark with a dataset of 16GB.
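
For the curious, the invocation was along these lines. This is a sketch, assuming tiobench's Perl wrapper where --size is the total dataset in megabytes; /mnt/test is a placeholder for the filesystem under test, not my actual path:

    # 32 concurrent threads against a 16GB dataset (--size is in MB).
    # --dir points at the mounted filesystem being benchmarked.
    ./tiobench.pl --threads 32 --size 16384 --dir /mnt/test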

The first device tested is a software RAID 0 across all four SCSI disks (an assembly sketch follows the numbers). Yeah jeebus it is fast. It was when this array exceeded 250MB/s that I detected my dysfunctional cabling and dropped the bus to 160MB/s, so consider that handicap when interpreting the results. Throughput figures are in MB/s; service times are in ms.

4-way SCSI RAID 0

  Seq. Read          95.53 MB/s
  Seq. Write         76.06 MB/s
  Rand. Read          8.64 MB/s
  Rand. Write         7.59 MB/s
  Read Service Time     12 ms
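
For reference, a stripe like this takes a single mdadm command. The device names and chunk size below are assumptions for illustration, not my exact configuration:

    # RAID 0 across the four Cheetahs; /dev/sd[b-e]1 and the 64k
    # chunk size are placeholders.
    mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1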

As the numbers show, this setup smokes. However, I have no intention of operating the array in this manner. What I will actually be doing is running eight drives on two buses as RAID 1 pairs, with various databases on each pair. So let us benchmark a 2x2 RAID 10 setup (assembly sketch after the numbers):

2-way RAID 0 over two 2-way RAID 1 pairs (RAID 10)

  Seq. Read          74.42 MB/s
  Seq. Write         40.36 MB/s
  Rand. Read          8.72 MB/s
  Rand. Write         3.74 MB/s
  Read Service Time     12 ms
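
Since the 2.4-era md driver has no native RAID 10 level, a 2x2 layout like this is built by layering: two mirrors, then a stripe across them. A quick mdadm sketch, again with device names assumed:

    # Two RAID 1 pairs (ideally one drive from each bus per pair),
    # then a RAID 0 stripe over the two mirrors.
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2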

We obviously lost some performance going from four stripes to two, and the mirrored writes take their toll: sequential reads are 22% slower and sequential writes are 47% slower. Still, it hauls, and random reads actually edge up slightly thanks to RAID 1 read balancing. The next benchmark is a single SCSI disk, for comparison with the lone IDE disk:

Seagate Cheetah 15k.3 36.7GB SCSI disk

  Seq. Read          22.79 MB/s
  Seq. Write         24.61 MB/s
  Rand. Read          2.57 MB/s
  Rand. Write         1.73 MB/s
  Read Service Time     44 ms

The performance of the Seagate by itself is close to what we might derive from the RAID numbers. This disk does suffer a slight disadvantage versus the Western Digital: the 16GB dataset occupies nearly half of its capacity, while the competitor gives up only about 13% of its own. That might allow the Western Digital better locality of seeks. Let's find out:

Western Digital Caviar WD1200JB IDE disk

  Seq. Read          15.77 MB/s
  Seq. Write         27.40 MB/s
  Rand. Read          1.02 MB/s
  Rand. Write         0.82 MB/s
  Read Service Time     89 ms

The IDE disk produces a noble effort. In random performance the Seagate is 152% faster for reads and 111% faster for writes (about what we might expect from the SCSI unit's roughly 2:1 rotational speed advantage), but in sequential performance the WDC trades blows with the Seagate: the Seagate takes sequential reads by 45%, while the WDC tops its rival by 11% in sequential writes.

I hope you enjoyed this small window into Linux storage performance.

If I need to run this test again I'm definitely going to reboot with mem=128M. The dataset has to dwarf RAM to keep the page cache from skewing the numbers, and with 8GB of memory that means 16GB files, which take too long (especially on the IDE drive ... ugh).
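
For anyone who hasn't used the trick: mem= simply caps how much RAM the kernel will see, so a much smaller dataset still overwhelms the page cache. With GRUB it just gets appended to the kernel line; the image name and root device here are placeholders:

    # /boot/grub/menu.lst entry -- kernel limited to 128MB of RAM.
    title Linux (128MB, for benchmarking)
        kernel /boot/vmlinuz-2.4.21-pre3 root=/dev/sda1 mem=128M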

Cheers,

jwb
