BassKozz

Poor RAID 5 performance / Please Help


Here are my general specs:

AMD Sempron 3400+

2GB RAM

WinXP SP2

LSI MegaRAID i4 (PCI, 4-channel / 8-drive PATA RAID controller)

3x Seagate ST3400632A (in a RAID 5 array)

---

LSI MegaRAID i4 - User Manual Excerpt:

Write Policy: This option sets the caching method to write-back or write-through. In write-back caching, the controller sends a data transfer completion signal to the host when the controller cache has received all the data in a transaction. In write-through caching, the controller sends a data transfer completion signal to the host when the disk subsystem has received all the data in a transaction; this is the default setting. Write-through caching has a data security advantage over write-back caching, while write-back caching has a performance advantage over write-through caching. You should not use write-back for any logical drive that is to be used as a Novell NetWare volume.

Read Policy: This option enables the IDE read-ahead feature for the logical drive. You can set this parameter to Normal, Read-ahead, or Adaptive. Normal specifies that the controller does not use read-ahead for the current logical drive; this is the default setting. Read-ahead specifies that the controller uses read-ahead for the current logical drive. Adaptive specifies that the controller begins using read-ahead if the two most recent disk accesses occurred in sequential sectors. If all read requests are random, the algorithm reverts to Normal; however, all requests are still evaluated for possible sequential operation.

Cache Policy: This parameter applies to reads on a specific logical drive. It does not affect the read-ahead cache. Cached I/O specifies that all reads are buffered in cache memory. Direct I/O specifies that reads are not buffered in cache memory. Direct I/O does not override the cache policy settings: data is transferred to cache and the host concurrently, and if the same data block is read again, it comes from cache memory. This is the default setting.
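To make the Adaptive read policy above concrete, here is a toy Python sketch of that decision rule. It is only my own illustration of the behavior the manual describes, not LSI's actual firmware logic:

```python
# Toy model of the "Adaptive" read policy described above: read-ahead kicks in
# only once the two most recent accesses hit sequential sectors, and behavior
# reverts to Normal as soon as requests turn random.
# This is an illustration, not LSI's actual firmware algorithm.

class AdaptiveReadPolicy:
    def __init__(self):
        self.last_lba = None            # last logical block address requested
        self.last_was_sequential = False

    def should_read_ahead(self, lba):
        """Return True if the controller would prefetch after this request."""
        sequential = self.last_lba is not None and lba == self.last_lba + 1
        decision = sequential and self.last_was_sequential
        self.last_was_sequential = sequential
        self.last_lba = lba
        return decision

policy = AdaptiveReadPolicy()
for lba in [100, 101, 102, 103, 500, 900, 901, 902]:
    print(lba, policy.should_read_ahead(lba))
# Prefetching switches on at 102/103 and again at 902, and switches off
# as soon as the jumps to 500 and 900 break the sequential pattern.
```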

After much testing of the various settings on my MegaRAID card, I've stuck with the following:

Write Policy = Write-through caching

Read Policy = Normal

Cache Policy = Direct I/O

These settings give me the results below (in RED):

BLUE = Write-through / Adaptive read policy / Direct I/O

HDTach-Raid5-update6.jpg

Questions:

1. Should I be happy with 75.4 MB/s burst & 54.8 MB/s average?

2. Why are the results with the Normal read policy (RED) so bouncy (up and down, not smooth), as opposed to the Adaptive read policy setting (BLUE), which is smoother but performs worse?

Any ideas?

I think it might have something to do with this thread: Seagate 500GB SATA2 firmware upgrade helps RAID5 dramatically

Do I need to upgrade my HDs' firmware to correct this?

Thanks,

-BassKozz

---

After writing to Seagate to find out whether there is a firmware upgrade available for my drives, I got the following reply:

Issues that may occur with SATA hard drives in an array do not apply to PATA hard drives. There are no firmware updates for PATA hard drives.

Regardless of this, conventional benchmarking software tends to be unreliable when testing RAID performance. We cannot guarantee the performance of our hard drives in a RAID array anyway, because that performance to a large degree depends on the controller and other components within your system.

I recommend benchmarking these hard drives individually outside of a RAID environment to make sure that they are actually performing up to specification.

Any ideas on how to get better performance out of my RAID array, or a better testing product/software to use (besides HD Tach)?
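For a quick cross-check next to HD Tach, a crude sequential-read timer is easy to throw together. This is only a sketch: the device path is a placeholder, raw-device reads on Windows need administrator rights, and it measures nothing beyond large sequential reads.

```python
import time

# Minimal sequential-read throughput check (a rough sketch, not a replacement
# for HD Tach or Iometer). Point DEVICE at a large file on the disk under test,
# or at a raw device such as r"\\.\PhysicalDrive1" (needs admin rights on Windows).
DEVICE = r"\\.\PhysicalDrive1"   # placeholder path, adjust for your system
BLOCK_SIZE = 1024 * 1024         # 1 MiB per read
TOTAL_BYTES = 256 * 1024 * 1024  # read 256 MiB in total

def sequential_read_mb_s(path, block_size=BLOCK_SIZE, total=TOTAL_BYTES):
    read = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:   # unbuffered, so we time the disk
        while read < total:
            chunk = f.read(block_size)
            if not chunk:
                break
            read += len(chunk)
    elapsed = time.time() - start
    return (read / (1024 * 1024)) / elapsed if elapsed > 0 else 0.0

if __name__ == "__main__":
    print("Sequential read: %.1f MB/s" % sequential_read_mb_s(DEVICE))
```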

Thanks,

-BassKozz

---

I took the advice of Seagate's tech support and hooked up each of the Seagate drives (ST3400632A) to the motherboard's IDE connector separately (instead of running them through the RAID controller)...

Below are my results for each disk.

The red graph is with Windows Disk Management set to "Dynamic Disk" and the blue graph is with it set to "Basic Disk"...

As you can see, there isn't much of a difference between the two, but I wanted to make sure I had all my bases covered.

Disk #1

SeagateDisk1_RED-dynamic_blue-basic.jpg

Disk #2

SeagateDisk2_RED-dynamic_blue-basic.jpg

Disk #3

SeagateDisk3_RED-dynamic_blue-basic.jpg

As you can see from these performance graphs, these disks aren't performing properly.

I am at a loss...

Please Help, I don't know what else to do :(

Thank you,

-BassKozz

---

Re: individual drive tests

The drives appear to be operating in UDMA mode 2 (33 MB/s).

Check/verify:

1. 80-conductor IDE cables are used.

2. How are drives jumpered? (Master, Slave, or Cable Select [CS]); try CS for troubleshooting.

3. Motherboard BIOS is set to allow UDMA modes and detects drives as UDMA 5 capable; autodetect should set them correctly.

4. Is the IDE channels' transfer mode set to "DMA if available" in Device Manager?

Re: RAID

Check/verify:

1. 80-conductor cables are used.

2. Single drive on each channel.

3. Drives jumpered for Master/Slave/CS according to RAID card instructions.

HTH!

---

For the RAID controller:

1) Use write-back instead of write-through.

2) Do not use RAID 5 for anything other than a server that doesn't need much performance. RAID 5 is very bad for gaming when you don't have a blazing-fast hardware XOR processor and a lot of cache on the controller (the i4 has neither). You could test a two-drive RAID 0 to see how far you can get. Due to the limitations of the PCI bus the maximum transfer rates will not get much better, but you may find that RAID 0 works much more smoothly than RAID 5. If you still need redundancy and RAID 0 proves much better than RAID 5, buy another drive and use RAID 10, which is much faster than RAID 5 and only a little slower than RAID 0.
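To illustrate the XOR work mentioned in point 2 (why RAID 5 writes are costly without a fast parity engine), here is a purely illustrative Python sketch of the parity relationship; it is not how the MegaRAID firmware is actually implemented.

```python
# Why RAID 5 writes are expensive: parity must be recomputed on every write.
# Purely illustrative sketch of the XOR relationship, not controller firmware.

def parity(*data_blocks: bytes) -> bytes:
    """Parity block = byte-wise XOR of all data blocks in the stripe."""
    result = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

d0, d1 = b"\x0f" * 4, b"\xf0" * 4          # two data blocks in a 3-disk stripe
p = parity(d0, d1)                          # parity stored on the third disk

# A small write to d0 means: read old data + old parity, XOR, then write both.
new_d0 = b"\xaa" * 4
new_p = parity(parity(d0, p), new_d0)       # read-modify-write parity update
assert new_p == parity(new_d0, d1)          # same as recomputing from scratch

# Losing any one disk is recoverable: XOR the survivors.
assert parity(new_p, d1) == new_d0
print("parity OK")
```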

While HD Tach is useful for finding bottlenecks in the storage setup, it is not really suitable for comparing real-life working situations. Intel's Iometer with the appropriate traces will do much better in this area.

---

sdbardwick, thanks for the help/post.

I've checked and double-checked all of these settings for the RAID config, and everything seems to be in order... I even changed all the drives from CS to their corresponding settings (Master and Slave)...

As for a single drive on each channel: why would I do this... will it work better?

There are 4 channels, and each channel has a Master and a Slave position... I currently have it set up like this:

Channel 1: 1x Seagate 400GB (Master), 1x Seagate 400GB (Slave)

Channel 2: 1x Seagate 400GB (Master)

Channel 3: nothing

Channel 4: nothing

Would it be better to set it up like this...

Channel 1: 1x Seagate 400GB (Master)

Channel 2: 1x Seagate 400GB (Master)

Channel 3: 1x Seagate 400GB (Master)

Channel 4: nothing

Does this really make a difference?

Also, in Device Manager all of the IDE controllers are set to "DMA if available", but I don't see any IDE-controller entry for the RAID controller... there is nowhere to set "DMA if available" for it... is this normal? I would assume so, because the RAID controller auto-detects the speed.

Thanks,

-BassKozz


AeroWB thanks for the help/post,

Here are the results with the Write Policy set to "Write-Back"

write-backsetting.jpg

As you can see, this is worse than write-through...

I will consider switching to RAID 10, but I need to go out and buy another 400GB HD and I won't see any capacity increase (800GB with 4x400GB RAID 10 vs. 1.2TB with 4x400GB RAID 5) :(

I might end up doing this if I can't get any more performance out of the current config.

I really won't be playing games on this machine; it's more of a backup & media server... but it's hooked up to a gigabit LAN, so I need to squeeze as much throughput out of this RAID controller as I can.
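Just to sanity-check the numbers being weighed here, a quick back-of-the-envelope in Python. The drive counts and the HD Tach average are from the posts above; the gigabit figure is the theoretical wire speed, not a measurement:

```python
# Usable capacity: RAID 5 loses one drive to parity, RAID 10 loses half to mirroring.
drives, size_gb = 4, 400
raid5_gb  = (drives - 1) * size_gb      # 3 * 400 = 1200 GB (1.2 TB)
raid10_gb = drives // 2 * size_gb       # 2 * 400 =  800 GB
print(raid5_gb, raid10_gb)

# Gigabit LAN ceiling vs. the measured array throughput.
gige_mb_s  = 1000 / 8                   # 125 MB/s theoretical wire speed
array_mb_s = 54.8                       # HD Tach average read from the first post
print("LAN ceiling %.0f MB/s, array sustains ~%.0f MB/s" % (gige_mb_s, array_mb_s))
```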

Thanks for all the help guys,

-BassKozz

---

Having each disk on a separate channel will make a noticeable difference for performance. You should definitely have each disk on a separate channel to avoid channel contention/bottlenecks.
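A rough back-of-the-envelope of why sharing a channel matters; the per-drive and per-channel rates here are round-number assumptions for illustration, not measurements from this thread:

```python
# Rough illustration of channel contention: two drives on one PATA channel
# share that channel's bandwidth, one drive per channel does not.
# The per-drive and per-channel rates below are assumptions for illustration.
drive_mb_s   = 60.0    # roughly what one modern PATA drive can stream
channel_mb_s = 100.0   # ATA/100 (UDMA 5) channel ceiling

def per_drive_throughput(drives_on_channel):
    return min(drive_mb_s, channel_mb_s / drives_on_channel)

print("1 drive/channel:", per_drive_throughput(1), "MB/s each")
print("2 drives/channel:", per_drive_throughput(2), "MB/s each")
# With a master and a slave streaming at once, each drive is capped at ~50 MB/s
# before the RAID overhead even starts.
```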

Also, by default your RAID controller only comes with 16 MB of cache. This is simply not enough to really perform well with RAID 5. You should either install more cache (if you haven't already) and make sure you're running in Write Back cache mode, or you should buy another drive and go with RAID 1+0.

However, your primary issue, as already pointed out, seems to be that your disks are not running in UDMA mode 5. Verify the cabling is 80-conductor and not longer than 18" (the maximum length in the IDE specification). You should also check your system BIOS to make sure that the disks are set for UDMA 5 or Auto detect.

If you can't get your disks to benchmark higher than 33 MB/sec, you're just not going to get any reasonable performance, so concentrate first on rechecking those cables and your BIOS settings.

---
If you can't get your disks to benchmark higher than 33 MB/sec, you're just not going to get any reasonable performance, so concentrate first on rechecking those cables and your BIOS settings.

I agree with this totally (well, not totally, but a lot :) ).

Tell us the brand and model of your mainboard and/or its chipset; maybe that would help.

- Check in the BIOS whether DMA is on.

- Check Device Manager in Windows: look under "IDE ATA/ATAPI controllers" for something like "Primary IDE Channel", double-click it, and choose the Advanced Settings tab. There you will be able to see the current transfer mode of all IDE drives; of course, this also works for the Secondary IDE Channel.

There you can also choose between "PIO Only" and "DMA if Available"; choose "DMA if Available"!

---

Trinary,

Having each disk on a separate channel will make a noticeable difference for performance. You should definitely have each disk on a separate channel to avoid channel contention/bottlenecks.

OK, I placed the HDs on different channels, and here are the results:

HDs on channels 1, 2 & 4

seprateidechannels_1-2-4.jpg

HDs on channels 1, 2 & 3

seprateidechannels_1-2-3.jpg

As you can see, this is still worse than the initial tests in post #1 of this thread. So placing these HDs on different channels didn't do the trick :(

Also, by default your RAID controller only comes with 16 MB of cache. This is simply not enough to really perform well with RAID 5. You should either install more cache (if you haven't already) and make sure you're running in Write Back cache mode, or you should buy another drive and go with RAID 1+0.

I will have to look into upgrading the cache memory... I don't think it's possible, because it's soldered onto the board, and my soldering skills aren't that great ;)

I think the card comes with either 16 or 32MB of RAM but is NOT upgradeable... I could be wrong on this and will look into it.

Looks like RAID 1+0 is my only option, since I can't seem to get this RAID 5 working :(

Any other ideas/suggestions?

However, your primary issue, as already pointed out, seems to be that your disks are not running in UDMA mode 5. Verify the cabling is 80-conductor and not longer than 18" (the maximum length in the IDE specification). You should also check your system BIOS to make sure that the disks are set for UDMA 5 or Auto detect.

If you can't get your disks to benchmark higher than 33 MB/sec, you're just not going to get any reasonable performance, so concentrate first on rechecking those cables and your BIOS settings.

OK, I'll get cracking on this and see what I can come up with. I am almost 100% positive my BIOS and Windows settings are correct, but I am going to reconnect one of the drives to the motherboard now and run some more tests with different cables; the current cables I am using are 36 inches.

I'll keep you posted.

[EDIT: I deleted the RAID 5 array and set up a RAID 0 array with one drive (I know this doesn't make sense, but there was no way to make the HD show up in Windows via the RAID controller without using an array... so really there is NO array, just one physical drive presented as a logical RAID 0 "array"... kinda, not really ;) ).

The reason for doing this was to test the performance of a single physical drive through the RAID controller and see what happens.

And now for the results:

SingleDisk_offraidcard.jpg

I also created a RAID 1 array with 2 physical drives:

Raid1.jpg

And also a RAID 0 array:

Raid0.jpg

So judging from these results, am I chasing a pipe dream?

Are the original graph/performance results the best I am going to see from this setup (RAID card)?

Thanks for the help,

-BassKozz

---

AeroWB,

Tell us the brand and model of your mainboard and/or its chipset; maybe that would help.

I knew this was going to come up eventually...

I am using a mobo from a Compaq Presario SR1710NX; it's an OEM mobo (model number: A8AE-LE) with little to no documentation out there...

I've actually modified this mobo's BIOS so I could add a couple of features that were not supported initially (memory timings, WOL, etc.):

http://www.geocities.com/whydothis1000/index.html

or

http://www.geocities.com/basskozz/A8AE-LE/

or

http://www.wimsbios.com/phpBB2/viewtopic.php?t=7623

After putzing around in SiSandra, I pulled the following info on my mainboard & chipset:

SiSandra-Mainboard.jpg

SiSandra-Mainboard2.jpg

Also the raid controller:

SiSandra-RaidController.jpg

SiSandra-RaidController2.jpg

...

And while I was at it I ran SiSandra's HD Benchmark on the RAID 5 array:

SiSandraHDbench2.jpg

What's that File Server Optimization = NO all about???

Looks like it's not as bad as it looked using HD Tach.

Next up: Iometer tests...

coming soon

Thanks, AeroWB

---

Everyone,

So I guess the big question is:

Am I chasing a pipe dream?

Are the original graph/performance results the best I am going to see from this setup?

And whose fault is the poor performance: the RAID controller card or the motherboard?

I am exhausted... I am going to bed now ;)

Thanks for all the help guys,

-BassKozz

---

Anyone?

Here are some more specs and benchmarks, courtesy of Everest & IPEAK...

Mobo specs

everest-mobo.jpg

Chipset specs

everest-chipset.jpg

Everest Read Suite

everest-read.jpg

Everest Average Read

everest-averageread.jpg

Everest Write

everest-writebench.jpg

Ipeak RankDisk Benchmark

Ipeak-diskrank.jpg

Any other ideas?

The only other thing I can think of: are there any settings on the HDs themselves that I could change using Seagate's software (SeaTools)?

I don't know what else to try...

Please help,

-BassKozz

---

I give up :(

Thanks for all the help guys, I really appreciated it.

I am just going to have to live with this throughput... I am only using it as a backup server and media server for my XBMC... I should be fine doing that, right?

Thanks again to everyone who helped out.

-BassKozz

---

Actually, scratch that idea... I am going to keep trying to get this working even if it kills me...

OK, I've hooked up disks #1 & #2 to the mobo's IDE controller, and here are the results from WinXP...

Device Manager

DeviceMan-SeagateDisk12.jpg

As you can see, for some reason the second disk has an exclamation mark?

I tried "re-installing drivers" as XP suggested and still have this issue...

Advanced IDE

AdvancedSettings-SeagateDisk12.jpg

UDMA 2? Shouldn't this read UDMA 5?

Now to disconnect these two and test the third disk from the array plugged into the mobo's IDE controller.

Be right back...

---

Ok...

Here are the Device Manager and Advanced Settings screens for disk #3...

Device Manager

DeviceManager-Disk3.jpg

No exclamation mark on this one.

Advanced Settings

AdvancedSettings-disk3.jpg

Still showing UDMA 2, not 5???

And here are the HD Tach results for all three drives connected to the mobo's onboard IDE controller:

Disk1-connectedtoMobo.jpg

I only posted disk 1's results, but disks 2 & 3 gave the same results, so I figured there was no need to post them.

Any ideas?

---

UDMA 2 is 33 MB/s, so the transfer rate you are getting is expected.

The ATI SB450 southbridge and the Barracuda 7200.9 should indeed operate at UDMA mode 5, which is 100 MB/s.

Normally this is caused by using 40-conductor IDE cables; for speeds greater than 33 MB/s, special 80-conductor Ultra ATA cables should be used.

Or your disks may be forced to run at UDMA 2 max (by a jumper or a settings program), but I have no idea whether that is possible.

If the BIOS has a setting for detecting 80-conductor ATA cables, you should try changing that setting.
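For reference, here is the UDMA mode-to-speed mapping and the 80-conductor cable rule as a small lookup sketch (standard ATA figures, purely illustrative):

```python
# UDMA transfer modes and their peak rates (MB/s), per the ATA standard.
UDMA_MB_S = {0: 16.7, 1: 25.0, 2: 33.3, 3: 44.4, 4: 66.7, 5: 100.0, 6: 133.3}

def max_udma_mode(drive_mode, cable_is_80_conductor):
    # Modes above UDMA 2 require an 80-conductor cable; otherwise the
    # host is supposed to cap the transfer mode at UDMA 2.
    return drive_mode if cable_is_80_conductor else min(drive_mode, 2)

for cable in (False, True):
    mode = max_udma_mode(5, cable)
    print("80-conductor" if cable else "40-conductor",
          "-> UDMA", mode, "(%.0f MB/s)" % UDMA_MB_S[mode])
```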

---

the current cables I am using are 36 inches.

That is very long, longer than the maximum allowed by the standard.

Could you try a much shorter cable (make sure it's an 80-conductor one) connected to the onboard controller?

You only need to connect one disk and then check in Windows' Device Manager which mode it is running in.

---

Have you considered doing a re-install of XP SP2 to see if that straightens out the problem?

Also, what else do you have installed in that system? The things I'm considering suspect at this point are:

1) Your entire install might be hosed and re-installing might fix it (try this as a last resort, obviously)

2) Try testing with a shorter cable (less than 18"). This was mentioned previously, but I'm unsure whether you've been able to try it yet.

3) Something causing lots of interrupts or consuming lots of PCI bandwidth which is making it impossible for your RAID card to function up to par

4) Make sure you have installed the latest driver for the RAID controller.

5) Check MS Update or Windows Update and install all critical fixes and all optional fixes that list "Windows XP" as part of the description

You should also definitely test with a "standard" BIOS for that system (i.e., not altered by you). You could save off a copy of your modded BIOS and then re-flash with a standard one. This will just help remove one more element that might possibly be causing the issue.

---

On the topic of IDE cables: make sure the blue connector of your 80-conductor cable is attached to the host controller and the black connector is attached to your hard drive. If you attach it backwards, with the blue connector on the hard drive, the host controller will not sense an 80-conductor cable and is supposed to (if it complies with the ATA specification) limit your transfer mode to UDMA mode 2.

Also, the cables cannot be longer than 18 inches if you wish to stay in compliance with the ATA specification.

Free

---


That did it... it was the 36-inch cables. I connected a standard 18-inch cable I had laying around and voila:

NewCables-HDtoMoBo.jpg

So I got that sorted out... now I know these HDs are working fine and in UDMA mode 5... Now to try the new cables on the RAID controller card:

RAID 5:

NewCables-RAID5.jpg

Still poor performance even with the new cables :(

The only difference between this result and my original post is about ~9% in CPU utilization.

I guess this is the best I am gonna get...? :unsure:

---

Good thing you now know what caused the UDMA 2 operation.

You're using brand-new 2006 drives with an old 2001 RAID controller that is connected to a plain 33 MHz, 32-bit PCI bus.

Even a very high-end SCSI RAID controller from 2001 will not give good results by today's standards for most workloads.

Even if you got a better PATA RAID controller (if such a thing even exists), you will not get much above 100 MB/s because of the PCI limit (133 MB/s theoretical, but in practice always below 100 MB/s).
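The 133 MB/s number falls straight out of the bus width and clock; the efficiency factor below is only an assumption, to show why sustained rates land under 100 MB/s:

```python
# Conventional PCI: 32-bit wide bus clocked at 33.33 MHz, shared by all devices.
bus_bits, bus_mhz = 32, 33.33
theoretical_mb_s = bus_bits / 8 * bus_mhz   # ~133 MB/s peak
practical_mb_s   = theoretical_mb_s * 0.7   # ~70% efficiency is an assumption;
                                            # real shared-bus numbers are often lower
print("PCI peak: %.0f MB/s, realistic ceiling: ~%.0f MB/s"
      % (theoretical_mb_s, practical_mb_s))
```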

There's even a chance that the mainboard/chipset you're using is limited on PCI to around 75 MB/s, as my first nForce mainboard was: I fitted an LSI Logic PCI-X U320 SCSI RAID controller (in a normal PCI slot) and also got only around 80 MB/s max.

To find this out, you could try something like the ATTO tool and benchmark small data sets that fit in the controller's cache. Since I don't know exactly how ATTO works, you might want to search for manuals/forum threads before you investigate.

Because most desktop mainboards are very limited in high-bandwidth connectors (mostly only one AGP or PCI-E x16 slot), I'm always looking for mainboards that put PCI-X slots next to a PCI-E x16 slot, to get a fast workstation with a good SCSI controller. Last year I bought Supermicro's PDSGE, and I'm currently following the new boards for Woodcrest/Conroe and AM2. Too bad Asus' new P5WDG2-WS has almost all important slots/devices connected to one interrupt channel; for high-performance gaming you definitely don't want shared interrupts for your sound, graphics, and storage. For now Woodcrest looks like the only option, but boards start at around 400 euros, so that hurts.

---

Not to beat a dead horse, but do you now have each hard drive on a different channel as "MASTER" with cables that are less than 18"? If so, then yes, you have achieved maximum RAID 5 performance under your current configuration.

If that's not good enough, then I'd suggest changing your configuration by either:

1) Getting another hard drive and going with RAID 1+0 (easiest and least expensive overall option, in my opinion, but will still be limited by being connected to the PCI bus)

2) Upgrade to a different motherboard with some PCI-X slots (hopefully at least 1 133 MHz / 64-bit slot), and upgrade your RAID controller to 133MHz / 64-bit PCI-X. Obviously, this is a bit more expensive.

3) Upgrade to a different motherboard with an integrated RAID controller that's not connected via PCI and that supports RAID 1+0 (*NOT* 0+1!) and go with RAID 1+0. This would probably provide the highest performance for reads and writes, and still cost less than upgrading your motherboard AND the RAID controller.
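On the 1+0 vs 0+1 point in option 3, here is a small Python sketch of why a stripe of mirrors (1+0) survives more second-disk failures than a mirror of stripes (0+1); the four-disk layout is just an illustration:

```python
from itertools import combinations

# Four disks, two ways to combine mirroring and striping.
# RAID 1+0: stripe across two mirrored pairs; survives any failure pattern
#           that leaves at least one disk alive in every mirrored pair.
# RAID 0+1: mirror two striped pairs; survives only while at least one
#           whole stripe set is fully intact.
mirror_pairs = [("A", "B"), ("C", "D")]   # RAID 1+0 grouping
stripe_sets  = [("A", "B"), ("C", "D")]   # RAID 0+1 grouping (A+B striped, C+D striped)

def raid10_alive(failed):
    return all(any(d not in failed for d in pair) for pair in mirror_pairs)

def raid01_alive(failed):
    return any(all(d not in failed for d in s) for s in stripe_sets)

disks = ["A", "B", "C", "D"]
survivable_10 = sum(raid10_alive(set(f)) for f in combinations(disks, 2))
survivable_01 = sum(raid01_alive(set(f)) for f in combinations(disks, 2))
print("two-disk failures survived: RAID 1+0 =", survivable_10,
      "of 6, RAID 0+1 =", survivable_01, "of 6")
```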

When considering options for a motherboard upgrade, please remember that, unless you also plan on upgrading your CPU and/or RAM too, you should restrict yourself to motherboards that support your current RAM and CPU.

---

Thanks for your help AeroWB,

I have a PCI Express slot (but no PCI-X) free on my mobo; any recommendations for a PATA RAID controller with RAID 5 support???

I will look into ATTO.

Not to beat a dead horse, but do you now have each hard drive on a different channel as "MASTER" with cables that are less than 18"? If so, then yes, you have achieved maximum RAID 5 performance under your current configuration.

Thanks for your help Trinary,

I forgot to test separate channels with the new cables...

Here are the results:

Results in RED are the HDs on separate channels (channels 0, 1, 3).

Results in BLUE are the same results as before (channels 0 & 1), with channel 0 having a master & slave.

Raid5-NewCables-sepratechannels.jpg

Results

Burst Speed: +3.8

Avg Read Speed: +2.7

Random Read Speed: +0.1

CPU utilization: +2%

The increase in performance is minimal, and on top of that the cable routing is restricted. What I mean is: I planned on setting up a separate RAID 5 on channels 2 & 3, but if these HDs use channels 0, 1 & 2, I can't route the cables properly to reach the other HDs on the slave connectors... So I will probably leave this as is, running the HDs as follows:

Channel 0

Master = Seagate 400gb (1st Raid 5 Array)

Slave = Seagate 400gb (1st Raid 5 Array)

Channel 1

Master = Seagate 400gb (1st Raid 5 Array)

Slave = NONE (I might add another Seagate 400gb)

Channel 2

Master = None as of yet (Will be adding a Maxtor 300gb for 2nd Raid 5)

Slave = None as of yet (Will be adding a Maxtor 300gb for 2nd Raid 5)

Channel 3

Master = None as of yet (Will be adding a Maxtor 300gb for 2nd Raid 5)

Slave = NONE (I might add another Maxtor 300gb)

If that's not good enough, then I'd suggest changing your configuration by either:

1) Getting another hard drive and going with RAID 1+0 (easiest and least expensive overall option, in my opinion, but will still be limited by being connected to the PCI bus)

2) Upgrade to a different motherboard with some PCI-X slots (hopefully at least 1 133 MHz / 64-bit slot), and upgrade your RAID controller to 133MHz / 64-bit PCI-X. Obviously, this is a bit more expensive.

3) Upgrade to a different motherboard with an integrated RAID controller that's not connected via PCI and that supports RAID 1+0 (*NOT* 0+1!) and go with RAID 1+0. This would probably provide the highest performance for reads and writes, and still cost less than upgrading your motherboard AND the RAID controller.

When considering options for a motherboard upgrade, please remember that, unless you also plan on upgrading your CPU and/or RAM too, you should restrict yourself to motherboards that support your current RAM and CPU.

I have a PCI Express slot (but no PCI-X) free on my mobo; any recommendations for a PATA RAID controller with RAID 5 support???

Thanks for all the help everyone :) ,

-BassKozz

---
Why does the second array provide much smoother results, with less jumping up and down on the graph?

In my SCSI disk tests the Seagate Cheetah 10K.7 drives were a lot slower than Maxtor's Atlas 10K IV and 10K V disks.

Maybe Maxtor IDE drives are also much faster than Seagate IDE drives.

Or maybe it's the mix that does this; you could try combining other disk sets, since you already have three different disk models, and see if you spot a pattern.

---


It's not so much the speed that I am worried about, but more the inconsistency, or lack of smoothness, in the graph... I don't understand why the graph is so jagged /\/\/\ (ups and downs) for the 1st array but not the second.

Can someone explain what might be causing this?

Could it be a faulty RAID controller?

When I test the HDs individually (outside of an array) the graph is fairly smooth, and when I tested the 2nd array (300GB HDs) it was also smooth; the jaggedness scares me :unsure:

---

To test the MegaRAID i4 controller (to see whether channels 0 & 1 were faulty), I swapped the channels between the two arrays and re-initialized them...

So for the previous tests I was running the following (OLD layout):

Channel 0

Master = 400gb Seagate

Slave = 400gb Seagate

Channel 1

Master = 400gb Seagate

Slave = NONE

Channel 2

Master = 300gb Seagate

Slave = 300gb Maxtor

Channel 3

Master = 300gb Maxtor

Slave = NONE

NOW this is the NEW Layout:

Channel 0

Master = 300gb Seagate

Slave = 300gb Maxtor

Channel 1

Master = 300gb Maxtor

Slave = NONE

Channel 2

Master = 400gb Seagate

Slave = 400gb Seagate

Channel 3

Master = 400gb Seagate

Slave = NONE

Results, 3x300GB RAID 5:

NEW LAYOUT

OLD LAYOUT

300gb-raid5-compare.jpg

I gained about 4 MB/s in burst and average read,

and there's a strange spike about 200GB into the test. (I assume this is due to the Seagate and the Maxtors not getting along, but why didn't it show up in the "old" layout?)

Results, 3x400GB RAID 5:

NEW LAYOUT

OLD LAYOUT

400gb-raid5-compare.jpg

I lost about 3 MB/s burst and 5 MB/s average read.

What I have learned:

1. Channels 0 & 1 run slightly faster than channels 2 & 3 on my MegaRAID i4 controller card.

2. The jagged /\/\ ups and downs on the graphs are not caused by the RAID controller's channels... This is not to say that the MegaRAID i4 isn't the culprit for the inconsistency of the 3x400GB RAID 5 array, just that the channels have nothing to do with it.

3. I plan on keeping the OLD layout, with the 3x400GB drives on channels 0 & 1 and the 3x300GB drives on channels 2 & 3, because the 400GB HDs need as much help as they can get ;)

Still unanswered:

I still haven't heard a definitive answer as to why my 3x400GB Seagate array produces such an inconsistent graph (very jagged /\/\ with ups and downs)...

Can someone please explain this to me?

Are these Seagate HDs (model number ST3400632A) known for this sort of behavior?

Thanks,

-BassKozz
