gwenster

WINXP LSI Megaraid-E 500 -SLOW-


After having happily used SCSI drives for the last 2 years, I decided about 3 months ago to replace my trusty Adaptec 39160 card with an LSI Logic MegaRAID Express 500 (also known as an HP NetRAID 1M) so I could put my growing collection of SCSI drives in RAID 0, namely 4 identical Fujitsu 36 GB 15K drives, because it best suited my insatiable need for more performance.

I got a great deal on the controller: I picked it up cheap at the office when it was being replaced with a newer card that had more onboard CPU power to offload the host processor.

So there you have it: 4 15K drives to be put in RAID 0 to speed up the OS, those big, tediously loading games and the loading of resources in my 3D modelling and design applications, while I keep 2 drives out of the RAID for backup and storage purposes.

Up to this point I had no complaints. It all worked. I could burn a DVD at 16x and work with disk-intensive applications at the same time without something, somewhere going wrong.

However!! I recently started to notice several performance-related issues I really shouldn't be having with this RAID 0 setup... and so I finally decided to run a disk benchmark. Why only after 3 months? Because I really was too busy to be bothered whenever I was behind the PC at all :)

To my surprise the RAID 0 benchmark was painfully slow: a burst speed of 38 MB/s, an average read speed of 35 MB/s and a maximum read of 60 MB/s. That's something a single 15K drive could do, and this is 4 15K SCSI drives in RAID 0 :( Not good...

Even more shocking is that the 2 other drives in my config which aren't in the RAID perform the same as the RAID 0 array!

I remember the 15K Cheetah scoring virtually the same, while the 10K scored a bit lower on access time, which is correct, though I don't remember the read/write chart exactly; it wasn't far off, if not the same.

That would lead you to believe the system is working at SCSI-2 speeds, but that's not the case according to the controller.

I have tried many things to solve this already, amongst which: upgrading the chipset drivers, upgrading to the latest SCSI card drivers (both sets, HP-branded and LSI Logic), and reinstalling Windows XP, at first without SP2, before putting SP2 back after seeing no results.

The only thing I did that had any result at all was moving the terminator back to the very end of the cable.

See, this cable has 8 connectors plus an extra one just half an inch away from the last connector (pointing to the back), which you can plug a terminator into without losing the 8th connector on the cable to it.

I originally placed the terminator on the 8th connector because no HD was going to be used on it, and hey, it's another half an inch closer to the chain, so I thought that'd be better. But no: moving it actually gained me 10 MB/s burst rate on my already pathetic 38.

At least that's a start.

Personally, I think the rounded cable is at fault. While there typically isn't anything wrong with rounded cables, at least not with their IDE counterparts (I used those for years with the same results as regular cables), I just have this weird hunch that it has to be it.

Both the card and the drives indicate no faults on their end. I'm using the same converters they were using on the servers at work, so that's not the cause either. It can't be anything else but the cable, I think, because hell, a terminator isn't complex enough to be faulty, I suppose.

I can't get this card tested at work either, because things are hectic with the holiday season.

But I can't be sure, and with the Xmas celebrations now over I don't have enough money to fork out 130 euros on an Adaptec flat SCSI cable that has enough connectors for my needs.

Especially if that turns out not to be the problem.

So if anybody can help me with this I'd be grateful.

Below I've written down everything you'd ever need to know to possibly solve the problem.

The specifications

HP NetRAID 1M / LSI Logic MegaRAID Express 500

1 x storage - Ultra Wide SCSI LVD/SE - 68-pin HD D-Sub (HD-68) (internal)

With an onboard Intel 80960RM 100 MHz CPU and 32 MB of SDRAM.

This controller supports the following RAID modes: RAID 0, RAID 1, RAID 5, RAID 10, RAID 50.

To connect my hard drives to the controller I use a nameless/no-brand U320 LVD rounded SCSI cable of 220 cm.

It has 8 HD68 connectors, with an extra connector on the end of the cable dedicated to a plug-in terminator.

The cable has a silicone-layered casing around it to protect the wires from harm, and it actually makes my case quite neat inside.

I have a terminator which, according to the store I got it from, should support up to U320, connected to the first connector AFTER the last hard drive.

Though on the terminator itself it says "Ultra 160 M Terminator LVD+SE ACT (Active)" and then some.

This is how my cable config looks:

Controller--Seagate 136GB--Fujitsu#1--Fujitsu#2--Fujitsu#3--Fujitsu#4--Seagate 18GB--Terminator--Terminator connector @ end of cable (empty)

Switching the terminator up one spot to the dedicated terminator connector at the very end of the cable gained me an additional 10 MB/s burst rate, though read/write performance stays the same.

Because 5 out of the 6 drives I have use an SCA connector, I had to convert them to 68 pins using converters.

In case you don't know what I'm talking about, it kinda looks like this:

sca_converter.jpg

They are pretty common.

The rest of the computer parts are as follows:

CPU: AMD 3200+ (Venice) S939

Motherboard: DFI LANParty SLI-DR

Soundcard: Creative Audigy 4 Pro (PCI)

Videocard: XFX GeForce 7800 GTX (PCI Express)

PSU: 600 W Enermax

Integrated sound, SATA, FireWire, printer ports and serial ports are disabled, though I am using the 2 integrated network cards, which saves PCI slots (I only have 2, and I've already filled those up).

The hard drive setup

1x Seagate ST314680LW 136 GB 10,000 rpm ID 0

1x Fujitsu MAM3367MC 36 GB 15,000 rpm ID 1 (RAID 0 array)

1x Fujitsu MAM3367MC 36 GB 15,000 rpm ID 2 (RAID 0 array)

1x Fujitsu MAM3367MC 36 GB 15,000 rpm ID 3 (RAID 0 array)

1x Fujitsu MAM3367MC 36 GB 15,000 rpm ID 4 (RAID 0 array)

1x Seagate ST318452LC 18 GB 15,000 rpm ID 5

*The controller uses ID 7; this ID cannot be changed with the firmware on this card.

*The IDs are set using jumpers on the SCA-to-68-pin converters mounted on the back of each SCA HD.

*The 136 GB Seagate cannot be changed from ID 0 because I never got the tiny jumper pins with it to set it to anything else.

Array configuration:

3 arrays.

Array 1 = 4 x 36 GB in RAID 0 (ID 1, 2, 3, 4), Size: 140172 MB, Stripe: 64 KB

Array 2 = 1 x 136 GB in RAID 0 (ID 0), Size: 140013 MB, Stripe: 64 KB

Array 3 = 1 x 18 GB in RAID 0 (ID 5), Size: 17500 MB, Stripe: 64 KB
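For reference, a quick back-of-the-envelope sketch of what this 4-drive array should deliver on paper; the per-drive read speed and the bus ceilings below are assumptions, not measurements:

```python
# Rough RAID 0 expectations for the 4-drive array (per-drive figures are assumptions).
drives = 4
per_drive_capacity_mb = 35043      # one ~36 GB drive as the controller reports it
per_drive_read_mbps = 70           # assumed sustained read for a 15K drive of this era

capacity_mb = drives * per_drive_capacity_mb
raw_read_mbps = drives * per_drive_read_mbps

# The array can never be faster than the slowest link it sits behind.
u160_limit = 160                   # Ultra160 SCSI bus
pci_practical_limit = 100          # 32-bit/33 MHz PCI slot, in practice

expected_mbps = min(raw_read_mbps, u160_limit, pci_practical_limit)
print(f"capacity ~{capacity_mb} MB, expected sequential read ~{expected_mbps} MB/s")
# -> capacity ~140172 MB, expected sequential read ~100 MB/s (vs. the ~35 MB/s measured)
```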

Partitions

C: (OS, Apps, Temp, Page) 17.5 GB @ Array 1

F: (Games) 119 GB @ Array 1

G: (Documents, Backups, Install files, Project files) 17.0 GB @ Array 3

H: (Video, MP3, Downloads) 136 GB @ Array 2

D: and E: are respectively a Plextor PX-716A and a Plextor Premium writer.

It may be an illogical order as far as drive letters are concerned... I know.

It's just that I've been used to having my two CD drives at D: and E: for ages, so I couldn't help it :)

Policies

Write policy: WRTHRU (I set this to WRTBACK, but it made no difference)

Read policy: Adaptive

Cache policy: Cached I/O

Virtual sizing: Disabled

# stripes: 4

State: Optimal

(For the two drives that aren't linked in a RAID with other drives, the stripe count is of course 1)

SCSI command = Enhanced QTag scheduling

(Alternate options are 2, 3 & 4 queue tags)

Synchronous negotiation = Enabled

Power fail safeguard = Disabled

Fast init = On

Interface = Ultra-3

Cache flush timings = 4 sec (I set this to 10 seconds, no difference)

Rebuild rate = 50%

Auto rebuild = Enabled

Initiator ID = 7 (adapter ID)

In Windows the drives are using the following settings.

Policy tab: Optimised for performance (the option is selected and greyed out, so I can't change it)

SCSI properties tab: "Disable tagged queuing" and "Disable synchronous transfers" are both UNTICKED

Additionally, I checked the IRQs. The only IRQ that is being shared is 19. The SCSI controller uses 19, and so does an IEEE 1394 host controller, which is odd since I disabled the onboard one on the motherboard first thing. I don't even have any FireWire devices, so it isn't necessary to have it operational.

It could be the FireWire port on the Audigy 4's external box, though it uses a FireWire port to communicate with the PCI card inside the computer; if that's the case, then I guess I can't disable it.

Benchmark results

Standard drivers which came with Windows

ATTO, 32 MB test length

scsi_raid0.jpg

HD Tach, 8 MB zones (short test)

scsi_raid0test2.jpg

HD Tach, 32 MB zones (long test)

scsi_raid0test3.jpg

With the latest drivers

HD Tach, 32 MB zones (long test #2)

scsi_raid0test4.jpg


Hey Gwenster,

I know this is a late reply, but better late than never.

There are a few problems with your setup:

A) The RAID card is in a normal PCI slot. PCI theoretically allows a maximum of 133 MB/s; in practice it is more like 100 MB/s. Since PCI is a shared bus, all devices on it have to share that 100 MB/s, and because of overhead, the more devices that share the bus, the lower the maximum transfer rate gets (a quick back-of-the-envelope sketch of this follows after point C).

The normal PCI bus is far too slow for modern storage subsystems.

(I easily get 80 MB/s with two Maxtor Atlas 10K IVs in hardware RAID 0 on an LSI 21320-R in a PCI slot.)

B) The RAID controller you're using is too old. While a hardware RAID controller is better than a software one when CPU utilization is important, the processing power of RAID controllers is generally quite low, so software RAID will often be faster with today's fast CPUs; of course host CPU utilization goes up, so it gives very good benchmark numbers, but when running heavy applications that CPU utilization becomes a problem. So it's always important to look at the CPU speed of the RAID controller, which in this case is a 100 MHz i960, quite slow by today's standards. Also, this controller was the budget line and is only U160 (which is not the bottleneck here, but I mention it anyway).

C) The SCA-to-wide-SCSI converters you use might be the problem. Most converters I see seem to handle only speeds up to UltraWide (40 MB/s). You definitely need converters that are listed for use with LVD (low voltage differential); those can run up to 320 MB/s. If you're unsure about a converter, there's an easy way to check: a lot of traces run from the 68-pin connector to the SCA connector. If only one of the rows of the 68-pin connector has traces to the SCA connector and the other row is ground/zero (check with a multimeter), then it is a 40 MB/s max adapter that will definitely limit the speed of the setup. With an LVD converter, both rows should have traces from the 68-pin to the 80-pin connector.
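To put some numbers behind point A, here's a minimal sketch of the shared-bus arithmetic; the 5% per-device overhead factor is purely an assumption to illustrate the trend, not a measured figure:

```python
# Back-of-the-envelope PCI bandwidth, as referenced in point A.
bus_width_bits = 32
bus_clock_mhz = 33.33
theoretical_mbps = bus_width_bits / 8 * bus_clock_mhz    # ~133 MB/s on paper
practical_mbps = 100                                     # rule of thumb after protocol overhead

# Assumed: every extra active device on the shared bus costs a few percent to arbitration.
def usable_bandwidth(devices_on_bus, overhead_per_device=0.05):
    return practical_mbps * (1 - overhead_per_device * (devices_on_bus - 1))

print(f"theoretical {theoretical_mbps:.0f} MB/s, practical ~{practical_mbps} MB/s")
for n in (1, 2, 3):
    print(f"{n} device(s) on the bus -> roughly {usable_bandwidth(n):.0f} MB/s to share")
```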

If you really want to benefit from 15K SCSI drives you should definitely use a different controller in a faster slot. Otherwise it will be better (faster) to use a WD Raptor on the SATA ports.

Since you have a DFI LANParty SLI-DR, the only option I can think of is getting an LSI Logic MegaRAID 320E and putting your board in SLI mode, if you don't have an SLI setup of course. Also check with DFI whether it is possible to use the 2nd PCI-E slot for a RAID controller (some mainboards only support graphics cards in those slots). The LSI Logic MegaRAID 320E is also sold as the Intel SRCU42E, which might be a bit cheaper or easier to get.

The other option is to get a different mainboard with PCI-X slots, as that allows for the widest choice of controllers.

Since I still haven't seen AMD Athlon 64 boards with PCI-X, you would need to change to an Opteron board (with nForce Pro) or an Intel P4 board (see the Supermicro PDSGE). Then you can use an LSI Logic MegaRAID 320X (or Intel SRCU42X), or take an LSI Logic 21320-R or 22320-R. Those last two are SCSI controllers with simple RAID functions built in (you can only make one RAID array and it must be RAID 0 or 1, but it is hardware RAID, so much better than Adaptec's dreadful HostRAID), and they are much cheaper than a full RAID controller. I don't know an alternative for these controllers on PCI-E.

Also, a SAS version with RAID will hit the shelves soon: the LSI Logic SAS3041X-R, which has 4 ports and should cost somewhere around 250 dollars. It should also have the RAID 0 and 1 functions and is PCI-X.

I'm keeping my eye on that one for my next storage subsystem. Of course for you it wouldn't be a great option, since you would then also have to replace your drives with SAS ones.

greetz

Willem.

Edited by AeroWB


It's kinda odd. I have an almost completely different setup, but the transfer graphs from my RAID controller (Elite 1600) and my drive (Atlas 15K II) look nearly identical to yours. My setup is a single Celeron 2.66 GHz CPU on an Asus P5RD1-V board with everything onboard and a single card in the system on the PCI bus (the LSI MegaRAID Elite 1600). It only has a 3-connector cable hooked up: one end to the card, one to the single drive configured as RAID 0, and an active terminator at the end of the bus on the final connector. No matter what I have done, I cannot get a higher throughput than the same 37 MB/s or so that you show on your graphs, and mine look the same. I'm interested in a solution too, if you find one. Also, the drive hits 98 MB/s on my home system, but the controller, cable and terminator are all different there.

Aero, while your points about PCI bus speed are valid, that doesn't account for why he is only seeing 1/3 to 1/4 of the possible speed of the bus with a set of drives that should easily hit 100 MB/s on it (I have a RAID 0 of MAS 36s on a 21320-IS at home that can hit 96 MB/s sustained over the entire usable space, on a maxed-out PCI bus where every slot on the board is filled with various things, from two different network cards to a secondary video card). Also, his controller may be old and have a slower processor, but how much processing does RAID 0 really demand of the card's onboard CPU? Yes, I have seen adapter problems cause such issues, but in my case I have a 68-pin drive plugged into a 68-pin cable and still see an identical result. Also, the poster says that 5 out of the 6 drives use these converters; if they are the problem, that would explain it for those drives, but not for all of them.


What I said about PCI bus speed was not meant as the exact location of the problem. However, even if he solves the slow transfers he will definitely hit the ceiling of this bus, so in general it is not a good idea to put a high-performance storage subsystem on the normal PCI bus.

About the adapters you're incorrect: if you attach even one of these UltraWide 40 MB/s adapters, the whole SCSI bus will run at 40 MB/s. Normally the SCSI bus can use different speeds per device, but in this case there is a hardware limitation. All speeds up to 40 MB/s use single-ended (SE) SCSI: all 16 data lines of wide SCSI have a return line, which in the single-ended case is connected to ground/zero. To achieve higher speeds LVD was born, used at 80, 160 and 320 MB/s; there the returns are not connected to ground/zero but carry the inverse of the signal. That's why 80 MB/s and faster cables are twisted: the signal and its inverse curl around each other, the same principle as UTP/STP cable.

When at any point the bus is connected to a non-LVD device (HDD, CD-ROM, adapter, cable, terminator, etc.), the returns are connected to ground/zero; the SCSI controller detects this and switches to 40 MB/s. So in order to achieve more than 40 MB/s, all connected devices must be LVD compatible. That is why dual-channel SCSI controllers are quite common: nearly all SCSI CD/DVD drives and most tape drives don't support LVD, so connecting one to the same channel as the hard disks will cripple performance. Also note that active terminators have been required since Ultra SCSI speeds (40 MB/s wide or 20 MB/s narrow), so using an active terminator doesn't imply that it's LVD ready; it has to say LVD compatible on the terminator itself (sometimes they print 160/320 compatible).
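As a toy illustration of that fallback rule (the speed tiers are standard SCSI figures, but the negotiation function is a simplified model rather than the real protocol, and the "old SCA converter" is just a hypothetical culprit):

```python
# Toy model of the fallback rule above: one non-LVD part anywhere in the chain
# (drive, converter, cable or terminator) drags the whole bus to single-ended speed.

WIDE_SE_MAX_MBPS = 40                                  # fastest single-ended wide speed
LVD_SPEEDS_MBPS = {"Ultra2": 80, "Ultra160": 160, "Ultra320": 320}

def negotiated_speed(devices, controller="Ultra160"):
    """devices: list of dicts like {"name": ..., "lvd": True/False}."""
    if all(dev["lvd"] for dev in devices):
        return LVD_SPEEDS_MBPS[controller]
    return WIDE_SE_MAX_MBPS                            # any SE part forces SE mode

chain = [
    {"name": "Fujitsu MAM3367MC", "lvd": True},
    {"name": "old SCA converter", "lvd": False},       # hypothetical culprit
    {"name": "LVD/SE terminator", "lvd": True},
]
print(negotiated_speed(chain))                         # -> 40, matching the ~38 MB/s bursts
```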

Nice that you have the same great SCSI controller I have running a RAID 0; as I also said, with that setup I can easily get 80 MB/s, so I'm aware of your point. I carefully read your post twice and I cannot find which component is the problem in your Elite 1600 setup. The only thing I know is that your hard disk and controller are capable of more than 40 MB/s. But what you tell me about the cable is not enough to go on: the pairs in the cable should be twisted, and the terminator should say LVD compatible (or 160/320 compatible). You can quickly check this by borrowing the cable from your 21320-IS setup at home.

Also, the hard disk has a jumper to force single-ended mode; this should not be set, of course, and the controller can also be configured to use slower speeds for specific devices.

What you say about the processing power needed for RAID 0/1 is correct, it won't use much. However, if the SCSI chip used is not capable of RAID 0/1 by itself, the RAID CPU has to sit in the data chain, and that will not benefit latency and speed, though I have no idea how big that impact is. I do know the 1600 is capable of more than 40 MB/s in RAID 5, so this is not the issue in your case. Since your speeds are also capped just below the "magical" 40 MB/s, it seems highly likely you're running in single-ended mode too.

So exchange the cables/terminators between both setups and see what happens. If that doesn't help, check the force-SE jumper on the Atlas, and if that's not it, check the MegaRAID settings for per-device speed.

Edited by AeroWB


Hmmm .. :unsure:

Since I was pressed for time, I only checked the 80-to-68-pin converters I had lying around as spares (3 of them).

After checking the lanes on their connectors and mucking about with a multimeter, I confirmed that these do indeed support LVD.

Though I can't be sure about the ones already INSIDE the computer (not enough time to crack her open and run checks), seeing as I mixed and matched some new converters I bought with some old ones I got for free from somebody (SCSI setups can really get expensive once you're gathering all the tidbits together).

Though in the BIOS setup screen of the card it does say it's running in 160 MB/s transfer mode, which is the strange part; I doubt it would still say that if at least one part of the chain was at fault (i.e. a converter).

But thanks for the tip, I was already suspecting it might be either the converters or the cable.

Now I can rule them out by checking whether they support LVD, which I will do tomorrow on my day off.

Secondly, I'll run a backup of all the data on the drives in preparation for tomorrow's "cracking the case".

I was kinda pissed off that the data on the 2 separate drives I put in there (i.e. not in an array with others) got wiped just because the hardware seemingly does that when you bring them online (i.e. as single-drive RAID 0; it has to be, because the controller only seems to detect arrays). Don't wanna run into that again :]

The things I'll check are going to be:

* Converters: what speed they support. If even one is slower than the others, that will explain a thing or two.

* The cable: I still can't help but think of this as the cause; it's an unusual cable for the SCSI world, after all.

I only have one other cable, which maxes out at 4 drives, so I'll set the 2 drives that aren't in an array with other drives to offline so I can pull them out and fit the different cable to see what difference that makes.

Another thing of note.

Suppose the system is running in UltraWide mode for one reason or another, which maxes out at 40 MB/s.

Doesn't that contradict my benchmarks?

I mean, even though it's mostly in the 40s, it still peaks at 65-ish for the first 7 GB in all the benchmarks.

Or is that just something that happens with UltraWide?

Also, another thing.

I was running a search for firmware for this card.

I found some (though I believe the LSI Logic version); the release notes mentioned something about a "performance increase on PCI Express motherboards", which is something I do have.

Now the tricky thing is: since Windows recognises this card as the HP NetRAID, and NetRAID is also plastered all over the BIOS, would it be wise to install the LSI Logic firmware over this one?

Anyway, I hope I can get to the bottom of this one. At least it seems I still have a good few options to try out.

I'll report back what happens.

Oh yeah... buying an Opteron board isn't really an option. I plan on upgrading this system to dual core and dual graphics in a year and a half, as a simple upgrade that doesn't mean replacing the board/RAM as well.


Hi gwenster,

I don't know a lot about ATTO, but if the RAID controller can use its cache, it may be possible for a few runs to max out above 40 MB/s; to be honest, at first I only looked at the end values.

When the system boots and the BIOS detects the hardware and says it runs at 160 MB/s, then it does run at 160 MB/s. If you enter the BIOS and look at the device table, the only thing you see there is the maximum speed the controller will try to use for each device.

So I'd guess the system really does run at 160 MB/s.

I would still try a different cable/terminator, since a bad one will cripple speeds: SCSI can keep working under extreme conditions because it has CRC checking and will lower device speeds when it runs into problems.

If this controller keeps a log (you should be able to see it in the Windows management program), it will probably show there if it drops back speeds. I have seen this happen with an LSI MegaRAID or an Adaptec RAID, I can't remember which, but they probably both have it.

Flashing an HP controller with LSI firmware is tricky; I'd prefer to use the HP version of the firmware, if one is available of course. HP buys the card's design and software, and you never know how much they change in the design, so contacting LSI will probably not help because they probably don't know exactly what HP changed. Contacting HP might work. If the controller is not interesting to you the way it is, I would at least try flashing the LSI firmware, but you will risk your controller. (There's a chance you can flash HP's firmware back in case LSI's doesn't work, but it's only a chance.)


I was using an Adaptec 39160 on a Mac (no longer being supported) but found that most SCSI cables, including the ones Adaptec includes, were a no-go, and I've got a box of cheap cables and terminators I feel the same way about. I replaced them with Granite Digital cables, adapters and terminators, which all work.

I think you need to put two of the 15K drives on each channel; that would help. I could get 135 MB/s from two Atlas 10K IVs, about as much as one channel offers. It is also possible to have too many drives on one channel and get slow I/O, though I would start with the cables and terminator, and get a shorter cable, only as long as needed, for starters.

When I had the 39160, one PCI bus was 66 MHz and I could get 200 MB/s, while the 33 MHz/32-bit bus was very limited. My current setup uses 64-bit/33 MHz and has no trouble getting 220 MB/s.
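A quick sketch of the drives-per-channel point above; the ~70 MB/s per-drive figure and the host-bus ceilings are assumptions (the 200 MB/s case roughly matches the 66 MHz bus just mentioned):

```python
# Rough aggregate reads for 4 drives on a dual-channel U160 card like the 39160.
per_drive_mbps = 70                                  # assumed sustained read per 15K drive

def aggregate(drives_per_channel, channels, channel_limit=160, host_bus_limit=100):
    per_channel = min(drives_per_channel * per_drive_mbps, channel_limit)
    return min(per_channel * channels, host_bus_limit)

print(aggregate(4, channels=1))                      # all 4 on one channel, 32-bit/33 MHz PCI: 100
print(aggregate(2, channels=2))                      # 2 + 2 split, same slow PCI slot: still 100
print(aggregate(2, channels=2, host_bus_limit=200))  # 2 + 2 split on a faster host bus: 200
print(aggregate(4, channels=1, host_bus_limit=200))  # 4 on one channel, faster bus: capped at 160
```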

Not sure how you could deal with the two other drives you want to use though.


The controller is the reason for your poor performance. You have some of the fastest drives in the world paired with one of the slowest RAID cards. It's no surprise that you got much better performance running the drives individually on your 39160. Windows software RAID 0 should give you the speed you are looking for, with no need to buy another card.

