gundersausage

The Death of RAID


Calm down, everyone. qaws(etc.) has posted intelligently and helpfully in the past, and he's trying to be reasonable.

In his case, yes, RAID 0 helped. That's because his SSD setup has almost negligible seek time compared to hard drives, so the seek time penalties of RAID 0 were insignificant compared to the STR advantage.

So yes, perhaps we shouldn't make such sweeping statements about RAID 0. There *are* circumstances in which it will help a lot, though they're not necessarily typical.

For those of us with normal, single-user, non-workstation, non-server, modest to mid-budget setups and usage, RAID 0 is usually (if not almost always) a waste of money and/or resources.

RAID 0 isn't dead. It lives on in certain uses, and may become useful to everyone once we're all on solid-state storage - if the STR of a single device isn't enough for our needs at that time. In the meantime, I don't think any worse of SR for recommending against it for desktop use.

Perhaps I should be clearer on this. SR is wrong to make general statements about RAID. Even in a desktop environment, setting up RAID 0 across two LUNs on ONE drive was faster than a single partition on a single drive - this was with a 64GB RAM SAN. SR isn't a bad place for info, but they aren't perfect. They're allowed to make mistakes or to change their findings as the technology changes.

When you say RAM SAN I will assume you mean something like a Texas Memory Systems RAM-SAN device. I don't know how you could possibly correlate any behaviour from a device like that to a hard-drive-based solution, as SSD acts very differently. Calpine used to have a single RAM-SAN 210 for our Oracle scan tables. It's a very, very cool device and it's very fast, but it does *not* perform like a fast hard drive; we had to learn a lot of new stuff to tune it right. At factory default settings our CLARiiONs outrun it!

Try your same test on a physical disk array - take a SAN disk, break it up into two 73GB chunks, assign them different LUNs, then map the LUNs to your machine and make an array out of them. Do your benchmarking, then undo all of that, map the drive over to your machine directly, and retest. I'd be amazed if you saw any real performance improvement with the RAID. I'd love to see what numbers you come up with either way.
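
If you want a repeatable number out of that comparison rather than a seat-of-the-pants impression, a sequential-read timing pass is easy to script. A minimal sketch, assuming a test-file path of your choosing (the path below is a placeholder), and the file needs to be much larger than RAM or the OS cache will skew the result:

```python
# Minimal sequential-read (STR) timing sketch: run it once against the striped
# LUNs and once against the directly mapped disk, then compare the MB/s figures.
# Caveat: use a test file much larger than RAM so the OS cache can't satisfy reads.
import time

TEST_FILE = "/path/to/large_test_file"   # placeholder: multi-GB file on the volume under test
CHUNK = 1024 * 1024                      # read in 1 MB chunks

def sequential_read_mb_per_sec(path):
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:   # unbuffered, so Python adds no caching of its own
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return (total / 1e6) / elapsed

if __name__ == "__main__":
    print(f"{sequential_read_mb_per_sec(TEST_FILE):.1f} MB/s sustained read")
```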

qaws - WTF is wrong with you? Dropped on your head? Too much playing in the paint chips? Stop being a dumbass.

Um... I mostly use fibre for servers, usually with Brocade switches. I use SCSI mostly for high-end workstations.

I'm sorry your company is cheap.

A MegaRAID SCSI 320-4X with Red Hat drivers? Sale price: $1,599.00

Monarch Computers sells it for $1140. I got one for about $1300 from Pricewatch. It works fine as a workstation card for me.

A co-worker actually ordered a MegaRAID SCSI 320-2 for herself through my company. Her cost through the employee discount: $150 (no battery or RAM stick). Though JBODs still cost in the thousands. :(

Good lord! You sound like one of those damn Anandtech posters! Am I to assume that my new 1890 Watt power supply is going to make my RAID faster?? Maybe that's why all these review sites have been showing that RAID isn't any faster - not enough power!!!!

1) What's wrong with Anandtech? They are one of the few sites that post Warcraft III benchmarks. :wub:

2) It's obvious you don't know how power impacts performance. Cleaner power means faster performance.

3) If you want good hardware you might have to spend good money.

Wake up qaws!!!

I already addressed that point.


I still don't understand why the RAM-SAN got faster when he partitioned it. We hooked it up to a single Sun E12K to test, and we found our bottleneck in a hurry - the fiber channel card - even with both ports wired in that was still the bottleneck. We got right around 500MB/sec STR, and we could pretty much kill the server with I/O and the SAN kept on going (we had the entire 12K on one domain). Maybe there was an issue with the OS that qaws was using?


Oh and qaws, because other people doesn't make their companies cheap - it makes them "financially responsible". There is a difference :-)

And I'd love to see some hard data on how clean power adds performance, if at all. If you can prove that to me I will be very happy and will gladly take you out for drinks should you ever make your way up here. I have seen that claim before, but I've never seen it substantiated. Since at work we run off of online UPS power all the time instead of utility power, I can't really speak from experience; all our machines are large brand-name systems with lots of spare power supplies in them. I figure brand name + clean input power means this issue should never affect me if it exists, correct?


Darn it, I wish I could edit...

Oh and qaws, because other people doesn't make their companies cheap - it makes them "financially responsible". There is a difference :-)

I meant to say:

Oh and qaws, just because other people don't work for a company that spends the amount on IT that places like your company and mine do, that doesn't make their companies cheap - it makes them "financially responsible". There is a difference :-)


Did I ever mention what my company uses? I didn't - I could, but it would be irrelevant.

I'm glad to hear that your $1300 card works fine in your workstation - what was your point? That a $1300 card can do it as well as a normal SATA port? What level of RAID are you using? If you purchased a $1300 card to do RAID 0 with a couple of SCSI drives, I'm going to laugh at you. And now you're leading your co-workers astray as well - you should be proud.

You can keep claiming that cleaner power impacts drive performance; that doesn't make it any less ridiculous of a concept. Clean power can help system stability, but it doesn't make drives faster - sorry. (Go find something respectable that supports this stupid idea and link to it - more fluff scores no points.)

Your point about spending good money for good hardware is valid, except for the insane perspective you give it. I do a lot of movie/photo editing - should I look into a 64-CPU solution with 16GB of RAM running Microsoft's Datacenter Server? Probably not, because that would be stupid, and it wouldn't give me more performance than a solidly configured workstation.

Now that I think about it, aren't you the one who was swearing up and down that peer-to-peer file-sharing apps were destroying ATA drives on your campus or something like that? Maybe everyone who shares files would get a nice performance boost by using a $20k storage solution, eh?

the fiber channel card - even with both ports wired in that was still the bottleneck.  We got right around 500MB/sec STR

Fibre 2 Gb/s burst limit is 212.5 MB/s per channel. Were you using 4 Gb/s ports?
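
For anyone checking the math on that figure, here is the back-of-the-envelope version, assuming the nominal 2.125 Gbaud line rate and 8b/10b encoding used by 1/2 Gb Fibre Channel:

```python
# Back-of-the-envelope payload bandwidth for a 2 Gb/s Fibre Channel link,
# assuming the nominal 2.125 Gbaud signalling rate and 8b/10b encoding
# (8 payload bits for every 10 bits on the wire).
line_rate_baud = 2.125e9                          # 2GFC signalling rate
payload_bits_per_sec = line_rate_baud * 8 / 10    # strip the 8b/10b encoding overhead
mb_per_sec = payload_bits_per_sec / 8 / 1e6       # bits -> bytes -> MB/s
print(f"~{mb_per_sec:.1f} MB/s per channel")      # ~212.5 MB/s

# So two bonded 2Gb channels top out around 425 MB/s and three around 637 MB/s,
# before any protocol or controller overhead.
```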


This is interesting, but it has nothing to do with the effect of RAID on desktop performance, which was the topic of the thread. Qaws, I don't really see the point of your line of reasoning until I get my massive SAN arrays and SSDs at home for writing and gaming. Sheesh.


I think, for the purposes of keeping this already large thread as focused and intelligible to the average user concerned with RAID 0 as possible, we should let the verdict on RAID 0 in the enterprise stand as it has been iterated many times already (in this thread as a matter of fact):

Access patterns that exhibit random IO at high queue depths are likely to benefit from striping. These qualities are characteristic of server (multi-user) usage and not, at all, of desktop (single-user) usage.
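
To illustrate the queue-depth point, here is a toy model of the scheduling argument (just an illustration, not a benchmark; the 10ms-per-IO cost and the request counts are made-up placeholders, and it assumes requests land evenly across the stripe set):

```python
# Toy model: why striping needs concurrent requests before the extra spindles help.
# Assume every random I/O costs a fixed "seek + transfer" time and that outstanding
# requests are spread evenly across the disks in the stripe set.
import math

def completion_time_ms(n_requests, queue_depth, n_disks, ms_per_io=10.0):
    """Time to finish n_requests when at most queue_depth of them are outstanding at once."""
    # A disk can only work on requests that are actually outstanding, so the
    # effective parallelism is capped by min(queue_depth, n_disks).
    parallelism = min(queue_depth, n_disks)
    return math.ceil(n_requests / parallelism) * ms_per_io

for qd in (1, 4, 32):
    single = completion_time_ms(1000, qd, n_disks=1)
    striped = completion_time_ms(1000, qd, n_disks=4)
    print(f"QD={qd:2}: single disk {single:.0f} ms, 4-disk stripe {striped:.0f} ms")

# At QD=1 (typical single-user desktop access) the stripe finishes no faster than
# one disk; only at higher queue depths (server-style load) do the extra spindles pay off.
```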

Anyone concerned about the details that determine whether or not an access pattern is likely to benefit from striping, as well as the degree to which it will benefit, should refer to my post here, which details the fundamental factors that determine how a striped array responds to an access pattern.

A lot of this recent hullabaloo has already been examined more accurately, carefully, and concisely earlier in the thread.


Ralf, we were using a 4-port adapter with three channels bonded (stupid system makes us leave the 4th channel for fault tolerance). Theoretical max burst before controller overhead is about 600MB/second, so we figure we hit the ceiling or darn close to it, as we had bursts of up to 550 and a sustained rate of around 500-501. The reason we point the finger at the card is because of how the waiting I/O queues on the server start to ramp up as soon as the FCAL card tells us its buffers are full. SAN utilization at that point is only about 35%. Unless we are missing something? The Sun software said PCI-X bandwidth was only at 60%, so it really doesn't seem like it's the server.

Anyhow, we do digress from the original topic. I was always a big believer in RAID 0 for desktops and I could literally "feel" the speed improvement. Then a friend of mine encouraged me to stopwatch-test various things. For the games that I played and the apps that I ran, RAID didn't help. In fact, with some games it was actually slower. We ended up finding out that for most game load times it was the CPU that was holding me back, especially for games that are known for being a bit of a pig (C&C Generals, anyone?).

I think it really is a case-by-case thing. Maybe it helps with very specific apps and maybe it could help some mid-range enterprise gear, but I don't think it's relevant to the *vast* majority of users out there. For the most part I think people are better off with multiple separate drives. We have a lot of tests from many respectable people showing the same thing. I don't really know why the flames started regarding this topic; I hereby consider it a dead issue until some new software comes out that requires 100+MB/second STR in order to operate properly.


The scratch disk for Photoshop is really the only app I can think of that really gets a boost from RAID 0... I'm sure there are others, but Photoshop is the only one I've had direct experience with.


Occupant2, have you benched a photoshop scratch on RAID 0 versus a photoshop scratch on a single drive (in both cases where the drive/array is *only* used for scratch)?

The scratch disk for Photoshop is really the only app I can think of that really gets a boost from RAID 0

It does appear to be the case, although the scratch disk should be separated from the pagefile (as well as any other potential IO targets) before it is striped. If you've only got two or three disks, it's most important to have it on its own disk.

Adobe recommended I use a fast SCSI disk over a larger number of slower, striped 7200RPM ATA disks when I contacted them about a year and a half ago. I've heard of people using multiple 15Ks... I do some pretty serious Photoshop work, but not that serious. My Fujitsu MAN has treated me quite well.


Damn lack of editing...

I ask because the company I start working for on Tuesday has a lot of Photoshop users, and if RAID really does help the system out for scratch that's a cheap investment.

Cheers,

Michael


This place has all the Photoshop users with three drives, one OS/App, one page and one scratch. If the RAID gave the system a noticeable boost I'd recommend it to management. Does anyone have a link somewhere with numbers? I would still have to convince the folks in suits that RAID controllers were needed for all of the graphics people's systems...


Huh - I missed this one:

Calm yourself, my son. I only get mad when people insult Linux.

Now that is humorous. Maybe you should quit using mommy's checkbook before you toss out phrases like 'son'. This puts some serious credibility issues right at your feet; you certainly act like you're in charge of all this high-end gear that you keep talking about - but I think you have to be older than 18 to head up an IT dept.

This place has all the Photoshop users with three drives, one OS/App, one page and one scratch.

That is exactly the proper configuration for three disks for optimal Photoshop performance :). You should only stripe the scratchdisk if you have more than 3 disks.

More RAM is the first place to put money if scratchdisk performance is a productivity limitation (which it may not be, depending on what the workload is). Then you should purchase a fast scratchdisk, and then multiple disks.

Also, remember that it's only worth spending money on if there are actual productivity limitations involved. It's entirely possible that something like, for example, buying better monitors could offer a greater return ;) in terms of employee productivity.


Oh, as for benchmarks, if you were serious about it, the best way to do it would be as a pilot project at the particular business itself. Many Photoshop workloads will not stress the scratchdisk at all, or not significantly; some might.

I would suggest stopwatch tests on common workloads (which you'd get from the employees), and more importantly, double-blind tests involving your employees:

Two boxes, one with a striped scratchdisk and one without. Let them perform their normal work patterns on both and then pick the faster one. If anyone is going to notice whether it actually matters, it's the people who work on the systems all the time.

As I said earlier, if scratchdisk performance matters, then RAM is most likely to actually make the most dramatic difference. The scratchdisk is used constantly in Photoshop, but is only limiting if you don't have enough RAM. With enough RAM it works seamlessly in the background. PS has caching mechanisms that have become very refined over the years --it doesn't need everything in RAM, or even on a fast disk.
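
If you also want a raw number to go along with the stopwatch and double-blind tests, a quick scripted write-then-read pass against the candidate scratch volume is cheap to run. A minimal sketch; the scratch path and file size are placeholders, and the file should be larger than installed RAM so the cache doesn't flatter the result:

```python
# Rough write/read throughput check for a candidate Photoshop scratch volume.
# The path and file size are placeholders; keep the file larger than installed RAM
# so the OS cache doesn't hide the disk's real behaviour.
import os
import time

SCRATCH_DIR = "/path/to/scratch_volume"   # placeholder: mount point of the scratch disk/array
FILE_SIZE = 4 * 1024**3                   # 4 GB test file
CHUNK = bytes(4 * 1024 * 1024)            # 4 MB of zeroes per write

def throughput_test(directory):
    path = os.path.join(directory, "scratch_test.bin")
    # Write pass
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        written = 0
        while written < FILE_SIZE:
            f.write(CHUNK)
            written += len(CHUNK)
        os.fsync(f.fileno())              # make sure the data actually hit the disk
    write_mbs = FILE_SIZE / 1e6 / (time.perf_counter() - start)
    # Read pass
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(len(CHUNK)):
            pass
    read_mbs = FILE_SIZE / 1e6 / (time.perf_counter() - start)
    os.remove(path)
    return write_mbs, read_mbs

if __name__ == "__main__":
    w, r = throughput_test(SCRATCH_DIR)
    print(f"write {w:.0f} MB/s, read {r:.0f} MB/s")
```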

That's very limited. However they are trying hard with what meager resources they have. To me a decent card would be a MegaRAID SCSI 320-4X with proper Red Hat drivers.
Perhaps you could describe why this card is a limitation. Certainly it isn't the onboard processor. How much processing power does a RAID card need for RAID 0? Close to zero, by today's standards?
What's more is that I see they're using a mediocre PSU.

It's an Antec PP412X 400W. I repeat, it's an Antec rated at 400W. This is a mediocre power supply? Perhaps everyone should buy their own nuclear generator to go with their new $1,500 RAID card? I am sure having a highly regulated nuclear power plant connected to a PC will make exactly as much performance difference as getting that SCSI card.

I've run a dual AthlonMP with 3 SCSI and 2 IDE hard drives from an ancient Antec 300W power supply. Antec is known for making top-quality, well-engineered, efficient, and clean power supplies. Even if this wasn't the case...

Clean power can make or break performance. Am I to make large judgements based on those limited resources?

Interestingly,

Of course, if you meant that a good 400W power supply couldn't handle the power drain of the drives:

If you ask me, you are digging yourself deeper and deeper, and rather than admitting fault with some of your reasoning and backing up your complaints against the reasoning of others, you are playing it high-school debate style, where winning is all that matters, and where you never admit a fault in your plan... Much like the B&G 'trolls' you often comment about. The farther you reach, the more distant your target becomes.

RAID 0 is advantageous (performance-wise anyway) in certain desktop and workstation uses--but not that many. Using the drives individually can sometimes outperform the RAID 0 configuration even in most of these situations.

RAID is generally advantageous for servers and occasionally for workstations. Why are you trying to force RAID 0's square peg into the round hole that is general-purpose desktop and general purpose workstation performance?


I don't know enough about my new photoshop users at the new job to know for sure, but they all claim that their dual G4 and G5 towers at home are oh so much faster than their dual Xeon 3GHz/2GB RAM/triple SCSI disk systems here at work. And when they start laying on the filters left right and center they whine about how long it takes. I will take a look and see once I start, but I suspect I might be better off telling them to "deal with it"


To add to Sivar's points regarding power supplies and drives:

I feed a dual Xeon 2.8 server with 12 250GB Maxtor hard drives, all from an Antec 550 EPS power supply. No glitches ever, and the machine's file-serving performance is out of this world (it's a 3Ware 8506-12 with Linux kernel 2.6.7-gentoo-r11 and XFS). I monitor the system health with lm_sensors and I never see any issues with voltage drops due to an overloaded power supply, etc. Heck, the air out of the power supply doesn't even get overly warm!

Huh - I missed this one:
Calm yourself, my son. I only get mad when people insult Linux.

Now that is humorous. Maybe you should quit using mommy's checkbook before you toss out phrases like 'son'. This puts some serious credibility issues right at your feet; you certainly act like you're in charge of all this high-end gear that you keep talking about - but I think you have to be older than 18 to head up an IT dept.

That probably could have been less condescending.

That's very limited. However they are trying hard with what meager resources they have. To me a decent card would be a MegaRAID SCSI 320-4X with proper Red Hat drivers.

What's more is that I see they're using a mediocre PSU.

So could that.

I don't know enough about my new photoshop users at the new job to know for sure, but they all claim that their dual G4 and G5 towers at home are oh so much faster than their dual Xeon 3GHz/2GB RAM/triple SCSI disk systems here at work.  And when they start laying on the filters left right and center they whine about how long it takes.  I will take a look and see once I start, but I suspect I might be better off telling them to "deal with it"

TobySmurf, though I don't use Photoshop myself, all tests that I have seen indicate that PS tends to run better in general on Macs; it was developed with them in mind.

TobySmurf, though I don't use Photoshop myself, all tests that I have seen indicate that PS tends to run better in general on Macs; it was developed with them in mind.

A long, long time ago, in a galaxy far, far away... Photoshop was first developed for the Mac, but then they switched over to Windows... and the Mac platform has changed considerably... I would think that Mac users are now using a port of a Windows program... Adobe knows what side the bread is buttered on, and will develop software accordingly...

I only started the whole 'Photoshop' thing because it's a (rare) example of where RAID 0 actually helps the performance of the program...

