gundersausage

The Death of RAID

TobySmurf, I wasn't saying that those vendors were gung-ho about RAID 0. They had different data than what some people on this forum suggested.

I was telling the RAID 0 user that if you're happy, stick with it. What do you want me to do? Tell him RAID 0 is evil?

As for the SSD striping, I was saying that RAID 0 had benefits. I can play UT2k3 fine on our RAM SAN.

A MegaRAID adapter is not typical of workstation or low-end server installs.

Fibre Channel only goes in our high-end systems. We get an enormous discount through HP, so price isn't that much of an issue at the moment.

So yes, perhaps we shouldn't make such sweeping statements about RAID 0. There *are* circumstances in which it will help a lot, though they're not necessarily typical.

exactly what I'm saying

What does that have to do with this thread? I was saying that RAID 0 has its uses.

I still don't understand why the RAM-SAN got faster when he partitioned it.

Two HBAs seemed to address across a set of LUNs better. Likewise, if you connected 8 HBAs, then 8 LUNs seemed to give better performance versus 1 LUN.

Maybe there was an issue with the OS that qaws was using?

I never got Linux to work the way I wanted due to a lack of good HBA drivers. Solaris works fine, but a quad 846 Oppie is faster than our Sun server. So there is a driver and OS issue.

The HBA drivers with an Athlon 64 kernel were awful. Plus, Veritas doesn't have a good Athlon kernel module for volume management.

Just because other people don't work for a company that spends the amount on IT that places like your company and mine do doesn't make their companies cheap - it makes them "financially responsible". There is a difference :-)

Tell that to my finance department :cry: !

What do you need another SAN for, they say...

EVERYTHING has to be explained to them, even if you have the budget for it.

but I think you have to be older than 18 to head up an IT Dept.

Thank you for the compliment but I am older than 18.

That a $1300 card can do it as well as a normal SATA port?

Show me a single SATA controller that can hold 32 drives.

What level of RAID are you using?

Depending on the situation: 5, 10, or, if paranoid, 50.

If you purchased a $1300 card to do RAID 0 with a couple of SCSI drives I'm going to laugh at you.

They'd pry the card away from me.

You can keep claiming that cleaner power impacts drive performance,

Where did I mention a clean PSU would impact DRIVE performance? Clean PSUs impact system performance.

Qaws, I don't really see the point of your line of reasoning until I get my massive SAN arrays and SSDs at home for writing and gaming.

Come on. Who would use a $100,000 unit for home? I wasn't referring to home use.

Perhaps you could describe why this card is a limitation.

It's a single card with specific drivers on a specific OS. So that's supposed to tell us how things work in general? Different combos have different results.

I'm not saying a new controller will give you globs of performance difference, but there might be a difference.

A new OS might give you something new to play with.

I'm not insulting your equipment but I have no plans on making generalizations based on specific setups.

Sivar is right: hard drive performance doesn't depend on power. That's a given. I never said power changes DRIVE performance. System performance is impacted, though.

If you ask me, you are digging yourself deeper and deeper, and rather than admitting fault with some of your reasoning and backing up your complaints against the reasoning of others

How so?

I admit I was wrong before.

RAID 0 is advantageous (performance-wise anyway) in certain desktop and workstation uses--but not that many.

When did I disagree?

If you recall on RAID 0 I said:

Depends on the situation. A mom-and-pop shop doesn't need RAID. Bobby's FTP server with a $1000/month circuit might.

To add to Sivar's quotes regarding power supplies and drives...

Drives are fine. The system depends on power. If you're on a UPS, odds are the UPS will clean it well enough already.

So could that.

Fine:

SR is working hard with the limited resources they have. They're doing a good job, but I don't plan on making generalizations based on a specific setup.

Better?


There are certain generalizations that are demanded, reasonably, by the data, and by the theory, qawsedrftgzxcvb.

They were detailed very thoroughly earlier in the thread. I don't disagree with the essential point of your recent posts, but I am also certain that your posts have done nothing but obfuscate an issue that was being dealt with very specifically, concisely and thoroughly.

There are certain generalizations that are demanded, reasonably, by the data, and by the theory, qawsedrftgzxcvb.

Under Windows I agree. However, I cannot accept it under Solaris, BSD, or Linux. I have no problem with SR; I actually want to tell my company to let them do our storage testing. Hell, the damn consultants are getting paid friggin' $100/hr, and a week later I get a 4-way Oppie without a configured HBA. That took a week?


I don't think the OS changes things particularly significantly. I think I would have difficulty constructing an argument to that effect. There are very simple rules that govern how an array of striped disks responds to an access pattern, and nothing changes that, and certain types of usage will always produce certain types of access patterns.

For example, databases at places like Amazon.com, with many users, still generate random IO at high queue depths. Smaller multiuser databases may reduce the load, but the qualities of the access pattern are invariable.

As another example, applications like OpenOffice.org, Photoshop, and Cinepaint all still install themselves all together at one time, so loading them off the disk is still going to produce a localized access pattern whether you do it in Windows, in Solaris, or in Linux. Any accesses they need to make to libraries, help files, etc. will usually all occur in that general area as well. Bigger applications fetch more files, more data, but, again, the qualities of the access pattern are the same.

Overall, I think the general access patterns won't change, so the disk's, or the array's, response won't vary dramatically either. For this reason Eugene prefers to use the terms 'single-user' and 'multi-user', because they avoid confusion. Terms like 'desktop,' 'workstation,' and 'server' all have connotations that can confuse things. One person multitasking his ass off still produces multiple localized patterns, while many users running database searches still generate high-queue-depth, random IO -- single-user vs. multi-user. The whole point of the terms is that they accurately describe a context which is intrinsically tied to a type of workload. These are entirely reasonable generalizations that are inescapable.
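
To make the single-user/multi-user distinction concrete, here's a toy sketch of my own (the 4-disk stripe set and 64 KiB chunk size are assumptions, not numbers from this thread) showing which member disks a given request actually lands on:

```python
# Toy model of a striped array -- my own illustration, not from the thread.
# Assumptions: 4 member disks, 64 KiB stripe chunks, requests expressed
# as (byte offset, byte length).
import random

CHUNK = 64 * 1024   # assumed stripe chunk size
N_DISKS = 4         # assumed stripe width

def disks_touched(offset, length):
    """Return the set of member disks one request lands on."""
    first = offset // CHUNK
    last = (offset + length - 1) // CHUNK
    return {chunk % N_DISKS for chunk in range(first, last + 1)}

# Single-user pattern: small (16 KiB) random reads, one outstanding at a
# time.  Each read fits in one or two chunks, so at any instant most
# spindles sit idle -- striping has nothing to parallelize.
reads = [(random.randrange(0, 2**30, 4096), 16 * 1024) for _ in range(1000)]
avg = sum(len(disks_touched(off, ln)) for off, ln in reads) / len(reads)
print(f"disks working per 16 KiB read: {avg:.2f} of {N_DISKS}")

# Multi-user pattern: the very same reads, but dozens outstanding at
# once.  As a group they cover every member, so all spindles stay busy
# and throughput scales with the number of disks.
load = [0] * N_DISKS
for off, ln in reads:
    for disk in disks_touched(off, ln):
        load[disk] += 1
print("per-disk load with many requests in flight:", load)
```

The array is doing nothing wrong in either case; the access pattern alone decides whether the extra spindles get any work.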


This thread needs to undergo a Fourier transform....of course, all the more so after the noise I just added.

Thank you for the compliment but I am older than 18.

Really? Time to get your own checkbook then maybe?

Show me a single SATA controller that can hold 32 drives.

*SIGH* Why don't you show me some home users/enthusiasts that are using 32 drives in one machine? You're missing the point again.

Depending on the situation: 5, 10, or if paranoid 50

*SIGH* This thread was about enthusiasts using RAID 0 for their personal machines. You're missing the point again.

They'd pry the card away from me.

Who is "they"? I thought this was your card that you bought. Nevertheless, this thread was about enthusiasts using RAID 0 on their home power pc. Congratulations on filling the thread with random garbage that clearly doesn't apply.

Where did I mention a clean PSU would impact DRIVE performance? Clean PSUs impact system performance.

Here comes the backpedaling... You clearly insinuated that a 'mediocre' Antec 400W PSU (!!) would hurt the performance of a RAID test. Should I quote from you earlier in the thread? It's all there for everyone to see. Maybe you should actually read the thread (including your own posts).

Come on. Who would use a $100,000 unit for home? I wasn't referring to home use.

Well, since this thread was about home use - I have to ask you again - WTF are you doing?

I won't bother to directly quote the rest of your 'fluff' - this one-liner posting style sucks anyway. You mention a large FTP circuit - what's this have to do with home use again?

Well, that's enough for now - you missed the entire thread somehow and went off on a $100,000 SAN solution, blah blah blah, when we were talking about enthusiasts using RAID 0. I ignored you during your insane "P2P applications kill IDE hard drives" thread (I tossed this one in for whoever was saying that qaws has posted rationally in the past), but this one - I'll say it again - wake up.

*SIGH* Why don't you show me some home users/enthusiasts that are using 32 drives in one machine? You're missing the point again.

You said:

That a $1300 card can do it as well as a normal SATA port?

Obviously it can't.

*SIGH* This thread was about enthusiasts using RAID 0 for their personal machines. You're missing the point again.

You asked:

What level of RAID are you using?

I answered it. What more do you want?

Who is "they"?
And now you're leading your co-workers astray as well - you should be proud.

Read your own post and figure it out.

You clearly insinuated that a 'mediocre' Antec 400W PSU (!!) would hurt the performance of a RAID test.

I said:

What's more is that I see they're using a mediocre PSU. Clean power can make or break performance. Am I to make large judgements based on those limited resources?

Did I say RAID performance? If you don't know that clean power is important for a system, then it's time to learn.

Well, since this thread was about home use - I have to ask you again - WTF are you doing?

If you actually read the posts, you'll see I'm saying that RAID 0 has some kind of use. People are dismissing RAID 0 as a waste of money. I said that it has uses.

You mention a large FTP circuit - what's this have to do with home use again?

There are benefits to RAID. People DO run RAID at home, you know.

You missed the entire thread somehow and went off ...

I didn't miss it. You're not getting what I'm saying.

I ignored you during your insane "P2P applications kill IDE hard drives"

If you have another explanation, I'd like to hear it. I might not be right, but I haven't heard an alternate explanation.

Am I wrong on the P2P? Possibly. I'd be more than happy to learn if you'll teach me otherwise.


blablabla

Yet another thread has degenerated into a "yes"-"no" match. I find those long posts full of quotes tiresome. Back to the subject, please?


Finally - something solid:

There are benefits to RAID. People DO run RAID at home, you know.

Exactly! And it's almost pointless to do so (at home) - that's what this thread was about, not $100,000 SAN solutions, $1500 RAID cards running 32 15K SCSI drives in RAID 50, and not $100,000 FTP circuits...

And it's almost pointless to do so (at home)

Exactly! Finally agreement!

It seems people don't like the way I format my posts. I guess I'll change that to better serve your eyes.


What, you mean after all that arguing, you were agreeing all along? :D

My apologies, but I'm definitely seeing the funny side of this.

There are plenty of threads where people can see concise arguments against RAID 0 for posters who've demonstrated that they're not in the niche that can take good advantage of RAID 0. This one has degenerated a little into arguing over minor points in people's arguments, but it's still fun to watch!

What we're seeing here is people arguing opposite sides of two slightly different debates - perhaps it's time for the summaries. Think of all the poor newbies trying to work out the overall message from this mess! Forget about the nitpicking, justification, and picking holes in each other's arguments for a minute, and try to write, in simple terms, without the supporting comments, evidence, and examples, what message you're trying to express!

Amused diatribe over.

My apologies, but I'm definitely seeing the funny side of this.

Amazing... that you would see this as an agreement simply because qaws posts that it is. His viewpoints have been wildly fluctuating across this entire thread. That anybody could read this thread and arrive at that conclusion is astounding.

I will quote one simple example from this page.

There are benefits to RAID. People DO run RAID at home, you know.

How this could be taken for anything other than "RAID benefits home users" is beyond me. I would suggest that he does in fact change his posting style, and maintain some consistency as well.


Ahhh Fourier transform...

I always thought Fourier transform was one of the cleverest/most-useful math tricks I've ever encountered. Has anyone ever dealt with infrared or nuclear magnetic resonance spectra? Fourier transform is great...

...in fact Fourier transform is the best. Fourier transform is better than 12-disk RAID 0 on a 3ware 9500!


So... for someone who's been reading through this thread and the other related articles searching for a definitive answer to the question of RAID 0 desktop use, should I assume there isn't one?

I don't really care that much about the average user; I'm interested in knowing how it truly affects the power user. By power user I don't mean a workstation user who might be using their machine heavily for CAD, A/V capture, or processing.

One doubter I showed the SR and AnandTech articles to said that the ICH5 used in the Anand article was the crappiest RAID controller and it wasn't a good test. I tried to find something in the article that would explain *why* it didn't matter what RAID hardware was used, but I had no luck.

Sorry if this post is a bit disjointed.

One doubter I showed the SR and AnandTech articles to said that the ICH5 used in the Anand article was the crappiest RAID controller and it wasn't a good test.

You can most accurately describe such an individual as an apologist. He almost certainly has little idea what he is talking about, and is trying to pretend that there are flaws with the review to justify his own opinions.

"The ICHR5 RAID controller is crappy" is not a cogent argument. Complete arguments have been provided which should supercede such idiocy. While RAID 5 performance is extremely controller-dependent, RAID 0 is not at all. Anyone who tries to claim otherwise has little apprehension of the math/computational-power involved in both tasks.

The controller is not a significant factor. Striping has inherent limitations (please, do read that). If the guy says "it's the controller" about RAID 0 performance, he probably has no education relevant to storage technology. The fact is, striping is useful in some situations, useless in others, and whether it is or it isn't is dependent on the access pattern -- not the controller, not the disks, but the access pattern. The access patterns that single-user programs generate are almost universally unable to extract benefits from striped arrays.

It's not that the arrays aren't functioning the way that they're supposed to; it's not that the controller is somehow messing it up; it's that striping data across disks is only useful, with respect to performance, if you try to extract the data in certain specific manners. The manner in which single-user applications request (require) data from the disk gives the array little-to-no opportunity to stretch its legs. To add insult to injury, the rare, specific access patterns generated by single-user machines that can extract additional performance from RAID arrays are invariably capable of extracting more performance from the same number of disks if they're used independently, and not in an array!
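
Here's a back-of-the-envelope service-time sketch of that point. The figures are my own assumptions (roughly a 2004-era 7,200 RPM disk), not measurements from SR or Anand:

```python
# Crude model: each read costs one positioning delay plus a transfer
# that striping can split across disks.  The seek can never be split.
# All numbers below are assumptions, for illustration only.
SEEK_MS = 8.0        # assumed average seek + rotational latency
STREAM_MB_S = 55.0   # assumed sustained transfer rate per disk
N = 4                # assumed number of disks in the stripe set

def read_ms(kib, disks):
    transfer_ms = (kib / 1024.0) / (STREAM_MB_S * disks) * 1000.0
    return SEEK_MS + transfer_ms

# Small random read (typical single-user I/O): seek-dominated,
# so four disks barely move the needle.
print(f"16 KiB read,  1 disk:  {read_ms(16, 1):7.2f} ms")
print(f"16 KiB read,  {N} disks: {read_ms(16, N):7.2f} ms")

# Large sequential read (video capture and the like): transfer-
# dominated, so striping scales almost linearly.
print(f"256 MiB read, 1 disk:  {read_ms(256 * 1024, 1):7.0f} ms")
print(f"256 MiB read, {N} disks: {read_ms(256 * 1024, N):7.0f} ms")
```

Under those assumptions the 16 KiB read improves by a fraction of a millisecond while the big sequential read improves nearly fourfold, and no controller changes that arithmetic.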

So... for someone who's been reading through this thread and the other related articles searching for a definitive answer to the question of RAID 0 desktop use, should I assume there isn't one?

There is one. People can split hairs as much as they want, but the fact is that the data points to one very legitimate, conclusive ;) conclusion: RAID 0 is essentially useless outside the server arena.

People will bring up exceptions and they'll whine about things like video editing and Photoshop, but the fact will remain that the foundation of performance in those tasks is derived from independent disks, and RAID arrays are only an afterthought required for exceptional situations.


The FFT isn't going to do that much good unless you pair it with a bandstop filter and corresponding inverse FFT...

a histogram equalization wouldn't hurt the thread's contrast either....

-Chris


I perused this thread and many of its individual arguments and noticed something that doesn't make any sense to me. Many people in this thread were deriding the "idiot gamers" for "wasting their money" on RAID configurations. What are you talking about? It doesn't cost any more. The RAID controllers come built into high-performance motherboards that the performance-minded are buying anyway for their improved buses, RAM/processor support, and overclocking ability. The only way it really costs more money is if you are adding it to an old system via a PCI RAID controller, and the "idiot money-wasting gamers," as you call them, aren't going to bother with that - they'll just build a new computer.

Also, you buy two drives, but I would buy two anyway for the capacity, AND half the arguments people were using said that you could gain the same benefits by using two drives at the same time rather than two drives in RAID - so in those scenarios you still buy two drives.

When it comes right down to it, performance is all that matters - cost is not a factor in modern systems. I have a P4C800-E and two 160 GB Seagate drives hooked into my ICH5 controller. I don't run them in a RAID, but if I wanted to, it would cost exactly 0 DOLLARS more. So if people want to play around with it, or if it gives them some gains (even small ones) in certain scenarios, I say more power to them, because it costs NOTHING. Free is free - and it would be free for me and so many others to do it.


RAID-0 reduces the overall reliability of the array in direct proportion to the number of drives in the array. The general equation is:

Reliability(RAID 0) = Reliability(Drive) / Number_of_Drives.

A 2-drive RAID-0 is 1/2 as reliable as a single drive.

The point made by the original SR article is that *most* workstation-type applications see little, if any, benefit from RAID-0 configurations.

If you don't get a benefit in performance, and you're taking added risk with reliability, why do it even if it is free?

If you want to be smart, use that embedded SATA RAID controller to build yourself a RAID-1 mirror set. In a RAID-1, reliability is improved in direct proportion to the number of drives in the array (though, sad to say, most RAID-1s only support two drives). Sure, write performance takes a small hit in RAID-1, but read performance is as good as a single drive's (and if the RAID controller is smart enough, reads can be as fast as in a RAID-0 configuration).
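
For what it's worth, here's a quick sketch of both rules of thumb, treating drive failures as independent; the per-drive survival probability is an assumed number, purely for illustration:

```python
# RAID 0 loses everything if ANY member fails; RAID 1 loses data only
# if ALL mirrors fail.  p is an assumed per-drive survival probability
# over some period (say, a year) -- not a real drive spec.
p = 0.97

def raid0_survives(p, n):
    return p ** n            # every member must survive

def raid1_survives(p, n):
    return 1 - (1 - p) ** n  # at least one mirror must survive

print(f"single drive:   {p:.4f}")
print(f"2-drive RAID 0: {raid0_survives(p, 2):.4f}")  # roughly doubles the failure odds
print(f"2-drive RAID 1: {raid1_survives(p, 2):.4f}")  # squares the failure odds
```

With those assumptions, the two-drive stripe roughly doubles your odds of losing the array (which is the sense in which the 1/N rule above holds), while the mirror squares the odds of losing it.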


You are right, but I was really just addressing the other posts that showed up and were blasting people for wasting money. I was just pointing out this is not the case these days.

But in response to the above:

I realize that it is technically less reliable, but if I were one of the aforementioned gaming types, I wouldn't mind if my installed games got blown away by something like this - nothing too crucial there. Untouchable data reliability is not a big factor for some people, as was stated earlier in the thread.

Also, the only hard drive that ever failed on me is the one that was in my external enclosure and was used heavily for long periods of time and taken places. A typical gaming machine is going to have a lot less load (since games aren't being played 24 hours a day), so it'll be a long time before one decides to fail.

As for the RAID 1 suggestion, the drive that failed on me showed signs before all hope was lost, such that I could get anything needed off of it. Given this, I wouldn't be a huge fan of RAID 1, because it costs twice as much for the same size for an additional fault tolerance that I haven't felt the need for. If I were storing irreplaceable genealogy files or something, then it would be a different story - RAID 1 would be very good. I like running my drives singly, as I stated, and having a lot of space rather than a fault tolerance I don't require. I can read different things off each at the same time and have a smaller impact on overall performance.

Regardless, my post wasn't about me in particular - I was just using my setup as a typical example of a modern machine someone might build (the P4C800 is very popular) and how implementing a RAID would not increase monetary cost. That is all I was saying.

