gundersausage

The Death Of Raid

Davidbradley, I believe that the independent configuration that FemmeT tested involved two disks. You quoted the following:

Windows pagefiles, temp directories and Photoshop scratch files were placed on the second drive in the configuration of independent Raptor WD740GD drives.

To me it suggests he used two WD740GD's independently. So I believe his configuration was done in the manner you suggested.

As you may or may not know, davidbradley, I also do a great deal of Photoshop work --also with large files. I agree with everything you've said, but would like to add one final point. While you very capably pointed out that in the image loading test it is very important to separate the image source drive and the scratch disk, it is worth mentioning that it appears likely that FemmeT did this. He doesn't mention it, but I believe he imaged everything to the first disk and then moved only the temporary files (including the pagefile and scratch disk) to the other drive.

There is, however, another tremendously important consideration which FemmeT did not respect (and which, admittedly, most inexperienced Photoshop users are not aware of): the pagefile and the PS scratch disk should never, if it can be avoided, be on the same disk. It is akin to putting two pagefiles on the same disk on different partitions --it will have a tremendously negative effect on virtual memory performance. Remember, PS has its own independent virtual memory system. Windows does not know this, so in many situations the two can compete for disk IOs if they page to the same disk.
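If you want to double-check where Windows is actually paging before you choose where to point the scratch disk, here's a quick read-only sketch in Python (Windows only; it just reads the standard Memory Management registry key):

import winreg

# Print the configured pagefile location(s). Keep the Photoshop
# scratch disk off whatever disk shows up here.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management")
paging, _ = winreg.QueryValueEx(key, "PagingFiles")
print(paging)  # e.g. ['d:\\pagefile.sys 1024 2048']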

I don't suggest anyone ever try to do anything like RAW conversions (converter set to idle priority, of course) in the background while editing with the Windows pagefile and the Photoshop scratch disk on the same disk; it initiates a vicious competition for VM resources. I once, to my shame, used such a configuration briefly before I knew better. I couldn't understand how my system was performing so badly with the background application set to idle. Of course, with independent disks, background tasks like RAW conversion or denoising will proceed without impeding Photoshop performance.

I chose this configuration because every time I compiled there was significant concurrent I/O to/from each of these four sources. I wanted each of those four sources on separate spindles to minimize head movement, increase concurrency, etc. Switching to this configuration made a HUGE difference. Would I have set up those four disks in a RAID5 array? No way! A pair of RAID1 arrays? Probably not.

Exactly. It isn't hard at all to separate hot spots onto separate disks, and this is an excellent demonstration of how this technique can be useful for dealing with multiple, concurrent disk accesses. It all comes down to the most basic fact of platter-based storage: a drive doesn't move a single bit of data while its head is seeking. With multiple data sources (I often call them localities), striped sets have to do a lot of seeking.
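To put toy numbers on that, here's a little sketch. The figures (8ms average positioning time, 50MB/s sustained transfer, 64KB requests) are illustrative assumptions, not measurements of any real drive:

# Toy model: N request streams ("localities") served by one spindle
# versus one spindle per stream. Worst case, each request comes from
# a different locality and forces a full reposition before data moves.
SEEK_MS = 8.0            # assumed average seek + rotational latency
XFER_MB_S = 50.0         # assumed sustained transfer rate
CHUNK_MB = 64 / 1024.0   # assumed 64KB request size

def throughput_mb_s(streams):
    """Effective throughput of one disk interleaving N streams."""
    xfer_ms = CHUNK_MB / XFER_MB_S * 1000.0
    seek_ms = SEEK_MS if streams > 1 else 0.0  # a lone stream stays sequential
    return CHUNK_MB / ((seek_ms + xfer_ms) / 1000.0)

print("4 localities on one disk: %5.1f MB/s total" % throughput_mb_s(4))
print("4 localities, 4 disks:    %5.1f MB/s total" % (4 * throughput_mb_s(1)))

The entire gap is head thrash, and a striped set puts all four localities back onto every spindle.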

I wonder how he is going to retract from his current 'RAID 0 is insane' viewpoint.
Well, that is a rather incendiary statement, but regardless, I think Eugene has shown in the past that he is not above admitting mistakes and re-evaluating his conventional wisdom.

Indeed, balding ape. Eugene's willingness to criticize his own work, to audit his methods, and to admit when he is wrong is one reason that I trust SR's reviews over any others. When Testbed 3 came out, it essentially demonstrated that Eugene had been drawing the wrong conclusions for years (that nearly everyone else went right along with them didn't really matter --he was the one publishing the reviews). The most impressive thing, to me, was that he didn't hesitate to damn his previous methodology, to admit he was wrong, and to publish results obtained using a fair and carefully considered methodology.

There is no doubt in my mind that Eugene could have contrived traces of workloads that would have demonstrated that the old methodology was not seriously flawed, but he didn't. He sat down and produced a realistic general-usage benchmark. He didn't try to artificially inflate the queue depth or the amount of seeking involved to make the SCSI drives he'd previously believed to be head-and-shoulders above the competition look good.

In the end he had to take a lot of flak for publishing those results. People who couldn't believe that a 7200 RPM ATA drive with a 9.4ms seek time was faster than a 10K SCSI drive with a 5.6ms seek time flamed his ass off. I have seen so many 'reviewers' on the internet try to cover their asses, spin reviews with methodology, distort results by omitting specific observations and including less relevant ones, and refuse to stand up and state in their conclusions what the data demands, that my opinion of 'hardware reviewers' in general is quite low. Print magazines are generally even worse, if that's possible. There is no review site on the internet that comes close to garnering the respect I have for SR (except maybe Ace's).

Of course that doesn't mean that I don't give SR's methodology the same careful treatment I give everyone else's ;). It does mean, though, FemmeT, that you can expect Eugene to be the first person in line to agree with you when he can prove that you're right. And I'll be right after him.


I'm afraid that bit of sanity came from CityK, and not me. :)


And I even attributed it properly in the header. Sorry CityK.

I don't think I should be trying to think anymore. See today's thread here to watch me carefully and thoroughly ;) forget to add a hard disk and an optical drive to a system recommendation --nah, computers don't need those, save your cash :unsure: .


I wonder how a double-blind test of performance perception would work out...


First off, we don't even really know how hard drives work, and people are throwing out theories like it's a religion.

Secondly, RAID 0 on a 64GB RAM SAN has proven to be a boost in heavy random I/O for me. There is NO ONE that's going to convince me that RAID 0 is no good. If you doubt it, then spend the $100,000 on a RAM SAN and see the difference.

Thirdly, CARD AND DRIVERS MATTER! You can't just say that RAID 0 sucks or rocks in general. I just saw a 4GB RAID 5 array of 10k.6 drives do a full NTFS format in less than 1 min. Now tell me that's not good.

I think RAID is good, even for desktops, but not for performance reasons usually.

Depends on the situation. Mom-and-pop don't need RAID. Bobby's FTP server with a $1000/month circuit might.

Seeing as there is very little performance increase in test after test on the application level, it's not entirely unreasonable to pronounce the death of RAID for most single-user scenarios.

Benchmarks can be misleading.

The article right below, for example, recommends Raptors in a RAID 0 array.

Likewise, content on SR's board leads people to buy certain drives and setups. Let's not start a war here.

And... don't forget to add SCSI to this list.

The more I use Win2k3, the more I notice how much faster it is than WinXP with SCSI. Seconds vs. minutes is not just a psychological thing.

Other than doubling the possibility of drive failure, are there any really bad disadvantages to running two HDDs in a striping array?

Increased seek time.
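And to put rough numbers on the failure half of the question (a back-of-the-envelope sketch, assuming independent failures and an illustrative 3% annual failure rate per drive):

# A stripe loses everything if ANY member dies.
# The 3% AFR is an illustrative assumption, not a measured figure.
p = 0.03  # assumed annual failure rate of a single drive

for n in (1, 2, 4):
    loss = 1 - (1 - p) ** n
    print("%d-drive stripe: %4.1f%% chance of data loss per year" % (n, loss * 100))

So a two-drive stripe runs roughly double the risk (~5.9% vs 3%), and it only gets worse as you add members.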

I just can't believe Storage Review is making such bold claims on such limited data. The SR results only prove there is a limited performance improvement from striping in very light desktop workloads on a legacy PCI bus. Using a four-year-old SCSI RAID adapter tells us nothing about the performance of modern SCSI RAID setups.

SR seems very gung-ho about limited technologies, with limited resources.

I don't think it's fully their fault, as they are limited on resources. We could donate things, but unless I have a good reason I have no plans to donate equipment to people I don't know.

The average computer enthusiast can create some pretty heavy disk I/O.

Agreed. With 1500+ employees, I've noticed some of those people really work their drives. They're just average users in many cases: Access, Excel, Word...

File copying is a tricky area...

BTW, I noticed that file copying is much faster under Linux and pre-SP3 Win2k. Copying from my X15-36LP to a 73GB Fujitsu MAN3735 moved 1GB in ~20 sec. With the SCSI issue it takes 2 min+.

Limited data? SR's hard drive tests are more in-depth than anything on the web - point me to a different site and prove me wrong.

Not to be evil, but people from SUN, EMC, and Texas Instruments have given us different and sometimes conflicting info vs SR. I'm not talking about salesmen but actual engineers.

Besides, unless we start identifying issues, how can SR move forward and adapt to upcoming technologies and testing methods?

Assuming you have a decent RAID controller, RAID1 can be faster than RAID0 for _reads_.
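The intuition: with two full copies of the data, a smart controller can send each read to whichever member's head is closer, and two independent reads can proceed in parallel. A minimal sketch of the shorter-seek half (a hypothetical scheduler, not any real controller's firmware):

import random

TRACKS = 10000
heads = [0, 0]  # current track position of each mirror member

def mirrored_read(track):
    """Dispatch a read to the member with the shorter seek."""
    m = min((0, 1), key=lambda i: abs(heads[i] - track))
    dist = abs(heads[m] - track)
    heads[m] = track
    return dist

random.seed(1)
n = 100000
total = sum(mirrored_read(random.randrange(TRACKS)) for _ in range(n))
print("mirror pair, average seek: %.0f tracks" % (total / float(n)))
# A single drive (or a stripe, where the LBA pins each request to one
# member) averages about TRACKS/3, i.e. ~3333 tracks per random seek.

RAID 0 gets neither benefit for reads: each block lives on exactly one member, so the controller has no choice about where to send the request.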

I can't believe people overlooked DRIVERS!!!!! A shoddy card will give you shoddy performance.

You people can dismiss it all you like - I've found superior performance with a RAID 0 setup of Raptors - period.

If it's fast for you then that's all that matters. My RAID 0 system decimates what most people have, though your mileage may vary (even into the negative range).

Most of you poor (sorry) people are complaining about RAID on mechanical drives. Just wait until better drives come along. I've gone solid state and I'm very happy with RAID 0. Actually, RAID 0 on the same drive gave me better performance overall than a single LUN. But don't worry. I'll be poor soon, too.

I just wanted to remind everyone that the performance characteristics of RAID 0 are not worth getting emotionally involved in, making enemies, making an ass of yourself, or trying to put down other people.

RAID 5, however...

Funny how the trolls in B&G don't bother me but the technical debates get me angry.

I'm weird.

It would be a good idea for Eugene to develop a new generation of desktop benchmarks.

And hopefully move to an additional OS like BSD or Linux.

I have to agree with Gilbo here, FemmeT; it really looks like you are trying to manufacture scenarios where there will be a high queue depth. Perhaps some people are foolish enough to use their computers while backing up or scanning for viruses, but most people deliberately schedule these things for when they are NOT trying to do anything productive.

It also depends on the person. Some people multi-task and others don't. Some people can't afford to multi-task and others can (i.e., laptop drive vs. BitMicro SSD).

Eugene's willingness to criticize his own work, to audit his methods...

More reason he has our respect.

I'm afraid that bit of sanity came from CityK, and not me.

Just play along ;).

It's incredible how people fight little details and forget the whole picture.

It's incredible how people fight little details and forget the whole picture.

We wouldn't have to fight over little details if people didn't keep getting them wrong, and then using them to support bogus conclusions.

Guest Eugene
It's incredible how people fight little details and forget the whole picture.

We wouldn't have to fight over little details if people didn't keep getting them wrong, and then using them to support bogus conclusions.

His statement is a classic "fallacy of the golden mean" deal. Pretty condescending.

Frankly, the whole post was very... uh, fluffy.

Secondly, RAID 0 on a 64GB RAM SAN has proven to be a boost in heavy random I/O for me. There is NO ONE that's going to convince me that RAID 0 is no good. If you doubt it, then spend the $100,000 on a RAM SAN and see the difference.

SIGH - once again, I'll state the obvious - some people still don't get it - not even a little bit.

This is the death of RAID for the desktop environment. WTF does heavy, random IO have to do with desktop performance? SR has shown that RAID 0 benefits a server environment that encounters many random IO operations.

The second page of the TCQ/RAID article shows this!

And it appears that you are comparing desktop performance with a $100,000 solution? And you use this to prove a point regarding workstation performance?

amazing......

Guest Eugene
amazing......

More specifically, there has been some excellent discussion presented from all sides in this rather lengthy thread.

Then an individual comes in, skims the thread, zings out a bunch of vacuous, incongruous, and downright meaningless one-liners and then has the unmitigated gall to proclaim that everyone else misses the big picture.

Face it folks, those who like striping will continue to stripe. Those who do not will not. No amount of testing and/or facts will persuade either group. Why not let this thread die?

Free

Please?

Face it folks, those who like striping will continue to stripe. Those who do not will not. No amount of testing and/or facts will persuade either group. Why not let this thread die?

Free

Please?

What is the point of this, other than to appear to 'be above it all'? If you're not going to participate in this thread, and are not interested in reading it, then I would suggest staying out of it.


Again, people are fighting over a technology when humans don't even know how hard drives work.

SIGH - once again, I'll state the obvious - some people still don't get it - not even a little bit

RAID 0 on one drive across two LUNs was never discussed.

Then an individual comes in, skims the thread, zings out a bunch of vacuous, incongruous, and downright meaningless one-liners and then has the unmitigated gall to proclaim that everyone else misses the big picture.

My, my, my, getting personal? :P

Again, people are fighting over a technology when humans don't even know how hard drives work.

Didn't humans, er, invent and design hard drives, gathering decades of knowledge through billions of dollars in R&D to highly tune and refine every aspect of the technology? Or were you talking about some principle of hard drive manufacture? (There are some principles of LCD manufacture that aren't understood, I suppose, but none regarding hard drives that I know of.) I don't know of any natural magnetic platter formations...

I think your response would be better if you individually addressed the arguments presented with logic, reason, theory, and/or strong and applicable evidence.


There is one good thing about RAID: I can have six hard drives (or 4, including 2 CD-ROMs). So if RAID is good for anything, it's to open up a few extra slots for more internal hard drives. :P

Again, people are fighting over a technology when humans don't even know how hard drives work.

hrm.. What?

I think you'll have to speak for yourself, and not the rest of us. Hard drives are hardly 'magical' in nature. They are not some great mystery that we fanatically study in hopes of gaining a glimmer of understanding. It's a hard drive - 'we' (since you are using an all-encompassing term) thought them up, designed them, and built them. There is nothing about them that's hard to understand.

A foolish comment like this is a prime example of what could provoke an ostensibly hostile response from someone who actually knows what they are talking about.

Before you start jabbering that you are knowledgeable in this area, consider that your posts don't appear to relate to the rest of this thread, and don't even contradict SR's opinion regarding striping.

The understanding that is just now starting to awaken across the net involves workstation performance, and the lack of benefits derived from striping local workstation disks.

(I will note again that SR tests with IO Meter have shown improvements when striping - which means that heavy random IO will benefit from RAID, but workstation patterns do NOT)

This is nothing new to those who have been following SR, to those who actually understood what was being shown to them. We know that server performance is boosted by striping. We know that RAID 0 brings almost nothing to workstation performance.
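It even falls out of simple queueing arithmetic. A rough sketch (an illustrative model only; it ignores things like command queueing on a single drive):

# With queue depth QD, at most min(QD, N) members of an N-disk stripe
# can be servicing small random requests at once. Desktops idle around
# QD 1-2; loaded servers run much deeper queues.
def relative_iops(disks, qd):
    return min(qd, disks)

for qd in (1, 4, 32):
    gain = relative_iops(4, qd) / float(relative_iops(1, qd))
    print("QD %2d: 4-disk stripe = %.0fx one disk" % (qd, gain))

At the queue depths a workstation actually generates, the stripe has nothing to parallelize; at server depths it scales.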

Your response to all this information:

Humans don't know how a hard drive works. - this is barely worth responding to

Clearly SR is wrong about RAID, since my $100,000 storage system helps with heavy random IO (server patterns) - we -know- that RAID helps server patterns - this was never doubted.

I can only make the assumption that you are either skimming these threads, and the SR reviews in general, or don't understand any of it.

Consider the very first sentence of this thread:

RAID helps multi-user applications far more than it does single-user scenarios.

And one of your responses here:

Not to be evil, but people from SUN, EMC, and Texas Instruments have given us different and sometimes conflicting info vs SR.

Now I have no doubt that these companies speak about RAID when talking about server scenarios, but you are welcome to post something more specific than 'fluff'.

Why don't you get on the same track as us, qaws - because right now I feel like I'm telling a 5-year-old to stop pushing buttons on the DVD player.

I can only make the assumption that you are either skimming these threads, and the SR reviews in general, or don't understand any of it.

or he's just a troll...

Didn't humans, er, invent and design hard drives, gathering decades of knowledge through billions of dollars in R&D to highly tune and refine every aspect of the technology? Or were you talking about some principle of hard drive manufacture?

I think it was a single aspect and not the entire disk system. I don't remember the details, but two engineering professors told me about it. It's like one of those nuggets of info you get when you talk to old people.

I think you'll have to speak for yourself, and not the rest of us.

I didn't believe it myself when I heard it. However, it's been some time since I heard it, so there might have been changes in theory. The human race does make things from time to time without really knowing how they work. People just think they know how stuff works (i.e., remember the caloric theory).

A foolish comment like this is a prime example of what could provoke an ostensibly hostile response from someone who actually knows what they are talking about.

Calm yourself, my son. I only get mad when people insult Linux.

Before you start jabbering that you are knowledgeable in this area, consider that your posts don't appear to relate to the rest of this thread, and don't even contradict SR's opinion regarding striping.

I wasn't trying to contradict the opinions of the people here. I was trying to placate the RAID 0 user. Besides, there are advantages to RAID 0 beyond benchmarks. Also, there are MANY factors to consider when setting up a system.

The understanding that is just now starting to awaken across the net involves workstation performance, and the lack of benefits derived from striping local workstation disks.

There is both good and bad info on the net. Some people flock to it like it's magic and will defend it. Personally, I test things in our lab at work or use word of mouth before trusting marketing departments/sites working for marketing departments *cough* tomshardware *cough*.

(I will note again that SR tests with IO Meter have shown improvements when striping - which means that heavy random IO will benefit from RAID, but workstation patterns do NOT)

I will note again, benchmarks can be misleading. Try it out for yourself and see if you like it. If not, then it's not for you. If you do, then fine.

Humans don't know how a hard drive works. - this is barely worth responding to

Unfortunately, I can't take your side on this. Remember duck-and-cover for atom bombs? Scientists do lie, and so do engineers. Maybe not willingly, but at times by mistake. When I see the profs again I'll ask them about it.

Clearly SR is wrong about RAID...

Perhaps I should be clearer on this: SR is wrong to make general statements about RAID. Even in a desktop environment, setting up RAID 0 across two LUNs on ONE hard drive was faster than a single partition on a single hard drive for a 64GB RAM SAN. SR isn't a bad place for info, but they aren't perfect. They are allowed to make mistakes or change their findings as technology changes.

Why don't you get on the same track as us, qaws - because right now I feel like I'm telling a 5-year-old to stop pushing buttons on the DVD player.

Speaking of which... I'm working on 12 hours of sleep over the past 4 days. Thinking I was awake, I pointed at a portable generator with my pinky in our electrical closet. Turns out the static electricity from my pinky killed the generator. *sigh*

This is gonna go very well with my VP.

Not to be evil, but people from SUN, EMC, and Texas Instruments have given us different and sometimes conflicting info vs SR.

Well of course.

However, pieces of knowledge differ between vendors, SR, and other people. Is SR a reliable source? I'd rather say SR is another way of looking at things. SR is very limited in resources. I'd love to lend them some of my unused lab equipment, but how will I know I'll get it back?

SR writes

'SCSI Host Adapter: Adaptec ASC-29160; Bios Revision v3.10; Native Windows XP Driver'

That's very limited. However, they are trying hard with the meager resources they have. To me, a decent card would be a MegaRAID SCSI 320-4X with proper Red Hat drivers.

What's more, I see they're using a mediocre PSU. Clean power can make or break performance. Am I to make large judgements based on those limited resources?

Personally, I feel bad for the people who run SR. They are enjoying themselves and bettering the community, but the problem is that they have financial limitations. On the other hand, some idiots in our company buy millions of dollars' worth of equipment, and people steal it and let it collect dust.


qaws - WTF is wrong with you? Dropped on your head? Too much playing in the paint chips? Stop being a dumbass.

A MegaRAID SCSI 320-4X with Red Hat drivers? Sale price: $1,599.00

Would you call this a common power user's setup for a workstation? I could easily argue that a $1600 RAID controller isn't going to do squat for workstation performance, but I don't have to - because that is moronic.

As for server use - SR has already shown that SCSI RAID greatly assists server performance, so what purpose would a higher-end SCSI card serve?

What's more, I see they're using a mediocre PSU. Clean power can make or break performance. Am I to make large judgements based on those limited resources?

Good lord! You sound like one of those damn Anandtech posters! Am I to assume that my new 1890 Watt power supply is going to make my RAID faster?? Maybe that's why all these review sites have been showing that RAID isn't any faster, not enough power!!!!

Wake up qaws!!!

Not to be evil, but people from SUN, EMC, and Texas Instruments have given us different and sometimes conflicting info vs SR.

Funny, I haven't heard squat from EMC or Sun that has contradicted SR's pool of wisdom. We give those two companies a few million dollars a year, and believe me, we have a lot of their gear on site.

Ask EMC what they think of you running RAID 0 on their Clariions. That should be good for a laugh. 0+1 of course, and maybe even RAID 5 if you have the newer software, but not 0. When we first moved from Hitachi to EMC, we asked them if we could set up RAID 0 on our "small" 25TB Clariion CX700 array in order to make it as fast as possible. Our EMC engineer told us to stop being stupid and that we should just use RAID 1 or JBOD, and that he could provide us with data showing that with large spindle counts RAID 0 would be detrimental to performance in a large way.

As a matter of fact, our loaded CX700s can pull off 1.5GB/second of transfer and over 200K IOs each. We have four large Sun farms feeding off of those SANs, and we cannot sustain more than a total of 3GB/second across four CX700s. That's with Oracle and seismic applications, powered by over 400 UltraSPARC III Cu CPUs. We get far more tied up by IO than we ever do by STR.
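Run the arithmetic on those two figures and you can see why (rough numbers only):

# Crossover request size where an array stops being IOPS-bound and
# becomes bandwidth-bound, using the figures quoted above.
STR_LIMIT = 1.5e9    # bytes/sec per CX700, from above
IOPS_LIMIT = 200000  # IOs/sec per CX700, from above

print("crossover: %.1f KB per IO" % (STR_LIMIT / IOPS_LIMIT / 1024.0))

Anything under roughly 7KB per IO hits the IOPS ceiling long before the STR ceiling, and small random database IO sits right around that size.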

Ask Sun why they don't recommend RAID 0 on their T3+ arrays. You might be surprised by what you find out. For that matter, I might be surprised by what you come back with. Please enlighten us if you have information from them that is radically different from what you see here. EMC and Sun employ some mighty intelligent people, and I love to learn from them every chance I get.

Also, if you get the chance, ask both companies why they turn off caching on all drives in their systems. The answers are different but valid and quite interesting.


I have to agree with Mars' last post. A MegaRAID adapter is not typical of workstation or low-end server installs. And in my experience, the only time the power supply has affected my disk performance was when we ignored Compaq's recommendations and put 24 15K drives into a server that wasn't ready for drives that power-hungry. The net result was that drives kept burning up from brownouts, which degrades RAID 5 arrays in a hurry.

