I've reserved comment on these proceedings because I knew there was little point in maintaining the truth in the face of a mob mentality. But I can hold my tongue no longer after reading the crass comments in this thread toward anyone who dares arrive at a conclusion different from the crowd's.
This is all crap. There are several problems here that invalidate the claim that RAID 0 is pointless. First is the test rig. Guess what: the impact of disk subsystem differences is blunted when you are CPU-bound. A 2GHz P4 may be a lot more representative than a P3-700, but neither is terribly revealing in a world of 3.5-4GHz P4s and 2.2-2.5GHz FX-53s.
Secondly, the testing regimen is crap. IPEAK results may be interesting and important in an indirect way, but they are not application testing. They attempt to model it; they are not it. They prove no more than Sandra Dhrystones do when it comes to comparing the effectiveness of different CPU architectures. They are the hard drive equivalent of an engine dyno, and any racer worth his salt knows you don't race dynos. The problem is the point where you arbitrarily choose what access pattern will represent this use or that use. Whatever pattern you choose is not the reality of the situation, and you have no way of quantifying whether it is off, or by how much, for a given application.
Thirdly is the choice of controllers. Guess what: if you confine yourself to a 32-bit/33MHz PCI controller, STR can't help much. Pretty obvious, if you ask me. This points to the advisability of using a faster bus for the connection to the system. And the cost-effective solution? How about ICH5/R? Hmm? Perhaps choosing the best RAID controller for desktop use might help the desktop-oriented results.
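To put rough numbers on that bus ceiling, here is a quick back-of-the-envelope sketch. The per-drive STR figure is my assumption (around 60 MB/s, typical of fast 7200rpm drives of this era), not a measurement from this thread:

```python
# Why a 32-bit/33MHz PCI RAID card throttles a striped array's STR.
PCI_BUS_WIDTH_BYTES = 32 // 8   # 32-bit bus = 4 bytes per transfer
PCI_CLOCK_MHZ = 33
pci_ceiling_mb_s = PCI_BUS_WIDTH_BYTES * PCI_CLOCK_MHZ  # ~132 MB/s, shared
                                                        # with every other PCI device

drive_str_mb_s = 60                   # assumed per-drive sequential rate
raid0_str_mb_s = 2 * drive_str_mb_s   # ideal two-drive stripe

print(f"PCI bus ceiling: {pci_ceiling_mb_s} MB/s (shared)")
print(f"Two-drive RAID 0 STR: {raid0_str_mb_s} MB/s")
print(f"Headroom left: {pci_ceiling_mb_s - raid0_str_mb_s} MB/s")
```

A two-drive stripe is already brushing the shared bus limit before any other PCI device says a word, which is exactly why a southbridge-integrated controller like ICH5R, which does not hang off the PCI bus, is the sensible desktop choice.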
Certainly RAID 0 degrades seek performance slightly, and that is an important factor. But in all but the very worst cases the STR increase at least makes up for the loss, and in some areas allows a goodly effective increase in application performance. There is certainly the possibility of compiling a laundry list of (CPU-limited, synthetic) benchmarks that show little advantage for it, but it is even easier to come up with a list of actual applications that do benefit.
But to do so one must buy two cutting-edge drives, use the correct controller (ICH5R for desktop use), and house them in a machine fast enough for them to be of benefit. Don't judge RAID 0 by its inability to transform your KT333/1GHz T-bird system; properly integrated into a modern, optimized system design, it can indeed benefit you.
And the data loss problem? The naysayers would be just as quick to nay at someone who said he couldn't be bothered to implement a proper backup regime, yet they conveniently ignore that truth when it adds strength to the mob.
And finally there is actual experience. Round up the latest and best components and time application loads on RAID 0 versus a single identical drive, and the RAID box wins. Not hugely, but it wins, generally by 5-10%. IPEAK be damned.
But I must congratulate you. The dogged pursuit of witches real and imagined has had its impact. A fellow on my home board (ocforums.com) measured this real-world performance, application load times, in a real machine (read: one of modern proportions):
Let's compare the results:
UT2004 14.32-12.12 = 2.2s => 15.4% improvement
GC2 12.37-11.81 = 0.56s => 4.5% improvement
Lock On 28.16-24.96 = 3.2s => 11.4% improvement
Far Cry 90.42-83.70 = 6.72s => 7.4% improvement
Unreal 2 16.44-15.90 = 0.54s => 3.3% improvement
And even though he measured an 8.4% average improvement, he immediately started chanting the anti-RAID 0 mantra. I'll take my 8.4% and be happy, thank you, as application loading is among the areas least conducive to RAID 0 improvement. It only gets better from here.
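Those percentages reduce to simple arithmetic. A quick sketch, using the single-drive load times and measured gains quoted above, reproduces both the per-application figures and the 8.4% average:

```python
# Single-drive load time and measured RAID 0 gain, in seconds,
# taken from the application load-time figures quoted above.
results = {
    "UT2004":   (14.32, 2.20),
    "GC2":      (12.37, 0.56),
    "Lock On":  (28.16, 3.20),
    "Far Cry":  (90.42, 6.72),
    "Unreal 2": (16.44, 0.54),
}

# Improvement = time saved as a fraction of the single-drive baseline.
pcts = {app: gain / base * 100 for app, (base, gain) in results.items()}
avg = sum(pcts.values()) / len(pcts)

for app, pct in pcts.items():
    print(f"{app}: {pct:.1f}% improvement")
print(f"Average: {avg:.1f}%")
```

Nothing exotic: each entry is just (baseline minus RAID 0 time) divided by baseline, and the unweighted mean of the five comes out to 8.4%.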