NovaTC

What is the best cheap RAID5 solution?


Hi!

I am looking into building a cheap RAID 5 for my home rig.

If you are interested in my motivation, read on; otherwise skip to the asterisk. :-)

It all started when I considered buying a new hard disk to get some capacity for all those... digicam shots, you know. ;-)

I had quite some trouble choosing the most reliable drive. Of course I checked the reliability survey here on SR, but the number of reports is still quite small for some of the drives I am interested in. So I thought "Why not put that SATA RAID controller on my mobo to good use and do some mirroring?" This would ease my fear of losing my valuable data (is it only me, or is it quite impractical to back up 200GB hard drives using 4.7GB DVD-Rs?) and make the choice of the "best" hard disk a bit easier, since reliability would not be that much of an issue (unless I get unlucky and both drives fail at almost the same time).

Thinking again, I didn't like the idea of losing 50% of all capacity in a RAID1, so I checked prices for RAID5 controllers. Prices vary widely and I could not find any good reviews, so here I am, asking the experts out there. :-)
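For anyone weighing the same trade-off, the capacity math behind that 50% figure is easy to sketch (a toy calculation with hypothetical drive counts; real arrays also lose a little space to metadata):

```python
# Usable capacity of RAID1 (mirroring) vs RAID5 (distributed parity),
# assuming n identical drives and ignoring filesystem/metadata overhead.

def raid1_usable_gb(n_drives: int, size_gb: int) -> int:
    # Mirroring stores every block twice, so half the raw capacity is usable.
    return n_drives * size_gb // 2

def raid5_usable_gb(n_drives: int, size_gb: int) -> int:
    # RAID5 spends exactly one drive's worth of capacity on parity.
    return (n_drives - 1) * size_gb

print(raid1_usable_gb(2, 200))  # → 200 (half of 400GB raw)
print(raid5_usable_gb(3, 200))  # → 400 (two thirds of 600GB raw)
```

So with three drives instead of two, RAID5 already cuts the overhead from 50% to 33%, and it keeps shrinking as drives are added.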

To save me some trouble when routing the cables in my tower case, and of course because it is the latest technology, I would prefer SATA over PATA, but it is not an absolute must.

The only hard disk I have owned that ever died on me is a 40GB Toshiba notebook drive. It bit the dust after running tilted at 45° for almost 1.5 years, averaging 14 power-on hours a day. Since, unlike the Toshiba, I normally treat my desktop drives well (cooling-wise), I don't want to take any chances.

BTW: I am well aware that RAID5 won't protect me from file system/OS errors, viruses, or my own stupidity when handling files.

*

The prices of RAID5-capable (P/S)ATA controllers vary widely. Some are below 100€ (about $120), like the HighPoint RocketRAID 1640, but it seems that one doesn't do parity calculations in hardware, leaving the job to the host CPU. Not good.

The Adaptec Serial ATA RAID 2410SA is about 250-300€, as is the LSI Logic MegaRAID SATA 150-4. Both come with 64MB of cache and appear to be native SATA, i.e. no nasty Marvell PATA-to-SATA converters (at least none visible in the photos of the boards I found). However, for my purposes 300 bucks is a tad too much, and in addition the LSI controller is hard to get hold of.

So I searched on and stumbled across the Promise FastTrak S150 SX4. It's only about 160€, has a hardware "XOR engine", and comes without cache but with a slot for a PC133 SDRAM module. I have one of those lying around anyway, so I could save about 100€ here. Alas, it seems Promise just took its old ATA RAID5 board (SX4000) and added Marvell converters to jump on the SATA bandwagon. I am not sure how much of an issue this is in practice (i.e. lower bandwidth, higher latency, compatibility issues with certain drives?), but it will definitely lose me Native Command Queuing, which alone wouldn't be that bad, as neither LSI nor Adaptec nor most current drives seem to support NCQ anyway.

3Ware is recommended in some postings on this board. I had not heard much of this company before, and its 3ware Escalade 8506-4LP fits my needs, but it is almost 300€ as well. From the images alone I cannot tell whether it uses PATA-to-SATA conversion, but the specs do not mention any cache (neither onboard nor via a free SDRAM socket like the Promise controller). As I understand it, a cache is almost a must, so I would have to go for the 3Ware Escalade 9500S-4LP, which includes 128MB but approaches ~350€. A bit pricey.

So which of the adapters I mentioned (3Ware, Adaptec, Promise, LSI) would you choose when it comes to price/quality ratio? I do not need the ultimate performance; any benefit compared to a single drive would be a nice add-on, but my main concern is drive failures. Trouble-free operation under common operating systems (Windows, Linux) is very important; I don't want to lose any data to crappy drivers.

As I still have unused RAM lying around, I would prefer the Promise controller, as it is considerably cheaper. Just how bad is its PATA-to-SATA conversion? What about its reliability, driver support etc.?

Should I consider PATA instead? The drives and controllers are minimally cheaper, but are not really future-proof.

Some of the controllers are 64-bit PCI. My board only offers plain old 32-bit PCI, but if I understand correctly, a 64-bit card will still work in a 32-bit slot, as long as the extra pins do not collide with any protruding components on the board. Is this correct?

Buying the "right" drives is another issue; here I am equally undecided. As I am opting for a minimal solution for starters, I plan to go with three drives, which should also save me some noise and heat issues (my case is well ventilated though).

- When it comes to performance, the Hitachi 7K250 appears to be the current king of the hill. Knowing of IBM's bad reputation with its Deathstar series, I am a bit uneasy going with former-IBM-now-Hitachi drives, but RAID5 would let me sleep easily anyway. Unfortunately, the 160GB variant (at about 85-90€) is in short supply at all my favourite web shops, while the 250GB variant is available but considerably more expensive in €/GB terms, costing more than 150€.

- Seagate's Barracuda 7200.7 SATA 200GB (ST3200822AS) seems interesting, with a good €/GB ratio (it's about 110€ here). Seagates have a reputation for being cool and quiet (which is confirmed by the "hard drives 2.0" review here on SR), while not performing as well as the Hitachi.

- What about the Maxtor DiamondMax 10 160GB drive (6B160M0)? I have two Maxtors (one 27GB, the other 80GB) still going strong in my older desktop; what about performance, heat, noise?

- Western Digital? The current 250GB version is quite a screamer (in the true meaning of the word) according to the SR review. It's not that fast either; still an option (in its 160GB or 200GB variant)?

I am no big fan of Samsung; I met some Samsung hard disks that went belly up back in the hard disk dark age (2GB). Current models appear to have quite a record in terms of reliability and noise; should I consider them instead of the above drives?


Right now we use the new Broadcom RAIDCore cards in RAID 5. Although they don't have hardware parity calculation, they do pretty well. CPU utilization is low, under 4% for typical use (that's with three 8-port cards in the system). It is a dual Xeon system, though.

What are your system specs? It's getting to the point now that CPUs are fast enough that hardware parity calculation doesn't matter that much.
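To illustrate why parity is cheap for a modern CPU: RAID5 parity is a plain XOR across the data blocks of a stripe, and the same XOR reconstructs a lost block. A toy sketch (not any driver's actual code):

```python
# RAID5 parity in miniature: parity = XOR of all data blocks in a stripe.
# XOR-ing the surviving blocks with the parity rebuilds a lost block.

def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for block in blocks:
        out = bytes(a ^ b for a, b in zip(out, block))
    return out

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_blocks(stripe)            # stored in the parity position

# Pretend the second drive died; rebuild its block from the survivors:
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
print("rebuilt block:", rebuilt)       # → rebuilt block: b'BBBB'
```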

I hear mixed reviews of the LSI SATA RAID cards; apparently they're slower than 3ware's, but the new 9500 series cards have driver stability problems that the LSI cards don't have. (3ware has made huge strides in fixing the problems since the cards' release, but still has some ground to cover.)


I didn't realize Broadcom makes RAID-controllers, I only knew them for their LAN and WLAN chips.

The CPU usage ratios you mention are interesting; perhaps software parity is not that bad after all. I am still looking for more comparison benchmarks of RAID5 controllers, but no luck so far.

What drives are you using with the Broadcom controllers?

My system is a year-old Athlon XP 2500+ (Barton core, 1833MHz) on an Asus A7N8X Deluxe 2.0 (nForce, Silicon Image 2-channel SATA controller onboard), 1024MB PC400 DDR RAM, and currently only one 120GB hard disk (Western Digital 1200JB, 8MB cache, 7200RPM). I pretty much like the WD; for me it is not too noisy compared to my other drives (an old 10GB 7200RPM IBM DTTA sounds like a jet engine in comparison ;-) ).

I use a Coolermaster Wavemaster tower, which has two built-in 8cm fans (currently running at 7V as 12V was too noisy and simply not necessary) in front of the HDD cages, so cooling should not be an issue.

I built it mainly for gaming (with its Radeon 9800 NON-Pro I have no trouble playing Doom 3 at High Quality, as long as I stick to reasonable resolutions), while my main working machine is a two-year-old Toshiba notebook (Satellite 5100-503, Mobile Pentium 4-M 1.8GHz, 512MB DDR RAM, upgraded with a 60GB Hitachi 7K60 drive). I love the notebook especially for its UXGA screen, and it is still easily fast enough for anything not 3D-related, especially with that 7200RPM Hitachi drive that pretty much closes the noticeable gap in hard disk performance between notebooks and desktops. The replacement drive I got from Toshiba for the original 40GB Toshiba 4019GAX that went belly up is in an external FireWire enclosure. It is actually the second replacement; the first one didn't last more than three days, so almost a DOA.

BTW: Someone should add notebook hard disks to the reliability survey here on SR.

I am looking into building a cheap RAID 5 for my home rig.

You might find something of use in this thread. Dave Dreuding and I kicked the idea around for a while.

-- Rick


Thanks, I will check it out. Of course I had used the search function, but I didn't get past the first few result pages. I actually hoped for something like this, as I was pretty sure I was not the first one asking for a value RAID5. :-)


Hi NovaTC,

If it were me (and I've done this, though I added capacity since this post: http://forums.storagereview.net/index.php?showtopic=14878)... I'd skip the hardware RAID5 and do it in software. If you don't have enough ports on your motherboard for one channel per drive (which isn't strictly required), then at worst you buy a couple of cheap Promise PCI cards to add a few more PATA/SATA ports.

I spent a weekend fiddling with hardware/software RAID and came up with this OpenOffice doc: basically, unless you need every last CPU cycle, software RAID5 is faster. Mind you, I only tested a 3ware 7500-8 controller, on a midrange AMD CPU, on Linux:

http://battlemage2.dyndns.org:88/Hardware_...ftware_RAID.sxc

As for drives, I'd buy the cheapest drives per dollar of capacity, as many as I could afford. Even the slowest drives these days have plenty fast read speeds when you stack 4+ of them in an array, and if you're looking for blistering write speeds then RAID5 isn't for you. If this is over a network, you're better off looking into some GigE cards instead of hard drive speeds, because the network will be your bottleneck.
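A rough sketch of the point above, where the per-drive throughput is my own ballpark for a cheap 2004-era 7200rpm drive, not a measured figure:

```python
# Aggregate sequential read of an array vs what gigabit Ethernet can carry.
GIGE_LIMIT_MBPS = 125   # 1Gbit/s ≈ 125MB/s theoretical, less in practice
PER_DRIVE_MBPS = 50     # assumed sequential read per cheap drive

for n_drives in (2, 4, 6):
    raw_mbps = n_drives * PER_DRIVE_MBPS
    over_lan = min(raw_mbps, GIGE_LIMIT_MBPS)  # clients never see more
    print(n_drives, raw_mbps, over_lan)
# → 2 100 100
# → 4 200 125
# → 6 300 125
```

Past three or four drives the LAN, not the platters, caps what clients ever see.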

Anyways, I'm still kinda new to this, but that's my opinion :)

Bit

You might find something of use in this thread.  Dave Dreuding and I kicked the idea around for a while.

Of course, that was a year ago... maybe we should try again?

How about this:

$70 Asus A7V880 (Integrated Gigabit LAN)

$48 Sempron 2200+

$10 CPU HS

$23 128MB PC2700 Kingston ValueRAM

$51 (3 x $17) SATA PCI controller cards (no-name SI cards)

$52 Antec SLK1600 (5 x 3.5" bays, 300W PSU)

-----

$254

Now, I would use Linux and software RAID on the thing. Also, I would most likely set it up to boot off a USB thumb drive; it wouldn't need to be very big. Make it $17 for a 128MB one and the total becomes:

$271

Now for storage (SATA, from Pricewatch):

400GB
  • $1.03/GB Hitachi 7K400

300GB
  • $0.72/GB Maxtor 6B300S0 (MaxLine III?)

250GB
  • $0.56/GB Maxtor DiamondMax Plus 9
  • $0.57/GB Western Digital WD2500JD
  • $0.60/GB white-label (5-month dealer warranty) Maxtor MaxLine III (here)
  • $0.62/GB Maxtor MaxLine Plus II

200GB
  • $0.53/GB Western Digital WD2000JD
  • $0.53/GB Maxtor 6Y200M0
  • $0.60/GB Seagate Barracuda 7200.7

160GB
  • $0.55/GB Seagate Barracuda 7200.7
  • $0.60/GB Maxtor 6Y160M0
So, assuming a 5-drive RAID5 setup, it looks like we have:

0.64TB for $711

0.80TB for $796

1.00TB for $971

1.20TB for $1346

1.60TB for $2321
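The totals above are straightforward to reproduce; assuming the $271 base system and the per-GB prices quoted in this post, a 5-drive RAID5 works out as:

```python
# Cost/capacity for the 5-drive software-RAID5 box sketched above, using
# the $271 base-system total and the $/GB figures listed in this post.
BASE_SYSTEM_USD = 271

def raid5_build(drive_gb, usd_per_gb, n_drives=5):
    usable_tb = (n_drives - 1) * drive_gb / 1000   # one drive's worth is parity
    total_usd = round(BASE_SYSTEM_USD + n_drives * drive_gb * usd_per_gb)
    return usable_tb, total_usd

print(raid5_build(160, 0.55))  # → (0.64, 711)
print(raid5_build(250, 0.56))  # → (1.0, 971)
```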

-JoeTD


I built myself a basic server with a RAID5 array as well at the start of this year. I don't know if it will help you with your endeavours, but here is what I got and my experience of using it for the past 9 months:

Intel Celeron 2.2GHz

1GB Dual Channel DDR333 Memory

Gigabyte 8IPE1000-Pro2 (865PE Chipset, onboard Intel CSA Gigabit Network)

Main Boot Drive:

80GB Western Digital Caviar Special Edition (800JB) - connected to onboard IDE channel 1

RAID-5 Array:

Promise FastTrak S150 SX4 with 256MB PC133 RAM in Cache slot

3 x 160GB HGST (IBM/Hitachi) 7K250 Drives

Intel SATA Hot-Swap Enclosure (4 SATA hot-swap bays, takes up 3 x 5.25" drive bays) - set up as RAID5, 64k stripe size, 320GB usable space

Enermax 350W Power Supply, PFC and Fan Control

OS: Was running Windows Server 2003 (Standard), currently on Windows XP Professional (but going to switch back to Server 2k3 soon).

Network Setup: 3Com OfficeConnect Switch 8 with Gigabit Uplink (3C1670108), Server connected to Gigabit port, 4 machines on network (100Mbps) + Wireless Access Point (802.11g) serving 2 Wireless devices (2 laptops)

The major reason why I decided to build the RAID array and the server is so that I can put ALL my files onto the one place for easier management, and all the users on the network can access the many media files (MP3s, DivXs etc..) without needing a local copy (cuts down on duplicated files across the network).

The reason why I went with the Intel platform is because I already had the 2.2GHz Celeron from my previous workstation (and reusing it means 1 less thing to buy).

I chose the Gigabyte 8IPE1000-Pro2 mainly for three reasons:

1) It will work with the CPU I already have, plus having good upgradability

2) The 865PE chipset is mature and provides good performance (Dual channel memory)

3) It has the CSA Intel Gigabit Ethernet built in

The CSA GbE was an important deciding factor, because once I put the RAID5 card on the PCI bus, the bus is going to get saturated pretty quickly (say, 40MB/s average transfer per HDD x 3 drives = 120MB/s on the RAID array; that's already close to the 133MB/s bandwidth of the entire PCI bus). The Intel CSA interface gives the GbE controller a direct connection to the northbridge, so it won't fight over the PCI bus for bandwidth.
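The bus arithmetic above, written out (the 40MB/s per-drive figure is an estimate of average throughput, not a spec):

```python
# Why a 3-drive RAID5 plus gigabit LAN would choke a shared PCI bus.
PCI_32_33_MBPS = 133   # theoretical peak of 32-bit/33MHz PCI
PER_DRIVE_MBPS = 40    # assumed average transfer per HDD
N_DRIVES = 3

array_mbps = PER_DRIVE_MBPS * N_DRIVES
print(array_mbps)                                # → 120
print(round(array_mbps / PCI_32_33_MBPS * 100))  # → 90 (% of the whole bus)
```

Leaving the GbE controller on CSA keeps those 120MB/s of disk traffic from competing with network traffic for the same shared 133MB/s.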

The HDD choice was rather easy, as the HGST (IBM/Hitachi) 7K250 was the fastest 7200rpm SATA drive for sale then (and still is today). I picked up three 160GB drives and have 320GB usable space in a RAID5 array. I also picked up a nice Intel SATA hot-swap drive cage to make use of the hot-swap capability.

As far as SATA RAID cards were concerned, I also considered other cards:

- HighPoint RocketRAID 1640 - cheap, but an entirely software implementation

- Adaptec, LSI Logic, 3Ware - too expensive (at least twice as much as the Promise when I purchased it at the start of this year)

So the Promise card was a nice choice. Plus I have a spare stick of 256MB PC133 SDRAM I can put into the cache slot on the card.

I looked at the advice offered on the forums here prior to picking the RAID card, and most of the more "advanced" posters here recommend 3Ware or LSI Logic over Promise (in fact I get the feeling the Promise card isn't too welcome around here).

But honestly, after 9 months of using it, I have found it does its job: no errors, no problems with the OS, no dropped HDDs or dropped arrays, and I run my server 24/7. The Promise array utilities work in Server 2k3 as well as XP and are easy to navigate.

I haven't tested my array with any benchmarking programmes, but I don't care much for a number telling me how my RAID performs. I just need it to work, and work well it does.

I can tell you that the server has no problem serving up 3 simultaneous movie files off the array, with no dropped frames and no glitched/dropped audio whatsoever. I can be streaming a movie to my laptop while doing backups on my parents' machine (constant writing of data) and transferring a large file to another, and everything runs as smoothly as expected. The transfer speed is in line with what a standard 100Mbps network can dish out (8-10MBytes/sec on the files).

To give you a comparison, the previous "server" I had was the same 2.2GHz Celeron but on an older 845 chipset board (with 1GB PC133 SDRAM), with an 80GB 800JB (boot, retained in the new server) and a 180GB 1800JB (storage) W.D. Caviar SE drive, each connected to a separate UltraATA/33 port on board, and I can't say it was entirely smooth. If I started a file transfer on it, an existing movie stream would start skipping frames, or MP3 audio would start to have "clicks" and skips as well.

The whole setup cost approximately (do note that these are start-of-this-year prices):

Motherboard: US$ 85 (01/2004)

Promise RAID card: US$ 235 (01/2004)

3 x 160GB HGST 7K250 HDDs: US$ 330 (or US$110 each) (01/2004)

Intel SATA Hot-Swap Cage: US$ 255 (01/2004)

Enermax 350W PSU: US$ 45 (01/2004)

So it cost me about US$950 for the setup. Of course there's still the CPU, memory, and other bits that I haven't included in the price since I already had those, but they should only add another US$200 or so.

Of course prices have fallen since, and I reckon you could get higher capacity for less if I were to build this again today - probably in the US$600-US$700 range for half a terabyte of storage.

Right now my 320GB array is about 70% full, and with the Hot-Swap cage I can just pop in another 160GB drive to boost the capacity to 480GB, or swap them out for 3 x 250GB = 500GB or 4 x 250GB = 750GB drive array at the end of this year.

The CPU utilization isn't much at all, so the 2.2GHz Celeron is enough, but a 2.8GHz "Prescott" Celeron is a nice upgrade possibility and those are cheap. The 865PE motherboard can probably last me another 2 years or so and supports up to the P4 Extreme Edition (when they fall in price in the future).

I went with the hardware RAID5 approach because I don't like doing it in software; I like to stick with what I know best (Windows, as opposed to learning and setting up everything in Linux for a complete software approach). Plus, after I calculated the costs (at the start of the year), there isn't much of a price difference between the two approaches (about US$100, maybe less). So I picked and built what I have and use today.

Well, that's my experience anyway, hope it gives you some help towards building your server. :)

P.S.: If you don't need a fancy hot-swap drive cage then you can build it cheaper than I did... but I reckon if you are going to do RAID5 with a hardware RAID card, you owe it to yourself to set it up with hot-swap cages as well. Just my opinion.


Thank you guys, you have all been very helpful!

@Bit

While the thread was an interesting read, I don't wanna go with an extra machine, i.e. an external RAID. I want it all to end up in my existing box; I've got enough cables running through my room already. ;-)

About write speeds: I understand that I will not get the same increase in write speed as I would with the kamikaze version of RAID (RAID0, that is...), but I should still get an increase compared to a single drive. Even if that is not true, as I said, performance is secondary; protection against failing drives is my primary goal.

I have just read your benchmark numbers (had to download OOo first). While CPU usage is about 30% versus 10% in software RAID, hardware RAID write performance appears to suck big time. Really disappointing. I wonder, what kind of caching strategy did you use? Could it be a problem with the 3Ware controller? Any links to numbers for other controllers?

Still, I prefer hardware RAID, since it should allow me to use the array with two operating systems, which probably will not work using software. I have no experience with software RAID5 anyway, so correct me if I am wrong. What software did you use? Reading your spreadsheet ("Bonnie", ext3, ...) I assume it was exclusively Linux; I want Windows as well (or, to be honest, only Windows for now, Linux to come eventually on that machine). I am not aware of any software RAID5 for Windows.

@Kakarot

What you describe is (controller- and disk-wise) very much what I plan to do: the cheaper Promise controller (I only have 128MB of Apacer PC133 lying around here, which I ripped out of my old Celery 400 that stood in the corner, unused and diskless, for about a year) with only three disks in the 160GB range. I am especially happy to hear that your RAID proved reliable, i.e. no software/driver problems whatsoever under the OS family I plan to use primarily. Like you, I want to consolidate quite a large number of media files in one place, instead of the current arrangement of distributing them across several hard disks on three different machines, always making sure to have a copy of each file on at least two different disks. That is not only wasteful, but also very inconvenient.

I am aware of the PCI bus bottleneck. My mobo has two 100Mbit Ethernet ports, one of which is built into the nForce2 chipset. I haven't checked, but I believe it is not connected via PCI. Even if I lost Fast Ethernet's 8MB/s, that wouldn't hurt too much. I don't need more than one media stream right now, and if I ever do, I will want a new board with CSA GbE anyway. ;-)

I am almost ready to hit that order button, but I am not totally sure about the drives yet. While I have a Hitachi as my primary working disk in my notebook, I still feel somewhat uneasy about Hitachi 3.5" drives. If I could just get hold of a serious benchmark of the latest Maxtor desktop drives (DiamondMax 10). Some retailers offer them with a three-year warranty, so they should not be inferior quality-wise in any aspect to the expensive MaxLine III, which is only available in 250 and 300GB variants. Since the DM10 offers NCQ, it appears to be an all-native SATA disk, which makes it preferable over the Hitachis even if the controller does not support NCQ.

A general observation is that most people use an extra disk for the OS. As I understand it, a hardware RAID5 should be bootable; is there any reason for not putting the OS on the RAID as well?

Still I prefer hardware RAID, since it should allow me to use the array with two operating systems, which probably will not work using software.

It definitely would not work w/ software RAID. However, it most likely wouldn't work with hardware either, at least not in the way you intend. You will not be able to just install Linux and still use the array. The reason is that you will be formatting the array with NTFS initially (I assume). Linux has the ability to read NTFS, but write support is very limited. ATM, you can only change data in a file, not create, truncate, append to or delete files. Someone correct me if I'm mistaken, but I can't recall much change in this regard for some time.

So, don't make a decision based on being able to use the array w/ both Windows and Linux as you will most likely have to put a new FS on the array after the switch anyways.

(However, I should note that a separate fileserver negates these problems. Both Windows and Linux can use the same methods to access the files then.)

-JoeTD


You are right about the reliability of the setup I have. The server originally had Windows Server 2003 on it (180-day evaluation version), which then expired; I installed Windows XP Pro on it, then got another copy of Server 2003 (180-day evaluation again), which expired, and now it's running Windows XP Pro again.

During all those OS transitions and my physically plugging/unplugging the Promise card (when I was tinkering with the server setup), the array remained intact (as it should). No data loss, no problems with the RAID controller detecting/re-detecting the drives whatsoever. The hot-swap works as well (as it should).

At first I had second thoughts about the HDDs I was going to use as well, since I previously owned two of the infamous IBM "DeathStar" 75GXP drives (a 20GB and a 30GB) and both developed the "click of death" (both RMA'ed and quickly auctioned/sold off). I was going to use WD Caviar SE drives, as I owned two 800JBs and a 1800JB, but the benchmarks here on StorageReview and the price made the decision for me to go with HGST 7K250s. Also, RAID5 is all about redundancy, so if something goes wrong my data will still be intact.

As a sidenote, I just upgraded my laptop's HDD to a HGST 7K60 and it works so much better than the old one (Hitachi 40GB 5400rpm). HGST seems to have gotten reliability back in check since the days of the IBM "DeathStar" (75GXP/120GXP).

There is no reason why you couldn't put your OS on the RAID5 array as well, but yes, as you have noticed, most tend to use separate HDDs for boot/OS. One of the reasons is that if your OS crashes, the chance of it bringing down the array decreases (since it doesn't reside on it)... that, and it just feels more "right" to keep the array for storage only, while running your OS/apps/games on their own HDDs or a separate array.


Forgot to add this in my previous post:

As far as TCQ/NCQ is concerned, reading the many reviews and articles on the web (both here on StorageReview and places like AnandTech), TCQ/NCQ will no doubt bring a slight performance increase over current drives in, say, a RAID5 setup, but I wouldn't worry too much about it (current SATA drives and controllers are already very fast) unless you are VERY picky and want every ounce of performance squeezed out no matter what the $ cost.

The HGST 7K250 actually features TCQ (since the drive isn't a native SATA implementation it can't be called NCQ (correct me if I am wrong)); the WD Raptor 740GD also supports TCQ. However, there are currently not many shipping SATA controllers that support TCQ/NCQ (or should I say they are not very common yet), let alone TCQ/NCQ-enabled RAID5 cards. The Intel ICH6R would be a good example of a current chipset/controller supporting NCQ.

The newly released Promise FastTrak TX4200 supports BOTH SATA TCQ and NCQ, so I'd say both the 7K250/Raptor (with SATA TCQ) and the newer Barracuda 7200.7 NCQ/7200.8/DM10 (with NCQ) are well supported and taken advantage of. Unfortunately it doesn't do RAID5 (only 0, 1 and 0+1). (StorageReview looked at the TX4200 paired with 740GD Raptors before.)


@JoeTheDestroyer

I am aware of the issues regarding file system support in Linux. Of course I will go with FAT32 if I decide to use the array from Linux. I considered a file server, but that would be overkill, since I do not have a whole family that needs its media files served 24/7. Having a second computer running is, for my purposes, a waste of energy. I would also have to go for Gigabit Ethernet, which would essentially mean at least one new mainboard, to avoid the PCI bottleneck of PCI GbE adapters.

@Kakarot

Having used a 6.4GB and a 10GB IBM hard disk, I was lucky with the Deathstars, as I changed from IBM to Maxtor at that time for no special reason. The 27GB and 80GB Maxtors are still working, although I hardly ever fire up the machine they reside in.

Having some extra speed running OS/games/apps off the array (which will be the only array in my house for quite a while) would be a nice touch, but like you said, the idea of mixing data and apps is not appealing. Since I am too lazy to reinstall all the stuff on the box, I will, at least for the time being, stick with system and apps on the WD1200JB.

Regarding the drives: I read the review of the MaxLine III, which is supposedly mechanically identical to the DM10. Its performance is quite a disappointment, and the DM10 with only 8MB cache should perform lower still. Maybe it was an issue with the reviewed sample, but unless we see benchmarks of another sample here on SR, we will not know for sure.

I still wonder whether losing the DM10's NCQ to the Promise controller is really a loss for my purposes. I saw some benchmarks (I believe it was in SR's TCQ/RAID/etc. article) that actually showed disks performing slower with NCQ in desktop use.


Didn't read your second post when writing my answer.

I have read the review that included the TX4200, and wondered how long it might take Promise to offer a RAID5 version of the TX4200 (or an NCQ version of the SX4, whichever you please ;-) ).

As Murphy's Law tends to strike me hard, it will be released about a day or two after I buy the "old" one (i.e. SX4).

Do I _need_ the capacity right now? -No.

But I still have some days of my vacation left, so now would just fit into my schedule. Perhaps I should stop trying to maximise whatever I do/buy and go with what is available now at a reasonable price.


I checked out that Maxtor DM10 some more. According to this spec sheet, it does not have a legacy power connector:

http://www.maxtor.com/_files/maxtor/en_us/..._quickspecs.pdf

Adapters from standard Molex to SATA power are cheap, but if I am informed correctly, the SATA power connector has a 3.3V rail. So if the Maxtor expects 3.3V, an adapter won't do.

Then comes the question of warranty. My favourite price-check site lists the DM10 as having 3 years of warranty. However, most resellers do not mention warranty at all; I called some and no one could tell me. "Ask the manufacturer" was the usual answer. Maxtor itself does not mention warranty periods for this drive on its web site either. So I should assume the worst, i.e. 1 year only. I do not buy drives with only a 1-year warranty, as a matter of principle; I don't throw money at a company that doesn't trust its own products.

Which brings me back to Hitachi. Apparently, my favourite retailer just received a shipment.

... if the Maxtor expects 3.3V, an adapter won't do ...

Yeah, but so far all SATA drives have actually used 5VDC instead of 3.3, so check the DM10 specifically before you decide it won't work.

-- Rick


I am surprised that no one has mentioned HighPoint Technologies for RAID controllers. I love my HighPoint RAID5 controller (the 404 version). Simple, cheap, ultra-reliable with my Windows Server 2003 Datacenter Edition (don't ask), using 4 WD 200GB JB HDDs. It is fast enough; no onboard XOR calculation, but the CPU is enough. I have 600GB of usable storage, and all the drives are hot-swap. And since it is ATA (they make a SATA version too), I can have up to 8 drives. So when the time comes, I can make a JBOD for data storage (stuff that I don't critically need). See, I buy something that has a purpose for the long haul, not the short term!

Give 'em a try, I like mine!

SCSA


I would really like to buy the HighPoint because it would save me quite a few bucks. But I wonder how OS-dependent these half-software-half-hardware solutions are. Not to forget CPU load. While Bit's numbers indicate that software RAID is faster, his observations only compare one 3Ware controller with pure software RAID.

How high is your CPU load when copying large files, and what is your CPU?


Forgot something: can't you edit posts in this forum? I can't seem to find a button...

"Giving it a try" is a problem. If I give Highpoint a try and decide it's crap, can I migrate the array to Promise without having to recreate it, loosing all data? I highly doubt this. This is especially a problem if the array is already filled, and if I don't have the capacity to store the files elsewhere.

So I would like a solution that is an almost sure hit right from the start. No switiching controllers (what to do with the other one? Yeah, "ebay", but I loathe selling at ebay...), no crappy drivers, no drivges dropped for no apparent reason etc.

BTW: Tomshardware (yeah, we all know it is crap, but better crappy numbers than none ;-) ) has a Cheap-SATA-RAID-5-controller review. Highpoint performs quite good, but good ol' Tom forgot to measure CPU load.

"Giving it a try" is a problem. If I give HighPoint a try and decide it's crap, can I migrate the array to Promise without having to recreate it, losing all data?

You are correct. Every RAID controller does things a little differently. You would have to back up your data, rebuild the array, and restore.

We use some Broadcom RAIDCore controllers. In normal daily use, CPU utilization is only 4% or so, but under heavy load it can spike up to 70%. That may be acceptable on a file server, but on a desktop machine it could well cripple other applications.

I am afraid the best solution for you is to buy a hardware RAID controller, LSI or 3ware...

I am afraid the best solution for you is to buy a hardware RAID controller, LSI or 3ware...

The problem with these two controllers is the steep price tag.

The 3Ware Escalade 9500S-4LP is about 330€, and for that price I expect native SATA support. The LSI Logic MegaRAID SATA 150-4 would be more acceptable at around 250€ and seems to be native SATA, but it is hard to come by.

I know Promise isn't highly regarded here, but if it were only a minor loss in performance (i.e. throughput, not CPU load!), I would be willing to accept that. It is only about 170€, I can reuse old memory, and it has hardware XOR. But if someone can direct me to (so far unsolved) critical problems with the Promise RAID 5 controllers that actually compromise the integrity of the array, I *might* be willing to pay the price for 3Ware.

Otherwise, while I still like the RAID5 idea very much, I will go with mirroring on my onboard controller for now and spend the money elsewhere.
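For reference, the capacity math behind that choice: RAID 1 leaves n/2 drives usable, while RAID 5 leaves n-1, so the parity overhead shrinks as the array grows. A quick sketch (the drive count and size are just example numbers):

```python
def usable_gb(level, drives, size_gb):
    """Usable capacity of a RAID 1 or RAID 5 array of identical drives."""
    if level == 1:
        return drives // 2 * size_gb   # mirrored pairs: half the raw capacity
    if level == 5:
        return (drives - 1) * size_gb  # one drive's worth goes to parity
    raise ValueError("only RAID 1 and RAID 5 handled here")

# Four 200 GB drives:
print(usable_gb(1, 4, 200))  # 400 (mirroring)
print(usable_gb(5, 4, 200))  # 600 (RAID 5)
```

With only two drives, RAID 5 degenerates to the same 50% overhead as mirroring; the savings only appear from three drives upward.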


Yes, for some strange reason people around here (esp. the more established members) seem to really dislike the Promise cards.

I visit here (Storage Review) nearly every time I am about to do major upgrades to my storage sub-systems. That is why I have in the past purchased IBM DeathStar 75GXPs (well, back then they were the hottest thing around, plus the reliability problem didn't show up until at least 9 months into the product's release... by which time I had already gotten mine), WD Caviar SE 800JBs and 1800JBs, and now the HGST Deskstar 7K250s.

From the various posts/comments and suggestions I was going to buy a 3Ware or LSI Logic SATA RAID card (seeing as it was going to be my first experience at RAID 5), BUT... their prices are too steep for me to justify buying them. That's why I have looked at and gotten the Promise card instead.

So far, after 9 months of constant 24/7 usage, the Promise card has never given me any problems at all, and performance is just as I expected it to be. So I am really puzzled as to why everyone just dismisses the Promise cards.

I know, of course, that 3Ware and LSI make better cards, perhaps with better support and feature sets, but does that justify the high cost of getting them? No. Another way of looking at it: "Sure, a BMW or Merc is going to be nice, with the best performance and high safety ratings, but can I afford one? No. A Honda is going to do the same job, probably with similar features and performance, at a cheaper price. So is there really any reason to NOT even consider the Honda? In my opinion, no."

The Promise does RAID 5 and has all the standard RAID features (online capacity migration, array migration, hot-swap, etc.). In short, it does everything I need it to do, just like the more expensive 3Ware or LSI cards... so why would I need/want to spend more for a 3Ware or LSI card?

P.S. Really, why do people here dislike the Promise card so much? Is there a KNOWN flaw in its hardware or software? Not that I know of... Is there a well-known reliability issue with the card (e.g. a high product failure/return rate)? Again, searching the web gives me a "no" as the answer... it just seems very strange to me.

P.P.S.: Of course, having said all the above, I would still choose the 3Ware or LSI cards (over Promise) IF I had the budget to do so... but the fact of the matter is I don't.


The only Promise product I recommend is a simple little TX2 card...

Promise has no (or very little) support for Linux, so it's a Windows-only product.

I've never had much luck with their RAID cards. (I only ever had one; that was enough.)

I am looking into building a cheap RAID 5 for my home rig.

You might find something of use in this thread. Dave Dreuding and I kicked the idea around for a while.

-- Rick

Did you ever get around to building the box?

And if yes, which parts did you use?

Best Regards

Theis


@Kakarot: Occupant mentions that Promise Linux support sucks. Did you ever try running your setup with Linux?

Linux support is no issue for me right now; the system I plan to place the RAID in is currently my Windows XP gaming machine and will stay that way for a while. I just need some handy and reasonably safe storage space, not 24/7 operation serving media files to several clients at once. But I might be interested in Linux support sooner or later, as I am considering building a Linux home server (for firewall/routing and file serving) if I ever find the time.

@Occupant: It seems Promise supplies drivers (a "kernel module") only for specific Linux distros (SuSE and Red Hat), which indicates binary-only. I am no Linux expert, but AFAIK binary drivers are locked to a specific kernel version, i.e. no security updates unless Promise delivers a new driver as well.

Is there an open source driver available by the community?
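On the kernel-version lock: a Linux binary module records a "vermagic" string that must match the running kernel before the module will load, which is exactly why a vendor-only driver stalls kernel updates. A rough sketch of that check; the `modinfo` output and module name here are made-up examples, not real Promise driver output:

```python
def vermagic_matches(modinfo_output, running_kernel):
    """Check whether a module's vermagic kernel version matches the running kernel."""
    for line in modinfo_output.splitlines():
        if line.startswith("vermagic:"):
            # The first token after the label is the kernel version string.
            module_kernel = line.split()[1]
            return module_kernel == running_kernel
    return False

# Hypothetical modinfo output for a vendor-supplied binary driver:
sample = "filename: fasttrak.ko\nvermagic: 2.4.21-SMP gcc-3.2"
print(vermagic_matches(sample, "2.4.21-SMP"))  # True
print(vermagic_matches(sample, "2.4.22-SMP"))  # False: kernel updated, module refused
```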

