About shoek

  1. I'm a software developer working on large projects in Visual Studio. We've experimented with putting the tools/OS on a single SSD and the code on a R0 array, and it is not as fast for build times as having everything on the same R0 array. We're starting to see that matched when building on Macs using Boot Camp, taking advantage of their PCIe SSD interface for the boot drive and a Thunderbolt R0 array for the code, but I'm not aware of similar technology on the PC.
  2. Hi Kevin - thanks for the reply. My budget is up to ~$750 for the controller, but of course I'd love to hear that I could max out the throughput of this array with something cheaper. I'd like to be able to move to the next gen of SATA/SAS (12Gb/s) when it becomes more mainstream, which is why I was thinking about the LSI 93x1 models. I've been running a 2-, 3-, and now 4-drive (SATA 3Gb/s) array on my current Intel i7-920/X58 system using the Intel chipset RAID for almost 5 years now and have not had a single drive failure. Perhaps this has made me overconfident, but I'm comfortable with the risk. -Steve
  3. Hi, I'm building a dual Xeon IVB-E machine and want to boot off of a RAID-0 array of 4 512GB SSDs (Samsung 840 Pro). The mobo has the Intel C602/X79 chipset, so there aren't enough Intel SATA3 ports. I'm looking for the HBA/RAID card that would be best for this use case:
  • LSI 9361/9341 - prepared for the future of 12Gb/s SAS/SATA; may be overkill, and early reviews on Newegg aren't that great
  • Areca 1882i - I love Areca for RAID-5, so this seems the safe choice
  • Adaptec 7805 - haven't had an Adaptec in a decade; not sure what to think
  Do any of these have a BIOS like Intel's RAID where you don't need F6-installed drivers to get Windows installed as a boot drive? I'm thinking no... What other cards should I be considering? TIA, -Steve
  4. I'd call that a press release re-hash, not a new article. But it does show that Eugene is still alive.
  5. I've looked at a number of threads here and at reviews from Tom's Hardware, but I thought I'd ask again to see what the current thinking is. I'm considering these three cards for a home server running 8 320GB Seagate 7200.10 drives in RAID5 (or RAID6 if the controller supports it):
  1) HighPoint RocketRAID 2220 - Marvell based, software RAID5, about $250
  2) Areca 1120 - Intel IOP331 based, hardware RAID5, about $300 used
  3) 3ware 9550SXU - 3ware ASIC, hardware RAID5, about $200 used
  I'm moving from a HighPoint RocketRAID 1820A, which seems to drop drives from the array if it is not shut down cleanly (e.g. a power outage), and does not support staggered spin-up or OCE or SMART monitoring or bad sector repair or ... Any thoughts? -shoek
  6. Hello all, Before I proceeded in handling this, I thought I'd do some research and seek advice so I don't screw anything up. I have a home server (dual Athlon XP) with a HighPoint 1820A PCI-X 8-port SATA card. I upgraded the array about 6 months ago to 8 Seagate Barracuda 7200.10 ST3320620AS 320GB drives. All was fine until a power outage here; when things came back up, the controller was complaining that the array was broken and that drive 4 was missing. Drive 4 is spinning and is recognized by the controller, so it is functioning at some level. The remaining 7-drive array is listed as "Critical" but is still accessible -- all of my data appears to be there, though speed is probably off a bit. Drive 4 is listed as "Disabled", with the option to delete it from the array.
  I went ahead and ordered another drive. My plan is to replace drive 4 with the new drive, then walk through the web-based management software to add the new disk to the array. My understanding is that it will rebuild that disk's parity contents at that point, which according to HighPoint is faster within Windows than through the BIOS. Do I have to delete the existing drive 4 from the array first before I power down to do the drive swap? Anything else I need to do or watch out for? I realize that HighPoint doesn't provide the best documentation or tools, but then again this is a somewhat smaller scale than what is required in a business setting. Thanks in advance for any help anyone can offer. -shoek
  7. I hope I didn't come across as challenging Eugene. I respect the work that he's done over the years here at SR, and I was curious to know how he'd respond to another review of this drive where the results seemed to contradict his (specifically in the area of multiple Raptor 74s in RAID0 versus a single Raptor 150). He does appear from his responses to be a bit intolerant of questions like these (or at the very least exasperated by them), so I may be wary of posting similar questions in the future... To summarize, his response was that the benchmarks used in that review are too low-level to draw conclusions about perceived performance, and the benchmarks SR uses model that better, so the results are not necessarily contradictory. One drive or array can have higher low-level benchmarks than another, but perceived performance running "real world" applications may actually be lower. Like most of us here, I have an interest in storage devices and their performance, but it is not my full-time job and of course I don't have a web site dedicated to it. I try to read as much as possible here but missed the saga of IOMeter and Eugene's history with that benchmark. I must have been distracted in November of 2001 to have missed that... Cheers! shoek
  8. If IOMeter's workstation pattern doesn't float your boat, pick any of the other benchmarks in the GamePC review, such as DiskBench or HDTach. Regardless of the benchmark, their conclusion is the same... 2+ Raptor 74s beat a single Raptor 150. The original question still stands... why do you think these benchmarks seem to contradict those of SR? Attribute it to the high-end controller? Blame it on the benchmarks or methodology not being as sound as those that SR uses? Thanks in advance for answering my question. shoek PS: a tangent... why do they call it the "workstation" pattern in IOMeter if it should only be used to measure "server" performance? Isn't a machine that heavily multitasks on, say, software development for the FEA application market, representative of the "workstation" that IOMeter attempts to model?
  9. Quoting Eugene: "Wrong. Here's a look at how two RAIDed configurations of the WD740GD on a basic RAID controller compare vs. a single WD1500ADFD: These figures were drawn from a large database of results compiled in preparation for a future article that will examine the performance of the WD740GD, the Seagate NL35, and the WD4000YR in multidrive configurations operating off of three separate RAID controllers. As demonstrated above, even a four-drive RAID0 array matches the WD1500 in only one out of five cases."
  Eugene, how do you respond to the recent Raptor 150 review, where they show that a dual, triple, or quad RAID0 array of WD740GD's beats a single Raptor? They used the highly respected Areca 1220 PCI Express card. I assume you used a Silicon Image SATA card? How do you think the nForce4 RAID would fare? Thanks, shoek
  10. Quoting Eugene: "Actually, I'm not even familiar with the WD4000KS... is it a real model or something that just sort of evolved from numerous typos (mine included)? The WD4000KD has a 150 MB/sec interface... as outlined in some other threads, for all intents and purposes, it's the same drive as the WD4000YR with a bit less burn-in period, TLER toggled off, and a shorter warranty."
  Thanks for the clarification Eugene. Is it true that another difference between the YR and KD is that the KD does not have NCQ? In a single-user environment but with heavy multi-tasking, is NCQ usually a benefit (using the nForce4 SATA controller, for instance)? Thanks, -shoek
  11. I think you mean the WD4000KD is virtually the same drive as the WD4000YR, which uses mostly Raptor-style physical and electronic parts, right?
  12. A scary thought. I didn't mention in my OP that I have done tests on the server doing repeated MD5 checks and copying the files to another, non-RAIDed drive, and there was never any corruption... -shoek
  13. Yes, the workstation's onboard Intel NIC and the Broadcom BCM5703 card have these features. However, I have disabled them all and the problem still occurs. This was, in fact, Broadcom's suggestion to me. Thanks, -shoek
  14. I'll check those threads right now, thanks. I have introduced a third computer, albeit a slow machine (P3-1GHz with a 5400rpm HD); although it has a gigabit NIC, I don't get the same throughput between the server and that machine that I do between the server and my workstation. However, the files transferred just fine. This leads me to believe it is something in my workstation, or maybe the switch is failing at high transfer rates (30-40MB/s or so). Of course I hope it isn't the motherboard... The Asus NCCH-DL uses the proven Intel 875P northbridge. Another thing I plan to try is different memory in the workstation. But wouldn't bad memory manifest itself in other ways besides just when copying large files over a gigabit network? -shoek
  15. I'm hoping someone can give me something to go on here, because I've been pulling my hair out for about a week now on this one... I just built a new workstation, and in the course of getting the machine up and running I was using TrueImage to restore a disk image stored on the server on my gigabit network. The restore failed with an "Image Corrupted" message. Long story short, I find that when I transfer very large files (from >2GB up to 35GB) between my server and workstation, I get invisible corruption of the file. By invisible I mean that I would never have known the file was corrupted until I went to use it, had TrueImage's verify not failed on me. I subsequently used MD5sum on the file to verify its corruption (see Test Case below).
  My setup:
  • Server: Asus A7M266-D, dual Athlon 2400's, 512MB RAM; HighPoint RocketRAID 1820A with 6 Maxtor DM9+ 160GB's in RAID5 (in 64bit/66MHz slot); Broadcom BCM5701-based gigabit NIC (in 64bit/66MHz slot)
  • Workstation: Asus NCCH-DL, dual Xeon 3.0GHz, 1GB DDR400 RAM; Maxtor DM9+ 80GB IDE drive; Broadcom BCM5703-based gigabit NIC (in 64bit/66MHz slot); Intel Pro/1000 CT onboard gigabit NIC (using the CSA 266MHz connection to the 875P northbridge)
  • Network: Netgear GS105 gigabit switch; 30-40ft CAT5e cable between workstation and switch; 2ft CAT5e patch cable between server and switch
  Test case:
  1) Using Windows Explorer, find the test file on the server's share
  2) Copy the file to the workstation's drive. I get 20-40MB/s throughput when doing this (pretty good for gigabit IMO)
  3) MD5sum the file on the workstation
  4) Remote desktop into the server, MD5sum the file on the server
  5) THE FILES ARE DIFFERENT!
  What I've tried (that didn't help):
  • Ran new CAT5e cable
  • Tried both the Intel and Broadcom gigabit NICs in the workstation
  • Turned off Large Send Offload and Checksum Offload on the workstation's NIC (the server's NIC does not support these features)
  • Tried a virgin install of XP on the workstation
  • Updated the Broadcom BCM5703 BIOS to the latest
  • Updated the NIC drivers to the latest
  • Wrote Broadcom support - they suggested turning off Large Send Offload and/or Checksum Offload, but this did not help
  • Popped an old Netgear 10/100 NIC into the workstation and, while slow, it transferred the file just fine
  • Transferred from the server to a P3-1GHz machine with an Intel Pro/1000 MT Desktop NIC and, while slower (i.e. 15MB/s), the file transferred just fine
  What I intend to try:
  • Wait for a new gigabit switch that supports jumbo frames
  • Try the BCM5703 in a 32bit/33MHz PCI slot in the workstation
  One thing that I should mention... after a late night trying to figure this out, something got me on the idea of playing with the RWIN and MTU and other TCP/IP parameters on the workstation. I used TCPOptimizer to set the RWIN large (~500000 bytes) and the MTU to 1500, plus some other settings. I did a test with a 6GB file and it worked, so I was encouraged and went to sleep. Next morning I tried the big 35GB file and it corrupted it. Anyone have any ideas that I can try? Thanks in advance everyone... -shoek
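  For anyone wanting to automate the manual MD5 comparison in the test case above, here's a minimal Python sketch. It hashes each copy in 1 MiB chunks (so a 35GB file never has to fit in RAM) and flags any mismatch; the function names and paths are illustrative, not a specific tool shoek used:

  ```python
  import hashlib

  def md5_file(path, chunk_size=1 << 20):
      """Compute the hex MD5 digest of a file, reading 1 MiB at a time."""
      digest = hashlib.md5()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(chunk_size), b""):
              digest.update(chunk)
      return digest.hexdigest()

  def verify_copy(local_path, remote_path):
      """Return True if both copies hash identically, else report the digests."""
      local, remote = md5_file(local_path), md5_file(remote_path)
      if local != remote:
          print(f"CORRUPTED: local {local} != remote {remote}")
      return local == remote
  ```

  Running something like `verify_copy(r"D:\backup.tib", r"\\server\share\backup.tib")` (hypothetical paths, with the server copy reached via a UNC path) after each large transfer would catch silent corruption immediately, rather than waiting for TrueImage's verify step to fail.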