jwb

Member
  • Content Count: 21
  • Joined
  • Last visited

Everything posted by jwb

  1. Supermicro has been selling hot-swap SAS racks for this form factor for about a year. For example, the CSE-M28E1 holds eight 2.5" SAS (or SATA) disks.
  2. I would try the WD RE2 500GB disk instead. In my experience you get far more IOPS from these disks than from Seagate 7200.9. I think the WD copes better with being in a rack full of disks (something the Seagate Cheetah series also does well).
  3. jwb

    HDD coolers - harmfull?

    If the cooler has a fan, and the fan is hard-coupled to the drive, and the fan vibrates, this could certainly lead to a shortened life and degraded performance. Also, drive coolers are stupid. Drives are perfectly capable of cooling themselves if mounted properly, which is to say vertically. Mount them horizontally and they are sure to cook themselves.
  4. jwb

    Areca 1260 Native PCI-E or Bridged?

    I know this is an old thread, but I just had to correct the wrongness here. Post #3 is wrong: the Areca cards are not "native" PCI-Express. The other posts are correct: it doesn't matter. The Intel CPU used on the Areca controller has a PCI-Express bridge with two PCI-X buses. The controller itself is on one PCI-X bus, and the other is unused. The performance impact of the signalling bridge is indistinguishable from zero. If you thought it was a problem, you'd probably be really upset about the HyperTransport-to-PCI-Express bridge, right?
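
    To put rough numbers on it (standard link figures, not anything measured on an actual Areca card): a 64-bit/133MHz PCI-X segment is good for roughly 1 GB/s and a PCIe x8 link for roughly 2 GB/s, both more than a shelf of drives can deliver. A back-of-the-envelope sketch:

      # Back-of-the-envelope bandwidth comparison. Standard link figures only;
      # nothing here is measured on an actual Areca card.
      pcix_mb_s = 64 / 8 * 133        # 64-bit PCI-X at 133 MHz -> ~1064 MB/s
      pcie_x8_mb_s = 8 * 250          # PCIe 1.x, ~250 MB/s usable per lane -> 2000 MB/s

      # Assume 16 SATA drives streaming ~60 MB/s each (an assumed, generous
      # sequential figure for drives of that era) as an illustrative worst case.
      drives = 16
      per_drive_mb_s = 60
      array_mb_s = drives * per_drive_mb_s    # 960 MB/s

      print(f"PCI-X segment  : {pcix_mb_s:.0f} MB/s")
      print(f"PCIe x8 link   : {pcie_x8_mb_s:.0f} MB/s")
      print(f"16-drive burst : {array_mb_s:.0f} MB/s")
      # The array tops out below either link, so the PCIe-to-PCI-X bridge is
      # not where a bottleneck would show up.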
  5. Most of the replies in this thread are incorrect, with the exception of #7. You can get SATA port multipliers, and they are supported at the SATA-II protocol level. I am experimenting with one of these, a 5-port multiplier, and so far it has worked flawlessly with an Areca 8-port RAID controller and Seagate disks.
  6. I'd like to see the 12V and 5V current listed separately. It would be very helpful when building machines. I'm building a machine now with 15 disks, and Seagate's documentation of max current consumption on the 7200.8 is just plain incorrect. Some science would be nice. SCIENCE!
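
    To show why the split matters, here is a rough power-budget sketch for a box like mine; the per-drive currents are made-up placeholders (which is exactly the problem):

      # Rough power-budget sketch for a 15-disk box. The per-drive currents are
      # ILLUSTRATIVE PLACEHOLDERS, not measured or vendor figures -- which is
      # exactly why separately listed 12V/5V numbers would be so useful.
      disks = 15
      spinup_12v_a = 2.5   # assumed peak 12V draw per disk at spin-up (A)
      idle_12v_a   = 0.6   # assumed steady-state 12V draw per disk (A)
      logic_5v_a   = 0.8   # assumed 5V draw per disk (A)

      print(f"12V steady state       : {disks * idle_12v_a:.1f} A")
      print(f"12V if all spin at once: {disks * spinup_12v_a:.1f} A")
      print(f"5V  steady state       : {disks * logic_5v_a:.1f} A")
      # A PSU rated on combined wattage alone can still fall over on the 12V
      # rail during a simultaneous spin-up, which is why the split matters.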
  7. Any word on server benchmarks? Your average queue lengths are so short that I'm sure the desktop storagemark is totally unrelated to my database and fileserver workloads.
  8. Atlas 10K V 300GB. It's a nice big, fast disk but helloooo price tag. Guess I'll hold off on building the 800-node Lustre filesystem from these...
  9. Okay, lots of stuff to respond to.

    The driver only "comes with" the SATA adapter if you are trapped in the Windows prison. On Linux, FreeBSD, et al. the driver authors and the hardware vendors are orthogonal. So it's possible to have command queueing even if the Windows driver does not support it.

    Secondly, I think it is too bad that software RAID wasn't tested. Quiz: what makes a better RAID processor, 1) an Opteron 244, or 2) an Intel i960? If you picked 1, you picked correctly. And the Mylex 170, what a dog. It does not exactly have a reputation for speed. Software RAIDs will blow its proverbial doors off.

    Also, it is easy to disable or limit command queueing on some SCSI HBAs. On Adaptec HBAs the depth can be clamped anywhere between 0 and 255, on a per-target basis.

    I do take some minor issue with how the article is presented, especially its conclusions. I think the definition of a single-user or desktop user is vague. Many people do absurd operations on their personal computers that would make some servers blush. I personally have a PostgreSQL database that exceeds a terabyte, and I like to compile very large trees of software. I use a RAID 5 because I dislike losing my data, and there is a significant benefit to compile time from the RAID setup. The RAID 5 setup also improves performance on metadata-intensive workloads, like tar/untar or cvs update.

    Finally, thanks for a nice article with some good hard data.
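
    For what it's worth, on a reasonably current Linux box you can inspect and clamp the per-device queue depth from userspace through sysfs. A minimal sketch (the device name sda and the depth of 32 are just examples, and this is the generic SCSI knob, not the Adaptec BIOS setting):

      # Minimal sketch: read and clamp the tagged queue depth of one SCSI disk
      # via the standard Linux sysfs attribute. Run as root. The device name
      # "sda" and the depth of 32 are illustrative, not recommendations.
      from pathlib import Path

      dev = "sda"  # example device
      attr = Path(f"/sys/block/{dev}/device/queue_depth")

      current = int(attr.read_text().strip())
      print(f"{dev}: current queue depth = {current}")

      new_depth = 32  # clamp to something modest for a latency-sensitive workload
      attr.write_text(str(new_depth))
      print(f"{dev}: queue depth set to {new_depth}")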
  10. jwb

    Why Are People Buying S-ata?

    The "rounded" PATA cables are a signal integrity nightmare. I won't even allow them in my office, even in desktop computers. The main buyers of these cables must be people with no understanding of electromagnetism at all.
  11. You are going to have to be a little more specific. How is the RAID attached? SCSI? What kind of HBA? What hardware platform specifically? How does the machine boot? From CD, DVD, network?
  12. jwb

    Recertified Cheetah 15k.3

    I used to know a fellow who worked on the assembly line at Seagate's integration factory in Oklahoma City. Based on the stories he told of how failed, but new, devices were repaired by hand and sent out as new, I'd be inclined to trust a refurbished device just as much as a new one. Also I agree with the upthread poster. I have piles of failed Hitachis and Fujitsus, but not a single failed Seagate SCSI drive.
  13. jwb

    Your Worst Drive Failure?

    Once I had a SCSI RAID set that was mirrored in pairs like this: 0,1 : 2,3 : 4,5 : 6,7. Each pair was a RAID 1. The data was my company's entire collected research, for which there existed no current backup.

    Well anyway, disk 0 failed, so I pulled it. Lo and behold, the next day the host, which had been running for at least a year, needed a reboot. So I rebooted it. Unfortunately the RAID driver decided it needed to resync the arrays, and it was performing the resync thusly: 1 -> 2 : 3 -> 4 : 5 -> 6. I wouldn't have noticed except there was no activity on device 7.

    Luckily I still had a good copy of each mirror, on devices 1, 3, 5, and 7. Needless to say I cut the power in a panic once I finished scratching my head. Then I unmounted the four good devices, set the hardware read-only jumper, and mirrored them on my administrative desktop. Definitely a heart-attack moment.
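
    To make the failure mode concrete, here is a toy sketch (not the actual driver logic): if the driver rebuilds mirror pairs by walking the surviving disks in order instead of remembering the original pairing, pulling disk 0 shifts every pair by one.

      # Toy illustration of the failure mode, not real RAID driver code.
      # Original layout: mirror pairs (0,1), (2,3), (4,5), (6,7).
      original_pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]

      # Disk 0 is pulled, then the host reboots and a naive driver simply
      # walks the surviving disks in order and pairs them up again.
      surviving = [d for pair in original_pairs for d in pair if d != 0]   # [1,2,3,4,5,6,7]
      naive_pairs = list(zip(surviving[0::2], surviving[1::2]))            # [(1,2),(3,4),(5,6)]

      print("naive resync:", ", ".join(f"{a} -> {b}" for a, b in naive_pairs))
      # naive resync: 1 -> 2, 3 -> 4, 5 -> 6  -- disk 7 idle, exactly what I saw.
      # The odd-numbered disks 1, 3, 5 and the untouched 7 still hold one good
      # copy of each original mirror, which is what saved the data.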
  14. jwb

    Why Are People Buying S-ata?

    I started buying SATA disks in my datacenter at work. SCSI is just ridiculous $/GB these days and, although I still need SCSI for the absolute highest-performance applications, the WD Raptor is a very fast disk and suffices in all other locations.

    With the 3Ware SATA controllers I get 12 SATA ports per PCI-X slot, and the mainboards I've been using have 4 SATA ports as well. I have no driver problems with the 3Ware because they are identical to the 3Ware PATA controller cards as far as the host is concerned. We're talking $550 for 12 ports, which is comparable to PCI-X U320 adapters.

    Supermicro offers chassis with hot-swap SATA bays and backplanes. I think these are great. I expect to take delivery of one this week in fact, and I'm going to jam 4x74GB Raptors and 4x400GB Hitachi disks in that sucker. With PATA I wouldn't have this nice hot-swap case and I'd have a lot more cable clutter. I don't ever want to see a host with 16 PATA ports, thanks.
  15. jwb

    300gb Scsi Drive

    "Incorrect. It uses 4-platters as do all 10K SCSI drives of its generation. This information is easily found on their website."

    You're a smart one, eh?

    http://www.hgst.com/hdd/support/dk/3/32ej14spec.html
    "Number of Disks: 5"

    Cheers.
  16. jwb

    300gb Scsi Drive

    Hitachi's existing line of 147GB SCSI disks also has 5 platters. For the new 300GB model, they doubled the areal density. There is only one response for this disk in the reliability survey.
  17. It is a shame that Storage Review ignores Linux, but I can imagine their workload is already overlarge. To pick up the slack, I will be happy to provide you with some numbers of my own.

    Disclosure: Linux kernel 2.4.21-pre3 running on SMP AMD64 with 8GB memory. The IDE disk is a WD1200JB (8MB cache) on the AMD 8111 controller. The SCSI disks are 4 each Seagate 15k.3 36GB ST336753LC on an Adaptec 39320D PCI-X controller. Unfortunately this bus is configured as U160 instead of U320, because of cabling quality problems. Shameful, I know. In the U320 configuration, the SCSI RAID can put over 300MB/s to the disk. Yeehaw.

    Stumbling back to the point, I needed to benchmark this machine to calculate the expected improvement over my previous database server, a 2-way Pentium III with Seagate 10k.6 storage. I chose to use tiobench, a threaded I/O benchmark. tiobench measures sequential and random read and write performance with a large number of concurrent processes. I used 32 processes for this benchmark with a dataset of 16GB.

    The first tested device is a (software) RAID 0 of all four SCSI disks. Yeah jeebus it is fast. It was when the array exceeded 250MB/s that I detected my dysfunctional cabling and reduced the bus speed to 160MB/s. Consider this handicap when interpreting the results. All results in MB/s.

    4-way SCSI RAID 0
      Seq. Read           95.53
      Seq. Write          76.06
      Rand. Read           8.64
      Rand. Write          7.59
      Read Service Time   12 ms

    You can see this setup smokes. However, I have no intention of operating the array in this manner. What I will actually be doing is using 8 drives on two buses in RAID 1 pairs, with various databases on each pair. So let us benchmark a 2x2 RAID 10 setup:

    2-way RAID 0 over 2 each 2-way RAID 1
      Seq. Read           74.42
      Seq. Write          40.36
      Rand. Read           8.72
      Rand. Write          3.74
      Read Service Time   12 ms

    We obviously lost some performance going from four stripes to only two, and the mirrored writes take their toll. Reads are 22% slower and writes are 47% slower. Still, it hauls, and random reads benefit from RAID 1 read balancing.

    The next benchmark is a single SCSI disk, for comparison with the lone IDE disk:

    Seagate Cheetah 15k.3 36.7GB SCSI disk
      Seq. Read           22.79
      Seq. Write          24.61
      Rand. Read           2.57
      Rand. Write          1.73
      Read Service Time   44 ms

    The performance of the Seagate by itself is close to what we might derive from the RAID performance. This disk suffers a slight disadvantage versus the Western Digital in that it must use half its capacity for the dataset, where the competitor uses only 12%. This might allow the Western Digital better locality of seeks. Let's find out:

    Western Digital Caviar WD1200JB IDE disk
      Seq. Read           15.77
      Seq. Write          27.40
      Rand. Read           1.02
      Rand. Write          0.82
      Read Service Time   89 ms

    The IDE disk produces a noble effort. In random performance the Seagate is 152% and 111% faster for reads and writes respectively (as we might expect from the 2:1 rotational speed advantage enjoyed by the SCSI unit), but sequential performance is a split decision: the Seagate takes sequential reads by 45%, while the WDC tops its rival by 11% in sequential writes.

    I hope you enjoyed this small window into Linux storage performance. If I need to run this test again I'm definitely going to reboot with mem=128M. 16GB files take too long (especially on the IDE drive ... ugh).

    Cheers, jwb
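
    If you want to check my percentages, here is a quick scratchpad that derives them from the MB/s figures above (just arithmetic, no new data):

      # Quick arithmetic check of the percentages quoted above.
      # All figures are the MB/s numbers from the tiobench runs; nothing new.
      raid0  = {"seq_read": 95.53, "seq_write": 76.06, "rand_read": 8.64, "rand_write": 7.59}
      raid10 = {"seq_read": 74.42, "seq_write": 40.36, "rand_read": 8.72, "rand_write": 3.74}
      scsi   = {"seq_read": 22.79, "seq_write": 24.61, "rand_read": 2.57, "rand_write": 1.73}
      ide    = {"seq_read": 15.77, "seq_write": 27.40, "rand_read": 1.02, "rand_write": 0.82}

      def slower(new, old):           # how much slower `new` is than `old`
          return 100 * (old - new) / old

      def faster(a, b):               # how much faster `a` is than `b`
          return 100 * (a - b) / b

      print(f"RAID 10 vs RAID 0: reads {slower(raid10['seq_read'], raid0['seq_read']):.0f}% slower, "
            f"writes {slower(raid10['seq_write'], raid0['seq_write']):.0f}% slower")
      print(f"SCSI vs IDE random: reads {faster(scsi['rand_read'], ide['rand_read']):.0f}% faster, "
            f"writes {faster(scsi['rand_write'], ide['rand_write']):.0f}% faster")
      print(f"Sequential: SCSI reads {faster(scsi['seq_read'], ide['seq_read']):.0f}% faster, "
            f"IDE writes {faster(ide['seq_write'], scsi['seq_write']):.0f}% faster")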
  18. jwb

    Slow benchmark with 15k.3

    When the machine boots, the Adaptec BIOS will enumerate all the devices on your bus and list their bus speed. If it doesn't, download a Linux boot disk with the AIC7xxx driver and boot it. The driver is quite verbose about what's going on. For example, this is from one of my machines in the data center:

      Channel A Target 8 Negotiation Settings
        User: 160.000MB/s transfers (80.000MHz DT|IU|QAS, 16bit)
        Goal: 160.000MB/s transfers (80.000MHz DT|IU|QAS, 16bit)
        Curr: 160.000MB/s transfers (80.000MHz DT|IU|QAS, 16bit)
        Transmission Errors 0
      Channel A Target 8 Lun 0 Settings
        Commands Queued 2217041
        Commands Active 0
        Command Openings 32
        Max Tagged Openings 32
        Device Queue Frozen Count 0

    Alas, this is a SCSI U320-capable bus, but I had a bad cable and had to force it down to 160. Sniff.
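
    If you would rather not read the whole boot log, here is a minimal sketch that pulls the negotiated-rate lines out of the driver's proc file; the path assumes the aic7xxx driver and host number 0, so adjust for your own machine:

      # Minimal sketch: pull the negotiated transfer-rate lines out of the
      # aic7xxx driver's proc file. The host number (0) is an example; look
      # in /proc/scsi/ to see what your machine actually exposes.
      from pathlib import Path

      proc_file = Path("/proc/scsi/aic7xxx/0")  # adjust driver name / host number

      for line in proc_file.read_text().splitlines():
          stripped = line.strip()
          # "Curr:" shows the speed actually negotiated with each target.
          if stripped.startswith(("Channel", "Curr:")):
              print(stripped)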
  19. The MAS3735 does not really seem to be "in the channel" as such yet. I bet most of the production is going to Sun, Dell, IBM, et al. I tried to order 9 MAS3735 from newegg.com, but they would only allow me to have 2. Big-name distributors like CDW do not have them at all yet.
  20. jwb

    Most reliable HD?

    I think if you check the reliability survey you will see that the Seagate Cheetah and Barracuda lines, along with the Quantum Atlas 10K series, have been extremely reliable. I've never seen any of these lines fail, while I've seen a number of Fujitsu, Hitachi, and IBM drives bite the dust, along with countless IDE drives from all makers.
  21. jwb

    Cheetah 15K.3

    I cannot wait for the writeup on the 15k.3. If it really is quieter than the WD1200JB, that will be completely off the hook. I replaced my two Atlas 10K drives with a WD1200 because I couldn't stand the noise from the Atlas any longer. But, the WD1200 is really amazingly slow by my standards. I would love to get back to SCSI with a fast and quiet 15K disk!