cffrost

Member
  • Content Count

    63

Community Reputation

0 Neutral

About cffrost

  • Rank
    Member

Profile Information

  • Location
    New York
  1. It is my hope that the existing Performance Database will be archived and replaced with test results performed on whatever platform has been in use since SR was resurrected. Comments welcome; thank you.
  2. cffrost

    Optimal SATA port arrangement

    I think A would be better; distributing drives among controllers is a proven technique for achieving good performance. If that arrangement still doesn't perform well, try the single drive on its own controller and the R5 640s on the other two controllers.
  3. cffrost

    Western Digital RE2 vs. RE3

    I've read this sentiment before, but recall no specifics. Correct me if I'm wrong, but I'm assuming the WD RAID complaints are directed at the RE and RE2 lines, given their target application. I have a few questions about these problems: Have they affected all recent WD lines with similar severity? (I realize that anecdotal evidence may be the best that's available.) How do these problems manifest themselves (e.g., low performance, premature hardware failure, frequent array corruption, frequent eviction of member disks)? Do they occur regardless of the RAID level employed? (I generally stick to levels 1 and 10.) Thank you.
  4. An article entitled Western Digital working on 20,000 RPM Raptor appeared on bit-tech.net on 2008-06-06, and was picked up by The Tech Report later that day. The bit-tech article states... and goes on to say... If this is true, WDC could leap past Seagate, Fujitsu, and Hitachi in enterprise performance. If WDC maintained their exclusive commitment to SATA with this HDD, it could place the current performance-enterprise HDD manufacturers in an awkward position, at least until they got their own 20k RPM products out the door. That shouldn't take long, given the rapidity with which HDD manufacturers have historically implemented each other's technical innovations.
  5. cffrost

    New WD drives with "dual processors"?

    Maxtor used a similar gimmick about a decade ago. Their brick-and-mortar retail packages touted Maxtor's "Dual Wave" (controller/ASIC/processor/"chip"/ADC... I don't remember what marketing jargon they used to identify the part, but they were definitely referring to IC(s) on their disk PCBs). In any case, there's no way in hell there's any silicon constraining the performance of a mechanical HDD. The only (marginal) exception would be using a disk with firmware that was very poorly optimized for your particular application. As for the 32MB disk buffer, they're already common on 512-1024... oops, what am I saying... 32MB buffers are already common on 500-1000GB disks, but buffer size is subject to diminishing returns, though large buffers regain some value when the underlying disk is very high capacity. I speculate that enterprise disk buffers remain relatively small because they target applications with mostly random R/Ws, resulting in a high miss rate for cached reads and diminished value for large, buffered writes, due to the high performance of the mechanical drive components.
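
    Purely as a back-of-the-envelope illustration of that last point (hypothetical sizes, a toy FIFO-plus-readahead model in Python, not any vendor's actual caching algorithm), a sketch:

    ```python
    # Toy model: compare read-buffer hit rates under sequential vs. random access.
    # All sizes are invented round numbers for illustration only.
    import random

    DISK_BLOCKS  = 2_000_000   # pretend large disk divided into "blocks"
    CACHE_BLOCKS = 64          # pretend on-drive buffer holds 64 blocks
    READS        = 100_000

    def hit_rate(next_block, readahead=8):
        cache, hits, block = [], 0, 0
        for _ in range(READS):
            block = next_block(block)
            if block in cache:
                hits += 1
            else:
                # miss: fetch the block plus a small readahead window into the buffer
                cache.extend(b % DISK_BLOCKS for b in range(block, block + readahead))
                cache = cache[-CACHE_BLOCKS:]          # evict the oldest entries
        return hits / READS

    sequential = lambda b: (b + 1) % DISK_BLOCKS           # streaming read
    random_io  = lambda b: random.randrange(DISK_BLOCKS)   # server-style random reads

    print(f"sequential hit rate: {hit_rate(sequential):.1%}")   # high, thanks to readahead
    print(f"random hit rate:     {hit_rate(random_io):.1%}")    # essentially zero
    ```

    With random access the buffer almost never holds the next requested block, which is the sense in which a bigger cache buys little for transactional workloads.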
  6. cffrost

    IDE Raid-0 Array Vs. WD 640GB

    I've read plenty of horror stories (on Newegg itself, as well as on independent vendor-rating sites) regarding Newegg's HDD packaging for shipping. I don't recall the details of what their method(s) were, only that they did not achieve the safety of the usual method: HDD in a small cushioned box, within a larger box filled with shock-dampening packing material. The complaints I read said Newegg's packaging had delivered DOA HDDs. Does anyone know if Newegg has begun shipping disks the way other quality vendors have been? By the way, speaking of Newegg and HDDs, I strongly recommend Provantage over Newegg for SCSI/SAS HDDs. I ordered a handful of MAX3147s from Provantage (in 2007/12), and got a lower price, quality packaging, and the full 5-year warranty vs. Newegg's standard 1-year warranty on enterprise disks.
  7. cffrost

    Different drives for RAID-5

    I suggest buying the same model from different retailers, or from the same retailer with multiple orders separated by a delay (duration possibly guesstimated as inversely proportional to the size/volume/popularity of the retailer), or both. R5 is slow as hell as it is, even with (in my experience) a fairly high-end Adaptec 600MHz/256MB SAS controller. Having different drive models constantly out of synchronization (I'm talking physically, not data integrity) can't do anything good for the already-stinker performance, and I further speculate a possible increase in actuator movement and missed seeks adding rotational latency, culminating in increased wear... No way I'd do it. If you're going to use more of the same Seagate 500GB model, you've already got one good early-failure candidate. You only need to separate the orders of the other two so you don't wind up with a dual-drive suicide pact on your hands after the disk you've already got fails.
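
    For what it's worth, here is a crude Monte Carlo sketch of why staggering orders helps; every probability below is invented purely for illustration, not a real field-failure figure:

    ```python
    # Crude Monte Carlo: chance of losing a 3-disk RAID-5 (two or more failures
    # in the same period) when drives share one manufacturing batch vs. come
    # from three batches. All probabilities are made-up illustrative numbers.
    import random

    P_BAD_BATCH = 0.05   # hypothetical share of batches with a latent defect
    P_FAIL_GOOD = 0.02   # hypothetical failure rate, drive from a healthy batch
    P_FAIL_BAD  = 0.30   # hypothetical failure rate, drive from a defective batch
    TRIALS      = 200_000

    def array_lost(same_batch):
        if same_batch:
            batches = [random.random() < P_BAD_BATCH] * 3          # shared fate
        else:
            batches = [random.random() < P_BAD_BATCH for _ in range(3)]
        failures = sum(random.random() < (P_FAIL_BAD if bad else P_FAIL_GOOD)
                       for bad in batches)
        return failures >= 2                                       # R5 survives only one

    for label, same in (("same batch", True), ("separate batches", False)):
        lost = sum(array_lost(same) for _ in range(TRIALS)) / TRIALS
        print(f"{label:16s}: array lost in {lost:.2%} of trials")
    ```

    The absolute numbers mean nothing; the point is that a shared batch couples the failures, which is exactly what single-parity RAID can't absorb.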
  8. Does this occur while in the BIOS, or with the data cable disconnected? Have you tried running the manufacturer's diagnostic utility? Are there disk errors present in the Windows System Log? Have you tried disabling the physical and logical disk performance counters ("diskperf -n" at a command prompt)? Have you checked the SMART attributes with a utility that can display them (Active SMART, HD Tune, etc.)? Does Task Manager show any processes trickling I/O activity? Have you inspected your machine for malware?
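
    As an alternative to the GUI tools mentioned above, smartmontools' smartctl will dump the same attribute table; a minimal Python wrapper might look like the sketch below (it assumes smartctl is installed and on the PATH, and "/dev/sda" is only an example device name; the right name varies by platform).

    ```python
    # Minimal sketch: print a drive's SMART attribute table via smartmontools.
    # Assumes smartctl is installed and on PATH; "/dev/sda" is just an example.
    import subprocess
    import sys

    def smart_attributes(device="/dev/sda"):
        result = subprocess.run(["smartctl", "-A", device],   # -A prints the attribute table
                                capture_output=True, text=True)
        if not result.stdout:
            sys.exit(f"smartctl produced no output: {result.stderr.strip()}")
        return result.stdout

    if __name__ == "__main__":
        print(smart_attributes(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"))
    ```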
  9. cffrost

    Hard Drive Setup Suggestions

    Locating the OS and applications/games on separate physical disks is the way to go for maximum performance, especially when using SCSI/SAS/SATA disks, all of which provide simultaneous access to multiple disks. Most apps, especially large, expensive ones like Office/Photoshop, will pound both disks heavily and achieve a very noticeable performance gain, especially when starting.

    However, I strongly discourage moving "Program Files" or any other default path other than "My [Documents|Pictures]." Moving the other defaults will result in a nasty mess of duplicated, orphaned, dependent, and/or unmovable files in the original and modified locations, as well as screw up the organization of your user profile paths. The most convenient and problem-free scheme is to just leave "Program Files" set on C:, and create another "Program Files" directory in the root of your designated Programs partition. When you install an application, simply change the drive letter from C: to the letter of your Programs partition. Sloppy apps that force themselves to C:\Program Files\ can still get there, and those that split themselves between your choice and C:\Program Files\ (such as the aforementioned Office & Photoshop) may even benefit from a further increase in potential parallelization.

    I personally dislike the "My [Documents|Pictures]" trees, as I prefer to create my own top-level directories in the root of my Data partition, which is where I also move the "My [Documents|Pictures]" trees for those applications lacking the courtesy to ask where I want my data. Fortunately, those programs are few and far between. Keeping most of your data on a separate partition will reduce fragmentation and areal stratification of OS files, allow for a smaller, faster OS partition, ease backups and OS reinstallation, and improve the efficacy of the Windows Prefetcher. Just make sure you make the OS partition large enough to accommodate application files that demand residing there. 4-6GB was fine for me in Win2K... I just started using WinXP x64, and 16GB seems about right so far, maybe 24GB in the end.

    Placing your Data partition after your OS partition is also a smart move... Once your app has loaded the files it needs from the OS partition into RAM, you can work with your data files while your application remains free to access task-specific files from your Programs partition simultaneously. As for pagefiles, I recommend setting them to a fixed size, and following MS's recommendation by placing one on each physical disk. Windows automatically shifts paging activity to the drive(s) experiencing the lowest utilization.

    This is the Windows partitioning scheme I've been fine-tuning since Win2K was released, making improvements based on data provided by Windows Performance Monitor to get the best performance by minimizing disk contention and maximizing parallelization, throughput, and convenience, without compromising compatibility or reliability.
  10. cffrost

    4xMax3036rc RAID 0

    On what basis (e.g., casual testing, conventional wisdom, speculation, etc.) do you make this recommendation? In my recent search for information on the performance impact of NTFS cluster size, the vast majority of sources indicated no significant/conclusive results one way or the other. However, the information I found did not typically mention RAID. Microsoft does not explicitly advise any deviation from the default NTFS cluster size. I have checked Technet, MSKB, Disk Mgmt Help, and MS Press administrator guides. MS enjoys warning about slack space and loss of compression/encryption capability, but I found no information about performance impact. It seems unintuitive, since this was a measurable factor in the days of FAT. I recall reading (either from MS, Intel, Adaptec or LSI... I can't remember which), a specific recommendation to use stripe sizes of ~64KB-128KB for transactional data (random/DB/server) arrays, and ~128KB-256KB on streamed data (sequential/MM/workstation) arrays. Perhaps this guideline is meant as a fallback in the absence of more specific knowledge about the filesystem, data and/or application(s). IIRC, stripe size always refers to the aggregate quantity of data written to the entire array per pass, and chunk size is equal to the stripe size divided by the number of physical disks in the array. Can you confirm this? Also, when formatting an NTFS RAID array, is the cluster size selection applied on a physical disk or logical disk level? I.e., if the array is built with a 16KB stripe size and formatted with 16KB clusters, will the cluster size equal stripe size or chunk size?
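
    To pin down the arithmetic in the terminology I'm assuming (and again, vendors are not consistent with these terms, so treat this as one convention), a few lines of Python:

    ```python
    # Arithmetic sketch of the stripe/chunk convention assumed above (one reading
    # among several; vendors use these terms inconsistently).
    def chunk_size_kb(stripe_size_kb: float, physical_disks: int) -> float:
        """Per-disk chunk if 'stripe size' means data written across the whole array."""
        return stripe_size_kb / physical_disks

    # Example: a 4-disk array built with a 16KB stripe size.
    print(chunk_size_kb(16, 4))   # -> 4.0, i.e., 4KB lands on each disk per stripe
    # A 16KB NTFS cluster on that volume would then span all four chunks
    # (alignment aside); cluster size itself is a property of the logical NTFS
    # volume, not of any single physical disk.
    ```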
  11. cffrost

    RAID options for backup server

    I don't understand the statement of yours I quoted; do you mean a hardware R5 controller vs. R5 via Disk Mgr.? In any case, planning nine disks in a single R5 sounds excessively risky, hot-spare notwithstanding, due to correlated failure combined with high utilization during rebuild. Personally, with an array that populated, I'd feel more confident in R6 with a cold-spare. If you can do hardware R5 but not R6, I'd do two four-disk R5s and a pair of global hot-spares. If you need all this space to be contiguous, you could span them in Disk Mgr. I can't guarantee zero adverse effects from spanning R5s, but I wouldn't anticipate any problem. Come to think of it, I believe that striping a pair of smaller R5s into one R50 would provide higher fault tolerance and performance; your data would be intact with one failure in each striped R5 member, and rebuilds would finish faster.

    ...This is a little off-topic and purely speculation on my part, but it's been my practice to "interleave" identical disks by serial number in an attempt to mitigate the risk associated with correlated failures in a nested array. For example, I would place disks in a six-disk R10 array in the following serial-number order:

    R1 [ 1 & 4 ] }
    R1 [ 2 & 5 ] } R0
    R1 [ 3 & 6 ] }

    Since initial physical disk placement is otherwise arbitrary, I figure it can't hurt.
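
    To put a number on that R50 fault-tolerance claim, here is a quick enumeration sketch in Python (layout assumed for the example: eight disks split into two four-disk R5 legs):

    ```python
    # Sketch: count which two-disk failures each layout survives. Disk counts are
    # just the example from this thread (two 4-disk R5 legs vs. one 8-disk R5).
    from itertools import combinations

    DISKS    = range(8)
    R50_LEGS = [set(range(0, 4)), set(range(4, 8))]   # two striped RAID-5 members

    def r5_survives(failed):            # a single RAID-5 tolerates one lost disk
        return len(failed) <= 1

    def r50_survives(failed):           # every leg must individually survive
        return all(r5_survives(set(failed) & leg) for leg in R50_LEGS)

    pairs = list(combinations(DISKS, 2))
    print("8-disk R5 survives", sum(r5_survives(p) for p in pairs), "of", len(pairs), "two-disk failures")
    print("2x4 R50   survives", sum(r50_survives(p) for p in pairs), "of", len(pairs), "two-disk failures")
    # -> 0 of 28 for the single R5, 16 of 28 for the R50.
    ```

    The single R5 loses data on every two-disk failure, while the R50 rides out the pairs that land in different legs.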
  12. A little off-topic, but I felt it worth pointing out that for mobile HDDs, a large disk buffer benefits both performance and power consumption, in the same manner as hybrid (i.e., NVRAM-cached mechanical) disks. To this end, Hitachi and Toshiba have quietly supplied their mobile disks with >2MB buffers even before the introduction of the first 3.5" ATA disk to feature an 8MB buffer (WD800JB). FYI, the Fedor-recommended 7K200 sports a 16MB buffer, and would be my first choice for a new 2.5" 7200RPM disk.
  13. I ran Performance Monitor on all logical disks in my general-purpose workstation for approximately three years. Specs: Supermicro X5DAE, 2x Prestonia Xeon 2.66GHz 533MHz 512KB, 2x512MB Reg. ECC CL2.5, LSI 22320-R HBA with 3x Maxtor Atlas 10K IV 36GB per SCSI channel, running Win2000 Pro SP4. The relatively large number of small disks permitted high parallelism and granularity, generally one purpose per partition per disk, plus a very small pagefile partition on each physical disk in order to maximize pagefile parallelism. I found that the highest overall activity occurred on my System partition. (The pagefile for that physical disk was in my System partition in order to allow for the rare core dumps.) The highest concurrent activity occurred between my System partition and my Programs partition, residing on separate physical disks/channels. In fact, this was the most dramatic of any measured concurrency by a large margin, followed by the Temp/Edit and Final/Storage partitions used for large audio/video/photo media projects. Thus, I recommend against storing OS/Programs on the same physical disk, provided one has the physical disks to separate them. I realize that this sharing is most sensible from a backup and disk-utilization perspective; just be aware that performance will be adversely impacted, perhaps more so than for any other two partitions sharing a single physical disk.
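
    This isn't the Performance Monitor setup described above, but for anyone wanting a rough approximation today, the third-party psutil library can log per-physical-disk traffic at an interval; a small sketch (the 5-second interval and output format are arbitrary choices):

    ```python
    # Rough approximation of the kind of logging described above: periodic
    # per-disk byte counts via psutil, not Windows Performance Monitor itself.
    import time
    import psutil

    INTERVAL_S = 5

    def sample_forever():
        prev = psutil.disk_io_counters(perdisk=True)
        while True:
            time.sleep(INTERVAL_S)
            curr = psutil.disk_io_counters(perdisk=True)
            stamp = time.strftime("%H:%M:%S")
            for disk, now in curr.items():
                before = prev.get(disk)
                if before is None:
                    continue
                reads  = now.read_bytes  - before.read_bytes
                writes = now.write_bytes - before.write_bytes
                print(f"{stamp} {disk:>10}  read {reads/2**20:8.2f} MB  write {writes/2**20:8.2f} MB")
            prev = curr

    if __name__ == "__main__":
        sample_forever()
    ```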
  14. cffrost

    What is Spread Spectrum Clocking?

    Spread-spectrum clocking (as it applies to motherboard/disk firmware options) modulates I/O and processor clock signaling to improve electromagnetic compatibility (EMC) with other devices. Note that it does not improve the device's own immunity to EMF/RFI. Unless you have a device that you suspect is being affected by EMF/RFI, I recommend leaving spread-spectrum disabled; spread-spectrum often has negative performance/reliability implications for the device on which it has been enabled.
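
    As a purely numerical illustration of the spreading effect (made-up frequencies, a sine-wave "clock" instead of a real square wave, and NumPy for the FFT), a sketch:

    ```python
    # Toy numerical illustration (arbitrary frequencies, a sine "clock" rather
    # than a real square wave): slowly sweeping the clock frequency spreads its
    # energy over a band, lowering the peak seen at any single frequency.
    import numpy as np

    fs  = 1_000_000                       # simulation sample rate (Hz)
    t   = np.arange(0, 0.1, 1 / fs)       # 0.1s of signal

    f0    = 100_000                       # nominal "clock" tone (Hz)
    dev   = 2_000                         # peak frequency deviation (Hz)
    f_mod = 100                           # slow triangular modulation rate (Hz)

    tri = 4 * np.abs((t * f_mod) % 1.0 - 0.5) - 1          # triangle wave in [-1, 1]
    phase_fixed = 2 * np.pi * f0 * t
    phase_ssc   = 2 * np.pi * (f0 * t + dev * np.cumsum(tri) / fs)

    for name, phase in (("fixed clock", phase_fixed), ("spread-spectrum", phase_ssc)):
        peak = (np.abs(np.fft.rfft(np.sin(phase))) / len(t)).max()
        print(f"{name:16s} peak spectral magnitude: {peak:.4f}")
    ```

    The modulated tone carries the same total energy but spreads it across a band, so the peak an EMC measurement sees at any single frequency drops.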
  15. cffrost

    Win2k AS RAID5: Bad write performance

    One partition per disk, with the array forming a single 457GB partition... This large partition size doesn't incur a significant performance penalty with NTFS, does it? Unfortunately I can't sacrifice 1/3rd of the current size, but out of curiosity, does Win2K support 0+1 via its native support for plain 0 and 1?