Popular Content

Showing content with the highest reputation since 10/12/13 in Posts

  1. 2 points
    Put them in a server and run some sort of SDS on top of it like Nexenta. Fun learning experience and gets you a cheap SAN.
  2. 2 points
    Note the three years in between though...they've been surprised by the interest in the platform I think. Now, if WD could just get those 2.5" Reds up to 2TB in a 9.5mm...
  3. 2 points
    Long-time watcher of StorageReview, but I registered just to be able to comment on this review. An excellent review, though your testing seems a bit high-end for the likely intended usage. I'd bet the majority of the target users for this SOHO device won't have a backbone that supports iSCSI or even dual-port aggregation. As you point out, 2-10 users in a casual / small-office setting or home use seems the likely audience, and such an audience is much more likely to have an entry-level GbE switch than a managed backbone that costs 10x more. To that point, I've used the entire line of BlackArmor devices, and there are three critical issues common to them that seem to be repeated in the replacement Business Storage line... none of which are mentioned in the review, though they may not affect everyone, so I'm not sure they necessarily bear mentioning up-front.

    1) Performance. You obtained okay numbers in your testbed, but as summarized above, I doubt you'd see that infrastructure in the wild. I'd suggest you at least pair those results with tests on a cheap GbE switch using a single LAN port and simple Windows file sharing / drive mapping. Unless the BS line has markedly improved on the BA line, you'll see performance on the order of 15 MB/s read, 10 MB/s write. Horrendous for anything but backups, really, which is all I use my BA boxes for. I recognize there's a massive disparity in price points and target audience, but I get 110 MB/s--TEN TIMES the performance--from my Synology boxes, and 50-70 MB/s from my Drobos, all on a cost-conscious backbone of entry-level GbE switches using one LAN port per device and simple, iSCSI-less Windows file sharing. There's no comparison at all.

    2) Compatibility. Massively overpriced with disks, the BA and BS lines are very reasonable when purchased diskless. I've used Buffalo, Seagate, Synology, and Drobo NAS boxes in small-business and personal settings, and diskless BA/BS boxes are far and away the cheapest way of adding reliable (but not fast!) NAS storage in such contexts. But these NAS boxes only support Seagate disks. True, this is a Seagate device, but it seems as though someone had to intentionally code a rejection routine into the firmware, which is just an obnoxious move. In addition, some of the "certified drive" compatibility notes listed for the BA line are flat-out false--the diskless BA 400 simply will NOT work with the 1.5 TB desktop line of Seagate disks, period.

    3) Risk. For those who know what they're doing, these are fairly easy boxes to deploy, and the web-based UI is second only to Synology's in my experience. But it's easy, far too easy, to make a catastrophic mistake. For example, if you set up a BA box using one LAN port and then plug in a second LAN cable, not only will it not work, it has a strong chance of corrupting the entire array. You not only lose all data and have to set everything up again; to even begin doing so, you must eject each disk individually and reformat it in a separate computer. Otherwise it won't set itself up.

    Much of the above comes from my experience with the older BA boxes, so I'd like to know whether those issues have been resolved in the replacement BS line. Anyway, as always, I love seeing info on StorageReview.com, so keep up the good work!
  4. 2 points
    You're almost right here. What's missing is that copying small files, even from the same directory, will automatically include some random access too. The files being read may be spread across the disk, they may be written to different locations, filling holes in the current file structure (wherever the OS sees fit), and the MFT may be accessed. That's why multi-threaded copying at higher queue depths still improves throughput: the disk can arrange the accesses better through NCQ and reduce access times. BTW: if the folders you're copying are often the same, I'd look into incremental syncing with e.g. DirSync (nice freeware). Not sure it can increase QD, but it certainly saves time not transferring untouched files again. And I'm not a fan of buying large SSDs for storage; that's often a waste of money (IMO). I'd rather use the SSD for temporary storage and as an automatic cache. If you're concerned with many small files, an SSD cache would be ideal, and if it also buffers writes you may see a massive speed increase. The cache capacity would also be huge compared to the amount of storage small files require. MrS
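The queue-depth point above can be sketched in a few lines: issuing several file copies concurrently gives the drive more outstanding requests to reorder via NCQ than a one-file-at-a-time copy does. This is a minimal illustration, not code from any tool mentioned in the post; the function name and worker count are my own choices.

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def copy_tree_threaded(src: Path, dst: Path, workers: int = 4) -> int:
    """Copy every file under src into dst, several at a time.

    Overlapping copies raise the effective queue depth, so the drive
    can reorder the resulting accesses (NCQ) and hide some of the seek
    latency that a strictly sequential copy exposes.
    """
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(p: Path) -> None:
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)  # copies file data and metadata

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, files))  # list() surfaces any exceptions
    return len(files)
```

On a spinning disk the gain comes from the drive's reordering, not from the threads themselves; on the SSD cache described above, the same concurrency mainly helps with many small files.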
  5. 1 point
    I was reminiscing about the WD Expert 18GB review I read here in 1999, and I can't find it; the Google cache isn't turning it up properly either. Are these still around? That's from an era when I would read this site daily and read every review religiously. Anyone?
  6. 1 point
    Ultimately the storage-centric R730xd offers new degrees of flexibility for those who want to keep storage as close as possible to the compute layer. The chassis has a lot of potential in SDS situations too, something Dell likely had in mind as they continue to innovate generation after generation with leading storage server solutions. Dell PowerEdge 13G R730xd Review
  7. 1 point
    What Progressive Capacity means is that WD can use odd-formatted platters in drives, shipped in boxes of 20, to hit a specific capacity point like 1PB. The capacities of the individual drives within that box are somewhat irrelevant; data centers looking at this class of drive often aren't using RAID and are more concerned with capacity in a specific footprint. WD Ae Cold Data Storage HDDs Announced
  8. 1 point
    They had some errors early on, will look into it.
  9. 1 point
    No drive will ever be perfect 100% of the time. Are you planning to also have a backup of these files?
  10. 1 point
    Whatever you think, brainiac. Your opinion is your opinion. My opinion is fact.
  11. 1 point
    Yeah, we know we need to work on the charts to make them less confusing to read in a row, thanks for that pointer. The drive isn't intended to be in heavy write use cases, per our disclaimers in the review. That said, I'd take our real application tests as a better indicator of performance than synthetics. One other comment I'd add is that Pure Storage used to use SSD 830s in their AFAs. Not sure what's in there today but we've seen a lot of high-end client drives show up in AFAs or Hybrid Arrays.
  12. 1 point
    You're already unhappy with the Drobo performance, and trust me, it's not any better now, so I'd mark that off the list promptly. In terms of NAS vs. DAS though, how many people or devices will need to access the storage? Do you see any benefit from NAS features like remote file access, or do you really want the performance DAS offers by comparison? Both the LaCie and G-Tech products are very nice, incidentally. There are also other quality options from the likes of CalDigit and others.
  13. 1 point
    Synology has announced the launch of the RS3614xs+, the latest in their XS+ rackstation line. The new RS3614xs+ provides enterprise features at SMB prices so that small businesses can now take advantage of performance, scalability, and the virtual storage options of VMware, Citrix, and Hyper-V all the while still offering general-purpose storage that businesses require. Synology RS3614xs+ Rackstation Announced
  14. 1 point
    You're very welcome. When I was doing those measurements, I was mostly interested in confirming the difference in upstream bandwidth between PCIe 1.0 and PCIe 2.0 chipsets, using one 6G controller and one older 3G controller. As such, I didn't do every possible measurement in a proper experimental matrix, and that's why those graphics appear somewhat "spotty". Also, because there are so many PCI-Express motherboards installed worldwide, I put a little extra focus into exploring how easy it was to achieve high speed with PCIe 1.0 chipsets.

    What became very clear is that 2 x modern 6G SSDs in a RAID 0 array come pretty close to reaching MAX HEADROOM with a PCIe 1.0 chipset: roughly 2 @ 500 = ~1,000 MB/second (certainly above 900). There is no performance gain to be expected from 4 x 6G SSDs with a PCIe 1.0 chipset. The "sweet spot" was predictably 4 x 6G SSDs with a PCIe 2.0 chipset and a 6G controller like the Highpoint RocketRAID 2720SGL; that measurement was done on an ASUS P5Q Deluxe motherboard with an Intel Q6600 CPU. I really do enjoy working on that workstation, because regular file system operations are truly SNAPPY, particularly program LAUNCH.

    The other, less obvious issue was the lack of TRIM with these RAID 0 arrays, and that's why I'm recommending that builders take a close look at Plextor's garbage collection for PCs that lack TRIM for some reason. The folks at xbitlabs.com produced a very useful comparison here: http://www.xbitlabs.com/articles/storage/display/toshiba-thnsnh_5.html#sect0

    Incidentally, in our workstations that have 2 or more PCIe slots, we've been installing RAID controllers in the primary x16 slot, and some of the older PCIe motherboards complain a little about that setup: I need to press F1 to finish POST and STARTUP. With that minor exception, those PCIe 1.0 chipsets still work fine with a RocketRAID 2720SGL installed in the primary x16 slot.

    Many less experienced Highpoint users stumble on the INT13 factory default: what they need to do is install the card withOUT any drives attached and flash the card's BIOS to disable INT13. Then the card won't interfere with chipset RAID settings. The problem is that INT13 ENABLED on the 2720SGL has been known to DISABLE on-board RAID functionality, e.g. it's as if the chipset's ICH10R isn't even there! The solution is to revert the motherboard BIOS to IDE or AHCI mode, and sometimes IDE is the ONLY setting that will work again after the 2720SGL is installed with INT13 ENABLED. Also, the latest BIOS needs to be flashed in order to run the SATA channels at 6 Gb/s (using SFF-8087 "fan-out" cables). ... all "bleeding edge" lessons, to be sure :-) Hope this helps.
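The "MAX HEADROOM" arithmetic above is easy to sanity-check with the usual rule-of-thumb figures: a PCIe 1.0 lane carries roughly 250 MB/s of usable bandwidth (2.5 GT/s with 8b/10b encoding) versus roughly 500 MB/s for PCIe 2.0, so a x4 uplink caps out near 1,000 MB/s. A quick sketch (function names are mine, not from the post):

```python
def pcie_bandwidth_mb_s(gen: int, lanes: int) -> float:
    """Approximate usable PCIe link bandwidth in MB/s.

    Rule-of-thumb per-lane figures after 8b/10b encoding overhead:
    PCIe 1.0 at 2.5 GT/s -> ~250 MB/s; PCIe 2.0 at 5.0 GT/s -> ~500 MB/s.
    """
    per_lane = {1: 250.0, 2: 500.0}
    return per_lane[gen] * lanes


def raid0_ceiling(n_ssds: int, ssd_mb_s: float, gen: int, lanes: int) -> float:
    """Whichever is smaller limits the array: the drives or the link."""
    return min(n_ssds * ssd_mb_s, pcie_bandwidth_mb_s(gen, lanes))
```

Two ~500 MB/s SSDs already saturate a PCIe 1.0 x4 link (ceiling 1,000 MB/s), and adding two more drives on the same link gains nothing, which matches the observation above; on a PCIe 2.0 x8 slot the same four drives run unconstrained (2,000 MB/s of array throughput against 4,000 MB/s of link).

```python
print(raid0_ceiling(2, 500, 1, 4))  # 1000.0 -> link saturated
print(raid0_ceiling(4, 500, 1, 4))  # 1000.0 -> no gain from 2 extra SSDs
print(raid0_ceiling(4, 500, 2, 8))  # 2000.0 -> drives, not link, limit here
```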
  15. 1 point
  16. 1 point
    There are at least 4 aspects to be considered here.

    I'd say the first is STR (sustained transfer rate). That's the transfer rate achievable with sequential accesses, where the heads don't have to move to random positions; it's the maximum an HDD can transfer, except for data served from the cache. Here you're seeing 167 MB/s vs. 147 MB/s, i.e. a 14% improvement. In SR's review they got 187 MB/s vs. 153 MB/s, i.e. a 22% improvement. This difference could easily be due to the way the different tools measure, or maybe caused by variation among drives (although such a difference would be unusually large).

    The 2nd aspect is random access. Your benchmark shows both drives doing 81 ops/s at 4K random read. Notice how the old Black gets 103 IOps in SR's review - you're clearly not testing the same thing! The most obvious explanation would be the difference in queue depth: SR tests at a minimum of 2 threads with 2 queued items each, i.e. an effective queue depth of 4. At a queue depth of 1, which HD Tune is said to use according to someone before me, there's only so much you can do to improve access times without increasing RPM or short stroking.

    And 3rd: WD themselves claim their performance improvements largely come from firmware tuning. This will show up in real-world applications (like the file and web server tests), but not in cases where you're purely mechanically limited - which are exactly the 2 corner cases you tested: completely sequential access (STR) and completely random access at QD 1. That's a bit like judging a car's performance on a race track from just a dyno run to extract peak HP and torque. I'm not sure if there are any good free HDD benchmarks (that's why simple reviews always stick to these simple low-level tools).. but the one you used is certainly not it. And finally, "up to xx% performance increase" doesn't mean the gain will manifest equally in all benchmarks.

    And I'll admit SR's review and conclusion do sound a bit enthusiastic - but that's because such performance gains are almost unheard of for HDDs within the same technological generation! It's still an HDD, though, and if you read the article any other way.. I don't know why. SSD performance is miles above HDDs, no matter if the HDD got 48% faster or not. And this gap is only going to grow over time. MrS
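The arithmetic in the comparison above is worth making explicit: effective queue depth is simply threads × outstanding I/Os per thread, and the STR gains are plain ratios. A tiny sketch (names are mine):

```python
def effective_qd(threads: int, per_thread: int) -> int:
    """Outstanding I/Os the drive sees across all workers."""
    return threads * per_thread


def pct_gain(new_rate: float, old_rate: float) -> float:
    """Throughput improvement of new_rate over old_rate, in percent."""
    return (new_rate / old_rate - 1.0) * 100.0
```

Two threads with 2 queued items each give an effective QD of 4, versus HD Tune's single outstanding request at QD 1; and 167 vs. 147 MB/s works out to about 14%, while 187 vs. 153 MB/s is about 22% - the two figures quoted in the post.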
  17. 1 point
    For those interested in the main product images and spec sheets we have in hand:
  18. 1 point
    It's believable because we are reporting this as fact. Typo. Something official is probably coming in the next week.
  19. 1 point
    Sounds like you should just buy a Synology or perhaps a Netgear in five/six bay config.
  20. 1 point
    I'm 99% sure it's not SMR, as those drives have performance profiles too poor for what a consumer needs. I think they're just really good at engineering the platters. Should know more soon; waiting on review samples.
  21. 1 point
    Just got this feedback and added it to the story - the drives will be available in SAS and SATA interfaces and only in 6TB capacity. HGST is not disclosing the platter configuration at this time.
  22. 1 point
    But how is the cache used in long sequential operations to allow higher speeds without faster throughput through the heads? Thanks. The only thing missing is a hybrid drive option with this drive.
  23. 1 point
    I've bought quite a few used SSDs; they're good for test systems, HTPCs, etc. I won't pay a lot for them though - roughly half of what a similar new one would go for. Heck, I even bought a used one just for use as a scratch disk, because I couldn't bring myself to buy a new one just to write to it non-stop and wear it out :-P I second the Samsung recommendation. For what I consider my "critical" systems, I only use Samsung. I've used plenty of SanDisk SSDs, both new and used, in other systems with no problems, however. Personally, I'd repurpose the old one into another system, or sell it, and get one larger one. Some people opt to stripe multiple smaller ones, but I don't bother.
  24. 1 point
    Hiya, just tried entering info for some drives I have spinning across different systems, but I'm stuck in a dead loop: the RS tells me to log in or register; I click on register and go to the forums, where I am logged in?! If I click back to the RS, I am again not logged in... Browsers tested: IE 8 and Firefox 15.0.1 on Win7 Pro x64. The account was created literally an hour ago; however, no activation e-mail has come through...
  25. 1 point
    I currently have 3 NICs in my computer. I would like more network bandwidth by using two or three of them at the same time. However, I tried doing it and it doesn't work very well. One of my NICs has Internet sharing enabled, so it has a fixed IP address and acts as a DHCP server; other computers on the network get their IPs from that NIC. However, when I add another NIC, it doesn't get an IP address, so it isn't communicating with the network. How do I configure both NICs to work together? I saw that with Intel NICs you could combine two into a single logical one to double your bandwidth - in other words, by combining two NICs you would have one 200 Mbit NIC. Does someone know how to do this without Intel's PROSet application (I have a 3Com, a Linksys, and an onboard SiS)?
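The combining described above is link aggregation (NIC teaming). On Windows of that era it generally required vendor software such as Intel PROSet with matching NICs, so a mixed 3Com/Linksys/SiS set can't be teamed there. The Linux kernel's bonding driver, by contrast, is vendor-agnostic; a sketch assuming two interfaces named eth0/eth1, iproute2 tooling, and an arbitrary example address:

```shell
# Create a bond device in round-robin mode (transmits alternate
# across slaves, aggregating outbound bandwidth).
ip link add bond0 type bond mode balance-rr

# Interfaces must be down before they can be enslaved.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring the bond up and give it the single shared IP address.
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Note that even when bonded, a single TCP connection usually rides one physical link at a time; the aggregate gain shows up across multiple simultaneous transfers, and the switch may need matching configuration (e.g. 802.3ad/LACP instead of balance-rr).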