Posts posted by Darking

  1. I would be worried about storing on an HDD for so long unless you exercise the drives by powering them on periodically. Personally I would probably store it either in the cloud (but who knows who's around in 10 years' time) or on a tape drive. It's old-fashioned, but it's a tried and tested technology. The problem is that not all drive technologies are backwards compatible; can you really find an LTO-4/5 drive in 10+ years?

  2. I've been thinking about it lately.

    With the rise of 3TB NL-SAS drives, and the eventual 4, 5, 6, even 20TB disks that will come, what is the future of RAID?

    Rebuild times on arrays with, say, 24 drives (even split into several groups) are already a problem today, and they will only get worse as drives get bigger, meaning the chance of downtime, even at higher RAID levels, is an ever-growing risk.

    What do you all think is the future of storage systems?
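The rebuild-time worry above can be framed with simple arithmetic: a rebuild has to read or write the entire drive, so the time scales roughly with capacity divided by the effective rebuild rate. A minimal sketch; the drive sizes and the 50 MB/s effective rate are illustrative assumptions (real rates vary widely by controller and live load), not measured figures:

```python
# Back-of-the-envelope rebuild times: time = capacity / effective rebuild rate.
# 50 MB/s is an assumed effective rate; rebuilds compete with production I/O,
# so real-world figures can be much lower than a drive's raw sequential speed.

def rebuild_hours(capacity_tb, rate_mb_s=50):
    capacity_mb = capacity_tb * 1_000_000  # TB -> MB (decimal units, as drives are sold)
    seconds = capacity_mb / rate_mb_s
    return seconds / 3600

for tb in (1, 3, 6, 20):
    print(f"{tb:>2} TB drive: ~{rebuild_hours(tb):.1f} h to rebuild")
```

Even at this optimistic rate, a 20TB drive is looking at multiple days in a degraded state, which is the window where a second (or third) failure hurts.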

  3. Dell has released an update to their line of storage arrays.

    Dell EqualLogic PS6110 and PS4110 Storage Arrays Released

    Bought one of their PS6100XS hybrid arrays just before christmas.

    Excellent device; not the highest I/O ever, but it delivers around 10K IOPS and good storage space, running RAID 6.

    No 10GigE for me, though; unfortunately the investment in 10GigE must wait until the next hardware cycle in the datacenter. 2014 is just around the corner ;-)

  4. Hello all.

    Today I'm running a Windows Home Server on a Zotac motherboard (http://www.zotac.com/index.php?page=shop.product_details&flypage=flypage_images-SRW.tpl&product_id=346&category_id=7&option=com_virtuemart&Itemid=100166&lang=nd) with 6 SATA ports.

    Unfortunately I'm all out of ports, and my storage is over 8 TB now.

    As I see it, my upgrade path is a) get a controller for more SATA ports, or b) upgrade my existing disks.

    My disks at the moment are a single 1 TB disk and five 1.5 TB disks.

    I must admit that upgrading those seems a bit wasteful.

    But I'm also uncertain which controller I can get that provides more ports and uses only a single PCIe lane. Preferably I would like a minimum of 6 more ports, and I have no need for RAID, so a RocketRAID or something like that might be overkill.

  5. The SSD 710 is Intel’s first enterprise-class SSD in quite some time – it’s been three years since they introduced their last one, the X-25E. Packed with more cost effective eMLC instead of SLC NAND, the SSD 710 gives enterprise buyers a mix of endurance and increased capacity at a more aggressive price point than SLC alternatives. To expand on our single 710 drive review, we take a look at the 710 in RAID 1 and RAID 5 configurations as well as steady state variations to find out how it performs in an enterprise environment.

    Read Full Review

    Hi Kevin.

    I was kind of wondering how you log the different I/O points with IOMeter. Would it be possible to get the ICF file you use for testing, or is it a trade secret?

    The reason I'm asking is that I've got a new hybrid SAN (an EqualLogic PS6100XS with 7 Pliant 400GB eMLC SSDs and 17 Savvio 10K.4 drives running some sort of RAID 6) that I want to test, and I'm unsure how to set up IOMeter so I can compare with your review.

    The write endurance on those Pliant drives is 7.2PB!

  6. Sorry I didn't respond faster; I'm not checking the forum all the time :)

    Instead of trying to explain it myself, I'm going to quote the following article: http://www.gtweb.net/RAID_desc.html

    During disk writes, RAID 5 cannot produce a write performance comparable to that of straight disk striping because other operations have to be undertaken to make and store parity codes. The I/O performance of the array depends very much on the relative levels of reads and writes requested.

    When a stripe is modified, unmodified portions must also be read to re-generate the parity for the entire stripe. Once the parity has been generated, the modified data and parity information must be written to disk. This is commonly known as the Read/Modify/Write strategy.

    It reflects that, though RAID 5 is superior to RAID 0 because it offers redundancy, it is not able to perform as well as RAID 0 in terms of write performance. Because RAID 5 has distributed parity, two reads and two writes must be performed for every write operation. However, the write penalty can be overcome by the use of write caching which allows write data to be stored in the memory prior to writing to the disk, so freeing the host processor for other tasks.

    I'm not going to make any excuses for being wrong in my performance prediction. Especially with caching and the way you do your IOPS tests, it was merely a guesstimate :)
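The read/modify/write cost described in the quoted article is usually summarized as a write penalty: each logical RAID 5 write costs four back-end I/Os (read old data, read old parity, write new data, write new parity), and RAID 6 costs six because of the second parity block. A minimal sketch of the resulting effective IOPS; the spindle count, per-drive IOPS, and write fraction are illustrative assumptions, not figures from the array above:

```python
# Effective front-end IOPS for an array, given a read/write mix and a write penalty.
# Reads cost 1 back-end I/O; each write costs `write_penalty` back-end I/Os
# (4 for RAID 5, 6 for RAID 6), ignoring cache effects, which in practice
# absorb much of the penalty, as the quoted article notes.

def effective_iops(raw_iops, write_fraction, write_penalty):
    # Average back-end I/Os consumed per front-end I/O across the mix:
    cost = (1 - write_fraction) * 1 + write_fraction * write_penalty
    return raw_iops / cost

raw = 24 * 150  # assumed: 24 spindles at ~150 IOPS each = 3600 back-end IOPS
for name, penalty in (("striping (RAID 0)", 1), ("RAID 5", 4), ("RAID 6", 6)):
    print(f"{name:>17} @ 30% writes: ~{effective_iops(raw, 0.30, penalty):.0f} IOPS")
```

This is why a write-heavy workload on parity RAID lands so far below the raw spindle count, and why a large write cache can make a guesstimate based on the penalty alone look pessimistic.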