Posts posted by Darking

  1. Hi.

    Do you have any plans to test/review Windows Storage Spaces Direct? It seems like an interesting upgrade to the storage pools from 2012 R2, and I see it as a valid and real challenge to VMware's VSAN/ESXi dominance in the market.

    For it to succeed it needs to perform, which the old Storage Spaces never really did. Microsoft this time around is claiming things like 6M IOPS, 1 Tbit/s transfer rates, etc. But what are the real-world numbers?

    Hope you get the time to test this exciting software-defined storage solution.

    We currently run a couple of SQL servers on our 3PAR storage, and we use AO to move active blocks to SSDs, so we do not really run an all-flash solution. But since we have fairly large flash capacity (10TB), we run about 70% of the databases on the flash tier. Writes still land on the 15k tier, though.

    Our installation does not have very high IO requirements, so it is somewhat overkill, but the low response time is an advantage: 0.1ms is certainly better than the 6-7ms you get from traditional storage.
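A minimal sketch of the sub-LUN tiering idea behind features like AO: count accesses per block and keep the hottest blocks on the flash tier. The counters, capacities, and workload here are all made up for illustration; this is not HP's actual algorithm.

```python
# Toy model of sub-LUN tiering: promote the most-accessed blocks
# to a small flash tier, leave the rest on spinning disk.
from collections import Counter

FLASH_CAPACITY_BLOCKS = 4  # pretend the SSD tier holds 4 blocks

access_counts = Counter()

def record_read(block_id):
    access_counts[block_id] += 1

def plan_flash_tier():
    """Return the hottest blocks that should live on flash."""
    hottest = access_counts.most_common(FLASH_CAPACITY_BLOCKS)
    return {block for block, _ in hottest}

# Simulate a skewed workload: a few blocks get most of the reads.
for block in [1, 1, 1, 2, 2, 3, 7, 7, 7, 7, 9]:
    record_read(block)

print(plan_flash_tier())  # the hot blocks: 7, 1, 2, 3 (in some order)
```

Real arrays do this at a much coarser granularity (regions, not single blocks) and on a schedule, but the promote-by-heat principle is the same.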


    It makes one think about that particular drive when you read reviews on the internet and see what Backblaze has experienced with that exact model. Over the last 6 months, 57% of reviewers gave it a 1-star rating, most of them due to deaths after a few years of use.


    The report, albeit maybe flawed due to sourcing issues, is a good indication of the general state of that particular drive.

    They have a whopping 6% of their initial drives still running in production; 1342 (32%) died in production, and another 2597 (62%) were removed from the chassis and retested. 75% of those drives died under test!
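As a quick sanity check, those percentages hang together arithmetically. The absolute counts below are taken from the post; the implied fleet size is derived from them, not something Backblaze reported here.

```python
# Check the quoted figures against each other.
died_in_production = 1342    # quoted as 32% of the original drives
pulled_and_retested = 2597   # quoted as 62%

# 32% + 62% + 6% = 100%, so either count implies the fleet size:
fleet = round(died_in_production / 0.32)
print(fleet)                              # roughly 4200 drives in total
print(round(fleet * 0.06))                # ~250 drives still running (the 6%)
print(round(pulled_and_retested * 0.75))  # ~1950 pulled drives failed retest
```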

    === Disclaimer: I'm not a storage vendor employee, but I do own several vendors' equipment === ;-)

    Now, I think most people can agree that disruptive upgrades are an unwanted risk to a platform in general. To me it would be close to unacceptable, even though I'm not running an "enterprise scenario" (we only have 1200 employees; in EMC talk that would be SMB).

    I'm not exactly sure why a disruptive upgrade would be needed, but from what I've read it has something to do with changes to the metadata in the array. I am not sure why it can't handle more than one metadata type and just do an internal migration of the data to a new set, but I'm sure there are technical reasons why it's not possible. EMC should look into that and implement it so future updates would not leave them in the same situation. That of course depends on their architecture being in a state that allows for this.

    If I were an EMC customer, I would assume that my storage partner or EMC directly would provide me with some sort of migration path in the form of a physical array, so the data could be replicated and the unit either replaced or upgraded without downtime in my production environment. I assume that is what EMC means by their press release/statement. If not, that is clearly unacceptable. Relying on downtime and/or the risk of data loss in case of inadequate backups is not acceptable in any form. Even in VDI solutions.

    My newest storage array (a 3PAR 7200) will be getting inline deduplication soon (flash-only, unfortunately), and I doubt HP would allow the upgrade to be disruptive. It simply isn't "enterprise" to do so.



    I must say this is a fairly aggressive pricing strategy.

    By the way, "Thin Deduplication", as they call it, is not really deduplication. It is just their ASIC being able to do zero detection, saving only real data (plus metadata) and never writing zeroed storage. That gives a benefit via space savings, but it is not comparable to true deduplication.
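The zero-detect idea is simple enough to sketch: an all-zero write is recorded only in metadata and never hits the backing store, and reads of such a block synthesize zeros. 3PAR does this in an ASIC at wire speed; the block and page sizes below are invented for illustration.

```python
# Toy model of zero-detect thin provisioning.
BLOCK_SIZE = 16 * 1024  # hypothetical 16 KiB page size

backing_store = {}   # lba -> bytes actually written to disk
zero_map = set()     # lbas known to be all zeros (metadata only)

def write_block(lba, data):
    if data.count(0) == len(data):      # fast all-zero check
        zero_map.add(lba)
        backing_store.pop(lba, None)    # reclaim any old data
    else:
        zero_map.discard(lba)
        backing_store[lba] = data

def read_block(lba):
    if lba in zero_map:
        return bytes(BLOCK_SIZE)        # synthesize zeros on read
    return backing_store.get(lba, bytes(BLOCK_SIZE))

write_block(0, b"\x00" * BLOCK_SIZE)               # costs metadata only
write_block(1, b"hello".ljust(BLOCK_SIZE, b"\x00"))
print(len(backing_store))  # 1: only the non-zero block consumes space
```

This is exactly why the feature helps with things like freshly formatted or eager-zeroed VM disks, while offering nothing for duplicate non-zero data.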

    But HP has already implemented deduplication on the HP 7450 AFA, and they have indicated that the deduplication (again using the ASIC) will trickle down to legacy arrays, though it might only be supported on all-flash CPGs (a CPG is sort of like a pooled portion of storage made up of thousands of small 3GB RAID sets). I expect it will come to the 7200 AFA as well.

    None of those are server-grade SSDs, though.

    The Samsung SM825, Intel S3700, or SanDisk LB806r are all meant to be run in servers, especially since most of the time they will run in RAID1 or RAID5 configurations where you cannot send TRIM to them.

    Therefore, garbage collection is something the SSD manufacturers tune to maybe not hit the 500MB/s peak speeds, but to keep a decent steady-state speed of, say, 200-250MB/s instead.

    I'm sure Kevin or Brian can chime in when they get off Christmas duty :P

    I doubt they issue TRIM to the disk; I suspect they use enterprise SSDs with proper garbage collection that does not require the drive to receive a TRIM to initiate it.

    With regards to 3PAR, I would not worry so much about the whole RAID5 debacle. They use what they call chunklets: basically a 1GB (can also be less) chunk of a random disk. They then use 4 of these chunklets to create a RAID5 set.

    This gives two distinct advantages:

    1) Since your volume is created out of these chunklet groups, you can scale across all disks easily.

    2) It allows spare capacity on the drives to be used as hot-spare space, meaning that if you do lose a physical drive, you can most likely rebuild your RAID5 onto existing spare chunklets. Since it again uses all drives in the array, you avoid the danger of having no built-in hot-spare disk.
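The chunklet layout described above can be sketched in a few lines: carve every drive into 1GB chunklets, build small RAID sets from chunklets on different drives, and keep unallocated chunklets as distributed spare space. This is a toy model, not 3PAR's actual allocator; the drive count and sizes are made up.

```python
# Toy model of chunklet-based RAID with distributed sparing.
DRIVES = 8
CHUNKLETS_PER_DRIVE = 10  # e.g. a 10 GB drive at 1 GB per chunklet

# free[d] = unallocated chunklets remaining on drive d
free = {d: CHUNKLETS_PER_DRIVE for d in range(DRIVES)}

def allocate_raid_set(width):
    """Take one chunklet from each of `width` distinct drives."""
    # Prefer the drives with the most free chunklets, spreading load.
    chosen = sorted(free, key=free.get, reverse=True)[:width]
    for d in chosen:
        free[d] -= 1
    return chosen

# Build six 4-chunklet RAID5 (3+1) sets:
raid_sets = [allocate_raid_set(4) for _ in range(6)]

# No dedicated hot-spare disk is needed: a failed drive's chunklets
# rebuild into the spare chunklets left on the surviving drives.
print(sum(free.values()))  # 80 total - 24 allocated = 56 spare chunklets
```

Because every RAID set spans distinct drives and the spares are spread across all of them, a rebuild reads from and writes to many drives at once instead of hammering a single hot-spare.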

    Dell does not allow you to make RAID5 on disks larger than 2TB (I think it's 2, might be 1TB), and I suspect that the 1.6TB MLC disks have a lower failure rate, or that the rebuild is much faster, thereby minimizing the chance of a disk failure at rebuild time.

  8. Received my HP offer today.

    They came up with a somewhat different setup to handle my requirements.

    28 400GB SLC SSDs (8.4TB usable with RAID5 8+1)

    36 300GB HDDs (7.8TB usable with RAID5 8+1)

    12 4TB drives (26.2TB usable with RAID6 4+2)

    It lives up to my capacity requirements and gives me 32 free slots for expansion in 2.5-inch and 12 in 3.5-inch.

    And I'm certain the IOPS will be fine.

    The main concern is the price: I was quoted something that is... 2.5 times more than the Compellent configuration above. I know, I know, I will get more IOPS out of this box, but I don't need 50k+ IOPS. I might need 10k or 20k down the line, but not 50k. The problem is that I expect to want more data available on fast media like SSD over time, especially with Lync becoming our primary telephony solution, and our ever-growing SharePoint installation wants more IOPS. Hopefully Adaptive Optimization (HP) and Data Progression (Dell) will handle that.

    I am not sure how Dell will lay out the disks; I've asked them to expand on their initial offer. These are not official bids, I just wanted an indication of pricing.

    HP's SSDs are expensive, but what really costs is the software licensing (around 2/3rds of the total). I am sure they are negotiable on that when it comes down to it.

  9. Merry Christmas!

    Got an offer in from a vendor:

    A Compellent array with redundant controllers, including the following:

    6 400GB SLC SSDs

    12 1.6TB MLC SSDs

    and 24 2TB NL-SAS disks

    Included are Data Progression and the VMware add-ons.

    The price seems OK. Then again, I'm not sure what an OK price is for that kind of equipment.

    I'm hoping to get an HP offer in on Monday for a 3PAR solution :)

  10. Hi Sparky.

    First of all: if you need the 4-processor architecture, forget everything I'm writing below.

    But if you're inclined towards HP, get the 380 G8 instead, in either the e or p model, depending on your needs.

    It allows you to have up to 25 disks in-chassis, and although they are 2.5" disks and you can't get 4TB SATA or anything like that, you can get nice 1/1.2TB 10k spindles for your data and 2.5-inch SSDs for whatever needs to load fast. With your current needs the server allows you to grow without spending a lot of rack space on a 560 + JBOD.

    Yeah, well, the problem with EqualLogic tiering is pretty much the requirements.

    In EqualLogic storage you can have a storage group consisting of at most 4 pools with 4 members each.

    But volumes can only reside on 3 arrays at a time, meaning you cannot buy, say, a PS6110S and let it handle heavy IOPS from all your HDD arrays. It's one of the downsides of each chassis having its own set of controllers. EqualLogic also does not recommend mixing and matching things like disk speeds and RAID types in the same pool.

    The boxes do scale insanely well because of the design, but it has its downsides, I'm afraid.

    I have no complaints about support or anything of that nature. Both the guys in Ireland and the guys in Nashua I've talked with over the years have been excellent, although I wish they had more people who actually know Linux and how Open-iSCSI works :)

    First of all, thank you both for answering :-)

    Software-defined storage is definitely one of the few futures of storage we will see; I have no doubt about it. As a concept it is sound, and using commodity hardware makes even more sense. But my spider senses _are_ tingling, and I am worried about going with v1.0 products like VMware's or Microsoft's solutions for lowering the IOPS/$ and GB/$. I believe the technologies they use are sound, but I am worried that they might not be ready for "production". VMware themselves even say: "Hey, in VSAN 1.0, use it for test/dev labs and VDI deployments." And I'm kind of listening to that.

    Unfortunately, the Hyper-V route with something like Supermicro might work in the USA, but I'm having a hard time even finding a distributor of hardware that is certified for Storage Spaces. Dell do claim they will support it, maybe in Q1 2014, but that does not exactly help, given my rather hard deadline of out-of-service on the existing equipment in May 2014. Also, the requirements for Storage Spaces feel a bit more hefty: redundant HBAs and specialized dual-controller JBOD chassis. I'm basically just building a redundant Windows box that acts as a SAN using SMB, and I'm not sure I want that... Plus, like Kevin, I've also heard about the performance issues that at least plagued Server 2012, and I fear it might be the same for R2.

    I had a talk with Oracle yesterday. The boxes they sell are powerful machines, based on ZFS, using a mix of MLC and SLC for read/write cache and larger disks for common storage. Apparently it's even quite affordable. VAAI support is in the works, but I could not get a definite answer as to when; "maybe December" was the closest.

    On Wednesday I'm having one of Dell Denmark's storage architects swing by my office and give me a run-through of the Compellent array. I kind of like what I've seen so far, and I hope to be pleasantly surprised.

    3PAR is definitely also on my horizon, and I plan on talking with a vendor next week to get an introduction.

    This is what I want to avoid in my new storage system:

    1) Lack of Space

    2) Lack of Performance

    3) Too many hot spares (EqualLogic does not have the concept of global hot spares, so we have 14 across 7 arrays)

    4) Bad tiering (I want the clever kind ;-) )

    5) Downtime during upgrades (I want higher uptime than EqualLogic, preferably true zero-downtime firmware upgrades)

    6) Poor quality control on firmware releases

    At the moment we have made a 31-point schema to fill in for comparison, so that is at least some sort of help in choosing.

    So the time has come: my company is looking to purchase new hardware.

    I've sort of settled on continuing with VMware for the next few years and slowly moving stuff to either Azure or vCloud, but for now I need hardware. The servers are pretty much settled, in that I'm looking to switch from Opteron-based Dell machines to an Intel platform on the 2697 v2 CPUs with plenty of RAM, somewhere between 384 and 768GB. The vendor is not set in stone, but it is pretty much going to be whoever can deliver me some nice storage.

    Our storage today is comprised of Dell EqualLogic. It has served us well and it's easy to manage. With that said, we have moved to a more write-intensive environment (85% of our IO is writes), and the tiering in the EqualLogic solution is close to horrible, especially if you have more arrays than you can fit into a single pool.

    We are looking at around 48TB of effective capacity needed over the next few years. It's a rough estimate, because who the hell knows what the company is planning to do; I sure don't, and I doubt management does either. I need to scale accordingly, and today we use around 26TB.
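As a back-of-the-envelope check on that estimate: the post gives 26TB used today and a 48TB target, so the implied growth rate is easy to work out. The 5-year horizon below is an assumption, not something stated in the post.

```python
# Implied annual growth rate to go from 26 TB used to 48 TB needed.
used_now = 26.0  # TB, from the post
target = 48.0    # TB, from the post
years = 5        # assumed planning horizon

annual_growth = (target / used_now) ** (1 / years) - 1
print(f"{annual_growth:.1%} per year")  # about 13% per year over 5 years
```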

    Our environment is 99.5% VMware-based, with a few standalone Oracle servers. They add up to a hefty total of 1TB of storage need.

    We prefer having an environment that sits on one storage system, but it's not 100% required. Our standalone Oracle installation can run on some local storage (maybe some SSDs for performance), and we'll be just fine.

    With that in mind, I'm looking at the following vendors:

    1) HP 3PAR 7400. I've read up on it, and it looks like pretty much the smartest all-round solution I can get for my money. I like the idea of micro-RAIDs and the full disk efficiency they provide. It also seems to provide some SSD caching on writes, which I think is important.

    2) Dell Compellent. A clear contender; it does things a bit differently than the 3PAR, but it has the Data Progression I think we need.

    3) Oracle ZFS Storage Appliance. My boss wants me to look at what Oracle can deliver. From what I can read out of the specs and datasheets it provides lots of oomph and gives several advantages with regard to databases, but it lacks basic VMware functionality like VAAI.

    4) Going the Hyper-V Storage Spaces route. Maybe it's the way to go: switch out good ol' VMware and go with a Storage Spaces cluster. The main issue is a severe lack of certified JBOD chassis for it.

    5) VMware VSAN. I totally dig what VMware is doing. It's smart, scalable, secure, and hopefully fast. But it's awfully new, still in beta.

    It does provide me with an opportunity to save a bunch of money on storage, though, and we can give our Oracle/Linux machines local storage instead, or create a VM with an NFS system for the drives.

    For switching I'm thinking 10/40GbE for storage, and for VMware/Hyper-V maybe even considering InfiniBand.

    Anyhow.. any input would be appreciated.

    I would not dare to create such a large RAID5 set, especially with WD Green drives. Just google "wd green tler".

    They are known to drop out of RAIDs, and I cannot even imagine the rebuild times of a 4TB array. I would just create them as basic disks and find some sort of software that can replicate the data across more than one location.

    Regarding your stripe size and cluster sizes, I would probably just use the default values. You normally only change those if you see highly sequential writes, etc. In normal Windows usage you will mostly see random reads/writes, and a lot of them will be small IOs.

    Your RAID5 controller is probably not hardware-assisted, and it just might not be able to perform any better.

  15. Yes of course I have data from those who have reported the issues with Samsung as well as all the other brands. No one is excluded from the pitfalls of new SSD tech on consumer grade SSDs. It is what it is - immature tech with lots of issues still to be resolved. SLC could certainly be better if offered at a competitive price to the public, but it is not. There is a lot of knowledge and experience with SLC.

    Please, if possible, share that information with the rest of us then. It would be interesting to see return rates from large e-tailers like Newegg, for example. I doubt the vendors themselves would give out that information freely.

    My experience is only from installing about 100 830s in our corporate environment; so far we have not seen any errors.

    We selected the 830 because we had good experience with the Dell-branded 470 SSDs (although they were slower devices), and personally I've had issues with SandForce-based SSDs at home.

    My point was that Samsung's SSDs aren't any more reliable or compatible than any other brand of SSDs, regardless of their experience, so it's not appropriate to conclude they are a better choice.

    Do you have any hard data to back that claim up, or is it just something you made up?

    It is correct that SSD technology is inherently immature in its current state, or rather because of the selection of MLC technology as market-leading.

    But that does not speak to any differences in controller tech, NAND selection/binning, research/development capacity, or similar things that differ greatly between vendors.