Darking

Member
  • Content count

    240
  • Joined

  • Last visited

  • Days Won

    1

Darking last won the day on May 16 2015

Darking had the most liked content!

Community Reputation

2 Neutral

About Darking

  • Rank
    Member

Contact Methods

  • Website URL
    http://www.darking.dk
  • ICQ
    0

Profile Information

  • Location
    Denmark
  1. Hi. Do you have any plans to test/review Windows Storage Spaces Direct? It seems like an interesting upgrade to the storage pools from 2012 R2, and I see it as a valid and real challenger to VMware's VSAN/ESXi dominance in that market space. For it to succeed it needs to perform, which the old Storage Spaces never really did. Microsoft this time around is claiming figures like 6M IOPS, 1 Tbit/s transfer rates, etc. But what are the real-world numbers? Hope you get the time to test this exciting software-defined storage solution.
  2. It's quite an impressive system, although as a small enterprise (1,100 employees in Denmark) I can't even imagine the kind of environment that needs 60PB and 12 million IOPS of flash storage. That is about as insane as 70GB/s reads and 35GB/s writes.
  3. SQL Server Performance Comments

    We currently run a couple of SQL servers on our 3PAR storage, and we use AO (Adaptive Optimization) to move active blocks to SSDs, so we do not really run an all-flash solution. But since we have a fairly big flash capacity (10TB), about 70% of the databases end up on the flash tier. Writes still land on the 15k tier, though. Our installation does not have very high I/O requirements, so it is kind of overkill, but the low response time is an advantage: 0.1ms is certainly better than the 6-7ms you get from traditional spinning storage.
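    (A rough, hypothetical illustration of why that latency gap matters; the read count and the queue-depth-1 assumption are made up for the example, not measured on our systems:)

        # Hypothetical: storage wait for a query that issues its random reads
        # one at a time (queue depth 1), at the two response times above.
        random_reads = 10_000            # assumed number of synchronous reads
        flash_latency_ms = 0.1           # ~0.1 ms per read on the SSD tier
        disk_latency_ms = 7.0            # ~6-7 ms per read on spinning disks

        flash_wait_s = random_reads * flash_latency_ms / 1000
        disk_wait_s = random_reads * disk_latency_ms / 1000
        print(f"flash tier:    {flash_wait_s:.1f} s of storage wait")   # 1.0 s
        print(f"spinning tier: {disk_wait_s:.1f} s of storage wait")    # 70.0 s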
  4. It makes one think about that particular drive when you read reviews on the internet and see what Backblaze has experienced with that exact model. Over the last 6 months, 57% of reviewers gave it a 1-star rating, most of them due to drives dying after a few years of use. https://www.backblaze.com/blog/3tb-hard-drive-failure/ The report, albeit maybe flawed due to sourcing issues, is a good indication of the general state of that particular drive. They have a whopping 6% of their initial drives still running in production; 1,342 (32%) died in production, and another 2,597 (62%) were removed from the chassis and retested. 75% of those removed drives died under test!
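    (Quick sanity check of those numbers, back-calculated from the percentages quoted above; my own rough arithmetic, not Backblaze's own figures:)

        # Approximate fleet size, reconstructed from the 94% of drives that
        # were either dead or pulled (32% + 62%).
        died_in_production = 1342
        pulled_and_retested = 2597
        accounted_for = died_in_production + pulled_and_retested
        fleet = round(accounted_for / 0.94)                      # roughly 4,190 drives
        still_running = fleet - accounted_for

        print(f"died in production:  {died_in_production / fleet:.0%}")   # ~32%
        print(f"pulled and retested: {pulled_and_retested / fleet:.0%}")  # ~62%
        print(f"still in production: {still_running / fleet:.0%}")        # ~6%
        print(f"pulled drives failing the retest: ~{round(pulled_and_retested * 0.75)}")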
  5. === Disclaimer: I'm not a storage vendor employee, but I do own several vendors' equipment === ;-) Now, I think most people can agree that disruptive upgrades are an unwanted risk to a platform in general. To me it would be close to unacceptable, even though I'm not running an "enterprise scenario" (we only have 1,200 employees; in EMC terms that would be SMB). I'm not exactly sure why a disruptive upgrade would be needed, but from what I've read it has something to do with changes to the metadata in the array. I am not sure why it can't handle more than one metadata type and just do an internal migration of the data to a new set, but I'm sure there are technical reasons why it's not possible. EMC should look into that and implement it so future updates would not put them in the same situation. That of course depends on their architecture being in a state that allows for it. If I were an EMC customer, I would assume that my storage partner or EMC directly would provide me with some sort of migration path in the form of a physical array, so the data could be replicated and the unit either replaced or upgraded without downtime on my production environment. I assume that is what EMC means by their press release/statement. If not, that is clearly unacceptable. Relying on downtime and/or risking data loss in case of inadequate backups is not acceptable in any form, even in VDI solutions. My newest storage array (a 3PAR 7200) will be getting inline deduplication soon (flash only, unfortunately), and I doubt HP would allow that upgrade to be disruptive. It simply isn't "enterprise" to do so. Regards, Christian
  6. I must say this is a fairly aggressive pricing strategy. By the way, "Thin Deduplication" as you call it is not really deduplication; it's just their ASIC being able to do zero detection and only write actual data (plus metadata) rather than zeroed blocks. That gives a space-saving benefit, but it is not comparable to true deduplication. HP has, however, already implemented deduplication on the HP 7450 AFA, and they have indicated that the deduplication (again using the ASIC) will trickle down to legacy arrays (it might only be supported on all-flash CPGs; a CPG is sort of like a pooled portion of storage built from thousands of small ~3GB RAID sets). I expect it will come to the 7200 AFA as well.
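    (A minimal sketch of the difference I mean, in Python; this is just the concept, not how the 3PAR ASIC actually implements either feature:)

        import hashlib

        BLOCK = 16 * 1024  # assumed block size, purely for illustration

        def zero_detection(blocks):
            """'Thin' savings only: skip blocks that are entirely zero."""
            return [b for b in blocks if any(b)]

        def true_dedup(blocks):
            """True dedup: keep one copy of every unique block, keyed by content hash."""
            unique = {}
            for b in blocks:
                unique.setdefault(hashlib.sha256(b).hexdigest(), b)
            return list(unique.values())

        blocks = [b"A" * BLOCK, b"A" * BLOCK, b"A" * BLOCK, bytes(BLOCK), b"B" * BLOCK]
        print(len(zero_detection(blocks)))  # 4 -> only the all-zero block is skipped
        print(len(true_dedup(blocks)))      # 3 -> the repeated "A" blocks collapse to one copy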
  7. Do you know why they chose the 15mm form factor? I thought the industry standard was 7mm these days?
  8. Storage Refresh 2014

    None of those are server-grade SSDs, though. The Samsung SM825, Intel S3700, or SanDisk LB806r are all meant to be run in servers, especially since most of the time they will run in RAID 1 or RAID 5 configurations where you cannot send TRIM to them. Therefore garbage collection is something the SSD manufacturers tune so the drive may not hit the 500MB/s peak speeds, but keeps a decent steady-state speed of e.g. 200-250MB/s instead. I'm sure Kevin or Brian can chime in when they get off Christmas duty.
  9. Storage Refresh 2014

    I doubt they issue TRIM to the disk; I suspect they use enterprise SSDs with proper garbage collection that does not require the drive to receive a TRIM to initiate it. With regards to 3PAR, I would not worry so much about the whole RAID 5 debacle. They use what they call chunklets: basically a 1GB (can also be less) chunk of a random disk. They then use 4 of these chunks to create a RAID 5 set. This gives two distinct advantages: 1) since your volume is created out of these chunk groups, you can scale across all disks easily; 2) it allows spare capacity on the drives to be used as hot-spare space, meaning if you do lose a physical drive, you can most likely rebuild your RAID 5 onto existing spare chunks. Since it again uses all drives in the array, it does not carry the same danger of lacking a built-in hot-spare disk. Dell does not allow you to make RAID 5 on disks larger than 2TB (I think it's 2, might be 1TB), and I suspect that the 1.6TB MLC disks have a lower failure rate, or that the rebuild is much faster, thereby minimizing the chance of a disk failure at rebuild time.
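    (A toy model of the chunklet idea in Python; the drive counts and the allocation logic are made up for illustration, not HP's actual algorithm:)

        import random

        # Every drive is split into 1 GB chunklets, each RAID set draws its
        # members from different physical drives, and leftover chunklets act
        # as distributed spare space.
        DRIVES = 24
        CHUNKLETS_PER_DRIVE = 900        # e.g. a 900 GB drive -> 900 x 1 GB chunklets
        SET_WIDTH = 4                    # chunklets per RAID 5 set, as described above

        free = {d: CHUNKLETS_PER_DRIVE for d in range(DRIVES)}

        def allocate_raid_set():
            """Pick SET_WIDTH chunklets, each from a different drive with free space."""
            drives = random.sample([d for d, n in free.items() if n > 0], SET_WIDTH)
            for d in drives:
                free[d] -= 1
            return drives

        raid_sets = [allocate_raid_set() for _ in range(1000)]

        # If a drive fails, its chunklets are rebuilt onto spare chunklets on
        # the surviving drives -- no dedicated hot-spare disk is needed.
        failed = 3
        to_rebuild = sum(1 for s in raid_sets if failed in s)
        spares_left = sum(n for d, n in free.items() if d != failed)
        print(f"chunklets to rebuild: {to_rebuild}, spare chunklets available: {spares_left}")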
  10. Storage Refresh 2014

    Received my HP offer today. A bit of a different setup they came up with to handle my requirements: 28 400GB SLC SSDs (8.4TB usable with RAID 5 8+1), 36 300GB HDDs (7.8TB usable with RAID 5 8+1), and 12 4TB drives (26.2TB usable with RAID 6 4+2). It lives up to my capacity requirements and gives me 32 free slots for expansion in 2.5-inch and 12 in 3.5-inch, and I'm certain the IOPS will be fine. The main concern is the price: I was quoted something that is... 2.5 times more than the above Compellent configuration. I know, I know, I will get more IOPS out of this box, but I don't need 50k+ IOPS. I might need 10 or 20k down the line, but not 50. The problem is that I expect to want more data on fast media like SSD over time, especially with Lync becoming our primary telephony solution, and our ever-growing SharePoint installation wants more IOPS. Hopefully Adaptive Optimization (HP) and Data Progression (Dell) will handle that. I am not sure how Dell will lay out the disks; I've asked them to expand on their initial offer. These are not official bids, I just wanted an indication of pricing. HP's SSDs are expensive, but what really costs is the software licensing (around two-thirds of the total). I am sure they are negotiable on that when it comes down to it.
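    (Rough arithmetic on how the usable numbers fall out of those RAID layouts; the quoted figures come out lower, presumably because of spare chunklets and TB-to-TiB conversion, so this is only the raw RAID yield:)

        def usable_tb(drives, size_tb, data, parity):
            """Raw usable capacity for a given RAID data:parity ratio (no sparing)."""
            return drives * size_tb * data / (data + parity)

        print(f"{usable_tb(28, 0.4, 8, 1):.1f}")   # ~10.0 TB raw vs the 8.4 TB quoted (SLC, R5 8+1)
        print(f"{usable_tb(36, 0.3, 8, 1):.1f}")   # ~9.6 TB raw vs the 7.8 TB quoted (HDD, R5 8+1)
        print(f"{usable_tb(12, 4.0, 4, 2):.1f}")   # ~32.0 TB raw vs the 26.2 TB quoted (NL, R6 4+2)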
  11. Storage Refresh 2014

    Merry Christmas! Got an offer in from a vendor for a Compellent array with redundant controllers, including the following: 6 400GB SLC SSDs, 12 1.6TB MLC SSDs, and 24 2TB NL-SAS disks. Included are Data Progression and the VMware add-ons. The price seems OK; then again, I'm not sure what an OK price is for that kind of equipment. I'm hoping to get an HP offer in on Monday for a 3PAR solution.
  12. Hi Sparky. First of all: if you need the four-processor architecture, forget everything I'm writing below. But if you're inclined towards HP, get the 380 G8 instead, in either the e or p model depending on your needs. It allows you to have up to 25 disks in-chassis, and although they are 2.5" disks and you can't get 4TB SATA or anything like that, you can get nice 1/1.2TB 10k spindles for your data and 2.5-inch SSDs for whatever needs to be loaded fast. With your current needs the server allows you to grow without spending a lot of rack space on a 560 + JBOD.
  13. Edge Memory PE236779 (PCIe SSD)

    If the choice is between the two, I would go with a company I've actually heard of, in this case Asus. The Edge might be an awesome product, but with limited possibilities for support, I reckon it's some sort of small Taiwanese producer that might be gone in the blink of an eye.
  14. In my mind, there is no doubt that this will happen to more SSD manufacturers over the next few years. I reckon only the NAND producers will end up being able to carve up the market between them.
  15. Storage Refresh 2014

    Yeah, well, the problem with EqualLogic tiering is pretty much the requirements. In EqualLogic storage you can have a storage group consisting of 4 pools with 4 members at most, but volumes can only reside on 3 arrays at a time, meaning you cannot buy, say, a PS6110S and let it handle the heavy IOPS from all your HDD arrays. It's one of the downsides of each chassis having its own set of controllers. Also, EqualLogic does not recommend that you mix and match things like disk speeds and RAID types in the same pool. The boxes do scale insanely well because of the design, but it has its downsides, I'm afraid. I have no complaints about support or anything of that nature; both the guys in Ireland and the guys in Nashua that I've talked with over the years have been excellent, although I wish they had more people who actually know Linux and how open-iscsi works.
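    (A tiny illustrative model of that constraint; the member names and pool layout are made up, and the limits are just as I understand them:)

        # A volume lives inside exactly one pool and is striped across at most
        # 3 members of that pool -- so a single SSD member (e.g. a PS6110S) in
        # one pool can never offload volumes that live in the other pools.
        group = {
            "pool_ssd":  ["PS6110S"],
            "pool_sas1": ["SAS-member-1", "SAS-member-2"],
            "pool_sas2": ["NL-member-1"],
        }

        def members_serving(volume_pool):
            """Members a volume can be striped across: only its own pool, max 3."""
            return group[volume_pool][:3]

        print(members_serving("pool_sas1"))  # ['SAS-member-1', 'SAS-member-2'] -- no SSD help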