
Storage Refresh 2014



#1 Darking (Member, 237 posts)
Posted 25 November 2013 - 07:58 AM

So the time has come: my company is looking to purchase new hardware.

 

I've sort of settled on continuing with VMware for the next few years and slowly moving stuff to either Azure or vCloud, but for now I need hardware. Servers are pretty much settled: I'm looking to switch from Opteron-based Dell machines to an Intel platform on the 2697 v2 CPUs with plenty of RAM, somewhere between 384 and 768GB. The server vendor is not set in stone; it's pretty much going to be whoever can also deliver me some nice storage.

 

Our storage today is Dell EqualLogic. It has served us well and it's easy to manage. With that said, we have moved to a more write-intensive environment (85% of our I/O is writes), and the tiering in the EqualLogic solution is close to horrible, especially if you have more arrays than you can fit into a single pool.

 

We are looking at around 48TB of effective capacity over the next few years. It's a rough estimate, because who the hell knows what the company is planning to do; I sure don't, and I doubt management does either, so I need to be able to scale accordingly. Today we use around 26TB.

 

Our environment is 99.5% VMware-based, with a few standalone Oracle servers. They come in at a hefty total of 1TB of storage need.

 

We'd prefer an environment that sits on one storage system, but it's not 100% needed. Our standalone Oracle installation can run on local storage (maybe some SSDs for performance) and we'll be just fine.

 

With that in mind, I'm looking at the following vendors:

 

 

1) HP 3PAR 7400. I've read up on it, and it looks like pretty much the smartest all-round solution I can get for my money. I like the idea of micro-RAIDs and the full disk efficiency they provide. It also seems to offer some SSD caching on writes, which I think is important.

 

2) Dell Compellent. A clear contender; it does things a bit differently than the 3PAR, but it has the Data Progression tiering I think we need.

 

3) Oracle ZFS Storage Appliance. My boss wants me to look at what Oracle can deliver. From what I can read in the specs and datasheets it provides lots of oomph and gives several advantages with regard to databases, but it lacks basic VMware functionality like VAAI.

 

4) Going the Hyper-V Storage Spaces route. Maybe it's time to switch out good ol' VMware and go with a Storage Spaces cluster. The main issue is a severe lack of certified JBOD chassis for it.

 

5) VMware VSAN. I totally dig what VMware is doing: it's smart, scalable, secure, and hopefully fast. But it's awfully new, still in beta.

It does give me an opportunity to save a bunch of money on storage, though, and we could give our Oracle/Linux machines local storage instead, or create a VM serving NFS for those drives.

 

 

 

On the switching side I'm thinking 10/40GbE for storage, and for VMware/Hyper-V maybe even considering InfiniBand.

 

 

Anyhow, any input would be appreciated.



#2 Brian (SR Admin, 5,213 posts)
Posted 25 November 2013 - 10:41 AM

Wow, so many choices... we're going to be with Dell next week looking at Compellent and the new 6.4 OS. Happy to report back on what we see there. We're also going to spend time with HP on the new 3PAR gear, but that's not set yet. As to VSAN, it sounds really interesting, but given its youth I'm not sure your timing is quite right for that one.

 

I'm sure Kevin can weigh in more on the interconnect side; we obviously have all three running in our lab.



 

#3 Kevin OBrien (StorageReview Editor, 1,426 posts)
Posted 26 November 2013 - 02:39 PM

I think this reply has a couple of answers to it. I'm going from some personal experience here, as we're going through a bit of a similar scenario where we have a huge need for I/O (more on the read side) for a growing VMware infrastructure. Our route was Server 2012 R2 on a Supermicro platform with hardware RAID versus Storage Spaces. This is completely built and supported internally, so it's not really the best fit for your scenario, but it allows me to comment on the performance aspects of Windows Server.

 

My first inclination, given the Dell servers you have and your experience with the EqualLogic platform, is to keep it in the family to maintain support and also keep access to techs who can connect all the dots from storage to servers if issues arise. Compellent is the step up in this case, where their platforms geared towards more demanding workloads make a good fit. I'll be perfectly honest when I say we haven't worked with the product lines from the other vendors you mentioned, although we are starting our review path with Dell's Compellent group now. We're going on-site with them next week for a deep dive and to prep for our review in the coming months.

 

On the Hyper-V Storage Spaces route, one area that really concerns me is the software RAID approach: what I've been seeing already with Windows Server 2012 and 2012 R2 is dreadfully slow compared to appropriate hardware RAID inside those same platforms. RAID0 (which would be laughable in this scenario) is one area where the software RAID has an upper hand, but as soon as you add in parity overhead, performance takes a huge decline. Don't get me wrong, Storage Spaces has a lot of potential, but as I work on building out our high-performance storage server, it's not the highest-performance option out there if you stick with software RAID.
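To put rough numbers on the parity overhead (this applies to any RAID implementation, hardware or software; the per-drive figure below is just an assumed ballpark for illustration, not a Storage Spaces measurement), here is a quick sketch of the classic RAID write penalty:

```python
# Rough, illustrative effective-write-IOPS estimate for different RAID levels.
# The per-drive IOPS figure and drive count are made-up example numbers,
# not measurements of Storage Spaces or any specific controller.

def effective_write_iops(drives, per_drive_iops, write_penalty):
    """Aggregate random-write IOPS the array can absorb, given the classic
    RAID write penalty (back-end I/Os generated per host write)."""
    return drives * per_drive_iops / write_penalty

DRIVES = 12
PER_DRIVE_IOPS = 150          # ballpark for a 7.2k nearline drive

for name, penalty in [("RAID0", 1), ("RAID10", 2), ("RAID5", 4), ("RAID6", 6)]:
    print(f"{name:6s}: ~{effective_write_iops(DRIVES, PER_DRIVE_IOPS, penalty):.0f} write IOPS")
```

The point is simply that single and double parity multiply back-end writes by roughly 4x and 6x, which is why a write-heavy workload feels the difference so sharply.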


#4 Darking (Member, 237 posts)
Posted 28 November 2013 - 05:20 PM

First of all, thank you both for answering :-)
 
Software-defined storage is definitely one of the futures of storage we will see; I have no doubt about it. As a concept it is sound, and using commodity hardware makes even more sense. But my spider senses _are_ tingling, and I am worried about going with v1.0 products like VMware's or Microsoft's solutions to lowering the IOPS/$ and GB/$. I believe the technologies they use are sound, but I am worried that they might not be ready for production. VMware themselves even say, "Hey, in VSAN 1.0, use it for test/dev labs and VDI deployments," and I'm kind of listening to that.
 
Unfortunately the Hyper-V route with stuff like Supermicro might work in the USA, but I'm having a hard time even finding a distributor of hardware that is certified for Storage Spaces here. Dell do claim they will support it, maybe in Q1 2014, but that doesn't exactly help given my rather hard deadline of out-of-service on the existing equipment in May 2014. Also the requirements for Storage Spaces feel a bit hefty: redundant HBAs, specialized dual-controller JBOD chassis... I'm basically just building a redundant Windows box that acts as a SAN using SMB, and I'm not sure I want that. Plus, like Kevin, I've also heard about the performance issues that at least plagued Server 2012, and I fear it might be the same for R2.
 
 
I had a talk with Oracle yesterday. The boxes they sell are powerful machines: they're based on ZFS, using a mix of MLC and SLC for read/write cache and larger disks for bulk storage. Apparently it's even quite affordable. VAAI support is in the works, but I could not get a definite answer as to when; "maybe December" was the closest.
 
 
Wednesday I'm having one of Dell Denmark's storage architects swing by my office to give me a run-through of the Compellent array. I kind of like what I've seen so far, and I hope to be pleasantly surprised.
 
3PAR is definitely also on my horizon, and I plan on talking with a vendor next week to get an introduction.
 
This is what I want to avoid in my new storage system:
1) Lack of space
2) Lack of performance
3) Too many hot spares (EqualLogic has no concept of global hot spares, so we have 14 across 7 arrays)
4) Bad tiering (I want the clever kind.. ;-) )
5) Downtime (I want better uptime than EqualLogic, preferably true zero-downtime firmware upgrades)
6) Poor quality control on firmware releases.
 
At the moment we have made a 31-point scoring sheet to fill in for comparison, so that is at least some sort of help in choosing.


#5 Brian (SR Admin, 5,213 posts)
Posted 29 November 2013 - 02:30 PM

LOL, no bad tiering?!?

We will report back on the Compellent trip next week and let you know what we see. Dell is very advanced on software tools and support, which is nice, especially if you use Linux or some of the large Microsoft apps.


 

#6 Darking (Member, 237 posts)
Posted 30 November 2013 - 03:31 PM

Yeah well, the problem with EqualLogic tiering is pretty much the requirements it puts on you.

 

In EqualLogic storage you can have a storage group consisting of at most 4 pools with 4 members each.

 

But volumes can only reside on 3 arrays at a time, meaning you cannot buy, say, a PS6110S and let it handle the heavy IOPS from all your HDD arrays. It's one of the downsides of each chassis having its own set of controllers. Also, EqualLogic does not recommend mixing and matching things like disk speeds and RAID types in the same pool.
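Just to make those limits concrete, here's a toy sanity check of the layout rules as I described them (4 pools per group, 4 members per pool, a volume spread across at most 3 members); the constants come from the numbers above and the code is purely illustrative, not anything EqualLogic ships:

```python
# Toy model of the EqualLogic layout limits described above.
# Constants mirror the post; nothing here is vendor code.

MAX_POOLS_PER_GROUP = 4
MAX_MEMBERS_PER_POOL = 4
MAX_MEMBERS_PER_VOLUME = 3

def check_group(pools):
    """Return a list of human-readable violations for a proposed layout.

    pools: dict mapping pool name -> list of member array names."""
    problems = []
    if len(pools) > MAX_POOLS_PER_GROUP:
        problems.append(f"group has {len(pools)} pools (max {MAX_POOLS_PER_GROUP})")
    for name, members in pools.items():
        if len(members) > MAX_MEMBERS_PER_POOL:
            problems.append(f"pool {name} has {len(members)} members (max {MAX_MEMBERS_PER_POOL})")
    return problems

# A volume that wants to lean on one SSD array (e.g. a PS6110S) plus several
# HDD arrays quickly runs into the 3-member limit:
volume_members = ["ssd-array", "hdd-array-1", "hdd-array-2", "hdd-array-3"]
if len(volume_members) > MAX_MEMBERS_PER_VOLUME:
    print("volume spans too many members:", volume_members)
```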

 

The boxes do scale insanely well because of the design, but it has its downsides, I'm afraid.

 

I have no complaints about support or anything of that nature. Both the guys in Ireland and the guys in Nashua I've talked with over the years have been excellent, although I do wish they had more people who actually know Linux and how open-iscsi works :)



#7 Darking (Member, 237 posts)
Posted 20 December 2013 - 04:44 PM

Merry Christmas!

 

Got an offer in from a vendor.

 

A Compellent array with redundant controllers, including the following:

 

6 400GB SLC SSDs

12 1.6TB MLC SSDs

and 24 2TB NL-SAS disks

 

Included are Data Progression and the VMware add-ons.

 

The price seems OK; then again, I'm not sure what an OK price is for that kind of equipment.

 

I'm hoping to get an HP offer in on Monday for a 3PAR solution :)




#8 Darking (Member, 237 posts)
Posted 23 December 2013 - 01:30 PM

Received my HP offer today.

 

They came up with a bit of a different setup to handle my requirements:

 

28 400GB SLC SSDs (8.4TB usable with RAID5 8+1)

36 300GB HDDs (7.8TB usable with RAID5 8+1)

12 4TB drives (26.2TB usable with RAID6 4+2)

 

It lives up to my capacity requirements and gives me 32 free 2.5-inch slots and 12 free 3.5-inch slots for expansion.

And I'm certain the IOPS will be fine.
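As a rough sanity check on the usable-capacity numbers in that quote, here's a back-of-the-envelope calculation. It only accounts for parity overhead, decimal-to-binary conversion, and an assumed ~10% spare-chunklet reservation (that reserve percentage is my guess, not an HP figure), so treat it as approximate:

```python
# Back-of-the-envelope usable-capacity check for the HP quote above.
# The 10% spare reservation is an assumption, not an HP-published figure.

TiB_PER_TB = 1000**4 / 1024**4       # decimal TB -> binary TiB (~0.909)
SPARE_RESERVE = 0.10                 # assumed spare-chunklet space

def usable_tib(drives, size_tb, data_disks, parity_disks):
    raw_tb = drives * size_tb
    parity_efficiency = data_disks / (data_disks + parity_disks)
    return raw_tb * parity_efficiency * (1 - SPARE_RESERVE) * TiB_PER_TB

print(f"SLC tier : ~{usable_tib(28, 0.4, 8, 1):.1f} (quoted 8.4TB)")
print(f"10k tier : ~{usable_tib(36, 0.3, 8, 1):.1f} (quoted 7.8TB)")
print(f"NL tier  : ~{usable_tib(12, 4.0, 4, 2):.1f} (quoted 26.2TB)")
```

The results land close to the quoted figures, which suggests (though this is only my inference) that the quote already reflects parity, spare space, and binary reporting, making it easier to compare like-for-like against the Compellent offer.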

 

The main concern is the price: I was quoted something that is roughly 2.5 times more than the above Compellent configuration. I know, I know, I will get more IOPS out of this box, but I don't need 50k+ IOPS; I might need 10 or 20k down the line, but not 50. The thing is, I do expect to want more data on fast media like SSD over time, especially with Lync becoming our primary telephony solution and our ever-growing SharePoint installation wanting more IOPS. Hopefully Adaptive Optimization (HP) and Data Progression (Dell) will handle that.

 

I am not sure how Dell will lay out the disks; I've asked them to expand on their initial offer. These are not official bids, I just wanted an indication of pricing.

 

HP's SSDs are expensive, but what really costs is the software licensing (around two-thirds of the quote). I am sure they are negotiable on that when it comes down to it.



#9 MRFS (Member, 190 posts)
Posted 23 December 2013 - 02:17 PM

>  28 x 400GB SLC SSDs (8.4TB usable with RAID5 8+1)

 

I'm very curious, because I don't already know the answer to this question:

 

What does HP do about TRIM with a RAID5 array of these SLC SSDs, if anything?

 

Would you be willing to ask them for us, and post their answer here?

 

 

RSVP and THANKS!



#10 MRFS (Member, 190 posts)
Posted 23 December 2013 - 02:32 PM

FYI:  I found a very interesting discussion of RAID-5 here:

 

http://www.standalon...-praise-raid-5/

 


#11 Darking (Member, 237 posts)
Posted 24 December 2013 - 04:27 AM

I doubt they issue TRIM to the disks; I suspect they use enterprise SSDs with proper garbage collection that doesn't require the drive to receive a TRIM command to kick in.

 

With regard to 3PAR, I would not worry so much about the whole RAID5 debacle. They use what they call chunklets: basically a 1GB (can also be less) chunk of a random disk, and then they use 4 of these chunks to create a RAID5 set.

 

This gives two distinct advantages (see the sketch after this list):

 

1) Since your volume is created out of these chunklet groups, it can easily scale across all disks.

2) It allows spare capacity on the drives to be used as hot-spare space, meaning if you do lose a physical drive, you can most likely rebuild your RAID5 onto existing spare chunklets. Since it again uses all drives in the array, you don't face the same danger of lacking a built-in hot-spare disk.
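Here's the sketch I mentioned: a toy chunklet allocator, purely my own illustration of the concept rather than HP's actual implementation, showing how small RAID5 sets built from per-drive chunklets can rebuild onto leftover chunklets when a drive dies:

```python
import random

CHUNKLET_GB = 1          # chunklet size used in the post
SET_WIDTH = 4            # chunklets per RAID5 set, as described above

class ToyChunkletPool:
    """Toy model of chunklet-style allocation; not HP's actual code."""

    def __init__(self, drives):
        # drives: name -> capacity in GB; every GB becomes one free chunklet
        self.free = {d: cap // CHUNKLET_GB for d, cap in drives.items()}
        self.raid_sets = []          # each set is a list of drive names

    def allocate_set(self):
        """Pick SET_WIDTH chunklets, each from a different drive."""
        candidates = [d for d, n in self.free.items() if n > 0]
        if len(candidates) < SET_WIDTH:
            raise RuntimeError("not enough free chunklets on distinct drives")
        chosen = random.sample(candidates, SET_WIDTH)
        for d in chosen:
            self.free[d] -= 1
        self.raid_sets.append(chosen)
        return chosen

    def fail_drive(self, dead):
        """Rebuild affected sets onto spare chunklets on surviving drives."""
        self.free.pop(dead, None)
        for rs in self.raid_sets:
            if dead in rs:
                spares = [d for d, n in self.free.items() if n > 0 and d not in rs]
                if not spares:
                    raise RuntimeError("no spare chunklet available for rebuild")
                target = spares[0]
                self.free[target] -= 1
                rs[rs.index(dead)] = target

pool = ToyChunkletPool({f"disk{i}": 400 for i in range(8)})
for _ in range(100):
    pool.allocate_set()
pool.fail_drive("disk3")     # rebuild lands on leftover chunklets, no dedicated spare needed
print("sets still referencing disk3:", sum("disk3" in rs for rs in pool.raid_sets))
```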

 

Dell does not allow you to make RAID5 on disks larger than 2TB (I think it's 2, might be 1TB), and I suspect that the 1.6TB MLC disks have a lower failure rate, or that the rebuild is much faster, therefore minimizing the chance of having a disk failure at rebuild time.
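On that rebuild-risk point, here's a quick back-of-the-envelope illustration of why bigger drives make single-parity rebuilds scarier. The unrecoverable-error rates are generic datasheet-style assumptions (1 in 10^15 bits for nearline HDDs, 1 in 10^16 for enterprise SSDs), not numbers from Dell or HP:

```python
# Rough odds of hitting at least one unrecoverable read error (URE) while
# reading the surviving drives during a RAID5 rebuild. UER values are
# generic datasheet-style assumptions, not vendor specifications.

def p_ure_during_rebuild(drive_tb, surviving_drives, uer_per_bit):
    bits_to_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - (1 - uer_per_bit) ** bits_to_read

HDD_UER = 1e-15   # typical nearline spec: 1 error per 10^15 bits read
SSD_UER = 1e-16   # enterprise SSDs are usually spec'd an order of magnitude better

print(f"2TB NL-SAS, 6-drive set   : {p_ure_during_rebuild(2, 5, HDD_UER):.0%}")
print(f"4TB NL-SAS, 6-drive set   : {p_ure_during_rebuild(4, 5, HDD_UER):.0%}")
print(f"1.6TB MLC SSD, 6-drive set: {p_ure_during_rebuild(1.6, 5, SSD_UER):.0%}")
```

Under those assumed rates the 4TB spinners carry a noticeably higher chance of tripping over an unreadable sector mid-rebuild than the 1.6TB SSDs, which would be consistent with a rule that steers big nearline drives towards RAID6.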



#12 MRFS (Member, 190 posts)
Posted 24 December 2013 - 01:33 PM

>  I doubt they issue TRIM to the disks; I suspect they use enterprise SSDs with proper garbage collection that doesn't require the drive to receive a TRIM command to kick in.

 

FYI: I've been referring others to xbitlabs.com, where they maintain a good comparative table for all the SSDs they have reviewed.

 

The measurement "After 30 Min. Idle" is the one that interests me the most, because it compares the effectiveness of GC before TRIM:

 

http://www.xbitlabs....nh_5.html#sect0

 

I took note of the wide variation in that one measurement across all of the SSDs listed in that table. Just visually, it appears that performance "After 30 Min. Idle" ranges from a low of about 19% to a high of about 98%:

 

THAT'S A VERY BIG RANGE.

 

You might want to show that table to the vendors you are talking to.

 

One of my main interests is the effectiveness of GC withOUT TRIM when multiple SSDs are members of a RAID array, e.g. RAID-0 for speed.

 

Here's that table from xbitlabs.com (see link above):

[Attached image: iometer.png]


#13 Darking (Member, 237 posts)
Posted 25 December 2013 - 03:03 PM

None of those are server-grade SSDs, though.

 

The Samsung SM825, Intel S3700, and SanDisk LB806R are all meant to be run in servers, especially since most of the time they will run in RAID1 or RAID5 configurations where you cannot send TRIM to them.

 

Therefore garbage collection is something the SSD manufacturers tune so the drive maybe doesn't hit 500MB/s peak speeds, but keeps a decent steady-state speed of, for example, 200-250MB/s instead.

 

I'm sure Kevin or Brian can chime in when they get off Christmas duty :P



#14 MRFS (Member, 190 posts)
Posted 26 December 2013 - 12:27 PM

>  None of those are server-grade SSDs, though.

 

Yes, I realize that, and also none of those use SLC NAND flash, as far as I know.

 

Nevertheless, I put more than the average amount of trust in the reviewers at xbitlabs.com, because they apply their testing in a consistent and rigorous manner.

 

And, one focus of mine is SSDs in RAID-0 arrays -- for tengo mucho speedo!

 

Thus, maybe we could persuade xbitlabs.com to do the same series of tests with some of the server-grade SSDs that you mention.

 

And, as you say, the firmware programming and "tuning" have a lot to do with steady-state performance, particularly after any given SSD has "filled up".

 

 

P.S. I'm trying to avoid SSDs that fall too far below their advertised performance throughout their factory warranty periods.




