mimi

InfiniBand SAN + new storage (Supermicro 2027R-AR24NV or Infortrend DSS 3016RT)


To start with: I don't have any experience with InfiniBand (IB)!

I'm still looking for a new SAN topology + storage (SMB3) for the VMMs on my 6 servers (each node: 2x CPU, 32GB RAM, 8x 146GB 10K, 4x 1Gbit, Hyper-V Server 2012 R2).

My new idea (it looks like better performance and is cheaper than 10GBase-T):
1x (2x) - VOLTAIRE 4036 VLT-30011 GRID DIRECTOR 40Gbps 36 PORT QDR INFINIBAND SWITCH [url=http://rover.ebay.com/rover/1/711-53200-19255-0/1?icep_ff3=2&pub=5574933291&toolid=10001&campid=5337060739&customid=&icep_item=131102510914&ipn=psmain&icep_vectorid=229466&kwid=902099&mtid=824&kw=lg]link1[/url] [url=http://www.mellanox.com/related-docs/prod_ib_switch_systems/4036.pdf]link2[/url]
6x (12x) - Qlogic QLE7340 Single-Port 40Gbps (QDR) InfiniBand® to PCI Express adapter [url=http://rover.ebay.com/rover/1/711-53200-19255-0/1?icep_ff3=2&pub=5574933291&toolid=10001&campid=5337060739&customid=&icep_item=251270116543&ipn=psmain&icep_vectorid=229466&kwid=902099&mtid=824&kw=lg]link1[/url] [url=http://www.qlogic.com/OEMPartnerships/Dell/Documents/ds_QLE7340.pdf]link2[/url]
6x (12x) - HP 4983585-B22 2M QSFP QDR DDR SDR Infiniband Cable - [url=http://rover.ebay.com/rover/1/711-53200-19255-0/1?icep_ff3=2&pub=5574933291&toolid=10001&campid=5337060739&customid=&icep_item=190539934026&ipn=psmain&icep_vectorid=229466&kwid=902099&mtid=824&kw=lg]link1[/url]
1st question: Will this work together correctly? Do you have any better idea (model/type/price)?

Right now I have 2 favorites for my new storage:

Type 1 - Supermicro 12Gb/s SAS3 file server (homemade):
1x Supermicro SuperStorage Server 2027R-AR24NV [url=http://www.storagereview.com/supermicro_superstorage_server_2027rar24nv]link1[/url] [url=http://www.supermicro.com/products/system/2U/2027/SSG-2027R-AR24NV.cfm]link2[/url] + 2x CPU + RAM
2x Adaptec 81605ZQ SAS3 RAID card with maxCache Plus caching and tiering software [url=http://www.adaptec.com/en-us/products/series/8q/]link1[/url]
2x (4x) Qlogic QLE7340 single-port 40Gbps (QDR), or 1x (2x) dual-port card
2x (4x) InfiniBand cables
1x Windows Server 2012 R2
(and maybe a second one later for VMM replicas :))

Type 2 - Infortrend DSS 3016RT
1x Infortrend DSS 3016RT [url=http://www.infortrend.com/global/products/models/ESDS%203016R]link1[/url] (dual controller, high IOPS - up to 1.3M IOPS and 5,500MB/s throughput - expandable up to 316 drives)
1x Software: Automated Storage Tiering (should be available for this model around mid-February) [url=http://www.infortrend.com/global/Solutions/Softwares/storage_tier]link[/url]
2x Host interface (8Gb/s or 16Gb/s Fibre Channel, 1Gb/s or 10Gb/s iSCSI, or 6Gb/s SAS)
2nd question: Which host interface option should I choose for my new SAN for maximum throughput?
- Four 16Gb/s Fibre Channel ports (2 per controller)
- Four 10Gb/s iSCSI ports, SFP+ or RJ-45 (2 per controller)
- Eight 8Gb/s Fibre Channel ports (4 per controller)
- Eight 1Gb/s iSCSI ports (4 per controller)
- Four 6Gb/s SAS ports (2 per controller)
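For reference, a quick back-of-envelope comparison of the nominal aggregate bandwidth of those options (my own rough sketch; raw line rates only, ignoring protocol overhead and how the ports split across the two controllers):

[code]
# Rough aggregate bandwidth per host-interface option (nominal line rates).
options_gbps = {
    "4x 16Gb/s FC":    4 * 16,  # 64 Gb/s
    "4x 10Gb/s iSCSI": 4 * 10,  # 40 Gb/s
    "8x 8Gb/s FC":     8 * 8,   # 64 Gb/s
    "8x 1Gb/s iSCSI":  8 * 1,   #  8 Gb/s
    "4x 6Gb/s SAS":    4 * 6,   # 24 Gb/s
}

for name, gbps in sorted(options_gbps.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {gbps} Gb/s aggregate (~{gbps / 8:.0f} GB/s)")
[/code]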

3rd question: What else must I buy (converters/cables/etc.) to connect it to the IB SAN?

4th question: Which of these storage types do you prefer, 1 or 2? Why? What could I do better?

Thanks for all opinions.


Let's take a quick step back here for a second. What are you looking to use this storage for? What are the goals in terms of throughput and storage capacity?


OK, right now I have 2x 148GB system HDDs (Hyper-V Server 2012 R2) in each server (they don't run as a cluster solution).

The remaining 6x 148GB drives are in RAID5 on each node. The disks are 10K RPM SAS 3Gbps. About 25 VMs are running now (+-5 VMs per server) - and I need more GBs now (shared storage?!). 3TB of storage in total (250GB free :-( - it's horrible). Plus I have mounted about 2TB via iSCSI to some VMMs from one of our Synology units. On the second Synology I back up all of these VMMs.

Within two months I have to migrate another 20 VMMs (5TB) here.

CPU performance is OK now - 5% utilization.

Latest measurements with the Dell Performance Analysis Collection Kit (DPACK) on these 6 servers:

- Throughput 142MB/s

- IOPS 2950 at 95% 3300 at 99% and at peak 5300

- Read / write ratio 80% / 20%

- Latency read / write: 150ms / 30ms

- Memory used 27%

- Average IO size read / write: 40.5kB / 23kB
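As a rough sanity check on these numbers (my own back-of-envelope sketch, assuming the throughput and IOPS figures come from the same collection window):

[code]
# Rough cross-check: throughput ~= IOPS x weighted average IO size.
iops_95 = 3300                       # IOPS at the 95th percentile
read_ratio, write_ratio = 0.80, 0.20
avg_read_kb, avg_write_kb = 40.5, 23.0

weighted_io_kb = read_ratio * avg_read_kb + write_ratio * avg_write_kb  # ~37 kB
estimated_mb_s = iops_95 * weighted_io_kb / 1000                        # ~122 MB/s

print(f"Estimated throughput at 3300 IOPS: ~{estimated_mb_s:.0f} MB/s")
print("Reported throughput: 142 MB/s, so the figures are in the same ballpark.")
[/code]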

This is all I know about my servers.

IB looks cheaper than 10GBase-T.

I don't need CPU power, I only need "fast, cheap, big shared storage" :-)


On your performance figures, you aren't pushing a lot of bandwidth, but you are running into an I/O and latency crunch. I'm wondering if you could actually get by on traditional 1GbE between the servers. If you leverage SMB shares for that Hyper-V environment, it will load balance across multiple NICs, giving you increased bandwidth and some failover characteristics.

Also, apologies if this is obvious, but by VMMs do you just mean VMs? I want to make sure I'm not overlooking something.


Excellent - now, what are your working budget and total capacity requirements? And what is your storage utilization currently?

On the performance figures you mentioned above from Dell DPACK, is that total for the entire group, or totals per server?


Budget: 25k-30k USD :-(.

Required capacity: 10TB.

I don't have a SAN network or "production" storage.

Only 1x Synology DS214 (2x 2TB mirror) for media and slow data (10x LUNs), and 1x Synology DS414 (3x 4TB RAID5, 1x LUN) for MS DPM backups of the VMs.

The DPACK data was collected from the 6 Hyper-V servers - not from the VMs.


I'll start forming a plan later tonight revolving around option 1, the Supermicro SAS3 server with Windows Server 2012 R2. I'm somewhat partial to this option, as it is exactly what I am using in our own lab at the moment, with almost the exact hardware you were looking at (2 of the same Adaptec RAID cards) as well as 10/40GbE/IB Mellanox HBAs for interconnects out of the box.

Honestly, that budget is very doable, and I might actually suggest holding off on the network upgrades and focusing on the array itself right now. Your main pain point right now is IOPS and latency, not bandwidth restrictions.

Also, on the VM/environment load stats, could you clarify whether those are totals per server or totals for the whole group... i.e., is it 142MB/s total across all 6 servers, or 142MB/s per server (852MB/s for the 6 servers combined)?


Also, I didn't see it mentioned, but what is your current switch and how many ports are utilized?


Almost forgot - this will impact parts availability and pricing: where are you located and what retailers can you purchase from?

On the storage side, for that 10TB of capacity, is that an even split across all the VMs in your environment? Or are the active VMs smaller, and you are also adding some file storage into this for shared access? I'm trying to mentally sort out whether you could tier off some of that storage to help lower the cost of the drives you'll need to purchase.


I'm from the Czech Republic - Europe - EMEA.

Purchasing from around the world isn't a problem, and a lot of distributors and resellers are located here at home. The keyword is the part number :-)

30% Graphical data (Corel, Adobe, .EPS, fonts, projects and archives for customers)

05% Media for installation

20% Mail servers (Exchange, Merak Mail, SQUID)

30% Web servers + FTP (IIS 80%, Apache)

10% Databases (SQL, MySQL) for web servers

05% Infrastructure VMs


When it comes down to building the server, that part is easy. CPU load is minimal (0-4% on balanced), so low-end E5 CPUs will do the trick and still keep up with demand. RAM load is also fairly low: where I have 32GB installed, only about 6GB is used. On the networking side, I think you would be best served right now by sticking with 1GbE, since you have plenty of ports at your disposal. Your biggest concern is getting enough aggregate bandwidth from your new Server 2012 NAS/SAN to the switch, since that traffic will be the greatest. Thankfully, by leveraging Server 2012 across the board, you can take advantage of SMB3 for automatic load balancing across NICs, and with iSCSI, if you end up using that, MPIO is trivial to set up. While I'd be the first one to jump on the Infiniband bandwagon with its incredible bandwidth, you aren't even surpassing what two 1GbE ports can handle right now. Save that money and invest it on the storage side.

I've been running numbers through my head, and one of the pain points you'll run into is the cost of the storage hardware itself (HDDs and SSDs). I wouldn't be surprised if that alone makes up half of the available budget. If you are looking at 2.5" HDDs and SSDs alone, to meet your capacity requirements in that 2U 24-drive chassis with stronger RAID types (RAID60, RAID50), you're looking at 900GB 10K SAS drives. In US pricing right now I'm seeing options in the $350-450 range. My mental layout consisted of 12 drives per RAID card: 10 HDDs in RAID50 or RAID60, paired with 2 SSDs in RAID1 for caching.

Using 900GB HDDs, that would give the following storage config:

RAID60 (10x900GB per card): 5.4TB per card, 10.8TB total

RAID50 (10x900GB per card): 7.2TB per card, 14.4TB total
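If it helps to see the math, here is a minimal Python sketch of that capacity calculation (my own illustration, assuming two parity groups per RAID card):

[code]
# Usable capacity for RAID50/RAID60 built from equal-sized RAID5/RAID6 sub-groups.
def usable_tb(drives, drive_gb, groups=2, parity_per_group=1):
    """parity_per_group = 1 for RAID50 (RAID5 groups), 2 for RAID60 (RAID6 groups)."""
    data_drives = drives - groups * parity_per_group
    return data_drives * drive_gb / 1000.0  # decimal TB, matching the figures above

# 10x 900GB HDDs per RAID card, split into two groups:
print(usable_tb(10, 900, parity_per_group=1))  # RAID50: 7.2 TB per card
print(usable_tb(10, 900, parity_per_group=2))  # RAID60: 5.4 TB per card
# With two RAID cards that doubles to 14.4 TB (RAID50) or 10.8 TB (RAID60).
[/code]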

Now for the really interesting thought experiment:

A 900GB 10K SAS HDD goes for $350-450 depending on model and vendor pricing.

A 960GB Micron MLC SSD goes for $453 on Amazon.com right now.

You could, in theory, build an all-flash NAS/SAN yourself with midrange SSDs for roughly the same cost as using 10K SAS HDDs. If you went that route you'd want to take wear leveling and endurance into consideration, and probably over-provision the drives down to 900GB for better long-term write performance, but it is one interesting option.

Picking the best storage medium for your NAS/SAN

One thing that is hard for me to figure out is how much of your capacity could sit on slower spinning drives and how much of it needs faster storage. If you only needed 5TB of fast storage and 5TB of slower storage, you could lower your costs by adding in 3TB or 4TB 3.5" HDDs.

Backup Solution

Do you have any backup solution in place right now that will live on completely different storage hardware?

Best Chassis

The more I think about your current and most likely growing storage demands, the more I think the SAS3 chassis you had picked out might not be the best fit. Don't get me wrong, it's an excellent platform, but it also limits what drives you can install in your server. You may want to end up getting a larger platform with 3.5" bays so you can have a mix of 2.5" HDDs and SSDs and also be able to include 3.5" HDDs without having to purchase a separate JBOD.


I tried to calculate it for the 1.2TB Hitachi Ultrastar C10K1200 (10,000RPM / SAS2 / 64MB / 2.5") - it is about 450 USD too.

It looks nice. The bold options are possible for me.

12x HDD + 2x SSD for cache = 14 positions out of 24 = 10 free for a RAID10 SSD tier?

Usable capacity in GB:

GB per HDD     RAID50    RAID60    RAID50    RAID60    RAID50
900             9,000     7,200     7,200     5,400     5,400
1,200          12,000     9,600     9,600     7,200     7,200
# of HDDs          12        12        10        10         8

I prefer RAID50, for more free space in the box.

Backups will be outside of this box - as I do now - on some "Synology with big 3.5" HDDs".

Maybe it would be better to put 2x external SAS cards in the box, for connecting a JBOD in the future.


Well if you want to build the platform and leave room for storage growth, I might suggest buying as little as you need now and counting on storage density/performance increases coming in by the time you need to expand.

Basically what I mean is: factor in the cost of those 12 HDDs and probably work in four 400-480GB SSDs for caching, but only work in a single RAID card at this time. If you go with the Adaptec card you mentioned originally, I'd connect all 16 drives through that, direct-connected. That would leave 8 open drive bays to connect to another RAID card (which could include external SAS ports to connect to a JBOD) for future expansion. One thing to keep in mind with the direct-attached backplane in that SAS3 Supermicro chassis is that storage pools can't span multiple RAID cards. So if you connected 12 HDDs and 2 SSDs to that first RAID card, you'd have two open drive bays for expansion/hot-spares on that RAID card, and 8 for another RAID card. They can't cross-talk to each other.

I'd most likely use the RAID cards from this page for that configuration:

http://www.adaptec.com/en-us/products/series/8q/

81605ZQ for the 16 internal drives

8885Q for the remaining 8 internal drives and two ports for external expansion

The beauty of the 8885Q would be attaching a large JBOD and using some of the internal drive bays as an SSD cache pool to accelerate that storage.


I tried a bit of calculation:

Item                        pcs   USD/pcs   USD total
Server 2027R-AR24NV           1      2200        2200
Box for 2x OS HDDs            1        50          50
Micron P400e 100GB SSD        2       170         340
Xeon E5-2620 v2               2       450         900
RAM 8GB ECC DDR3              4        80         320
Adaptec 81605ZQ               2      1100        2200
HDD 1.2TB 2.5" 10K           12       450        5400
SSD 960GB 2.5"                2       450         900
MS Windows Server 2012 R2     1       700         700

Totals: $13,010.00 (all) / $6,300.00 (drives only) / $6,710.00 (storage only)

Only storage: 6.7k USD

Only drives: 6.3k USD

Total price: 13k USD


I have now received an offer for an HP MSA 2040 super bundle:

1x HP MSA 2040 SFF chassis (24x SFF bays, 2x MSA 2040 SAN controller bays)

2x HP MSA 2040 SAN controller with 4GB cache (4x 16Gb/8Gb FC SFPs per controller, 1GbE/10GbE iSCSI, no SFPs)

4x 200GB SSD (C8R19A) - RAID10

6x 600GB SAS 10K (C8S58A) - RAID5

4x 16Gb SFP (C8R24A)

Total: 12.5k USD

I would have to change the SFPs to 10Gbit/1Gbit (free) and buy 8x 1.2TB HDDs for RAID50 (4k USD).

Total: 16.5k USD (not homemade; introduced 08/2013, updated 12/2013; dual controller; with expander; 3-year NBD warranty; possible 2x 4x 16Gb FC or 2x 4x 10Gb SFP) - that's 3.5k USD more - but? I DON'T KNOW?!

PS: Thin provisioning and sub-LUN tiering are planned to be addressed with paging code in 2014.


That doesn't sound like a bad plan at all, considering you get a full warranty and support from HP on it. Also, it might not be a bad idea to keep the 10Gb interfaces and use some funds to purchase a switch with 10Gb uplinks that the SAN could be plugged into, and then attach your other systems to the 1Gb ports on it.

Given your available budget and if that SAN meets your performance goals, it might be smart to consider a faster or more capable backup solution.


We don't have any hands-on time with the HP MSA series (yet), but from what I've been told the units are well built, and of course you get the services option which you can't get with a roll-your-own build. If service and support are important, then going this route instead of building a solution makes a lot of sense.


Thanks Kevin and Brian for the help.

It looks like HP is the winner, at 95%.

Now I must decide, because I am within budget (still about 10k USD left). I could invest it in a new, separate SAN network. YES or NO?

I have two favorites:

1) IB QDR SAN, as I wrote at the start of this discussion (6x200 + 1x1300 + 6x75 = ~3k USD, x2 for redundancy = 6k USD)

2) 10Gbit SAN - 6x Intel X540-T2 10Gbit network cards, plus a 10Gb switch with 10Gb SFP+ uplink(s) (Netgear ProSAFE XS712T/XS708E) (6x500 + 2x1500 = 6k USD)

IB QDR is the same price as 10Gbit - so which is better for me?


On your server-to-SAN connections, I'm not sure you need to worry about including 10GbE cards for each host. You are looking at well under 100MB/s per host going off your load measurements. The advantage would be having 10GbE from the SAN to the switch, then having the hosts connect to the switch over 1-4 1GbE connections.

IB QDR is complete overkill for this environment. You are talking about a 4GB/s connection per link, and no future equipment can talk to it unless you add an IB card to each server. You are also looking at an older switch and older HBAs. I also can't find information on those HBAs supporting RDMA offloading to take advantage of the SMB3 benefits in Server 2012.
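To put rough numbers on that (a quick sketch, assuming the ~142MB/s DPACK figure is the aggregate across all six hosts, which is still being clarified above):

[code]
# Per-host load vs. per-link capacity for the interconnect options discussed.
link_mb_s = {
    "1GbE":            125,   # ~125 MB/s usable per port
    "10GbE":          1250,   # ~1.25 GB/s per port
    "QDR InfiniBand": 4000,   # ~4 GB/s per link
}

aggregate_mb_s = 142           # measured across the 6 Hyper-V hosts (DPACK)
per_host = aggregate_mb_s / 6  # ~24 MB/s per host

for name, cap in link_mb_s.items():
    print(f"{name}: {cap} MB/s per link, ~{cap / per_host:.0f}x the ~{per_host:.0f} MB/s per-host load")
[/code]

Even a single 1GbE port per host leaves plenty of headroom at that load; the link to watch is the aggregate connection from the SAN itself.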

EDIT: A switch like the Netgear GS752TXS with 4 SFP+ 10G uplinks would do the trick. Connect your SAN to the uplink ports and your hosts to the 1GbE ports. Another switch with 4 10GbE SFP+ uplink ports would also fit the bill... that one comes to mind since I have that exact model in the lab right now.


I have no idea what is going on here (I lost track after the 5th post), but I wanted to say: cool. And 30k USD is a good amount. Umm, you may want to ignore my question, but what is the communication-to-computation cost ratio for your setup? Is there any interpolation or fragmentation in play, or have you measured it?


Obviously, RDMA protocols will beat TCP-based protocols on any workload you throw at them (while also saving more CPU cycles).

I have quite a bit of experience with InfiniBand networks used for storage data delivery. However, most of my experience is with Solaris ZFS (Oracle or illumos) storage hosts, VMware's vSphere, and Linux using IB's SCSI RDMA Protocol (SRP).

If you're going to go with Windows Server 2012 and beyond, you want to use at least SMB 3.0 Direct (the RDMA version of SMB), which is 100% supported and encouraged by both Mellanox and Microsoft for InfiniBand over the other RDMA protocols in Windows. The RDMA setup will shine over your traditional TCP-based 10GbE networks, especially in latency. This will remain true unless you use RDMA over Ethernet, which "can" match IB's RDMA (but its pricing still remains higher).

QDR InfiniBand is also priced a lot cheaper than 10GbE, if you don't mind a bit more management setup with InfiniBand.

I don't have a final opinion on your hardware setups but I will gladly post my system specs and performance figures.

