
Storage options for higher end SQL server


6 replies to this topic

#1 Sparky

Sparky

    Member

  • Member
  • 65 posts

Posted 15 December 2013 - 09:36 AM

Good afternoon everyone!

 

We run a number of mission-critical databases on a DL460 G6 blade with P4400 EVA storage. This setup is mirrored across two sites using an application called CA ARCserve Replication. The whole setup is starting to show its age, and we are looking at replacement options.

 

We have our eyes on some HP DL560 G8s (in a similar mirrored setup to the above) and have specs for two E5-4610 processors and 128 GB of memory per server. However, we are a little unsure about storage options. The server itself comes with 5 bays and an option for a 25-drive-bay external enclosure. There is a raft of drive options available, from SSDs and FATA to 7.2/10/15K SAS and so on.

For the on-board storage we need to cater for the following:

  • OS (Windows 2008 R2) + SQL server (SQL Server 2005 SP4): maximum of 100 GB
  • TempDB: around 50 GB
  • Logs: 50 GB
  • Pagefile: 128-192 GB
  • ARCserve spool drive: 0.5–1 TB

 

We are currently looking at the following:

  • 2*200 GB 6G SSD in RAID 1 for OS plus either TempDB or logs
  • 2*200 GB 6G SSD in RAID 1 for pagefile and, if there is room, either TempDB or logs
  • 3 × 600 GB 10K SAS in RAID 5 (using the external bay) for the spool drive and either TempDB or logs (if they don't fit on the SSDs)
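For reference, the usable capacity of those arrays works out as below (a quick sketch; note that the pagefile pair gets tight if a 192 GB pagefile turns out to be necessary):

```python
# Usable capacity (GB) of the proposed arrays. Raw figures only; real
# formatted capacity will come out slightly lower.

def raid1_usable(drive_gb):
    # RAID 1 mirrors two drives, so usable space is one drive's worth
    return drive_gb

def raid5_usable(drive_gb, drives):
    # RAID 5 loses one drive's worth of capacity to parity
    return drive_gb * (drives - 1)

print(raid1_usable(200))     # OS + TempDB/logs SSD pair  -> 200
print(raid1_usable(200))     # pagefile SSD pair          -> 200
print(raid5_usable(600, 3))  # 3 x 600 GB SAS spool array -> 1200
```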

 

Some questions regarding this setup:

  • With 128 GB, these servers shouldn't need to do much paging. However, I believe we need at least a 128 GB pagefile for crash dumps, and I know that some applications (such as Exchange) still page, even on servers with lots of memory. Do we still need to follow the 1.5× pagefile-to-physical-memory ratio (i.e., is 128 GB enough, or will we need 192 GB)? Does the pagefile need to go on SSD? Does having the pagefile on RAID 6 cause issues?
  • If for whatever reason we are only able to fit one of TempDB or logs onto the SSDs, which would benefit more from the extra performance, and which would be just as happy on 10K RAID5 SAS?
  • Does anyone know enough about ARCserve Replication to know what sort of performance requirements we would need from spool drives? Would they benefit from SSD, or would 10K RAID 6 SAS be enough?
  • Is there anything else we have missed or could do differently?
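On the pagefile question, the two sizing rules mentioned give different answers, and the arithmetic is easy to check (a sketch; the exact dump overhead is a small fixed margin on top of RAM, and the 1 GB slack used here is a loose assumption):

```python
RAM_GB = 128

# Legacy rule of thumb: pagefile = 1.5 x physical RAM
legacy_gb = 1.5 * RAM_GB

# A complete memory dump needs roughly RAM plus a small header margin;
# 1 GB of slack is assumed here, the real overhead is smaller.
crash_dump_min_gb = RAM_GB + 1

print(legacy_gb)          # -> 192.0
print(crash_dump_min_gb)  # -> 129
```

So if crash dumps are the only driver, something just over 128 GB suffices; the 192 GB figure only matters if you follow the legacy 1.5× rule.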

 

Thanks in advance for any help with this!


#2 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,396 posts

Posted 15 December 2013 - 05:11 PM

A couple of questions in response to your questions:

 

What are the constraints of your datacenter environment? How much rack space do you have, and what does your budget look like? It looks as though you want 7 SFF drives in a server supporting 5 SFF drives, which makes me think you are getting that 25-bay storage expansion JBOD. If you don't plan on using the additional bays, that seems like a big waste, not to mention the added cost.

 

It seems like you could easily trade the cost of that large expansion unit for larger SSDs and larger HDDs in RAID1 instead of RAID5.


#3 lecaf

lecaf

    Member

  • Member
  • 13 posts

Posted 16 December 2013 - 05:16 AM

Hi,

I don't know exactly what your budget and needs are, but this would be my dream SQL box:

 

- Boot/OS/binaries/pagefile/crash dump: 2 mechanical HDDs in a mirror

(These can be slow; you only boot the OS once. Pagefile swapping might be slow, but with a proper amount of memory there will barely be any.)

- RAID (0+)1 of SSDs for SQL logs

- RAID 6 of 4 × 15K SAS for SQL databases

 

Now, if the database data is small enough:

- RAID 1 of SSDs for data

- RAID 6 of 4 × 10K SAS for BLOBs/FILESTREAM storage (optional)

 

Hope that gives you ideas

m a r c 


#4 Sparky

Sparky

    Member

  • Member
  • 65 posts

Posted 16 December 2013 - 07:28 AM

Thanks for the replies!

 

To clarify a few points:

  • Our proposed setup entails purchasing the extra drive bay. If we do this, in the short term we won't be using it for any other drives. Longer term, if we see good gains in the performance of this server, we could look at moving some other databases to it.
  • Budget-wise, we have a healthy amount to play with, although obviously we don't want to throw money at the problem; if we can get the same results with cheaper hardware, that would be preferable.
  • Datacentre-wise, we have ample room in our cabinets. Although again, we obviously wouldn't want to be putting things in for the sake of it.

#5 270673

270673

    Member

  • Member
  • 118 posts

Posted 17 December 2013 - 10:21 AM

A couple of observations:

 

 

A- You ought to be talking to a server specialist at HP / a large HP partner about this.

 

 

B- In general,

  • SSDs get somewhat faster in larger sizes, and
  • SSDs have orders of magnitude more IOPS than mechanical HDDs,

... so for these reasons, today it often makes less sense to separate out TempDB, SQL logs, pagefile etc on multiple volumes. Just put it all on one large, sufficiently fast volume, backed by HDD or SSD as needed.

 

Or if you must separate, do so by cost of storage media, i.e. boot OS, apps, pagefile go on mechanical HDD, database files go on SSD.

 

 

C- Lots of people use OS -> RAID controller -> SAS/SATA SSD. While this is a completely valid choice, it involves quite a lot of duplicated effort: the RAID controller emulates a linear block device towards the OS even though it isn't one, and the SSD emulates a linear block device towards the RAID controller even though it isn't one either.

 

Instead, investigate using just one high-uptime PCI-Express based SSD for the same workload.

 

 

D- You're getting the expensive HP server which supports 4 CPU sockets in 2U. Would 2 sockets in 2U be enough?

 

 

As a rough sketch, I would personally think along the lines of:

  • A 2U, 2-socket Intel Ivy Bridge or upcoming Haswell class system
  • For the OS, pagefile and those ARCserve replication files, either (a) one 4-disk enterprise SAS mechanical HDD RAID 10 array of sufficient size and speed, with a battery-backed write cache, or (b) one or more read-focused, large, JBOD'ed SAS/SATA SSDs. (The choice depends on what the vendor is selling, pricing, and uptime needs.)
  • For the database files, one highly reliable PCI-Express SSD (a.k.a. IO Accelerator) such as LSI WarpDrive, Micron Px20, Fusion-IO, etc.

Edited by 270673, 17 December 2013 - 10:28 AM.

Best Regards, Jesper Mortensen

#6 Darking

Darking

    Member

  • Member
  • 236 posts

Posted 19 December 2013 - 05:25 PM

Hi Sparky.

 

First of all: if you need the 4-processor architecture, forget everything I'm writing below.

 

But if you're inclined toward HP, get the DL380 G8 instead, in either the e or p model, depending on your needs.

It allows you to have up to 25 disks in-chassis, and although they are 2.5" disks and you can't get 4 TB SATA or anything like that, you can get nice 1/1.2 TB 10K spindles for your data and 2.5" SSDs for whatever needs to be loaded fast. With your current needs the server leaves you room to grow, without spending a lot of rack space on a 560 + JBOD.


What We Do in Life Echoes in Eternity.

#7 jeff711981

jeff711981

    Member

  • Member
  • 2 posts

Posted 07 February 2014 - 06:59 PM

You do want your SQL logs on a different set of disks than your data, though. Log writes are sequential, whereas actual database access is mostly random. When you mix small random I/O with large sequential I/O, you typically see latency issues on the small random I/O because of the large sequential writes going to the SQL logs.
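To put rough numbers on that point, here is a back-of-envelope model of a single 10K spindle sharing random data I/O with a sequential log stream (the 140 IOPS figure and the seek-both-ways cost are my own illustrative assumptions, not measured values):

```python
# Rough model: how a sequential log stream eats into random IOPS
# on a single 10K SAS spindle. All numbers are illustrative.

RANDOM_IOPS = 140               # assumed 4K random IOPS for a 10K spindle
SEEK_MS = 1000 / RANDOM_IOPS    # average service time per random I/O

def effective_random_iops(log_flushes_per_sec):
    # Each log flush drags the head to the log zone and back,
    # costing roughly two extra seeks' worth of time per flush.
    busy_ms = log_flushes_per_sec * 2 * SEEK_MS
    remaining_ms = max(0.0, 1000 - busy_ms)
    return remaining_ms / SEEK_MS

print(round(effective_random_iops(0)))   # dedicated spindle -> 140
print(round(effective_random_iops(30)))  # 30 log flushes/s  -> 80
```

Even a modest log write rate costs the data workload a large fraction of its random IOPS, which is why separating logs onto their own spindles (or SSDs) pays off.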




