Jackal29A

Raptor or fast 7.2k for SQL?

Recommended Posts

I have a SQL Server built on a Supermicro 5015M-MT+ (Core 2 QX6700, 4GB RAM, 4 x Raptor 150GB in two RAID1 arrays, one for the system and one for data). The O.S. is W2k3 SP2 x64 with SQL Server 2k5 SP2 x64.

One of the DBs running on it is the accounting app's database, a "rather" large 2.5GB DB that came out of an extremely poor migration from the previous ctree file-based version (the vendor's fault, not ours). The thing is that it is not performing as we expected; in fact it is slower than the previous file-based accounting app, though it has the great advantage of better integration with our custom-built ERP, which runs on the same server.

SQL and the app (which starts one 30-50MB RAM instance for each client connection, crazy) are taking 3.5GB+ of the RAM, so I'm adding another 4GB, but I was wondering if I'd get some more speed by swapping the data Raptors for a couple of latest-gen 7.2k drives like the ES.2s or A7K1000s. Any suggestions?


2.5GB doesn't seem that large; I would have thought it could all fit in memory. Of course, I don't work with SQL Server, so I don't know how you'd make it do that.

Have you thought about a fast SSD? If it's the seek time on the disks killing you, an SSD might help. AnandTech had good things to say about the Mtron SSDs. http://www.anandtech.com/showdoc.aspx?i=3064&p=2

It looks like you can buy them at http://www.dvnation.com/nand-flash-ssd.html for ~$1800 for 32GB.

http://www.mtron.net/English/?isclose=Y is the manufacturer's site, which is where I got the link to DVNation.

I'd first see if you can find out what the bottleneck is. Are the disks being accessed continuously?
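For what it's worth, SQL Server caches data pages automatically up to its 'max server memory' setting, so a 2.5GB database can end up entirely in the buffer pool once enough RAM is available. A minimal sketch of raising the cap after the RAM upgrade (the 6144MB figure is purely illustrative, not a recommendation for this box):

```sql
-- Allow SQL Server to use up to ~6GB for its buffer pool,
-- leaving the rest for the OS and the per-client app instances.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 6144;  -- value in MB; adjust to taste
RECONFIGURE;
```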


I would focus your efforts on tuning the existing environment. What kind of raw I/O are you seeing? What transactions are running slow, what is the bottleneck, and why? Why is so much RAM needed for each user connection and what is it used for? What kind of DB caching are you using? How do you know disk I/O is the bottleneck, such that you would consider upgrading the disk? Etc.

2.5GB is a pretty tiny database in this day and age for the kind of hardware you've already invested to support it. I don't think throwing more hardware at the problem is the solution.
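To put numbers behind those questions, SQL Server 2005's sys.dm_io_virtual_file_stats DMV reports cumulative I/O counts and stall times per database file. A rough sketch (the column names are from the 2005 DMV; the per-operation latency math and the 20ms rule of thumb are my own):

```sql
-- Average read/write latency per database file since the last restart.
-- Sustained averages much above ~20ms suggest the disks are the bottleneck.
SELECT DB_NAME(database_id) AS db,
       file_id,
       num_of_reads,
       io_stall_read_ms / NULLIF(num_of_reads, 0)   AS avg_read_ms,
       num_of_writes,
       io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
ORDER BY io_stall_read_ms + io_stall_write_ms DESC;
```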


From a minimally informed glance, your server hardware (storage subsystem aside) seems reasonably adequate, leading me to suspect one of two causes:

1. I/O contention caused by putting the database file(s), TempDB, and T-Logs on the same spindles--especially a RAID1. SQL Server best practice (especially in 2005) is to separate the data files, TempDB, and T-Logs on separate sets of spindles. Your database is too small to really consider table partitioning, though you could potentially benefit from multiple data files.

2. Poorly designed database/database queries. If there are insufficient table indexes, poorly written queries, improperly specified query optimizer hints, excessive cursor usage, or any of a number of things, those can have a dramatic impact on performance. One possibility is that there are too few (or even too many!) (useful) table indexes, and that's resulting in a lot of table scans when doing joins. If stored procedures are used, they could have excessive branching causing continuous recompiles. Lack of WHERE clauses could be causing SQL to return excessive information to the client (this could also be the cause of your high per-user memory footprint). Without more detailed information, it's hard to make a reasonable assessment and offer anything but wild guesses. It should be noted that some of these causes could manifest themselves as a hardware deficiency, as SQL Server must work harder to return the results. Ultimately, my guess is that the root cause is poor database design, not the hardware. I've personally had occasions where adding a simple index (to a complex query returning many tens of thousands of rows) would reduce query execution time from an hour to a second.

Both of these are, of course, assuming that the database portion is the bottleneck. If that isn't the case, disregard. :)
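As an illustration of point 1, relocating TempDB off the data spindles in SQL Server 2005 is a one-time change (the E:\ path and directory are hypothetical; the move takes effect after a service restart):

```sql
-- Relocate TempDB's data and log files to a different spindle set.
-- 'tempdev' and 'templog' are TempDB's default logical file names.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLTempDB\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLTempDB\templog.ldf');
```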


I know I'm parading my ignorance, but I'm just wondering why you're not using 15K SCSI for a server situation? The cost differential is $2K max and more likely less than $1K.

M.S.


I already tried a consumer-grade flash SSD of the Samsung 1.8"/2.5" type for database tests. Read performance was very good, as expected, but I got only 60 IOPS at 8k writes out of the thing.

Forget about running Oracle or the like on this type of SSD for now if the workload is write-heavy. Don't even think about placing your redo log files or undo tablespace on it.

For this purpose, only professional SSDs make sense.


Thanks to all for your answers. I think PrimeMover's second point is 99.9% of the problem, but here we're in the vendor's hands. They have promised to improve performance significantly in the next revision, due next month, by making some changes to the DB, compiling the server service in 64-bit, improving the code, and tweaking the Java-based client. Because of its tight integration with our ERP there is no chance of going back to the previous version.

I'm starting to monitor disk I/O, memory, and CPU usage to see if the hardware is also compromising performance.

I can't add any more drives; this chassis only has four bays.

SCSI or SAS is out of the question in this 1U chassis because I'd have to change the backplane, and the same goes for adding external storage (i.e. a SAS controller + external drive enclosure), which I think would be overkill.

SSDs, if they come in a SATA flavor, sound interesting, but don't they have a limited number of writes? Are they as reliable as conventional HDs under heavy writing? Size won't be a problem, because 16GB, or preferably 32GB, will do.


Keep in mind that if the problem is truly exceptionally poor database design, you could potentially throw $100k of hardware at the problem, see only a few percent increase in performance, and still appear to be hardware constrained. That is the most unfortunate part of this whole issue: you may just be completely stuck.


#3 - smaller drives, more spindles.

Best practice would tell you to run, at minimum, 36GB or even 18GB drives with the fastest rotational speed you can find. From there, the data should go on a RAID 10 array and the logs on RAID 1, while TempDB's placement will depend on its read/write characteristics...

I would get an internal SAS or SCSI card and external chassis, and get a real storage subsystem. ;)

But then again, many apps are built with little to no database optimization. With a SQL back end, this is where a good DBA would come into play, to not only tune your SQL settings but also analyze the types of queries that drag down the system and find ways to optimize them.
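On the analysis side, SQL Server 2005's missing-index DMVs give a DBA a starting point for exactly that kind of query tuning. A minimal sketch (the est_benefit formula is a common heuristic for ranking suggestions, not an official metric):

```sql
-- Indexes the optimizer wished it had, roughly ranked by estimated benefit.
SELECT TOP 10
       d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks * s.avg_total_user_cost
           * (s.avg_user_impact / 100.0) AS est_benefit
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
     ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s
     ON s.group_handle = g.index_group_handle
ORDER BY est_benefit DESC;
```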


Yes, that's true. I've already had that experience with Oracle here. Sometimes it's just a small index, or referencing an additional table with a foreign-key constraint in place, and you get the difference between a Cessna and an F-16... :)

And sometimes only heaven knows why this beast prefers full table scans over indexes.

But sometimes I simply don't understand the system. E.g., a heavy delete operation consumes a lot of time even when the records are quite small. There, with huge log file or undo tablespace writes to deal with, it seems that only pure disk I/O power, RAM, and CPU cache help.


Well... after monitoring the server for a few days I found that it's basically doing nothing: CPU usage rarely goes beyond 3-6%, disk activity (both read and write) is very low, and only RAM usage is high, but that is to be expected from SQL Server.

I discussed these results with the software vendor, and he told me that they have received similar complaints from other customers and that, after investigation, they found most of the problem is due to latency in client/server communication. They promised to send us a new revision with far more optimized code in both client and server, plus some DB tweaking, which supposedly will fix most of our problems.

Yesterday I found this:

Hyperdrive4

Two of these in RAID-0 look very promising, and they will look even better if next year's model comes with a SATA-300 interface. They also offer the option of adding an SSD or a 2.5" HD for backup in case of power failure or rebooting. Provided you don't need much space, wouldn't this be a good option for a SQL or web server?

Cheers, and thanks to all for your help.

