plockery

RAID 5 SCSI vs SATA for a terminal server

I have a single server at work running Windows Server 2003 in Terminal Services mode for a small educational institution. There are about 25 terminals running off the server, with about 18 in concurrent use at peak times and as few as 5 off-peak.

At the moment the terminal server has dual AMD Opteron processors, 2 GB of RAM, an Adaptec 2120S RAID controller, and 4 x 36 GB Seagate ST336753LC 15k SCSI drives in a RAID 5 configuration.

The problem is we are running out of space.

Since I have a 5-slot backplane, the first option was to add another drive. I could no longer get one of these exact drives, but I could get the newer version of the same drive: same capacity, speed, etc. Seagate said it should work in combination with the other drives, and the Adaptec card is supposed to accept mixed-capacity drives as well, but when I tried to add this drive to the array it did not work: it slowed the whole array down and caused stalls. I had to take it back out and go back to the original four drives.

But now, given our space problem, the fact that even adding one more 36 GB drive might not provide enough space long term anyway, and the low cost of SATA drives, I am wondering whether I should switch the whole system to 4 x 250 GB 7200 rpm SATA drives with a new controller card and backplane.

This would obviously give me more space than we would ever need (I could really just go with 4 x 120 GB drives).

The question I have is whether there would be a very noticeable performance hit. (It would need to be fairly noticeable, given the much lower cost and greater capacity of SATA drives.)

Or is there likely to be a reliability issue?

Are there any other considerations I need to think about?

I have to do something about the space problem relatively soon.

Any advice would be most appreciated.

Depends a bit on what programs your users run. RAID 1 + 0 with SATA drives could be more than good enough and vastly cheaper than SAS or SCSI. But first tell us what your users actually do.

If I understand you correctly, you do not have a domain controller for this terminal server? It's a standalone server? What about backup?

Without knowing this, I would still advise you to use 2x RAID 1 arrays for system/swap to optimize disk performance for the subsystem. Then you could use a SATA RAID for storage if money is an issue.

It depends, of course, on what type of data this storage is intended for. SATA will be good enough as long as users only use it for small documents and the like.

Backup is still advisable ;)

...25 terminals running off the server, with about 18 in concurrent use at peak times and as few as 5 off-peak.

So you have about 100 GB in a 4-disk 15k rpm RAID 5.

I would buy a pair of 300 GB VelociRaptors and install them in RAID 1.

The roughly 10% loss of IOPS (due to the lower rpm) will be partially compensated by the RAID 1 usage.

Whatever your choice, be sure to take care of NT volume alignment (using the DiskPart utility) with respect to the stripe size you build.
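For anyone who hasn't done this before, here is roughly what that looks like with the DiskPart shipped in Windows Server 2003 SP1 or later (a sketch only: the disk number and drive letter are placeholders, align= is in KB and should be a multiple of your own array's stripe size, and /A:4096 just matches the 4K NTFS default discussed further down the thread):

    C:\> diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> create partition primary align=64
    DISKPART> assign letter=E
    DISKPART> exit
    C:\> format E: /FS:NTFS /A:4096

Note that the align= parameter only exists in the Server 2003 SP1 version of DiskPart; on older systems you would need the diskpar tool from the Resource Kit instead.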

Thanks for the replies. Here is the information asked for.

What programs do our users run?

- fairly normal sort of stuff: Microsoft Office Professional 2003, MYOB Premier 11 for finance and payroll, Microsoft SQL Server for the database, Microsoft Exchange Server 2003 for mail, specialist library software and database indexes, Microsoft Visual Studio 8, etc.

Is this a standalone server? Yes, it is.

What about backup?

We back up the whole array every night to a USB external hard drive with Acronis True Image Server. We have two external hard drives; one remains off site, and the two are swapped once a week.

So do you have 100 GB...? Yes, about 106 GB overall at 15k rpm in RAID 5.

Now for my comments and further questions

1. I must confess my ignorance somewhat. I understand RAID 5 from use, and it is what was recommended and installed when we bought the server a few years ago. RAID 1, I gather, is mirrored disk pairs. Given the size of SATA drives, I can see why it is recommended here, since I could get plenty of space with just 2 x 300 GB drives, as HachavBanav suggests.

However, for some reason I always thought that RAID 5 was much faster than RAID 1 because of its ability to write to several drives at once, whereas in RAID 1 everything is read from and written to one drive. Am I wrong here, or does the performance difference not matter?

Is RAID 1 faster than RAID 5, or vice versa?

2. I'm not sure I fully understand how RAID 1+0 works. Can you explain a bit more? What advantage does it have over RAID 1 alone?

3. I don't know what NT volume alignment is - I've never had to do it. Does it only arise with RAID 1?

4. Can I make RAID 1 hot-swappable?

5. Whether I use RAID 1 or RAID 5, what sort of performance loss would there be in using SATA instead of SCSI? Would it be noticeable to the ordinary user? I presume it also makes a difference whether the SATA drives are 7200 rpm or 10,000 rpm. I could, for instance, buy two 300 GB SCSI drives - more expensive, but I already have an Adaptec SCSI controller and backplane.

Peter

Ye gods... The only good advice is: run a second server. You do NOT run Terminal Server (or Citrix) on a mail server or database server. Performance will NEVER be good. For TS you have to give applications priority; for a domain controller, mail, or a database you give background services priority. The two are irreconcilable! Security is also a problem with TS on a domain controller, and Office and Exchange should NOT run on the same server either.

Since you didn't complain about performance, I assume the server is fast enough for the tasks it needs to do. Get a cheap server with SATA drives, 2 to 4 GB of RAM (whatever is affordable), and a couple of SATA disks in a mirror, and use that for the domain controller, SQL, and Exchange. Use your "old" server for TS and the apps your users need. Buying another RAID controller and disks will cost you a considerable fraction of the price of a whole new low-end server. You could even run the TS - although it's against security best practices - as a second domain controller in case the other one goes down.

You really, REALLY just need another server. A Server 2k3 license does not cost that much, and you already have the necessary CALs and TS CALs.

Hot swap can be done with any level of RAID (except RAID 0, of course), so you don't have to worry there.

Performance is often better on RAID 1 than on RAID 5 in smaller arrays, but RAID 5 may be better in larger arrays (more spindles) and when you do more reads than writes. For a small organization it usually doesn't matter, and you go RAID 5 because you get more capacity for the same money. I wouldn't worry too much about it. However, with the sheer size of SATA disks these days, it's attractive to just use 2 big mirrored SATA drives in such a small server, because even if the controller goes tits up you can read the data from either disk. You cannot do that with a RAID 5 without a compatible controller or specialized software.

So, the best advice IMNSHO is to get a second server with a couple of large SATA drives and probably 4 GB of RAM. You may even find good deals or bundles from HP (or Dell, but I personally detest Dell). Academic licenses for M$ software also cost next to nothing.

"Performance" is to be looked at for a specific usage.

Usually you have to balance between good multi-user performance and good single-user performance:

  • Multi-user: many files are read or written at the same time; this is where I/Os per second (IOPS) matter
  • Single-user: a single file is read or written at the fastest possible speed; this is where volume throughput matters
  • The average size of your I/Os is also an important parameter

In single-user usage, you depend on disk throughput:

  • a 5-year-old disk tops out at an average of 50 MB/s, where the current fastest reach 110 MB/s
  • aggregating disks means aggregating their throughputs
  • a large stripe size is almost a prerequisite

In multi-user usage, you depend mainly on disk access time:

  • the faster the rpm, the lower the access time (rough numbers below)
  • an I/O queue length of 32, 64, or 128 I/Os is almost a prerequisite
  • NCQ/TCQ may lower this access time by re-ordering each disk's I/O queue to reduce the number of rotations needed to fulfill the I/Os
  • SCSI/SAS leads this market because the fastest 15k rpm disks only come as SCSI or SAS
  • Obviously, SSDs (with roughly 50x lower access times) may take over this market in a few years
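To put rough numbers on that (ballpark figures only, assuming a typical average seek of about 3.6 ms for a 15k SCSI disk and about 8.9 ms for a 7200 rpm SATA disk):

    15k rpm : rotational latency = 60000/15000/2 = 2.0 ms -> 1000/(3.6+2.0) ≈ 180 IOPS per disk
    7200 rpm: rotational latency = 60000/7200/2 ≈ 4.2 ms -> 1000/(8.9+4.2) ≈ 76 IOPS per disk

So for random, multi-user I/O, one 15k spindle is worth roughly two 7200 rpm spindles.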

Stripe size, disk block size, client block size, and alignment:

  • in a 4-disk RAID 5 array, a disk block size of 4K (8 sectors) gives a 12K stripe of data (3 data blocks + 1 parity block per stripe)
  • in a 5-disk RAID 5 array, a disk block size of 4K gives a 16K stripe
  • the OS does not "know" the stripe size and may issue I/Os of very different sizes to your array (4K is the NTFS format default)
  • also, the OS may reserve the beginning of your array for its own use (all pre-Vista MS OSes use the first 16 KB or 32 KB for the partition/MBR info; Vista uses the first 1 MB), and if your stripe size is greater than this, the OS will issue misaligned I/Os that each touch 2 stripes: a 50% reduction of your multi-user IOPS
  • aligning the volume (e.g., with DiskPart on Windows XP/2000/2003) lets you reserve a multiple of the stripe size at the start of the array

RAID 0, 1, and 5 performance:

  • Aggregating multiple disks (RAID 0, 5, or 10) delivers better single-user throughput but does not let you balance a multi-user load across those disks
  • RAID 5 has very bad write performance when the OS issues I/Os that are not a multiple of the stripe size, because it has to read the old data and parity of the stripe, compute the new parity, and write the stripe back (worked example below)
  • a large battery-backed write-back cache on your controller may lower the impact of this RAID 5 write penalty by coalescing I/Os in the cache, but in multi-user usage the cache cannot guess whether you are issuing sequential or random I/O... and will still issue many read-xor-write operations
  • issuing "full stripe size" writes is another way to avoid the problem... but, like disk aggregation, it lowers your IOPS

My rules of thumb:

  • Build arrays as 2-disk RAID 1, 4-disk RAID 10, 5-disk RAID 5, or 6-disk RAID 6
  • 1 hot-spare disk is good for staying cool on vacation while the array rebuilds itself for your users
  • a few cold spares on the shelf are good for deciding yourself when the array rebuilds, so your users don't suffer the rebuild performance degradation at peak time
  • Volume alignment is a prerequisite for stripes larger than 32K (and good practice in general)
  • Forget RAID 5 for multi-user usage with many writes
  • On RAID 5 arrays, get your OS/DBMS clients to issue "stripe size" writes (or "disk block size" writes, if multi-user writes are rare enough to be absorbed by a battery-backed write-back cache on the controller)
  • Use RAID 1 for real multi-user usage (and consider SSDs if their price is not a problem)
  • Consider RAID 10 only if all I/Os can benefit from a large stripe size (or for single-user usage)
  • The OS (or DBMS) block size should match the disk block size (multi-user usage), the stripe size (RAID 5), or a multiple of the stripe size (very large I/Os in single-user usage, e.g. HD video editing)
  • Back up locally, and be sure to have a not-too-old backup at a different location

In your situation I keep my recommendation: use 2x 300 GB VelociRaptors in RAID 1... and I don't see why you should manage another server if you have no bottleneck other than disk space.

In your situation I keep my recommendation: use 2x 300 GB VelociRaptors in RAID 1... and I don't see why you should manage another server if you have no bottleneck other than disk space.

It's not just about disk space but also about manageability, performance, and security. And cost: 2 VelociRaptors and a decent RAID controller will cost almost as much as (or more than) a cheap server from HP or Dell. His server is also configured completely wrong: too many roles, and too varied.

2 VelociRaptors and a decent RAID controller will cost almost as much as (or more than) a cheap server from HP or Dell.

Sata "2 ports" controller (60€) + 2x 300GB Velociraptor (255€) = 570€...and he may reasonably get about 150€ from eBay selling the 2120S ctrl and those 4 disks.

His server is also configured completely wrong: too many roles, and too varied.

Sure, he may get more security (less manageability?)... but does he need it?

One point is strange: in my past experience the bottleneck with Citrix/TS was the CPU, not the disk...

Sata "2 ports" controller (60€) + 2x 300GB Velociraptor (255€) = 570€...and he may reasonably get about 150€ from eBay selling the 2120S ctrl and those 4 disks.

I wrote a decent RAID controller.

Sure, he may get more security (less manageability?)... but does he need it?

It's vastly safer. You don't need seatbelts until you crash into something either.

One point is strange: in my past experience the bottleneck with Citrix/TS was the CPU, not the disk...

I'm not talking about disks, CPUs, or any other hardware, but about the OS itself. In Windows you can prioritize CPU resources for either programs (Office, ...) or background services (databases, mail/Exchange, ...). A server will never run optimally if you combine those roles: a TS or Citrix server needs CPU for programs, a mail server for background services. It has nothing to do with hardware. It's the one big reason why SBS 2003 cannot be used as a terminal server (SBS 2000 could be a TS).
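For reference, that switch (the Processor scheduling option under System Properties > Advanced > Performance) just sets the Win32PrioritySeparation registry value; you can inspect it from a command prompt (shown purely as an illustration; typically 0x26 means Programs and 0x18 means Background services):

    reg query "HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl" /v Win32PrioritySeparation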

(I don't want to hijack this post...PM me if you think I am)

a decent RAID controller.

I agree, but mirroring 2 disks is fine with almost any controller.

By the way: does the motherboard have any "RAID" capabilities?

I'm not talking about disks, CPUs, or any other hardware, but about the OS itself. In Windows you can prioritize CPU resources for either programs (Office, ...) or background services (databases, mail/Exchange, ...). A server will never run optimally if you combine those roles.

He does not complain about CPU bottlenecks that might require different CPU priorities (which is all that the "CPU priority to Applications or Background services" switch does)...

However, if he decides that more security/manageability is needed, he could use the one physical server to host a few VMs with dedicated roles.

Thanks for the input HMTK and HachavBanav.

This convinces me that I am way out of my depth here.

I really am just an academic who also acts as a part-time IT administrator in an organisation that can't afford a specialist, with no more under my belt than what I have learnt fiddling with PCs and this server over a fair period of time.

But this is where I am at.

1. I know that not many people run an Exchange server + Office on a single terminal server, and I know it is not considered best practice. Until recently I thought that SBS could run in TS mode, and so I could see no reason, if SBS put Exchange and Office on the same server, why we could not happily do the same. But it now seems SBS can no longer be run in TS mode, for the reasons HMTK points out.

2. However, whether things are operating optimally or not, we have never had any noticeable performance problems with our server. For those who log on to the TS (we do have a couple of people who work from proper workstations/laptops), everything works well and quite fast, whether that is applications, mail, or database work. So I see no need to change this unless I have to. Greater security is not a big concern either.

3. We do, however, have a space problem. Putting together another SCSI RAID 5 set of larger disks is expensive, so I wondered about the SATA alternative. 7200 rpm SATA drives are really cheap, but I suspect they would give a noticeable performance hit; some people I have asked say they would, others say that in practice we would not notice a difference. This is one of the reasons I started this thread. I think our motherboard (Tyan Thunder S2880) does have a built-in SATA RAID 1 (no RAID 5) controller, but I will have to check, since I think it may be optional. The recommended VelociRaptors are about 3x the price of the 7200 rpm SATA drives. So what I am faced with is a performance-vs-price tradeoff, and I have no real expertise in assessing it. I am relying on advice from experts like you guys!

4. I had not thought about RAID 1 before, because I had assumed RAID 5 would be better. But from what I am reading, I gather you both agree that RAID 1 might actually give better performance for our setup than RAID 5, which is a surprise. Is that correct?

Thanks again for your help.

Different RAID levels each have their own advantages. Usually, for a small server like yours, there will be little difference. There should be a good article in the FAQ here.

If you only have to buy a couple of SATA disks and mirror them with the onboard controller - or with Win2k3 software mirroring, which is just fine - then you have a very affordable solution (a rough sketch below). If you need hot swap, you'll have to buy a backplane, disks, and a real RAID controller, at which point you're better off buying a whole new server. And Raptors are usually a waste of money IMNSHO. If you want to spend that kind of money, go SCSI or SAS. If you don't want to spend a whole lot of money, buy 7200 rpm SATA.
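If you go the software-mirroring route, the whole thing can be done from DiskPart (a rough sketch; the disk and volume numbers are placeholders for your system, both disks must be converted to dynamic, and you should check the real layout with list disk / list volume first):

    C:\> diskpart
    DISKPART> select disk 1
    DISKPART> convert dynamic
    DISKPART> select disk 2
    DISKPART> convert dynamic
    DISKPART> select volume 1
    DISKPART> add disk=2

The add disk command attaches the second disk as a mirror of the selected simple volume and starts resynchronizing immediately; you can do the same thing graphically in Disk Management.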
