amdoverclocker2

iSCSI, complete network upgrade


I am one of two network admins at the company I work for. We use TS (terminal servers) to access all of our apps in a data center about 20 miles from our building via a point-to-point link. We currently have 1U dual 2.4GHz Xeon terminal servers that need upgrading, as well as our Exchange and file servers. I need to find a solution that will allow us to be back in business minutes after a failure rather than hours or days like it is now. It is well known that a true SAN would be the best solution, but I don't have that kind of budget. I am looking into buying eleven new diskless 1U dual Opteron servers (some dual core), getting iSCSI HBAs/accelerators, and booting off a central storage server. The servers will be: two TS, three Exchange (one front end and two in a cluster), file server, web server, FileMaker, online backup server for web/file/FileMaker, fax, and management. I am looking at a dual Opteron 250 iSCSI server with twenty-four 150GB Raptors on two ARC-1130s. Think that will be sufficient? :D I am lost on what HBA/accelerator to use on both sides, client and server. I want to use link aggregation on the storage server for sure, and maybe on the more demanding servers like file and TS. I was looking at the Alacritech SES2000 and SES2100, but it doesn't look like they support 64-bit 2003 yet, and I need 64-bit support in whatever I'm going to use. I have also been looking at the QLogic QLA4052C, but I can't find any information on it outside of the QLogic site. Does anyone have experience with any of these or other ones? Advice and experiences would be great! Thanks for reading. :D
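For rough sizing, here's the back-of-the-envelope I ran on the array (the RAID 10 layout and two hot spares are my own assumptions, not a settled design):

```python
# Rough usable capacity for 24 x 150GB Raptors across two ARC-1130s.
# Assumed layout: RAID 10 with two global hot spares; the ~7% allowance
# for filesystem/metadata overhead is a guess.

DRIVES = 24
DRIVE_GB = 150
HOT_SPARES = 2
FS_OVERHEAD = 0.07

data_drives = DRIVES - HOT_SPARES            # 22 drives in the array
raid10_gb = (data_drives // 2) * DRIVE_GB    # mirroring halves capacity
usable_gb = raid10_gb * (1 - FS_OVERHEAD)

print(f"RAID 10: {raid10_gb} GB raw, ~{usable_gb:.0f} GB usable")
# -> RAID 10: 1650 GB raw, ~1534 GB usable
```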

I need to find a solution that will allow us to be back in business minutes after a failure rather than hours or days like it is now.

I can't see how the solution you propose will get your business back in minutes after the iSCSI server fails. From my point of view you are going from having several servers, where any one failure would be bad but wouldn't take down the whole business, to a situation where a single failure of the iSCSI server takes out your entire network.

SAN/iSCSI gear is great as long as it doesn't fail. I wouldn't be comfortable with such a solution unless I bought one that was entirely redundant. You can get iSCSI solutions with redundant instant-failover controllers, PSUs, fans, etc. Not that even this level of redundancy is necessarily good enough, depending on your needs: a colleague recently had his fully redundant SAN completely corrupt his Exchange installation.

iSCSI is still quite new, and if you do want to go down this route I'd be very careful about the software you pick for the iSCSI server. I've not been involved in this sort of setup as yet, though I have considered it. If I couldn't get a controller-based fully redundant solution, my preference would be to go for two iSCSI servers and use the Windows/Linux/whatever mirroring features on your front-end servers to ensure that even if one of your iSCSI servers completely fails your network will continue running. Remember to have redundant network links between the iSCSI servers and your front-end servers, and this should be an entirely different set of switches from your client network (or at the very least a separate VLAN with the appropriate security). With a dedicated storage network you should be able to run with jumbo frames to improve performance.
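To put rough numbers on why jumbo frames help, here's a simplified per-frame overhead sketch (plain Ethernet/IP/TCP header arithmetic; real iSCSI PDU headers and digests add a little more, but the ratio is the point):

```python
# Frames and header bytes needed to move 1 MB at two MTU sizes.
HEADER_BYTES = 14 + 4 + 20 + 20   # Ethernet + FCS + IPv4 + TCP

for mtu in (1500, 9000):
    payload = mtu - 40                    # MTU minus IPv4 + TCP headers
    frames = 1_000_000 / payload          # frames per 1 MB of data
    overhead_kb = frames * HEADER_BYTES / 1000
    print(f"MTU {mtu}: ~{frames:.0f} frames/MB, ~{overhead_kb:.1f} KB headers/MB")

# MTU 1500: ~685 frames/MB; MTU 9000: ~112 frames/MB. Fewer frames also
# means far fewer interrupts per MB, which is where most of the win is.
```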

If you are using Exchange, etc., I'd highly recommend finding a solution that has VSS drivers to allow snapshots and faster backups.



Yep, VSS is a requirement. All of the solutions I've been looking at support VSS. I do see your point about iSCSI server failure. I'm not so much worried about data loss, because we will be taking snapshots every hour using two different backup servers, but that doesn't help with hardware failure and redundancy. I am going to use two separate networks and probably failover NICs, but I don't know about the server insides. Cluster the iSCSI servers? But that would add another $15,000 or so to the cost. I'll keep thinking on that part. Thanks!


This is outside my experience, but it seems to me like the iSCSI stuff will be expensive, complex, and may still not give you the uptime you are looking for. Instead of trying to find one solution to cover all your servers, I would address each one individually and find the best way to maintain uptime for that particular server/application. For example, you can use Network Load Balancing (NLB) with your terminal servers, depending on how much data changes all the time on each one. There is software out there that will give you redundant Exchange servers without the need for a SAN; depending on the package purchased, the failover can be automatic or manual. I'm guessing the web server could also just use NLB like the terminal servers. You may also want to look into virtual servers for some of these. We use virtual servers at my work and they are insanely easy to back up and very easy to get running again in the event of a failure. We use the Microsoft Virtual Server product, but VMware is also quite popular.


We use NLB now and it sucks; we have a devil of a time with profiles. That will all be fixed when we move to a single TS. Virtual Server is not an option: we need multi-CPU virtual machines, and you can't do that in VS05, only in VMware ESX with the SMP add-on, at about $35K. We are also doing this for ease of backup. We use the hell out of Shadow Copy and love it, and with an iSCSI SAN it is easy and fast. Thanks.


So how exactly are you going to set up this iSCSI thing in relation to your TS? This is actually a topic I am very interested in, and I am still confused about how an iSCSI disk array will give you redundant terminal servers.


Shy away from your iSCSI server concept and go with a real hardware-target iSCSI array; performance will be much better and you'll gain functionality like snapshots, mirroring, etc. Infortrend makes a nice 12-bay iSCSI hardware-target array, and EqualLogic also makes some nice systems (albeit a bit more pricey). Paying for Raptors is no use, as the latency of iSCSI will mask any perceived response benefit, so your better bet is to go with larger-capacity drives and set up the array as RAID 10.
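To illustrate why the spindle speed washes out, a rough per-IO latency sketch (the seek and round-trip figures are ballpark assumptions, not measurements):

```python
# Per-IO latency: spindle speed vs the iSCSI network path.
def io_latency_ms(seek_ms, rpm, network_ms):
    half_rotation = 60_000 / rpm / 2     # average rotational latency, ms
    return seek_ms + half_rotation + network_ms

ISCSI_RTT_MS = 0.5   # assumed GigE + software-target round trip

raptor = io_latency_ms(seek_ms=4.6, rpm=10_000, network_ms=ISCSI_RTT_MS)
sata = io_latency_ms(seek_ms=8.5, rpm=7_200, network_ms=ISCSI_RTT_MS)

print(f"10K Raptor over iSCSI:   ~{raptor:.1f} ms/IO")
print(f"7200rpm SATA over iSCSI: ~{sata:.1f} ms/IO")
# About 5 ms apart per raw IO, and controller cache plus more/bigger
# spindles in RAID 10 close that gap for far less money per GB.
```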

As for the HBAs on the servers, Alacritech is a good choice and performance is quite good. Also ensure you budget for a good wire-speed gig-E switch, preferably with layer-3 capabilities.

What you propose is a workable solution if you do it correctly, but correctly doesn't mean cheap.

You may want to consider a blade chassis setup for your servers and buying one spare blade. That way if you have a blade go down you can just zone its iSCSI LUN to the spare blade, reboot the blade, and presto, it's taken on the image of the downed server -- if the problem didn't toast the partition.

If you want to go to the next level, it would involve VMware and virtualizing your servers to allow more flexibility.

SG


Shy away from your iSCSI server concept and go with a real hardware-target iSCSI array; performance will be much better and you'll gain functionality like snapshots, mirroring, etc. Infortrend makes a nice 12-bay iSCSI hardware-target array, and EqualLogic also makes some nice systems (albeit a bit more pricey). Paying for Raptors is no use, as the latency of iSCSI will mask any perceived response benefit, so your better bet is to go with larger-capacity drives and set up the array as RAID 10.

I am planning on using iSCSI server software such as StringBean, so it supports everything you said it doesn't. Some of the things you are talking about are just Storage Server 2003 on a box, no different from what I am thinking of doing except for the OS.

As for the HBAs on the servers, Alacritech is a good choice and performance is quite good. Also ensure you budget for a good wire-speed gig-E switch, preferably with layer-3 capabilities.

Yes, but they do not offer 64-bit support as of now; that is my only drawback. And yes, an L3 switch, probably the Dell PowerConnect 6024. We have had good luck with them in the past.

What you propose is a workable solution if you do it correctly, but correctly doesn't mean cheap.

You may want to consider a blade chassis setup for your servers and buying one spare blade. That way if you have a blade go down you can just zone its iSCSI LUN to the spare blade, reboot the blade, and presto, it's taken on the image of the downed server -- if the problem didn't toast the partition.

My plan does involve having spare servers ready to be booted in a second, but blades are a bit tougher: I need a PCI slot and not many have them. I have only seen a few, IWill and Tyan I believe, but I see no advantage over 1Us. Space is not an issue, BTW; we have an entire rack at the data center.

If you want to go to the next level, it would involve VMware and virtualizing your servers to allow more flexibility.

SG

It has just been said why this will not work. To be able to use SMP, the software alone (ESX Server plus the SMP add-on) would be something on the lines of $35,000. Not worth it. My current budget covers everything (new servers, software, iSCSI server, network, etc.) for $50,000, and that includes a KVM-over-IP setup. I made a Visio drawing; maybe I'll put that up to show a better picture of what I'm trying to do. Thanks!



I am planning on using iSCSI server software such as StringBean, so it supports everything you said it doesn't.

This will be your weak link. None of the iSCSI software targets fully support the iSCSI error recovery levels, which means they are not suitable for your application. The hardware targets do this properly, and it is a huge part of the iSCSI spec for the software guys to "ignore". Think of it this way: your Exchange server sends an I/O that picks up an error somewhere on the LAN, but it's *never* detected nor corrected by your software iSCSI target, so the Exchange box thinks it's all OK and goes on with life.....until the issue comes back in a very big, ugly way.

As for StringBean supporting everything, that isn't the case: it is an iSCSI target-mode driver and that's it. All the other functionality comes from the OS and the fact that the OS "owns" the disk prior to it being exported by the iSCSI target driver. That is much different from it being done in specialized hardware designed for the task.
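If it helps, the digest mechanism the spec provides looks roughly like this (a toy sketch; real iSCSI uses CRC32C per PDU, and zlib's plain CRC32 stands in here only to show the idea):

```python
import zlib

# A PDU carries a digest of its payload; the receiver recomputes it.
# TCP's weak 16-bit checksum misses corruption that a 32-bit CRC catches.

def send_pdu(payload: bytes):
    return payload, zlib.crc32(payload)

def receive_pdu(payload: bytes, digest: int) -> bool:
    return zlib.crc32(payload) == digest

data, digest = send_pdu(b"exchange database page 0x1A2B")
corrupted = b"exchange database page 0x1A2C"   # one byte flipped in flight

print(receive_pdu(data, digest))       # True  -> accept the write
print(receive_pdu(corrupted, digest))  # False -> error recovery kicks in
```

A target that skips the error recovery levels just never takes that second branch.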

You have been warned......

SG


As for StringBean supporting everything, that isn't the case: it is an iSCSI target-mode driver and that's it. All the other functionality comes from the OS and the fact that the OS "owns" the disk prior to it being exported by the iSCSI target driver. That is much different from it being done in specialized hardware designed for the task.

Actually, there are few "hardware" iSCSI targets on the market. Some just hide their implementation better than others.



Why not go with two AX100i DPs? They support iSCSI out of the box and offer more storage than 24 Raptors, and you will most likely never see a difference. A Dell blade chassis with 10 blades, 2GB RAM, and dual 3GHz processors would cost about $30K, and the two AX100s would go for about $10K each. These could be integrated with larger EMC solutions in the future.

If you went to them with an order like this you could probably get them to throw in a couple of switches.
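Rough math, using the ballpark figures above (street prices move around, so treat these as illustrative only):

```python
# Ballpark total for the blade chassis + two AX100i arrays idea.
blade_chassis = 30_000   # chassis with 10 dual-3GHz, 2GB blades (rough)
ax100i_each = 10_000     # per array (rough)

total = blade_chassis + 2 * ax100i_each
print(f"~${total:,} before switches and support")   # ~$50,000
```

Which lands right at the $50K budget mentioned earlier, before any haggling.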

I do see your point about iSCSI server failure. I'm not so much worried about data loss, because we will be taking snapshots every hour using two different backup servers, but that doesn't help with hardware failure and redundancy.

I'm not sure that you fully appreciate that centralised storage is also a centralised point of failure. If there's a software burp or a catastrophic hardware failure, all of your storage is out. I'd think that it would be preferable for some single function to be out for a while rather than everything being down until the storage server is back up.

I am aghast that you would seriously consider using ATA drives in mission critical systems -- especially a new, unproven model. Last time I checked, they're not even cheaper than comparable SCSI drives.


I am not going to be using RAID 0, and I also plan on having several hot spares.

Have you not looked at current HDD prices? $300 for a 150GB 10K drive is a steal. Show me a SCSI 10K drive under $400 -- and I am only looking at retail, no OEM, 1-year warranty, refurb, etc., for obvious reasons. I believe you will come up short.


Actually, there are few "hardware" iSCSI targets on the market. Some just hide their implementation better than others.

This is true, but the iSCSI stacks written for embedded hardware are far more feature-rich. The software stacks written to run on a host OS like Windows are not. Perhaps this will change in time, but for now it is a huge weakness.

Microsoft didn't do an iSCSI target driver for a reason, and I am sure they would have liked to have one for WSS2003.

SG

Why not go with two AX100i DPs? They support iSCSI out of the box and offer more storage than 24 Raptors...

It's a nice solution and would work well, albeit Xeons and not Opterons. My experience shows Dell will always come down or throw in some freebies.

SG


I am aghast that you would seriously consider using ATA drives in mission critical systems...

SATA isn't so bad for these applications, as the price allows you to address its shortcomings. For example, we've set up a lot of clusters using a pair of iSCSI-attached arrays with SATA disks. The way to make this something you can sleep at night over is to run the arrays RAID 10 with a hot spare, and also mirror in real time across the two arrays. This way you have no SPOF, and you can also take advantage of the mirrored arrays to help performance.

SATA is cheap enough to allow this 100% redundancy while still coming in under SCSI drive pricing. A lot of the savings is that nobody does an iSCSI-attached SCSI drive enclosure for reasonable $$$$.
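A toy availability model shows the shape of it (the 99% per-array uptime is an invented figure to illustrate the math, not a vendor number):

```python
# Downtime per year: one array vs two mirrored arrays.
# Assumes failures are independent, which batch-bought hardware isn't
# quite, so treat this as an upper bound on the benefit.

ARRAY_UPTIME = 0.99   # assumed availability of a single array

single_down_h = (1 - ARRAY_UPTIME) * 8760
mirrored_uptime = 1 - (1 - ARRAY_UPTIME) ** 2
mirrored_down_h = (1 - mirrored_uptime) * 8760

print(f"one array:  ~{single_down_h:.0f} hours/year down")    # ~88 h
print(f"two arrays: ~{mirrored_down_h:.1f} hours/year down")  # ~0.9 h
```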

SG


Have you not looked at current HDD prices? $300 for a 150GB 10K drive is a steal. Show me a SCSI 10K drive under $400 -- and I am only looking at retail, no OEM, 1-year warranty, refurb, etc., for obvious reasons. I believe you will come up short.

This is just proof positive that *you* haven't looked.

Not only did a search on Pricewatch find 146GB Seagate 10K.7s (with the standard 5-year warranty) for less than $300, I bought two of them less than six months ago for about $250 each!

Good luck on your project. Your customers will need it!


BS flag! You're wrong. Show me a link. The only ones that are less than $300 are "OEM", "Recertified", or "Refurbished". Get your facts right, buddy. Plus, the 10K.7s are trash; I've owned a few and was very unimpressed.

$290 - http://www.computergiants.com/items/one_it...rt=116681&aff=2

Condition: Recertified

$315 - http://www.rubyskytech.com/ProductInfo.asp...=3146707LC-XX9N

Condition: Refurbished

$322 - http://www.rubyskytech.com/ProductInfo.asp...=3146707LC-XX1R

Condition: Seagate Certified Repaired



Not only did a search on Pricewatch find 146GB Seagate 10K.7s (with the standard 5-year warranty) for less than $300, I bought two of them less than six months ago for about $250 each!

$300 is either for an OEM or recertified drive, and warranty will be a problem. From a large distributor here, the Raptor 150GB is $344 and a Seagate 10K 147GB is $470; those are Canadian dollars, but both are new with a 5-year warranty.

Your $250 is either luck or you don't have a warranty from Seagate...

SG

BS flag! You're wrong. Show me a link. The only ones that are less than $300 are "OEM", "Recertified", or "Refurbished". Get your facts right, buddy. Plus, the 10K.7s are trash; I've owned a few and was very unimpressed.

<asshole mode>BS flag! You are wrong. Contact the seller; the description is incorrect, as refurbished drives don't come with manufacturer's warranties. Get your facts right, buddy. Plus, the Raptors are trash; I've seen a few and was very unimpressed.</asshole mode>


A lot of feedback -

1. Raptor 150s? Avoid. This is an enterprise environment. Use something with a proven track record. Get Cheetah 10K.7s and call it a day. A new model with no established reputation in the industry... don't go there unless you have enough credibility banked with your employer. (And if you had that level of credibility, you wouldn't be trying to shoehorn this into a limited budget, you'd have the budget you need.)

2. iSCSI makes me nervous too. I've done three iSCSI deployments, all being iSCSI forks off a much larger SAN/NAS chassis for non-mission critical stuff. I've had to deal with integration issues on RedHat x86-64, Server 2K and Solaris 8/9/10. Combined with the fact that you have a non-clustered head... you're not going to have "minutes" failover if you lose the head.

3. Read SAN_Guy's post again; it's good advice. If you are going to go the iSCSI route, go with a hardware implementation. By the way, iSCSI off a FAS3050c is pretty darn fast, for those of you who care.

4. I agree with the advice to buy a storage solution and a support contract. Dell always finds a way to compete if you can sell yourself as a strategic opportunity for them. I've also seen NetApp playing ball lately. I personally dislike NetApp in higher end applications, but in tamer situations (couple dozen spindles) they really are stable and fairly idiot-proof, and you can connect them to pretty much anything. I don't know what the current price point on a FAS250 is, but it's probably worth looking into.


I would seriously avoid the Raptors and go with SCSI. SCSI is designed exactly for this environment; SATA is not. I couldn't find it on WD's website, but are the Raptors even rated for 100% duty cycle?

If you absolutely insist on using SATA, then use enterprise-class 7200rpm drives (the WD RE2 comes to mind; there are others), do a RAID 10, and possibly even mirror the two ARC-1130s against each other. Unlike in a non-SAN environment, lose one drive too many and you lose EVERYTHING, and multi-drive systems have a tendency to lose drives in batches.
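To see the exposure concretely, a quick sketch (toy model where the second failure lands on a random surviving drive; correlated batch failures make the real odds worse):

```python
# 22-drive RAID 10 = 11 mirror pairs. After one drive dies, the array
# is lost only if the next failure hits that drive's mirror partner.

drives = 22
survivors = drives - 1                 # drives left after first failure
p_fatal = 1 / survivors                # the one partner out of 21

print(f"P(second failure kills the array) = {p_fatal:.1%}")   # ~4.8%
# Small per event, but with drives from the same lot aging together,
# the window between first failure and rebuild completion is the risk.
```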

If money is so tight that the extra $100 per drive for SCSI is too much (what, $2,400 total?), then you might consider putting your web, file, fax, and management servers either on the same box or on cheaper hardware. Unless you have one hell of a busy network, I highly doubt that these servers need to be dual Opteron boxes.

Fibre Channel, hardware-based SANs may be expensive, but you get what you pay for.

