A question that has been troubling me a lot lately stems from a few problems at a few clients. They were all related to databases, and without exception these databases all run on different types of SANs (some HP, some NetApp, with varying specs).

The first major point is performance... this is where a SAN solution should really do well: impressive specs, loads of drives, NVRAM caches, and so on... but they still fail to impress me. Granted, the setups I've worked with have all been on the cheap side (for example, a NetApp-something with 10 15k rpm drives, though connected over FC), but with the price tags on these products, buying anything but the cheap stuff will ruin you. Still, even with a quite reasonable number of drives (10 15k rpm drives is not massive, but not that bad), performance doesn't seem to be that great. For most installations that I've "fixed", the solution has been redesigning/optimizing the software, or in the worst case adding some more (sometimes quite a lot more) RAM, which seems to be a very cheap upgrade compared to buying the "one step better" SAN solution. It strikes me that RAM is probably the cheapest solution of all, since even a few hours of optimization work add up to some money.
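To put some rough numbers behind that gut feeling, here is a back-of-envelope estimate of what 10 spindles can actually deliver for random I/O. The seek and rotation figures are assumptions typical of 15k rpm drives, not specs from any particular box:

```python
# Back-of-envelope random-IOPS estimate for a small spindle-based array.
# All figures are rough assumptions, not vendor specifications.

AVG_SEEK_MS = 3.5                      # assumed average seek time, 15k rpm class
ROT_LATENCY_MS = 60_000 / 15_000 / 2   # half a rotation at 15k rpm = 2.0 ms

def drive_iops(seek_ms: float, rot_ms: float) -> float:
    """Random IOPS one spindle can sustain: 1000 ms / time per I/O."""
    return 1000.0 / (seek_ms + rot_ms)

def array_iops(drives: int) -> float:
    """Ideal scaling: random I/O spread evenly across all spindles."""
    return drives * drive_iops(AVG_SEEK_MS, ROT_LATENCY_MS)

print(f"per drive:      {drive_iops(AVG_SEEK_MS, ROT_LATENCY_MS):.0f} IOPS")
print(f"10-drive array: {array_iops(10):.0f} IOPS")
```

Under those assumptions the whole array tops out somewhere under 2000 random IOPS, while a page served from RAM costs microseconds. That is why a cheap memory upgrade that lets the working set fit in cache so often beats stepping up to the next SAN tier.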
The next issue is reliability, which should also be a SAN strong point. Although I've never seen one go down completely, I have seen serious controller/drive issues cause failures to read and write data. Maybe a local array of drives is even worse, but I just can't get rid of the feeling that a single point of failure (the SAN) is worse than two completely independent arrays of drives (one in each database server).
The systems I've worked with have typically been anything from 2-8 cores with 4-16 GB of memory, often (but not always) two servers using clustering (sharing the same data on the SAN, so that one of the servers can fail) or similar technology.
Is the SAN the best choice for some other reason? Or am I just working with the wrong type of setups (too small a scale) for SANs to really show their best? From the setups I've seen, comparable performance could be reached with a few more drives, splitting the bunch in two (one set per server) and using mirroring or some similar technology to get two completely redundant servers, then spending the money saved on some extra memory. For mass storage a common file server can be used.
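The trade-off I have in mind can be sketched with the same kind of rough arithmetic. The per-drive IOPS figure below is an assumption for a 15k rpm spindle, and the drive counts are just illustrative, not a sizing recommendation:

```python
# Rough comparison: one shared 10-drive SAN array serving two servers,
# versus a dedicated 6-drive local RAID 10 array in each server.
# Per-drive IOPS is an assumed figure for a 15k rpm spindle, not measured.

DRIVE_IOPS = 180  # assumed random IOPS per spindle

def shared_array_per_server(drives: int, servers: int) -> float:
    # Both servers contend for the same spindles.
    return drives * DRIVE_IOPS / servers

def local_raid10_reads(drives: int) -> float:
    # RAID 10 reads can be served from either side of each mirror pair.
    return drives * DRIVE_IOPS

def local_raid10_writes(drives: int) -> float:
    # Every write must hit both halves of a mirror pair.
    return drives * DRIVE_IOPS / 2

print(shared_array_per_server(10, 2))  # IOPS available per server, shared SAN
print(local_raid10_reads(6))           # read IOPS, dedicated local array
print(local_raid10_writes(6))          # write IOPS, dedicated local array
```

With these (assumed) numbers, a couple of extra local drives per server gives each database box at least as much random I/O as its share of the SAN, with no shared point of failure, which is exactly the hunch I'd like confirmed or refuted.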
I really need to know, because I get the feeling that I'll quite soon be in a position to advise customers on which solutions to choose, and I really don't want them to run into the problems that I've had to solve.
Confused about SANs.
All these emerging SSDs of different types make me even more confused... but that's maybe a topic for another day...