Care to explain why you think they're confused? They were talking about using HBA cards... When I was doing cost forecasts for '04, my boss told me a good rule of thumb for capex on SAN space was $50k/TB. If this is supposed to be "cheap," I'd expect it to be more like $10k/TB.

Can someone tell me the differences between SAN and NAS, and what applications or roles are geared toward each?

Basically, the difference is that NAS runs over TCP/IP as an NFS-mounted device, while a SAN presents storage as a SCSI device, generally via a Fibre Channel Host Bus Adapter (HBA).

The device I linked above is a SAN device because it runs the SCSI protocol via an HBA.
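
To make that concrete, here's a rough Python sketch of how the two look from the host side; the mount point and device name are made up: a NAS share is just a filesystem the OS has mounted over the network, while a SAN LUN shows up as a raw block device, the same as a local SCSI disk.

import os

# NAS: file-level access through an NFS (or SMB) mount point
with open("/mnt/nas_share/report.dat", "rb") as f:   # hypothetical mount and file
    data = f.read(4096)

# SAN: block-level access to a LUN the FC HBA presents as a disk
fd = os.open("/dev/sdb", os.O_RDONLY)                # hypothetical LUN device node
block = os.read(fd, 4096)                            # raw 4KB read, no filesystem involved
os.close(fd)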

Here's an article that spells out the differences:

http://www.extremetech.com/article2/0,3973,1158132,00.asp

Xyratex has some very nice new high-density, SATA-based FC storage arrays. They hold 16 drives in 3U and come with dual active-active RAID controllers and 1GB of cache.

These things easily crank out 180MB/s writes and 260MB/s reads all day and all night in our facility.

We have a bunch of them behind our Lustre OST machines and routinely do 1GB/s off 6 of these in a stripe. (see www.lustre.org)

4TB will set you back $19K with dual controllers and 1GB of cache per controller. Add a QLogic 5200 2Gb switch for $4,200 for 8 ports and a few HBAs at $600 each, and you can have a SAN for under $30K without being locked into the EMC/Dell CLARiiON hell-hole.
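
Quick tally on that; the four-HBA count below is just an example, scale it to however many hosts you attach:

array = 19000                  # 4TB array, dual controllers, 1GB cache per controller
switch = 4200                  # QLogic 5200 2Gb switch, 8 ports
hbas = 4 * 600                 # assuming four HBAs here
print(array + switch + hbas)   # 25600, comfortably under $30K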

Performance will also be a lot better than even the mid-range Dell/EMC CLARiiON stuff, at 1/6th the price.

Two years ago this much disk would have taken up 12U of space, cost $100K, and we'd have been happy if it did 100MB/s.

This is where SATA really fits into the enterprise. For bulk data storage and sequential workflows it can't be beat. For random I/O, true 15K FC drives are still the way to go, but that may change with enough Raptors in an array.
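
Rough math behind the random-I/O point; the seek and RPM figures below are typical spec-sheet numbers, not measurements of any particular drive:

# Per-drive random IOPS is roughly 1 / (average seek + half a rotation).
def random_iops(avg_seek_ms, rpm):
    half_rotation_ms = 0.5 * 60000.0 / rpm
    return 1000.0 / (avg_seek_ms + half_rotation_ms)

print(round(random_iops(3.8, 15000)))   # 15K FC drive: ~172
print(round(random_iops(4.5, 10000)))   # 10K Raptor: ~133
print(round(random_iops(8.5, 7200)))    # 7200rpm SATA: ~79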

SG

This is where SATA really fits into the enterprise. For bulk data storage and sequential workflows it can't be beat. For random I/O, true 15K FC drives are still the way to go, but that may change with enough Raptors in an array.

In addition to that, a cheap caching server can do wonders. ;)

Well, posting a link called "cheap NAS" at this point seems a little silly. I'd guess that all the interested parties are here anyway.

I'm looking for a cheap, small-capacity NAS solution.

Support for at least 8 SATA drives

Support for RAID levels 10+50

Dual Copper Gigabit Ethernet

It seems all the solutions I've found are based on a full-blown computer system (Xeon, no less). Aren't there any small-time appliance-type things around?

Yeah, and it's on eBay for about $1,200. Thanks for the tip.

Why are these so expensive? The storage server thread came up with much better $/GB numbers, and that was for an entire PC.

A guy over at Sudhian found this:  http://www.buffalotech.com/wireless/produc.../HDH120LAN.html

Not fancy, but should do what you want.

Great link, I especially like this part:

...and with two USB 2.0 ports, additional hard drives can be added for extra space.

Not to mention the possibilities of cracking the case and adding storage (à la TiVo?)

A guy over at Sudhian found this:  http://www.buffalotech.com/wireless/produc.../HDH120LAN.html

Not fancy, but should do what you want.

U.S. Robotics has a similar product. No wireless, however.

It does include FireWire, but with the lack of GbE you wouldn't notice the difference - unless you used two drives at once. :)

I don't know of any low-cost units that offer that kind of flexibility (RAID 10, 50, etc.) and offer GbE.

Building your own in a mid-tower seems the way to go.

In other news...

I find it interesting how SANs are fixated on Fibre now. I found that article intriguing and mostly correct. I didn't see where those guys found their information on SANs, though.

They seemed focused on the host adapter as a means of differentiating the two. It really is based on the purpose and framework of the actual storage.

In reality it boils down to marketing.

Dogeared

8^)

After seeing the pricing on these things, I'm definitely looking at building a file server instead (see my "PCI Bus traffic" thread). What does a NAS offer that a file server does not? Why would someone not get a file server?

ddrueding, NAS is a file server appliance (NFS or CIFS/SMB), possibly with iSCSI and drivers that let remote machines mount volumes as devices. It may also have SAN features such as hot splits, etc.

A SAN uses a dedicated network with a specific protocol. It usually uses SCSI for OS device support, but Windows 2003, for example, has Storport drivers, so this is not always the case.

Both usually have a lot of great features for backup and restore, mirroring, and advanced RAID, as well as monster caches (EMC's DMX frames use up to 256GB of cache), although the larger SAN boxes are usually more feature-rich. Both have much higher latency than local storage and don't always work as well for latency-sensitive operations (such as paging).

SAN_guy, when you said you were pushing 1GB/s, I was wondering how, since HBAs are only 2Gb/s. Did you mean 1GB/s in the frame? In any case, you'd need 4 HBAs and at least two PCI-X buses fully loaded to drive that I/O.
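
For reference, the arithmetic I'm doing here; these are raw line rates only, and usable payload per 2Gb HBA is really closer to 200MB/s:

# Minimum parts needed to move 1GB/s, using nominal rates (real-world payload is lower).
import math

target = 1000               # MB/s, i.e. 1GB/s
fc_port = 2000 / 8          # 2Gb/s HBA, 250MB/s raw
pcix_bus = 64 / 8 * 133     # 64-bit/133MHz PCI-X bus, ~1064MB/s

print(math.ceil(target / fc_port))   # 4 HBAs at the bare minimum
print(round(pcix_bus))               # one bus has almost no headroom at 1GB/s, so split across two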

This system has 28 Fibre Channel adapters servicing 2,000 spindles (74TB), all in RAID 0. Scary.

ddrueding, NAS is a file server appliance (NFS or CIFS/SMB), possibly with iSCSI and drivers that let remote machines mount volumes as devices. It may also have SAN features such as hot splits, etc.

I can definitely appreciate the uses for a SAN. Server clustering or rendering farms could hardly operate without that concept. But I must still be missing something with the NAS...

From this I gathered that it uses a standard filesystem, that it allows its "shares" to be mounted as drives (unless you mean boot-time drivers, which might be cool), and that it's easily manageable.

I must still be missing something.

The filesystem is fairly straightforward and I see no reason for it not to be.

Drivers? Something like daemon tools to mount network shares? Like drive mapping? Is it bootable?

Manageability is always a plus. The ability to expand a RAID5 or RAID10 array sounds very attractive.

But these things are also available in servers at about the same cost. Sorry if I'm sounding dumb here, but I'd like to appreciate the technology.

This system has 28 Fibre Channel adapters servicing 2,000 spindles (74TB), all in RAID 0. Scary.

That looks like an absolute beast...I'd need to clear out my garage...and the rest of my house :D

I have to agree with ddrueding... there are some things a SAN just makes your life easier for.

Both are very high latency versus local storage and don't always work as well for latency sensitive operations (such as paging).

Depends on the hardware. I've seen a SAN with SSDs for a couple of the drives, and that sucker was nice (and expensive)!

SAN_guy, when you said you were pushing 1GB/s, I was wondering how, since HBAs are only 2Gb/s.

Imagine you have a loop at 2Gb/s on two cards. You lose a card and you get less bandwidth. His reason could be something else, too.

4TB will set you back $19K with dual controllers and 1GB of cache per controller. Add a QLogic 5200 2Gb switch for $4,200 for 8 ports and a few HBAs at $600 each, and you can have a SAN for under $30K without being locked into the EMC/Dell CLARiiON hell-hole.

Yes, but you're on your own for support. :P

Personally, I've had mixed experiences with QLogic HBAs in the past. How I wish I'd taken the extended warranty at the time. :lol:

SAN_guy, when you said you were pushing 1GB/s, I was wondering how, since HBAs are only 2Gb/s.

Imagine you have a loop at 2Gb/s on two cards. You lose a card and you get less bandwidth. His reason could be something else, too.

;)

From this I gathered that it uses a standard filesystem, that it allows its "shares" to be mounted as drives (unless you mean boot-time drivers, which might be cool), and that it's easily manageable.

They show up as devices, so it should support boot. I don't really know, though; I've only worked on SANs (where it is still somewhat tricky to boot, especially with multipath).

Drivers? Something like daemon tools to mount network shares? Like drive mapping? Is it bootable?

NAS is really just a bundled file server with extensions to make it more like SAN, using TCP and NFS or SMB instead of FCP and a dedicated network for connectivity. Since it has more potential scalability problems, such as packet fragmentation (without iSCSI, packets are limited to Ethernet's 1500-byte MTU, and the tech is newer), it's usually targeted as a lower-cost solution. It still has the SAN benefits, such as hot splits, realtime replication, centralized backup, and device driver support for clustering, and it comes as a pre-packaged, highly scalable file server. For example, EMC's NAS is supported up to 100TB (FC) or 200TB (ATA) and 325,000 IOPS with thousands of connections.
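
Just to put a number on the fragmentation point, here's a rough sketch; the 64KB transfer size and the 9000-byte jumbo MTU are example figures of mine, nothing EMC- or NFS-specific:

# How many Ethernet frames one large NFS transfer turns into.
# The 64KB transfer and 9000-byte jumbo MTU are example numbers, not spec values.
import math

transfer = 64 * 1024          # bytes moved in one NFS read/write
for mtu in (1500, 9000):
    payload = mtu - 40        # rough IP + TCP header overhead per frame
    print(mtu, math.ceil(transfer / payload))
# 1500 -> 45 frames, 9000 -> 8: that's the per-packet overhead being referred to.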

Depends on the hardware. I've seen a SAN with SSDs for a couple of the drives, and that sucker was nice (and expensive)!

Even on a DMX3000 with 64GB of cache, you still can't do anything about the extra milliseconds required to get traffic out to external media, through a switch, to a controller (then to cache or drives as necessary), and back. We have several 2003 boxes that boot from SAN, and the only local drives they use are for paging.

The absolute throughput is huge, though, so you can perform a lot of I/Os at the same time; it's just that any given I/O to the SAN will take longer to complete than a local one (until the queue depth gets large).
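
A quick Little's-law way to see that tradeoff; the latency figures below are illustrative guesses, not measurements of any particular box:

# IOPS is roughly outstanding I/Os divided by per-I/O service time.
def iops(queue_depth, latency_ms):
    return queue_depth * 1000.0 / latency_ms

print(round(iops(1, 0.2)))    # fast local I/O (say 0.2ms), one at a time: ~5000 IOPS
print(round(iops(1, 5.0)))    # add the SAN round trip: a single-threaded workload sees only 200
print(round(iops(64, 5.0)))   # but with a deep queue the aggregate is 12800, hence the huge throughput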

Also, I think SAN Guy was just quoting the theoretical max of the unit.

I would definitely like a nice, stable PCI Express HBA that allowed connectivity to some kind of affordable SAN appliance that could be filled with SATA disks.

I think that could sell well in smaller organizations.

Some small companies still have very complex workloads. Imagine a web application small enough for a few people to support, but complex enough to put a lot of stress on the database subsystem. I'm sure there are many of those out there not turning enough profit to happily drop $20K to $30K on a SAN.
