wasserkool

Confused about iSCSI server and file server... please help me, pros!


I was reading the thread about a user who wants to deal with 300 hours of video, and some helpful users there mentioned setting up an iSCSI server. I had heard about iSCSI before, so I did some reading and found it quite interesting. One member even mentioned that you can link two Gigabit NICs together to achieve better bandwidth.

Unfortunately, I wasn't able to find much info on running an iSCSI server in a Linux environment, so I am wondering which setup offers the best performance and storage reliability with my hardware:

--------------------------

CPU : Opteron 165 Dual Core

Mobo: ASUS K8N-LR

Ram: Corsair ValueSelect 1GB DDR

NIC: 2x integrated Broadcom PCIe x1 GbE NICs

SATA II Controller: 3Ware AMCC 9550SX 8 Port

SCSI Controller: LSI Logic 21320

Storage:

4 x 250GB Seagate 7200.10 (16MB cache) in RAID 5

1 x 147GB Maxtor Atlas IV 10K

----------------------------------------------

Should I opt for Windows Server 2003, or for Linux using Samba?


You really have several options if you want to run an iSCSI server on Linux. First of all, you have to install IET (iSCSI Enterprise Target - iscsitarget.sourceforge.net/) if you want a "free" iSCSI target. Once it is installed, you can refer to the wiki on their page. If you set this up on the server with the 3ware controller, your main bottleneck will be the network. Even if you aggregate the gigabit NICs, you have to do the same on the other end (the initiator/client), and you'll max out at about 150 MB/s under ideal conditions.
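
As a point of reference, a minimal target definition for IET in /etc/ietd.conf might look something like the sketch below. The IQN, device path, and CHAP credentials are placeholders, and the available options vary by IET version, so treat it as a starting point rather than a drop-in config.

    # /etc/ietd.conf - minimal example target (names, paths and passwords are placeholders)
    Target iqn.2007-04.net.example:storage.raid5
        # Export one block device (e.g. the 3ware RAID 5 array) as LUN 0
        Lun 0 Path=/dev/sdb,Type=blockio
        # Optional CHAP credentials the initiator must present (12-16 character secret)
        IncomingUser iscsiuser secretpass1234
        Alias raid5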


Does Linux iSCSI still give better performance compared to just a NAS running Samba or Windows Server 2003?

What I am most concerned about right now is:

1) Future expandability

2) Performance

3) Reliability.

Also, if I set up the iSCSI target on the Linux server, how do I let my Windows clients access the drives? I am a Linux noob, lol.


There is a fundamental difference between NAS-based storage, of which Samba is an example, and SAN storage, of which iSCSI is an example.

NAS storage deals with files and directories on the server. The file system and its integrity are maintained by the server.

SAN storage deals with blocks. The file system and its integrity are maintained by the client(s).

There are often many communication round trips associated with NAS traffic. SAN traffic is much more basic and as such has lower overhead.

If you only have a single client per volume, there will not be many obvious differences between a SAN storage device and a NAS storage device. The SAN device will likely be faster, but it takes a certain amount of tuning of systems and applications to take advantage of it.

Where things get hard is when you attempt to attach multiple clients to a common volume. In the case of NAS this is very simple: it just works, and the only time there is trouble is when more than one client attempts to access the same file. Just attaching multiple clients to a common volume on a SAN, however, is a recipe for disaster. Unless there is special (read: expensive) software running on the clients, the data on that common volume will be corrupted in short order.

Once an iSCSI target server is set up, attaching Linux or Windows initiators to the associated volume is not too difficult. In the case of Windows 2K, you have to download and install the iSCSI initiator from Microsoft.
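
On the Linux side, for example, the open-iscsi tools handle discovery and login with something along these lines (the server address and target name are placeholders):

    # Ask the target server which iSCSI targets it exports
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10

    # Log in to the discovered target; it then appears as a new local disk (e.g. /dev/sdb)
    iscsiadm -m node -T iqn.2007-04.net.example:storage.raid5 -p 192.168.1.10 --login

On the Windows side the Microsoft iSCSI Initiator does the equivalent through its GUI, and the logged-in volume then shows up in Disk Management, where it can be partitioned and formatted like any local disk.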


So if there is more than one user accessing the SAN, the data might get corrupted, but this won't happen with a NAS?

I am wondering how the client maintains file integrity, though. Doesn't the initiator make the drive look like a physical drive to the Windows system, so the client just accesses it like a regular drive?

So I guess for a home network where large amounts of videos and photos are stored and accessed by clients, which one will give better performance: SAN or NAS?


A client machine accesses a SAN drive just like a physical drive, maintaining things like directory structures and block free lists itself. If two clients access the same drive at the same time, each machine could change these structures independently. This concurrent access is what will likely cause corruption of the file system.

There are software packages which use some form of communication between clients to make sure that file system integrity is maintained, but they can run anywhere from $500 to $4,000 per machine.

A properly set up SAN shared file system would be faster than a NAS for objects like photos and videos. However, it may not be practical for a smaller environment.

If the library consists of a large amount of fixed content, then a shared volume could work as long as access to the volume is restricted to read-only.
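
As a rough sketch of that read-only arrangement on a Linux client, assuming the iSCSI volume appears as /dev/sdb1 (a placeholder), it would be mounted along these lines:

    # Mount the shared iSCSI volume read-only so this client never modifies the on-disk structures
    mount -o ro /dev/sdb1 /mnt/media

Every client would have to mount it read-only; updates to the library would then be made from a single machine while the other clients have the volume unmounted.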


Because of the cost usually associated with them, SANs are normally deployed as shared storage in a clustered server environment. SANs normally run proprietary protocols optimized for this expected usage, and typically employ high-performance storage subsystems to support high I/O rates. If configured correctly and supported by the hardware, a server can even boot directly from a SAN, even though the "boot" volume for any given server isn't "shared" with any other systems.

NAS servers are usually just "plugged into the network", advertise themselves, support multiple protocols (usually standard rather than proprietary), and are accessed directly by clients (i.e., end-user systems, rather than servers).


In my experience most storage on SANs is not shared. A lot of this storage is just carved up into pieces and allocated to individual servers. These servers access the storage using industry standard protocols.


Then it seems obvious that our experiences in this regard have been different. :)

