Home Server Using Fiber Channel

samm

OK, so I came across an opportunity to buy a bunch of fibre channel HDDs (Hitachi), and I'd like to implement them in my home for now. If I get them to work, I may put my HP DL server to use and start web hosting or something.

I need to know what y'all suggest as far as how to implement these hard drives.

This is what I think I know:

I need 3 fibre channel enclosures (14-15 bay)

I need a RAID controller

I need a way to access the storage

What I Have:

32 dual-port 4Gbps FC HDDs

1 Brocade 825 PCIe dual port 8Gbps HBA

HP DL360 G5 server machine

What I'm thinking I need:

HP EVA fibre channel enclosures?

HP EVA fibre channel controller(s)? I see most racks have two of them; do I NEED two, or is that just for high-speed access?

(Is EVA (virtual array) even the way to go? The whole "virtual" business kinda worries me - like that's maybe not what I'm looking for...)

Is this the best route to go? Can I plug the controller(s) directly into the HBA without the need for an FC hub/switch?

Can I just use a JBOD enclosure and let the HBA/OS do the RAID business?

Lastly, how would you recommend implementing these? At first, I want to try storing my media on them, using Windows 7 Media Center on client PCs, so a Windows CIFS share (mapped network drive) would be easiest?

What about iSCSI? Is there a reliable/simple iSCSI setup for Windows that will allow multiple PCs to read/write to the target array at the same time? I know of the iSCSI Initiator, but I need to use the target across multiple clients (that's the whole idea here).

Thanks for reading and any input you may have.

SAM

Wow, okay... let's see.

First, to clarify: FC HDDs are almost identical to SCSI HDDs, except that they use FC as the interface. FC and SCSI are very closely related and can be treated in almost the same way. FC is just a more flexible interface, but also more expensive, which is why both standards are still alive; SCSI/SAS has its advantages, as does FC.

Then, I do not know who pays your power bill, but don't be surprised to see it shoot through the roof. I do not know how "new" these disks are; let's say they are 2 to 3 years old. They will give you fairly decent IOPS (nowhere near SSD speeds, you'd probably need about 15 of them to reach that, and they still won't come near SSD access times) but only modest capacity. Meanwhile, they use a very large amount of power: where a 1TB desktop disk might idle at, say, 7 watts, these will easily draw 15 to 20 watts apiece. Be warned!

So, if after heeding those warnings you still wish to use the equipment, let's continue.

Well, you could hook up a single HDD to the HBA and that should work. Not very practical, but still, it's all very close to SCSI.

The best thing to do is to pick an array/storage box for them. This can be a box with an FC RAID controller in it, which allows you to create LUNs and present them through your HBA to your system, or it can be a simple JBOD box, leaving your system to do the processing work.

An HP EVA is indeed an example of a storage box with RAID functionality built in, and in the case of the EVA this is called "virtual" RAID. It leaves less up to the user and just presents you with disk space you can use. I am not a fan of this type of box because it hides too much and can set you up for a fall later on (my opinion, and probably not something you need to worry about at home).

An HP MSA is a more traditional box. Without a controller you can hook it up to your system and see all the individual disks; with a controller it becomes an intelligent system which can RAID the disks for you and then present the result to you.

What you need depends on the choices you make. Let me answer your switch question first; that will clarify things some more.

In most cases you are not required to have a switch to interconnect, say, an EVA with your HBA. In some cases I've seen they might refuse to work together, but looking at the FC standard, it should just work.

Now, say you go with an EVA and a switch. Then you can buy multiple HBAs and put one in each system you wish to use the storage on. This is the traditional configuration: the most flexible, but also definitely the most expensive. Two controllers are mostly used for high availability. Since the disks are dual-ported, each controller has a connection to every disk, and with dual paths to your host, if one controller fails the other automatically kicks in and takes over. Not really needed at home, I think.

Cheapest would be to buy a simple OEM box which can house your disks, connect it to your HBA directly, and RAID the disks using software RAID on your host. No switch, no logic in the box, etc. For home use it's a fine solution, just probably not something you can boot from (depends on the OS). The host could be your workstation, which then lets others access the files over CIFS.
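
Just to make the software-RAID idea concrete, here is a rough sketch of what that looks like if the host were a Linux box, where the FC disks simply show up as ordinary block devices through the HBA driver. The device names, RAID level and disk count are assumptions for the example, not taken from your setup:

```python
#!/usr/bin/env python3
"""Rough sketch only: tie a pile of FC disks together with Linux software RAID.

Assumptions (mine, not from this thread): a Linux host, the HBA driver exposes
the FC disks as /dev/sdb ... /dev/sdo, mdadm and e2fsprogs are installed, and
this runs as root. Device names and the RAID level are placeholders.
"""
import subprocess

# Placeholder device names - check `lsblk` or `lsscsi` for the real ones.
FC_DISKS = [f"/dev/sd{c}" for c in "bcdefghijklmno"]   # 14 example disks

def run(cmd):
    """Echo and run a command, aborting on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# RAID-6 gives two-disk redundancy across the whole set.
run(["mdadm", "--create", "/dev/md0", "--level=6",
     f"--raid-devices={len(FC_DISKS)}"] + FC_DISKS)

# Filesystem on top, mounted where a CIFS (Samba) share can export it.
run(["mkfs.ext4", "/dev/md0"])
run(["mkdir", "-p", "/srv/media"])
run(["mount", "/dev/md0", "/srv/media"])
```

Sharing the mount point over CIFS is then just an ordinary Samba share, which the Windows 7 Media Center boxes can map as a network drive.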

Or you could build a dedicated storage box with something like FreeNAS or OpenFiler and then use iSCSI to deliver the storage to the other boxes. This still lets you use software RAID without burdening the other systems with the processing power that takes.
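
FreeNAS and OpenFiler drive all of this through their web interfaces, so you would not actually type any of it; purely to show the moving parts, here is a rough sketch of the same idea on a plain Linux box, exporting an array as an iSCSI LUN with the LIO target's targetcli tool. The IQNs and names are invented for the example:

```python
#!/usr/bin/env python3
"""Rough sketch only: export /dev/md0 as an iSCSI LUN using the Linux LIO target.

Assumptions (mine, not from this thread): targetcli is installed, /dev/md0
already exists, and both IQNs below are made-up placeholders.
"""
import subprocess

TARGET_IQN = "iqn.2010-01.local.storage:fc-array"        # invented example
CLIENT_IQN = "iqn.1991-05.com.microsoft:mediacenter-pc"  # invented example

def targetcli(*args):
    print("+ targetcli", " ".join(args))
    subprocess.run(["targetcli", *args], check=True)

# Register the md array as a block-device backstore.
targetcli("/backstores/block", "create", "name=fcarray", "dev=/dev/md0")

# Create the iSCSI target and attach the backstore to it as a LUN.
targetcli("/iscsi", "create", TARGET_IQN)
targetcli(f"/iscsi/{TARGET_IQN}/tpg1/luns", "create", "/backstores/block/fcarray")

# Allow a single initiator (one client) to log in to the LUN.
targetcli(f"/iscsi/{TARGET_IQN}/tpg1/acls", "create", CLIENT_IQN)
```

The Windows iSCSI Initiator can then log in to that target and will see it as a local disk.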

Very many options, all depending on what you want to do with it and how. Using these disks to hold movie files and the like? Don't even bother: buy a 2TB disk, put it in your PC, and you'll be much happier with the low power bill and low noise. I think if you really ran, say, 32 of these few-year-old 10K/15K FC disks, the power alone would cost you the price of a 2TB SATA disk every month. :P
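
To put a rough number on that claim (the idle wattage and the electricity price are assumptions for the example, not measurements):

```python
# Back-of-the-envelope: what 32 spinning FC disks cost in electricity per month.
# Wattage and price per kWh are assumptions for the example, not measured values.
disks = 32
watts_per_disk = 18        # assumed idle draw, middle of the 15-20 W range
price_per_kwh = 0.20       # assumed electricity price in EUR
hours_per_month = 24 * 30

kwh = disks * watts_per_disk * hours_per_month / 1000   # ~415 kWh/month
cost = kwh * price_per_kwh                              # ~83 EUR/month

print(f"{kwh:.0f} kWh/month, about {cost:.0f} EUR/month")
# That is in the same ballpark as the price of a 2TB SATA disk, every month,
# before counting the enclosures and the server itself.
```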

If you do wish to run a high-traffic web server, start looking at SSDs. I'm not saying this type of storage is unsuited for that purpose (on the contrary), it's just not that good a fit for a "home" situation.

Hopefully this answers some of your questions. Let me know!

P.S. A warning: an HP EVA or MSA will *not* accept most disks that are not HP-branded or do not carry HP firmware. Also, the blanks supplied for unused drive positions are dummy caddies and cannot be fitted with your own disks.

Fibre channel and home servers...not something I expected to see here, but great response Quin.

@samm - welcome to the forums.

Wow. Very informative post. Thank you very much for sharing.

Indeed, these will probably consume more power than I'm willing to pay for, given the ROI.

Thanks!

SAM

Sure, no problem. Knowledge exists to be shared! :D

If you are looking to build another server or another type of storage, just let us know. People here, myself included, have a wide variety of experience, from simple USB storage up to high-end SAN storage with a 100% uptime guarantee (Hitachi Data Systems offers this).

I myself host a 21TB server at home which runs VMware and draws around 140 watts fully tuned for power. Partially tuned (which is how I mostly run it), it draws about 170 watts.

I have a 4Gb fibre channel network at home, but I'm not using FC disks or enclosures; I have a SAS JBOD chassis connected to a SAS HBA in the head unit. The idea should be the same for FC. The cheapest and easiest way is to get an FC JBOD enclosure and connect it to the FC HBA directly.

Now, if you want multiple clients to access the same storage simultaneously, you need a cluster filesystem. Otherwise, depending on the application requirements, CIFS or NFS may work. iSCSI can export a LUN to clients, but you still need a cluster filesystem if all the clients are to access it at the same time.

The cheapest and easiest way is to get an FC JBOD enclosure and connect it to the FC HBA directly.

First, I apologize for bringing up such an old post, but it came up in a Google search! Lucky me!

I have found a business use for the server and the fibre storage, and will be looking at renting some business space in town with high-speed WAN access.

Like I said before, I'll need 3 enclosures. The HDDs I have are made by Hitachi, so they may not work in the HP EVA enclosure box?

However, I'm still confused about the HBA-to-enclosure connection process. I will only be using one host machine, with a dual-port HBA. Is the 2nd port on the HBA only for high availability, or does it give a speed increase too? Given that I wish to obtain an EVA or MSA RAID controller, how would all the connections be made? I understand that enclosures are to be wired in series, correct? So I would go from the controller to one enclosure, to the next enclosure, to the last one, and then back to the controller? Then one connection (or two?) from the controller to the HBA?

All SFPs and cables will be LC style.

If I wanted to eliminate the controller and do host based RAID processing, how would the wiring be different?

Again, thanks for all of the wise advice.

SAM

The HDDs I have are made by Hitachi, so they may not work in the HP EVA enclosure box

They won't work unless they have HP firmware.

I will only be using one host machine, with a dual port HBA. Is the 2nd port on the HBA only for high availability, or for a speed increase too?

The 2nd port on the HBA is usually used for redundancy.

However, I'm still confused about the HBA-to-enclosure connection process. ...

Cabling on an EVA depends on the controllers you are using.

For an EVA 3000/4000/4100 you can use up to 4 drive enclosures. EVA 5000/6000/8000 require loop switches if you plan to use more than 4 disk enclosures.

Have a look at the EVA Hardware Configuration Guide: http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01364500/c01364500.pdf

This explains most of the cabling scenarios for the EVA.

If I wanted to eliminate the controller and do host based RAID processing, how would the wiring be different?

The EVA disk enclosures can be used as JBOD enclosures, but it's not a supported setup.

Don't forget that an EVA needs a management server with Command View installed to present the disk storage to your server. Command View licenses are the most expensive part, or try to get hold of a copy of Command View 6.0, which doesn't check for valid licenses ;)

Bringing up the old thread again... I still have these fiber drives sitting on my shelf and I NEED TO USE THEM!!! or at least test them so I can sell them...

OK, so I see the cheapest route is to get the HP enclosures, but I cannot use my Hitachi HDDs with them... Can I flash HP firmware onto the Hitachi drives? Will they accept something like that? If it is possible, I would need very detailed instructions, as I'm sure the firmware has to match all the specs of the Hitachi drive (capacity, latency, etc.). Am I assuming correctly?

I am also running into a bit of an issue. My HDDs are 450GB apiece, but many of the storage subsystems I've been looking at (HP EVA, IBM) only allow a maximum per-drive capacity of 300GB.

Can anyone suggest a system that can support 450GB FC HDDs?

Thanks for the help people,

SAM

I found this firmware on HP's site. It says it's for the HUS154545VL300 HDDs, but the title of the page mentions the 15K450 SAS drives (HUS...VLS400); my drives are HUS154545VLF400, which are FC. Will the SAS firmware flash work for FC drives?

HP Firmware

I still have the issue of the per-drive capacity limit of 300GB to address...

Thanks

SAM

The SAS firmware will not work on the FC drives. The HP EVA should work with 450GB HDDs, but it depends on the controller firmware. I'm not sure whether it will work if you just use the drive enclosure as a JBOD.

Hey all.

I figured it out. I decided to shell out big bucks for a NetApp system to fulfill my storage needs. It will do the job I ask of it.

cheers
