Henri BR

72TB+ NAS, Extensible - Newbie build


I could not find that UPS you mentioned when I was looking for one. The one from your link is rated at 2700 Watts, the other at 2100 Watts; that's good, and it costs much less. Another difference is that the SUA3000RM2U is not an on-line UPS. Sometimes the power environment here is a bit noisy, and not that 'stable' when a lot of equipment is on. As it's a home office, a few utilities are in 'concurrent' use during a typical day - showers, microwave, freezers, air conditioning, etc. I haven't had any outages caused by heavy usage, but I can see our power stabilizer taking action very frequently when many things are in use. That's the primary reason for choosing an on-line UPS.

As we have never used a UPS, what do you think?

For the PDU, I first selected the one you linked. I was not sure about it due to the "Total Current Draw Output", "Input Current", and "Load Capacity" figures. I also think that two are frequently better than one - even so, I thought about buying the AP7802 or 2x AP7801 due to their ability to handle 'major loads'. I've been doing some calcs, and all the hardware may consume from 800 to ~1,750 Watts. However, if two Zero U AP9567s can handle the whole load without problems, that would be much better.

Should I repeat the above/last question? :D

Steve, what I liked about that Chenbro expander is that it does not require a PCIe slot, if I'm not wrong - it can be powered by a single 4-pin PATA (Molex) connector. With regard to your question, I'm not sure I understand it; what kind of 'stuff' are you referring to? I haven't seen an integrated expander backplane in the chassis I looked at.


Correct, the SUA3000RM2U is not 'on-line'; however, it does have a high sensitivity rating and an internal buck-boost transformer to even out high/low voltages (i.e., 82V-144V). And yes, I would use a UPS with buck/boost, especially in a noisy environment, as it will help prolong your equipment's life. Clean power is, IMHO, very important.

I would find it hard to see you pulling 1800VA on both PDUs with a 25U cabinet and the number of drives you're looking at, unless you're doing blade chassis - especially with staggered spin-up of drives. Remember that 'major loads' are not really a problem: unless you're going to 220V lines, your circuit is 20A, which is ~2000W, and the APC UPS itself is only rated for 2100 or 2700W depending on the model you're looking at. I would suggest that separate circuits (two PDUs) would be better. The AP7801 is only rated at 16A at 100/120V, though it IS metered, which is kind of nice (it would save me from having to use my AC clamp to get data ;)). It just takes up a U that I would rather use for servers.
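As a rough sanity check on those numbers (a back-of-the-envelope sketch; the 120V nominal line voltage, the 80% continuous-load derating, and the 800-1,750W estimate from the post above are assumptions, not measurements):

# Rough check: can two 20A Zero U PDUs carry the estimated load?
line_voltage = 120.0  # V, assumed nominal line voltage
breaker_amps = 20.0   # A, per-PDU rating (AP9567 class)
derating = 0.80       # common continuous-load derating for breakers

usable_w_per_pdu = line_voltage * breaker_amps * derating  # ~1920 W
for total_load_w in (800.0, 1750.0):  # build estimate from the post above
    per_pdu_w = total_load_w / 2      # load split across two PDUs
    print(f"{total_load_w:.0f} W total -> {per_pdu_w:.0f} W per PDU "
          f"({per_pdu_w / usable_w_per_pdu:.0%} of ~{usable_w_per_pdu:.0f} W usable)")

Even at the worst-case estimate, each PDU would sit at roughly half of its usable capacity.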

The case(s) that I mentioned before all have SAS expander backplane options:

http://www.aicipc.com/ProductParts.aspx?ref=RSC-3EG2 (SKU: RSC-3EG2-80R-SA2S-0[A-C]) for single-expander versions. The XJ line is designed for drives only; the 3EG2 is for motherboards as well.

The Supermicros as well (SC837s, 938s, et al.). There are quite a few, but chassis like these are designed more for servers or business/enterprise farms where you need to string multiples together to meet performance/storage goals.

Basically, with an expander backplane (like the AIC ones I have here) you attach the drives to the backplane (hot-swap), and the backplane has two SFF-8087s (one in, one out) so you can daisy-chain chassis together. I normally use the Supermicro CBL-0167Ls in them so I can convert these to SFF-8088s and run each unit either directly attached or daisy-chained, depending on needs.
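Schematically, the daisy chain described above looks something like this (a simplified sketch; exact port labels vary by backplane):

HBA/RAID card --SFF-8088 cable--> [chassis 1 expander backplane] -- drives
                                        | 'out' SFF-8087, converted to an
                                        | external SFF-8088 (e.g. CBL-0167L)
                                        v
                                  [chassis 2 expander backplane] -- drives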


Good to know, Steve!

I've been reading some articles about UPSes. This one at Wikipedia is a good start for anyone who needs one too. At first I thought an on-line UPS would be better, because of what I described before and also because, with the inverter running continuously, it offers much more assurance that the hardware would 'never' lose power, whereas an off-line solution has to take action at the moment a power outage hits - you know? No power transfer switch is necessary with an on-line UPS. But if an off-line solution can do the job perfectly well, there may be no reason for the extra cost/investment.

Where I live, the power lines are all 127V / 60 Hz.

There are options for 220V, with a little rewiring or transformers - not an option right now.

I'll look again at the UPS and PDU you linked.

Probably a non-rackmount UPS, as it may save some rack space for the future and keep heat out of the rack, plus a Zero U PDU.

So, an off-line UPS can (mostly) do the job without problems, considering what I described?

Is there any reason for a rackmount UPS that I must be missing?

Thanks for explaining about the integrated expander backplane. I'll study it a bit more.

One reason I was concerned about two chassis was that I didn't understand how the 'stuff' works.

I thought we would need a motherboard and all the other hardware in every enclosure - but a single RAID card.

Did you understand what I mean? 3 enclosures = 1 RAID card, 3 motherboards, 3+ CPUs, 3 sets of memory, 3 or more of everything.

Now, if I understand it better, we'd need only a 1U or 2U server (the head) and 1 or 2 enclosures; then we could use a SAS expander in each and nothing more.

Something I searched for a LOT and could NOT find is a high-density enclosure like this (link).

I found Supermicro with a similar solution, as you mentioned.

There is this AICipc XJ-SA24-448R-B, which does not sound like a good idea due to its design (you remove 2 HDDs at once, and I'm not sure it's 6Gb/s).

Here is a great model for 60 drives, the Nexsan E60X, and here is its datasheet - the price? $45K to $60K. :unsure: :(

Do you know of any other manufacturers of these high-density enclosures? Some 6G SAS enclosure just for disks - no server (head). No problem if there's a motherboard tray, anyway...

Also, is there any reason they are so hard to find? Temperatures, vibration, or something else that could be bad about these high-density chassis?

Would you recommend these kinds of enclosures?

By the way, this is a nice document about space organization within high-density racks.


On-line UPSes have the benefit of removing transfer delays, but that's generally not that important in typical server environments (it matters more for medical use, or cases where your load doesn't have enough capacitance to ride through a couple of missed cycles - most server PSUs have more than enough).
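As a rough illustration of why (a sketch; the ~16ms hold-up figure is the common ATX guideline and the 4-10ms transfer window is typical of standby/line-interactive units - both assumed here rather than taken from any specific datasheet):

# How long can a typical server PSU ride through a UPS transfer?
psu_holdup_ms = 16.0          # assumed typical ATX hold-up time under load
mains_cycle_ms = 1000.0 / 60  # one 60 Hz cycle is ~16.7 ms
print(f"hold-up covers ~{psu_holdup_ms / mains_cycle_ms:.1f} mains cycle(s)")

for transfer_ms in (4.0, 10.0):  # assumed typical UPS transfer windows
    print(f"{transfer_ms:.0f} ms transfer leaves ~{psu_holdup_ms - transfer_ms:.0f} ms of margin")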

The downside is that you go through batteries faster due to their constant use. With normal UPSes you want to replace them every 3-5 years; with on-line units running directly off the battery (as opposed to supercapacitors plus batteries), probably every 1-3 years depending on use.

Yes, the XJ-SA24-448R-B and similar are a bad design. STK was using that design in their SANs about 5-8 years ago, and there was no end of problems with them (mainly drives falling offline, mating problems, et al.). To work around the issue of pulling multiple drives, STK had a dual-head design: the front drive was controlled by one RAID system and the back drive by another storage processor. You had to offline both A/B drives and wait until the data was synced to a hot spare before you pulled the blade. Slow, and yes, if you had another problem at the same time you were kind of screwed (they were only doing RAID 3/5/1+0 back then).

The later one you picked (48 drives vertically mounted in a 4U) is a copy of the Sun Thumper systems, which are good for high density, but airflow is an issue due to the amount of restriction in those cases; plus, to replace a dead drive you have to shut down and pull the entire unit out of the rack to get to it. To mitigate that you need a decent number of hot spares in there so you can do replacements in standard outage or downtime windows. I normally do a 1:10 to 1:15 ratio depending on drive quality (consumer or enterprise) - for a 48-drive chassis, that's roughly three to five hot spares. Another item is how those chassis are wired; the ones you linked do not show internal wiring schematics at all. You want to avoid situations where, as I mentioned before, a bad drive could take down an expander and all the other drives attached to it. This is why I opted for multiple chassis. It comes down to your HA requirements and the cost you can live with.

Just FYI, SM is supposedly working on the issue I had with the SC837E* chassis and the LSI HBAs, but no solution yet. I've since gone with plan B here and just purchased some more of the AIC chassis. Not as good a density solution, but I couldn't wait for SM (considering that they use the LSI 2008 chip on their own motherboards, this implies they didn't do much QA testing).


Very instructive, Steve!

We can find this UPS on the local market - the APC SMX3000RMLV2U.

A modular model is also available, specifically the SUM3000RMXL2U - however, it does not seem to be worth more than the other (the SMX3000RMLV2U). PDF brief and specs.

I was still thinking about that issue you had with that SM chassis.

Do you think that chassis could work with the LSI card we've been talking about, but in the external-ports version (I think it's the MegaRAID SAS 9285-8e)?

This Supermicro chassis (SC837E26-RJBOD1) provides room for 45 HDDs - it seems very similar to the one you tried there. It has 2 backplanes per chassis: one at the front and another at the back. The 847EL2 backplane versions have dual-port expanders that access all the hard drives. These dual-port expanders support cascading, failover, and multipath, as described in the product manual (PG 49... - C3). With regard to failover, the manual says: "If the expander or data path in the primary ports fails, the system automatically switches to the secondary ports. This maintains a full connection to all drives". But...

Image for reference (Supermicro_BPN_SAS2_846_EL2.jpg):

http://postimage.org/image/22dbwpyec/

What is multipath in this context and what is its benefit?

Since each chassis has 2 backplanes, does that mean we need 2 ports on the RAID card (e.g., one for each backplane) to connect to it? Or...

Should we plug port 8/11 (see image) into the rear backplane? How does that work?

Also, what could we do with ports 9 and 12? What are they for, given that we've probably already used 7/8 and 10/11?


Remember that dual-porting does not work with SATA drives (they only have a single port), nor does it work with consumer RAID cards (you need software RAID and a volume manager to handle the multipathing). If you ever consider going to SAS or may want HA in the future, I think it's a worthwhile option to have even if it's not used right away, as it's usually a small part of the chassis price. In most cases, adding it later may require a complete chassis swap.

Multipathing is a means of having redundant connections to a particular drive. Enterprise drives (FC, SAS, etc.) have two data ports. This is normally set up with separate physical expanders, cables, and HBAs, and in some cases the paths can go to different computers (though for that you'd normally use SAS or FC switches). The idea is to make the data path itself redundant, so if you lose, say, an HBA or a cable, you still have your drives/data available. Generally this is not something a home user or small business would need initially.
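On Linux, for example, this is the job of dm-multipath; a minimal sketch of /etc/multipath.conf (the option choices are illustrative, not a recommendation for any particular hardware):

defaults {
    user_friendly_names yes   # present each drive as a single mpathN device
    find_multipaths     yes   # only claim devices that actually have more than one path
}

With multipathd running, 'multipath -ll' then lists each logical drive along with the state of both of its physical paths.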

As for card support this is what I got from SM:

"I got a compatible list from SM and find that AOC-USAS2LP-H8iR works with the EL2 backplanes in the JBOD chassis. However, it's an UIO interface so it will not work in the PCIe slots on your motherboard. The AOC-USAS2LP-H8iR does utilize a LSI 2108 chipset. I find MegaRAID SAS 9280-4i4e and 3Ware 9750-4i4e both using the same LSI 2108 chipset so they have better chances to be compatible with the EL2 backplanes."


Thanks for clarifying about multipathing and the upgrade options, Steve.


About that compatibility info from SM: the LSI MegaRAID SAS 9285-8e uses the LSI 2208 chipset, right?

What do you think? What are the chances it will work?

My main doubts now are about the chassis, backplane/expander connection options, and similar things.

Hence my confusion in the questions above.

But I'll post some specific questions once I've studied it a bit more. I need to read more online manuals.


I don't have enough data points to say whether it will or won't work with the SM chassis (the 2208 chip). That is the same chip used in the 9200-8e's I have here with the SC837E's, and it does not work. I'd suggest you send an e-mail to SM for a complete compatibility list for that particular chassis; it's possible it has a different expander, so it may have different supportability.


I e-mailed them asking about compatibility between the SC837E26-RJBOD1 and the MegaRAID SAS 9285-8e.

Another point I'm concerned about is the motherboard. The SM X8DAH+-F is enhanced extended ATX (13.68" x 13" / 34.7cm x 33cm), and it seems it won't fit in many chassis. I liked this Chenbro RM31408 to use as the 'head', but it seems the SM motherboard won't fit in it.

Is there a way to use that MB with this chassis?

I have no idea, Steve. And Chenbro hasn't answered a single e-mail so far, which makes me a bit worried about their support.

A second chassis option for the 'head' is the SM SC835TQ-R920B.

This one I know will fit, but I'm not sure about its compatibility with that LSI RAID card series.

I guess we have to wait for SM's answer on that.


Yeah, that's a large MB. Why are you looking at that one as opposed to the X8DTH-6F, which is normal extended ATX size, not the custom enhanced extended ATX? You just get 12 DIMM sockets, so only 96GB of RAM (or 192GB if you can find the 16GB sticks, which are hard to come by). If you go that route you shouldn't have a problem with the fit.

As for Chenbro, I've had similar experiences with them (lack of response). I haven't used their chassis, though; I've mainly been using SM and AIC, at least recently (the past 3-5 years).


In your first post, you were right when you said it was more of a first-generation parts list than a specific solution. As you can see, we're now going to use one chassis for the head/server, separating it from the storage chassis, thanks to learning from you and reading more around. One of the reasons for choosing that motherboard was its x16 slots for a higher-end video card - anyway, I've been reading that there's no significant performance difference between running a video card in a x16 PCIe slot and a x8 one; the difference is about 3% to 14% depending on what you're doing. So, even though the X8DTH-6F has no x16 slots, it seems to be a very good board from what I'm reading in its manual and some online content. I'm wondering whether the integrated LSI SAS2008 will be a plus or a problem when we add 1 or 2 more RAID cards to work alongside it - as I have never used RAID before, I'm really not sure about these kinds of things, and here is why I mention it...

The 2-chassis design I mentioned will let us handle some of our demands a little better. The 8 bays are enough to keep backups of our five computers - if something goes wrong with a storage chassis or a RAID card, the data will hopefully still be available there. Also, there are some interesting enclosures that convert one 5.25" drive bay into six 2.5" hot-swap hard drive bays, like the iStarUSA BPU-126-SA, the Chieftec CTM-1062S, and the TT RC1600101A (some options are 6Gb/s SAS/SATA and some are not, and bear in mind that these 6-bay cages are mostly for 9.5mm-thick drives). Those 6 bays are more than enough for the OS in RAID1 on 2 SSDs, plus another 4 drives for other purposes like OS backups, cache, VMs, or whatever. The MB you mention fits perfectly here - 6x SATA II and 8x 6G SAS. Sounds like you predicted it! ;)

You already said there's no problem using 2 RAID cards on one MB. However, I'm not sure whether there would be a problem using 2 identical RAID cards plus the integrated MB RAID controller, which is a different chip, in addition to the onboard SATA ports.

Is there any problem?

Supermicro answered my email, and here is their answer:

Here’s a list of compatible cards with our expander backplane/chassis

http://www.supermicro.com/products/nfo/files/storage/SAS-CompList.pdf

Thanks

No specific answer.

I emailed them again trying to get more details and a specific answer to what we're looking for.


Email:

I have read that document* before, and I'm wondering if SM has done any tests with those new LSISAS2208 cards.

We're planning to use the SM SC835TQ-R920B as the 'head' server, and the SC837E26-RJBOD1 with those new LSI RAID cards**.

As we'll need to import the chassis, it would be a bit hard to test it at our end.

Thanks for the input,

* The above document.

** MegaRAID SAS 9285-8e and similar from this same series.

New answer from Supermicro:

According to our lab engineers, The backplane in SC837E26-RJBOD1 chassis should compatible with LSI 2208 card because it is same MegaRAID stack with LSI 2108 card. However our lab haven’t test it yet because it is new controller.

Technical Support

ES


Using identical RAID cards helps with interrupt 19 handling from the BIOS/bootstrapping and, if the firmware is written correctly, will cut down on option ROM space used, since cards with the same firmware/code can share the same space. That's the main reason I was using the LSI2208 chips (9200-8e): it's the same as the on-board chip. However, even with that, with effectively 7 chips (6x 9200-8e plus 1 on-board) I can't get into the LSI BIOS, as there isn't enough ROM space for it to run; I have to remove some cards to get in and configure them, or do it from the OS utilities. This doesn't stop the cards from working; the BIOS utilities are just too large to fit in the available option ROM space.

So yes, you should be able to use different RAID cards, though like anything, testing is needed.

As for the 2nd response above from their 'lab engineers': unless they have just recently updated firmware (which would upset me, as I just returned the SC837E26), it does /NOT/ work with the LSI2208. The last I heard, about a week ago, was that they were looking into it, but nothing since. I would get them to validate it in their lab for assurance, or be prepared to fight it out with them. I couldn't wait, as I needed the system up here in the next 2-3 weeks, so I had to go with a known working config.


Oh Steve, these undertakings are a comedy too. Changes here and there, something that didn't work (hopefully) starts working, etc.; we hardly ever know. I e-mailed them again about those new LSI cards and their backplanes, asking for some tests if possible. I hope they can do something. As for Chenbro, I've already sent them 3 e-mails for different purposes with no answer so far - that's bad for a company...

For the SM X8DTH-6F and an 8-bay 3U chassis like the ones I mentioned, with a 5.25" drive cage for another 6x/4x 2.5" drives, I thought of the following:

LSI SAS2008 controller (8 ports):

a. 2x SSDs for boot, in RAID1

b. 2x SSDs in RAID0 for cache, VMs, and/or whatever else needs disk performance

c. 4x 3.5" HDDs in RAID1 for critical non-shared/local data and server backups

SATA2 (ICH10R, 6 ports):

d. 4x 3.5" HDDs

e. 2x for other uses, like an optical drive or eSATA with card readers/port replicators

Any advice on a better configuration or something that could work better? Or is it okay as it is?


I would suggest /not/ using RAID0 for any type of business function, really, as at minimum your availability is MTTF divided by the number of drives. If it's purely 'scratch' (i.e., not cache or similar, but literally stuff whose loss would have zero impact on the rest of the system if the drive, or access to it, fails), possibly, but even then I would strongly caution against it. Striping/RAID0 is like playing chicken; you are /never/ 'lucky' all the time.
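To put a number on the MTTF/#drives point (a simplified sketch assuming independent failures and a nominal 1,000,000-hour drive MTTF, which real drives rarely live up to):

# In RAID0 any single drive failure loses the whole set,
# so the array's mean time to data loss scales as drive_mttf / n.
drive_mttf_hours = 1_000_000.0  # assumed nominal figure
for n in (1, 2, 4):
    array_mttf_years = drive_mttf_hours / n / 8760  # 8760 hours per year
    print(f"RAID0 across {n} drive(s): ~{array_mttf_years:.0f} years mean time to data loss")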

For your setup, I would probably save some $$ and do:

boot - 2 enterprise SAS drives, either 10K rpm or 15K rpm depending on how much swap you may need (though I would probably put the $$ into more RAM if possible)

VM guests - SSDs in RAID-1. Make sure you turn off swap in your guests (use the host's memory).

backup - 4-disk RAID5 or 6 depending on workload and drive size/type (with a BER of 1 in 10^16 and drives under, say, 500GB, you could probably get away with RAID-5; see the rough numbers below). With only 4 drives you are really limited here if you need a lot of IOPS, so I'm assuming it's going to be as you stated: batch data transfers, backups, et al.
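That BER qualifier can be made concrete (a sketch; it assumes the quoted 1-in-10^16 unrecoverable-read rate applies independently to every bit, which is a simplification):

import math

# Chance of hitting an unrecoverable read error (URE) while rebuilding a
# degraded 4-drive RAID5: all 3 surviving drives must be read end to end.
ber = 1e-16  # unrecoverable errors per bit read (enterprise-class rating)
for size_gb in (500, 2000):
    bits_read = 3 * size_gb * 1e9 * 8
    # 1 - (1 - ber)**bits_read, computed in a numerically stable way
    p_ure = -math.expm1(bits_read * math.log1p(-ber))
    print(f"3x {size_gb} GB read back: ~{p_ure:.2%} chance of a URE during rebuild")

With small drives the odds stay well under 1%, which is why RAID5 is defensible there; as drives grow, RAID6's second parity buys that margin back.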

If you're going with a 5.25" full-height (or dual half-height) enclosure, you can find them with built-in SAS expanders that handle 8x 2.5" drives; that would let you use the two on-board SFF-8087s, depending on your chassis setup.

Otherwise the SATA2 ports would be fine. SATA signaling is not bi-directional (a drive can't send and receive to the controller at the same time), which is why SATA drives carry larger caches, to 'hide' this from the user's point of view. It shouldn't be a problem with the spindle counts you have here, though.

Is this for another task machine (encoding or something)? Since I didn't see the large data store in your list, I'm trying to figure out what the workload will look like for this node.


Steve, this is the 'head' for the storage/server system. I followed your advice about separating the 'eggs' for better flexibility and to avoid the nuisances a 9U chassis like that one could cause. So we'll use a 3U chassis as the 'head' and another 1 or 2 for the disks. I'm also following your advice and changing the MB to the SM X8DTH-6F. My fault for not mentioning it in my last post.

Now I'm not sure whether your advice above stays the same.

That's right, no RAID0. I was planning for the best here, not the worst.

The 8 drive bays in an SM SC835TQ-R920B, Chenbro RM31408, or similar design would be used for the server OS backups and our 5 computers' backups - nothing else for a while. Then we could buy an internal 5.25" enclosure for 4-6 2.5" drives to use for boot and something else. I'm not sure about the optimal config, however.

Would 2 15K SAS disks be better than 2 SSDs for boot/system in our case?

As there are no more than 5 VM guests, used only sporadically, would a single SSD be enough?

With some thousands of files, I'm wondering what the fastest way to find/access something is. A database on another SSD? I actually need a few tips here.

I'll try to find a similar chassis with an integrated expander. It would provide much more 'room', as the 8 built-in bays would be mostly idle - backups only - and maybe there's no need for a RAID setup on those 8 disks (I mentioned 4 drives in RAID plus another 4 drives on the SATA2 ports because of the motherboard's limitations. I'm trying to figure out an optimal setup for the 'head', considering we won't need more than 15TB to 30TB for computer backups).

What I do know is that a chassis design like those two suits our needs very well.

Considering 'all the things', I don't know exactly what to do for the best performance plus some level of reliability/availability.

With regard to the chassis we talked about before (posts 31 and 32), possible setups, and other subjects, let's see what SM answers this week.

I brought that up to get your tips on cabling setups for daisy-chaining, performance, 1 or 2 RAID cards, etc.

But to talk about that, we probably need to know which chassis we'll be using for the disks, no?


OK, I thought you would have more HBAs or RAID cards in the head unit, which is what raised the question.

To me, for server platforms/main OS/applications et al., I would stick with rotating rust. SSDs are better for caching, or where IOPS are important and you can justify the cost of replacements (database solutions where transaction time is paramount). Most solutions I've seen with SSDs are more for ego than for real-world use (BER rates, P/E cycles, internal data-integrity checks, et al. are all behind those of HDs). Unless you're mitigating this through external design or business needs, it falls into the 'bling' category in my opinion.

For the OS itself, since you have a SAS controller on-board as well as the add-in cards, I would probably go with two ST9146853SSs in a mirror for the OS and general apps.

For the VMs, it really comes down to what you're going to be doing in the guests. Most guests are rather light on I/O but chew up memory like there's no tomorrow. I would either use SSDs in a mirror here or another couple of ST9146853SSs or ST9300653SSs, depending on more information about the guests and what they're doing. I would never deploy an SSD or HD without some type of HA. Even with backups, the time it takes to do restores in a production environment is generally too costly compared to the rather small $$ for the higher availability.

The X8DTH-6F has two SFF-8087s on-board. That SC835TQ appears to use a version of their M28E1/2 system, which, if true, has an expander (either 1 or 2). However, I thought those backplanes were SAS/SATA 3Gbps only.

For the 'local backups', either something like the ST91000640SS or ST91000640NS (since they're only about $20 apart, I would probably go with the SAS versions just to keep everything SAS, but that's just me). In a RAID-5 that would give 3TB of backup space.

