stevecs

Member
  • Content Count

    207
  • Joined

  • Last visited

Community Reputation

0 Neutral

About stevecs

  • Rank
    Member

Profile Information

  • Location
    US of A
  1. stevecs

    Intel SSD 520 Review Discussion

    Check to make sure you have the latest firmware on the Dell as well as the SSD. I don't have a Dell here, but I do have 10 of the Intel 520 240GB SSDs here in various systems (native in laptops, as well as RAIDed with LSI and Areca) with no issues on any. Albeit, this is a small sample size.
  2. Now that's cool. Never built anything that large; for the smaller projectile accelerator I put together years ago, I had some issues with the photocell triggers and spacing. Looks like a fun project. And yes, I /think/ that will do a pretty good job of erasing data, or welding it if all else fails.
  3. stevecs

    Intel SSD 520 Review Discussion

    Yes, I know from testing that you can sometimes extend that by up to an order of magnitude, but that's still 1) not the design spec, and 2) much lower than what you can get from server-class drives, which are at the same or lower price points. I just looked at one of my workstations here that I've just moved over from a 4-drive to an 8-drive RAID 10 (SAS), and that's doing about 500GB/day in writes total (so about 125GB/day per drive assuming equal loading; see the write-rate sketch after this list). The last system that I had running for about 5 years on SAS was up into the multi-PB range of writes per drive. Just looking at some of my 'light use' drives, they are averaging about 4GB/hour of I/O.
  4. stevecs

    Intel SSD 520 Review Discussion

    Nice write-up on the drive, and I'm glad Intel/you have included the UBER rates, which is something very lacking from other vendors. For me, though, the big killer here is the pathetically low 36TB write endurance, or 20GB/day (see the endurance sketch after this list). That just rules it out completely; heck, in the time I've written this message I've already written ~2GB per drive * 4 drives (or actually 8, as it's a RAID 1+0). For the price points they're looking at (say $500 for the 240GB version), that's more expensive than a 2.5" 15K rpm SAS 300GB drive, which has a latency of ~2ms. Unless these become significantly cheaper than the SAS drives and at least come up an order of magnitude or more in write endurance, I just don't see it. (And yes, I have Intel 320's and 510's in my laptops, though frankly I haven't really seen any 'big' improvement there, at least under Linux where you have the system already optimized and enough RAM for your applications, i.e. no swap.)
  5. stevecs

    Seagate Barracuda Green 2TB Review

    Yes, they are more expensive, but: 1) they are still sold/supported and have a 5-year warranty as opposed to 3-year; 2) they are designed for 24x7 (8760 hours/year) use as opposed to ~6 hours/day (2100 hours/year); 3) since they were spec'd for RAID systems, they should be more robust in firmware support to avoid dropping out or causing errors like I'm seeing with all the ST2000DL003's. Having been 'wooed' (and yes, I /SHOULD/ know better as I do this all the friggen time) by the lower price of the ST2000DL003's, they are not worth it IMHO. I've had to replace about 20 of them so far due to hanging the bus or other similar 'soft' errors, which is way more than the Hitachis or the better Seagate drives. (I've been burned too often by WD in the past to even try them anymore.) Likewise, though, your mileage may vary. Just so you're aware, and it may/may not affect your roll of the dice.
  6. stevecs

    Seagate Barracuda Green 2TB Review

    I don't have the QNAP, but I am running about 120 of these drives now in a RAID. Frankly, don't get them if you have a choice; instead you may want to look at the SV35 series, which are rated for 24x7 use unlike the 6-hours-a-day use for the 'green' ones. Main problems I've seen: long error recovery periods both during initial spin-up as well as in operation (no means to enable TLER); less compatibility with backplanes/controllers (LSI; Areca, which use an LSI expander chip; and the older Vitesse chipsets), basically errors such as "log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)" which are messages from the drives. I think they're on the ragged edge and not really designed for much more than a tertiary drive for /very/ light use.
  7. OK, I thought you would have had more HBAs or RAID cards for the head unit, which is what raised the question. To me, for server platforms/main OS/applications et al. I would stick with rotating rust. SSDs are better for caching, or where IOPS are important and you can justify the cost of replacements (database solutions where transaction time is paramount). Most solutions I've seen with SSDs are more for ego than for real-world use. (BER rates, P/E cycles, internal data integrity checks, et al. are all behind those of HDs. Unless you're mitigating this by external design or business needs, it falls into the 'bling' category in my opinion.)

     For the OS itself, and since you have a SAS controller on-board as well as add-in cards, I would probably go with two ST9146853SS's in a mirror for the OS and general apps. For the VMs, it really comes down to what you are going to be doing in the guests. Most guests are rather light on I/O but chew up memory like no tomorrow. I would either use SSDs in a mirror here or another couple of ST9146853SS's or ST9300653SS's, depending on more information as to the guests and what they're doing. I would never deploy an SSD or HD without some type of HA. Even with backups, the time generally taken to do restores in a production environment is too costly in comparison to the rather small $$ for the higher availability.

     The X8DTH-6F has two SFF-8087's on-board; that SC835TQ appears to use a version of their M28E1/2 system, which if true has an expander on it (either 1 or 2). However, I thought those backplanes were SAS/SATA 3Gbps only. For the 'local backups', either something like the ST91000640SS or ST91000640NS (since they're only about $20 different, I would probably go with the SAS versions just to keep everything SAS, but that's just me). In a RAID-5 that would give 3TB of backup space (see the sizing sketch after this list).
  8. I would suggest /not/ using RAID 0 for any type of business function, really, as you have MTTF/#drives minimally for availability (see the sketch after this list). If you have it as purely 'scratch' (i.e. not cache or similar, but literally stuff whose loss will have zero impact on the rest of the system if the drive/access to the drive fails), possibly, but even then I would strongly caution against it. Striping/RAID 0 is like playing chicken; you are /never/ 'lucky' all the time. For your setup, I would probably save some $$ and do:

     boot - 2 enterprise SAS drives, either 10K rpm or 15K rpm depending on how much swap you may need (though I would probably put the $$ into more RAM if possible)

     VM guests - SSDs in RAID-1 mode. Make sure you turn off swap in your guests (use your host's memory).

     backup - 4-disk RAID 5 or 6 depending on usage workload and drive size/type (if the BER is 1:10^16 and the drives are, say, < 500GB, you could probably get away with RAID-5); with only 4 drives you are really limited here if you need a lot of IOPS, so I'm assuming it's going to be as you stated, batch transfer data/backups et al.

     If you're going with a 5.25" full-height (or dual half-height) enclosure, you can find them with built-in SAS expanders that can handle 8 2.5" drives; that would let you use the two SFF-8087's on-board, depending on your chassis setup. Otherwise the SATA2 ports would be fine. SATA is not bi-directional signaling (it can't send and receive at the same time to the controller), which is why larger caches are put on SATA drives to 'hide' this from the user's point of view. Though it shouldn't be a problem with the spindle counts you have here. Is this for another task machine (encoding or something)? Since I didn't see the large data store in your list, I'm trying to figure out what your workload will look like for this node.
  9. Using identical RAID cards helps with interrupt 19 handling from the BIOS/bootstrapping, and, if written correctly, will cut down on the option ROM space used, as cards with the same firmware/code could share the same space. That's the main reason why I was using the LSI 2008 chips (9200-8e), as it's the same as the on-board chip; however, even with that, with effectively 7 chips (6 9200-8e's + 1 on-board) I can't get into the LSI BIOS as it doesn't have enough ROM space to run. I have to remove some cards to get in to configure them, or do it from the OS utilities. This doesn't stop the cards from working; just the BIOS utilities are too large to fit in operational memory. So yes, you should be able to use different RAID cards, though like anything, testing is needed.

     As for the 2nd response above from their 'lab engineers': unless they have just recently updated firmware (which would upset me as I just returned the SC837E26), it does /NOT/ work with the LSI 2008. The last I heard, about a week ago, was that they were looking into it, but nothing from that point onward. I would get them to validate it in their lab for assurance, or be prepared to fight it out with them. I couldn't wait, as I needed the system up here in the next 2-3 weeks, so I had to go with a known working config.
  10. Yeah, that's a large MB. Why are you looking at that one as opposed to the X8DTH-6F, which is a normal extended ATX size, not the custom enhanced extended ATX? You just have 12 DIMM sockets, so only 96GB of RAM (or 192GB if you can find the 16GB sticks, which are hard to come by). If you do that, you shouldn't have a problem with the fit. As for Chenbro, I've had similar experiences with them (lack of response); I haven't used their chassis though, mainly using SM and AIC, at least recently (past 3-5 years).
  11. I don't have enough data points to say whether it will or won't work with the SM chassis (the 2008 chip). That is the same chip that is used in the 9200-8e's that I have here with the SC837E's, and it does not work. I would suggest you send an e-mail to SM for a complete compatibility list for that particular chassis; it's possible it has a different expander, so it may have other supportability.
  12. Remember that dual-porting does not work with SATA drives (they only have a single port), nor does it work with consumer RAID cards (you need to use software RAID and a volume manager to handle the multi-pathing). If you ever consider going to SAS or may want HA in the future, I think it's a worthwhile option to have even if not used right away, as it's usually a small cost in the price of the chassis. In most cases, adding it later may require a complete chassis swap.

     Multi-pathing is a means to have redundant connections to a particular drive. With enterprise drives (FC, SAS, etc.) you have two data ports on the drive. This is normally set up so that you have separate physical expanders, cables, and HBAs, and in some cases you can have them going to different computers (but for that you'd normally use SAS or FC switches). The idea is to make the data path itself redundant, so if you lose, say, an HBA or a cable you will still have your drives/data available. Generally this is not something a home user or small business would need initially.

     As for card support, this is what I got from SM: "I got a compatible list from SM and find that AOC-USAS2LP-H8iR works with the EL2 backplanes in the JBOD chassis. However, it's an UIO interface so it will not work in the PCIe slots on your motherboard. The AOC-USAS2LP-H8iR does utilize a LSI 2108 chipset. I find MegaRAID SAS 9280-4i4e and 3Ware 9750-4i4e both using the same LSI 2108 chipset so they have better chances to be compatible with the EL2 backplanes."
  13. On-line UPSs have the benefit of removing transfer delays, but that's generally not that important in general server environments (more for medical, or cases where your load doesn't have enough capacitance to handle a couple of missed cycles, which most server PSUs have more than enough to handle). On the down-side, you go through batteries faster due to their constant use: with normal UPSs you want to replace them every 3-5 years; with on-line units going direct to battery (as opposed to supercapacitors plus batteries), probably every 1-3 years depending on use.

     Yes, the XJ-SA24-448R-B's and similar are a bad design. STK was using that design in their SANs about 5-8 years ago and there was no end of problems with them (mainly drives falling offline, mating problems, et al.). To get around the issue of pulling multiple drives, STK had a dual-head design, so the front drive was controlled by one RAID system and the back drive by another storage processor. You would have to offline both A/B drives and wait until they were synced to a hot spare before you pulled the blade. Slow, and yes, if you had another problem at the same time you were kind of screwed (they were only doing RAID 3/5/[1+0] back then). The later ones you picked, 48 drives vertically mounted in a 4U, are a copy of the Sun Thumper systems, which are good for high density, but air-flow is an issue due to the amount of restriction in the cases; plus, to replace a dead drive you have to shut down/pull the entire unit out of the rack to get to them. To mitigate that, you need a decent number of hot spares in there so you can do replacements in standard outage or downtime windows. I normally do a 1:10 - 1:15 ratio depending on the quality of drives (consumer or enterprise); see the hot-spare sketch after this list.

     Another item is how those chassis are wired; the ones you linked do not show internal wiring schematics at all. You want to avoid situations where, like I mentioned before, a bad drive could take down an expander and all other drives attached to it. This is why I opted toward multiple chassis et al. It comes down to your HA requirements and what you can live with in cost. Just FYI, SM is supposedly still working on the issue I had with the SC837E* chassis and the LSI HBAs, but no solution yet. I've since gone with plan B here and just purchased some more of the AIC chassis. Not as good a density solution, but I couldn't wait for SM (considering that they use the LSI 2008 chip on their motherboards, this implies that they didn't do much QA testing).
  14. Correct, the SUA3000RM2U is not 'on-line'; however, it does have a high sensitivity rating and an internal buck-boost transformer to even out high/low voltages (i.e. 82V-144V). And yes, I would use a UPS with buck/boost, especially in a noisy environment, as it will help prolong your equipment's life. Clean power is IMHO very important.

     I would find it hard to see you pulling 1800VA on both PDUs with a 25U cabinet and the number of drives you're looking at, unless you're doing blade chassis, especially with staggered spin-up of drives. Remember that 'major loads' is not really the problem: your APC unit itself, unless you're going to 220V lines, is on a 20A feed, which is ~2000W, but the APC itself is only rated for 2100W or 2700W depending on the model you're looking at (see the power-budget sketch after this list). I would suggest that having separate circuits (two PDUs) would be better. That AP7801 is only rated at 16A at 100/120V, though it IS metered, which is kind of nice (it would save me having to use my AC clamp to get data); it just takes up a U which I would rather use for servers.

     The case(s) that I mentioned before all have SAS expander backplane options: http://www.aicipc.com/ProductParts.aspx?ref=RSC-3EG2 (SKU: RSC-3EG2-80R-SA2S-0[A-C]) for single-expander versions. The XJ line is designed for drives only; the 3EG2 is for MBs as well. (The Supermicros too: SC837's, 938's, et al.) There are quite a few, but chassis like these are designed more for servers or business/enterprise farms where you need to string multiples together to hit performance/storage goals. Basically, with an expander backplane (like the AIC ones I have here) you attach the drives to the backplane (hot swap) and the backplane has two SFF-8087's (one in, one out) so you can daisy-chain chassis together. I normally use the Supermicro CBL-0167L's in them so I can convert this to SFF-8088's and deal with each unit either directly tied or daisy-chained depending on needs.
  15. First, you should be looking at rackmount units, assuming that's what you're getting (a rack), and the newer versions with the more efficient inverters: http://www.apc.com/products/family/index.cfm?id=165#anchor1 The SUA3000RM2U is about right at 3000VA/2700W: http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=SUA3000RM2U&total_watts=1400 It requires you to hook up an L5-30 outlet, which shouldn't be much of a problem (you can usually find them at hardware stores for ~$30 US or less if you're doing it yourself), plus a 30A breaker in your panel. Or get your electrician to do it.

     As for the PDUs, I use the AP9567 (http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP9567), two of them, which you can hang on the back cable tray of your rack. Each goes to a different 20A outlet on the back of your UPS (each outlet is on a different breaker); then you plug each server/chassis into BOTH PDUs, so that if a PSU dies/shorts and blows one circuit, the other should still retain power. Each PDU can handle 1800VA/120V/15A, which should be fine for your intended load (see the power-budget sketch after this list); they don't have the fancy LED meter on them, but those are hard to get for a short rack in zero-U format.

     As for those SAS expander cards, those are just plain expanders, no logic (RAID) at all. The Chenbro one at least mentions that it is using the LSISASII36 chip; the HP one doesn't specify (it may, as they do source a lot of their stuff from LSI). Were you looking at those as opposed to having an integrated expander backplane in the chassis for the drives, or something else?
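A minimal sketch of the per-drive write arithmetic from post 3 above (an 8-drive RAID 10 seeing ~500GB/day of writes). The drive count and daily figure are the ones quoted in the post; treating the 500GB/day as host-side writes that get mirrored, with equal loading across drives, is an assumption.

```python
# Per-drive write math from post 3 (8-drive RAID 10, ~500 GB/day of host writes).
# Assumption: the 500 GB/day is host-side data that gets mirrored, spread evenly.

def raid10_writes_per_drive(host_gb_per_day: float, drives: int) -> float:
    """RAID 10 mirrors every write, so physical writes are ~2x host writes,
    shared evenly across the drives when striping is balanced."""
    physical_gb_per_day = host_gb_per_day * 2   # mirroring doubles the data written
    return physical_gb_per_day / drives         # per-drive share under equal loading

print(raid10_writes_per_drive(500, 8))          # -> 125.0 GB/day per drive, as in the post
```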
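And the endurance math from post 4: how long a drive rated for 36TB of writes lasts at a constant write rate. The 36TB / 20GB-per-day figures come from the review discussion; the 125GB/day comparison point is the per-drive workload from post 3, and the TB-to-GB conversion is the decimal one.

```python
# Endurance lifetime from post 4: rated total writes divided by daily write rate.

def days_until_worn(endurance_tb: float, writes_gb_per_day: float) -> float:
    """Days until the rated write endurance is exhausted (decimal units, 1 TB = 1000 GB)."""
    return endurance_tb * 1000 / writes_gb_per_day

print(days_until_worn(36, 20) / 365)    # ~4.9 years at the rated 20 GB/day
print(days_until_worn(36, 125) / 365)   # ~0.8 years at the 125 GB/day workload from post 3
```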
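A sketch of the sizing and availability arithmetic behind posts 7 and 8. The 4 x 1TB backup set mirrors the example in post 7; the 1,000,000-hour drive MTTF below is a placeholder chosen only to show the shape of the RAID 0 calculation, not a vendor spec.

```python
# RAID 5 usable capacity (post 7) and the first-order RAID 0 availability point (post 8).

def raid5_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID 5 gives (N - 1) drives' worth of usable space; one drive's worth goes to parity."""
    return (drives - 1) * drive_tb

def raid0_array_mttf_hours(drive_mttf_hours: float, drives: int) -> float:
    """Any single failure loses the whole stripe set, so array MTTF is roughly
    the per-drive MTTF divided by the number of drives (MTTF/#drives, as in post 8)."""
    return drive_mttf_hours / drives

print(raid5_usable_tb(4, 1.0))                      # -> 3.0 TB of backup space, as in post 7
print(raid0_array_mttf_hours(1_000_000, 4) / 8760)  # ~28.5 years for a 4-drive stripe vs ~114 for one drive
```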
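The hot-spare rule of thumb from post 13, applied to the roughly 120-drive pool mentioned in post 6. The 1:10 - 1:15 ratios are the author's guideline rather than any standard.

```python
# Hot-spare counts at one spare per `ratio` data drives, rounded up.

import math

def hot_spares(drives: int, ratio: int) -> int:
    """Number of hot spares for a pool of `drives` at a 1:`ratio` spare ratio."""
    return math.ceil(drives / ratio)

for ratio in (10, 15):
    print(f"1:{ratio} -> {hot_spares(120, ratio)} spares for a 120-drive pool")
# -> 12 spares at 1:10, 8 spares at 1:15
```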
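Finally, the back-of-the-envelope power budget behind posts 14 and 15. The amp and volt ratings are the ones quoted there (20A UPS outlets, the 15A AP9567 PDUs, a 30A L5-30 feed, 120V nominal); the 80% continuous-load derating is the usual rule of thumb, and power factor is treated as ~1 so VA and W are interchangeable for this rough check.

```python
# Continuous wattage available from a circuit after the standard 80% derating.

def usable_watts(amps: float, volts: float = 120.0, derate: float = 0.8) -> float:
    """Continuous load a circuit can carry: amps * volts, derated to 80%."""
    return amps * volts * derate

print(usable_watts(20))   # ~1920 W per 20 A UPS outlet -- the "~2000 W" figure in post 14
print(usable_watts(15))   # ~1440 W continuous per AP9567 PDU (rated 1800 VA / 15 A)
print(usable_watts(30))   # ~2880 W from the L5-30 feed, above the SUA3000RM2U's own 2700 W cap
```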