superlgn

Member
  • Content Count

    118
  • Joined

  • Last visited

Community Reputation

0 Neutral

About superlgn

  • Rank
    Member
  1. superlgn

    LSI 9260-16i & WD Red Performance

    Are those drives supported by the controller? I didn't think Red drives had the power management or error recovery features needed by most (all?) hardware controllers. I don't see the WD30/40EFRX on their compatibility list. Sometimes a firmware update is needed to properly support certain newer drives, or you have to change phy settings, like dropping a drive down to 1.5Gbps, but I don't think these will work. The RE3/4 line of drives from WD is generally supported. Their 4TB RE4 model is the WD4000FYYZ and that one is on the list. I have a 6 disk Linux md raid with some old 250GB SATA hard drives and I'm able to get ~200MB/sec sequential reads and writes. With that many drives of that size I'd think you would be able to hit 400-600MB/sec on a sw raid. It should be better with hardware, but these drives may be throwing the controller off.
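    If you want to sanity-check what the drives themselves can do outside the raid layer, a quick sequential read test on a single drive (device name is just an example):

        # raw sequential read, bypassing the page cache
        dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct
        # or hdparm's quick buffered read timing
        hdparm -t /dev/sda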
  2. superlgn

    Found the GENUINE SATA to USB high quality adapter

    Nice. I just ordered one of those a/c power adapter -> 4-pin molex -> sata power and usb -> ide/sata adapters for work, but I only needed it for SATA, so that's a lot of extra junk... A single component adapter sure would be nice. It's a PITA to take drives out of caddies and throw them into an external trayless/other enclosure when you need to quickly overwrite some old raid disk format or whatever. Any idea if these are SMART capable?
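    For what it's worth, SMART over USB depends on the bridge chip, but smartctl can often get there with SAT passthrough. A hedged example (device name made up):

        # try ATA passthrough through the USB bridge
        smartctl -d sat -a /dev/sdb
        # if that errors out, ask smartctl what it detects the bridge as
        smartctl -d test /dev/sdb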
  3. The ASM documentation says you should select the Actions menu, Update controller images, browse for the first .ufi file (as580501.ufi), then select the controller(s) and click Apply. Is there an error coming back from that action? I don't think I've ever used ASM to upgrade, just the Linux arcconf utility and a bootable CD with afu.exe.
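     If ASM itself is erroring out, you could try it from the OS with arcconf instead. I'm going from memory on the flash syntax, so double check against arcconf's built-in help first:

        # show controller status, including current BIOS/firmware versions
        arcconf GETCONFIG 1 AD
        # flash the first image from the set (syntax from memory, verify first)
        arcconf ROMUPDATE 1 as580501.ufi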
  4. superlgn

    Best performance?

    I'm running 8*1TB WD1002FBYS on a 9650se-12ml in a RAID6, and the last time I ran bonnie++ on it I got 325MB/sec sequential write and 618MB/sec read. I'd expect you to see even better numbers with a 7 disk RAID5. 5MB/sec is about as bad as I can imagine...

    Could the controller be doing a background initialize while you're doing your testing? Even if it was and the rebuild rate was set to the fastest, I'd still expect much better numbers. If it hasn't started or finished initializing, you may want to (start it and) wait for that to complete before doing any additional testing.

    What stripe size are you using? I had severe problems with > 64KB on my array, the controller was frequently timing out under high load. I don't recall any real impact on performance though. What about StorSave? Balanced with a BBU is a good option for performance and data protection. For testing you may want to bump it up to Performance (see the commands below).

    If you haven't already, you should check the drive compatibility lists for this specific model number and any special configurations you may need. I think it's pretty rare, but some drives need NCQ disabled or the link speed forced to 1.5Gbps, stuff like that. If you aren't using the latest firmware (10.00.027 from the 9.5.5.1 code set) you should upgrade.

    If you're using an OS that can address > 2TB storage you could just make one large volume and skip the auto-carving stuff, maybe with a smaller boot volume if you wanted to put the OS on the same array. 3ware's user guide says "Note: Carving a unit into multiple volumes can have an impact on performance.", but I certainly wouldn't expect that alone to take you from 5MB/sec to something more reasonable. As far as separation of OS and data goes, I'd only expect big performance gains if the OS volume was under constant heavy use. I've never used ESXi, but I would think that once it's booted up there's probably not a whole lot going on there.
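    A quick way to check for a background init and the StorSave setting from the OS, if you have tw_cli installed (controller/unit numbers are examples):

        # unit status; look for INITIALIZING/VERIFYING and a % complete
        tw_cli /c0 show
        # unit details include the current StorSave policy
        tw_cli /c0/u0 show
        # bump it to Performance for testing
        tw_cli /c0/u0 set storsave=perform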
  5. superlgn

    Setting consumer SATA drives to RAID mode

    It's not just a matter of WD getting rid of the utility. I'm able to use smartctl -l scterc,70,70 ... to change the error recovery timeout on my older WD1001FALS drives, but this doesn't work on newer WD1001FALS drives. WD removed the ability to change this setting with a firmware update in late 2009, and I heard the same goes for other models like the WDXXEADS. Other manufacturers' desktop drives may support it, but you'll have to check that on a case by case basis. It could be something like WD where older drives support it and newer drives don't, maybe it looks like the setting ends up enabled but the drive firmware ignores it anyway, or you can't enable it at all.
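    For anyone wanting to check their own drives, the full smartctl exchange looks like this; drives that dropped support will just report the command as unsupported:

        # read the current SCT error recovery control values
        smartctl -l scterc /dev/sda
        # set read/write recovery timeouts to 7.0 seconds (units are 100ms)
        smartctl -l scterc,70,70 /dev/sda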
  6. I imagine 3ware/Adaptec/Areca/other all use different on-disk formats, so you probably won't be able to move your array to a controller from another vendor. You should be able to use the same model card or anything newer from the same vendor. Firmware can come into play though... 3ware introduced a faster rebuild/recovery feature with the 4.0 firmware, but I think it changed the on-disk format, which would make it incompatible with 3.0. If you're running a non-RAID6 array on your 9650 with a 3.0 firmware you may be able to get it up and running on a 9550. I'd always make sure to upgrade to the latest firmware before attaching the drives to a different controller.
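     Checking the running firmware before moving the drives is easy enough with tw_cli (controller number is an example):

        # report the firmware version on the controller
        tw_cli /c0 show firmware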
  7. http://kb.lsi.com/Download16575.aspx The release notes mention the 10.00.024 firmware fixes some type of XFS filesystem corruption issue (amongst other things). No other information or time frame. 10.00.021 (Feb 2011) was the last stable release to my knowledge and the readme in the zip didn't say anything about XFS issues. I don't recall seeing a .022-023, so I'm assuming .021 to .024.
  8. superlgn

    New RAID startup questions

    Most of the servers I manage at work are software (Linux md) RAID1. I've never done anything different with those as far as boot/other partitions go, since each drive can fully function individually. It's a different story when talking about a software RAID5 though; in that case I'd recommend a mirrored boot or even a smaller mirrored root (10-20GB or so). I'm not sure what OS you're using, but if you're not really comfortable with the RAID management in the OS, you could always see if you have some onboard/fakeraid options in the BIOS... It's easy to set up and works alright for multi-booting. It won't be quite as portable as a Linux md array though, since the new motherboard would need to support the existing on-disk format. Going from one Intel ICH controller to another shouldn't be an issue, but Intel to VIA, no clue. Personally, if the machine is a dedicated Linux box I'd always go md over fakeraid.
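    A minimal sketch of the mirrored boot partition under Linux md (device/partition names made up):

        # mirror a small boot partition across the first two disks
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        # record the array so it assembles at boot (path varies by distro)
        mdadm --detail --scan >> /etc/mdadm.conf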
  9. superlgn

    Drives for a RAID setup

    RAID5 array size is n-1, so if you have 6*1TB disks you'll end up with 5TB usable. RAID6 is n-2. You lose half with RAID10, but it's a good mix of performance and reliability (the math is spelled out at the end of this post). I'm not really sure if it's a good idea to use RAID10 with a larger number of drives though. I ran a 4 disk RAID10 on my 3ware 9650se at home a few years ago and it was extremely solid, but I needed the extra space and switched to RAID5 after 9 months.

    The more disks you have in a RAID5/6, the faster it should be. My 3ware 3 disk RAID5 (WD1001FALS) got 175MB/sec sequential writes while my 4 disk RAID5 got 245. I'm currently running an 8 disk RAID6 (WD1001FALS+WD1002FBYS) on a 9650se-12ml and it gets 320 write and 600 read. RAID5 performance should be better than RAID6 since there's less going on, but a decent controller should pretty much even that out. Just remember the more disks you have, the higher the possibility of multiple simultaneous failures.

    I mostly use 3ware controllers. There's a bunch of servers at work and my system at home. I'm comfortable with their cards and I love their management utilities, but I had some problems at home (see the big reset thread) that made me wish I'd tried something else instead. From what I've seen, 3ware, Adaptec, and Areca are all pretty much on par with one another. I've always heard that Areca has great performance, especially their write cache, but a coworker had quite a few problems with their stuff on Solaris, the drivers just weren't up there with the quality of their Linux and Windows counterparts. He loved their out of band management though. I can't ever remember hearing anything bad about Adaptec's controllers, except that their 5 series ran hot... They've always had extremely good drivers.

    I'm not familiar with the Samsung SpinPoint drives. Just make sure you get something on (or in the family of) the drives on 3ware's compatibility list. You don't want to find out you got the wrong drives after the fact.
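    To spell out the capacity math from the top of this post with 6*1TB disks:

        RAID5  : (6-1) x 1TB = 5TB usable
        RAID6  : (6-2) x 1TB = 4TB usable
        RAID10 : (6/2) x 1TB = 3TB usable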
  10. superlgn

    Raid Controller/Drive Recommendation

    I have no experience with either controller, but I have worked with older Adaptec cards and they've always been solid. When I was having problems with my 3ware 9650se I was looking hard at the Adaptec 31605 and everything I read was positive. The only thing that stopped me from following through on that was a firmware update that has so far resolved my reset issue.

    The drives you're looking at are desktop grade, which have power management and error recovery features not intended for hardware raid. Most raid controllers will kick a drive out of the array after being unable to communicate with it for 15-30 seconds. I've seen desktop drives disappear for minutes at a time when they encounter bad sector(s), whereas my WD1002FBYS drives will kick back after 7 seconds at most and allow the controller to scrub the bad sector. I know some older WD drives could enable TLER using smartctl/wdtler.exe, but they disabled that after a while. Half of my 8 disk raid6 is made up of WD1001FALS+TLER. Every once in a while a bad sector shows up on those and everything works as expected, but that's an exception. I didn't plan to go hardware raid when I started buying those, I just happened to get lucky. Regardless, I'm still planning to phase them out. Other manufacturers have their own names for this time limited error recovery functionality. Some of their desktop drives appear to allow you to enable the feature, but I've heard it doesn't always work. It could just be advertised by the firmware but not actually implemented.

    Frankly, if you're not willing to buy the right drives for these controllers I wouldn't bother. You'll almost certainly end up without support from the manufacturer and will likely experience drive dropouts and/or have issues initializing or rebuilding your array at some point. Maybe you'll get lucky and everything will work fine for years, but if you value your data I wouldn't leave it to chance. If you can't afford the right drives, then I suggest an HBA and software raid. I've seen blog posts detailing 16+ drive configurations with Linux md getting 350+MB/sec sequential writes, and I've got access to a 12 disk ZFS RAIDZ2 at work that manages the same.
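    If you do go the HBA + software route, the ZFS side is about as simple as it gets; a sketch assuming six whole disks on Solaris (pool name and device names made up):

        # double-parity pool (like RAID6) across six disks
        zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
        zpool status tank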
  11. superlgn

    best drive cage

    I have 2*iStarUSA BPU-350-SA-SILVER (5*3.5 drives in 3*5.25 bays) for my 3ware 8 disk raid6. I use the 2 extra ports for rotating backups (via onboard SB700/SB800) and hot swap those once a month. My gaming system has a BPU-340-SA-BLUE (4*3.5 in 3*5.25), which I previously put through heavy use on ICH9R Linux md and later a 3ware 4 disk raid5. The huge power buttons on the front of that one used to bother me, but now it's convenient since I have 2 separate mirrored installations of Windows and just power on the pair of drives I want for the next boot. My only complaint with the 350 is that they have no grooves on the side so I had to screw with my case to get them in. Other than that and the weak documentation they're solid, never had a problem. I'm pretty sure someone else manufactures the components and iStar just throws their name on... I remember looking at Newegg and finding a ton of enclosures from other companies that looked just like these. Maybe slightly different trays, but everything else looks the same. Seems like the BPU-350SATA-* are all rated for SATA3.
  12. superlgn

    Which file system should I go with?

    Debian, Ubuntu, and probably every other Linux distribution out there. RHEL/Fedora/CentOS don't include support on the installation disc, but you can install the kernel module and utilities afterwards with yum.
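    If it's XFS you're after, for example, on CentOS 5 the bits live outside the base repo; package names from memory, so verify before trusting this:

        # kernel module from the extras repo plus the userland tools
        yum install kmod-xfs xfsprogs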
  13. superlgn

    Adaptec 5805 - Raid-5 setup suggestions

    If you want to use a hardware raid controller you really need to stick to their recommendations on hard drives or the family of supported drives. Unexpected problems and lack of support could come back to bite you. 3ware wouldn't even talk to me the first time I called them, when I had issues with a server using Seagate AS drives on a 9650se.

    If you can't afford the right drives in the size you want, maybe you could scale back to used 1-1.5TB enterprise drives like the WD1002FBYS or similar. I have 6 of those, all purchased used from eBay for $65-75 after verifying they had a warranty. 4 are in my 8 disk raid6, 2 are cold spares / occasional backups. The other 4 drives are WD1001FALS, older models that could enable TLER. I didn't plan on hardware raid when I started buying the FALS drives, but once I started down that path and went hardware it ended up costing me more than if I had just bought the right drives to begin with. I eventually found a home for some of the other drives I stopped using or couldn't use.

    For less expensive new drives you could consider OEM (WD/Seagate drives made for HP, IBM, etc). There's usually a bunch of eBay sellers with quantity, but those drives won't have a warranty from the drive manufacturer and I have no idea if the company they were made for would support you. It's risky... Work picked up a batch of 20 Seagate ST31000340NS (1TB, same model, similar serial numbers, slightly different firmware) a while back for around $1250 I think, so far so good. Not a bad deal when you consider those drives sold for ~$150/ea. I'm unsure if there's anyone selling 2TB OEM enterprise.

    I have no idea what your storage needs are, or whether a 6TB array like mine would cut it. 1TB drives are common and fairly inexpensive. If I had to do it again and could properly plan, I'd go with an 8 disk raid6 again at home. Although maybe I'd take another crack at software... I ran a software (Linux md) 3 disk raid5 a few years back, performance wasn't so great and it was rough when rebuilding. 8 disks vs 3 should significantly improve performance, and maybe some additional tuning could smooth out the rough rebuilds (see the knobs below).

    If smaller drives or used/OEM aren't going to work, maybe you could pick up a simple SATA HBA and do a software raid instead. I mostly use hardware at work, but a coworker does a decent amount of larger servers / storage (8-16 disks) with Solaris and ZFS. A 6 disk RAID-Z (like RAID5) I have access to has decent sequential writes at ~350MB/sec, similar to my array at home.
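    The md rebuild knobs I'd start with, per the note above (md0 and the values are just examples):

        # raise the minimum per-disk rebuild speed (KB/sec)
        echo 50000 > /proc/sys/dev/raid/speed_limit_min
        # give raid5/6 a larger stripe cache for faster rebuilds/writes
        echo 8192 > /sys/block/md0/md/stripe_cache_size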
  14. superlgn

    RAID dropping drives on reboot

    That doesn't sound right to me... Staggered spinup is just meant to reduce the initial power burst from spinning up all the drives at boot by only starting X number at a time (from the 9.5.2 User Guide). Maybe a firmware bug with the drives, or between the drives and the controller? I don't have any Constellation drives, but I do manage a number of systems with 3ware controllers, all of which have staggered spinup enabled, and none have problems like this.
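    For reference, the staggered spinup settings are per-controller in tw_cli; syntax roughly from the 9.5.x CLI guide, so double check it:

        # spin up 2 drives at a time, 6 seconds apart
        tw_cli /c0 set spinup=2
        tw_cli /c0 set stagger=6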