Haversian

  1. Haversian

    beyond the 2TiB MBR partition limit

    As has been mentioned below, GPT will let you have >2TiB partitions, and EFI will let you boot them. Neither is a concern, however, if you don't plan to boot from the drive and instead just put a filesystem on the raw block device. I don't know whether you can do that in Windows, but linux is perfectly happy with it and requires no special trickery.
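
    For the curious, here's a minimal sketch of what that looks like under linux; the device name /dev/sdb is just an example:

        # No partition table at all; make the filesystem on the whole disk.
        # ext3 shown here, but xfs or anything else works the same way.
        mkfs.ext3 /dev/sdb

        # Then mount it like any other filesystem.
        mkdir -p /mnt/bigdisk
        mount /dev/sdb /mnt/bigdisk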
  2. Haversian

    Home 8-drive RAID 5/6 array

    3Ware's linux software (back on the IDE RAID cards they started with) was quite good. I haven't used their SATA cards or newer software, but I see no reason why they wouldn't be as good. Easy to use; official linux support; etc. That said, I'm still unconvinced of the need for a RAID HBA for this usage pattern. Also, what LVM features are you talking about? I was under the impression that LVM works with generic block devices, be they disks, hardware RAID sets, software RAID sets, or anything else that presents itself as a block device. Ultimately, I decided not to mess with LVM for my own setup, as its limitations would likely be too annoying, so it's certainly possible I missed something in my research.
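
    To illustrate the generic-block-device point, here's a hedged sketch of layering LVM on top of an md array; /dev/md0 and the volume names are made up for the example:

        # LVM doesn't care what the underlying block device is;
        # a software RAID set works just like a plain disk.
        pvcreate /dev/md0
        vgcreate storage /dev/md0
        lvcreate -L 500G -n media storage
        mkfs.ext3 /dev/storage/media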
  3. Haversian

    RAID 5 Home System Recommendations

    'With adequate buffering' was really the key point there. The RAID code isn't pushing the hardware hard enough to have any impact on network bandwidth, so it all comes down to software. If the software is issuing large reads, or reads that samba/nfs/whatever passes to the kernel in such a way that they can be combined into large reads, then it shouldn't be an issue. But you're right, I should have been clearer. In any event, the limitation won't be with the RAID setup, and making the RAID faster isn't going to help if the OP does in fact have trouble getting that speed across the network. I did briefly do some testing on my server with a gig-capable client (a laptop), and could readily saturate its disk's write bandwidth over the network with a standard Windows file copy. That may or may not correspond to any particular application's usage, however.

    FreeNAS no longer warns you not to use the software, so I'd guess that means it's come a long way since I initially looked at it. I don't see a comprehensive breakdown of its various features, particularly in the area of software RAID, so I can't say whether it's a good solution from that perspective. It will, however, without doubt be easier to use than a full linux / md / samba setup.
  4. Haversian

    RAID 5 Home System Recommendations

    That's 100MB/sec (actually I peaked close to 180MB/sec) read bandwidth from the RAID device. With adequate buffering, there's no reason you shouldn't be able to get that to the network. In my case, I've got a 100Mbit network, so obviously I'm not seeing anything like that on my desktop, but it does mean I can abuse the array with other things without worrying about video skipping on the desktop. I did a write-up about linux md speeds when I built my current fileserver. It's going to need some overhauling this summer, so I'll post more info (particularly about the newer features in the md code, such as OCE) then.
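
    If anyone wants to reproduce that kind of measurement, a quick dd run against the array device is enough; /dev/md0 is assumed:

        # Sequential read straight off the array, no filesystem involved.
        # Reads 8GiB and reports elapsed time when it finishes.
        dd if=/dev/md0 of=/dev/null bs=1M count=8192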
  5. Haversian

    Home 8-drive RAID 5/6 array

    I'm not convinced a home media server really merits hardware RAID. You're talking about trivially few IOPS, and your total read/write bandwidth is going to be network-limited anyway. There's really not much to recommend a hardware RAID card in that situation, particularly when the price of the card will buy you the parts for a dedicated software RAID system, keeping it separate from your gaming machine (say, in a closet where those dozen drives won't keep you up at night). Yes, staggered spin-up is an HBA feature, and it's available on non-RAID HBAs: at least the Promise SATAII150 SX8, and probably others. I'd do some checking for you, but Promise's website is apparently hosed.
  6. Haversian

    RAID 5 Home System Recommendations

    500s are about the same price/GB as 320s, and 1.5TB of R5'd storage is going to be cheaper with 500s than with 320s once you factor in the price of controller ports and bays. Also, by the time you upgrade, the 500s will be a good bit cheaper per GB than the 320s, as drives don't tend to go much below $75 each.

    Getting a new mobo isn't a bad idea. You can get a pretty good board with 8 SATA ports for under $100 these days. Factor in the resale value of your old board and you're pretty close to the cost of a pair of 4-port SATA cards. A free motherboard upgrade, in other words.

    I'd favor linux, as it's what I'm familiar with for software RAID. You should pretty easily see 100+ MB/sec read/write bandwidth. The distro isn't terribly important, though you'll want something fairly recent, as there have been some improvements (OCE, for example) to the raid5 code in the last couple of kernels. I'd shoot for a 2.6.20 kernel; you may have to upgrade the kernel yourself to get something that new, but the procedure's fairly simple these days. I don't know how the md driver in linux compares feature-for-feature with the Windows software RAID code, but I do know that the price of a Windows license will buy you another drive or two, so from that standpoint it's a pretty clear-cut decision for me.
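
    For what it's worth, creating the array itself is about a one-liner with mdadm; device names are just examples (four 500s give the 1.5TB of R5'd storage discussed above):

        # 4-drive RAID5 out of whole disks (3x500GB usable).
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
        mkfs.xfs /dev/md0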
  7. Haversian

    2TB Boot partition in Vista

    According to whom? It's generally accepted practice to put the OS on one partition and user data on one or more others. That way they're separate. You can move the user data without bothering the OS; you can upgrade the OS without bothering the user data; you can image just the OS partition if you want. In fact, I'm having a hard time coming up with a situation where one partition is easier to manage than two. It may be easier to use. But certainly not easier to manage.
  8. Haversian

    What % of HD capacity is pushing it?

    No. You're either misremembering, or got your info from someone ignorant of the issues involved. There are various reasons why you don't want to completely fill up a file system, but they're only tangentially related to hard drives. Certain operations on certain file systems get very slow (or fail completely) when there isn't much (or any) free space left. It's even possible, under certain circumstances, to end up with a filesystem so full that one of the operations that fails is 'delete'. That's an exciting one.

    That said, I've used quite a few different file systems (FAT12, FAT16, FAT32, NTFS, HPFS, HFS, HFS+, ext2, ext3, xfs, etc.) and done horrible things to many of them without any problems.

    The short answer is: don't worry about it. The longer answer is: try to leave 5-20% of the file system free, depending on what you want to do with it. 5% should be fine for most things; defraggers usually want closer to 20%. If you end up filling the drive, things might start to get slow, but nothing should break. And the drive certainly doesn't care one way or the other: its reliability is independent of how full it is.
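
    On ext2/ext3 the filesystem actually enforces a cushion for you: 5% of the blocks are reserved for root by default, and it's tunable. A sketch, with /dev/sdb1 standing in for your data partition:

        # See how full the filesystem is.
        df -h /dev/sdb1

        # ext2/ext3 reserve 5% for root by default; adjust it with tune2fs.
        tune2fs -m 10 /dev/sdb1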
  9. Haversian

    SCSI RAID benchmarking

    PCIe defines x1, x2, x4, x8, x16, and x32 links. I'm not sure how x32 works, since my understanding was that physical slots are defined only for x1, x4, and x16. An x4 or x16 slot may be wired for fewer lanes than its physical size. It's fairly common to have x4 slots wired only for x2 in chipsets that provide 20 PCIe lanes (16 for graphics, one for Ethernet or SATA or something else, and an x1 slot and an x4 slot wired for x1 and x2 respectively). If you have an x8 PCIe card, it needs to go in an x16 connector, and it will work even if that connector is only wired for x4 or even x1 operation.
  10. Haversian

    SCSI RAID benchmarking

    Pedant mode: ON. Actually, the PCI 2.2 spec does define a 66MHz speed. Not that anybody used it. Pedant mode: OFF.

    PCI-X goes faster (up to ~4GB/sec in PCI-X 2.0; again, not that anybody uses it), as was mentioned, though the fastest incarnations of it are really more of a port than a bus, despite the architecture.

    To the OP: if you wanted high STR, why did you spend all that money on SCSI? And if you wanted high IOPS, why are you benchmarking STR? I don't really understand the basis for your questions. For high STR (assuming no workstation-class motherboard), your best bet by far is to run software RAID on the chipset-attached SATA ports. It's pretty trivial to get 100MB/sec sustained writes and 200MB/sec sustained reads off 4-6 cheap commodity SATA drives, even from a RAID 5 array.
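
    If you do want to benchmark STR on a setup like that, the same kind of quick dd test works; note the write test clobbers whatever is on /dev/md0, so it's only for a scratch array:

        # Sequential write test. DESTROYS the contents of /dev/md0!
        dd if=/dev/zero of=/dev/md0 bs=1M count=4096

        # Sequential read test (harmless).
        dd if=/dev/md0 of=/dev/null bs=1M count=4096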
  11. Haversian

    RAID5 fileserver recommendations

    Probably. If the rule of thumb that real-world writes run at roughly bus speed / 3 holds for PCIe as well as PCI, you should see about 70MB/sec writes for 4 drives on an x1 controller (or possibly somewhat more, since PCI shares its bandwidth between reads and writes while PCIe has 250MB/sec each way per lane). I don't have much practical experience with how well various chipsets scale when you're using several PCIe lanes simultaneously, but if you use an x4 card, or two x1 cards, or an x1 card plus chipset-attached SATA ports, I'd be surprised if you can't beat at least 100MB/sec sequential writes. Good luck!

    Thanks! I hope I managed to make it clear.
  12. Haversian

    RAID5 fileserver recommendations

    If all your software RAID5 drives are behind the PCI bus, writing is going to be slow. I got about 35MB/sec on a 4-drive array with all drives on a PCI controller; if I moved half the drives to chipset-attached SATA ports, I could roughly double that. And it is the PCI bus that's the limiting factor: assume 100MB/sec real-world PCI bandwidth and zero computation time. With a 4-drive array you're reading 3 stripes, computing parity, and writing the 4th, for about 25MB/sec of throughput. I presume the stripe cache deserves credit for write performance in excess of that figure.

    My benchmarks (and other linux software raid5 info)
  13. Haversian

    RAID5 fileserver recommendations

    One of the Sun blogs has some numbers that you'll probably find informative. Basically, RAID-Z trades small random reads for everything else: small random read performance will not scale with the number of drives in a RAID-Z, and the trade-off is that it's faster for everything else. In particular, it writes faster than a mirror or RAID-10 while having RAID5-like read performance.
  14. Haversian

    RAID5 fileserver recommendations

    Forgot to mention: You'll need a copy of mdadm newer than 2.3.1 to use the RAID5 reshape code, in addition to the newer kernel.
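
    With a new enough kernel and mdadm, the reshape itself is short; /dev/md0 and /dev/sdh are examples:

        # Add the new disk as a spare, then reshape the array onto it.
        mdadm --add /dev/md0 /dev/sdh
        mdadm --grow /dev/md0 --raid-devices=5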
  15. Haversian

    RAID5 fileserver recommendations

    2.6.16 cleaned up a bunch of RAID code, but you're correct that 2.6.17 introduced the code and interfaces for growing RAID5 arrays. Later kernels include various notes about fixes to the RAID5 code, both specifically related to growability and not, so I wouldn't recommend running the bare minimum 2.6.17. 2.6.18 merges the RAID4/5/6 code, though it's not apparent from the changelog whether that means RAID6 arrays are growable too. RAID5->RAID6 migration is not currently possible, but it's a near-term feature and the code is actively moving in that direction. I tried to find the changelog entry where someone (Andrew Morton?) commented that enough successful reports of RAID5 growing had come in that he was comfortable calling it pretty stable, but so far I've failed. 2.6.19 includes yet more md fixes; 2.6.19.1 does not, but it's the latest stable kernel, so that's probably your best bet. The 2.6.20-rc[1-3] kernels don't include many changes to the md code.