The Belgain

Posts posted by The Belgain

  1. Well, I've added the 4th drive to my array now, and the results I'm seeing are below. This is now a RAID5 array of 4 320GB WD SATA drives.

    james@ubuntu-fileserver:~$ bonnie -d raid5array/bonnie/
    Writing with putc()...done
    Writing intelligently...done
    Reading with getc()...done
    Reading intelligently...done
    start 'em...done...done...done...
    Create files in sequential order...done.
    Stat files in sequential order...done.
    Delete files in sequential order...done.
    Create files in random order...done.
    Stat files in random order...done.
    Delete files in random order...done.
    Version  1.03	   ------Sequential Output------ --Sequential Input- --Random-
    				-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine		Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ubuntu-fileserve 1G 29971  83 58920  13 20682   6 26587  70 86801  17 259.6   0
    				------Sequential Create------ --------Random Create--------
    				-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
    		  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
    			 16  2815  97 +++++ +++ +++++ +++  2935  98 +++++ +++  9034  99

    I'll rerun with the -s (file size) parameter to see if the results differ very much.

  2. Hi there,

    I've currently got a 4-drive Linux software RAID5 setup, and am going to be adding an extra drive for capacity. I don't have any spare SATA ports, so I'll need to put the new drive in an external enclosure for a while. I was hoping to expand the array to a 5-drive RAID5 array with this drive (using EVMS).

    Is this going to work OK, or am I just asking for trouble? How bad is the performance likely to get? Will I hit issues with the drive dropping in and out of the array for example? Is firewire a better choice than USB2 (I'll probably get an enclosure with both, and my motherboard has both)?

  3. Hi there,

    I've just set up a Linux software RAID5 array, and was wondering whether the performance numbers I'm seeing are about right, or a bit low. I had two 320GB drives and two 160GB drives, so I've RAID0'ed the 160s and created a "3-drive" RAID5 from that and the 320s. The 320s are WDs, and the 160s are Maxtor DM9s (all SATA). They're running on the onboard ICH6 SATA controller, and the CPU is a 2.4GHz P4.

    Bonnie gives the following:

    james@ubuntu-fileserver:~$ bonnie -d /home/james/raid5array/bonnie
    Writing with putc()...done
    Writing intelligently...done
    Reading with getc()...done
    Reading intelligently...done
    start 'em...done...done...done...
    Create files in sequential order...done.
    Stat files in sequential order...done.
    Delete files in sequential order...done.
    Create files in random order...done.
    Stat files in random order...done.
    Delete files in random order...done.
    Version  1.03	   ------Sequential Output------ --Sequential Input- --Random-
    				-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine		Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    ubuntu-fileserve 1G 29959  82 54596  11 21789   7 28181  73 73167  15 281.6   1
    				------Sequential Create------ --------Random Create--------
    				-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
    		  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
    			 16  2687  96 +++++ +++ +++++ +++  2798  97 +++++ +++  8841  99

  4. Hi there,

    I'm looking into getting some hotswap drive racks for my SATA drives. Supermicro do a 5-in-3 SATA hotswap drive rack (link) which seems quite nice. The thing I'm slightly concerned about is cooling: there's just a single 90mm fan blowing over the 5 drives, and the drives are very tightly packed (5x 3.5" drives vertically in 3x 5.25" bays). Has anyone got any experience with these Supermicro racks, and do the drives stay cool enough?

    Also, regarding future SATA standards (e.g. SATA2 and whatever comes next), do the drive racks limit you to a particular one, or do they just do electrical passthrough and nothing cleverer?

    Does anyone know of any cheaper hotswap racks than this one (4-in-3 or 5-in-3)? The Supermicro seems to be almost $200, which is pretty steep...

  5. There are loads of distros that are pretty good for people without much Linux experience.

    I'm using Ubuntu at the moment, which is a very nice distro (getting more and more popular) and easy to start with. And it comes with EVMS pre-configured for handling RAID etc..

    Fedora's pretty nice too...

  6. Apologies if this has been asked before, but I've just got a quick query about choosing stripe-sizes for my RAID array. I'll be putting together a RAID5 array with the following drives:

    4x 160GB, 2x 320GB

    I'll be doing it by creating two 320GB RAID0 arrays from the 160s, and then creating a 4-drive RAID5 array from these two arrays plus the 320GB drives. Now intuitively it seems to me like it would be a good idea to choose the stripe-size for the RAID0 arrays to be half the stripe-size of the RAID5 array, so that a chunk of the RAID5 array splits up into exactly one chunk for each drive in the RAID0 array.

    Is this correct? Also, what stripe-size would be good for a fileserver which will just be used for storing files and streaming audio/video over a LAN (so STR is what I'd like to optimise)? I was thinking 32KB for the RAID0 arrays, and 64KB for the RAID5 array.
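The halving intuition above can be sanity-checked with a little arithmetic. Here's a quick sketch, assuming the 32KB/64KB sizes I mentioned and that the two-drive RAID0 does simple round-robin striping:

```python
CHUNK_R5 = 64 * 1024  # candidate RAID5 chunk size (bytes)
CHUNK_R0 = 32 * 1024  # candidate RAID0 chunk size: half the RAID5 chunk

def raid0_member(offset, chunk=CHUNK_R0, drives=2):
    """Map a byte offset within the RAID0 array to (drive index, offset on that drive)."""
    chunk_no, within = divmod(offset, chunk)
    return chunk_no % drives, (chunk_no // drives) * chunk + within

# A single 64KB RAID5 chunk written to the RAID0 array lands as exactly
# one 32KB chunk on each of the two 160GB drives:
touched = {raid0_member(off)[0] for off in range(0, CHUNK_R5, CHUNK_R0)}
print(sorted(touched))  # [0, 1]
```

So with those sizes, every RAID5 chunk keeps both RAID0 members equally busy, which is what I was hoping.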

  7. Hi there,

    I'm putting together a large software RAID5 array, and was just wondering what filesystem I should use? I'm using EVMS to manage it, and this is on an Ubuntu Breezy install; all the main filesystems are available on it (EXT3, Reiser, JFS, XFS, ...).

    I'm not really that bothered (within reason) about performance, as this is just going to be accessed over a network - mainly for streaming audio/video. What I do need, however, is:

    - stability: I want to be as sure as possible that there's no chance of data corruption occurring.

    - it's got to be possible to expand the filesystem while keeping the data on it intact (I don't need it to be able to do this while mounted though).

    - ideally it should be possible to have very large partitions on it (it'll be only 1TB for now, but could conceivably become > 2TB).

    At the moment I'm thinking that plain old EXT3 may be best (especially for stability). Am I right in thinking the maximum size of the array is ((2^32) * chunk_size)?
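As far as I can tell (worth double-checking), the ext3 limit is 2^32 filesystem blocks rather than RAID chunks, so the ceiling depends on the block size picked at mkfs time:

```python
# ext3 addresses blocks with 32-bit numbers, so the maximum filesystem
# size is 2^32 * block_size for the common mkfs block sizes.
for block_size in (1024, 2048, 4096):
    max_bytes = 2 ** 32 * block_size
    print(f"{block_size}-byte blocks -> {max_bytes / 2**40:.0f} TiB")
```

With the default 4KB blocks that works out to 16TiB, so > 2TB shouldn't be a problem for the filesystem itself (the 2TB limits I've read about are for block devices on older/32-bit kernels).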

  8. Hi there,

    I've recently come across this thread on AVSForums about a new (to me at least) way of implementing RAID4, with slightly different aims to standard RAID4/5.

    Like RAID4, an n-drive array has a single dedicated parity drive, which stores XOR parity data for the other n-1 drives and protects against loss of data when any single drive in the array fails. The difference is that, rather than data being striped across drives as in RAID4/5, each drive appears as a completely separate drive to the OS, and data is written to a single drive (plus the parity drive).

    So when data needs to be read from a drive, it is simply read exactly as if the drive were a standalone drive. When data is written to one of the drives in the array, the existing data is first read from the drive; wherever the new data differs from it, the corresponding parity bits on the parity drive are flipped, and otherwise the parity is left unchanged.

    The obvious disadvantage of not striping the data is that performance is significantly worse than a traditional RAID4/5 array; read performance is at most 2x the speed of a single drive, and write performance is slower than that of a single drive.

    The advantage, and the main reason for this method, is flexibility: drives don't have to be the same size (the only requirement is that the parity drive is at least as large as each other drive in the array). It's also very easy to expand the array: to add a drive, all that needs to be done is to add it to the array and then recalculate the data on the parity drive - and recalculating parity isn't even necessary if the added drive is zero-filled! Similarly it's very easy to remove drives, or to swap drives out for larger/smaller ones.
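The parity arithmetic behind both the write path and the free expansion is plain XOR; a toy sketch with one block per "drive" (values made up):

```python
# Three data "drives" holding one block each, plus a parity drive.
data = [0b1010, 0b0110, 0b1100]
parity = data[0] ^ data[1] ^ data[2]  # parity drive = XOR of all data drives

# Write: flip parity wherever old and new data differ (touches only 2 drives).
new = 0b0011
parity ^= data[1] ^ new
data[1] = new

# Any single failed drive can be rebuilt from the others plus parity:
assert parity ^ data[0] ^ data[2] == data[1]

# Expansion: adding a zeroed drive leaves parity valid, since x ^ 0 == x.
data.append(0b0000)
assert parity == data[0] ^ data[1] ^ data[2] ^ data[3]
```

That last line is why a pre-zeroed drive can join the array without any parity rebuild at all.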

    The reason I noticed this is that it's being offered by a company called Limetech. They are offering it as a slimmed-down Linux distro with a RAID driver they've written themselves, and a management interface. They've announced that they're going to release the code for the driver module under the GPL.

    It seems like it would be really nice to have this functionality in the Linux software RAID kernel module. Does anything like this exist already? If so, why has it never caught on (the video buffs with thousands of DVDs seem pretty keen on the idea of just being able to add redundant storage without any hassle, or having to worry about getting identical-sized drives)? Does anyone have any experience using a similar system? Is this new?

  9. > If only ONE drive fails you have a degraded array - in other words: You have simply MORE drives in the array so the possibility of a double-failure is much higher!

    This is true, but that's also the case of the other setup suggested...

    > And Linux-SoftRAID-expansion? There is one tool that seems to work, but this is very very experimental and time-consuming. You simply need a full backup I think.

    EVMS lets you do this and it's supposed to be pretty stable I gather... doing a full backup of 500GB just isn't really feasible unfortunately... I'd say about 70% of it is backed up to DVDRs though...

    > Uhm... and EVMS? Doesn't this monster need a bunch of nasty kernel-patches? You don't even need this GUI (is it much more?), mdadm can do the job too.

    Yeah, that's what I'm most worried about... Ubuntu comes with EVMS installed and configured by default (even though it isn't completely up to date in terms of revisions), so I may just give that a go...

  10. I was going to set up a smaller RAID5 array first (i.e. with fewer drives), then use EVMS to expand it to a 4-drive array.

    1. Create a degraded 3-drive RAID5 array using the two new 320GB drives.

    2. Copy all the data (480GB of it) to the new array (which has a capacity of 640GB).

    3. Split up the old RAID5 array into 2 320GB RAID0 stripes.

    4. Add one of these 320GB stripes to the degraded array and let it resync (making it a healthy 640GB array).

    5. Take the array offline, and do an expansion using EVMS (turning it into a 960GB array).

    Is there any reason this wouldn't work? I'm kind of relying on it....

    What would be the advantage of using the setup you've suggested? If I used LVM to make it appear as a single partition, wouldn't I have very slow performance when reading/writing to both arrays simultaneously (which would presumably sometimes happen, because of the way LVM splits them up under the covers)? Presumably though, with your setup I wouldn't have to back up data (I could just remove one 160GB partition at a time from the existing array, replacing it with a 160GB partition from a 320GB drive)?

    Also, redundancy is slightly improved if I go for the nested arrays (I can get away with 2 of the 160GB drives failing if they're both in the same RAID0 stripe...).
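For what it's worth, step 1 shouldn't even need EVMS: mdadm can build the array degraded from the start by naming "missing" in place of the absent member (device names below are made up for illustration):

```shell
# "missing" stands in for the absent third member, so /dev/md1 comes up
# degraded straight away, with the two new 320GB drives giving 640GB usable.
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 missing
```

Then step 4 would just be adding the RAID0 stripe as the missing member and letting it resync.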

  11. Hi there,

    I've currently got a 4-drive software RAID5 array on Linux. The drives are 160GB each, and I want to add capacity to the array. Rather than getting several extra 160GB drives, I'd like to get a couple of 320GB drives and have a "4-drive" RAID5 array with the following setup:

    Drive1: new 320GB drive

    Drive2: new 320GB drive

    Drive3: 2*160GB drives in RAID0

    Drive4: 2*160GB drives in RAID0

    Is there any reason why this would be a bad idea? The array is used only for storing audio/video and is on a dedicated networked PC. The OS is on a separate drive. I'm not really bothered about performance within reason - only sequential read/write transfer rate really matters. The PC is a 333MHz P2.

    Anything in particular I should know about setting it up? I was wondering if it would be a good idea to pick the stripe size for the RAID0 arrays to be half of the one for the RAID5 array? Or would that not help?


  12. > The Maxtor 320 GB drive is oddest capacity I've seen above 250 GB.


    Do Maxtor have a 320GB drive out? I can't see any with that capacity on their website... is it a DM10?

    I know Western Digital do a 320GB drive, and was hoping someone else would too, as I need some (I have 160s at the moment, and need to RAID0 them to go into an array with 320s...).

  13. I'm currently running a 4-disk Linux software RAID5 array with all 4 drives on a 4-port Highpoint Rocket 1540 SATA controller. I want to add more drives to the array, and am planning on just getting a second identical controller (they're cheap, work well, and have source available for their Linux drivers) to connect the extra drives to.

    Are there going to be any issues with this, or will it be OK? I'll be using EVMS with the MD plugin to control these, and am running the 2.4.26 kernel (on Slackware 10.0). I can't imagine there'd be any trouble with this, but just wanted to check first...


  14. That machine will be pretty overkill for running a RAID setup, to be honest. As the posts above have said, you'll be limited by the Gigabit interface and the bus speed.

    I'm running a software RAID5 server on a 333MHz P2 and it manages a throughput of about 20MB/s. So any P3 should really be ample...

    If you want really high performance, your best bet is probably to get a PCI-E system with multiple Gigabit connections and load-balance across them, I guess... a little OTT for a media server obviously :)

  15. That sounds pretty slow...

    I haven't tried Windows software RAID, but my fileserver is running Linux software RAID5 (4x160GB SATA Maxtor), and I get about 45MB/sec read, and 25MB/sec write on a 333MHz P2.

    Happily maxes out the 100Mbit LAN, even when accessing multiple files (downloads to the drive + video encoding from/to it + streaming video, for example). Overall, I'm very happy with it.

    The only issue is that I now want to add more drives to the array, and I don't think I can do that without having to build a new array from scratch (correct me if I'm wrong).