Shane GWIT

  1. I can't remember, to be honest. For anything with parity involved it takes a day before I can get real-world results, since the controller does a build & verify first. I thought I tested RAID 10, but I can't find any CSV files, which means I either tested it and didn't transfer the results off the server before starting the next build, or forgot to test it. Either way, the server doesn't have to go into production for another week, so I'll put RAID 10 results up there tomorrow. I want to start testing stripe size as well, but I'd figure the larger the better given the density of the drives & platters.

     Oh, and the RAID 6 array died -- as in no arrays present. I was messing around with ext4 settings, but that should have nothing to do with the controller. Adaptec's official response (after 3 days) was to update the BIOS. I've updated it and nothing of the sort has happened since, but it is disconcerting to say the least.

     EDIT: Just as an FYI, I have nothing against RAID 1+0 -- I deployed it on my second production server, which currently has ~2 years of uptime on RHEL 5. I just feel like a 5805 is being wasted doing a simple mirror of stripes.
  2. Well, after weeks of synthetic tests and then opening and closing large TIFF files over the network, I am going to stick with RAID 50 with a stripe size of 1024 KB and XFS for the home partition (see the filesystem-alignment sketch after this list). My results: some things were CPU bound, R0 was included only as a theoretical max, and I can't find my R6 p=3 results; otherwise, enjoy: http://www.godswordintime.com/results.php
  3. I went with a stripe size of 1024 KB and have been running Bonnie++ tests over the last few days. I'm not sure it is the most reliable benchmark, but at least I can get results that have some relative meaning.
  4. I've been playing around with Bonnie++, and although it is a synthetic benchmark, I'm not happy with the results. Then again, perhaps I should have run multiple copies. Running this one test placed a very high load on the system (>10)... I can only assume the CPU is being utilized so much because this is a synthetic benchmark, since the 5805 has a 1.2 GHz dual-core processor with 512 MB of RAM on it. Write cache is enabled on the card. Going to run some iozone benchmarks before I nuke everything and start over (see the iozone sketch after this list). I've thought about running RAID 10, at the cost of another drive's worth of capacity, but doesn't that seem like a waste of $$$ in addition to a waste of resources (i.e. the card would barely be utilized)?

     Writing intelligently...done
     Rewriting...done
     Reading intelligently...done
     start 'em...done...done...done...
     Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                         -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
     Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
     twelve       68000M           111940  48 88708  26           314832  44 110.1   1
     twelve,68000M,,,111940,48,88708,26,,,314832,44,110.1,1,,,,,,,,,,,,,
  5. It took almost a day to build/verify, but I'm now running RAID 6 on 64-bit Debian 5. Upon installing Adaptec's software, it reported that all 6 drives had failed SMART with an error count of 75. I'm assuming that is something erroneous on either Samsung's or Adaptec's side. I'm currently running 3 instances of Bonnie++ from the Debian repository (see the parallel Bonnie++ sketch after this list). I plan on using Amanda to back everything up offsite; you can never be too careful. Thanks for the input; I went from the initial idea of RAID 50 to RAID 6.

     Unrelated: I think I'm going to nuke Debian, as most of my experience has been with Red Hat. I only went with it because a friend suggested it, and Fedora Core 12 doesn't recognize any of the drives / RAID cards. I know I can create a driver disk and use it, but that would involve a floppy, which isn't readily available nor does it appeal to me. I started using CentOS instead of RHEL recently and will most likely go that route.
  6. The access pattern will be both: sequential from those who work with Adobe products, and, from those who don't, most likely a bunch of small random hits. I'm going to try RAID 6 and see how the performance is with synthetic benchmarks. Are you worried about two simultaneous drive failures? What do you think about RAID 5EE? Also, if I increase the stripe size to 512 or 1024, would that help performance for those working with large files and hinder performance for everyone else?
  7. I already have the drives (6 Samsung 2 TB Spinpoints) and was looking for input on which RAID level to use. My understanding is that RAID 6 is slow but offers higher availability than the rest. For dishing up files across a network (2 bonded gigabit NICs), am I going to notice a performance difference? What level would you use?
  8. Hi, I was hoping someone might have experience with this card (the 5805) and various RAID configurations. The server hosting this hardware will primarily be used as a file server, and I have it set up with 2 Intel EXPI9400PT NICs to do bonding / link aggregation on a gigabit switch (see the bonding sketch after this list). My current plan is RAID 50 or RAID 6, each allowing for two drive failures (in the RAID 50 case, only if they land one per sub-array). My two goals are high availability and performance -- I want the I/O limitation to be the 2 Gbps network connection. Thanks, Shane
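
On the RAID 50 + XFS choice in post 2: a minimal sketch of aligning the filesystem to the array, assuming the controller's 1024 KB stripe size is the per-drive chunk and that the 6-drive RAID 50 is two 3-drive RAID 5 legs (4 data-bearing spindles total); /dev/sdb1 is a placeholder for the logical drive the 5805 exports.

    # /dev/sdb1 is a placeholder -- substitute the logical drive the controller presents.
    # su = per-drive chunk size, sw = number of data-bearing spindles, so XFS
    # lays allocations out on full-stripe boundaries.
    mkfs.xfs -d su=1024k,sw=4 -L home /dev/sdb1

    # noatime avoids an inode write on every read of a large TIFF;
    # logbufs=8 gives the journal a bit more in-memory buffering.
    mount -o noatime,logbufs=8 /dev/sdb1 /home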
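
On the iozone runs mentioned in post 4: a minimal sketch using iozone's throughput mode so several threads hit the array at once. The 4 threads, 8 GB per-thread file size, 1 MB record size, and file paths are assumptions, chosen to roughly match the stripe size and stay well clear of the page cache.

    # -i 0 sequential write/rewrite, -i 1 sequential read/reread, -i 2 random read/write
    # -r record size, -s file size per thread, -t thread count, -F one scratch file per thread
    # (paths and sizes below are placeholders)
    iozone -i 0 -i 1 -i 2 -r 1024k -s 8g -t 4 \
        -F /home/ioz1 /home/ioz2 /home/ioz3 /home/ioz4 > /root/iozone-raid50.txt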
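
On running three Bonnie++ instances at once (post 5): a minimal sketch, assuming the Debian-packaged Bonnie++ 1.03, the 68000 MB file size from the result in post 4, and placeholder scratch directories /home/bench1..3.

    # -d scratch directory (placeholder names), -s file size in MB (keep it well above RAM),
    # -f skip the per-character tests, -m tag that ends up in the CSV line,
    # -u needed because Bonnie++ refuses to run as root otherwise.
    for i in 1 2 3; do
        mkdir -p /home/bench$i
        bonnie++ -d /home/bench$i -s 68000 -f -m twelve-$i -u root \
            > /root/bonnie-$i.out 2>&1 &
    done
    wait

    # Each .out file ends with a CSV row; bon_csv2html (ships with bonnie++)
    # turns the combined rows into an HTML table.
    tail -q -n 1 /root/bonnie-*.out | bon_csv2html > /root/bonnie.html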
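
On the two-NIC bond in post 8: a minimal sketch of 802.3ad link aggregation done the RHEL/CentOS way, since post 5 says the box will likely end up on CentOS. Interface names, the address, and the mode are assumptions, and the two switch ports have to be configured as an LACP group as well.

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=static
    IPADDR=192.168.1.10   # placeholder address
    NETMASK=255.255.255.0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

Worth noting: with 802.3ad hashing, any single client connection still tops out at 1 Gbps; the 2 Gbps target is only reachable in aggregate across multiple clients.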