chrispitude

Member
  • Content Count

    81
  • Joined

  • Last visited

Community Reputation

0 Neutral

About chrispitude

  • Rank
    Member

Contact Methods

  • AIM
    ChrispyPA
  • ICQ
    0

Profile Information

  • Location
    Saylorsburg, PA
  1. I ended up doing an Intel-based build with an Asus Z97-A motherboard. The operating system is Linux Mint 17.1. The 3ware 9650SE-4LPML works great in the middle x16 slot (PCIEX16_2). I originally tried the top slot (PCIEX16_1), but that caused the onboard gigabit Ethernet to become nonfunctional. The manual shows that the top x16 slot shares its IRQ with several other devices (a quick way to spot this kind of sharing from Linux is sketched below). I guess we are back to the good old days of IRQ conflicts! Unrelated to the 3ware, I also had issues with this board where reboots would hang or the board wouldn't let me into the BIOS. (This occurred even without the 3ware card installed.) These problems disappeared once I changed Boot > Fast Boot from Enabled (default) to Disabled.
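     A minimal sketch of how one might confirm this kind of IRQ sharing from a running Linux system (assuming only stock lspci and procfs; the slot-to-IRQ mapping itself is board-specific and comes from the manual):
       # devices sharing an interrupt show up together on one line here
       cat /proc/interrupts
       # and lspci reports which IRQ each card was actually routed to
       lspci -v | grep -iE '^[0-9a-f]{2}:|irq'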
  2. I have a trusty 3ware 9650SE-4LPML that has served me well for many years, and I have no desire to upgrade it. I would, however, like to upgrade the aging AMD socket AM2 motherboard and CPU that hosts it. The problem is, I read of many people going through painful trial and error to find motherboards that work with these controllers. The card will plug into the slot, but then it's not recognized, the machine crashes, the drives aren't recognized, etc. Apparently some motherboards expect only a graphics card to be plugged into their >x1 slots. Of course, many of these stories are now also several years old. So... what reasonably modern motherboard are you using with your 3ware controller? I'm especially interested in any AM3+ boards with onboard video that happen to work. Thanks! - Chris
  3. chrispitude

    Too many years of awful 3ware performance.

    Anything more on this front? - Chris
  4. chrispitude

    Class action lawsuit against Seagate

    You know, I was torn on whether to participate for the very same reason... but I suppose the case will be decided on its merits, not by the number of people who sign up. - Chris
  5. I'm surprised I haven't seen this mentioned here yet. I just received an email regarding a Seagate class-action lawsuit: http://www.harddrive-settlement.com/notice-email.htm It appears to be about advertised capacity. - Chris
  6. chrispitude

    Too many years of awful 3ware performance.

    Very nice research. I have a 3ware and get great sustained read/write numbers (with the right buffering tweaks, as you mention), but the machine has always felt sluggish under heavy random I/O. It'd be great if some driver improvements came out of all this. Hopefully 3ware is listening. Perhaps you should open a support case with them and share your findings. - Chris
  7. chrispitude

    SR Ads

    I was about to start a new topic, and I spotted this. The pop-ups are bad to the point that I just don't bother surfing the forums as often as I used to. It's terrible. I thought SR was above this. - Chris
  8. Ugh, I've been had... So I guess my only hope is to hear back from Asus about whether there's a way to swap the video and 3ware cards. Physically they fit in each other's slots, but the board doesn't POST. Thanks all for the help! - Chris
  9. I know what you mean. So that's the funny thing... The total number of lanes on the nForce4 chipset is limited to 20 (I think). If my x4 slot only gets 2 lanes, then 16+2+1+1 reaches 20. But then why would the motherboard specs advertise that the slot is capable of 1GB/s? The A8N5X manual is at http://dlsvr03.asus.com/pub/ASUS/mb/socket...e2138_a8n5x.pdf but there's nothing that seems to allow lane reallocation - no jumpers or BIOS options that I could see. However, I did notice the following text in the manual: Hmm, a slot that provides twice the bandwidth of an x1? That would be an x2, but an x2 is 0.5GB/s, not 1GB/s (quick arithmetic below). Maybe this is why it negotiates two lanes.
     So, I tried another experiment this morning. The back of the x4 slot is open to allow a longer card to be plugged in. I plugged my 3ware into the x16 slot (it doesn't fill the whole connector) and the low-end Radeon X300 into the x4 slot (hanging out the back). I was hoping the video card would negotiate down to 4 lanes, which is plenty to run a 2D Linux server desktop. However, the machine doesn't POST (beeeeeep, beep) with the video card in the x4 slot - even with the 3ware removed from the x16 slot. Bummer, that would have been the perfect setup.
     And then I got to thinking - do I really have much to gain? With a 4-drive RAID5 array, I'm only getting data from effectively three drives, which means each drive is pumping out roughly 70MB/s of reads and writes. I wouldn't think that a WD3200 RE2 has much more to give. But if the writes are that fast, I bet the reads have a little more in them... - Chris
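     A quick back-of-the-envelope sketch of the PCIe 1.x numbers behind that reasoning (assuming the usual 2.5Gb/s signaling per lane with 8b/10b encoding, which leaves roughly 250MB/s of usable bandwidth per lane in each direction):
       # approximate usable bandwidth per PCIe 1.x link width
       for lanes in 1 2 4; do
           echo "x${lanes}: ~$((250 * lanes)) MB/s"
       done
       # x1: ~250 MB/s   x2: ~500 MB/s   x4: ~1000 MB/s (the advertised 1GB/s)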
  10. Scratch that, I thought you had a bigger controller for some reason. So your same card is negotiating 4 lanes, and mine is negotiating 2. That tells me that I have some room for improvement... I've opened a support case with 3ware to get their thoughts. Thanks! - Chris
  11. I know what you mean, that's what I thought too. But two things throw me off a little:
     1) The numbers don't fall off at the smaller block sizes like they normally do.
     2) Read and write are the same speed. Normally - especially with a RAID5 array - if you are running with no bandwidth limitation, your write performance will be somewhere under your read performance.
     The above (especially #2) is what makes me wonder if I am being throttled somewhere. Your lspci results are interesting. The card is capable of x8 but has negotiated x4? I wonder if the controller somehow figures out how many lanes it wants based on its array configuration, to avoid wasting lanes. - Chris
  12. Hi all, Well, I've upgraded from a 3ware 9500S-4LP to a 9650SE-4LPML. The setup now consists of:
     Asus A8N5X motherboard (socket 939, has a PCIe x4 slot)
     Athlon 64 4000+ CPU
     2GB memory
     3ware 9650SE-4LPML controller (PCIe x4)
     four WD3200 RE drives in RAID5 configuration
     I added the following lines to /etc/rc.d/rc.local based on 3ware's recommendations:
       echo 512 > /sys/block/sda/queue/nr_requests
       blockdev --setra 16384 /dev/sda
       echo "deadline" > /sys/block/sda/queue/scheduler
     My test filesystem is an XFS partition which is sector-aligned to a 64k boundary. Here are the test results:
       Iozone: Performance Test of File I/O
       Version $Revision: 3.239 $
       Compiled for 64 bit mode.
       Build: linux
       Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins,
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss,
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Jean-Marc Zucconi,
                     Jeff Blomberg, Erik Habbinga, Kris Strecker, Walter Wong.
       Run began: Tue Aug 14 16:13:57 2007
       Auto Mode
       Using minimum file size of 8388608 kilobytes.
       Using maximum file size of 8388608 kilobytes.
       Command line used: iozone -a -n 8G -g 8G -i0 -i1
       Output is in Kbytes/sec
       Time Resolution = 0.000002 seconds.
       Processor cache size set to 1024 Kbytes.
       Processor cache line size set to 32 bytes.
       File stride size set to 17 * record size.
            KB  reclen   write  rewrite    read   reread
       8388608      64  192193   103294  193565   193768
       8388608     128  196954   101904  193439   194035
       8388608     256  198341    97858  193509   194039
       8388608     512  196343   100973  193721   194119
       8388608    1024  196568   102744  193266   194123
       8388608    2048  197253   101690  193493   194045
       8388608    4096  197329   101785  193463   194116
       8388608    8192  196293   120207  193451   194102
       8388608   16384  197536   136602  193776   193890
       iozone test complete.
     As you can see, performance is quite good - especially considering these are not exactly cutting-edge 320GB drives! The fact that all of the write/read/reread results are pegged around 193MB/s suggests that I am hitting a bandwidth limit somewhere. My first thought was the PCI Express slot - am I accidentally getting a 1-lane connection instead of a 4-lane connection? Each lane is 2.5Gbps.
     I did an "lspci -vv" and here is the 3ware entry:
       04:00.0 RAID bus controller: 3ware Inc 9650SE SATA-II RAID (rev 01)
               Subsystem: 3ware Inc 9650SE SATA-II RAID
               Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
               Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
               Latency: 0, Cache Line Size: 32 bytes
               Interrupt: pin A routed to IRQ 17
               Region 0: Memory at d4000000 (64-bit, prefetchable) [size=32M]
               Region 2: Memory at d7001000 (64-bit, non-prefetchable) [size=4K]
               Region 4: I/O ports at a000 [size=256]
               Region 5: Memory at d7000000 (32-bit, non-prefetchable) [size=4K]
               Expansion ROM at d6000000 [disabled] [size=128K]
               Capabilities: [40] Power Management version 2
                       Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                       Status: D0 PME-Enable- DSel=0 DScale=0 PME-
               Capabilities: [50] Message Signalled Interrupts: Mask- 64bit+ Queue=0/5 Enable-
                       Address: 0000000000000000  Data: 0000
               Capabilities: [70] Express Legacy Endpoint IRQ 0
                       Device: Supported: MaxPayload 512 bytes, PhantFunc 0, ExtTag-
                       Device: Latency L0s <128ns, L1 <2us
                       Device: AtnBtn- AtnInd- PwrInd-
                       Device: Errors: Correctable- Non-Fatal- Fatal- Unsupported-
                       Device: RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                       Device: MaxPayload 128 bytes, MaxReadReq 512 bytes
                       Link: Supported Speed 2.5Gb/s, Width x8, ASPM L0s L1, Port 0
                       Link: Latency L0s <512ns, L1 <64us
                       Link: ASPM Disabled RCB 128 bytes CommClk- ExtSynch-
                       Link: Speed 2.5Gb/s, Width x2
               Capabilities: [100] Advanced Error Reporting
     Near the end, notice it says the link speed is 2.5Gb/s (which is a single lane) but the width is x2. That really doesn't make much sense to me. Also strange is the line that suggests the supported speed is 2.5Gb/s. Does anyone have a PCI Express 3ware or other RAID controller running under Linux whose "lspci -vv" entry you can post? (A quick way to pull out just the relevant lines is sketched below.) I'm not sure why the rewrite numbers are lower - perhaps because read-modify-write cycles are needed? If so, that further reinforces that I am hitting a bandwidth limitation somewhere. Even so, boy, these are some nice numbers. This has been a great budget system upgrade: the motherboard was a $38 shipped refurb from Newegg, I won the controller on eBay for $271 shipped, and the CPU was on sale a few months ago for $70 (now cheaper). Thanks! - Chris
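     For anyone posting a comparison, here is a minimal sketch of pulling out just the PCIe link lines (assuming the controller sits at the 04:00.0 address shown above; adjust the -s argument to match your own lspci listing, and note that newer lspci versions label these lines LnkCap:/LnkSta: instead of Link:):
       # show only the link capability/status lines for the RAID controller
       lspci -vv -s 04:00.0 | grep -i 'Link'
       # the output should include both the supported width (x8 here) and the
       # width that was actually negotiated (x2 in the listing above)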
  13. Hi all, I did some tests to compare reiserfs and xfs today. reiserfs was a little slower in raw throughput on this array:
       Auto Mode
       Using minimum file size of 8388608 kilobytes.
       Using maximum file size of 8388608 kilobytes.
       Command line used: iozone -a -n 8G -g 8G -i0 -i1
       Output is in Kbytes/sec
       Time Resolution = 0.000001 seconds.
       Processor cache size set to 1024 Kbytes.
       Processor cache line size set to 32 bytes.
       File stride size set to 17 * record size.
            KB  reclen   write  rewrite    read   reread
       8388608      64   79457    83484  106412   106500
       8388608     128   71997    71669  103934   106837
       8388608     256   79702    78614  103258   104130
       8388608     512   79057    72089  101541   104087
       8388608    1024   79495    80230  103795   103103
       8388608    2048   77045    79761  103856   103687
       8388608    4096   77014    80712  103484   103705
       8388608    8192   73825    77929  102895   103539
       8388608   16384   79887    77918  103357   102690
     I did some more practical tests on a 9GB directory of files roughly 1-20MB each, first on XFS:
       [root@narf storage]# time cp -a dbase/ dbase2
       real    5m23.047s
       user    0m0.912s
       sys     0m49.886s
       [root@narf storage]# time find /storage -name "doesnotexist"
       real    0m2.308s
       user    0m0.014s
       sys     0m0.049s
       [root@narf storage]# time rm -rf /storage/*
       real    0m52.739s
       user    0m0.012s
       sys     0m1.613s
     and then on reiserfs:
       [root@narf storage]# time cp -a dbase/ dbase2
       real    4m48.495s
       user    0m0.873s
       sys     1m40.965s
       [root@narf storage]# time find /storage -name "doesnotexist"
       real    0m0.586s
       user    0m0.011s
       sys     0m0.028s
       [root@narf storage]# time rm -rf /storage/*
       real    0m18.889s
       user    0m0.008s
       sys     0m8.642s
     The first command duplicates the directory on the same filesystem, giving a good mix of simultaneous reads, writes, and directory manipulation. The second command tests search time through the directory structure, and the third command measures removal time. For reference, there were about 1800 files in the directory. (One cache-related caveat is noted below.) - Chris
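     One caveat worth keeping in mind with the timed tests above: with 2GB of RAM, the page cache can hold all of the directory metadata (and a fair chunk of the file data), so back-to-back runs of find and rm may be measuring cache as much as disk. A minimal sketch of leveling the playing field before each timed command, assuming root and a 2.6.16-or-newer kernel that exposes drop_caches:
       sync                               # push dirty pages out to the array first
       echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries, and inodes
       time find /storage -name "doesnotexist"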
  14. chrispitude

    Tips for creating a RAID-optimized XFS partition

    Hi all, After some research, I now have my answers. First, creating partitions on 64k stripe boundaries is pretty easy. Use 'parted' and configure it to use sectors as the default unit. The default unit is cylinders, and with the default 255 heads * 63 sectors = 1 cylinder geometry, it's nearly impossible to align to a 64k boundary using cylinders. Below I print the partition table in GiB, then in sectors:
      (parted) unit s
      (parted) unit GiB
      (parted) print
      Model: AMCC 9500S-4LP DISK (scsi)
      Disk /dev/sda: 894GiB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos
      Number  Start    End      Size     Type     File system  Flags
       1      0.00GiB  7.81GiB  7.81GiB  primary  ext3         boot
       2      7.81GiB  9.77GiB  1.95GiB  primary  linux-swap
       3      10.0GiB  30.0GiB  20.0GiB  primary
       4      30.0GiB  894GiB   864GiB   primary  reiserfs
      (parted) unit s
      (parted) print
      Model: AMCC 9500S-4LP DISK (scsi)
      Disk /dev/sda: 1874933759s
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos
      Number  Start      End          Size         Type     File system  Flags
       1      63s        16386299s    16386237s    primary  ext3         boot
       2      16386300s  20482874s    4096575s     primary  linux-swap
       3      20971520s  62914559s    41943040s    primary
       4      62914560s  1874933759s  1812019200s  primary  reiserfs
      (parted)
    For now, I am just concerned about partitions 3 and 4. I will repartition 1 and 2 the next time I reinstall the operating system. I want partition 3 to start 10GiB into the drive. Each sector is 512 bytes, so partition 3 should start precisely at sector 10GiB * (1024MiB/GiB) * (1024KiB/MiB) * (2 sectors/KiB) = 20971520. This is the start value for partition 3. Partition 3 is 20GiB in size, so I calculate the start of partition 4 the same way and end partition 3 immediately before it. I probably could have also done this by setting the units to GiB, MiB, or KiB, but I felt more comfortable with sectors for now. When I redo partitions 1 and 2, I'll try MiB next.
    For optimal XFS filesystem creation, I settled on the following commands (a small repeatable sketch of the whole recipe follows below):
      mkfs.xfs -f -L /vmware -d su=64k,sw=3 /dev/sda3
      mkfs.xfs -f -L /storage -d su=64k,sw=3 /dev/sda4
    All the RAID5 XFS examples I found indicated that the stripe width should be N-1 for RAID5 arrays with N drives. This makes intuitive sense to me, since the data stripe units wrap back to the starting drive every N-1 units. In the end I benchmarked xfs and reiserfs, and decided to go with reiserfs as an experiment. Unfortunately reiserfs has no switches to optimize it for a RAID array, but we'll see how it goes. It seems a little slower in absolute raw throughput, but faster in file manipulation (archiving, copying, etc.). - Chris
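     A minimal, repeatable sketch of the same alignment recipe (the gib_to_sectors helper is hypothetical, just to make the arithmetic explicit; the device name, offsets, and mkfs options are the ones from the post above):
       # hypothetical helper: convert a GiB offset into 512-byte sectors for parted
       gib_to_sectors() { echo $(( $1 * 1024 * 1024 * 2 )); }
       gib_to_sectors 10   # -> 20971520  (start of partition 3)
       gib_to_sectors 30   # -> 62914560  (start of partition 4)
       # inside parted: "unit s", then "mkpart primary <start>s <end>s",
       # ending each partition one sector before the next one starts
       # 64k stripe unit across a 4-drive RAID5 array => su=64k, sw=3 (N-1 data disks)
       mkfs.xfs -f -L /storage -d su=64k,sw=3 /dev/sda4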
  15. chrispitude

    Tips for creating a RAID-optimized XFS partition

    Hi Frank, I agree that RAID5 parity is interleaved across the physical drives. Logically, however, each full stripe on an N-drive RAID5 array holds only N-1 chunks of data (the remaining chunk is parity), which is why the stripe width works out to N-1. - Chris