Everything posted by compwizrd

  1. With an 8 gig test, and only 1 gig of RAM in the system, filesystem caching isn't going to affect it that much. As for seeks and such, I've run bonnie++ on the system in the past, but in this case the original poster was interested in what kind of transfer speeds he should be getting, so I haven't done anything but try to replicate his testing a bit.
  2. This is the 15x300 Maxtor MaxLine III, on a 3ware 9550SX-16LP, blah blah blah, in xfs on a small partition I blew away that used to be ext3. My first couple of lines got eaten by screen, but here's the results once that disk verify was done. Then again, iozone runs a LOT faster on this server:

     KB       reclen  write   rewrite  read    reread
     8388608  256     276449  174217   494496  485069
     8388608  512     281060  193846   490476  483674
     8388608  1024    290578  208586   485317  482701
     8388608  2048    300852  218812   496912  484365
     8388608  4096    290980  219433   493028  485935
     8388608  8192    300822  229528   493321  500307
     8388608  16384   301078  287514   479047  472087

     That's not bad: ~290 MB/s writes, ~490 MB/s reads. Raid5, 64k stripe. I went with a smaller stripe, as most of the files on this machine are tiny. Seeing how the other array did nearly half these values with a quarter the number of disks, I'm vaguely tempted to buy a 9650SE-16 and just swap over the cables.. thanks to the multilane stuff, it'd take me under 5 minutes including pulling the server off the rack... The motherboard has both PCI-X and PCIe slots. It's nearly 1200 dollars plus tax/shipping for the card though, so I won't.
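The rough averages quoted for that run can be double-checked in a few lines of shell; a sketch, with the rows copied verbatim from the results above and the KB-to-MB conversion assuming 1 MB = 1000 KB to match the rough figures quoted:

```shell
# Average the write (column 3) and read (column 5) throughput of the
# iozone rows above, converting KB/s to MB/s (1 MB = 1000 KB here).
rows='8388608 256 276449 174217 494496 485069
8388608 512 281060 193846 490476 483674
8388608 1024 290578 208586 485317 482701
8388608 2048 300852 218812 496912 484365
8388608 4096 290980 219433 493028 485935
8388608 8192 300822 229528 493321 500307
8388608 16384 301078 287514 479047 472087'

avg=$(printf '%s\n' "$rows" | awk '{ w += $3; r += $5; n++ }
    END { printf "%.0f %.0f", w/n/1000, r/n/1000 }')
echo "avg write/read MB/s: $avg"   # prints: avg write/read MB/s: 292 490
```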
  3. Raid5. I'd have to have someone threaten me with a big stick to wipe it and retest with raid0.. actually, I wonder if the online capacity expansion can migrate from raid5 to raid0.
  4. This 9650SE sits in an ASUS P5B-VM; the card is in the top PCIe slot (where the video card usually goes, but this board has onboard video), and there's an Adaptec Ultra2 SCSI controller in the bottom PCI slot. The single 320 is plugged into SATA1, which is on the Intel chipset. We're playing in Linux; I don't know too much about trying to tune the 3ware for Windows, as I only have about 3 systems with 3ware cards under Windows, and they're all simple raid1 setups.
  5. Controller is a 9650SE-4LP.. set to the Performance storsave profile, write cache enabled, drive queuing on, 256k stripe. With Debian sarge defaults on the 4x320g of 7200.10's, 2.6.19 kernel:

     Auto Mode
     Using minimum file size of 8388608 kilobytes.
     Using maximum file size of 8388608 kilobytes.
     Command line used: iozone -a -n 8G -g 8G -i0 -i1
     Output is in Kbytes/sec
     Time Resolution = 0.000001 seconds.
     Processor cache size set to 1024 Kbytes.
     Processor cache line size set to 32 bytes.
     File stride size set to 17 * record size.

     KB       reclen  write  rewrite  read    reread
     8388608  64      63231  60573    179082  190730
     8388608  128     62894  60177    116616  116540
     8388608  256     62793  58856    117168  117430
     8388608  512     62568  59402    117142  117394
     8388608  1024    63323  59639    116936  117062
     8388608  2048    63483  60119    116884  116893
     8388608  4096    64085  58660    116205  116690
     8388608  8192    63033  59051    116622  116362
     8388608  16384   62741  58626    116258  116385

     Tuned the same way as done by the poster:

     KB       reclen  write  rewrite  read    reread
     8388608  64      67952  61644    212851  210888
     8388608  128     67236  60962    211712  213822
     8388608  256     67628  55876    212716  214484
     8388608  512     55506  49025    210517  214418
     8388608  1024    54256  52388    212619  214887
     8388608  2048    59047  57576    210209  212900
     8388608  4096    65450  57474    212991  214016
     8388608  8192    67055  58469    211813  209999
     8388608  16384   67887  59986    212567  212879

     Wow, these are pretty crap writes. Reads improved, but writes are still sad. Now I'm wondering how much of it is because I can't use the outer zone of the drive.. probably not much... and how much is due to ext3 instead of xfs. From doing some quick reading, it appears that ext3 is hurting it, and badly. Reads look like they'd be similar under another filesystem; writes would be greatly improved. So clearly I need to wipe the partition and do this again under xfs!
     KB       reclen  write   rewrite  read    reread
     8388608  64      190132  138781   218580  220111
     8388608  128     203286  163634   219109  219257
     8388608  256     189791  95759    219159  220119
     8388608  512     213187  124145   220456  220812
     8388608  1024    197064  107131   219564  219470
     8388608  2048    192325  149968   219438  219847
     8388608  4096    158970  179762   220016  219781
     8388608  8192    182769  117287   219654  218654
     8388608  16384   187898  201902   219246  219729

     Ding. We have a winner. Ext3 is just horrible on writes in iozone. Looks like we're now roughly hitting the actual maximum speed of the drives (40-60 MB/s times 4 drives). Now I can test the single 320 gig drive. Ext3, single drive (and damn, iozone takes a long time to run, half an hour per pass on the raid, longer on a single drive!). I cut these off at 256 because I'm not that patient and the numbers don't vary or matter much anyways now.

     KB       reclen  write  rewrite  read   reread
     8388608  64      58656  48569    49312  48587
     8388608  128     58837  48964    50222  48136
     8388608  256     58826  49049    50703  47658

     XFS, single drive:

     KB       reclen  write  rewrite  read   reread
     8388608  64      67487  67596    64590  64455
     8388608  128     67474  676077   64592  64530
     8388608  256     67389  67496    64572  64501

     Even on a single drive, there's an improvement from using xfs, on this particular set of benchmarks. Oh, and on the 15x300 gig Maxtor MaxLine III's, on a 9550SX-16ML, dual Opteron 265's, ext3 again:

     KB       reclen  write   rewrite  read    reread
     8388608  64      210702  143891   384365  393959
     8388608  128     209642  147947   376545  383936
     8388608  256     217966  149650   389958  290538
     8388608  512     195560  111733   253402  257626
     8388608  1024    183919  109234   226248  234845

     This system decided to do a verify in the middle of the day... so I aborted the iozone test, I'll have to try again later. Still, 240 MB/s reads and 190 MB/s writes on ext3, in the middle of a verify operation, is impressive. So in conclusion, it appears the 9650 would greatly help you, or even the 9590 (which is PCIe as well).. the 9650 supports raid6, the 9590 doesn't if I remember right... probably double the write speeds, and over double the read speeds is my guess.. depends on how much faster the WD is than the Seagate, in the same system.
     They do seem roughly similar, with a small edge to the WD. Now I know how Eugene feels, benchmarking all those drives.. and this was just a simple benchmark.....
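The "drive speed times number of spindles" reasoning above can be sketched as a quick idealized model; `raid5_estimate` is an illustrative helper, not anything from the thread, and the ~60 MB/s per-drive figure is taken loosely from the single-drive numbers above:

```shell
# Idealized streaming throughput for an N-drive raid5 array: sequential reads
# can stream from all N spindles, while large full-stripe writes carry data
# on N-1 spindles plus parity on the Nth, so write bandwidth scales with N-1.
raid5_estimate() {  # usage: raid5_estimate <drives> <MB_per_sec_per_drive>
    echo "reads $(( $1 * $2 )) MB/s, writes $(( ($1 - 1) * $2 )) MB/s"
}

raid5_estimate 4 60   # the 4-drive array at ~60 MB/s per drive
```

For the 4x 7200.10 array this predicts reads 240 / writes 180 MB/s, reasonably close to the measured xfs numbers (~219 reads, ~190 writes); controller, stripe, and bus overheads keep real arrays somewhat below the read ceiling, so treat it as an upper bound.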
  6. It's running now, and you're lucky, because I have a single WD320RE in there along with the 7200.10's, so we'll be able to see how different the 7200.10 is compared to the WD320RE's. The setup on the C2D is a 6600 (2.4GHz), 1 gig of RAM, the 9650SE-4LP, and 4 7200.10's. Iozone is version 3.221; the server runs Debian sarge, amd64 version. I do have to test about 7.75 gig into the filesystem, as / and various other partitions aren't big enough to run an 8GB test. My filesystems are ext3. I haven't tuned the machine at all, it's still being tested before being put into service, so I'll check that and play with the settings, same as you have. So far the 64 reclen numbers aren't great: writes in the 60's, and reads in the 180-190 range. :/ I'll report back when I have a full set of tests, and when I do some tuning. The default setup is 128 for both max_sectors_kb and nr_requests, and getra is set to 256.. so I'll be playing with those.
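For reference, the tunables mentioned live under sysfs on 2.6 kernels, and the read-ahead is per-device via blockdev; a sketch, with /dev/sda and the new values as placeholders only:

```shell
# Inspect the block-layer defaults mentioned above (per-device)
cat /sys/block/sda/queue/max_sectors_kb   # default 128 (KB per request)
cat /sys/block/sda/queue/nr_requests      # default 128 (queue depth)
blockdev --getra /dev/sda                 # read-ahead in 512-byte sectors, default 256

# Raise them for streaming-workload testing (example values only; needs root)
echo 512 > /sys/block/sda/queue/max_sectors_kb
echo 512 > /sys/block/sda/queue/nr_requests
blockdev --setra 16384 /dev/sda
```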
  7. You're going to be limited by the PCI bus, unless you can move that card into a 64-bit slot, which your desktop motherboard isn't going to have. I have a 9650SE-4LP sitting in a machine at the office, which uses PCIe, connected to a C2D 6600 and 4 320gig 7200.10's in raid5; I'll turn it back on and test it. I'm testing on two other machines right now, but I'm limited because my / doesn't have 8 gig free, so I'm having to test a few gig further down into the filesystem. Still, those results are pretty good, you have a decent PCI implementation on that board.
  8. I'll be setting up an 8 drive raid10 or raid50 sometime in the next couple weeks, I'll have to play with that.. Mine will be on a 3ware 9650 though, and a Xeon 5345(quad core)
  9. Backup advice required

    DLT-V4 might do what you want.. 160 gig native/"320" gig compressed on a tape that's about 35 bucks.. drives are around 800-900. Speed is decent, about 8 MB per second. They're SCSI or SATA, so you can probably reuse your existing SCSI controller. They're rated for much higher usage patterns than DDS as well. The tapes are physically significantly larger than DDS, but still manageable.
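Rough arithmetic on those numbers (a sketch; the 160 GB native capacity and ~8 MB/s speed are the figures quoted above):

```shell
# How long does a full native-capacity DLT-V4 tape take to fill at ~8 MB/s?
# 160 GB = 160,000 MB; divide by 8 MB/s, then by 3600 s/hour.
hours=$(awk 'BEGIN { printf "%.1f", 160 * 1000 / 8 / 3600 }')
echo "$hours hours per full tape"   # prints: 5.6 hours per full tape
```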
  10. That's the way it works. I have a 15 drive RAID5 array on a 9550SX-16 that I'd love to convert to a 15 drive RAID6 on a 9650SE-16, but I'd have to backup about 2.7 TB of data and then restore it all.. not going to happen.
  11. Does it use the RAM though? My old Asus P4B533 (845 chipset) "supports" ECC, but doesn't actually do anything with it.. same as with the Asus P5W DH that I put in a CAD workstation.
  12. Raid with USB components

    and lots of failures due to the low write count on flash devices...
  13. USB2/FW can't cope with Ultra/33 in most cases, let alone SATA 150.
  14. Why I Hate Modern RAM

    I'm having the same issues, mostly because my supplier wants to stock mostly OCZ RAM, and it's a good 50% cheaper. I've RMA'd many sticks for being DOA or failing memtest. I've seen PC5400 sticks fail and only want to run at PC4300 on an Asus P5B Deluxe, not exactly a cheap board. Our Asus P5LD2's are hit and miss with the OCZ EL stuff.. it's too bad the SPD data on the RAM doesn't specify what voltage you need, as some of this stuff needs 2.1V or higher just to work, and the default is 1.8V.
  15. Nice, my mailing list post got referenced (http://groups.google.de/group/linux.debian.user/msg/f12dec920523a629?hl=de&) I replied letting him know the newest firmware appears to fix it.
  16. Time to just bite the bullet and buy a bigger case.. I upgraded my Sonata to a Titan 550; the Sonata ran 4 drives at about 38 degrees, while the Titan 550 holds 6 in the bottom at about 29-30 degrees. And I can safely wedge in another 4-6 drives up top in the 5.25" bays. There are other similar cases that can hold that many drives without costing a fortune. The Titan 550 comes with a 550W TruePower EPS power supply as well.
  17. Why is my Maxtor doing this?

    Contact Maxtor about any possible firmware updates.. I had a 15x300gig MaxLine III array with VA111630 firmware that became stable when updated to VA111680... the drives were going to sleep randomly.
  18. HdTach of 10x320Gb Seagates

    Bonnie++, with a large enough data set to get away from the cache, plus watching vmstat while it works, shows it'll hit the 600 at times. It's 15 x 300 gig Maxtor MaxLine III's (7V300F0), on a 3ware 9550SX-16ML, raid5. Motherboard is a Tyan K8WE, with dual Opteron 265's and 4x256 MB RAM.
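That kind of run can be sketched like this (a sketch only; the mount point and user are placeholders, and the `-s` size should be well past the installed RAM so the page cache can't absorb the test):

```shell
# Run bonnie++ with a data set much larger than RAM (-s is in MB here;
# -u is required when running as root), and watch raw block-device
# throughput in a second terminal with vmstat.
bonnie++ -d /mnt/array/test -s 8192 -u nobody

# elsewhere: the 'bi'/'bo' columns are blocks in/out per second
vmstat 1
```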
  19. HdTach of 10x320Gb Seagates

    I get around 600 MB/s reads and 200 MB/s writes with a 15x300 array, raid5.
  20. Flashing Hard drive

    From upgrading 17 of the Maxtor MaxLine III 7V300F0's, I've gotten quite experienced with this.

    1. Theoretically, yes. However, hard drive manufacturers are generally careful not to provide updates to end users that will kill/wipe their drives. If you acquired this file from a non-manufacturer source, who knows what has happened to it.

    2. Connect the SATA drive directly to the motherboard. The firmware updater cannot handle hardware RAID controllers, and in most cases software-raid controllers either. Sometimes the onboard Promise crap works, but I wouldn't trust it anyways.
  21. 08:51:36 up 13 days, 19:58. 4980 hours combined so far here.
  22. 15 x 300gig 7V300F0's here for 4 days without issues on VA111680, 3ware 9550SX-16ML. Hopefully this is the one that fixes it.
  23. compwizrd

    Maxtor in RAID = ?

    Firmware updates are needed for these Maxtors.
  24. Which firmware corrects this? VA111670 doesn't, does the 111680?
  25. Mine still drop randomly, we've had two go out this week. Haven't heard of any new firmware newer than VA111670 though.