drescherjm

Member

  • Content Count: 555
  • Community Reputation: 0 (Neutral)
  • Rank: Member
  • Website URL: http://www.geocities.com/drescherjm/
  1. I agree this sounds absolutely ridiculous. Even counting the price of the tape drive in the calculation, SSDs are way too expensive per gigabyte. I also thought that SSDs had issues with data retention when left unpowered for a couple of years.
  2. This is a backup-software / underpowered-hardware issue. The problem is poor (or poorly configured) backup software that does not buffer the data before it is sent to the tape drive, combined with hardware that cannot keep up with 120 MB/s streaming. Both are easily solvable. On the hardware side, a modern quad-core server with 4+ GB of RAM and a RAID 0 of SSDs or VelociRaptors goes a long way; just make sure that RAID is used only for the buffer and not at all for the source data. On the software side I use Bacula, which supports spooling the data before it goes to tape: backups spool to a spool area, and that area is despooled at tape-drive speed (a minimal config sketch follows below).
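
     A minimal sketch of the relevant Bacula directives, assuming the stock bacula-dir.conf / bacula-sd.conf layout; the resource names, spool path, and size are placeholders rather than values from my setup, and required directives such as Archive Device are omitted:

       # bacula-sd.conf -- Device resource for the tape drive (abbreviated)
       Device {
         Name = "LTO-Drive"                  # placeholder name
         Spool Directory = /spool/bacula     # put this on the fast RAID 0 buffer
         Maximum Spool Size = 200gb          # cap so the spool volume never fills
       }

       # bacula-dir.conf -- Job resource (abbreviated)
       Job {
         Name = "NightlyBackup"              # placeholder name
         Spool Data = yes                    # write to the spool area first
       }

     With this in place the job fills the spool at whatever rate the client can deliver, and the tape drive only ever sees full-speed sequential despool runs.
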
  3. I think this depends on the controller. Some controllers have 1 GB of battery-backed cache, so the delay can be longer. That said, RAID 5 is generally a bad choice for a database server: every small random write turns into a read of the old data and parity plus a write of the new data and parity, which is exactly the workload databases generate.
  4. What I meant in the last part was to apply selective NTFS folder compression to the lesser-used parts of the filesystem while keeping the rest uncompressed for performance reasons (see the sketch below)...
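
     For example, something along these lines compresses just one rarely-used tree and leaves the rest of the volume alone; the path is only an illustration:

       REM compress an archival folder and everything under it, continuing past errors
       compact /c /s:"D:\archive\old-projects" /i

       REM report the compression state of the same tree
       compact /s:"D:\archive\old-projects"
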
  5. Now with a 64 KB cluster size I got a 1 GB dd write in 16 seconds. Not bad (about the speed of one drive), but this eliminates the possibility of using NTFS compression, since NTFS only compresses volumes whose cluster size is 4 KB or smaller (a format sketch follows below).
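
     A quick sketch of formatting a volume with 64 KB clusters, assuming the array shows up as drive E: (the drive letter and label are placeholders):

       REM quick-format the array volume with NTFS and 64 KB clusters
       format E: /FS:NTFS /A:64K /V:RAIDVOL /Q

     Anything above a 4 KB cluster size also rules out the selective NTFS compression mentioned earlier.
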
  6. Looks like I missed the 64 KB cluster size requirement. Perhaps I can do one more test...
  7. With a 32 KB stripe size and a 3-drive RAID 5 I see a tremendous improvement in the 10 GB network transfer test, however it is still a little slower than a single disk. On top of that it is significantly slower in a 1 GB Windows dd zero-write test (15 seconds for a single disk vs. 90 seconds for the array). On a Linux system with the same hardware, using software RAID 5 across 5 disks, I get the following:

     # dd if=/dev/zero of=test.test bs=1M count=1000
     1000+0 records in
     1000+0 records out
     1048576000 bytes (1.0 GB) copied, 3.42419 s, 306 MB/s

     I think I have spent enough time on this one. I will go with single 750 GB drives instead of 3 x 500 GB in RAID 5 and use the extra drives in my Linux software RAID servers.
  8. Yes, it looks like Wiinter spent a long time on that. I only wish it had been for RAID 5 and not RAID 0.
  9. Recently I started a new topic on this subject, but thankfully Madwand pointed me to this one, so I will continue my discussion here. Here is what I posted in that topic.

     I have used Linux software RAID 5 and RAID 6 for years and the performance for me is quite acceptable. I currently have 15 TB of Linux software RAID at work using more than 50 SATA drives. With Linux software RAID I expect sequential writes to be at minimum faster than the speed of one hard drive. Since the price of drives has gone down to less than $100 for 500 GB and most motherboards have fakeraid5, I decided to try this with new computer builds.

     After playing around with this for a few days I am horribly disappointed with the write performance. One example: when transferring a 10 GB file over a gigabit network, a single drive finishes in less than 5 minutes, while a 3- or 4-drive software or fake RAID 5 takes more than 25 minutes. From my Linux experience the problem appears to be that the cache manager is forcing writes too frequently and the RAID subsystem is not caching stripes, so every stripe is being read before it is written back. On Linux you can lessen the impact of this by increasing the size of the stripe cache (a sketch follows below), but under XP I see no way to do that. I did enable LargeSystemCache, but that appears to help only for the first few seconds: the gigabit bandwidth holds steady above 50% for a few seconds, then settles down to 3 to 8%. During all of these tests the CPU usage on the dual-core machine with 4 GB of memory is less than 10%.

     And now the progress. Like others, I have had some difficulty aligning partitions in XP. Diskpart does not work for this under XP, so I had to use the PTEdit method, and I also used Vista. My latest test was with a 4-drive RAID 5, and the Vista 2048-sector alignment did not help with the nvraid problem (expected from this thread). So now I am going to boot from a single drive with XP on it and do my testing that way with a 3-drive RAID 5 config.
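
     For reference, this is the kind of Linux-side tuning I mean; a sketch assuming the array is /dev/md0 (the device name and the value are just examples):

       # current number of stripe cache entries for the md RAID 5/6 array
       cat /sys/block/md0/md/stripe_cache_size

       # raise it so whole stripes stay cached and parity can be updated
       # without re-reading old data and parity from disk
       echo 8192 > /sys/block/md0/md/stripe_cache_size

     Memory used is roughly page size x number of member disks x this value, so 8192 entries on a 5-disk array costs on the order of 160 MB.
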
  10. Poor software raid5 performance in XP

    WOW Thanks! Very good information. I think I will try the aligned partitioning on my desktop if I have time tomorrow.
  11. Poor software raid5 performance in XP

    Some more testing... nvraid RAID 10 is not faster than a single drive at writes in this 10 GB test, and according to hdtach it is not faster at reads either. It looks like I am going to give up on this: the 750 GB Seagate 7200.11 drives I bought for a Linux server will go into the 4 desktops as single drives, and the server will get 12 x 500 GB drives instead of the 8 x 750 GB I was planning...
  12. I have used Linux software RAID 5 and RAID 6 for years and the performance for me is quite acceptable. I currently have 15 TB of Linux software RAID at work using more than 50 SATA drives. With Linux software RAID I expect sequential writes to be at minimum faster than the speed of one hard drive. Since the price of drives has gone down to less than $100 for 500 GB and most motherboards have fakeraid5, I decided to try this with new computer builds.

      After playing around with this for a few days I am horribly disappointed with the write performance. One example: when transferring a 10 GB file over a gigabit network, a single drive finishes in less than 5 minutes, while a 3- or 4-drive software or fake RAID 5 takes more than 25 minutes. From my Linux experience the problem appears to be that the cache manager is forcing writes too frequently and the RAID subsystem is not caching stripes, so every stripe is being read before it is written back. On Linux you can lessen the impact of this by increasing the size of the stripe cache, but under XP I see no way to do that. I did enable LargeSystemCache, but that appears to help only for the first few seconds: the gigabit bandwidth holds steady above 50% for a few seconds, then settles down to 3 to 8%. During all of these tests the CPU usage on the dual-core machine with 4 GB of memory is less than 10%.
  13. 'Merging' Two Drives

    This is normally called disk spanning. I am not sure that most fakeraid (BIOS RAID) supports it. You do realize that this is not significantly better than RAID 0: if one drive dies it will be very difficult to recover the data on the good drive, and there is no performance gain from doing it. (A Linux sketch of spanning is below, for comparison.)
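
    Just to illustrate what spanning is, here is how it looks with Linux md (a sketch only; the device names are placeholders, and this is not something a Windows RAID BIOS gives you):

      # concatenate two drives into one larger volume -- no striping, no redundancy
      mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc
      mkfs.ext3 /dev/md0

    Data simply fills the first drive and then continues onto the second, so there is no speed benefit, and losing either member leaves a filesystem with a large hole in it.
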
  14. slow/weird 15k scsi performance

    Are you sure the drive's write cache is turned on? Performance will be crippled with the write cache off. (A quick way to check is sketched below.)
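
    A quick sketch for checking and enabling the write cache from Linux, assuming the drive is /dev/sda; under Windows the same setting is the "Enable write caching" checkbox on the disk's Policies tab in Device Manager:

      # SCSI/SAS: read the Write Cache Enable (WCE) bit from the caching mode page
      sdparm --get=WCE /dev/sda
      # ...and turn it on
      sdparm --set=WCE=1 /dev/sda

      # ATA/SATA equivalent
      hdparm -W1 /dev/sda
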
  15. Which PCI Card

    I have a couple dozen of the SYBA 4P cards in my department and they work great. They are SATA 1 only, but since a standard 32-bit/33 MHz PCI bus tops out at about 133 MB/s, less than SATA 1's 150 MB/s, I am not sure there would be much benefit in using SATA 2. http://www.newegg.com/Product/Product.aspx...N82E16815124006 http://www.newegg.com/Product/Product.aspx...N82E16815124020