drescherjm

Everything posted by drescherjm

  1. I agree this sounds absolutely ridiculous, even counting the price of the tape drive in the calculations. SSDs are way too expensive per gigabyte. Also, I thought SSDs had data-retention issues when left unpowered for two or so years.
  2. This is a backup-software / underpowered-hardware issue. The problem is caused by poor or poorly configured backup software that does not buffer the data before it is sent to the tape drive, combined with hardware that cannot keep up with 120 MB/s streaming. Both are easily solvable. A modern quad-core server with 4+ GB of RAM and a raid0 of SSDs or VelociRaptors ... can go a long way on the hardware side. Make sure the raid is used only for the buffer and not at all for the source data. For the software I use bacula, which supports buffering the data before it goes to the tapes: backups spool to a spool area, and that gets despooled at tape-drive speeds.
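For reference, data spooling in bacula is enabled per job and configured on the storage daemon's tape device. A minimal sketch — the job and device names, spool path, and size limit here are placeholder values, not the poster's actual configuration:

```
# bacula-dir.conf -- Job resource (name is an example)
Job {
  Name = "NightlyBackup"
  SpoolData = yes            # spool to disk first, then despool to tape
}

# bacula-sd.conf -- Device resource for the tape drive
Device {
  Name = "LTO-Drive"
  Spool Directory = /var/spool/bacula   # point this at the fast raid0 buffer
  Maximum Spool Size = 200G
}
```

With this in place the storage daemon writes each job to the spool directory at whatever rate the clients deliver data, then streams it to the drive at full tape speed.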
  3. I think this depends on the controller. Some controllers have 1GB of battery-backed cache, so the delay can be longer. With that said, raid5 is generally a bad choice for a database server.
  4. What I meant on the last part was to do selective NTFS folder compression on the lesser-used parts of the filesystem, while keeping the rest uncompressed for performance reasons...
  5. Now with a 64kb cluster size I got a 1 GB write with dd in 16 seconds. Not bad (about the speed of one drive), but this eliminates the possibility of using NTFS compression, which requires clusters of 4kb or smaller.
  6. Looks like I missed the 64kb cluster size requirement. Perhaps I can do 1 more test...
  7. With a 32kb stripe size and a 3-drive raid5 I see a tremendous improvement in the 10GB network transfer test, however it is still a little slower than a single disk. On top of that it is significantly slower in a 1GB windows dd zero-write test (15 sec vs 90 sec). On a linux system with the same hardware, with software raid5 using 5 disks, I get the following:

    # dd if=/dev/zero of=test.test bs=1M count=1000
    1000+0 records in
    1000+0 records out
    1048576000 bytes (1.0 GB) copied, 3.42419 s, 306 MB/s

    I think I have spent enough time on this one, and I will go with single 750GB drives over 3x 500GB in raid5 and use the extra drives in my linux software raid servers.
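The same kind of sequential zero-write test is easy to repeat on any system with dd. A small sketch — the file name and the reduced 100 MB size are arbitrary choices here; scale count up for a 1 GB run like the one above:

```shell
# Write 100 MB of zeros through the filesystem; dd reports elapsed time
# and throughput on stderr when it finishes.
dd if=/dev/zero of=test.test bs=1M count=100
# Confirm the file size (100 * 1024 * 1024 = 104857600 bytes), then clean up.
wc -c test.test
rm -f test.test
```

Dividing bytes written by the reported seconds gives the MB/s figure quoted in the results above.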
  8. Yes, it looks like Wiinter spent a long time on that. I only wish it were for raid5 and not raid0.
  9. Recently I started a new topic on this subject, but thankfully Madwand pointed me to this one, so I will continue my discussion here. Here is what I posted in that topic.

    I have used linux software raid5 and raid6 for years and the performance for me is quite acceptable. I have 15TB of linux software raid currently at work using > 50 SATA drives... With linux software raid I expect sequential writes to be at minimum faster than the speed of one hard drive. Since the price of drives has gone down to less than $100 for 500GB and most motherboards have fakeraid5, I decided to try this with new computer builds.

    After playing around with this for a few days I am horribly disappointed with the write performance. One example: when transferring a 10GB file over a gigabit network, a single drive finishes in less than 5 minutes, while a 3- or 4-drive software or fake raid5 takes > 25 minutes.

    From my linux experience the problem appears to be that the cache manager is forcing writes too frequently and the raid subsystem is not caching stripes, so every stripe is being read before it is written back. With linux you can lessen the impact of this by increasing the size of the stripe cache; under XP I see no way to do that. I did enable LargeSystemCache, but that appears to help only for the first few seconds, where the gbit bandwidth holds steady at >50% before settling down to 3 to 8%. During all of these tests the cpu usage on the dual-core machine with 4GB of memory is less than 10%.

    And now the progress. Like others, I have had some difficulty aligning partitions in XP. Diskpart does not work under XP, so I had to use the PTEdit method and I also used Vista. My latest test was with a 4-drive raid5, and the Vista 2048-sector alignment did not help with the nvraid problem (expected from this thread). So now I am going to try to use a single drive with XP on it to boot, and do my testing that way with a 3-drive raid5 config.
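On Linux, the stripe-cache tuning mentioned above is a sysfs setting on the md array. A sketch assuming the array is /dev/md0 (a placeholder device name); the value is in pages per member device, applies only to raid5/raid6, and requires root:

```shell
# Show the current stripe cache size (the default is usually 256 pages).
cat /sys/block/md0/md/stripe_cache_size
# Raise it so more stripes stay cached, cutting read-modify-write traffic
# on sequential writes. Costs memory: pages * page_size * number_of_devices.
echo 8192 > /sys/block/md0/md/stripe_cache_size
```

The setting does not persist across reboots, so it is typically reapplied from a local startup script.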
  10. Poor software raid5 performance in XP

    WOW, thanks! Very good information. I think I will try the aligned partitioning on my desktop if I have time tomorrow.
  11. I have used linux software raid5 and raid6 for years and the performance for me is quite acceptable. I have 15TB of linux software raid currently at work using > 50 SATA drives... With linux software raid I expect sequential writes to be at minimum faster than the speed of one hard drive. Since the price of drives has gone down to less than $100 for 500GB and most motherboards have fakeraid5, I decided to try this with new computer builds.

    After playing around with this for a few days I am horribly disappointed with the write performance. One example: when transferring a 10GB file over a gigabit network, a single drive finishes in less than 5 minutes, while a 3- or 4-drive software or fake raid5 takes > 25 minutes.

    From my linux experience the problem appears to be that the cache manager is forcing writes too frequently and the raid subsystem is not caching stripes, so every stripe is being read before it is written back. With linux you can lessen the impact of this by increasing the size of the stripe cache; under XP I see no way to do that. I did enable LargeSystemCache, but that appears to help only for the first few seconds, where the gbit bandwidth holds steady at >50% before settling down to 3 to 8%. During all of these tests the cpu usage on the dual-core machine with 4GB of memory is less than 10%.
  12. Poor software raid5 performance in XP

    Some more testing... nvraid RAID10 is not faster than a single drive at writes in this 10GB test. It is also not faster at reads, according to hdtach. It looks like I am going to give up on this and use the 750GB Seagate 7200.11 drives (in a single-drive configuration) I bought for a linux server in the 4 desktops, and 12 x 500GB drives in the server instead of the 8 750s I was planning...
  13. 'Merging' Two Drives

    This is normally called disk spanning. I am not sure most fakeraid (or BIOS raid) supports it. You do realize that this is not significantly better than raid0: if one drive dies it will be very difficult to recover the data on the good drive, and there is no performance gain by doing this.
  14. slow/weird 15k scsi performance

    Are you sure that write cache is turned on? Performance will be crippled with write cache off.
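One way to check this from the OS is with hdparm (ATA/SATA) or sdparm (SCSI). A sketch — the device names are placeholders for your actual drives:

```shell
# ATA/SATA: query and, if needed, enable the drive's write cache.
hdparm -W /dev/sda          # show the current write-cache state
hdparm -W1 /dev/sda         # turn write cache on
# SCSI: read the WCE (write cache enable) bit from the caching mode page.
sdparm --get=WCE /dev/sdb
```

Both need root; on a 15k SCSI drive the sdparm form is the one that applies.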
  15. Which PCI Card

    I have a couple of dozen of the SYBA 4P cards in my department and they work great. These are SATA 1 only, but on a PCI bus that has a max bandwidth of less than SATA1, I am not sure there would be much benefit in using SATA2. http://www.newegg.com/Product/Product.aspx...N82E16815124006 http://www.newegg.com/Product/Product.aspx...N82E16815124020
  16. Memory prices

    For the next 6 to 8 months I bet it stays about the same, but eventually it will go up as DDR2 becomes more widely used.
  17. Moving an office.

    I have seen quite a few drives that ran continuously 24/7 for many months without a single problem fail simply from being powered off. But all of them were less than 2 GB, so I am not sure how much that affects current drives.
  18. Long IDE cable solution

    My advice would be to get a quality cable that is long enough to suit your purpose. I have many 36-inch round cables in the lab and they work very well (at least with 120 GB WD drives...), even at ATA100 speeds.
  19. 10,000rpm 2.5" Drives

    One thing I think of when I see this is that most SCSI drives use significantly smaller platters than desktop 3.5-inch drives. I don't know if they are small enough to fit the 2.5-inch form factor, but I see this as more a matter of shrinking the rest of the drive than of using smaller platters. As for the discussion of raided 2.5-inch drives, I believe most of that was done with notebook drives, which are quite a bit thinner than these drives.
  20. Is Linux Really Free?

    There are some things that are outside of our control. We are in a hospital environment in which a lot of the users (doctors, researchers ...) have laptops that they take home and then bring back in to connect to the network; there is no way to keep these worms out. Being a department that is a customer of the hospital, we are subject to the hospital's network problems. We can complain, but that only gets us red tape... Along with replacing our windows servers (which I will explain later if I have time), we are also going to install a firewall between us and the rest of the hospital to reduce the chance of this type of infection again.
  21. Is Linux Really Free?

    No. With windows you have to constantly apply all updates, which is very time consuming, and hope they do no harm to your pc. It seems like a new update is released every week, and keeping up with this is very time consuming and a real pain. I agree, but linux is many times more secure in this area, and there are far fewer people trying to exploit it, so you have less chance of this kind of attack.
  22. Is Linux Really Free?

    Ahh, the mention of time... We are currently moving all our windows servers to linux for that very reason. We have been infected with worms three times (slammer, blaster and sasser) in the last 8 to 10 months with windows, and it has made administering active directory a very time-consuming affair. I have spent weeks tracking down failing replications and have had to switch domain controllers 5 times to fix the problems.
  23. You are correct about the lack of support for jumbo frames. The 8-port D-Link gigabit switches (DGS-1008D) we use do not support jumbo frames and will stop functioning if any connected card has jumbo frames turned on... My benchmark results will be posted later; sorry about the delay.
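Jumbo frames are controlled by the interface MTU. A sketch for checking and setting it on Linux — the interface name is a placeholder, and every NIC and switch in the path must support the larger frame size or traffic will break, as described above:

```shell
# Show the current MTU (1500 is the standard Ethernet default).
ip link show eth0
# Enable ~9000-byte jumbo frames; revert with "mtu 1500" if the switch chokes.
ip link set dev eth0 mtu 9000
```

Like most link settings, this needs root and does not persist across reboots unless saved in the distro's network configuration.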
  24. Is Linux Really Free?

    Yes, linux is really free, as you only have to pay for distros that come with service. Even commercial RedHat is available free of charge (I believe it is called whiteboxlinux) if you do not want any support from redhat. You can also download and install suse free if you want. Both of these are legal, as the vendor cannot charge you for the os; what you pay for is the service. My advice is to try gentoo. I have tried several different distros, and after trying gentoo I don't see myself using another distro. Everything seems to work better in gentoo. I am totally able to configure my system the way I want, and a simple command, emerge, lets me automatically download, compile and install all my apps for my system (with smp and athlon k7 support). In gentoo there is no time spent searching for the site that contains the software and then looking for mirrors to download and install; it is all automatic.
  25. These are netperf numbers. I ran a lot of netperf tests Thursday with several pcs in the lab, and when I get time I will report the results. It's 3:16 am Friday and I just got home from work, so it will have to be later today.