gtghm

Member
  • Content Count: 132
  • Rank: Member

Community Reputation: 0 Neutral
  1. gtghm

    Which way would be better?

    Does anyone know whether you can boot into XP Pro off the 3ware 75** series cards? Thanks, "g"
  2. I am making some changes to my system. I have a WD 250 GB JB drive that I am going to use as a C: drive and as backup for my 200 GB RAID 0 array. The RAID card is the 3ware 7500 4-port card. The choices are: either I install the drive on one of the two remaining free ports, boot off the 3ware card, and basically run everything off the 3ware card; or I install the drive on an onboard IDE channel. I currently have an 80 GB drive installed on onboard IDE that I am using as my C: drive.

     You should know, if you don't, that the board I am running is a Supermicro dual Xeon board, and the 3ware card is in a 66 MHz/64-bit slot. I assume that if I install the drive on the 3ware card in that slot I should have an advantage, because the 66 MHz/64-bit bus is not on the same bus as the 33 MHz PCI bus, so I should see a bit of a performance boost... yes? Thanks, "g"
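The bus arithmetic behind that assumption can be checked directly. A quick sketch (theoretical peak numbers only; real-world throughput is lower because of protocol overhead):

```python
# Theoretical peak PCI bandwidth: bus width (bits) x clock (MHz) / 8 bits-per-byte
def pci_bandwidth_mb_s(width_bits: int, clock_mhz: int) -> float:
    """Peak transfer rate in MB/s for a PCI bus of the given width and clock."""
    return width_bits * clock_mhz / 8

standard_pci = pci_bandwidth_mb_s(32, 33)  # ordinary 32-bit/33 MHz slot
wide_pci = pci_bandwidth_mb_s(64, 66)      # the 64-bit/66 MHz slot holding the 3ware card

print(f"32-bit/33 MHz PCI: {standard_pci:.0f} MB/s")  # 132 MB/s
print(f"64-bit/66 MHz PCI: {wide_pci:.0f} MB/s")      # 528 MB/s
```

So the 66/64 slot has roughly four times the headroom, and it sits on a separate bus segment, which is why running everything off the 3ware card should not starve the shared 33 MHz PCI bus.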
  3. I'm looking at this card; I would look to put an 18 GB 15K Cheetah drive on it. I'm wondering what kind of performance I could expect from this combination? Is anyone using this combo? What are the average reads/writes? According to the info at http://www.lsilogic.com/techlib/marketing_...bs/lsi21320.pdf there isn't any place to add more RAM; it only comes with 4 MB of onboard cache. However, for use as an OS disk would I really need more than the 4 MB cache on the card plus the 8 MB that comes on the drive? Thanks, "g"
  4. gtghm

    Raptor question

    Ok, I've read a couple of reviews of the new Raptor drives from WD. My question: I have a RAID 0 setup using two 100 GB JB drives that averages 80K+ reads and 75K+ writes, and I am currently using a regular 80 GB BB drive as my system/OS drive. I have figured out that while my RAID is fast, when swapping data between the OS drive and the RAID I'm still limited to the max transfer rate of the slowest drive, the OS drive... So I was thinking of going to a 15K SCSI drive or something like that for my OS/system drive, so that I can maintain the transfer rates between the RAID and the OS drive.

    But now that the Raptor is out and the results are looking good, would the Raptor be a good, less expensive solution for my OS/system drive? Its transfer rates, while high, still looked lower than my RAID's, but in one of the reviews (THG) they used SCSI drives as a comparison, and the fastest drive they used, the Maxtor Atlas, was only able to reach about 69K+ reads, which is still lower than my RAID. I'm wondering what kind of transfer rates you would get with the fastest SCSI drive, which I assume would be the Cheetah 15K.3? Thanks for the input, guys, "g"
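The bottleneck described above, where a copy runs only as fast as the slower volume, can be sketched in a few lines. The RAID figure is the one quoted in the post; the single-drive numbers are hypothetical placeholders, not measurements:

```python
# A sustained copy between two volumes is bounded by the slower side.
def copy_throughput(read_kb_s: int, write_kb_s: int) -> int:
    """Effective KB/s when streaming data from one volume to another."""
    return min(read_kb_s, write_kb_s)

raid0_read = 80_000       # ~80K KB/s RAID 0 array (figure from the post)
os_drive = 45_000         # hypothetical sustained rate for the 80 GB BB system drive
faster_os_drive = 60_000  # hypothetical rate for a Raptor or 15K SCSI drive

print(copy_throughput(raid0_read, os_drive))         # capped by the OS drive
print(copy_throughput(raid0_read, faster_os_drive))  # a faster OS drive narrows the gap
```

Whatever the absolute numbers, upgrading only the OS drive raises the copy speed only up to the point where the OS drive matches the array; past that, the array itself becomes the limit.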
  5. Ok..., I have two 100 GB WD 8 MB-cache drives and a 3ware 7450 card, and I have had no problems with the setup. I am running RAID 0 with ATTO scores running from 80-85K read and write. I have not heard of any problems with the WD 8 MB drives and 3ware cards.

     There is/was some speculation that the 3ware cards do not take advantage of the 8 MB cache, which may be true; however, that seems to be the case for all of the manufacturers that put out 8 MB drives. The reason 3ware gave me was that the 8 MB drives use single-drive algorithms that get lost when you RAID them together. Makes sense to me, and I'm not disappointed by my drives even though the cache use might not be as useful as it could be. The fact is that I have not seen any higher benches with other drives, 2 MB or more; they all seem to be about the same in a RAID configuration. There is always going to be a 5% or less difference in benchmarks between systems, which could show in certain tests that a RAIDed 2 MB drive is as fast as or faster than an 8 MB RAID. AFAIK, one can expect to see performance differences between exact same setups due to manufacturing. As far as WDs and 3ware go, I'd say they are as good a choice as any; I prefer them because up to this point I've had excellent luck with them.

     The things I've run into: first, WD drives installed on a 3ware card have to be set to the single-drive configuration, meaning you have to remove the jumper completely (the easiest way) or put the jumper in the single-drive position. If you don't, the card/system will take forever to post the drives, if it sees them at all. Secondly, from benching you might be able to see the write-through cache thing if you're using XP or 2K with a basic disk configuration. However, it's been shown in the real world that the performance hit shown in the benches does not really exist unless you are moving files in Explorer that are 750 MB or larger.

     Unfortunately, I converted to dynamic disks before this was proven and lost the ability to use my Drive Image backup software, but I use the XP built-in backup, which seems to work pretty well. At some point I plan on going back to a basic disk setup, which I understand is no different in real-world use from dynamic disks; it's just that until the write-through cache flag is applied to dynamic disks in some future service pack, the copy time in Explorer is a little faster on larger files. Not something that I use very much anyway... In my experience there is no problem with the 3ware card and WD drives. It's a complete myth... "g"
  6. gtghm

    Where can I find IOmeter?

    The file is 3 KB. I'll post the exact contents; it came to me as a StorageReview.icf.txt file. I finally figured out that you have to rename the file to StorageReview.icf and put it where the other .icf files are; then you can use it. However, there are some details you need to change, like the drive names and such, but basically the numbers are all the same. You might try copying and pasting the file below into a Notepad document and then just saving it as StorageReview.icf, or any name.icf:
    --------------------------------------------------------------------------------------
    Version 1999.10.20
    'TEST SETUP ====================================================================
    'Test Description

    'Run Time
    '   hours      minutes    seconds
        0          10         0
    'Ramp Up Time (s)
        30
    'Default Disk Workers to Spawn
        NUMBER_OF_CPUS
    'Default Network Workers to Spawn
        0
    'Record Results
        ALL
    'Worker Cycling
    '   start      step       step type
        1          1          LINEAR
    'Disk Cycling
    '   start      step       step type
        1          1          LINEAR
    'Queue Depth Cycling
    '   start      end        step       step type
        1          256        4          EXPONENTIAL
    'Test Type
        CYCLE_OUTSTANDING_IOS
    'END test setup
    'RESULTS DISPLAY ===============================================================
    'Update Frequency,Update Type
        0,WHOLE_TEST
    'Bar chart 1 statistic
        Total I/Os per Second
    'Bar chart 2 statistic
        Total MBs per Second
    'Bar chart 3 statistic
        Average I/O Response Time (ms)
    'Bar chart 4 statistic
        Maximum I/O Response Time (ms)
    'Bar chart 5 statistic
        % CPU Utilization (total)
    'Bar chart 6 statistic
        Total Error Count
    'END results display
    'ACCESS SPECIFICATIONS =========================================================
    'Access specification name,default assignment
        Database,DISK
    'size,% of size,% reads,% random,delay,burst,align,reply
        8192,100,67,100,0,1,0,0
    'Access specification name,default assignment
        File Server,DISK
    'size,% of size,% reads,% random,delay,burst,align,reply
        512,10,80,100,0,1,0,0
        1024,5,80,100,0,1,0,0
        2048,5,80,100,0,1,0,0
        4096,60,80,100,0,1,0,0
        8192,2,80,100,0,1,0,0
        16384,4,80,100,0,1,0,0
        32768,4,80,100,0,1,0,0
        65536,10,80,100,0,1,0,0
    'Access specification name,default assignment
        Workstation,DISK
    'size,% of size,% reads,% random,delay,burst,align,reply
        8192,100,80,80,0,1,0,0
    'END access specifications
    'MANAGER LIST ==================================================================
    'Manager ID, manager name
        1,TESTBED
    'Manager network address
        127.0.0.1
    'Worker
        Worker 1
    'Worker type
        DISK
    'Default target settings for worker
    'Number of outstanding IOs,test connection rate,transactions per connection
        1,DISABLED,1
    'Disk maximum size,starting sector
        0,0
    'End default target settings for worker
    'Assigned access specs
        File Server
        Workstation
        Database
    'End assigned access specs
    'Target assignments
    'Target
        PHYSICALDRIVE:1
    'Target type
        DISK
    'End target
    'End target assignments
    'End worker
    'End manager
    'END manager list
    Version 1999.10.20
    --------------------------------------------------------------------------------------
    end file
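For anyone decoding the access-specification rows in that file, each comma-separated row follows the comment header `size,% of size,% reads,% random,delay,burst,align,reply`. A small sketch (the field names below are my own labels for those header columns, not Iometer identifiers) that turns one row into something readable:

```python
# Labels derived from the .icf comment header:
# 'size,% of size,% reads,% random,delay,burst,align,reply
FIELDS = ["size", "pct_of_size", "pct_reads", "pct_random",
          "delay", "burst", "align", "reply"]

def parse_access_spec(row: str) -> dict:
    """Turn one access-spec row into a dict of labeled integers."""
    return dict(zip(FIELDS, (int(v) for v in row.split(","))))

# The 'Database' spec from the file: 8 KB transfers, 67% reads, 100% random.
spec = parse_access_spec("8192,100,67,100,0,1,0,0")
print(spec["size"], spec["pct_reads"], spec["pct_random"])  # 8192 67 100
```

Reading the specs this way makes it clear why the drive names are the only thing that usually needs editing: the workload mix itself (transfer sizes and read/random percentages) is the same for everyone running the StorageReview pattern.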
  7. gtghm

    Where can I find IOmeter?

    Upaboveit, yes, there is a new version out that has no problems with Xeons 2.0+. I downloaded it and it works fine. I can tell you that the old version does not work with 2.2 GHz Xeons all of the time. I was not the only one to ever post about it; in fact, from the research I did it was a known issue, just that until now no one wanted to try and fix it.

    tygrus, send me a PM or email to gtghm@hotmail.com. I think I have a copy of what you're looking for... "g"
  8. gtghm

    16 lawyers need my (your) help

    LOL, I can see the high regard you hold lawyers in... hehehehe
  9. That's probably because you don't have the jumpers set correctly... If you are only running one WD drive per IDE channel, then you need to either remove the jumper completely or put it in the single-drive configuration. Do not use Master or CS with the drive all by itself on a cable and channel; it will take forever for the drive to detect... I'd just remove the jumper and try that... Cheers, "g"
  10. Actually, for individual drives those are pretty normal. The first one looks a bit off, but that could be for many different reasons; you might defrag and try again... If the cluster size is different from the second drive, that might be the difference there... "g"
  11. Change the "total length" from 4 MB to 32 MB in ATTO, bench, and repost... "g"
  12. gtghm

    Post edit feature, please!

    Agreeeded, ooopss,
  13. Rumored to be getting fixed in XP under SP2, so the performance will be the same for both dynamic and basic disks. Hopefully they will include the option to disable it too, like they did in .NET. "g"