Madwand

Member

  • Content Count: 63
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Madwand

  • Rank: Member

  1. Madwand

    best stripe size for RAID5 on ICH10R

    64 KiB clusters can waste a lot of space if you have many small files. Test with the default cluster size as well to see if it makes a big difference.
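
    If you want a concrete number for your own data, a quick sketch like the following (the path is just a placeholder) can walk a directory tree and tally the slack for a few candidate cluster sizes; note that NTFS can keep very small files inside the MFT, so treat the result as an upper bound:

        import os

        def slack_bytes(root, cluster_size):
            # Space lost to partially filled clusters: each file occupies a
            # whole number of clusters on disk.
            wasted = 0
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    try:
                        size = os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        continue
                    clusters = (size + cluster_size - 1) // cluster_size
                    wasted += clusters * cluster_size - size
            return wasted

        for cs in (4096, 16384, 65536):  # 4 KiB, 16 KiB, 64 KiB clusters
            print(cs, "bytes/cluster:", slack_bytes(r"D:\data", cs) // 2**20, "MiB wasted")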
  2. Madwand

    best stripe size for RAID5 on ICH10R

    The default stripe size should be fine for this configuration, provided that the write-back cache is enabled. Write-back caching carries some increased risk of data loss, so it's not recommended if you have unstable power or an unstable system (random reboots, crashes, etc.); otherwise it's a trade-off you can make yourself, accepting some risk for greater write performance. Here's a chart of simple sequential performance measurements from a similar but older Intel RAID configuration:
  3. Madwand

    Poor software raid5 performance in XP

    See here: http://forums.storagereview.net/index.php?...=25786&st=0
  4. Read performance could be better, but it's better than what I'd expect for write performance with write caching disabled. Here's a chart with some sample figures for comparison. I don't think you can do much better unless you turn on write caching. Another option is Ciprico's new virtual RAID, which should have a richer feature set and also perform well, and it's reasonably priced. http://episteme.arstechnica.com/eve/forums.../m/274000582931
  5. Madwand

    New RAIDCORE virtual storage software

    Looks very interesting, but support for only Intel ICH*R and server chipsets makes this "meh". It would be really nice to have a decent chipset-independent software RAID solution for Windows.
  6. Madwand

    ICH9R RAID5 performance

    There have been other reports of unexplained poor performance with ICH9R. I suggest putting the OS on a single drive on the JMicron controller and setting up 5 drives in RAID 5 -- first on the ICH9R with a 16 KiB stripe size and the write-back cache enabled; failing that, with Windows software RAID 5. Windows software RAID 5 write performance is not known to be good, but it could be enough to meet your minimum requirements.
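
    For what it's worth, the main reason software RAID 5 writes tend to be slow is the read-modify-write parity update -- a small write costs two reads and two writes unless a full stripe is written at once. A toy sketch of the XOR math (the values are made up, and this is not how any particular driver implements it):

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        # Updating one data block in a RAID-5 stripe:
        #   new_parity = old_parity XOR old_data XOR new_data
        old_data   = bytes(16)            # what the target block held
        new_data   = bytes([0xAB]) * 16   # what we want to write
        old_parity = bytes([0x5A]) * 16   # the stripe's parity block

        new_parity = xor(xor(old_parity, old_data), new_data)

        # The parity relationship is preserved after the update.
        assert xor(new_parity, new_data) == xor(old_parity, old_data)
        print(new_parity.hex())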
  7. Madwand

    USB 2.0 performance

    "How do you tweak it to get past that 20-22 MiB/s boundary? That is all I ever see on any type of 7200 RPM disk attached via USB." First of all, don't quote transfer rates in MiB/s -- switching to MB/s gets you a free ~5% boost! You can get better results with better chipsets, perhaps with different file transfer software (e.g. xxcopy), and by using larger file sizes (perhaps bundling files together) where that's a reasonable option. However, these all pale in comparison to what you can do with eSATA, which was my point -- not to spend the effort measuring different chipsets, etc., but rather to go eSATA if that performance really matters to you.
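
    To spell out the unit arithmetic (nothing about the hardware changes, only the quoted number):

        MIB = 2**20   # 1,048,576 bytes
        MB  = 10**6   # 1,000,000 bytes

        for rate_mib_s in (20, 22, 30):
            rate_mb_s = rate_mib_s * MIB / MB
            print(f"{rate_mib_s} MiB/s = {rate_mb_s:.1f} MB/s "
                  f"(+{(MIB / MB - 1) * 100:.1f}%)")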
  8. Madwand

    USB 2.0 performance

    USB 2.0 can be tweaked up to around 30 MB/s and perhaps a bit higher, but the better answer is to forget about USB and to use an enclosure which supports eSATA. USB is popular because it's popular. eSATA is naturally faster. eSATA enclosures are widely available and don't have to be expensive. eSATA support in computers is not as common, but can be added.
  9. Madwand

    Testbed 4 IO Meter Configuration

    The definition of Intel's IoMeter FileServer pattern can be seen here: http://www.digit-life.com/articles/hddide2k1feb/iometer.html
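
    For anyone scripting their own tests: such a pattern is just a weighted mix of access sizes plus fixed read and random percentages. Here's a sketch of how one can be represented and sampled -- the weights below are placeholders, not Intel's actual FileServer numbers (see the link for those):

        import random

        # Placeholder spec: (access_size_bytes, weight) pairs; the real
        # FileServer definition is on the linked page.
        access_sizes = [(512, 10), (4096, 60), (65536, 30)]
        read_pct, random_pct = 80, 100
        # random_pct = 100 means every access goes to a random offset
        # (offsets aren't modeled in this sketch).

        sizes, weights = zip(*access_sizes)

        def next_io():
            size = random.choices(sizes, weights=weights)[0]
            op = "read" if random.random() < read_pct / 100 else "write"
            return size, op

        print([next_io() for _ in range(5)])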
  10. I don't think there's an incompatibility issue. More likely explanations are differences in test and configuration -- e.g. Highpoint perhaps using a high queue depth (which 3ware's own data shows affects performance), and perhaps configuration options on the controller itself (Highpoint didn't specify the stripe size or caching options; some controllers also improve read performance when write caching is enabled; I'm not sure about the 3ware, but I'd try such options to improve STR if possible).

    Other controllers, even cheap ones such as the on-board Intel, can do RAID 5 read STR around the level of drive STR * number of data drives. Why can't the 3ware? 200 MB/s / 6 drives = 33.3 MB/s, which is significantly below the STR of those drives. The RaidCore was also able to do STR close to drive speed mostly independently of the access size -- except around 64k accesses. It's possible that the 3ware's design is not tuned well for such STR, and it's possible that there are configuration options which help mitigate this.
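
    To spell out the comparison I'm making (the per-drive figure is just a ballpark):

        # Large sequential RAID-5 reads should approach
        # per-drive STR * number of data drives.
        n_data_drives = 6      # per the 200 / 6 arithmetic above
        drive_str     = 70     # MB/s per drive -- an assumed rough figure
        measured_str  = 200    # MB/s reported for the 3ware array

        print("ideal array STR:", drive_str * n_data_drives, "MB/s")                       # 420
        print("measured per data drive:", round(measured_str / n_data_drives, 1), "MB/s")  # 33.3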
  11. The following shows a 9650SE-8 running 8 Raptor 150's in RAID 5 doing 64K sequential accesses under IOMeter at 575 MiB/s. That comes to about 86 MB/s per data drive, which is a very reasonable figure for Raptor 150's. http://www.highpoint-tech.com/USA/rr3320_Performance.htm

    Based on that, for a clean array using 6 data drives at around 70 MB/s sequential each, you'd get ~420 MB/s ideally for the array. A problem with the Highpoint data is that they don't give all the details on the array and test setup. Notably, their 3ware RAID 6 normal sequential figures are much higher than 3ware's own figures when limited to queue depth = 1, even with a larger number of drives. This implies that Highpoint used queue depth > 1 or did some other tweaking. http://www.3ware.com/3ware_outshines.asp

    I suggest running IOMeter to further test your array's performance against the figures from Highpoint. If you don't get good scaling, you could try changing the caching options, reducing the stripe size, or, for the purpose of testing the test, increasing the number of outstanding I/O requests. Once you have application access pattern data, IOMeter can help you simulate that pattern for more controlled synthetic tests.
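
    Spelling out the ~86 MB/s per-data-drive figure above, since the MiB/MB mixing can be confusing:

        array_rate_mib_s = 575       # reported IOMeter figure for the array
        n_data_drives    = 8 - 1     # an 8-drive RAID 5 has 7 data drives
        array_rate_mb_s  = array_rate_mib_s * 2**20 / 10**6

        print(round(array_rate_mb_s, 1), "MB/s for the array")                    # ~602.9
        print(round(array_rate_mb_s / n_data_drives, 1), "MB/s per data drive")   # ~86.1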
  12. ATTO works under Vista x64 -- at least when run as administrator. I'm not familiar with that particular 3ware controller, so I wouldn't write it off offhand, but if it's not performing well enough despite tweaks... If I were making a change (and perhaps especially if spending someone else's ample money), I'd probably go with a higher-end Areca instead of RaidCore.

    Here's an HDTune graph of an Intel on-board array for comparison. It's a 4-drive RAID 5 with a 16k stripe. Write performance is still good (with the write-back cache enabled), as can be seen in the following graph. Here's the ATTO graph from Vista x64 for completeness. The array is new and empty except for the IOMeter test file. Again, ATTO is less consistent than IOMeter, but it quickly gives a similar overview picture, with writes somewhat lower than reads for this configuration/array.

    Note again that although these graphs are fun, the real stuff is in the application performance, and using FileMon to capture some of that could be instructive.
  13. Sure, but you could also take the MB/s result and divide by the access size to get IO/s. IOMeter also shows the IO/s directly for convenience. A starting point for meaningful IO/s would be logging the details of the application's access pattern -- doable with the mentioned FileMon for example.
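
    In code form, if it helps (watch the MB vs MiB convention -- it shifts the result by ~5%):

        def iops(mb_per_s, access_size_bytes):
            # IO/s = bytes per second / bytes per access
            return mb_per_s * 1_000_000 / access_size_bytes

        print(iops(100, 64 * 1024))  # 100 MB/s at 64 KiB accesses ~ 1526 IO/s
        print(iops(100, 4 * 1024))   # 100 MB/s at 4 KiB accesses  ~ 24414 IO/s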
  14. Sysinternals has a couple of tools which are great for logging file accesses and can give you a good picture of what's going on -- the original FileMon, and the more complicated Process Monitor. I suggest starting with FileMon -- you shouldn't need anything more complicated for this task. http://www.microsoft.com/technet/sysintern...es/filemon.mspx

    ATTO is a quick & dirty tool which will give you a snapshot of how your file system performs with varying access sizes. Here's an example setup with results. The tests were done on a pretty full array on a RaidCore BC4852 controller in a PCI-X 133/64 slot with 8 DiamondMax 10 drives, under Server 2003 x64. The firmware and drivers are a bit dated -- version 2.1, pre-dating Broadcom's RaidCore sell-off. ATTO results should be followed up with some other benchmarks or, ideally, application-level testing.

    IOMeter can do that and more, but at the cost of greater complexity. A sample setup can be seen here; I'd increase the test file size 10x and include 64k accesses. http://www.infrant.com/forum/viewtopic.php?t=265

    Both of these tools work at the file system level, so they are affected by the drive's state / crowding -- ATTO especially, because it uses a temporary file which it constructs for the test. IOMeter creates its test file at the first test run and leaves it there, so subsequent tests are more stable. E.g. the following shows the same 64k read performance anomaly as shown by ATTO, but on a much earlier part of the drive, using a previously-constructed test file. (The graph was manually created from output data.)

    Both of these tests show a performance anomaly with 64k reads, which is bad because this is a common access size. This controller doesn't support variable stripe sizes. Vista can issue some very large accesses, so it may be much better in some cases for sequential work and make the stripe size matter less (the stripe size might have to be lower in other cases where the access sizes are small), but this will also depend on the application being used.
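
    As a starting point for turning a FileMon log into an access-size picture, something like the following works on a saved log. The export format is an assumption here (lines containing READ/WRITE plus a "Length: N" detail), so adjust the parsing to whatever your export actually looks like:

        import collections, re

        sizes = collections.Counter()
        length_re = re.compile(r"Length:\s*(\d+)")

        # "filemon_log.txt" is a placeholder file name for the saved log.
        with open("filemon_log.txt", errors="ignore") as log:
            for line in log:
                if "READ" not in line and "WRITE" not in line:
                    continue
                m = length_re.search(line)
                if m:
                    sizes[int(m.group(1))] += 1

        for size, count in sizes.most_common(10):
            print(f"{size:>8} bytes x {count}")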
  15. I have an AAK drive and a couple of older AAE drives, and I can clearly see a difference between the two models for sequential access in synthetic tests (e.g. using HDTach). However, I've read statements from Seagate elsewhere that they don't design for or support synthetic benchmarks, and that one should try actual application usage instead. Strangely enough, when I tried simple file transfers with these drives in a similar state, I didn't notice any performance disadvantage from the AAK drive; counter-intuitively, it might even have performed better. I stopped caring about the problem at that point, went on to use the drives to store real data in a RAID array, and lost the ability to run further tests comparing them individually. So: does anyone have any application benchmarks which correlate with the observed synthetic differences?
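
    In case it helps anyone trying the same comparison, the sort of crude file-copy timing I mean is easy to script (the paths are placeholders; the OS write cache can flatter the result, so use files much larger than RAM and repeat a few times):

        import os, shutil, time

        src = r"D:\big_test_file.bin"   # placeholder source on the drive under test
        dst = r"E:\copy_test.bin"       # placeholder destination on another drive

        size = os.path.getsize(src)
        start = time.perf_counter()
        shutil.copyfile(src, dst)
        elapsed = time.perf_counter() - start

        print(f"{size / 1_000_000 / elapsed:.1f} MB/s over {elapsed:.1f} s")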