Sysinternals has a couple of tools that are great for logging file accesses and can give you a good picture of what's going on -- the original FileMon, and its more capable successor, Process Monitor. I suggest starting with FileMon -- you shouldn't need anything more complicated for this task.
ATTO is a quick & dirty tool that gives you a snapshot of how your file system performs at varying access sizes. Here's an example setup with results.
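If you want a rough sanity check of the same idea outside the GUI -- sequential throughput measured at several block sizes against a temporary test file -- a crude sketch looks like this. Note this is an illustration, not a replacement for ATTO: it goes through the OS cache (real tools bypass it with FILE_FLAG_NO_BUFFERING / O_DIRECT), and the file path and sizes are arbitrary.

```python
import os
import tempfile
import time

def throughput_sweep(path, file_size=64 * 1024 * 1024,
                     block_sizes=(4096, 65536, 1024 * 1024)):
    """Crude ATTO-style sweep: sequential read throughput per block size.

    Caveat: reads are served from the OS cache after the first pass,
    so this measures cached throughput, not raw disk speed.
    """
    # Build the test file once, like ATTO's temporary file.
    with open(path, "wb") as f:
        f.write(os.urandom(file_size))

    results = {}
    for bs in block_sizes:
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(bs):
                pass
        elapsed = time.perf_counter() - start
        results[bs] = file_size / elapsed / (1024 * 1024)  # MB/s
    return results

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        sweep = throughput_sweep(os.path.join(d, "test.bin"))
        for bs, mbs in sorted(sweep.items()):
            print(f"{bs // 1024:>5} KiB blocks: {mbs:8.1f} MB/s")
```

The interesting part is the shape of the curve across block sizes, not the absolute numbers -- that's the view ATTO gives you.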
The tests were done on a pretty full array on a RaidCore BC4852 controller in a PCI-X 133/64 slot with 8 DiamondMax 10 drives, under Server 2003 x64. The firmware and drivers are a bit dated -- version 2.1, pre-dating Broadcom's RaidCore sell-off.
ATTO results should be followed up with some other benchmarks or ideally application-level testing.
IOMeter can do that and more, at the cost of greater complexity. A sample setup can be seen here. I'd increase the test file size 10x and include 64k accesses.
Both of these work at the file system level, so they're affected by the drive's state and how full it is -- ATTO especially, because it constructs a fresh temporary file for each test. IOMeter creates its test file on the first run and leaves it in place, so subsequent runs are more stable.
For example, the following shows the same 64k read performance anomaly ATTO found, but in a much earlier part of the drive, using a previously-constructed test file. (The graph was created manually from the output data.)
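Making such a graph by hand is mostly a matter of pulling two columns out of the exported results. As a sketch -- assuming you've already trimmed the export down to a simple two-column CSV of access size and read throughput (IOMeter's native results file is much wider than this; the column names and the sample numbers below are made up, though the numbers do mimic a 64k dip):

```python
import csv
import io

# Hypothetical trimmed-down export: access size (KiB), read MB/s.
# These values are illustrative only.
SAMPLE = """size_kib,read_mbs
4,41.2
16,120.5
64,35.8
256,190.3
"""

def ascii_chart(csv_text, width=40):
    """Render a quick horizontal bar chart of throughput per access size."""
    rows = [(int(r["size_kib"]), float(r["read_mbs"]))
            for r in csv.DictReader(io.StringIO(csv_text))]
    peak = max(mbs for _, mbs in rows)
    lines = []
    for size, mbs in rows:
        bar = "#" * max(1, round(mbs / peak * width))
        lines.append(f"{size:>5} KiB | {bar} {mbs:.1f} MB/s")
    return "\n".join(lines)

if __name__ == "__main__":
    print(ascii_chart(SAMPLE))
```

A dip like the one at 64k jumps out immediately in this kind of view, which is the whole point of graphing the raw output.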
Both of these tests show a performance anomaly with 64k reads, which is unfortunate because 64k is a common access size. This controller doesn't support variable stripe sizes, so there's no easy workaround.
Vista can issue very large accesses, so it may do much better in some sequential cases and make the stripe size matter less (though the stripe size might still need to be lower where access sizes are small) -- but this will also depend on the application being used.