Chew

Member
  • Content Count

    1958

Community Reputation

0 Neutral

About Chew

  • Rank
    Member

Contact Methods

  • ICQ
    122107898
  1. Chew

    Good software for mirroring?

    I've been using FileBack PC for a while now and can recommend it. The interface is a little unwieldy, but that's partly due to the overwhelming number of options available. A couple of features you might find interesting are its ability to monitor folders continuously and update the backup/sync immediately, and its support for keeping a defined number of revisions or archiving old revisions.
  2. I'd be interested in seeing the actual numbers for that chart. It doesn't look like 'at least 5%' to me, with the possible exception of Gaming on the MAT. Certainly not the 8-9% CityK stated. It would also be interesting to see the results on a larger sample of drives. The Raptor results seem essentially unaffected by the hardware platform, whereas there is some variation on the MAT. Is one the exception and one the standard? Perhaps there is variation in all SCSI drives, pointing to the controller rather than the drives? I seem to have a growing wish list that Eugene certainly will never have time to complete. Sorry Eugene! I'm just trying to assist with my thoughts on what can be done to ensure the validity of the testing methodology while there appears to be an opportunity to do so (i.e. before the TB4 project is finalised).
  3. I've seen this mentioned a couple of times, but I must have missed where it was revealed. Can somebody point me to it? The "editor's choice" seems to have had a significant effect on the results.

    I think part of the problem here is the extremely long period (in computer terms) between benchmark refreshes. It's been 3 or 4 years since TB3 was put together, and SR have been benchmarking new drives, which have likely been tuned to perform for today's applications, on benchmarks based on older applications. Having said that, for that explanation to ring true you'd want to see newer drives benefiting under TB4 and the reverse for older drives, and this doesn't seem to be the case. So it starts to look like a question of whether the drives have been tuned better or worse for the particular applications used in a benchmark. This would explain why other review sites provide end results that aren't always consistent with SR's. While most other review sites don't use a testing methodology that matches SR's, many of them are still done well enough that their results are valid, while possibly contradictory to SR's. It seems to me that to provide the most applicable, general-use benchmarks, you need to use as large an application set as possible. SR have always said that the one point they would consider conceding as inaccurate in their methodology is the applications used (my wording) - perhaps there was more to this than we previously expected?

    The DriveMarks have been designed as a way to measure the performance of the drives in 'real world' usage, but in isolation from all other factors. It's never been implied that a drive that's twice as fast in one of these benchmarks will make the entire system run twice as fast. If a trace is taken of activity that only has disk activity during 20% of that time period, a drive that plays back that trace twice as fast obviously doesn't make any difference to the other 80% of that time period. What might be interesting, along those lines, is the overall time elapsed during the trace captures, and in addition the time it takes to play back the trace. This would help put the benchmarks into proper context. The DriveMark numbers are derived by simply dividing the total number of IOs in a trace by the number of seconds it takes to play back the trace, so the information is probably available, just not published. Eugene?
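    As a rough sketch of that calculation (every number below is made up purely for illustration; only the arithmetic follows the description above):

        total_ios = 120_000          # hypothetical: I/O requests recorded in the trace
        playback_seconds = 240.0     # hypothetical: time taken to replay the trace on the test drive
        capture_seconds = 1_800.0    # hypothetical: wall-clock length of the original capture

        drivemark = total_ios / playback_seconds            # IOs per second, as described
        busy_share = playback_seconds / capture_seconds     # crude idea of how disk-bound the capture was

        print(f"DriveMark-style score: {drivemark:.0f} IO/s")
        print(f"Disk activity filled roughly {busy_share:.0%} of the captured period")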
  4. The trace doesn't record a delay as you have described. Whether the request is serviced by a cache hit or not, it is only recorded in the trace as a request for the particular data block(s) along with the current queue depth. The slower response of the drive with the smaller cache in your example doesn't change the capture or playback stage of the process. Well, except for the fact that the slower drive will play back the trace more slowly, which is the whole idea of the trace/playback method of benchmarking.
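    Roughly, the kind of record that implies looks something like this (the field names are my own invention; the actual trace format isn't published):

        from dataclasses import dataclass

        @dataclass
        class TraceEntry:
            # One captured request, as described above: which block(s) were asked
            # for and the queue depth at the moment of the request. Whether the
            # original drive happened to serve it from cache is not recorded.
            start_lba: int       # first logical block of the request
            block_count: int     # number of blocks requested
            is_write: bool       # read or write
            queue_depth: int     # outstanding requests at the time it was issued

        # an example entry, with made-up values
        print(TraceEntry(start_lba=1_048_576, block_count=16, is_write=False, queue_depth=2))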
  5. I agree completely. These are just general tests, and need to be viewed in context. However, some testing along the lines I have suggested will at least help validate these results as acceptable for even general use. My feeling is that the issue raised by MartinP would alter the results insignificantly, but if MartinP's suggestion that the results could be altered more significantly under some circumstances is correct, it would be nice to know by how much. What if it's to such a degree as to invalidate the results for even general use? What would be the point of even having these benchmark results then? We could theorise on this for a long time, or run some tests to possibly put the matter to rest. Some of us might enjoy the thinking exercise, but most would probably prefer just to know the end result.
  6. Sure, I understand this. The main reason I noted that the statement I made seemed valid was to define when the trace method is valid and not flawed in relation to the points you have made. From there we can look further at the scenarios where the trace method in fact is, or might be, flawed, and just how much this affects the outcome. That's where I left off originally, and I'll have to leave it there again until I have more time. I did have a thought though. Eugene, as you can see there's some question as to the validity of the testing methodology. As I'm sure everyone, yourself included, would like to remove that question mark, is it possible to do some testing? Capturing another set of traces (probably just one desktop and one server) with a CQ-capable drive, then running the benchmarks on a handful of CQ and non-CQ capable drives and examining the difference in the results between the two trace sets, would probably prove whether or not there is anything to this. I understand this is probably time-consuming but it would put the question to rest.
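    As a sketch of the comparison I'm proposing (the drive names and scores below are entirely invented, just to show the arithmetic):

        # Same drives benchmarked against a trace captured on a non-CQ drive and a
        # trace captured on a CQ-capable drive; a large relative difference would
        # suggest the capture drive matters.
        scores_noncq_trace = {"cq_drive_a": 610, "cq_drive_b": 580, "noncq_drive_a": 430}
        scores_cq_trace    = {"cq_drive_a": 640, "cq_drive_b": 575, "noncq_drive_a": 428}

        for drive, old in scores_noncq_trace.items():
            new = scores_cq_trace[drive]
            print(f"{drive}: {old} -> {new} ({(new - old) / old * 100:+.1f}%)")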
  7. All very good points that hadn't occurred to me. The point about rotational latency is particularly important - rotational latency could often mean that, if using async IO, with the 16 blocks broken up into 16 individual requests, blocks 9-16 (for example) could be fulfilled first if the head happens to land on the track when the platter is at that location. So I wonder then, how often is IO performed this way? I guess the low queue depths generated during the benchmarks suggest not very often, if ever? And where does it leave us in the main purpose of this discussion? My initial impression is that my previous statement is still applicable: the same activity generates the same queue order (trace/capture) on any drive, CQ or not, as long as only one thread has outstanding I/O in the queue at any moment in time, as none of this appears to alter the order requests are placed in the queue. It's probably important to remember the queue captured by the trace is the OS queue, and not the queue in the drive itself (which is the queue that CQ manages and sorts as appropriate). It occurred to me that this could cause confusion for anybody who hadn't made the distinction, so I thought it would be worth pointing out.
  8. Sorry Martin, I'm having a little trouble following what you're saying. I get the idea, but without understanding exactly what you mean at each stage I can't really respond. Do you mind elaborating? I get the feeling that perhaps we have a different understanding of some aspects of how things work, and it's skewing our interpretation of what the other is saying. I'll give you some of my thoughts regarding your post that might help you understand where I'm coming from. Why would a thread issue 16 individual requests of one block each for 16 consecutive blocks, rather than one request for 16 blocks? The ability of CQ drives to intelligently assess the list of requests and determine the optimal order to fulfil them would, in this scenario, just determine that they should be performed in the order 1-16. - read-ahead caching is what places blocks 2-16 in the buffer. This would also occur on non-CQ enabled drives - it still does not alter the order the requests were placed in the disk queue. I didn't dispute this - my post you quoted was specifically about when this is not the case (refer to the last line).
  9. I don't think that is true because SR will be testing different drives with the same trace. You have no control over where each LBA resides on different hard drives. Even if a single thread is running I/Os at a queue depth greater than 1, different drives may have performed the operations in different orders, because LBA X may be halfway around the disk compared to the original trace drive. Because of this, the drive under test may have performed even the single thread's I/Os in a different order. (quoted from post 213063)

    An individual thread is always going to issue the requests in the same order, whether on CQ-capable hardware or not. As the activity that requires disk I/O occurs in the system, the thread will place a request in the queue. Depending on the application, the activity will either be dependent or not. If it's dependent, the requests must be completed before additional requests are placed in the queue (the thread will most likely be on hold). The order in which the application has I/O requirements doesn't change, so the order they are placed in the queue doesn't change. If it's not dependent, the application/thread will just keep dumping I/O requests in the queue as the requirements occur in the application. Again, the order these requirements occur in doesn't change, so the order they are placed in the queue doesn't change.

    The capture/trace process captures the order in which the thread (or all threads) place(s) requests in the queue. As long as only one process has any outstanding requests in the queue at any point in time, the order they are fulfilled (in both normal operation and playback of a trace) will not alter the order the requests go into the queue. Therefore you can see that the same activity generates the same queue order (trace/capture) on any drive, CQ or not, as long as only one thread has outstanding I/O in the queue at any moment in time.
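    A toy way to see this for the dependent case (the service times below are invented; the only thing that matters is that the issue order never changes):

        requests = ["A", "B", "C", "D"]          # the order the application needs data

        def issue_order(service_times):
            # With dependent I/O, the next request is only issued after the previous
            # one completes, so the issue (queue) order never depends on how fast
            # or slow the drive services each individual request.
            order = []
            for req in requests:
                order.append(req)
                _ = service_times[req]           # drive takes however long it takes
            return order

        fast_drive = {"A": 1, "B": 1, "C": 1, "D": 1}
        slow_drive = {"A": 9, "B": 3, "C": 7, "D": 5}
        assert issue_order(fast_drive) == issue_order(slow_drive) == requests
        print("Issue order is identical on both drives:", requests)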
  10. He means thread as in a stream of dependent requests, not thread as in processes/threads (I think). (quoted from post 213088)

    Yes, I was loosely using the term thread for a couple of reasons - it was already being used, and I think you'd probably find a stream of dependent requests would typically come from the same thread anyway. Or at least, dependent requests from another thread probably wouldn't begin until the requests from the original thread were finished, so for the sake of discussion it may as well have been the one thread.
  11. Do you know why the Disk Geometry and Partition information is only listed for drive 1 and drive 2? Can you post a screenshot of Disk Management? And do you know which Windows service pack you are using? If you don't have the correct service pack installed, you can have problems with large hard drives. With this info we'll hopefully have a better understanding of what is happening. Other than that, there are data recovery tools you can download and try. They don't rely on the partitions still being correct and will scan the drive for files. You typically need another drive to copy the files it finds to. I've never actually used one, so I don't have any specific programs to recommend.
  12. Chew

    15K SCSI RAID 0 with crappy performance

    The burst results show that you are not PCI bus limited - no 32-bit/33MHz problem here. And the ATTO read results prove it. If you want to see a nice STR graph, use the Winbench99 Disk Transfer Rate test. I'd say everything looks good. These benchmarks aren't the best to use on modern RAID hardware.
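    For reference, the 32-bit/33MHz ceiling being ruled out works out roughly like this:

        # Theoretical peak of a standard 32-bit/33 MHz PCI bus.
        bus_width_bytes = 4            # 32-bit bus
        clock_hz = 33_000_000          # 33 MHz (nominally 33.33 MHz)
        peak_mb_s = bus_width_bytes * clock_hz / 1_000_000
        print(f"PCI 32/33 theoretical peak: ~{peak_mb_s:.0f} MB/s")   # ~132-133 MB/s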
  13. It looks to me like they are two partitions on one physical hard drive, and not two physical drives as you stated. Unless you have some sort of backup of the partition table, it's not really possible to restore it back to how it was (assuming it has even been changed). Which Service Pack for Windows 2000 are you using?
  14. I don't know if current HDTach versions have changed, but older versions were no good for benchmarking many RAID configurations due to the specific method they used for reading. The Disk Transfer Rate test in Winbench99 is much more reliable for testing sustained transfer rates.
  15. Chew

    Disappointed with SATA 2

    This is because USB2 has a much lower bandwidth than SATA/PATA. For example, the 480Mbps you might see advertised for USB2 is in megabits rather than megabytes. And due to the common practice in serial technologies of encoding 8 bits of data into 10 bits of transmission (to assist with timing, etc.), this translates into a maximum of 48 megabytes per second. And USB2 in general seems incapable of actually reaching even that maximum, due to some combination of technology limitations and poor implementation.
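    Taking that encoding overhead at face value, the arithmetic works out like this:

        signalling_mbps = 480                  # advertised USB2 rate, in megabits/s
        data_bits_per_10_transmitted = 8       # encoding overhead as described above
        max_mb_per_s = signalling_mbps * (data_bits_per_10_transmitted / 10) / 8
        print(f"Theoretical data ceiling: {max_mb_per_s:.0f} MB/s")   # 48 MB/s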