KCComp

The Final Word on 'SCSI performance in Windows XP'


Might one not suggest that "copying files to SCSI drives" is going to be pretty important in a reasonably significant number of usage patterns?  Just a thought.

Perhaps, though applications copying and writing files to SCSI drives is not the same thing as copying files to SCSI drives using Windows XP Explorer's built-in copy utility.

As a result, my statement should have read like this:

"Unless your usage pattern consists predominately of running atto powertools and copying files to SCSI drives using XP Explorer's built in utility, all controllers 'Work OK under XP.'"

Thank you, Eugene -- I just managed to read all the rest of the thread, and that was abundantly clear. Sorry for the silly comment. :oops:


Thanks Eugene, I think by re-reading the first post (yet again!) and melding it with your comments above I finally understand this problem.

For me, as I do perform large file copy/move jobs somewhat frequently, I guess I'll remain with Win2k/SP2 for now. I really appreciate all the effort you guys have put into this discussion.

Guest Eugene
When you are copying multi-gigabyte files, you are looking at straight STR; the subsystem cache wouldn't have any real effect there, as it would with multiple small-file copy tasks.

Oh, also, to address this slightly off-topic phenomenon... buffer hits actually do occur in write STR situations... likely due to missed rotations since write heads sometimes can't keep up with the bits per track of a platter multiplied by the drive's spindle speed.

Many people found that changing from Basic to Dynamic disks in Windows XP "reclaimed" their performance in ATTO. This is because Microsoft forgot to fix the WRITE_THROUGH issue with the Dynamic disk code path. Dynamic disks do not "reclaim" any "lost" performance, because the performance was never lost. It's an artificial benchmark phenomenon. Oh, sure, some file copy operations were affected, because they were using the WRITE_THROUGH flag. That's by design to ensure files are copied/moved with integrity. It doesn't affect 99.9% of system performance on non-server platforms.

OK, if I understand this correctly: even though I saw a benchmark improvement by converting to dynamic disks, there really wasn't any actual performance increase?

And that there might be an issue with using XP Explorer to copy files back and forth, but that there is no loss in performance in apps that use scratch disks (e.g. Photoshop, AutoCAD) or other programs that write to the HD?

So can I convert back to basic disks without taking a performance hit, and be able to use my backup programs like DI 2002 (which does not currently support dynamic disks), or should I stay with dynamic disks?

Sorry if I'm a little off-topic or rehashing things, but I just want to finally get a grip on this issue and put it to bed... I purchased a $60 copy of DI 2002 six months ago and haven't been able to use it since, all because of a stupid benchmark... although I thought my system felt faster after converting to dynamic disks... might be a myth on my part, I don't know...

"g"

Here is the last, the final, definitive scoop on XP SCSI performance.

Well, sort of.

Having skirted this issue for months, I finally pulled out WinDBG to figure out what was going on. KCComp's excellent post was pretty close to the mark.

As described, FILE_FLAG_WRITE_THROUGH is not honored on Windows 2000, nor does it propagate through the Dynamic Disk driver on Windows XP.

It is honored on Windows XP with Basic disks, and the operation is performed as requested by the application.
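(For anyone unclear on what "passing the flag" means at the application level, here is a minimal, hypothetical C sketch of a copy loop that opens its destination with FILE_FLAG_WRITE_THROUGH, so each WriteFile is meant to reach the media before it returns. The file names, buffer size, and error handling are arbitrary; this is an illustration, not the code Explorer actually uses.)

#include <windows.h>
#include <stdio.h>

/* Illustrative only: copy src to dst, asking the OS to push each write
   through the cache to the media via FILE_FLAG_WRITE_THROUGH. */
static int copy_write_through(const char *src, const char *dst)
{
    static BYTE buf[64 * 1024];   /* arbitrary buffer size */
    DWORD got, put;
    HANDLE in, out;
    int rc = 0;

    in  = CreateFileA(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                      OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    out = CreateFileA(dst, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                      FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH, NULL);
    if (in == INVALID_HANDLE_VALUE || out == INVALID_HANDLE_VALUE)
        return -1;

    while (ReadFile(in, buf, sizeof buf, &got, NULL) && got > 0) {
        /* With FILE_FLAG_WRITE_THROUGH, each WriteFile is supposed to be
           committed to the media before it returns. */
        if (!WriteFile(out, buf, got, &put, NULL) || put != got) {
            rc = -1;
            break;
        }
    }
    CloseHandle(in);
    CloseHandle(out);
    return rc;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: wtcopy <src> <dst>\n");
        return 1;
    }
    return copy_write_through(argv[1], argv[2]) ? 1 : 0;
}

On XP with Basic disks, writes through a handle like this are the ones that should carry the request all the way down to the drive; on Win2k, or through the XP Dynamic disk path, the flag is apparently dropped, which is the behavior described above.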

Why, then, is performance still a bit lower (~2.8%) when an application doesn't pass the flag? Furthermore, why do Explorer or xcopy operations take longer than they should? Surely they don't set FILE_FLAG_WRITE_THROUGH.

Fortunately, they do not. As far as I can tell, all IO operations on XP will suffer some performance loss due to the synchronous flushing of metadata. I still have some digging to do, but it appears that the NTFS log file gets FUAed to disk, even while your own data can pend in the write buffer. If you think about it, this is really the way it should work. If a sudden power loss can blow away the log, it serves little purpose.

While this makes sense, there is still one behavior I haven’t figured out. Once in a while, copy operations seem to emit a SYNCHRONIZE CACHE (10) SCSI command. With the LBA and number of blocks set to zero, this command flushes the entire cache.

For those willing to sacrifice stability for speed, a lower filter driver for Disk.sys could both clear the FUA bit and block the synchronize cache commands.
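(To make that concrete, here is a small, hypothetical user-mode C sketch of the CDB-level decisions such a filter would have to make: clear the Force Unit Access bit on WRITE(10) commands, and recognize SYNCHRONIZE CACHE(10) so it could be completed without ever reaching the drive. The opcodes and the FUA bit position come from the SCSI spec; the function names and the "action" idea are invented for illustration, and none of the actual WDM filter plumbing is shown.)

#include <stdint.h>
#include <stdio.h>

/* SCSI opcodes and the FUA bit position, per SPC/SBC. */
#define SCSIOP_WRITE10       0x2A
#define SCSIOP_SYNC_CACHE10  0x35
#define CDB_BYTE1_FUA        0x08   /* bit 3 of CDB byte 1 */

enum cdb_action { PASS_DOWN, COMPLETE_WITHOUT_SENDING };

/* Decide what a hypothetical lower filter would do with one CDB. */
static enum cdb_action filter_cdb(uint8_t *cdb, size_t len)
{
    if (len < 6)
        return PASS_DOWN;

    switch (cdb[0]) {
    case SCSIOP_WRITE10:
        cdb[1] &= (uint8_t)~CDB_BYTE1_FUA;  /* strip Force Unit Access */
        return PASS_DOWN;
    case SCSIOP_SYNC_CACHE10:
        return COMPLETE_WITHOUT_SENDING;    /* pretend the flush succeeded */
    default:
        return PASS_DOWN;
    }
}

int main(void)
{
    /* WRITE(10) with FUA set, LBA 0x1000, 8 blocks. */
    uint8_t write10[10] = { SCSIOP_WRITE10, CDB_BYTE1_FUA,
                            0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x08, 0x00 };
    /* SYNCHRONIZE CACHE(10), LBA and block count zero: flush everything. */
    uint8_t sync10[10]  = { SCSIOP_SYNC_CACHE10, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

    filter_cdb(write10, sizeof write10);
    printf("WRITE(10) byte 1 after filtering: 0x%02X (FUA cleared)\n",
           (unsigned)write10[1]);
    printf("SYNC CACHE(10): %s\n",
           filter_cdb(sync10, sizeof sync10) == COMPLETE_WITHOUT_SENDING
               ? "completed locally, never sent to the drive" : "passed down");
    return 0;
}

The trade-off is exactly the one stated above: both changes let writes sit in the drive's write-back cache, so a power loss can take data the application believed was committed.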

Now a question for the SR community: is it worth it?

Guest russofris
Now a question for the SR community: is it worth it?

Is it worth data loss?

For Windows XP, the answer is undoubtedly yes. These are home systems, and the majority of us accept the fact that a sudden catastrophic power loss means we lose data that is currently moving/unsaved/uncommitted.

Is it worth the time to re-write disk.sys?

I honestly have no idea. Since the only driver I have ever written was an Opti 929 DirectSound driver, I would not be able to assess the amount of work that has to be done.

I have a couple of friends in the XP group down the road (not sure which XP group... Firewall, I think). I might be able to mention it to them next time I see them.

Thank you for your time,

Frank Russo

Is it worth the time to re-write disk.sys?

There is no reason to rewrite disk.sys. I could probably take care of this in a couple of days.

The question is, do I really want to spend time undoing something that was written the way I would have wanted it written in the first place?

There was a similar disagreement surrounding the ATA write cache in FreeBSD. In the end, the users won, and the OS designers lost. For all their complaining about stability, most (power) users just want their systems to fly.

Leave it to SR to have to dig up the answer that MS should have been able to provide on day 1 of this controversy.

Actually, if I had any self-interest in this issue, you would have had your answer in the spring. Blame the weakening SCSI value proposition.


Could someone explain this to me...

1. Perform a fresh boot of WinXP. Using 15K SCSI (boot disk), I un'rar a 600 MB file. It completes in 40 secs.

2. Reboot the machine (15K drive still the boot disk). I un'rar the same 600 MB file on my IDE drive. It completes in 25 secs.

I haven't read through all of the posts, but what I did read made sense. That was: only the copy command was slow, and in other uses you would see the performance you expect. It was just that a large copy being slower was confusing people.

So, add unzipping to the list.

Guest Eugene
Could someone explain this to me...

1. Perform a fresh boot of WinXP.  Using 15K SCSI (boot disk), I un'rar a 600 MB file.  It completes in 40 secs.

2. Reboot the machine (15K drive still the boot disk).  I un'rar the same 600 MB file on my IDE drive.  It completes in 25 secs.

I haven't read through all of the posts, but what I did read made sense. That was: only the copy command was slow, and in other uses you would see the performance you expect. It was just that a large copy being slower was confusing people.

So, add unzipping to the list.

Which 15k drive and which IDE drive?


Could someone explain this...........

I have just updated from a Soyo Dragon+ KT266, Athlon 1600+, to a Soyo Dragon Ultra Platinum KT400, Athlon 2400+. Instead of reinstalling Windows XP Pro, I just swapped the boards and CPUs, thinking that both boards are about the same. Also, I wanted to see if Windows XP would freak or not. Worked without a flaw. My system was a lot faster; I was thinking, WOW!! 600 MHz of CPU power really does make a difference. I went straight into Quake 3 and started playing. I noticed that my map load time was a lot faster too, beating most everyone to the next map. Well, I got to thinking about it and decided to get out and run ATTO just for kicks. I just knew it was going to be about the same as always: 12 MB/s read and write with write cache enabled, or 12 MB/s write and 40 MB/s read with write cache disabled. Not so this time: with write cache enabled I got 47 MB/s write and 56 MB/s read. Whoa!! All that from a motherboard change... WOOHOO!!! Well, that's just great, I finally get my performance from my Quantum Atlas 10K III and Adaptec 19160 SCSI setup.

Well, I #&$@ed up today: I installed the new VIA Hyperion drivers and left the check mark by the IDE PCI bus driver. Oops... I get click-happy sometimes. Now I'm back down to 12 MB/s read and write. #$*@$( I don't really know if that is the cause or not, but it worked fine till I installed that damn IDE driver.

Later this week I'm going to reinstall my old MB and CPU, reformat, and change back to my new MB and CPU, just to see if it was just dumb luck or not... hehe, worth a try, right? I'll post again to let ya know.


I've read through this thread and don't recollect seeing anything about this issue: since add-in IDE cards like the Promise Ultra-100 are "considered" to be SCSI controllers in Win2k/XP, are they also affected by the problems identified by Cas and Eugene?

Thanks.

Guest russofris
I've read through this thread and don't recollect seeing anything about this issue: since add-in IDE cards like the Promise Ultra-100 are "considered" to be SCSI controllers in Win2k/XP, are they also affected by the problems identified by Cas and Eugene?

Thanks.

The way I understand it, some are, some aren't. It all depends on whether or not FILE_FLAG_WRITE_THROUGH is properly implemented in their SCSI miniport driver.

Please correct me if I am wrong.

In my opinion, HDD write caching should be a global setting for workstations. Having applications bypass it is only detrimental to performance (I can't think of an app that would gain performance).

Thank you for your time,

Frank Russo


Actually, KCComp's comments are correct. The ATA command set lacks the Force Unit Access bit, used to commit writes on a per-IO basis. Although Disk.sys will pass write commands with the FUA bit set, the bit itself will get lost in the translation to an ATA command.

Personally, I believe that any cache controls should be handled at the file system level. The OS itself should do everything it can to ensure the integrity of IO requests.

In the absence of such controls, however, I am inclined to go ahead and make a filter driver available.


So then are you saying I can convert back to basic disks without taking a performance hit, and be able to use my backup programs like DI 2002 (which does not currently support dynamic disks), or should I stay with dynamic disks?

Or is the performance really better using dynamic disks, then?

"g"

Alright, I bit the bullet and did what I should have done a long time ago when cas was willing to devote his time to the SR community by programming his copy utility.
The question is whether this massive tide of anti XP-SCSI sentiment that's swept not just SR but the enthusiast community as a whole can be stemmed. 

One simply needs to count the number of recent threads here depicting many users agonizing over whether they should use 2k instead of XP simply because of this supposed XP bug.

I don't mean to come off as overly harsh on MS, but let's face reality here: there have been write-ups of this problem all over the net, and nobody from MS released any information to explain what is going on, leaving us to speculate in frustration for months.

Leave it to SR to have to dig up the answer that MS should have been able to provide on day 1 of this controversy.

Actually, folks from Microsoft close to the Windows code in question have released information. Unfortunately, who they are is not readily visible. Why are they being so coy and not touting such credentials? I have no idea.

Eugene,

Considering KC's summary, cas's further investigation and refinements, and your own personal testing, might an official SR article/stance outlining the truth behind the "XP-SCSI issue" not be in order?

Whether it was directly attributable to the SR community I don't know; however, I do believe that the SR community played an instrumental role in the propagation of this "issue" throughout the enthusiast community and computing circles. Therefore, what better source than SR to demystify this subject and send out another splash, whose ripples shall bring forth enlightenment to the rest of the herd?

Perhaps you, cas, and KC (if they were willing, their time permitted, and they deemed it worth their while) could co-produce an SR article, one which also tries to bring MS on board with statements and credentials that are clearly articulated. It appears more than ever that now is the time to put this issue to rest.

The question is whether this massive tide of anti XP-SCSI sentiment that's swept not just SR but the enthusiast community as a whole can be stemmed.

I think you already know the answer, and I think it is you guys who are in the best position to do so... please, cast your stone.

Forgive me if you've addressed this elsewhere.

Cheers, CK

So then are you saying I can convert back to basic disks without taking a performance hit, and be able to use my backup programs like DI 2002 (which does not currently support dynamic disks), or should I stay with dynamic disks?

Or is the performance really better using dynamic disks, then?

"g"

The idea is this: some things ARE slower, but only because they are being done the safer, more correct way. It depends on the application in use.

To decide what's better for yourself, you would have to try both and compare the performance with the applications you typically use. Don't worry about benchmark-only tools such as ATTO, as these may or may not behave correctly. Try your applications, and if you observe any increased performance with Dynamic disks (or with Win2k for other readers) then there's your answer.

You have to remember though, the (possible) increased performance comes with the increased risk of data loss. Does this matter to you? Only you can decide that one for yourself.


You have to remember though, the (possible) increased performance comes with the increased risk of data loss. Does this matter to you? Only you can decide that one for yourself.

Another way to read this is that Windows 2000 has been putting users' data at peril for quite some time. Is this the case?

Are there many stories of data corruption that are simply unreported, or that people may have blamed on hardware?

Guest Eugene
Another way to read this is that Windows 2000 has been putting users' data at peril for quite some time. Is this the case?

Are there many stories of data corruption that are simply unreported, or that people may have blamed on hardware?

These are interesting questions that the information we've been given certainly raises... I'm not sure how much more corruption or general data loss would arise from the issue, but I suppose it's one of the only motives we can speculate Microsoft may have had for not declaring the issue more publicly.

Another way to read this is that Windows 2000 has been putting users' data at peril for quite some time.

Tread lightly, Yuyo: Linux doesn't support the FUA bit at all. Or at least it didn't the last time I checked (around XP's release date).

So then are you saying I can convert back to basic disks without taking a performance hit, and be able to use my backup programs like DI 2002 which does not currently support dynamic disks ... ?

Yes, that is what I am saying.

The code can be found here.

Another way to read this is that Windows 2000 has been putting users' data at peril for quite some time.

Tread lightly, Yuyo: Linux doesn't support the FUA bit at all. Or at least it didn't the last time I checked (around XP's release date).

Cas, my question is an exercise in intellectual curiosity. Yes, I am a Linux user, but you are reading too much into my comments.

It would be interesting, although I imagine expensive and difficult, to set up a lab of equally equipped computers and subject them to a barrage of file copying routines (maybe for weeks at a time), with some sort of driver to monitor and verify the integrity of the data written to disk.

Try this in both Windows XP and Windows 2000 and find out what happens. Throw Linux in the mix if this scratches someone's itch, although I suspect this could prove disastrous and a distraction to the main goal of identifying any potential data reliability differences between XP and Win2K. Even this could make it hard to separate hardware from software issues.

BTW, Cas, your comment about Linux's lack of support for the FUA bit actually reinforces the point that I am trying to make: stories of data corruption simply do not abound, either in Linux or in Win2K. There are Linux servers with uptimes of years; my home file server uses ext2 (not the most reliable file system) and has been in use 24/7 for over two years without any downtime. I suppose that there are equally reliable Win2K installations.

Thus, how significant and reproducible is the supposed data corruption that led to the change in the way Explorer handles the WRITE_THROUGH flag?

My hypothesis: This must have been significant enough for Microsoft to be willing to take a performance hit. If so, maybe we have just been very lucky with our data up to now. Why Microsoft wouldn’t elucidate these important issues in a more public manner is something I do not quite understand.

Either way they approach this issue, they stand to gain. Here's how I see Microsoft's PR machine addressing it:

1) We have discovered and fixed a problem to ensure the integrity of our users' data.

2) There is no problem. Microsoft is certainly very comfortable with the data integrity provided by its products UP TO NOW, yet we have decided to err on the side of caution.

Either answer would be acceptable. No answer, hoping the issue will go away, is not.


In the absence of power failures, you could run from a RAM disk. If the conditions from which NTFS and ext3 are designed to recover never appear, their logging is a waste of time. If you have chosen the additional integrity of a journaling filesystem, though, it's nice to be able to use the write-back cache.

Remember, too, that metadata is only part of the picture. NT permits application programs to write through the write-back cache as well. This permits an RDBMS to use the write-back cache while maintaining the integrity of its transactions.
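(As a hypothetical illustration of that point, the C sketch below opens a log file with FILE_FLAG_WRITE_THROUGH so each log record is pushed out as it is written, while the data file stays in the write-back cache and is only flushed explicitly at a commit point with FlushFileBuffers. The file names and the "record" are invented; a real database engine is obviously far more involved.)

#include <windows.h>

/* Hypothetical sketch: a log file whose writes go through the cache,
   plus a data file that uses write-back and is flushed only on commit. */
int main(void)
{
    DWORD n;
    const char rec[] = "BEGIN;UPDATE...;COMMIT\n";

    HANDLE log = CreateFileA("txn.log", GENERIC_WRITE, 0, NULL,
                             OPEN_ALWAYS,
                             FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                             NULL);
    HANDLE dat = CreateFileA("table.dat", GENERIC_WRITE, 0, NULL,
                             OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (log == INVALID_HANDLE_VALUE || dat == INVALID_HANDLE_VALUE)
        return 1;

    /* Log record: must be on the platter before the transaction counts. */
    WriteFile(log, rec, sizeof rec - 1, &n, NULL);

    /* Data pages: fine to sit in the write-back cache for a while... */
    WriteFile(dat, rec, sizeof rec - 1, &n, NULL);

    /* ...until the commit point, when an explicit flush is requested. */
    FlushFileBuffers(dat);

    CloseHandle(log);
    CloseHandle(dat);
    return 0;
}

The point of the split is the one made above: the bulk of the IO still benefits from the write-back cache, while the few writes that actually guarantee integrity are the only ones paying the write-through cost.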

That you are unconcerned with these issues is fine. In fact, in that case, I wrote a filter driver just for you.

