MortySnerd

Partitioning For Speed?


Well, now you're getting nasty. No need for that. This stuff isn't that important.

But it is incumbent upon the speaker to ensure he is clearly understood.

Can someone please tell me whether Gilbo is ceding validity to our point with what he said here:

This argument is essential to the thesis of those advocating partitioning to improve performance, and the consequences of its inadequacy are far-reaching. Because of localization, this constituted the position's only genuinely valid argument.


I am not Buddylite. Check my IP. Nor am I trolling.

I simply think you are arrogant with your assertion that anyone who doesn't agree with you is wrong. (You know, that quote you made.)

If you are offended, I will cease posting now.

If you're using Windows (2k/XP/2003, for that matter), there is a tendency to do a fresh reinstall every now and then because Windows gets slower and slower day after day. (You can defrag, but it doesn't help much... no? Read on...)

Also, 160 GB is a lot of space for Windows system files + program files. There is a better way to optimize your disk for performance and, better yet, to prolong that performance.

I have heard some suggest that multiple partitions on a single drive is stupid. If you have only one hard drive in your PC, multiple partitions can benefit you.

The idea is to confine intensive disk activity to the beginning of the drive and the least activity to the end of the drive. My advice is to make 2 partitions (say, drives C and D).

1) Drive C for

- Windows system files (the "WINNT" and "Documents and Settings" folders, plus the IE cache)

- temporary files, like the Photoshop scratch disk, BT incomplete files, WinRAR .r01/.r02/.r03 parts, video clips being edited, etc.

2) Drive D for

- "Program Files" (install all programs, like Office 2k, games, etc., here)

- "My Documents" and personal data: doc, xls, mp3, jpg, mpg, divx, rm, mov, zip, rar... even software installation files, etc.

The stuff in the WINNT and "Documents and Settings" folders is dynamic, the most frequently accessed, and the most fragmented (especially when your routine is service pack, IE, program installation, service pack, IE, service pack :) ), so placing it on the first partition of your disk will benefit you.

Choose a small partition for drive C. Defragmenting 160 GB is a long task, so keep your drive C partition big enough for Windows but not too big; for Win2k/WinXP, 20 GB is more than enough. Defragmenting a 20 GB partition is fast and fun, I am sure you will love it.

Also, if you use BT, you should use drive C for all your new incomplete BT downloads. Remember to move each completed download to drive D once it finishes; that keeps fragmentation on both C and D to a minimum, and BT will download a lot faster.
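If you want to script that move, here's a rough Python sketch (the folder names and the ".part" extension for unfinished files are just my assumptions; adjust to your own client and setup):

CODE

# rough sketch: sweep finished downloads off the fast C: partition to D:
# the folder names and the ".part" suffix are assumptions -- adjust to taste
import shutil
from pathlib import Path

incomplete = Path(r"C:\BT\incomplete")   # fast outer partition
complete = Path(r"D:\BT\complete")       # bulk storage partition

complete.mkdir(parents=True, exist_ok=True)
for item in incomplete.iterdir():
    if item.suffix != ".part":           # skip files still downloading
        shutil.move(str(item), str(complete / item.name))
        print(f"moved {item.name} to {complete}")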

The idea is to confine intensive disk activity to the beginning of the drive and the least activity to the end of the drive. My advice is to make 2 partitions (say, drives C and D).

I'd probably go one further.

My 120 GB HDD has 3 partitions:

1st, C: drive, 10 GB, carrying Windows XP.

(I did this to minimize the distance the heads have to travel when accessing Windows files; the bigger the partition, the further the Windows files can scatter.)

2nd, D: drive, "Dump", 80 GB of storage.

3rd, E: drive, 20 GB temp directory for incoming downloads from P2P and Windows temp files.

(The partial files on this drive as I type: a 3 MB file in 30,000+ fragments, and 600 MB in 4,248,375 fragments.)

A SEPARATE HDD holds the page file.

I know that doesn't add up to 120, but there are only 112 GB of usable space.


<<sigh>>

As a complete newbie (or at least, most of the discussion here has gone way above my head), here are the conclusions I have drawn:

1:) The fastest part of a HDD (for transfer rates and access times) is the outer part, i.e. closest to the outer edge.

2:) A drive will fill up from the outside in (i.e. fastest first, getting slower as it fills up).

3:) For maximum speed, the heads need to be as close as possible to the data they are accessing... ?

With all this in mind, I have my drives set up as follows (ignoring overheads for formatting etc):

Hdd0 (System) (total 200GB)

30GB Windows XP (NTFS), including the paging file. I have 1GB of RAM and rarely use Photoshop (and at a very amateur level), so the scratch file is here also.

170GB (various Linux partitions/data) for learning how to use Linux

Hdd1 (Data) (total 250GB)

50GB (NTFS) "working" data partition (first partition)

200GB (NTFS) "storage" partition (for "read only" mainly media files)

The point I'm getting at here is: if you have a large amount of effectively "read only" data (FLACs and DivXs in my case), then why waste the fastest part of your drive by storing it on the outer physical region? Surely it would be better to store this kind of stuff on a part of the drive where STR and access time are pretty much irrelevant? If you have a single big partition, then the heads would *always* have to traverse the static media files in order to get to the "working" data, which would end up being saved to the slower parts of the HDD nearest the spindle?

There's also the defragging aspect which someone mentioned above: why shuffle around hundreds of gigabytes of data if most of it is rarely modified? You could keep your "working" data on a separate fast/small/easily-defragged partition. Personally, I *HATE* defragging; it just seems like playing poker with my data...

Please be gentle with me if I've missed something obvious...


If you can't understand what Gilbo and the Konsensus have already stated, then I don't think you have any business carrying on an argument with someone over the age of 12.

QUOTE

If you can't understand what Gilbo and the Konsensus have already stated, then I don't think you have any business carrying on an argument with someone over the age of 12.

Was this aimed at me?


Ummm...right...OK, well the whole issue hardly seems worth stressing or wasting too much time over.

Thanks anyway guys, see you around... :)


Gilbo,

Enjoy the "noticeable" performance improvements of your partitioned hard drive.

It was me who said that ...... :blink: ...... please don't give others credit for my ignorance ...... :lol: !

Another thought or question:

On an NTFS drive, the MFT is located somewhere near the middle of the drive. To read a file, the heads have to go to the MFT to pick up the first piece and then elsewhere to pick up the rest.

Starting Windows involves reading a lot of small files. I don't know how many, but for the sake of argument, let's say 5,000 files.

The Hitachi 7K250 has an average seek time of 8.5 ms, track-to-track 1.1 ms, full stroke 15.1 ms, all excluding latency.

On a 250 GB drive with one partition only, we can estimate that reading a file involves at least one half-stroke seek (the MFT in the middle, the system file at the front). 5,000 files × 7.55 ms = ~38 seconds.

If we partition and make the system partition 5% = 12.5 GB, then the MFT and the rest of the files will be much closer to each other. I believe seek time is reduced by much more than 50%, maybe 75%, which would save 19 to 28.5 seconds.
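Putting my numbers into a quick script (just a back-of-the-envelope sketch using the 7K250 figures above, nothing measured):

CODE

# back-of-the-envelope seek-time estimate, figures from the 7K250 spec above
files = 5000
full_stroke_ms = 15.1

# one partition: MFT mid-disk, system files at the front,
# so assume roughly one half-stroke seek per file read
half_stroke_ms = full_stroke_ms / 2                              # 7.55 ms
print(f"one partition: ~{files * half_stroke_ms / 1000:.0f} s")  # ~38 s

# small 12.5 GB system partition: assume seeks shrink by 50-75 %
for reduction in (0.50, 0.75):
    saved = files * half_stroke_ms * reduction / 1000
    print(f"{reduction:.0%} shorter seeks would save ~{saved:.1f} s")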

Am I totally wrong?

Christer


Nick9000,

The way you describe your partitions makes a lot of sense to me.

Still, just two points:

1- You shouldn't notice the difference in STR between outer and inner tracks except in benchmarks, so don't bother;

2- 170 GB for various OS testing is not the way to go. VMware will let you test any OS you like without requiring any partitioning or even a reboot. Check it out at www.vmware.com for the trial version (and after one month, well, you'll be on your own).

On a 250 GB drive with one partition only, we can estimate that reading a file involves at least one half-stroke seek (the MFT in the middle, the system file at the front). 5,000 files × 7.55 ms = ~38 seconds.

As I pointed out before in this thread, the MFT will generally be cached in memory (assuming you have enough memory), so you shouldn't have to read the disk twice to get at a file. The problem is if there isn't enough memory and the disk cache is paged out or cleared. Then you'll be reading the MFT from the pagefile, or uncached from disk, and two accesses will be needed to get the file.


Gilbo, you made some salient points about why partitioning for performance(-only) reasons is really a fool's errand. However, I want to question a couple of the minor technical sub-points, and add some facts about others that might slightly change the expected results in some small areas, though overall I agree with your reasoning.

The first is the assumption that installing applications results in their files being written together to a localized portion of the disk. This is contingent on a) the system being defragmented regularly, or at least immediately before application installation, and b) defragmentation "properly" sorting file positions by association and locality. That is not always true, certainly not out of the box, given the behavior of the default disk defragmenter in W2K and XP.

(OT, but I've thought about the "ideal" disk-layout optimizer that would do just that: calculate "relationship clusters" between all of the files on disk and lay them out in such a way that seeks between both locally and globally related filesystem objects would be minimized. I'm sure there are still active research opportunities in this area. Microsoft's application-profile preload cache/optimizer is a start.)
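(To make that idea concrete, here is a toy sketch of what scoring those "relationship clusters" might look like, given a trace of file accesses. Everything here, the window size, the trace, the scoring, is hypothetical; no shipping defragmenter works this way as far as I know:)

CODE

# toy sketch: score how often pairs of files are accessed close together,
# so a layout optimizer could place high-affinity files near each other.
# purely hypothetical -- not how any real defragmenter works.
from collections import Counter
from itertools import combinations

def affinity(trace, window=8):
    """Count co-occurrences of file pairs within a sliding window."""
    scores = Counter()
    for i in range(len(trace)):
        for a, b in combinations(sorted(set(trace[i:i + window])), 2):
            scores[(a, b)] += 1
    return scores

# hypothetical access trace captured during boot
trace = ["hal.dll", "ntoskrnl.exe", "hal.dll", "msvcrt.dll",
         "ntoskrnl.exe", "app.exe", "msvcrt.dll", "app.exe"]
for pair, score in affinity(trace).most_common(3):
    print(pair, score)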

The second is a question about your statements that system files are cached but not paged. You also mention that system files are loaded into RAM on boot and remain there. That does not match my understanding of how modern NT-based Windows OSes work. NT has supported demand-paging of both user-mode and kernel-mode system files for some time now, certainly before Linux added that support. It is also unclear to me what you mean by "system files": core kernel files like HAL.DLL, or user-mode system DLLs like MSVCRT4.DLL? It was my understanding that as long as you don't set the "DisablePagingExecutive" registry entry, the NT kernel can page out pretty much all of itself, except for the very core bits of code at the heart of the kernel.
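(For the curious, that value lives under the Memory Management key; a quick Windows-only Python sketch to check it, assuming a reasonably modern Python with the standard winreg module:)

CODE

# check whether DisablePagingExecutive is set (Windows only)
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    try:
        value, _ = winreg.QueryValueEx(key, "DisablePagingExecutive")
    except FileNotFoundError:
        value = 0  # value absent: default behaviour, kernel code may be paged
print("kernel paging disabled" if value else "kernel code may be paged out")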

I would also like to point out that in the supposed "worst case scenario" of opening a large data file and its associated application from the same disk spindle at the same time, the effects (I think?) should be mitigated somewhat by the app-preload features present in most modern versions of Windows. It doesn't eliminate the contention entirely, but it should allow batching of disk I/Os to load the application more optimally than simply demand-paging it in, 4 KB by 4 KB, as the application executes its internal bootstrapping routines.

I think that Longhorn will improve a lot in this area, doing massive batching of disk I/O, reordering the requests before sending them to the drive, and using "barriers" to allow the drive to reorder even further internally within groups of I/O requests. It's conceivable that it could even be smart enough to send off two separate groups of disk I/O requests, each pertaining to a localized area of the disk spindle, so that no excessive seek contention is ever noticeable and the aggregate set of disk I/O requests completes faster.

(I don't have the Longhorn developer's beta, but I would think that SR would be very interested in testing its new caching and disk I/O strategies, and how they affect the performance of various representative workloads.)

I'll conclude by stating that my personal preference is for two partitions, one for the OS and one for "bulk data", plus optionally one for special purposes such as a staging area for video capture or DVD ripping/encoding work. The reasoning is organisational (Ghosting) and data-integrity/security (in case Windows eats the MBR or something) before it is about performance, although I do like to keep the pagefile as part of the small OS partition near the fastest part of the drive. The delta in performance between the start and the end of the disk is not what it was back in the "old days" (of sub-1 GB IDE HDs), but old habits die hard; back then there were significant performance differences between having the pagefile at the beginning or at the end of the disk, not to mention keeping the pagefile close to the FAT tables near the front of the disk so that seeks between them were minimized.

There's also the defragging aspect which someone mentioned above: why shuffle around hundreds of gigabytes of data if most of it is rarely modified? You could keep your "working" data on a separate fast/small/easily-defragged partition. Personally, I *HATE* defragging; it just seems like playing poker with my data...

Now that, IMHO, is an excellent point. Some of Gilbo's reasoning for why partitioning for performance is pointless was predicated on the existence of a regular defragmentation procedure. However, he did tend to skimp on weighing the time each defragmentation pass takes against how much time it saves overall while using the system in between defragmentation runs.

(Think of it almost like a duty cycle, where the defrag periods are the "off" part and the time in between, when the computer is usable, is the "on" part. The tradeoff is accepting a known "off" period, a 100% loss of performance for a stretch of time, in exchange for the unknown benefit of hopefully 0% loss of performance during the "on" periods, when the system is actually being used.)

Personally, I defrag... well, mostly never. That would indicate that, in my situation, "partitioning for performance" is a tiny bit less than 100% irrelevant. I guess I would prefer a 100% "on" duty cycle at 70% efficiency to a 70% "on" duty cycle at 100% efficiency, since the latter means dealing with a 30% "off" duty cycle, which effectively means 0% efficiency during that period. I believe that partitioning, in this sort of instance, can marginally raise the working efficiency. However, it is third in line among the reasons why I partition, not first.
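(Run those numbers and the two options actually come out dead even, which is exactly why the choice ends up being about availability rather than raw throughput:)

CODE

# effective efficiency = fraction of time usable x efficiency while usable
never_defrag = 1.00 * 0.70  # always available, 70 % efficient
defragged = 0.70 * 1.00     # 30 % of time lost to defrag, then 100 % efficient
print(never_defrag, defragged)  # 0.7 vs 0.7 -- a wash on paper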


The best way to stop this argument seems to be: BUY MORE DISKS!!!

IMO the performance difference between a disk with a single partition and one with 2 partitions is negligible*, while putting data on a separate partition from the OS, even on the same disk, makes sense in case of OS trouble.

*unless one is stupid enough to put the OS on the first/outermost partition, data on a second partition, and temp files and/or swap on a third/innermost partition

QUOTE

Why the hell would NTFS put MFT files in the middle of the drive?

I'm not able to locate the source of that information, but the reserved MFT zone starts at roughly 1/3 of the volume, and with the default 12.5% reservation it ends near the middle (1/3 + 12.5% ≈ 46% of the volume). This is so the MFT is roughly "equidistant" to all files on the volume, though apparently shifted slightly towards the front. For the sake of argument, I approximated the position as "in the middle".

QUOTE

Isn't it a second copy of the beginning of the MFT? I've also read that a converted FAT volume has the MFT in the middle...

I think you're right about the second copy of the beginning of the MFT, but I don't know anything about converted volumes. However, it doesn't make me wrong.

An MFT consists of a minimum of two fragments: one with the metadata at the front of the volume, and the other MFT records in the reserved MFT zone.

The reserved MFT zone ends near the middle of the volume, and putting the backup of the first part of the MFT there seems logical to me.

Christer


QUOTE (HisMajestyTheKing @ Aug 19 2004, 07:00 AM)

I never have any data on my C: anymore. "My Documents" is mapped to D: (which in my case is another disk entirely, but this doesn't matter), as are my mails. Only my Favorites are on C:. I guess I have to find where Firefox stores them so I can put them on my D: as well.

TweakUI will relocate your Favorites.


No need to use TweakUI.

In Windows Explorer, drag the Favorites folder with the right mouse button to the desired location; when you release the button, choose "Move Here".

The change is reflected by an automatic edit of the registry. To verify, search the registry editor for "User Shell Folders" and view the paths listed there.
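(If you'd rather script the check than click through regedit, here is a quick Windows-only Python sketch that dumps those paths using the standard winreg module:)

CODE

# list the per-user shell folder paths (Windows only)
import winreg

KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
    i = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, i)
        except OSError:       # no more values to enumerate
            break
        print(f"{name} -> {value}")
        i += 1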

Christer


Partitioning for speed is IMO a mistake. (Possibly my car would be faster if I stripped out the brakes, but I don't care.) I always set up the system drive with multiple partitions including 2 or 3 primary (alternate C:) partitions. Currently I've got Win2k and WinXP on my main machine.


Actually, you can have a single volume spanning several disks, or several volumes on a single disk.

To sum it up, could we agree that partitions affect organisation and disks affect performance, and that anything else is just potential micro-optimisation hardly worth any presumption?

I have two partitions, OS and data, for organisation (easier backup/restoration). My data is on a distinct drive to get better performance, but that has nothing to do with partitioning, as I could achieve the same by simply making a directory point to the second drive. I would agree that partition size could affect memory use and speed on older filesystems, but hasn't that become completely irrelevant on any modern FS?

Hdd0 (System) (total 200GB)

30GB Windows XP (NTFS), including the paging file. I have 1GB of RAM and rarely use Photoshop (and at a very amateur level), so the scratch file is here also.

170GB (various Linux partitions/data) for learning how to use Linux

Hdd1 (Data) (total 250GB)

50GB (NTFS) "working" data partition (first partition)

200GB (NTFS) "storage" partition (for "read only" mainly media files)

Nick,

The only recommendation I have for you would be to add a fixed-size pagefile to the first partition of your second, non-OS drive. You may as well leave the pagefile you already have on the OS drive; it will only be used if Windows panics and needs to write a dump file.

As you have plenty of disk space, I'd make the second-disk pagefile equal in size to the amount of RAM you have.

-- Rick


By running one of those programs that eats all available memory, and watching what happens when virtual memory is all consumed.
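(Such a program is only a few lines; a crude Python sketch:)

CODE

# crude memory-eater: grab 100 MB chunks until allocation fails
chunks = []
try:
    while True:
        chunks.append(bytearray(100 * 1024 * 1024))
        print(f"allocated {len(chunks) * 100} MB")
except MemoryError:
    print("out of virtual memory")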

