ericg

Pagefile Size Should Be >= Ram


Anyone with a decent amount of RAM can either set their pagefile size equal to RAM, or experiment and set it to peak need.

However, a WinNetMag article recommends setting it equal to RAM because it can improve performance. Has anyone noticed the slowdown that John says can occur with a small pagefile?

Two readers have written to question my comments on the Win2K pagefile size in my column in the January 2000 issue of Windows NT Magazine (soon to be Windows 2000 Magazine). To reiterate: You need a pagefile at least as large as the physical RAM in your system. And if you add RAM, you'll need to expand the pagefile. The reason is that Microsoft designed Win2K for nonstop operation. Other OSs—including early versions of OS/2—allow pagefile sizes significantly smaller than physical RAM. Under certain conditions, the system might need to page out more virtual memory than would fit in the file—and the system would hang. By making the minimum pagefile size in Win2K the same size as the RAM, this memory overrun won't happen. That doesn't mean you can't run out of virtual memory in Win2K—you can; the system will try to expand the pagefile, and in the process, the system will slow to a crawl, but it won't stop running. So, to those growing system requirements for Win2K Server, add at least as much hard disk space as physical RAM. On a Win2K AS system with 8GB of RAM, you'll need at least 8GB of space for a pagefile, over and above the 2GB minimum requirement.


What is that guy smoking, and what are these "certain conditions"? I worked with a measly 1.5 or 2GB of RAM for years in Win2K. The fixed 400MB pagefile never tried to increase, nor were there any slowdowns. So with 4GB of RAM in my current system, I should set the pagefile to 4GB? No way.

Anyone with a decent amount of RAM can either set their pagefile size equal to RAM, or experiment and set it to peak need.

However, a WinNetMag article recommends setting it equal to RAM because it can improve performance. Has anyone noticed the slowdown that John says can occur with a small pagefile?

Step 1: Let Windows control your swap file

Step 2: Open Perfmon and profile your usage via its tracing capabilities (for a day or more)

Step 3: Look at the trace

Step 4: Adjust the PF accordingly

Home users with >1GB generally don't need a PF at all (I usually recommend 1.5GB). A couple of things to note: first, the system cache is a good thing. If you disable the page file and cut the amount of RAM available to the system cache in half, you're not buying yourself much. Second, some (broken) apps may break even more with the PF disabled.
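As a rough sketch of step 4, assuming you size a fixed pagefile to the peak commit charge seen in the trace plus some headroom (the 1.25 factor and the 64MB rounding are my own assumptions, not part of the procedure above):

```python
def pagefile_size_mb(peak_commit_mb, ram_mb, headroom=1.25):
    """Suggest a fixed pagefile size from a Perfmon trace.

    peak_commit_mb -- highest commit charge observed in the trace
    ram_mb         -- physical RAM in the machine
    headroom       -- safety margin over the observed peak (assumed value)
    """
    # The pagefile only needs to back the commit that exceeds RAM,
    # plus a margin for loads the trace didn't capture.
    overflow = max(peak_commit_mb * headroom - ram_mb, 0)
    # Round up to a 64MB boundary to keep the fixed size tidy.
    return int(-(-overflow // 64) * 64)

print(pagefile_size_mb(peak_commit_mb=1200, ram_mb=1024))  # 512
print(pagefile_size_mb(peak_commit_mb=600, ram_mb=1024))   # 0
```

If the traced peak never exceeds RAM, this suggests no pagefile at all, which matches the ">1GB home user" case above.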

Thank you for your time,

Frank Russo


IIRC (and I may not), Pagefile = RAM is only actually required for a STOP error that results in a memory dump. Basically, if your software takes a dump, you'd better make sure your toilet can handle the size of the log... or something to that effect.


On top of that, the maximum size for an individual pagefile (and a dump) is 4095MB. If you've got 4GB of RAM or more, you can't take a full dump anyway. And if you're paging over 1GB of virtual memory, your system will be crawling and you'll be in a world of hurt, unless it's a very bizarre, unique situation.

The only systems I've seen which require a large pagefile are large Terminal Servers (>50 users/machine).

Most of the machines I work on have 1-2GB pagefiles, though, since disk is cheap. I haven't noticed any difference on the machines with 4095MB pagefiles.

I recommend following BBH's advice.


The Big Buck Hunter's method is good for fine-tuning and finding the optimal size; however, it is unlikely that a home user's workload will have any of this consistency.

I have 512MB of RAM and the pagefile disabled. I don't have any complications with that; my system works great. I'll admit I don't use memory-hungry apps or run any critical tasks.

Of course, I'm not able to get dumps when the system stops, but I don't need those either. Recently I had to manually set the pagefile to 128MB just to run UT2004 properly, but once I add another 512MB I'll surely disable it again. Oh, and the system shuts down much quicker.

My point: it is rather presumptuous to dictate what a user should and should not do. It's all about needs, and maybe taste :)

My point: it is rather presumptuous to dictate what a user should and should not do. It's all about needs, and maybe taste :)

The only things that my advice presumes are:

1: The user's application profile does not experience extreme variations; system performance will be less than optimal during such variations.

2: The user doesn't add/remove physical RAM; otherwise the user will have to re-run the procedure (that falls under the category of "duh", though).

Thank you for your time,

Frank Russo


I have 1GB of RAM and a swap file that is 4MB big. It's fantastic... it never swaps, and Photoshop never whines that there is no PF.

Win2K's virtual memory implementation is lousy: if I copied 40GB of files from one HD to another, the system would come to a halt once it finished, for some odd reason. No such problems anymore.


I personally believe that no matter what machine you have, from large enterprise machines to home ones, one thing is clear: you need a paging/swap file. I don't care how much RAM you have, or how much or how little you do with your computer; I have always recommended, and always will recommend, a swap/paging file. Notice that server OSs do not allow you to remove or disable paging files (talking strictly Windows here; I don't know about other OSs), while in XP Pro you can. To me, that is useless; always have a paging file of at least half your RAM size. Even if you use performance monitors and little doodads, forget them. There is a very good reason why paging was invented, and it should be used! I find it beyond me why no one ever mentions Task Manager as a way of measuring speed and performance; it offers a wealth of knowledge for free, no cost required!

SCSA

Of course, I'm not able to get dumps when the system stops, but I don't need those either. Recently I had to manually set the pagefile to 128MB just to run UT2004 properly, but once I add another 512MB I'll surely disable it again. Oh, and the system shuts down much quicker.

UT2k4 definitely needs a gig of RAM. I just added 512MB to mine and now it (the game) loads MUCH quicker! _MUCH_


So are you saying that Linux isn't a server OS? A few million people disagree with you, including me. Linux easily lets you disable the swap file if you want; in fact, you can disable it on the fly if you have more physical memory than active memory, and it will just cannibalise some of the disk cache... swapon, swapoff.

Sure, you wouldn't run a multiuser server without swap; it gives you insurance against unexpected memory load. But for a single-user, single-task machine where you understand the memory usage profile, especially with Windows, go for it!

In fact, I run a server at home without any swap. I run my XP workstation at home with a 4MB swap and 1GB RAM, and it flies; Photoshop doesn't choke, because windoze can't tell its arse from its elbow when deciding what to cache from the filesystem and what not to.

Here is a test: try writing a CD in WinXP with a pagefile. Note how the box becomes more and more unresponsive and the hard drive churns whenever you try to open anything, or even switch between windows. Now disable the pagefile and try again. You will notice a large improvement.

The only forced requirement for /tmp or swap space is when you want dumps (although Linux/Unix can core dump anywhere)... oh, and Photoshop... although you can work around that (haven't tried it in XP) by putting the swap file on a RAM drive.

If you have some reproducible tests that disprove the above, I would love to hear them.

S


Just to add to this:

This came in an email newsletter (Windows Tips & Tricks from WinNetMag.com :))

It's aimed at XP, though the title of this thread doesn't say it's 2K-only :D

....

Q. If I have a Windows XP machine that has a lot of memory, can I improve performance by removing the pagefile?

A. Any program that runs on an Intel 386 or later system can access up to 4GB of RAM, which is typically far more memory than is physically available on a machine. To make up for the missing physical memory, the OS creates a virtual address space, known as virtual memory, in which programs can see their own 4GB memory space. (This virtual address space consists of two 2GB portions--one for the program and one for the OS.) The OS is responsible for allocating and mapping to physical RAM those parts of the program or memory that are currently active.

To work around a machine's physical RAM limitations, a local file known as the pagefile stores pages (in 4KB increments) that aren't in use. (One installation can have multiple pagefiles.) When a program needs to access a page from the pagefile, the OS generates a page fault that instructs the system to read the page from the pagefile and store it in memory. Because disks are much slower than memory, excessive page faults eventually degrade performance. A computer's RAM consists of two sections. The first section, the nonpaged area, stores core OS information that's never moved to the pagefile. The second section, the paged area, contains program code, data, and inactive file-system cache information that the OS can write to the pagefile if needed.

Although the discussion so far might lead you to believe that Windows stores only active code and data (plus the core OS) in physical RAM, Windows actually attempts to use as much RAM as possible. Often, the OS uses RAM to cache recently run programs so that the OS can start these programs more quickly the next time you use them. If the amount of available free RAM on your computer is low and an application needs physical RAM, the OS can remove from RAM pages of memory used to cache recently run programs or move nonactive data pages to the pagefile.

So, if you have a lot of RAM, you don't need a pagefile, right? Not necessarily. When certain applications start, they allocate a huge amount of memory (hundreds of megabytes typically set aside in virtual memory) even though they might not use it. If no pagefile (i.e., virtual memory) is present, a memory-hogging application can quickly use a large chunk of RAM. Even worse, just a few such programs can bring a machine loaded with memory to a halt. Some applications (e.g., Adobe Systems' Adobe Photoshop) will display warnings on start-up if no pagefile is present.

My advice, therefore, is not to disable the pagefile because Windows will move pages from RAM to the pagefile only when necessary.

Furthermore, you gain no performance improvement by turning off the pagefile. To save disk space, you can set a small initial pagefile size (as little as 100MB) and set a high maximum size (e.g., 1GB) so that Windows can increase the size if needed. With 1GB of RAM under typical application loads, the pagefile would probably never need to grow.

If you want to prevent Windows from moving any core OS kernel or driver files to the pagefile, perform the following steps:

1. Start a registry editor (e.g., regedit.exe).

2. Navigate to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management registry subkey.

3. Set the DisablePagingExecutive registry entry to 1.
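The three steps above boil down to this .reg fragment (the same key and value as in the steps, nothing extra; double-clicking the file in Explorer merges it):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"DisablePagingExecutive"=dword:00000001
```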

If you want to determine how much of the pagefile is actually being used, you can download Bill James's various pagefile utilities, which are available at http://billsway.com/notes%5fpublic/winxp%5ftweaks .

Among these tools is a WinXP-2K_Pagefile.vbs script that tells you the current and maximum pagefile usage.

....
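As an aside, the address-space arithmetic in the quoted answer is easy to check: 2^32 bytes of virtual addresses, two 2GB halves, 4KB pages. A quick sketch (only those numbers come from the article; the rest is just arithmetic):

```python
# Numbers from the quoted answer: a 32-bit address space, split into
# two 2GB halves, managed in 4KB pages.
ADDRESS_BITS = 32
PAGE_SIZE = 4 * 1024

address_space = 2 ** ADDRESS_BITS         # bytes of virtual address space
user_half = address_space // 2            # the program's 2GB portion
total_pages = address_space // PAGE_SIZE  # pages the OS can map per process

print(address_space // 2**30, "GB of virtual address space")  # 4
print(user_half // 2**30, "GB for the program")               # 2
print(total_pages, "pages of 4KB")                            # 1048576
```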

Ponder -_-

Q. If I have a Windows XP machine that has a lot of memory, can I improve performance by removing the pagefile? [...]

Very good answer. So you see, people, disabling the paging file is a bad thing!

Thanks, fluffy, for the extra verification!

SCSA

Here is a test.  Try writing a CD in WinXP with a pagefile.  Note how the box becomes more and more unresponsive and the hard-drive churns whenever you try to open anything, or even switch between windows.  Now disable the pagefile and try again.  You will notice a large improvement.

I second that.

So you see, people, disabling the paging file is a bad thing!

Please don't mislead people; you have very little idea what you're talking about, really.

I wish I could write a little more than a two-word sentence, but I don't have the time.

My advice, therefore, is not to disable the pagefile because Windows will move pages from RAM to the pagefile only when necessary.

Furthermore, you gain no performance improvement by turning off the pagefile. To save disk space, you can set a small initial pagefile size (as little as 100MB) and set a high maximum size (e.g., 1GB) so that Windows can increase the size if needed. With 1GB of RAM under typical application loads, the pagefile would probably never need to grow.

I think most people here agree that a pagefile with a dynamic size is a bad idea, since it will get fragmented.


This is one of those times when my technical understanding of how the system functions does not match up with my real-world experiences.

There shouldn't be any disadvantage to having a large paging file, but Windows seems to make poor decisions about what/when to page out. Given the sophistication of the NT virtual memory system, I can't understand why this is the case.

I think most people here agree that a pagefile with a dynamic size is a bad idea, since it will get fragmented.

The pagefile is written sequentially, but reads are completely random. So fragmentation doesn't matter much.

I think most people here agree that a pagefile with a dynamic size is a bad idea, since it will get fragmented.

The pagefile is written sequentially, but reads are completely random. So fragmentation doesn't matter much.

Well, if the pagefile is fragmented across a larger area of the HDD, the head will have to move greater distances to get to this random data, won't it?

Also, the transfer speed at the beginning of the HDD is greater (or so I've heard), which might improve pagefile throughput, provided that the PF doesn't move from that location. When you set a static PF right after installing, it will be placed in the 'best' location and not be fragmented.

The pagefile is written sequentially, but reads are completely random. So fragmentation doesn't matter much.

It does, especially if the other half goes to a slower section of the disk. It's ultimately best to have a separate partition at the beginning of the disk. Say you have the pagefile in 200 megs of reserved space in section 1 and user data in sections 2-6: if the pagefile isn't fixed (it doesn't run out, of course, heh), the system pages out to section 7 instead, and if you're working with big files you're bound to notice the difference. Good thing most decent system utilities let you defrag the pagefile.

This is one of those times when my technical understanding of how the system functions does not match up with my real-world experiences.

There shouldn't be any disadvantage to having a large paging file, but Windows seems to make poor decisions about what/when to page out. Given the sophistication of the NT virtual memory system, I can't understand why this is the case.

That's one of the reasons I tend to keep the pagefile disabled, at least on my home workstation. Windows has never been good at it, face it. No matter how much RAM you have, Windows always finds something to page, even if you're doing nothing at the computer. It is a disease, and the only cure for it, for now, is to turn it off.

Au contraire, *nix systems manage virtual memory SUPERBLY. Should I mention that my FreeBSD box never exceeds the 10MB swap-usage bar? And that's although it's set to 768 megs, and I play games while compiling things too.

i think most people here agree that a page file with a dynamic size is a bad idea, since it will get fragmented.

This is funny, since it was Microsoft who originally came up with this scheme. The 'easier to use' slogan had to feature the 'dynamism'. Of course it's newbieish.


What I am saying is this: Micro$oft invented it for a reason, and I think that reason is just! Paging is critical for Windows to function smoothly. Many systems would work differently, or break down outright, without it. That someone can shrink a paging file and then burn a CD-ROM without the system slowing down is one thing; but saying that if we all did it, it would work on all computers, with that I must disagree. I am not saying you are telling us to make the paging file small, but just because you can do it and it may work for these small cases, it is not a well-rounded recommendation!!!

SCSA


I was pondering this...

It's all ifs and buts ;)

Setting a minimum means you have a fixed minimum pagefile size, and that does not mean it is fragmented. If Windows needs the extra space, it may get fragmented (only this extra bit), but that is better than Windows crashing because it needs that extra memory anyway. You would normally set the minimum to an amount where the file wouldn't usually expand (the expansion is for emergencies only).

So no, it doesn't mean it'll be fragmented.

As for location, just use a defragmentation program to move it ;)

Also, although the STR may be faster on certain parts of the drive, that doesn't mean it's faster for the swap file.

If the frequently accessed data is a long way from the swap file, then seeking to and fro between the two will take longer than if you place the swapfile near that data.

(assuming it's this data causing the paging; given that this data is the most frequently accessed, I assume it is)

i.e., UT2004 is now installed in the last part of your drive (hey, it's getting full now, what with the movies, pictures, 40 other games and ISO copies).

Now say the swap is at the other end of the drive.

Load a large map (pretty much all of them are) and your drive is going to be seeking back and forward, loading the map, dumping memory, etc. Get the idea?

It's much quicker to have the swap near UT2004.

(or on another drive, or buy some more RAM :rolleyes: )

Of course, each setup will be different, and personal preference/bias counts for a lot in this.

I'm not saying I'm right, I'm just saying think about it and don't be so close-minded ;)

