bharkol1

What is the best way to decrease compile time?

I need to upgrade our Visual Studio systems to improve build times. Currently we are running:

P4 1.4GHz

256MB Rambus memory

40GB Fireball IDE drives

I am thinking of upgrading the systems to 80GB WD Special Edition drives and adding an additional 256MB of RAM. Would I see enough of a performance increase by upgrading to the Fujitsu 18GB 15,000 RPM SCSI drives to justify the extra cost of the drive plus an Ultra160 controller?

Any help would be great!

Thanks

Sorry, I didn't read your post carefully.

Add at least 256MB more RAM. I don't think the performance of SCSI on a single-user system will justify the cost, unless of course the machine(s) are some kind of server; then SCSI is a good investment. However, if you want to invest in the programmers' happiness, buy the SCSI.

Guest russofris

Please provide a more accurate description of what is being compiled, how many files are involved, where they are being compiled from (VSS? or local?), and what is going on with the system when it is compiling. Is this a debug or release exe/dll?

Compiler options would be handy.

Also, open Perfmon. Add the disk counters (remember to run diskperf -y first), plus memory and CPU. Compile and see what the bottleneck is.
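For example, something along these lines from a command prompt (a rough sketch; counter names vary slightly between NT, 2000, and XP):

    diskperf -y     (enables the physical-disk counters; takes effect after a reboot)
    perfmon         (then add counters such as PhysicalDisk\% Disk Time,
                     Memory\Available Bytes, and Processor\% Processor Time)

Kick off a build and watch which counter pegs first.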

Thank you for your time,

Frank Russo

You say at least 256MB more... We are compiling Windows CE .NET using Platform Builder. Is a total of 512MB enough, or should I shoot for more?

At this point I am just investing in the programmers' sanity. ;)

Yeah, more memory, faster drive, faster drive, FASTER DRIVE.

Or maybe a compiler-cache program.

The answer depends on what part of the process is slowing your developers down.

With a 1.4GHz P4, it is very likely that compilation is still CPU bound. As it turns out though, the build process spends plenty of time doing other things. For example, the sysgen process is IO bound on just about any modern system.

Although I have seen some gains from huge ramdisks, they are probably more trouble than they are worth. As a short-term solution, I suggest more RAM and greater care in the build process. Be aware that, depending on the project (MAXALL, MINKERN, etc.), the build process doesn't reference more than ~400MB of source files, so you will probably see diminishing returns beyond ~512MB.

In truth, only the initial build should take a long time to complete. If your developers are debugging a specific .dll or .exe, I recommend removing the binary from their .bib file so that it doesn't get built into the .bin. Your developers can then compile and link that file only, so long as they ensure that it ends up in the release directory (see WINCEREL). When NK.BIN starts up, the module will be pulled over the debug transport as soon as it is referenced.
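As a rough illustration only (the module name, path, and flags below are hypothetical; the real entries live in your platform's .bib files), the MODULES line is simply commented out with a semicolon:

    MODULES
    ;  Name           Path                              Memory  Type
    ;  mymodule.dll   $(_FLATRELEASEDIR)\mymodule.dll   NK      SH

The developer then rebuilds only that module and copies the fresh binary into the flat release directory, where the debug transport can pick it up.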

The code/build/test cycle can be reduced to mere seconds if NK.BIN can be written to local flash on the device. In that case, the only thing you are downloading with each test cycle is the module you are working on.

Compiling is almost always CPU-bound unless the system in question is badly unbalanced. There are many other factors, of course, but CPU performance tends to be the big one. Adding RAM will generally help only if the compile uses more memory than the system has and thus starts swapping (which yours probably is). You might want to run a program that can keep track of memory usage on the system. Presumably you are running Windows NT, 2000, or XP, so Task Manager will work (hit CTRL-ALT-Delete and then 'T'). Generally speaking, if memory usage exceeds physical memory while compiling, adding RAM will help; the greater the difference between memory usage and actual physical memory, the greater the effect.

The greatest overall gain may well come from plopping in a new CPU. If you have a 1.4GHz system, chances are it is using the old P4 socket, so IIRC the highest-end chip you can upgrade to is a 2GHz P4. That may yield up to a 25% decrease in compile time.

If you plan to get any new systems in the future, you might consider dual AMD Athlon MPs or dual Pentium 4 Xeons, the latter of which are quite expensive. Adding a second CPU helps with compile speed, but the actual increase depends on the compiler. I have never actually timed Microsoft Visual C++ with a single vs. dual processor, but GCC runs up to 90% faster when passing a "-j" option to "make".
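For instance, with GCC the parallelism comes from GNU make rather than the compiler itself (the job count here is just illustrative):

    make -j2     (run up to two compile jobs at once, roughly one per CPU)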

Using a high end SCSI drive to replace the IDE drive will help, but probably not enough to justify the huge expense relative to getting a new IDE drive and more RAM.

In case your developers have trouble finding it, the switch for SMP builds under Platform Builder is -M.
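Something like the following from a build window, though the exact syntax may vary by Platform Builder version, so check the BUILD tool's help output:

    build -M 2     (let BUILD run up to two threads in parallel)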

On a modern SMP machine, you will need more than a single ATA disk to keep up with the processors.

Of course, at best this results in a 2x to 4x performance increase on commodity hardware. Eschewing the silly GUI and using the techniques I described on the command line will yield much greater results.

The GUI is safe; the CLI is fast.

1.4GHz... I can tell you I have compiled Linux kernels and MAME, and they are LARGE. Most of the time it's the CPU that does most of the work, so that's another vote for a 2GHz CPU.
