About davidbradley

Profile Information

  • Location
    Issaquah, WA
  1. davidbradley

    Photoshop Scratch Disks

    Some of the statements above are based on unstated (and probably unconsidered) assumptions about how Photoshop is being used. For example:

    1) "Don't set your Photoshop memory usage higher than 50%, in order to leave enough memory for the OS to do disk caching on the scratch disk." This *might* make sense if the user is editing small files. But it is exactly wrong if the user is editing very large files, say 300MB or larger. You want to give Photoshop all the RAM possible. And because the scratch disk is used as an overflow for RAM, the idea of using a RAM buffer for the scratch disks is nonsensical. It's like putting your page file on a RAM disk so that you can page faster, forgetting that the page file is used when you run out of RAM. Many people on the Adobe Forums site do recommend setting the PS memory limit in the 50% - 75% range. I've found that for my usage and my system, setting it to 100% gives the best results. Also, remember that Photoshop can use no more than 2GB of physical RAM. If you've got a system with more than 2GB of RAM then you will probably always want to set the PS memory limit at 100%, which would cause PS to use 2GB. If you had, say, 3GB of RAM and limited PS to 50% then you'd be limiting PS to only 1.5GB. As always, the correct answer is, "it depends". Beware of absolutes.

    2) "The speed of the scratch disk doesn't really matter because PS does all its operations in memory." Again, this *might* be true if you edit small files only a few at a time. However, if you edit large files (again, 100MB - 300MB or larger) then you will quickly use up even 2GB of RAM during a typical editing session, and will quickly find yourself limited by the speed of the scratch disk. RAID0 is *highly* recommended in this case. Don't ask the people on Storage Review. Ask the people at www.adobeforus.com. But... DON'T ask them really -- read the FAQ instead. This question is asked so much that you would be hounded if you asked again.
The person who said "An easy way to test is to simply watch the CPU utilization, if it reaches 100% you're CPU-bound, if it doesn't you're IO-bound." got it exactly right. If you are limited by the speed of your scratch disk you will know it! Your CPU will drop nearly to zero and the disk containing your scratch file will be writing like crazy. 'Natch!
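The memory-limit arithmetic in point 1 can be sketched as a quick calculation. The 2GB cap and the percentage examples come from the post itself; the helper function and its name are my own illustration, not anything Photoshop exposes:

```python
# Hypothetical helper illustrating the post's arithmetic: Photoshop of this
# era can address at most 2GB of physical RAM, so the effective allocation
# is the smaller of that cap and (total RAM x memory-limit percentage).
PS_RAM_CAP_GB = 2.0  # the 2GB ceiling described in the post

def effective_ps_ram(total_ram_gb, limit_percent):
    """Return the RAM (in GB) Photoshop would actually get."""
    requested = total_ram_gb * limit_percent / 100.0
    return min(requested, PS_RAM_CAP_GB)

# The post's example: 3GB of RAM with the limit at 50% starves Photoshop...
print(effective_ps_ram(3.0, 50))   # 1.5
# ...while 100% lets it reach the full 2GB cap.
print(effective_ps_ram(3.0, 100))  # 2.0
```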
  2. davidbradley

    The Death Of Raid

    As my previous posts throughout this forum indicate, I believe that striping is a valuable tool when used in the right situation. I also do a lot of Photoshop work with very large files -- in fact this is my whole reason for being interested in striping these days. So, even though I am a fan of properly-used striping, I have to say that the Photoshop components of this benchmark set are not representative of how a power Photoshop user would set up a multi-disk system. Specifically:

    In Photoshop the scratch file is the 3rd most important factor determining performance (after memory and CPU, and the speed of the scratch file may even be more important than the CPU in many cases). A power Photoshop user would never put his scratch file on a single disk and his OS and images on a RAID array. If he/she had two disks then the config would be one disk for OS/Programs/Images and the other for scratch. With three disks it would probably be one for OS/Programs/Images and the other two in a RAID0 array for scratch. With four disks it would probably be one for OS/Programs, one for Images, and two in a RAID0 array for scratch.

    When reading and writing files, Photoshop does a lot of concurrent disk I/O on the scratch disk and the disk containing the file being read/written. You want these two files (images + scratch) to be on separate spindles. Yes, heavy usage of the scratch file. Again, a power Photoshop user would not set up a machine in the way used in these tests, with the scratch file on a single disk and OS/Programs/Images on a RAID array. That is backwards! A power Photoshop user would never do this.

    When doing serious editing it is easy to take up the entire available resources of a fully-configured Windows machine: full 2GB RAM address space fully used up, CPUs maxed out, disks cranking away. A power Photoshop user would shut down all nonessential applications and would never run backups while editing.
He/she would *definitely* do backups, but not concurrently with edit sessions.

Startup time is not an important metric for a power Photoshop user, who typically starts Photoshop once then uses it for several hours. However, regardless of whether it is an important metric, the Photoshop CS startup times would probably be faster if the OS, Programs, and Scratch were on multiple distinct spindles rather than grouped together on a single RAID array. A complete benchmark analysis should include this scenario along with the RAID-only solutions.

I think the point that people are trying to make is: if you've got N disks available to solve a problem, is it better to configure those disks into a single N-disk RAID array (either RAID0, 1, or 5) vs. configuring them as N distinct independent spindles, or perhaps some combination? Most applications will speed up a little bit when going from a single disk to RAID0 -- that is not a surprise. But a true benchmark comparison would include test cases that compare striping vs. multiple distinct spindles as well.

As for the argument that "the problem with multiple separate disks is that you have to figure out in advance what files go on which disk", I say that yes, this is a problem, but it is specifically the kind of problem a *power user* would solve.

Another example: when I was doing full-time software development on my machine I noticed that my productivity was often limited by disk I/O time. I did some investigation -- OK, a LOT of investigation -- and ended up configuring my development machine with four small disks used as follows: one for OS, one for Programs, one for temp files, and one for source code. I chose this configuration because every time I compiled there was significant concurrent I/O to/from each of these four sources. I wanted each of those four sources on separate spindles to minimize head movement, increase concurrency, etc. Switching to this configuration made a HUGE difference.
Would I have set up those four disks in a RAID5 array? No way! A pair of RAID1 arrays? Probably not. Again, documenting that there are small speedups between a single disk and a RAID array is no big surprise. We would all expect this without even seeing the tests. Comparing a single RAID array to multiple spindles...now that would be interesting.
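The two-, three-, and four-disk layouts described in this post can be summarized in a small lookup. The role labels are my own shorthand for the post's recommendations, not anything Photoshop or Windows understands:

```python
# Sketch of the layout rules described above: keep the scratch volume on
# spindles separate from OS/Programs/Images, and stripe scratch (RAID0)
# once spare disks allow. Labels are illustrative only.
def photoshop_disk_layout(n_disks):
    """Map a disk count to the post's suggested role assignment."""
    if n_disks == 2:
        return {"disk1": "OS/Programs/Images", "disk2": "scratch"}
    if n_disks == 3:
        return {"disk1": "OS/Programs/Images", "disk2+3": "RAID0 scratch"}
    if n_disks == 4:
        return {"disk1": "OS/Programs", "disk2": "Images",
                "disk3+4": "RAID0 scratch"}
    raise ValueError("the post only discusses 2- to 4-disk configurations")

print(photoshop_disk_layout(3))
```

The key invariant in every row is the same: images and scratch never share a spindle, because Photoshop drives concurrent I/O to both.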
  3. davidbradley

    The Death Of Raid

    Never say never. The claims that RAID0 never has any place on any desktop computer ever are just as false as claims that RAID0 is always better. Both seem to be totalitarian points of view that try to force their perspectives on everyone. The true answer is that it depends.

    I use a RAID0 array for my Photoshop scratch disk. Do I put my valuable data, programs, OS, email, etc. on that RAID0? Of course not. Does this make a huge difference to the productivity of my Photoshop time? You bet! I edit large files that max out the full 2GB RAM capacity of Photoshop, so the speed of the scratch disk determines the speed of my edits after a while. Smart use of RAID0!

    And, since I've got a RAID0 array for Photoshop scratch, and since the scratch partition uses only a tiny portion of the size of the array, why not use the remainder of the space for other non-critical uses where -- dare I say it? -- STR makes a difference. For example, I have copied several gigs worth of TOPO! topographical map data onto another partition on that RAID0 array. These consist of many files in the 2MB - 10MB range. Now I can scroll around multiple US states' worth of detailed topo maps as fast as I please because all this big fat juicy detailed data is coming off the RAID0 array. Is this a stupid use of RAID0? Heck no! It's smart. Oh my oh my, what if the array crashes? Well, I spend 20 minutes re-copying the CDs. And the space is on that array anyway, regardless of whether I use it. And the maps do, in fact, load faster than from a single disk.

    Oh wait, here is another use for my handy RAID0 array: a spool directory for my Epson photo printer. When I print a large picture, say a 13" x 40" panorama at 300dpi, the Epson printer driver generates an intermediate file that is a few hundred megabytes in size. Writing a file this large on my C drive takes a few tens of seconds, whereas it takes a handful of seconds on yet another small partition on My Friend The RAID0 Array. Is this a stupid use of RAID0?
Heck no! Is my computer not a "desktop computer" because I use Photoshop? Because I often browse through large topo maps? Because I print large pictures to a photo printer? It depends on who is doing the defining, I suppose. RAID0 is just like any other tool: it can be used properly or not. But to say it is never appropriate? Never say never.
  4. davidbradley

    Raid-0 SATA - speed limit?

    Yes, and this can be a good thing! These "software RAID" cards are often faster for RAID0 and RAID10 than higher-end "hardware RAID" cards. The onboard hardware that is such a help to RAID5 can be a bottleneck for RAID0 and RAID10. For example, the new 3ware 8506 series seems to max out at about 200 MB/sec. Similarly, the Megaraid card originally discussed in this thread seems to be maxing out at about 240 MB/sec, even though six Raptors are capable of a combined STR of about 360 MB/sec on the outer tracks. (The outer-track STR for a single Raptor is about 60 MB/sec.)
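The numbers quoted above follow from simple arithmetic: the array's theoretical STR is the per-drive outer-track STR times the drive count, clipped by whatever the controller can move. The 60 MB/sec and 240 MB/sec figures come from the post; the function is just an illustrative sanity check:

```python
# Back-of-the-envelope check of the RAID0 numbers quoted above.
def raid0_effective_str(drives, per_drive_str, controller_cap):
    """Effective sequential throughput in MB/sec: the drives' combined
    rate, limited by the controller's own ceiling."""
    theoretical = drives * per_drive_str
    return min(theoretical, controller_cap)

# Six Raptors could stream ~360 MB/sec, but the card tops out near 240.
print(raid0_effective_str(6, 60, 240))  # 240
# With only two drives the controller is not yet the bottleneck.
print(raid0_effective_str(2, 60, 240))  # 120
```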
  5. davidbradley

    Raid-0 SATA - speed limit?

    Oh momma!
  6. davidbradley

    Raid-0 SATA - speed limit?

    The performance of hardware-based RAID controllers like the Megaraids is capped by the speed of their onboard hardware. For RAID5 and other computationally expensive RAID configurations the onboard hardware is usually an advantage, but for RAID0 it can be a bottleneck. My guess is that's what's happening in this case. That's just a guess, though. Future firmware upgrades might make more efficient use of the onboard hardware. Of course it could be something else, such as contention on the PCI bus, etc. I'm assuming you've already ruled those out.
  7. davidbradley

    What can we expect when PCI-Express arrives?

    Man, I'm having a hard time verifying this. There are lots of articles about PCI-Express, but none of them seem to mention the bandwidth for an x1 connection. One said it was 100MB/sec in each direction, another said 250MB/sec in each direction, and now here's a third value of 264MB/sec. Can anyone cite a definitive value? If an x1 connection really is 250 - 264MB/sec in each direction then that would be excellent! It would solve all my RAID0 problems for now.
  8. It looks like motherboards and peripherals that support PCI-Express are 4 - 6 months away. From a storage perspective (this is Storage Review, after all), what do you all think we can expect? Personally, I do a lot of work that is limited by disk STR. I'm hoping that PCI-Express will make it possible to build a 4-way RAID0 subsystem that is not limited by the PCI-bus without having to resort to a workstation/server motherboard. With current generation disks it is possible to get about 240 MB/sec on the outer tracks of a 4-way RAID system. My admittedly limited understanding of PCI-Express is that this will still exceed the bandwidth of a standard X1 PCI-Express slot, and that it will be necessary to go to X2 or perhaps even X4. However, I'm guessing that non-server/workstation boards --- the next generation single-CPU desktop board that will replace, say, the i875 chipset --- will probably only support X1 PCI-Express slots. If that's true, we are still going to be limited by the PCI bus. Heck, even a 2-disk RAID0 setup might still be PCI-limited. I'm talking about peripheral slots, not the X16 AGP slots that have already been announced. I'm also talking, of course, of single card solutions. By using multiple controllers, each in a separate X1 slot, it should be possible to exceed today's limitations. What do you all think? What can we expect 4 - 6 months from now? (And this is not just an academic discussion. I'm planning my next system, and I'm sure others are too. So this affects the decisions that some of us will make over the next few months.) Thanks in advance!
  9. davidbradley

    SSD... the Fastest drives.

    Very cool, BUT... it's yet another SSD solution that is bandwidth-limited by that dang 32-bit/33-MHz PCI bus. My main application is Photoshop. I edit very large files (600MB - 1.1GB) and am limited by the bandwidth to/from my Photoshop scratch disk. I can already max out the bandwidth of my 32-bit/33-MHz PCI bus with a simple and inexpensive two-disk IDE RAID setup, so in this case (and I know it is a niche case) all the extra money for this particular SSD solution would not help. (Which is not to say this product isn't excellent for other applications.) There ARE a number of SDRAM PCI SSD solutions out there if you look hard enough, but I have yet to find one that is on a 64-bit and/or 66-MHz PCI connector. Seems weird to me, given that these products are targeted toward ultra high end servers. Give me bandwidth, baby! By the way, do you have any pricing info for the Platypus products?
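The PCI ceiling being complained about here is easy to derive: a 32-bit bus moves 4 bytes per clock at 33 MHz. (The commonly quoted 133 MB/sec figure uses the exact 33.33 MHz clock; the round-number version is shown below.)

```python
# The classic shared PCI ceiling: 32-bit bus width at a 33 MHz clock.
bus_width_bytes = 32 // 8      # 4 bytes transferred per clock
clock_hz = 33e6                # 33 MHz (exactly 33.33 MHz in the spec)
peak_mb_per_sec = bus_width_bytes * clock_hz / 1e6
print(peak_mb_per_sec)         # 132.0 MB/sec, shared by all PCI devices
```

Since two striped IDE drives of that era could approach this figure on their outer tracks, a PCI-attached SSD on the same 32-bit/33-MHz connector buys little extra sequential bandwidth.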
  10. davidbradley

    Intel Application Accelerator

    One of the things that IAA does to improve performance is to flush data to the disk less often. This might explain why it takes longer to hibernate, i.e. there is more pending data in RAM that must be stored before shutting down.
  11. "Are they even being manufactured?? Most manufacturers are producing drives with 60GB or 80GB per platter..." The current-generation drives are 60GB to 80GB per platter, as you say. But older-generation drives are still being manufactured and sold. Go to any online retailer such as NewEgg and you'll see a number of such drives available.
  12. Is this true? We've all heard of software implementations of RAID0 being faster than hardware implementations, but I've never seen or read of a single instance of software RAID5 being faster than hardware RAID5. Can you cite some references?
  13. Dan, 20GB drives are ultra low end these days. I don't mean capacity; I mean they are targeted at very inexpensive "value PCs", and it's not surprising that corners might be cut to lower costs. You will probably find better reliability by going with more of a mainstream drive such as the WD400JB, etc.
  14. davidbradley

    Is 4 GB the max for Windows 2000 or is it 3.5 GB?

    According to Microsoft it is supported by Windows XP Pro.
  15. davidbradley

    Is 4 GB the max for Windows 2000 or is it 3.5 GB?

    The /3GB switch also exists in WinXP Pro, making it very common on Windows machines these days.
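For reference, on Windows 2000/XP the /3GB switch is appended to the operating-system line in boot.ini. The fragment below is an illustration only -- the ARC path, partition number, and description will differ on your machine:

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB
```

With the switch in place, applications linked with the large-address-aware flag can use up to 3GB of user address space instead of the default 2GB.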