
Posts posted by btb4

  1. The usual method is to use shared storage for W2K3 Server "Cluster" clusters, but not for NLB "clusters". You are effectively negating the usual advantages of NLB by using attached storage (of whatever kind).

    Other than that, it is just a matter of what is required to keep up. Is this across a T1? Anything can saturate that. Heck, even 45Mb is saturated pretty easily for media servers - usually far fewer connections than a storefront, for example.

    This is why I love my rack full of P3-based Serverworks systems....

  2. It looks as though there are variants for about $700 that are 2G - but they need a 2.5" drive space instead of plugging into the socket.

    Still, 2G is not quite enough to run most bloatware OS variants.

    The trouble with the USB sticks is that many systems - esp. those you'd most want to boot this way - cannot boot from them.

    I am being plagued lately by the hassles of the no-floppy server, and most will not boot from USB.

    Here's another SSD oddity - 1394B external SSD! Wonder what that goes for?

    If 1394 is too limiting for you - you can go USB, too.

  3. New one to me, anyway - looks like with the 512M limit it would be more useful for a utility, maybe a handy way to flash BIOS in all those 1U "who needs a stinkin' floppy drive...we don't need no stinking floppy drive" units, etc.

    Off the top of my head can't think of too many other uses - the capacity is just too limited.

    Anyone use these ever?

    Cheap enough I might just order one "for the fun of it".

  4. G is correct - this is why most encoding farms are many many "decent" systems, instead of a handful of monsters. I'll ignore load sharing etc. for this....

    I think you are overestimating the differences in time involved, esp. if you plan on multitasking during encoding and thus stealing cycles from the encoder. Consider that a Pentium® 4 at 2.66GHz with a 533MHz front side bus is a complete system, including a 17" LCD, for $599. Mess around a bit and you can do even better for less - for example, lose the LCD, add memory and a GBe NIC, etc. Even if you are using a codec that supports dual-CPU usage, and you don't multitask, a dual-CPU 3.2G Xeon is not going to do that much better for the 10X in $ you'll spend. If system "A" takes 12 hours, then system "B" might be 8 or 9 - maybe 5 or 6 at the low end for certain codecs.
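    The farm-of-decent-boxes argument above can be sketched with some back-of-the-envelope math. All figures here are hypothetical placeholders (the $6,000 dual-Xeon price and the 8-hour encode time are illustrative assumptions, not benchmarks):

```python
# Throughput per dollar: the metric that matters when building an encoding
# farm. Prices and encode times below are made-up illustrations.

def jobs_per_day_per_dollar(system_cost: float, hours_per_job: float) -> float:
    """Encoding jobs finished per day, per dollar of hardware spent."""
    return (24.0 / hours_per_job) / system_cost

cheap = jobs_per_day_per_dollar(599, 12)     # $599 P4 box, assume 12 h/encode
monster = jobs_per_day_per_dollar(6000, 8)   # assumed dual-Xeon price and time

print(f"cheap box: {cheap * 1000:.2f} jobs/day per $1000")
print(f"dual Xeon: {monster * 1000:.2f} jobs/day per $1000")
print(f"cheap box advantage: {cheap / monster:.1f}x per dollar")
```

    Even granting the monster a 12-to-8-hour win per job, the cheap box comes out several times ahead per dollar, which is why farms are built from many modest systems.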

  5. Imagine the competitive advantage a company has in delivering a 15K product that offers 90% (not rounding, exactly 100/111) of the performance of a 22K product at significantly less cost, with significantly less heat and power consumption. I'm not saying 22K drives won't appear, I am simply noting a fact: that, at this point, increasing spindle speed no longer offers a competitive advantage. In a capitalist marketplace a lack of a competitive advantage is not a good thing.

    Actually, at this point you have deviated from fact and moved to opinion. You have admitted a "10%" performance advantage, then opined that this would not be a competitive advantage. Obviously, if this is correct, then the drives are not likely to be manufactured.

    The critical language here appears to be "significantly less cost" when referring to the 15K option. What is the basis of this "significance"? Are you asserting that there is no chance that this 10% performance boost would make a drive option on an enterprise level viable over an even more expensive storage technology? What is your specific cost formula for making this assessment?

    The usual (or at least common) trade-off is to pay exponentially for linear (or worse) improvements in performance. This fact tends to support a small but critical market for high-performance products. Moreover, it seems that this market permits inroads in technology that ultimately lead to more mainstream performance gains.

    I think that you can discount the "uneducated user" factor. The market for these hypothetical devices would be highly educated, large scale enterprise systems folks. If they deploy them, it will be because it makes sense.

    If curves are to be drawn to make it possible to draw any meaningful conclusion here, they have to consider cost. Specific, numeric cost, at the same level of accuracy as the performance numbers being bandied about here. Assuming that ddrueding's plot is accurate, it is still not possible to draw any conclusion - if there is some (purely imaginary) way to produce the 22K drive below the cost of a 15K drive, well, then what? If the 22K drives are 10% more and give back 10% more performance, then what? If the 22K drives are 30% more for the 10%...? Etc.

    And I would make the same challenge to those folks who seem fairly certain that the drives will happen.
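    The "then what?" scenarios above reduce to a simple performance-per-dollar ratio. A sketch, with purely assumed numbers (the $700 base price is a placeholder; the 10% performance edge is the figure conceded in the discussion, not measured data):

```python
# Perf-per-dollar of a hypothetical 22K drive vs. a 15K baseline, under the
# three price-premium scenarios raised above. All numbers are assumptions.

def perf_per_dollar(perf: float, price: float) -> float:
    return perf / price

BASE_PRICE = 700.0   # assumed 15K drive price (placeholder)
BASE_PERF = 100.0    # normalize the 15K drive's performance to 100
PERF_EDGE = 1.10     # the ~10% edge conceded in the discussion

baseline = perf_per_dollar(BASE_PERF, BASE_PRICE)
for premium in (0.00, 0.10, 0.30):
    ratio = perf_per_dollar(BASE_PERF * PERF_EDGE,
                            BASE_PRICE * (1 + premium)) / baseline
    print(f"22K at +{premium:.0%} price: {ratio:.2f}x the perf/$ of the 15K")
```

    At a 10% premium the two drives are a dead heat on perf-per-dollar; at 30% the 22K loses clearly. Which is the point: without a specific cost figure, no conclusion can be drawn either way.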

  6. Another possible option besides gimp is the spin off cinepaint

    Gimp really bogs down with anything over about 50MB and we regularly work on 350-500MB files.  Gimp doesn't support high dynamic range or even 16bit images very well.  Even Photoshop7's support for 16bit is poor.  And color management in Gimp is nonexistent.  (bummer for me)  Pantone colors would be nice too.

    Thanks for the cinepaint link - all of the above applies to us.

    Now, you mentioned you need CMYK support, out of curiosity... why? 
    So unless you are using a Kodak medium-format digital back with direct CMYK output (or better), Mac OS, and a really awesome printing system (Heidelberg perhaps? aka a CMYK RIP) - in other words, a CMYK workflow - I'm not sure why you would want CMYK in your graphics editor.

    All that and more. Actually, for in house still shooting we run from D1X through Leaf to Sinar. But more critically, we work in and out of all manner of color spaces - for going to press, as well as because we often are handed work that originated elsewhere that we have to further manipulate to make into something else.

    Probably 85% of what we touch at one time or another in its life will be touched in CMYK space. I can guarantee you that if a client hires us and a designer and a catalog is produced, then the idea is hit upon to "make it interactive" or who knows what, well, it'll be a Quark doc and it will be in CMYK. Depending on what happens to it afterward, we may just have to dip into the CMYK, or actually work in it (in that example). Usually the desire is to make as few color space changes as possible, so even if our product will be RGB, if the source is CMYK we may need to work in CMYK, those files would be print-ready for future use, then we'd drop into RGB at the end.

    But, even more puzzling, why didn't you ask about CMS support?

    I think I was being naive...figured it'd have to have it. Heck, I even figured at least CMYK would be there, even if other spaces were not.

    --  Eeew yucky, they have an activation system like Quark now.  stupid.

    Uh, yeah, this and other comments about Adobe's wonderful customer service basically is what it boils down to.

    I live in (work in) an extremely customer service oriented world. Without going into all of the loveliness of Adobe and copyright issues, let us just say I HATE giving my money to companies that treat me in such a way that if I treated my customers that way I'd be out of business.

    They sure don't seem grateful to have my business....
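    For what it's worth, the "as few color space changes as possible" point in post 6 above comes down to the fact that naive RGB-to-CMYK conversion is device-blind and lossy at the gamut edges. The sketch below is the textbook device-independent formula, not what a real ICC-managed workflow does - which is exactly why CMS support matters in an editor:

```python
# Naive, device-independent RGB <-> CMYK conversion. No gamut mapping, no
# ICC profiles, no dot gain -- i.e., everything a real CMS handles and this
# formula ignores. Channel values are floats in [0, 1].

def rgb_to_cmyk(r: float, g: float, b: float):
    """Convert RGB to CMYK using maximum black (GCR-style K extraction)."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                       # pure black: avoid divide-by-zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

def cmyk_to_rgb(c: float, m: float, y: float, k: float):
    """Invert the naive conversion back to RGB."""
    return (1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k)

print(rgb_to_cmyk(1.0, 0.0, 0.0))   # pure red -> (0.0, 1.0, 1.0, 0.0)
```

    The formula round-trips mathematically, but press CMYK is defined against real inks and paper, so every trip through an unmanaged conversion is a chance to drift - hence the preference for staying in one space as long as possible.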

  7. Both high-end video and CPU sales are supported by computer enthusiasts.  There are enough of them and they spend unnecessarily.

    Are you aware of any data to support this? I would be curious to know if this were true. Based on the ads for the high-end workstations I see, most with the TRUE high end video cards for example - many $Ks, and many of which will not even run games or common apps - they do not seem to be marketed toward "enthusiasts" - whatever they are.

    How many "enthusiasts" are running Itanium 2 systems?

    Did "enthusiasts" build the reputation of the HP Kayak series?

    Are $30K LCD displays for this "enthusiast" market?

    Sorry, but I have a great deal of doubt about that notion, frankly. Care to put the annual expenditure by "enthusiasts" against say, the pharmacology industry modeling drug effects and interactions? Against telcos? Etc.

    I am open to data that proves this otherwise, but I suspect that if a drive manufacturer is considering a 22K drive - unless it is some sort of hacked SATA drive for $500 - I doubt that the notion of a "PC enthusiast" ever enters the equation.

  8. Perhaps I am missing something here, but - forgetting the various notions of whether it would be "worth" it - the two "offset" conditions specified for the higher-RPM drives (more spindles, smaller platters), if applied to the higher-RPM drives themselves, would push the higher-RPM drive further still.


    I think the debate of "worth it" is different from, and perhaps in some ways confused by, the technical analysis. Well, perhaps not confused by, but perhaps things are being considered in an order that is not so clear.

    I think the usual order is more along the lines of: Consider a technology, in this case probably not in a vacuum (other technologies might follow along), consider its advantages, consider what is required to realize that technology (including cost of manufacture), consider if there is a market.

    I think there is little disagreement here until we get to the end of that process.

    The naysay side has yet to establish an argument that, for example, a 48-spindle small-platter 22K array would not have a market. The "yeahsay" side has yet to establish that such would be a viable commercial product. This seems like a manufacturing/marketing question, one on which I may speculate but will admit that I clearly lack the knowledge to address.

  9. I would not care to try to evaluate the validity of the "22K in 24 months" prediction, but it seems to me that 22K could be an element of a successful enterprise design strategy - think 24 - 36 - 48 - + spindle arrays. Obviously there are companies that still find economic sense in producing $750-1,000+ drives, so it seems there is a market for some of these extremes. If the reduction in latency is sufficient to take a notch out of the SSD high-transaction server market, then it may well be worth it. If this proves to be the case, I suspect that it will drive other more pedestrian drives to better performance/price points as well.

    I recall when 640K was considered "ridiculous" "exorbitant" "unrealistic" etc....
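    The latency claim in post 9 is easy to make concrete: average rotational latency is the time for half a revolution, so it falls directly with spindle speed. A quick sketch:

```python
# Average rotational latency for various spindle speeds. This is pure
# geometry: on average the target sector is half a revolution away.

def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency: time for half a revolution, in ms."""
    return 0.5 * 60_000.0 / rpm   # 60,000 ms per minute

for rpm in (10_000, 15_000, 22_000):
    print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
```

    Going from 15K to 22K shaves roughly 0.6 ms off every random access - about a third of the rotational component - which is the notch out of the SSD market the post speculates about.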

  10. We use several of these S845WD-1 motherboards and they are very stable and reliable. They will support Celeron 1.7 & 1.8, though the URL above does not say so. This board replaced the S815EBM1, a P3/Celeron board. Though it is touted as a server board, and that is the way we use it, it does have an AGP slot so it combines basic server level stability with the ability to function as a workstation (I assume that you do not want ATI Rage graphics for a kiosk).

  11. Ah, OK, then that is a frame buffer M-JPEG board, right? I think you'll be fine for performance, then, so long as you do not have the OS on the RAID. I think as long as the minimum does not drop below 40 you're OK with that generation of board. The space/safety thing really is up to you. You may well find that overall backing up to make space takes more time than it would to re-render in the event of a data loss.

    For our MJPEG system (an iFinish) we use Medea RAIDs. Most of them are older, 40M (35?) drives and they work fine.

  12. Isn't Retrospect a file-by-file backup product? Can it actually create an image of your disk or partition? Can it faithfully backup WinXP while you are logged into your system? If it can do all of that, then I will look into it for sure! :blink:

    The general answer is "Yes", though at this point they offer so many flavors, variants, etc. I'd check to make sure that what you want is specifically covered in a package/level that is suitable for you.

    If the distinction between "file by file" and "image" is the difference between "incremental" and "whole-disk restorable backup" - just install an OS (any that will run Retrospect; it does not have to be the one the system will end up with) and Retrospect and go - then it does both. If you mean something else (e.g., a sector-by-sector forensic-style "image"), then it may not do that, though I have not clearly seen that requirement in this thread.

  13. If it will really do it - and I mean minimum, not average; you need to watch that - then it should be OK for acquisition. But how much content, and what sort of editing? Space may be an issue. You might be more satisfied overall with just striping - that's more like a Medea solution than a Rorke. Unless you are doing a lot of compositing, etc., if a drive fails you just re-acquire (store your project files on another drive; just use the RAID for data) - probably a better trade-off than losing 200G of space in a 600G set, and the performance will be better, IMHO.

    What card are you using? AJA?

  14. I'm coming away with the impression that an imaging product is best for restoring your own PC, but a file-by-file backup software is best for restoring to a new PC (should mine ever be flooded or burned beyond repair). If that's correct, it really complicates one's backup strategy!

    Good backup software - such as Retrospect - does both. With scripting you control when it does what. And it manages the issues related to moving an image to a new system.

  15. The usual application for devices like this is workloads that tend to be neither capacity- nor STR-limited, but seek-limited. To me the application suggested by the thread originator seems odd.

    Many successful high-end SSD installations are plain old Ultra or UW SCSI. Consider high-transaction servers, etc. Many small random accesses - you'll probably never hit, or even approach, 20M or 40M total xfer rate, so the interface matters little. However, you may hit many thousands of random seeks.

    These are not OS drives....
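    The interface point above can be made concrete: for small random I/O, effective bandwidth is IOPS times block size, which sits well below even a 40MB/s UW SCSI bus. The IOPS figures below are illustrative assumptions, not measurements:

```python
# Why the bus barely matters for seek-limited workloads: aggregate bandwidth
# of small random accesses is IOPS x block size. IOPS values are assumed.

def effective_mb_per_s(iops: float, block_kb: float) -> float:
    """Aggregate bandwidth of small random I/O: IOPS times block size."""
    return iops * block_kb / 1024.0

disk = effective_mb_per_s(250, 4)     # assume ~250 random IOPS for a fast disk
ssd = effective_mb_per_s(4_000, 4)    # assume thousands of IOPS for a RAM SSD

print(f"disk: {disk:.2f} MB/s, SSD: {ssd:.2f} MB/s, UW SCSI bus: 40 MB/s")
```

    Even an SSD doing thousands of 4K random operations per second stays under the UW SCSI ceiling, so the seek count, not the interface, is what the workload buys.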