
Posts posted by btb4

  1. One more thing: you should do some searches here on defragmentation software. While absolute drive speed is not much of a concern, edit drives fragment fast, and no drive will be able to perform adequately (esp. for capture) once it is even moderately fragmented. Defragging time can actually add significantly to your overall edit time, so good software that is faster than bare Windows is an option to consider.

  2. Thanks for the heads-up on IBM. I've been around this business for a while, but they never used to be an option because you were always paying too much for their name. I'll be looking at them closer now...

    How does IBM's service work? Is the on-site support as good as Dell's?

    I have to bash Dell here. Based on workstation performance (I always build my own servers) there is no comparison between IBM & Dell. I have had a Dell on-site tech show up to work on a system that was working when he got to the office and leave with it inoperable. Dell never did a thing about it - the guy lied and told them it was good, so I got screwed. No Dell I have ever bought has lasted more than 2 years. I like IBM a lot for workstations - though they are not quite as fast as the HP/Compaqs, they have been much more stable for me.

  3. I realize you don't need fancy RAID0 or sky-high STR for video editing, but wouldn't a Raptor still be a big help? I could pull off the Raptor by lowering RAM to 512MB and FSB to 333MHz on a Gigabyte KT600 SATA board; price would be the same. Also can get a rebate on a 160GB drive to make it the same price as the 120.

    Not as much as a separate OS & capture drive would help.

    I would suggest that for an OS drive something like this: would be more than adequate. Given the decided "storage snobbery" around here you might get some naysayers, but for about $50 it is decent, and then you could perhaps afford something like this: for dedicated storage. If you don't like the 40G OS drive, get an 80G WD SE or something similar with the $ you were going to spend on the extra stick of RAM. In fact, I would probably prefer the 40G OS drive and spending the RAM money on a 250G or 300G storage drive.

  4. Were you planning on using the 15G drive as an OS drive?

    DV is not difficult to capture - 5400RPM drives do it very well. For a software-only system like Premiere, most of your rendering will be so slow that, again, drive performance will not be an issue. As to memory, DV is so highly compressed and the frames so small that it is very possible there will be no difference between 512MB & 1GB of RAM - not 100% sure on that, as it has been years since I've even fired that software up.

    The real problem you are going to have is pure real estate. It depends on how good an editor this guy is (many folks today just capture everything and then "edit" with the software - bizarre as it seems, "editing" is even "taught" this way by folks who should either know better or be fired) and what sort of source material and final product he is working toward, but even 120G can get cramped pretty fast.

    As a comparison: I just completed two projects, one with 6 hours of raw footage across 10 cameras (= 60 hours of footage) edited down to a final run time of about 3 hours single-screen, and another with 20 hours of two-camera footage (= 40 hours of raw footage, though it was actually 60 because we recorded a live switch, too) edited to 20 hours. Both were live at the same time, so if I were a lousy editor and just dumped it all to HDD I would have needed 120 hours of on-line time just for capture, forget render space, and even if both were just at DV rez (the 6-hour shoot was HD) I would have needed about 1.2T.

    Just for the capture. As a typical example, triple it for render & scratch space.

    The 20-hour project was all in native DV and required about 700-800G, including render & scratch space, so I suppose you might be able to work with about 2-2.5 hours of content total on a 120G drive. I think you could straight-capture maybe 6 hours on it. Oh, and two of the drives for that project were Maxtor 5400RPM drives, and they captured and rendered just as well as the WD SE drives we used. The only time you could see a difference was when transferring between drives.
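    The space numbers above are easy to sanity-check yourself. Here is a rough sketch of the arithmetic - the ~3.6 MB/s DV stream rate and the "triple it for render & scratch" rule are the only inputs, and both are approximations (exact per-hour figures vary a bit with the capture app and audio settings, and the round numbers quoted above were loose estimates):

```python
# Rough DV storage estimator. DV has a fixed data rate of about
# 25 Mbit/s of video plus audio/overhead, roughly 3.6 MB/s total.

DV_MB_PER_SEC = 3.6                        # approximate DV stream rate
GB_PER_HOUR = DV_MB_PER_SEC * 3600 / 1000  # ~13 GB per hour of footage

def capture_space_gb(hours):
    """Disk space (GB) just to capture `hours` of DV footage."""
    return hours * GB_PER_HOUR

def total_space_gb(hours, overhead=3.0):
    """Capture plus render/scratch space, using the rough 'triple it' rule."""
    return capture_space_gb(hours) * overhead

# 120 hours of raw footage, capture only (comes out ~1.5T at this rate):
print(round(capture_space_gb(120)))

# Hours of raw DV a 120G drive could hold, capture only (less in practice,
# since formatted capacity is smaller and you never want a full drive):
print(round(120 / GB_PER_HOUR, 1))
```

    Run the second function with your expected shoot length and you get a quick read on whether a given drive is even in the ballpark before buying.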

  5. The sovereign power in enforcing the international law would be a body made under the aegis of the UN. I never said it should be done without rules. It's not oxymoron at all.


    I told you not to get me started on the U.N.

    First, "International Law" is an oxymoron, and for the reasons I stated above. There is no other reason, and if you can't follow simple logic then a restatement will not help.

    As to the U.N., I will keep it simple and merely state that at least for the US our Constitution is quite clear that there can be no other superior body. As to what you worthless socialist...aaaaaaah, there's just no point....

  6. Why? Is the US above international law? Why won't they get a fair trial where everybody else believes they will get one...?


    In point of fact, there is no such thing. "International Law" is an oxymoron, inasmuch as for there to be "Law" there must be a sovereign, and the fact that it is "International" means that there is in fact no sovereign. The phrase "International Law" is at best a marketing ploy that is an attempt to elevate treaties - agreements between sovereigns - to that of law - dictates of the sovereign to their subjects.

    In the US the people are sovereign, there is no authority to remove that sovereignty and there can be no legitimate "Law" superior to that sovereignty, thus no "International Law" (don't get me started on the U.N.).

    Check and Mate.

  7. private companies being inherently much more efficient at managing it than govt

    Efficient from an economist's point of view but companies have to make profit. People left and right will not be able to get a decent insurance at normal rates. That's what scares me in a fully privatized healthcare system.

    Well, it really does not work that way for the simple reason that it can't. One of the simple and yet beautiful facets of capitalism is that you can not base a winning business model on pricing a product at a level at which your customers can not afford it.

    Can not.

    It is impossible.

    You will go out of business and someone with an appropriate model will win.

    What you may be struggling with is that you may have Ferrari tastes but a Hyundai pocketbook.

    Sorry, the government can not fix that. None can and none ever will.

  8. Part paid for by the state, part privatized seems a good combination to me.

    In the states we call that the "two pockets, same pair of pants" approach. Unless the govt. is printing new money & flooding the money supply to pay for their public policy, there ultimately is only ONE source of the cash, with private companies being inherently much more efficient at managing it than govt.

    And no, if you lose your job* you are not on the street - just the opposite, in fact: if you have no money you get any and all healthcare in the US and pay nothing. It is called "indigent care" and is often better care than the "working poor" receive, who can afford a little care but not all of it. There are a multitude of facets and permutations to this in practical application, but that is more or less the bottom line net result.

    (*And actually if you lose your job your benefits are extended for up to 18 months so it is only after you have been out of work for 1.5 years that you would lose coverage.)

  9. The following is evidently the final word from Granite on the subject. Back to SCSI....

    First, let me say you have done a good job of investigating the Microsoft problem. Cabling is certainly not the only issue... just possibly one, and if you are using our boxes and our cables you will find that we produce the best there is. One other thing I have seen improve problems is the use of a hub. We have also had some added luck with the new FW800 1394B host adapters. Microsoft has still not issued a 1394B driver, but most of the B hosts still work. This new TI chip has a bigger buffer and seems to eliminate some problems.

    As you mentioned there are a variety of items that can cause this error and until we get some new input from Microsoft there will continue to be some mystery associated with it.

    Best regards, frank

  10. This is the reply from Granite. I objected, based on the notion that if nine computers, four devices, and about three dozen cables all exhibited the exact same failure, then it is tough to blame the cable:

    Yes, the Granite products set the max block size to 2048. This is true both for the OXFW911-based products as well as the OXUF922-based products. Granite has been using the 2048 block size setting since the original pilot production in early 2001.

    Note: Setting the block size to a smaller value reduces the maximum throughput which can be achieved.

    Regarding the "delayed write failed" errors, my guess, and this is just a guess, is that there is a data integrity problem on the cable. This could be the result of a poor connection or a poor cabling setup. By changing the max block size to 2048, you halve the number of packets which need to be sent in order to move the same amount of data. It's possible that this statistically reduces the number of failed transfers. Keep in mind that there are automatic retries for corrupted packets in FireWire. The automatic retries may be masking the presence of a high packet failure rate, so failed packets might be happening more frequently than one realizes. Only those transfers which have more failures than the maximum number of retries will appear as "delayed write failures". So, sending fewer packets (i.e., larger packet sizes), or any reduction in the chance of a given packet being corrupted, may result in a system that appears to be working better.

    Again, just a guess. But, I would look at cabling and the quality of connections.
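    Granite's retry argument is easy to make concrete with a toy model. This is only an illustration of the arithmetic in their reply - the per-packet corruption probability and the retry limit below are invented numbers for the sake of the example, not anything measured on the actual hardware:

```python
# A "delayed write failed" only surfaces when a single packet is corrupted
# more times than the retry limit allows; automatic retries hide everything
# below that threshold. With an (assumed) independent corruption chance per
# attempt, the expected number of visible failures scales linearly with the
# packet count - so halving the packet count halves the visible failures.

def expected_visible_failures(packets, p_corrupt, max_retries):
    """Expected packets whose every attempt (original + retries) is corrupted."""
    p_all_attempts_fail = p_corrupt ** (max_retries + 1)
    return packets * p_all_attempts_fail

p = 0.01      # assumed 1% chance a given packet attempt is corrupted
retries = 3   # assumed retry limit

# Same data moved with small blocks (1M packets) vs. doubled block size
# (500K packets), as with Granite's 2048 setting:
small_blocks = expected_visible_failures(1_000_000, p, retries)
large_blocks = expected_visible_failures(500_000, p, retries)

print(large_blocks / small_blocks)  # 0.5 - half the packets, half the visible failures
```

    The model also shows why a marginal cable can look "mostly fine": at these numbers the underlying packet failure rate is four orders of magnitude higher than the visible error rate.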

  11. Thank you, I think I will give this a try. I may wait until the end of the week until I see if Granite comes up with anything, though, as they have indicated that if I use the Oxford software that will negate their custom firmware.

    I do appreciate your follow-up. Given how widely the Oxfords are used - ADS uses them now, and WD came back with "Oxford 911" as their answer to my question to them about their bridges - I am surprised that this is not a more widely known issue. It must be that the usage patterns are just not that intense for most folks and their 1394 drives.

  12. .. I do not recall...I think the cooling may have been the AMD retail solution.


    You got what you deserve by using a desktop retail cooling system in a rackmount system. Recommendations shouldn't be taken out of context.

    The charming and constructive nature of your commentary is noted, as is your attention to detail. Nothing was taken out of context, as these were end-to-end 100% approved AMD solutions - whatever they were. A hundred or so systems ago, I do not recall the precise cooling solution, other than it was "correct". Of course, I already said that in my post. Perhaps a remedial reading course is in order for you, oh Kenny dearest, to improve your desperately flagging reading comprehension?

    Did you even cool the room this box lived in, or was it baking in a closet in an unairconditioned Tucson warehouse?

    Well, if I could get 4000 conference attendees into an "unairconditioned Tucson warehouse" then I think a couple systems would be the least of my worries. You really do not pay attention, do you Kenny? But thanks so much for the oh so helpful commentary, anyway.

    The only thing that has improved is your luck.

    Oh, that's too bad, because other folks' experience, and my own, were starting to convince me that AMD had overcome their at-one-time well-known and widely documented Athlon cooling issues. Not so? Thank you so much for clarifying things for me. I am sure that the poor soul trying to build a computer for the first time found your comments to be illuminating as well.

  13. You were running Athlons in 1U rackmounts?  Also, which core were you using and what cooling did you have?

    No, they were actually in 4U rackmounts. As to the rest, I do not recall, though I think they were just before the "XP" designation. I think the cooling may have been the AMD retail solution. I do recall that the heat was enough to make the system above spontaneously reboot w/o an airspace. I think the mobos were Asus, formerly one of my favorites, though I think they have taken quite a slide quality-wise.

    As I think on it, something has gotten better - I am sufficiently perversely oriented toward preferring the underdog that I realize I did build another AMD box recently. It is one of my home "kick around" boxes, based on the Tyan 400 board. I wanted to check out AMD again, try a Raptor drive (first SATA for me), etc. It has been very stable, and (now that I am sitting at it) the case is cool to the touch. I think it has an XP 2100 or something in it, a G of Crucial memory, and maybe a couple 120G drives in some sort of RAID/JBOD as a data drive.

    To the thread originator: No matter what chip you decide on, it looks like you are somewhat new to system building, so I would strongly urge you to buy the retail version of the chip. Stick with what is in the kit and save advanced cooling etc. for after you at least know how to put in a CPU without mashing the pins.

    And overall I still think that for a noob Intel is probably a safer bet. Why not a P4 2.4 or 2.8 retail kit for little $, nice and safe and cozy with big brother Intel? A good system, more than adequate for the uses described, and an easy intro into box building.

  14. What has happened with heat and the AMDs? For a while, maybe two years ago, I started to move toward AMD, and frankly I am temperamentally predisposed toward them; however, I simply did not find them to be in the same professional/enterprise class as the P4 systems. I had enough problems that I even gave up entirely on three of the systems and junked their CPU/mobo/RAM before even trying to deploy them, and installed Tyan Trinity P4s. For quality mobos I stick with Tyan (or Intel).

    The AMDs tended to be extremely hot*, and nowhere near as stable as the P4s. If I have the output from a box going on a 20' screen with 4000 people watching I need the system to work, perfectly, every time - not to worry about some 2 or 5% performance edge. A blue screen in that instance can run into the six figures, cost-wise.

    And yes, I used all 100% "AMD-approved" stuff - case through memory.

    The original poster seemed to be of a somewhat conservative bent, and so I wonder if a P4 solution might not be better suited to his overall usage.

    *So hot that the exterior of the cases were uncomfortable to the touch. So hot that for the systems to run at all I had to add spacing between rackmount cases that were supposed to be designed for "high density". And reduced density = increased cost, so much for the price edge of AMDs.

  15. They used to ship from their own warehouse - Chicago, I think - now they do like everyone else. A couple years ago I had no trouble getting 20 X15-36LPs from them overnight. More recent orders have not been so prompt - usually delayed at least 24 hours. To me, this places them in a category like Googlegear (ZipZoomFly) - OK as a vendor if you don't much care when you get the items.