JoeTheDestroyer

Member
  • Content Count

    193
  • Joined

  • Last visited

Community Reputation

0 Neutral

About JoeTheDestroyer

  • Rank
    Member

Profile Information

  • Location
    Rolla, MO
  1. JoeTheDestroyer

    Any hope for 15K SATA drives?

    I think this was true up to the release of SAS. We know SAS and SATA are electrically compatible. And the cost of including compatibility for both can't be too onerous, since every SAS host controller has the capability. Why not do the same with the drives? Both Seagate and Hitachi already have experience with SATA drive controllers, so the only real R&D would be including that capability in the SAS drive controller chips (+firmware support). And it's low risk. Unlike WD, which had to build a market for the Raptor from scratch, hoping for it to succeed, this just adds enthusiast sales to the already established enterprise market. What's more, the R&D is essentially a one-off cost, as it can be reused in every following generation and across their entire product line (10k & 15k). And I don't see cost as a real problem either. Honestly, do you think all the RAID 0 Raptor monkeys out there wouldn't jump at the chance for a 15k drive? Even the more moderate gamers (such as myself) would spend $250 on a good video card, so why not the same on a performance hard drive? -JoeTD
  2. JoeTheDestroyer

    Network Fax Appliance - Does it Exist?

    Asterisk can do this as well. See here for an example. However, since it's a full-blown VoIP PBX, I understand it can be somewhat daunting. (I've never used Asterisk myself, just done a little personal research.) -JoeTD
  3. JoeTheDestroyer

    What's the fastest processor available?

    Is there any way you could run multiple instances at once? Such as if you have multiple data sets that need to be run. Then dual-core would gain you a lot. Just a thought... -JoeTD
  4. JoeTheDestroyer

    SiI 3114

    I assume you're talking about recovery, as any kind of RAID won't save you from file corruption, file deletion, that sort of thing (which I imagine you know). For Linux, it's relatively straightforward. If the OS won't boot, you simply use something like the Knoppix LiveCD: boot, load the 'md' module, mount the array and begin recovery. The beauty of the Linux md setup and storing the configs on the drives is that you can move the drives around, to a different controller or even to a different machine, and still have them mount just fine. -JoeTD
  5. JoeTheDestroyer

    Gigabit Switch Recommendations

    4 at the moment, though I expect that number to grow. The problem is I expect that all of them will be communicating heavily w/ the file server, so I want the possibility of expanding the available bandwidth between it and the rest of the network. Any manufacturer-level experience (D-Link vs. Linksys, etc.)? I assume you mean something like D-Link's DGS-3024? ~$600 is more than I really wanted to spend... Thanks, -JoeTD
  6. JoeTheDestroyer

    Gigabit Switch Recommendations

    Well, I've decided to build myself a file server. I won't get into the specifics of that, but the first step I need to take is upgrading my network to gigabit. Now I've read the other threads about Jumbo Frames, VLAN and all that stuff. Thus, I've concluded what I need is a switch with the following features:
      • Gigabit
      • Jumbo Frames
      • VLAN
      • 24 port
      • (relatively) inexpensive
      • Trunking (Link Aggregation)
    In past threads, the only qualifying product was the Dell PowerConnect 2724. At the time, reports on it were not good, but firmware updates may have changed this. Also, I've recently noticed that other manufacturers are offering similar products. This is the list I have:
      • Dell PowerConnect 2724 ~$260 (from Dell)
      • D-Link Web Smart DGS-1224T ~$290 (from here)
      • Linksys SRW2024 ~$425 (from here)
      • SMC SMCGS24-Smart ~$340 (from here)
      • Netgear ProSafe GS724T ~$330 (from here)
    What I would like is anyone's personal experience with any of these units, or recommendations for units with similar features and price that I have missed. Thanks! -JoeTD
  7. JoeTheDestroyer

    WinXP, 1GB RAM, What Size Page File?

    While this is technically true, it may not tell the whole story. In each page table entry, there is a bit that indicates whether the page is resident or not. When resident, the rest of the bits indicate the physical address of the page. When not resident, the rest of the bits are don't-cares and the OS may use them however it pleases. It is my understanding that most OSs use these bits as an index into the page file. This seems to fit, as that would make the maximum addressable page file size equal to the maximum addressable RAM size, 4GB. Since PAE increases the number of bits in each page table entry, that would make it mostly painless to increase the max page file size too. Hmmm, I think I found at least one answer. Specifically: (DEP is Data Execution Prevention) The only real downside is that each page table entry is bigger, so the page table chews up more memory. I doubt there's really enough of a difference to matter, though. Besides, since it is required for DEP (which is, by all accounts, A Good Thing), I wouldn't turn it off unless it was really bad. -JoeTD
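    The arithmetic behind that reasoning can be sketched in a few lines. The 20/12 bit split and 4 KiB page size are the classic x86 layout; treating every non-flag bit as a page-file index is an assumption made purely for illustration, not a documented Windows layout:

    ```python
    PAGE_SIZE = 4096                 # 4 KiB x86 page
    PTE_BITS = 32                    # classic (non-PAE) page table entry width
    FLAG_BITS = 12                   # low bits: present, dirty, accessed, etc.

    # When the present bit is clear, suppose the remaining high bits index
    # slots in the page file (illustrative assumption, not a documented layout)
    index_bits = PTE_BITS - FLAG_BITS              # 20 bits
    max_page_file = (1 << index_bits) * PAGE_SIZE  # 2^20 pages * 4 KiB each

    print(max_page_file // 2**30)  # -> 4 (GiB), same as the max addressable RAM
    ```

    Under that assumption the limit falls out exactly as described: 2^20 indexable pages times 4 KiB per page is 4 GiB, matching the 32-bit physical address space.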
  8. JoeTheDestroyer

    WinXP, 1GB RAM, What Size Page File?

    There wasn't any trick, I typed 8192 into the boxes, clicked Set and WinXP didn't complain. Also, there's only one drive in the laptop and it has 512MB of RAM. There is only one Windows-accessible partition (the others are Linux stuff). If you like, I'll post screen shots of the "Virtual Memory" dialog and the properties dialog for the page file itself. A thought occurs: is the machine you tested on PAE (Physical Address Extension) capable? If it is, it should be listed in the System Properties dialog along with processor type and speed. If not, this might explain the discrepancy we're seeing. However, I don't know enough about PAE to say for sure. -JoeTD
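    On the Linux side of a dual-boot machine you can also check for PAE by looking for the `pae` flag in `/proc/cpuinfo`. A minimal sketch; the sample text here is made up for illustration, and on a real machine you'd read the file itself:

    ```python
    # Hypothetical excerpt of a Linux /proc/cpuinfo "flags" line
    sample = "flags\t\t: fpu vme de pse tsc msr pae mce cx8 apic"

    def has_pae(cpuinfo_text):
        """Return True if any 'flags' line lists the pae CPU feature."""
        for line in cpuinfo_text.splitlines():
            if line.startswith("flags"):
                return "pae" in line.split(":", 1)[1].split()
        return False

    print(has_pae(sample))  # -> True
    ```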
  9. JoeTheDestroyer

    WinXP, 1GB RAM, What Size Page File?

    Really? Sorry, couldn't resist... (This is a single 8GB page file on my Athlon 64 laptop running WinXP 32-bit.) -JoeTD
  10. Source Maybe I'm having a brain fart, but I don't think I understand this paragraph. The only time a page is purged from the page file is when it is de-allocated. Windows (or any other OS) can't just arbitrarily unload pages, because it can't presume to know where to get the data again. The only real counter-case is with code, but actually that isn't true either, as code is loaded using memory-mapped I/O. If you don't know what that is (if you do, then this is just for others' education), basically the OS treats a file opened for memory-mapped I/O as a mini page file. Thus it doesn't require any allocated space in the real page file and can be unloaded at will, because the OS knows exactly where to get it again later. Still, in general this still falls under "know thy workload". I rarely run more than one hungry app at a time, so I can usually ignore this particular effect. Also note that the source is talking about server workloads, which have vastly different requirements and expectations than a desktop workload. -JoeTD
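    For the curious, memory-mapped I/O is exposed directly to programs as well; a minimal Python sketch of the idea (the file is a throwaway temp file created just for the demo):

    ```python
    import mmap
    import os
    import tempfile

    # Write a small file, then map it into memory. The OS pages the contents
    # in on demand; clean pages can be discarded and simply re-read from the
    # file later, so no page-file space is needed to back them.
    fd, path = tempfile.mkstemp()
    os.write(fd, b"hello memory-mapped world")

    with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as m:
        first = bytes(m[0:5])
    print(first)  # -> b'hello'

    os.close(fd)
    os.remove(path)
    ```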
  11. I completely agree. That's why my first question whenever this subject comes up is what the machine is used for. For gaming machines, there's not really much harm if memory runs out. Also, in my experience, games tend to have a mostly static memory footprint, so it's easy to find out if you have too little memory. However, for machines that have real work done on them, personal or professional, I can't see the risk being worth any possible benefit. -JoeTD
  12. This is, by far, the best advice. Agreed, and no matter how many times I swear I'll stay out of the next one, it never happens. If you will, allow me to tell you a tale about a man, let us say Bob. Bob bought a computer. Following the advice of his friendly local storage forum, he got 1GB of RAM and set his page file to 1GB. For years and years the computer served him faithfully and he had no problems and few complaints. In fact, his only complaint was that sometimes his games would stutter a little and he noticed the hard drive light flashing. He asked his friendly local storage forum about the problem and was informed that his computer had too little RAM and was making up for it with the page file. Now RAM was expensive, so Bob decided to let it go. Years later, Bob noticed that RAM was cheap now, and decided to upgrade. So now Bob has 2GB of RAM. But he wasn't sure what to do with his page file. The advice says he should increase the size, but Bob doesn't understand why he needs more "backup" memory when he just added more main memory. Shouldn't he need less "backup" memory? -JoeTD
  13. Nope, perfectly serious. To be honest, it's not much less reliable than your sources. One of them was from a guy's personal Comcast account! And while the Microsoft ones are valid, the only relevant part was the warning I quoted. And it doesn't say you can't disable the page file, it only recommends against it, which could just be them covering their butts. Furthermore, even if I treat those other sources as credible, all of them are still opinion and reasoning, no facts. As far as Wikipedia goes, I won't argue the reliability in general, but this particular article is accurate, at least with regards to the information I was supporting (virtual addressing). If you don't wish to believe me, I suggest Operating Systems by William Stallings. First of all, if I have more memory than is being used (even by Windows "optimizations"), why would I not want it to come out of RAM? Second, Windows XP employs a technique called lazy allocation: Windows never actually allocates storage for a page until it is actually used (read or written). So, this whole argument is bogus anyways. ------------------------------------------------------------- The whole point here (as always with this argument) is that we all have opinions and personal experience, but there aren't really any facts in evidence, only more opinions and reasoning. If you wish to state your opinion that turning off the page file is a bad idea, that's fine. If you wish to provide other supporting opinions, that's fine too. Just don't try to pass it off as fact. Now, my opinion is that turning the page file off is OK in some cases. My best recommendation is that if you have to ask if it's OK, then don't do it. -JoeTD BTW, if you happen to dig up the performance tests you were talking about, I'd really like to see them.
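    Lazy (demand) allocation is easy to see with an anonymous memory mapping. A minimal sketch of the general idea, demonstrating the OS concept rather than WinXP's internals specifically:

    ```python
    import mmap

    # Reserve 64 MiB of anonymous memory. On most OSs this only sets up
    # address-space bookkeeping; physical pages (or page-file slots) are
    # allocated lazily, one page at a time, as they are first touched.
    size = 64 * 1024 * 1024
    buf = mmap.mmap(-1, size)

    buf[0:4] = b"\xde\xad\xbe\xef"  # touching this page forces allocation of it alone
    tag = bytes(buf[0:4])
    print(tag.hex())  # -> deadbeef
    buf.close()
    ```

    The rest of the 64 MiB never gets backing storage unless something reads or writes it, which is exactly the "never allocates storage for a page until it is actually used" behavior described above.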
  14. Anyways, my apologies to the OP for getting a little off topic. With 2GB of RAM, I think you'd be fine with no page file. But if you use heavy programs and/or do serious work on this machine that might be lost due to an allocation error, don't risk it. In other words, if all you do on this machine is play games, then what's hurt if you run out of memory? The game crashes, you turn the page file on, reload and continue on. However, if this is a work machine, the little bit (if any) of performance gain isn't worth the risk. If you decide you need a page file, bennt is absolutely right, put one on both drives. -JoeTD
  15. Trinary, Since you called my bluff, I'll show my hand: <snip> Support Info 3 Need I say more? I got a shovel you can borrow if you decide you want to dig yourself out of the hole you are in.... Interesting, then, that one of your own sources (from Microsoft no less) states: Why would they recommend against disabling the paging file if you couldn't do it? The problem here is that you (and your sources) have mixed and muddied the terminology.
      • Virtual addressing: An address mapping that decouples the apparent location of data from its physical location.
      • Page: Mapping memory on a byte-by-byte basis would be terribly inefficient, so most virtual addressing schemes use a higher unit of granularity called a page (4kB on x86). Sometimes page is used to describe only the virtual side, while frame is used to describe the physical side.
      • Paging: Since the apparent and physical locations have been decoupled, we can use virtual addresses to reference data that is not actually in physical memory. When an access to this address is made, a page fault occurs. Paging is the action of loading the data from wherever it physically resides (often a file, but not necessarily) into a physical memory frame and adjusting the virtual address map to point to the new location. Paging also refers to the reverse process of unloading a page and freeing a frame for other use.
      • Page file: Common name for a file that pages can be stored in when not residing in physical memory.
      • Virtual memory: Unfortunately, this term has many confused definitions. It can be used to describe the accessible range of the virtual address space. It can also be used to describe the apparent extra memory resulting from the use of a page file. (Because of the confusion, I will not use this term.)
    In WinXP, you cannot turn off virtual addressing or paging. However, you can turn off the page file, with no ill effects (other than if you need the extra memory).
I'll say it again, for clarity: Windows (and Linux and most other modern operating systems) require paging but do not require a page file. If you wish a reference, here. As for whether you should disable the page file, that is something that has been debated here many times with no real conclusion being reached. I'm of the opinion that disabling the page file is OK, if you know you have enough memory. In fact, Windows XP will create a temporary page file if you run out of memory, but allocation failures will occur until that file has been established, so it's not completely safe. -JoeTD
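The page/frame split in the definitions above comes down to a few lines of arithmetic: with 4 KiB pages, the low 12 bits of a virtual address are the byte offset within the page and the rest are the virtual page number (the example address here is arbitrary):

```python
PAGE_SHIFT = 12  # 4 KiB pages on x86

def split(vaddr):
    """Split a virtual address into (virtual page number, byte offset)."""
    return vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)

# The page table maps the virtual page number to a physical frame when the
# page is resident, or records where to find it (e.g. a page-file slot) when not.
page, offset = split(0x00403A7F)
print(hex(page), hex(offset))  # -> 0x403 0xa7f
```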