BarryKNathan

Member
  • Content Count: 378
  • Joined
  • Last visited

Community Reputation

0 Neutral

About BarryKNathan

  • Rank
    Member

Profile Information

  • Location
    Irvine, CA
  • Interests
    computers :)
  1. BarryKNathan

    Need suggestions on a used cheap Mac to get

    You could start by looking at the Mac OS X Panther (10.3) system requirements, the Mac OS X Tiger (10.4) system requirements, and the iLife '04-'06 system requirements. So, for running OS X and iLife '06, any G4 Mac with at least 256MB of RAM, a hard drive, and a DVD-ROM drive should do in theory.

    You'll also want at least 20GB of hard drive space. Well, you probably could get by with a 10GB or 13GB drive, but there might not be enough room for a full iLife installation. In fact, if you want to make certain you have enough "breathing room," so to speak, get at least 30GB. In any case, anything less than 10GB would be like trying to run WinXP on a 4GB drive.

    Regarding RAM, the conventional wisdom is to have at least 512MB, but the big speedup is in having at least 384MB, and I think that should be fine (as long as you don't run tons of programs simultaneously and you don't make heavy use of Fast User Switching). For light use, 320MB might not be much slower than 384MB. If you are sufficiently patient, you may be able to get by with 256MB of RAM.

    If you get a G3, you're giving up compatibility with iLife '06. (So if you get a G3, you'll need to make sure that it comes with iLife or that you can obtain iLife '04 or '05 separately.) I also suspect that Leopard (OS X 10.5) will be incompatible with G3's when it comes out. Furthermore, at a given clock speed, a G3 is going to feel considerably slower in OS X than a G4. (For instance, my experience is that a 450MHz G4 with 384MB of RAM feels much faster than a 500MHz G3 with 1GB of RAM.) Another way of looking at it is that YouTube is (for the most part) usable on G4's but a slide show with a soundtrack on G3's. So, a G4 would be much better than a G3, but a G3 may also do if the limitations (no iLife '06, probably no Leopard, really slow) are not a problem for you.

    If you decide to get a G3, make sure it's compatible with at least Panther (10.3). (Check Apple's system requirements page if you're not sure.) Also make sure not to get an iMac with a tray-loading (as opposed to slot-loading) optical drive. While these older G3's (that is, tray-loading iMacs and G3's which are not supported by Panther) can technically run OS X, they have all sorts of weird quirks that could cause trouble (such as unaccelerated video drivers that are so painfully slow that they inspired a class action lawsuit, and firmware that can only read data from the first 8GB of the hard drive).

    A few final notes: People seem to disagree about whether Panther or Tiger is faster on old Macs. Personally, I've found Tiger to be considerably faster. Whichever is faster for you, though, there are other reasons to prefer Tiger. Apple stopped producing security updates for Jaguar (10.2) around the release of Tiger (10.4), so it's possible that they might similarly drop security support for Panther (10.3) when Leopard (10.5) is released. Already, Apple has updated 10.4 for the DST changes, but not 10.3 (sort of the way that XP has a patch on Windows Update but Win2K doesn't). I guess what I'm saying is that if you care about being able to get security updates down the road, Tiger would be better than Panther.

    Man, that ended up being a bit longer than I intended. Anyway, I hope it helps...
  2. BarryKNathan

    OS X reporting incorrect drive size

    The drive that's reporting at about 278GB is probably working correctly. OS X reports drive capacity using binary gigabytes (a.k.a. gibibytes) instead of the decimal gigabytes used by hard drive manufacturers. 300 decimal GB should be about 279 binary GB if I've done the math correctly.
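    For what it's worth, the arithmetic is easy to check; here's a quick Python sketch using just the numbers from this thread:

    ```python
    # Decimal GB (what the drive maker advertises) vs. binary GB / GiB (what OS X reports).
    advertised_bytes = 300 * 10**9        # "300GB" on the label = 300 billion bytes
    reported_gib = advertised_bytes / 2**30

    print(round(reported_gib, 1))         # ~279.4, right in the neighborhood of the ~278GB reading
    ```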
  3. BarryKNathan

    S.M.A.R.T. 0x72 (bad device!) error...

    DFT will usually offer to "repair" your drive by erasing the bad sectors or the entire disk. If it didn't give you that option, or if it gave you the 0x72 error after you chose one of the erasure options, then DFT concluded that remapping the bad sectors would be no more effective than rearranging the deck chairs on the Titanic. I just noticed that there is DFT documentation (PDF format) which contains descriptions of each error code, among other things. 0x72 is "Device S.M.A.R.T. Error." It could be useful to see the SMART values, which would give us insight into why DFT believes the drive is at death's door, but it's extremely likely that the drive is simply a goner at this point.
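    If you want to look at the raw SMART values yourself, one way (separate from DFT) is smartmontools' smartctl. A rough sketch, assuming smartmontools is installed on a Linux box and the drive shows up as /dev/hda (adjust the path for your setup):

    ```python
    # Print the SMART attributes most likely to explain a 0x72 "Device S.M.A.R.T. Error",
    # using smartctl from the smartmontools package. /dev/hda is just an example path.
    import subprocess

    out = subprocess.run(["smartctl", "-A", "/dev/hda"],
                         capture_output=True, text=True).stdout

    for line in out.splitlines():
        # Reallocated and pending sector counts are the usual red flags on a dying drive.
        if "Reallocated" in line or "Pending" in line or "Uncorrectable" in line:
            print(line)
    ```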
  4. BarryKNathan

    IBM deathstar 75gxp help with firmware update

    The firmware update can be hacked to apply (I've done this for a 30GB 75GXP which I still use -- sectors were continuously going bad for the drive's original owner, and for me before I applied the update). What's the current firmware version (all 8 characters) on the drive? With that info, I might be able to remember how I hacked the firmware updater & I might be able to provide instructions or something...
  5. BarryKNathan

    Weird HDD errors occurring - need help solving

    I've seen this type of problem a couple of times. Once it was the motherboard (it affected all drives, IDE and SCSI, even including optical drives), and once it was caused by putting a master and slave of two different brands on the same IDE cable (it only affected one of the drives on that cable, but I forget if it hit the master or the slave). I've heard of a case where it was caused by the drive failing, but I haven't seen that in person, and I don't think that's likely in your case because you're getting corruption on both drives. I see that you mention both "master/slave" and "SATA" in your post. This is a contradiction -- SATA doesn't have masters and slaves. I guess you meant primary and secondary, or something to that effect -- in any case, if it's SATA, the master/slave problem can't exist. So, like you, I would suspect the motherboard.
  6. I very highly doubt that the problem is the drive itself. I'll reiterate qawsedrftgzxcvb's question: What kind of IDE controller are you using? Some older controllers (particularly ALi) can only access beyond the 137GB barrier if DMA is disabled. I don't know what Windows does on these controllers, but Linux responds by enabling DMA for accesses below the barrier and disabling it for accesses beyond the barrier, which would cause the kind of speed problem you are seeing. Perhaps Windows does something similar. (Or maybe whatever big-drive enabler you used is causing the problem. Windows XP SP2 has built-in support for big drives.)
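    As an aside, the "137GB barrier" itself is just the 28-bit LBA limit; a quick Python check of the arithmetic:

    ```python
    # Where the 137GB barrier comes from: 28-bit LBA addressing with 512-byte sectors.
    max_sectors = 2**28              # number of sectors addressable with 28 bits
    limit_bytes = max_sectors * 512

    print(limit_bytes)               # 137438953472
    print(limit_bytes / 10**9)       # ~137.4 decimal GB
    print(limit_bytes / 2**30)       # 128 binary GB (GiB)
    ```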
  7. BarryKNathan

    Linux 2.6 I/O Scheduler

    Oops, I just noticed this: Ok, that answers my question. Here goes; hopefully this info will help:

    I don't remember what situation no-op was created for, but I think it's for a specific type of kernel debugging or something equally specialized. Basically, if you don't already know that you need to use no-op, you should probably forget that it even exists.

    That leaves us with the three real schedulers: deadline, anticipatory, and CFQ. The deadline scheduler is, by far, the most similar to what kernel 2.4 uses. It's probably also the most mature. So, if you want stability above all, performance be damned, then I'd recommend using the deadline scheduler. However, I would only do that in situations where truly extreme paranoia is warranted (unless it's really a workload where deadline is the best scheduler, of course). In the real world, unless you have critical uptime requirements, I would pick the best scheduler and only use 2nd-best if the best choice takes the system down -- because for some workloads, using deadline instead of anticipatory or CFQ is almost as bad as taking the whole system down.

    If you're running some kind of database workload, or other stuff that involves tons of random accesses, the conventional wisdom is that deadline is best. If any scheduler could possibly beat deadline for this workload, it's CFQ. Anticipatory is going to be comprehensively and completely inferior for this type of workload, in any case (well, unless it gets tremendously improved by the kernel developers, but I'm going to discuss the present since I can't foresee the future).

    The other workload where anticipatory is highly inferior is anything that does lots of swapping. If you MUST use anticipatory on such a box (e.g. a desktop that doesn't have enough RAM), then dedicate an entire hard drive just to the swap partition so that you can use a different scheduler (I guess CFQ would be the best choice, but I'm not certain, and deadline would also be good) on that drive alone. It's probably easier to just avoid anticipatory, or to add more RAM. Anticipatory's performance penalty on swap partitions is huge -- in and of itself, it was sufficient grounds for Red Hat to decide that anticipatory simply could not be the default scheduler in their kernels.

    However, anticipatory excels when the workload has an intermittent stream of writes competing against a large continuous stream of reads (or vice versa), such as web browsing while burning a DVD, or (especially) trying to do anything while a huge file is being written. In these kinds of workloads, anticipatory can increase performance by (in extreme cases) one or two orders of magnitude when compared to deadline. That, combined with the fact that CFQ didn't make it into kernel 2.6.0, is why anticipatory is the default scheduler for the mainline kernels.

    On workloads with competing streams (i.e. what I described above), CFQ probably falls somewhere between deadline and anticipatory, although much closer to anticipatory, especially on more recent kernels. In fact, on very recent kernels, CFQ might even be able to beat anticipatory at its own game, but I don't know for sure one way or the other. In any case, CFQ doesn't have anticipatory's weaknesses (at least, not that I've seen). So, for general-purpose stuff (including typical desktops, and probably typical servers), CFQ is the best choice, and in fact, it's what Red Hat ships as the default choice in their kernels.
    This post ended up being longer than I expected, so I'll try to summarize here:

    • Database servers: 1st choice is deadline, 2nd is CFQ, distant 3rd is anticipatory.
    • Desktop/workstation with too little RAM: 1st choice is CFQ, distant 2nd is deadline, even more distant 3rd is anticipatory.
    • Desktop/workstation with lots of RAM: 1st is CFQ, close 2nd is anticipatory, and deadline is a distant 3rd. If there's lots of CD/DVD burning simultaneous with other heavy activity, however, anticipatory may still be the leader, especially if the kernel isn't bleeding-edge.
    • General purpose server with lots of sequential access to large files (e.g. a file server with large files and a small number of clients): 1st is CFQ, 2nd is anticipatory, somewhat distant 3rd is deadline.
    • General purpose server with only small files, or where large files are always accessed randomly: 1st is CFQ, 2nd is deadline, 3rd is anticipatory.

    CFQ isn't quite a silver bullet scheduler, at least not yet, but it seems like it's slowly getting closer and closer as time goes on... Perhaps other people will have different experiences and will disagree with me somewhat, but hopefully my advice is close to being on the mark. I hope this helps.
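    For anyone who wants to experiment: on 2.6 kernels the scheduler can be read and switched per-disk at runtime through sysfs (the system-wide default can also be chosen with the elevator= boot parameter). A minimal sketch, assuming the disk in question is sda and you're running as root:

    ```python
    # Show the available I/O schedulers for one disk and switch it to CFQ via sysfs.
    # On 2.6 kernels the active scheduler is shown in [brackets], e.g.
    # "noop anticipatory deadline [cfq]".
    dev = "sda"                                      # example device name
    path = "/sys/block/%s/queue/scheduler" % dev

    with open(path) as f:
        print(f.read().strip())                      # list of schedulers, active one bracketed

    with open(path, "w") as f:                       # needs root; takes effect immediately
        f.write("cfq")
    ```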
  8. BarryKNathan

    Linux 2.6 I/O Scheduler

    I've used different I/O schedulers for different situations, although I'll have to elaborate after Christmas when I have more time. One interesting thing to keep in mind is that the schedulers' relative performance has changed as the 2.6 kernel has been improved. For instance, some workloads perform best with CFQ on newer kernels but might perform better with anticipatory or deadline on older 2.6 kernels. BTW, what kind of information are you looking for? (Guidelines for what scheduler to use, in general? Guidelines for a particular kernel release? Real-world anecdotes of situations where changing schedulers helped? Something else that I haven't thought of?)
  9. BarryKNathan

    Thermonuclear Cheetah?

    Really good point, and one that would have been more obvious to me if I had slept before posting... I forgot to mention in my previous post that sda more or less disappeared off the SCSI bus until the whole system was power-cycled, and that's what prompted me to check the temperatures in the first place. Maybe I should consider that a warning (although all three drives seem to be operating properly at this point). Thanks very much for the advice.
  10. BarryKNathan

    Thermonuclear Cheetah?

    This is real output from a real Linux box, no kidding: These are IDLE temperatures. If they're not idle, sda and sdc go to 35-38 degrees Celsius, sdb goes to 115!!

    Unfortunately, in this particular instance, I can't open the box up without shutting it down and moving it first. Otherwise I would have opened it up and taken a look before posting here. Anyway, I'm wondering what the rest of you think (don't worry about answering everything, just answer what you feel like):

    a. How likely is it that 114-115 deg. C is a real temperature, as opposed to some kind of thermal sensor malfunction or something? (Keep in mind that all three drives are the same model, same firmware, and same monitoring software, while 2 temperatures look sane and 1 looks insane. Also, when smartctl fails to get a SCSI drive's temperature properly, in my experience the temperature reading is more like 0 C or 127 C. 114-115 is far enough off from 127 that it looks like it could be a real measurement.)

    b. If it's some kind of temp. sensor malfunction, is this worth worrying about?

    c. If this is a real temperature, then is there any significantly increased risk of fire? And how bad does the ventilation need to be for this type of drive (Cheetah 73LP) to get this hot, anyway? Assuming the case has no plastic parts near the drive, is there any significant risk that metal inside the case will get damaged? If the case uses plastic mounting brackets or anything like that, will they melt?

    d. The drive seems to store and retrieve data fine (i.e. no bad sectors and no performance problems). How long is this likely to continue at 114 C? How long can the drive even keep spinning at that temp? Can I afford to wait 24-48 hours or does it need to be shut down NOW?

    e. Given these temps, is it likely that sdb will fry either sda or sdc?

    f. Am I taking this too seriously? Not seriously enough? Both??

    Anyway, this is one of the most weird-yet-serious things I've encountered in who-knows-how-long, so I'd be interested in knowing what other people think...
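    (For anyone who wants to watch the same numbers on their own box: the readings come from smartctl. A rough way to poll them, assuming smartmontools is installed and the same /dev/sd* names -- the exact wording of the temperature line varies by drive:)

    ```python
    # Poll the temperature smartctl reports for each of the three drives in this box.
    # On SCSI drives the value usually appears on a "Current Drive Temperature" line,
    # but the exact wording varies, so match loosely.
    import subprocess

    for dev in ("/dev/sda", "/dev/sdb", "/dev/sdc"):
        out = subprocess.run(["smartctl", "-a", dev],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Temperature" in line:
                print(dev, line.strip())
    ```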
  11. BarryKNathan

    Screen resolution won't save

    I'd also be interested in knowing how to solve this. I haven't experienced this myself, but I know someone who is having this problem and wants me to fix it for him...
  12. BarryKNathan

    SCSI drive issues, random disconnects

    I would definitely disable write caching to begin with. AFAIK, delayed write errors are write errors that didn't get reported back to the OS immediately because of write caching; the situation's likely to be less complicated and easier to diagnose with write caching disabled. I think it's much more likely to be a dying drive than a cable or controller problem. (Or maybe just an overheating drive.)
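    (If it helps, one way to turn the write cache off on a SCSI disk under Linux is sdparm. A rough sketch, assuming sdparm is installed and the drive is /dev/sdb; note that without sdparm's --save option the change may not survive a power cycle:)

    ```python
    # Disable the write cache (the WCE mode-page bit) on a SCSI disk by shelling out to sdparm.
    # /dev/sdb is just an example; run as root.
    import subprocess

    subprocess.run(["sdparm", "--clear=WCE", "/dev/sdb"], check=True)  # turn write caching off
    subprocess.run(["sdparm", "--get=WCE", "/dev/sdb"], check=True)    # verify it now reads 0
    ```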
  13. Even though it's not an IBM drive, you might want to try using the IBM Feature Tool to see if it can increase the drive to full capacity. (Like the AAM stuff, the ATA capacity clipping commands are part of the standard, so that's why the IBM utility happens to work with other brands for that too.)
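    (The clipping in question is, as far as I know, the standard SET MAX ADDRESS / Host Protected Area mechanism. If the drive happens to be in a Linux box, a reasonably recent hdparm can at least show whether it's clipped before you reach for the Feature Tool; a rough sketch, with /dev/hda as an example device:)

    ```python
    # Check whether a drive's capacity has been clipped (Host Protected Area) using hdparm -N.
    # /dev/hda is just an example; requires a reasonably recent hdparm.
    import subprocess

    out = subprocess.run(["hdparm", "-N", "/dev/hda"],
                         capture_output=True, text=True).stdout
    print(out)   # prints something like "max sectors = X/Y, HPA is enabled" when clipped
    ```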
  14. BarryKNathan

    Disappearing hard drive

    I've heard that cable select can be dodgy. I've never experienced that myself, but it would be good to try explicitly setting the jumpers to master/slave on the two drives, and see if the problem happens again.
  15. BarryKNathan

    IBM 120Gb

    The newer IBM drives do make some weird noises. Is it more of a "screech" or a "clack"? IME, if it's screeching it's almost certainly normal; if it's clacking, it's not 100% normal, and possibly a cause for concern, but not necessarily a death sentence for the drive either.