Michal Soltys

  1. A friend of mine had a rather nasty adventure recently: his new Plextor drive (PX-256M6Pro, fw 1.03) literally wiped itself clean while shutting down Win7 (the shutdown ended in a hang - so the data was probably already gone at that point - and he had to do a hard reset). The motherboard on which it happened was a GA-Z68MX-UD2H-B3 (with the latest BIOS as well). The disk is completely clean from beginning to end, as if it had never been used - or as if its whole area had been spuriously treated with TRIM commands. I was wondering if anyone has experienced anything like that with recent SSD models (maybe some incompatibility with older chipsets? The motherboard is not exactly the newest one). Somewhat extensive googling didn't return much - though we found a reference to the very same case on a Polish hardware forum (http://forum.benchmark.pl/topic/172960-plextor-m6pro-gubi-dane/) and some references to other similar cases (e.g. here).
  2. Too bad WDC bought HGST... I guess we'll soon be able to choose only between Seagate and WDC (and eventually Samsung)...
  3. Michal Soltys

    4kB sector HDDs reporting 4kB sector

    You're simplifying things far more than they really are. The major pain would be boot drives - starting with BIOSes capable of understanding 4kB drives, then either providing 512B -> 4kB translation (1) or going native (2). In the latter case, you automatically make the old int13h interface incompatible, relying only on extended int13h. In reality, BIOSes are often bugged as hell (if you're curious, get the sources of e.g. syslinux and read the comments and code inside the functions doing I/O). The potential for bugs in (1) is probably astonishing, relative to the stupidity BIOS makers are capable of, or the concepts they have and sadly implement.

    If you go (2), then you need MBR code (plus eventually more exotic things like MBR->GPT handover) capable of dealing with a 4kB drive as well. Then you need a boot manager/loader/whatever capable of handling 4kB drives. If you go (1), the MBR + loader don't need any changes, but then at the OS level you have a partition layout (in 512B units) completely mismatched with how the OS sees the drive (in 4kB units).

    In the context of M$ Windows (all of them) and BIOS, you can forget anything that involves booting from a 4kB drive and/or GPT. Your only choice is a board with EFI firmware, which should be 4kB ready (assuming current Windows versions are, in the context of booting from such a drive, and I won't bet my head they are). With non-boot drives the case is simpler (assuming the BIOS won't hang when it sees a 4kB-native drive, or if you hotswap the drive after booting) - even 2000/XP should be perfectly fine here.

    The 512B "emulation" (which is barely one) provided by the drives is really a pretty optimal solution, all things considered, and it comes at practically no cost (assuming a reasonable user and a non-ancient OS). It's rather Microsoft and the BIOS coders you should blame, if you /really/ want to blame someone/something. The real thing to blame is that the magical 512 bytes have been assumed and hardcoded in almost everything since the 1980s.
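To picture what the 512B -> 4kB translation in option (1) costs, here's a toy sketch (the function and numbers are purely illustrative, not from any firmware): a 512-byte request that doesn't cover a whole 4kB physical sector forces the drive into a read-modify-write.

```python
# Sketch: 512B logical -> 4kB physical translation (illustrative only).
LOGICAL = 512                 # sector size the OS/BIOS sees
PHYSICAL = 4096               # native sector size of the drive
RATIO = PHYSICAL // LOGICAL   # 8 logical sectors per physical sector

def translate(lba512, count=1):
    """Map a 512B-LBA request to the physical 4kB sectors it touches,
    and report whether the drive must do a read-modify-write."""
    first_phys = lba512 // RATIO
    last_phys = (lba512 + count - 1) // RATIO
    # Aligned means: starts on a physical boundary and covers whole sectors.
    aligned = (lba512 % RATIO == 0) and (count % RATIO == 0)
    return first_phys, last_phys, not aligned

# A single 512B write to LBA 63 (the classic DOS partition offset) lands
# mid-sector and forces a read-modify-write:
print(translate(63, 1))   # -> (7, 7, True)
# An aligned 8-sector (4kB) write starting at LBA 64 does not:
print(translate(64, 8))   # -> (8, 8, False)
```

This is also why misaligned partitions (the old 63-sector offset) hurt so badly on 512e drives.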
  4. That wasn't the issue here - the "good stuff" starts around page 3 of this thread.
  5. No, my point is that they are likely tired of people reminding them about their screwup, and unless the publicity gets really, really bad, they will never officially and publicly admit the fault. The typical stance a random company takes these days when there are problems with its products is that there are no problems with its products. It's "user fault" / "user's computer fault" / "we don't support linux" / "the product is not used the way it's supposed to be" / "we have years of experience" / etc. The firmware got released, as far as I can see, accidentally, by some support guy with a soft heart who wanted to do the right thing. The problem was active for what - 1.5 years? Now it's a thing of the past, as any freshly produced VelociRaptor has the upgraded firmware anyway. WDC just waited quietly. If someone occasionally runs into the problems again - seriously, who gives a damn. IMHO that's what I meant in my previous post.
  6. Well, I'd guess their official stance is that the problem doesn't exist. Unless it gets /really/ public and ugly, I'm sure it will stay that way...
  7. I'd buy HGST drives - which I started doing recently, whenever I need a new one.
  8. Michal Soltys

    ICH10R RAID5 silent data corruption

    Some questions:

    - How quickly can you reproduce the corruption?
    - How long did you run memtest? I often like to run only tests 1, 2 or 5 for hundreds of passes, as they are relatively quick and go over all memory (contrary to e.g. the random-generator-based tests, which are rather slow).
    - In the past, I've found GoldMemory able to find errors far earlier than memtest ever could.
    - With IntelBurnTest (http://www.xtremesystems.org/forums/showthread.php?t=197835) I managed to find problems where neither GM, MT nor Prime95 could find any (in OC scenarios).
    - A simple copy/compare of files in a loop between different disks - with enough data to make sure the cache won't suffice (I'm not sure how to do the equivalent of linux's echo 1/2/3 > drop_caches under Windows) - produced interesting results, especially when run alongside IBT (again, in OC scenarios).

    Warning: IBT rapes the CPU, literally. You will likely see temperatures you've never seen before.
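To make the copy/compare idea concrete, here's a minimal sketch (the paths and helper names are made up); on a real test you'd point src and dst at different physical disks and use data sets larger than RAM so the cache can't mask corruption:

```python
import hashlib
import os
import shutil
import tempfile

def sha256(path, bufsize=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src, dst):
    """Copy src to dst and compare checksums; True means they match."""
    shutil.copyfile(src, dst)
    return sha256(src) == sha256(dst)

# Demo with temp files standing in for "different disks":
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, 'src.bin')
    dst = os.path.join(d, 'dst.bin')
    with open(src, 'wb') as f:
        f.write(os.urandom(1 << 20))   # 1 MiB of random data
    print(copy_and_verify(src, dst))   # -> True on healthy hardware
```

Run it in a loop for hours alongside a CPU burner; a single False is your smoking gun.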
  9. Michal Soltys

    RAID 5 with SATA vs. SAS

    Regarding earlier subjects - a large chunk size (in RAID 5 and similar setups) may improve random read I/O, as there is less chance that some (small) piece of data will have to be read from several disks. Random writes scattered all over the disks will still be plagued with read-modify-write. It also has drawbacks - e.g. a more demanding R-M-W with not-so-clever firmware/code that can only operate on full chunks. While playing with chunk size, a lot may depend on the firmware's "cleverness". Linux has a particularly impressive set of options for RAID 10 setups (including 'near', 'far' and 'offset' layouts).

    Be it RAID 5, 6 or 10 - current multicore CPUs are so powerful that parity is not a bottleneck by any means. Check http://linux.yyz.us/why-software-raid.html (also note that the stripe cache size can be easily adjusted on linux as well).

    NTFS is pretty inflexible in its configuration (unless there are some hidden, not-easy-to-tune options for that, similar to e.g. XFS and ext2+, which are stride and stripe (chunk) aware). For what it's worth, make sure that your NTFS partition starts at the RAID's stripe boundary and that the chunk is a power-of-2 multiple of the NTFS cluster size. As for the optimal cluster size or other NTFS-related stuff, and how M$ systems approach it - I don't really know.

    On a related subject, if you're using VelociRaptors and expect the box to have any longer uptime, you should be aware of: http://forums.storagereview.net/index.php?...st&p=256912 ... or you might face potential problems ~1.6 months from the server's boot time.
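The alignment rules above can be sanity-checked with a few lines of arithmetic; this is a sketch with made-up example values (64 KiB chunk, 3 data disks, 4 KiB NTFS clusters), not a tool:

```python
def aligned(part_start_bytes, chunk_bytes, n_data_disks, cluster_bytes):
    """Check that a partition starts on a full-stripe boundary and that
    the chunk is a power-of-2 multiple of the filesystem cluster size."""
    stripe = chunk_bytes * n_data_disks               # full stripe width
    starts_on_stripe = part_start_bytes % stripe == 0
    ratio = chunk_bytes // cluster_bytes
    pow2_multiple = (chunk_bytes % cluster_bytes == 0
                     and ratio > 0
                     and ratio & (ratio - 1) == 0)    # power-of-2 test
    return starts_on_stripe and pow2_multiple

# 4-disk RAID 5 (3 data disks), 64 KiB chunks, 4 KiB clusters,
# partition starting at 192 MiB:
print(aligned(192 * 1024 * 1024, 65536, 3, 4096))  # -> True
# The classic 63-sector (32256-byte) offset is not stripe-aligned:
print(aligned(63 * 512, 65536, 3, 4096))           # -> False
```

Plug in your own chunk size and disk count; if this prints False, every other write will straddle a stripe.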
  10. I'd try any modern linux livecd with:

    - GNU parted
    - ntfs-3g
    - an in-kernel or module driver supporting your RAID (if it's a hardware one), or device-mapper + dmraid to assemble the RAID (if it's some BIOS-assisted software RAID with a more or less proprietary format)

    Then just recreate the deleted partition exactly as it was before - the same GUIDs, the same place. If raw backing-up of that 2.7TB is an option - do so, even if it feels far too paranoid. Recreating the partition won't touch the data (check the parted docs to be 100% sure), and hopefully Windows didn't do anything stupid during deletion either (like erasing the first sector or something). Then just try an RO mount with ntfs-3g and hope for the best. OTOH - that TestDisk thingy looks like it tries to do roughly the same...
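One quick sanity check before recreating the partition - a hypothetical helper, not part of parted or TestDisk - is to read the partition's old first sector straight from the raw device and see whether it was zeroed during deletion (the offset you'd use is the partition's old start in bytes):

```python
import os
import tempfile

def sector_is_blank(device_path, offset_bytes, sector_size=512):
    """True if the sector at offset_bytes reads back as all zeroes,
    i.e. possibly wiped; False if it still holds data."""
    with open(device_path, 'rb') as dev:
        dev.seek(offset_bytes)
        return dev.read(sector_size) == b'\x00' * sector_size

# Demo on an image file standing in for /dev/sdX (offsets are made up):
with tempfile.NamedTemporaryFile(delete=False) as img:
    img.write(b'\x00' * 512)                 # sector 0: blank
    img.write(b'NTFS    ' + b'\x01' * 504)   # sector 1: boot-sector-ish data
print(sector_is_blank(img.name, 0))     # -> True
print(sector_is_blank(img.name, 512))   # -> False
os.unlink(img.name)
```

If the old boot sector is blank, ntfs-3g won't mount it directly, but the backup boot sector at the end of the partition may still save you.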
  11. Speaking of which - weren't there two (or three, depending on perspective) separate firmware issues with Seagate recently? http://techreport.com/discussions.x/15954 (timeouts) http://techreport.com/discussions.x/16232 (bricking) http://stx.lithium.com/stx/board/message?b...ding&page=1 (bricking after the fw update meant to correct the bricking) - after quick googling.
  12. Nice. Enterprise drives, my ass... this should be posted on the front page of SR. Although at least it doesn't turn your disk into a brick (like the recent Seagate firmware problems do). Justin - you've had problems with that (particular thing) since the middle of last year, IIRC? (recalling lengthy threads from related linux mailing lists).
  13. Michal Soltys

    WD Caviar Black Jumpers

    http://kerneltrap.org/mailarchive/linux-ra...ead#mid-4474784 Basically, what Justin mentioned: it's for staggered spinup. If you have hardware+BIOS+OS that support it, it might be beneficial with plenty of drives and a not-too-powerful PSU (basically, the BIOS will detect the drives in non-spinning standby mode, and e.g. linux will spin them up one by one during kernel initialization). In a typical home setup it's not needed at all.
  14. Michal Soltys

    Software Raid VS Hardware Raid

    If I were you, I'd just take some lightweight linux distro and set everything up to my liking. It's not like you need plenty of things either way. Software RAID under linux is a pretty stellar feature, and its CPU cost, especially on current hardware, is negligible. Put XFS (or even a well-tuned ext3) and lvm2 on top of it, and you have a very flexible solution, open for expansion in the future.
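To illustrate why the CPU cost is negligible: RAID 5 parity is just a byte-wise XOR over the data chunks, and recovering a lost chunk is the same XOR again. A toy sketch (chunk contents are arbitrary, nothing here comes from md itself):

```python
def xor_parity(chunks):
    """Compute RAID-5-style parity: byte-wise XOR of equal-sized chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

# Three data "disks" plus one parity chunk:
data = [b'AAAA', b'BBBB', b'CCCC']
parity = xor_parity(data)

# Lose disk 1; rebuild its chunk from the survivors plus parity:
rebuilt = xor_parity([data[0], data[2], parity])
print(rebuilt == data[1])  # -> True
```

The kernel does this with SSE/vector instructions over whole stripes, which is why a modern CPU can push parity at several GB/s.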
  15. Michal Soltys

    Hot plug on the ICH9R

  15. Just to clear things up - hardware-wise, hotplug has been present since at least the ICH7R, and IIRC since the ICH6R; I'd have to check the docs. [Officially] driver-wise - well, you're at the mercy of M$ and/or intel. It will still work in XP, and very likely 2k, as the driver should be able to cope with inserting/removing a device, regardless of whether you have the remove-device icon in the taskbar or not.

    What is out of your control - caches, especially the drive's internal cache (AFAIK). So before you pull out a drive, uninstall it in Device Manager / Disk drives (this is pretty much what the remove-device process in the tray would do). If you're paranoid, you can grab a few tools - Sysinternals' sync, or ports of sdparm/hdparm:

    sync
    sdparm -C sync <drive>
    sdparm -C stop <drive>

    and call them before uninstalling the drive, or before hotplugging without doing it.

    STILL, if you don't have tons of disks you want to hotswap at the same time, IMHO, why not just use the JMicron controller? It works just fine, and it's on PCI Express, so it's not like your performance would be impaired.
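On the cache point: done from code rather than the shell, the rough equivalent of the sync step is an fsync on your open files plus a global sync. A sketch (the helper name is mine), with the caveat that this flushes OS caches only - the drive's internal write cache still needs something like sdparm -C sync:

```python
import os
import tempfile

def write_durably(path, payload):
    """Write payload and push it out of the OS caches before returning."""
    with open(path, 'wb') as f:
        f.write(payload)
        f.flush()             # flush Python's userspace buffer
        os.fsync(f.fileno())  # ask the kernel to push it to the device
    if hasattr(os, 'sync'):
        os.sync()             # global flush, like the `sync` command

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, 'important.bin')
    write_durably(p, b'do not lose me')
    with open(p, 'rb') as f:
        print(f.read())  # -> b'do not lose me'
```

Even with this, yanking a drive whose volatile cache hasn't been flushed and spun down can still lose the last few MB - hence the sdparm stop step above.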