Ralf

Member · Content Count: 147 · Community Reputation: 0 (Neutral)
  1. The 2 TB limit is an issue of the operating system (WinXP 32-bit). More info at http://www.microsoft.com/whdc/device/storage/LUN_SP1.mspx
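
     As a rough back-of-envelope (my own illustration, not from the linked page): the 2 TB figure is what 32-bit sector addressing with 512-byte sectors works out to.

         # Where the 2 TB ceiling comes from: 2^32 addressable sectors of 512 bytes.
         sectors = 2**32          # maximum LBA count with 32-bit sector addressing
         sector_size = 512        # bytes per sector on drives of that era
         limit = sectors * sector_size
         print(f"{limit / 2**40:.0f} TiB = {limit / 10**12:.2f} TB")   # -> 2 TiB = 2.20 TB
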
  2. The OEM spec shows that track density for the 1 TB is 145 KTPI vs. 135 KTPI for the 750 GB.
  3. Two thumbs up for whiic: http://physics.nist.gov/cuu/Units/binary.html
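
     A quick worked example of why the binary prefixes matter (my own numbers, nothing from the NIST page): a drive sold as "1 TB" holds 10^12 bytes, which the OS reports as roughly 931 GiB.

         # Decimal (SI) vs. binary (IEC) capacity units for a marketed "1 TB" drive.
         drive_bytes = 10**12              # 1 TB as printed on the box
         print(f"{drive_bytes / 2**40:.3f} TiB = {drive_bytes / 2**30:.1f} GiB")
         # -> 0.909 TiB = 931.3 GiB
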
  4. Write performance typically takes a huge hit in RAID 5 with many drives: if you write random sectors, the controller has to pull in the whole stripe to calculate the new parity. Large high-performance RAID 5 is thus often actually a RAID 5+0. With WinXP Pro you may just use its built-in software striping to bundle a bunch of hardware RAID 5 arrays / HBAs.
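
     A minimal sketch of that parity update, assuming the usual per-block XOR parity (controllers differ in whether they re-read the whole stripe or just the old data and old parity):

         # RAID 5 small-write overhead: new parity = old parity XOR old data XOR new data.
         # One random write thus costs 2 reads (old data, old parity) plus 2 writes.
         def update_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
             return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

         print(update_parity(b"\x11" * 4, b"\x22" * 4, b"\x33" * 4).hex())  # -> 00000000
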
  5. Simple marketdroid math: PCIe is full duplex (i.e. there are separate wires/traces for read and write), so the "total bandwidth" is the sum of read and write bandwidth. On an x1 that's 250 + 250 = 500; on an x2 you get twice that: 500 + 500 = 1000. The HBA is x4; AFAIK the PCIe spec does not require it to actually use both lanes when inserted into a (physical) x4 slot where only 2 lanes are connected.
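
     The same arithmetic as a sketch, assuming PCIe 1.x at roughly 250 MB/s per lane per direction (the figure those marketing numbers are based on):

         # "Total" datasheet bandwidth = read + write summed over all lanes.
         PER_LANE_MB_S = 250   # one direction, PCIe 1.x
         def marketing_bandwidth(lanes: int) -> int:
             return lanes * PER_LANE_MB_S * 2

         for lanes in (1, 2, 4):
             print(f"x{lanes}: {lanes * PER_LANE_MB_S} MB/s each way, "
                   f"{marketing_bandwidth(lanes)} MB/s 'total'")
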
  6. Not quite. LARGEADDRESSAWARE is a flag set by the linker, not by the compiler / code generator. The program loader will honor that flag if it detects the proper prerequisites (e.g. the /3GB switch). In any case it only affects the maximum size of a program's virtual address space and is not directly related to the amount of physical memory nor to PAE. http://forums.gaspowered.com/viewtopic.php...asc&start=0
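
     If you want to see whether a given .exe has the flag, it lives in the COFF Characteristics field of the PE header (dumpbin /headers shows the same information). A sketch that reads it directly; the file path is whatever executable you point it at:

         # Check the IMAGE_FILE_LARGE_ADDRESS_AWARE bit (0x0020) in a PE header.
         import struct, sys

         def is_large_address_aware(path: str) -> bool:
             with open(path, "rb") as f:
                 data = f.read(4096)                              # headers sit at the front
             pe_offset = struct.unpack_from("<I", data, 0x3C)[0]  # e_lfanew
             assert data[pe_offset:pe_offset + 4] == b"PE\x00\x00"
             characteristics = struct.unpack_from("<H", data, pe_offset + 22)[0]
             return bool(characteristics & 0x0020)

         print(is_large_address_aware(sys.argv[1]))
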
  7. @classical: you may get up to 3 GB of virtual address space per process on 32-bit WinXP if you set the /3GB switch in boot.ini and set the LARGEADDRESSAWARE flag in your .exe (via project properties or editbin.exe). Caveat: some drivers (e.g. Matrox) don't like /3GB at all. http://blogs.msdn.com/oldnewthing/archive/.../12/213468.aspx
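
     To verify what a process actually got, you can ask Windows for the size of its user-mode address space; a Windows-only sketch (roughly 2 GB without switch + flag, roughly 3 GB with both):

         # Query user-mode virtual address space via GlobalMemoryStatusEx (Windows only).
         import ctypes
         from ctypes import wintypes

         class MEMORYSTATUSEX(ctypes.Structure):
             _fields_ = [("dwLength", wintypes.DWORD),
                         ("dwMemoryLoad", wintypes.DWORD),
                         ("ullTotalPhys", ctypes.c_uint64),
                         ("ullAvailPhys", ctypes.c_uint64),
                         ("ullTotalPageFile", ctypes.c_uint64),
                         ("ullAvailPageFile", ctypes.c_uint64),
                         ("ullTotalVirtual", ctypes.c_uint64),
                         ("ullAvailVirtual", ctypes.c_uint64),
                         ("ullAvailExtendedVirtual", ctypes.c_uint64)]

         status = MEMORYSTATUSEX()
         status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
         ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
         print(f"user-mode VA space: {status.ullTotalVirtual / 2**30:.2f} GiB")
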
  8. RAID helps with uptime, but it is no substitute for a backup. I'd use one or two external 500 GB HDs for backup and spend the rest of the budget on a RAID 0. Anecdotal evidence: out of several hundred disks installed in many RAID systems, only two drives failed during a couple of months, and both failed drives were in the same RAID 5 array.
  9. Ralf

    Problem with file creation

     For SCSI drives there's always Device Manager / Disk drives / <your disk> / Properties / Policies. Depending on the model/make of your SCSI HBA and RAID array, there are probably additional layers of cache management in their BIOS and/or config tools. A write cache setting of 'disabled' or 'write through' is bad for performance; 'enabled' or 'copy back' should help. The amount of cache RAM and the presence of a BBU (battery backup unit) on the HBA and/or RAID array may also be important factors.
  10. Ralf

    Problem with file creation

     I'd check the write cache settings.
  11. Yep, it works. My game rack (~1 yr old) has been running XP Home on a dual-core AMD 64 X2 from the start.
  12. @lunadesign: That big gap is the MFT zone. On a freshly formatted NTFS volume the first 1/8 (12.5%) is reserved for the 'housekeeping', and actual file data is written past that area (i.e. starting at 1/8 of the volume). As the drive fills up, the file metadata (name / time stamps / access rights) is added to the MFT and the actual file data is written to the 'upper' 87.5% of the volume. When that space is used up, file data is also placed in the MFT zone, almost inevitably creating MFT fragmentation (hence the recommendation to never fill an NTFS volume to more than about 85% of its capacity). Defragmenters tend to have their own view of 'proper' file data placement, so their use may change the layout a bit. Also, converting a FAT volume to NTFS will probably result in a layout similar to your screenshot.
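
     To put numbers on it (the volume size here is my own hypothetical; only the 12.5% / 85% figures come from the post above):

         # Default NTFS MFT zone (1/8 of the volume) and the ~85% fill guideline.
         vol = 500 * 10**9                                   # hypothetical 500 GB volume
         print(f"MFT zone: first {vol / 8 / 10**9:.1f} GB")
         print(f"try to stay below ~{0.85 * vol / 10**9:.0f} GB used")
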
  13. That doesn't sound right. Let's take p = 0.01. Then p^2 = 0.0001. I assume you wanted to define p as the probability of non-failure? Probability of non-failure (i.e. reliability) is 1 - {probability of failure}. Using your example: p = 0.01 = 1% chance for each disk to fail. A RAID1 fails if both of its drives fail. The probability for that to happen is 1% * 1% = 0.01% = 0.0001 = p^2. QED
  14. A RAID1+0 array as a whole gets less reliable when more disks are added. Let p be the probability of failure (data loss) for each individual disk (0 < p < 1). Then a RAID1 (2 disks) has a probability of failure of p^2, i.e. its reliability is 1 - p^2. A RAID1+0 made of N pairs (2*N disks) has a reliability of (1 - p^2)^N. A RAID1 with a failed disk has a reliability of 1 - p (just the same as each individual disk). A RAID1+0 made of N pairs with a failed disk has a reliability of (1 - p) * (1 - p^2)^(N-1).
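
     The same formulas as a runnable sketch (the values of p and N below are just examples):

         # Reliability of RAID 1+0: every mirror pair must keep at least one disk.
         def raid10_reliability(p: float, pairs: int, degraded: bool = False) -> float:
             healthy_pair = 1 - p**2          # a pair fails only if both disks fail
             if degraded:                      # one pair already down to a single disk
                 return (1 - p) * healthy_pair ** (pairs - 1)
             return healthy_pair ** pairs

         p = 0.01
         for n in (1, 2, 4, 8):
             print(f"{n} pairs: {raid10_reliability(p, n):.6f}  "
                   f"degraded: {raid10_reliability(p, n, degraded=True):.6f}")
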
  15. Not really. In RAID5, none of the remaining drives may fail during a rebuild, and each and every one of those drives has to sustain the same rebuild load as that single surviving drive in a RAID1+0.
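
     A rough comparison of the two rebuild windows (my own simplification: p is the chance that any given surviving disk dies during the rebuild, and second failures in unrelated RAID1+0 pairs are ignored):

         # RAID5 needs every remaining disk to survive the rebuild;
         # RAID1+0 only needs the failed disk's mirror partner.
         def raid5_rebuild_survival(total_disks: int, p: float) -> float:
             return (1 - p) ** (total_disks - 1)

         def raid10_rebuild_survival(p: float) -> float:
             return 1 - p

         p = 0.02
         for n in (4, 8, 12):
             print(f"RAID5 with {n} disks: {raid5_rebuild_survival(n, p):.4f}   "
                   f"RAID1+0: {raid10_rebuild_survival(p):.4f}")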