About JcRabbit

  1. I don't use Diskeeper here, so I am not familiar with HyperFast. They do state, however, that "HyperFast SSD optimization technology uses specialized algorithms to reduce or eliminate free space fragmentation, while minimizing the number of file write operations it uses in the process." So it seems to me that what it does is perform free space consolidation. As such, it is essentially half-defragging the SSD, which would increase the number of writes and possibly explain the 100 TB you are seeing. I would turn it off and monitor how long it now takes to write another TB to the drive with it disabled, then do some quick calculations based on the power-on hours to see whether it was inflating the number of bytes written to your SSD or not.
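The quick calculation suggested above can be sketched like this (the figures are just the ones quoted in this thread, and the function name is mine, purely for illustration):

```python
# Back-of-envelope write-rate check. Inputs would come from SMART data
# reported by a tool such as smartctl: total host writes and power-on
# hours (attribute numbering and units vary by vendor).

def tb_per_day(total_tb_written: float, power_on_hours: float) -> float:
    """Average write rate over the drive's powered-on lifetime."""
    return total_tb_written / power_on_hours * 24

# 100 TB over 9,000 power-on hours, as discussed in this thread:
print(f"{tb_per_day(100, 9000):.2f} TB/day")  # roughly 0.27 TB/day
```

Repeating the calculation over a period with HyperFast disabled and comparing the two rates would show whether it was responsible for the extra writes.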
  2. Seems like an awful lot to me too. For comparison: your drive has 9,000 power-on hours. My drives have 19,000 power-on hours each, with 5 TB written to each drive in all this time. Because this is a three-drive RAID 0 array, you can even multiply 5 TB x 3 drives to get a total of 15 TB written in those 19,000 power-on hours. Are you sure you are not running a defragger on your SSD?
  3. Are you sure you're reading the correct field? How old is your drive? How many power-on hours?
  4. 4.70-5.62 TB written to each of my three 80 GB Intel X25-M G2 drives in RAID 0 now, with nearly 19,000 power-on hours and a media wear-out indicator of only 97%. Each of the three drives has a re-allocated sector count of 1. Still going strong! :-) At the end of April, or beginning of May, I should get a new system anyway - just waiting for Ivy Bridge and the dual-GPU Radeon 7990. When that time comes, I will opt for a couple of 240 GB Intel *or* Samsung SSDs in RAID 0. Leaning towards the Samsungs at this point. Wasn't a new version of the Intel RST drivers that supported TRIM on RAID 0 in the works? I read something about it some time ago and then not a peep.
  5. Err, I voted 1-3 TB, but that is for each of the three 80 GB Intel G2 SSDs in my RAID 0 array. Perhaps I should have voted 8-11 TB instead? One of the drives has a single re-allocated sector. The media wear-out indicator is 98% for all drives, with roughly 10,768 power-on hours (my system is on 24/7).
  6. Another reason could be that everything is aligned properly (see HERE). I take no credit for this, however: that goes to Windows 7 (for the 100 MB hidden boot partition) and possibly the guy who set up the RAID on this system and selected a stripe size of 128 KB.
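As a sketch of what "aligned properly" means here (the sector numbers are the usual Windows 7 and Windows XP partition-start defaults, not values read from this particular system):

```python
SECTOR = 512  # bytes per logical sector

def is_aligned(start_sector: int, boundary_bytes: int) -> bool:
    """True if a partition starting at start_sector lands on a boundary."""
    return (start_sector * SECTOR) % boundary_bytes == 0

# Windows 7 starts the first partition at sector 2048 (1 MiB), which is
# aligned to both a 4 KiB flash page and a 128 KiB RAID stripe:
print(is_aligned(2048, 4096))        # True
print(is_aligned(2048, 128 * 1024))  # True
# The old XP default of sector 63 is aligned to neither:
print(is_aligned(63, 4096))          # False
```

Misaligned partitions make many small I/Os straddle a flash page or stripe boundary, turning one logical write into two physical ones.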
  7. Most definitely so. Core i7-920 at 4 GHz, BCLK 200, NB frequency 3200 MHz, QPI link 3600 MHz, DRAM 800 MHz (1600 effective). Perhaps I should have mentioned it before.
  8. Well, have a look for yourselves:
  9. I just found something very curious: remember when I said that it was the firmware updates that finally got HD TACH to report 700 MB/s for the 3-drive SSD RAID 0 array? Well, it turns out it wasn't. I turned off volume write-back cache yesterday to troubleshoot something here, and today, when showing the speed of the array to a friend, HD TACH had reverted to the previous 200-250 MB/s. Turning volume write-back cache back on via the Intel Matrix Storage Manager returned the data transfer rate in HD TACH to 700 MB/s.
  10. I was just pointing out that it was Geshel, not me, who said that a queue size of 8 or more would ensure all SSDs were busy at any one time. I'm still learning about these things. Anyway, whatever it was, it was not a queue or block size problem, as the results I got after updating all 3 drives to the latest firmware prove: now both HD Tach and HD Tune show over 700 MB/s in throughput, the latter after resetting the block size back to 64. As for how this was actually affecting real-world performance, I cannot really say, because I'm still prepping the system and wasn't using it that much - but let me tell you that seeing Photoshop CS2 open in 3 seconds flat (I kid you not) is a real eye opener! Or not. Nobody here really knows, apparently. My (uninformed) feeling on this is that it might not be as beneficial as it is for hard drives, sure, but it doesn't hurt either (provided your system is protected by a UPS, of course).
  11. True. In the meantime, I have some GREAT news. I checked the firmware on the 3 drives and two of them had the very first firmware version released (2CV102G9), which meant they didn't even support TRIM. The other drive was from a later batch and had the 2CV102HA firmware - mismatched firmware revisions on a RAID array are not a good thing, I suspect. I updated the firmware on all 3 drives to the 2CV102HD revision (the latest) and now HD TACH shows a 3,413.4 MB/s burst speed (previously it was 400-something) and an average read speed of 704.8 MB/s!!! Whoohoo! Problem solved, I guess.
  12. Read back, it wasn't me who said that - I'm the RAID 0 newbie, remember? Anyway, I didn't specifically choose 4; that's the default queue size for the ATTO benchmark. As for write-back, that is actually a feature of the *Intel* MSM drivers - I suppose they know what they are doing. I'll try running some differential benchmarks with it enabled and disabled as soon as I have the time. What apparently really makes a difference, from what I have been reading, is using the Intel RST driver instead of the Intel MSM driver.
  13. For the curious among you, here are the benchmark results of my 3x80 GB Intel SSDs in RAID 0:
  14. Exactly. Which is why I find the low results in HD TACH and HD Tune with small block sizes strange. Do both programs use a queue depth of one? If so, why have I seen pictures of HD TACH on the web displaying fantastic results for SSDs in RAID 0?
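A toy model of the queue-depth effect being debated here (purely illustrative and deliberately simplified - real benchmark results also depend on how the block size compares to the stripe size):

```python
def stripe_throughput(per_drive_mb_s: float, drives: int, queue_depth: int) -> float:
    """Simplified model: with small blocks, each request lands on one
    member drive, so a stripe only reaches full speed when enough
    requests are in flight to keep every drive busy at once."""
    busy_drives = min(queue_depth, drives)
    return per_drive_mb_s * busy_drives

# With ~235 MB/s per drive, a 3-drive stripe at queue depth 1 looks
# like a single drive; at depth 3+ it approaches the full ~700 MB/s:
print(stripe_throughput(235.0, 3, 1))  # 235.0
print(stripe_throughput(235.0, 3, 8))  # 705.0
```

This is consistent with the earlier remark in the thread that a queue size of 8 or more ensures all SSDs are busy at any one time, and with why a queue-depth-1 benchmark would understate a striped array at small block sizes.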
  15. I suppose the short answer is 'because I can' or even 'why not?' Seriously, it was time to switch my main development system to Win7 64-bit, so I might as well upgrade the hardware too - and make it a bit 'future proof' while I was at it (i.e., get something good enough to last me a few years). I'm still the proud owner of a 300 GB VelociRaptor that is in my old system, but the obvious storage devices to move to at this time are SSDs. A single 160 GB SSD ($549 at Newegg) did not have enough capacity to be my primary drive, so I figured I might as well go for three 80 GB SSDs in RAID 0, for a total of 240 GB of storage space at a cost of $867 (3 x $289). It was either that or two 160 GB SSDs in RAID 0, which would cost me $1098. I opted for the setup that costs less, provides the storage space I need without being overkill, and still manages to be faster than the alternative by 1/3.
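The cost comparison in that last post, as a quick sketch (the prices are the Newegg figures quoted above; the dictionary layout is mine):

```python
# (number of drives, price per drive in $, capacity per drive in GB)
options = {
    "3 x 80 GB in RAID 0": (3, 289, 80),
    "2 x 160 GB in RAID 0": (2, 549, 160),
}
for name, (drives, price, size) in options.items():
    total, capacity = drives * price, drives * size
    print(f"{name}: {capacity} GB for ${total} (${total / capacity:.2f}/GB)")
# 3 x 80 GB in RAID 0: 240 GB for $867 ($3.61/GB)
# 2 x 160 GB in RAID 0: 320 GB for $1098 ($3.43/GB)
```

The two-drive option is actually slightly cheaper per GB (and larger), but the three-drive stripe has the lower total price and spreads I/O across one more drive.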