Everything posted by h4lf

  1. Performance looks great; it'll be interesting to see the comparison with the M5P in the enterprise suite. Is Link_A_Media responsible for the Seagate Pulsar controllers?
  2. Good improvement overall. Maybe it's now starting to show some promise from the "special" (new?) Marvell controller. I can't understand why they released the drive with the older/poorer firmware in the first place, though... it doesn't make any sense.
  3. Excellent review, great to see a full gamut of drives and controllers. Another typo: you write the Plextor M3P as having a capacity of 256MB instead of GB. Suggestion: I think you should mention that the Octane 4 comes with some DDR3 chips for caching... what looks like 512MB to me? I presume this helps significantly with its strong write performance. All said and done, I think this looks like a solid drive; but I can't help but feel the review really highlights how strong the Plextor M3P is for every workload!
  4. Exciting, the previous-gen controller looked really solid. If they can improve steady-state, incompressible performance, it sounds excellent. Eagerly awaiting the Crucial m5(?), Plextor M4(?) et al.
  5. h4lf

    SanDisk Extreme SSD Review Discussion

    Thanks for the review. Is a SanDisk X100 incoming too?

    To be a bit more critical, though, I'm really not sure the new vertical bar graphs are a step forward. I think horizontal bars (as before, and like http://techreport.com/articles.x/22470/5) would improve clarity. The problem with the old SR format (http://www.storagereview.com/samsung_ssd_830_review_256gb), IMHO, was the labelling. I suggest:

    1) move drive labels next to the bars
    2) use the legend only to distinguish between, say, "read" and "write", and use a pattern (e.g. diagonal lines)
    3) colour-code bars according to controller, but use an outer glow or slightly darker shade to highlight the drive(s) currently being compared/evaluated

    To go into more detail, I think figures like http://www.storagereview.com/images/sandisk_extreme_ssd_240gb_crystaldiskmark_500mbtest_fast.png are silly. Instead it should be a single consolidated graph with horizontal bars. To that end I suggest:

    1) the vertical axis should be labelled with categories: "sequential", "random 4K", "random 512K"
    2) use the legend only to distinguish between, say, "read" and "write", and use a pattern (e.g. diagonal lines)

    For the IOMeter graphs, I think the style is already great, but it could be improved by:

    1) making the legend use columns so it spreads wider rather than taller
    2) using the freed vertical space to increase the height of the graph, making the lines clearer with greater separation

    I assume you have scripts in place to automatically generate all the graphs, so such changes shouldn't be too difficult. Anyway, please don't take my suggestions as an attack; I love what you guys are doing, so everything here is IMHO only.
  6. h4lf

    IO's and Throughput on Linux

    For the first part, benchmarking:

    - To get a baseline on theoretical IOPS/seeks I like to use: seeker or ioping
    - To get a baseline on sequential throughput: hdparm, or simply dd with flags to disable caching
    - For some more comprehensive benchmarking: bonnie++ and fio

    For the second part, observing load:

    - You could simply use top and watch the disk wait time (%wa). You can use something like sar to record it (along with other metrics) every X minutes and review it later.
    - For a more detailed look into what is happening with the IO subsystem, it's hard to go past iotop.
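To make the dd baseline concrete, here's a minimal sketch (my own example, assuming a Linux box; the scratch-file path is just a placeholder):

```shell
# Sequential write baseline: write 256 MiB and force it to the disk with
# fsync before dd reports its timing, so we aren't just measuring the
# page cache. /tmp/dd_testfile is a placeholder path.
dd if=/dev/zero of=/tmp/dd_testfile bs=1M count=256 conv=fsync

# Sequential read baseline: drop the page cache first (needs root) so
# the read actually hits the disk, then read the file back.
# echo 3 > /proc/sys/vm/drop_caches
dd if=/tmp/dd_testfile of=/dev/null bs=1M

# Clean up the scratch file.
rm /tmp/dd_testfile
```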
  7. Looks interesting. I just read the product brochure. Can't wait for the benchmarks to see what effect, if any, this has. Is this drive based on the Marvell controller?
  8. h4lf

    Samsung SSD 830 RAID Review Discussion

    "Slow" garbage collection really means that garbage collection is "delayed" until the drive is idle. Garbage collection is used to clean up the drive in order to maintain it at peak performance. By delaying garbage collection you can increase performance *right now*, but if you leave it too long you get a build-up of garbage and performance turns to rubbish, excuse the pun. The SR steady-state tests illustrate this concept. (Note: for a client/consumer drive this doesn't really matter unless you use your SSD for long sustained periods.)
  9. I think the "point" is that 1) this is their own new controller design, and 2) the performance claims are for incompressible (worst-case) data, albeit still using client workloads.
  10. h4lf

    Samsung SSD 830 RAID Review Discussion

    Would love to see steady state results for Samsung 830 single-drive and RAID. Would be very interesting as I heard garbage collection is quite slow in this model.
  11. I understand your frustration, but you are making rather large accusations about this site and its editors. In my opinion this has no place here unless you have some kind of "proof"; otherwise it's simply unnecessary defamation. I think they are doing a great job, and responding to both criticism and suggestions from here. That said, I think there are clear speed differences with different workloads, and I do think any sort of long-term testing would be interesting... but I think you'll need to be more specific about what you want to see here. If it's simply endurance or durability, you can write 24/7 to the drives for days -- though I'm not sure exactly how useful that would be with a sample size of only a few drives, besides the point that most manufacturers provide TBW ratings now (I think?). If you are talking about long-term performance testing, the SR Steady State IOMeter tests address this to some extent, though I think SR could improve them by disclosing their methodology in more detail, and I would love to see before-and-after HD Tach plots like the ones for the Samsung 830 on AnandTech.
  12. That would be a wonderful line-up to see thoroughly tested (though I would like to see an Intel 320 included too), particularly with a steady-state comparison. My interest is high performance for low-end servers (i.e. pricing dictates MLC drives), so that would certainly be valuable for me, and others I'm sure.
  13. Thanks for the update, the numbers look good. Very similar to the Plextor M2P; I guess the firmware must be near-identical, like the hardware?
  14. That isn't a typo. This drive uses 34nm NAND while the Plextor M3S uses 25nm. However, the Plextor M2P uses 34nm NAND just like the Corsair drive and if you look at the review, http://www.storagereview.com/plextor_pxm2p_ssd_review_256gb the power consumption is very similar. So I suppose you can chalk the power difference up to the difference in flash process size.
  15. Aww, no Enterprise IOMeter test? Corsair claim they have advanced garbage collection and that these drives are very suitable for RAID, so I would love to see how that transpires in the steady-state benchmarks!
  16. h4lf

    Plextor PX-M3S SSD Review Discussion

    Are you still going to run this drive through your Enterprise IOMeter test suite? I'm very curious to see how it compares against the PX-M2P which fared so well (i.e. what would be better in a light-use server environment).
  17. h4lf

    Plextor PX-M3S SSD Review Discussion

    @Brian, cool, looking forward to the enterprise steady-state results! One thing I was wondering: is there any change to the "official" durability specifications of this drive due to the transition to 25nm? Theoretically this should mean fewer write cycles, and assuming the firmware is the same, would it be safe to assume this drive is less "durable"?
  18. Well... according to the tech specs (on Corsair's website) there is no DRAM cache, which is interesting (as opposed to the 512MB on the Plextor). Looking forward to the benchmark showdown. The press release said these things are designed for advanced garbage collection without TRIM (therefore being very suitable for RAID), which sounds interesting.
  19. Thanks for your reply. I'm still a little green with regards to SSDs and caching. So if I'm reading you correctly:

      1) the normal write path for data is app -> OS -> RAID/controller -> SSD DRAM -> NAND
      2) if you disable caching on the computer/RAID card, then writes are acknowledged as complete to the OS ONLY when the data has been guaranteed to be written to NAND?

      That sounds good so far! But now... I assume (reading from Plextor's own website) that the large chunk of DRAM is used to help optimize writes for both wear-levelling and performance (makes sense, clever stuff). However, I then assume that:

      3) ideally, written data should be buzzing around in the DRAM for quite some time for best performance? and then
      4) how would disabling this behaviour (caching?) affect the benchmarks published here?

      Cheers!
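A rough way to feel the difference in point 2 from the OS side (my own sketch, not anything from the review; assumes Linux, and the file paths are placeholders):

```shell
# Buffered write: dd returns as soon as the data sits in the OS page
# cache, so the reported speed can far exceed what the disk sustains.
dd if=/dev/zero of=/tmp/buffered.bin bs=4k count=1024

# Synchronous write: oflag=sync makes every write wait until the device
# reports completion -- roughly the "caching disabled" acknowledgement
# path described in point 2.
dd if=/dev/zero of=/tmp/synced.bin bs=4k count=1024 oflag=sync
```

Comparing the two reported throughputs gives a sense of how much the caching layers contribute to the numbers.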
  20. OK... does this have any power-loss protection, like a supercapacitor, battery, or other arrangement like the Intel 320? It seems really solid performance-wise, but unless it has power-loss protection I would be concerned (especially considering the 512MB of DRAM onboard). This seems vital for it to be considered an "enterprise" client drive.
  21. Hi guys, any idea on how these drives handle power loss? Is there any super-capacitor like with the Intel 320? With 256MB of DRAM onboard I would think it's highly important.
  22. Wow, how are they getting 15 *petabytes* out of their MLC drives? Compared to "enterprise MLC" like the Micron C400 which is rated at 72TB, this is 2 orders of magnitude better? Is this for real?
  23. Hi folks, I've got two 74GB 16MB-cache Raptor drives (plus a storage drive/s) and I'm not exactly sure what the best drive configuration would be. The two options I've been thinking about are: 1) RAID 0 on both Raptors, used for OS & apps; 2) one Raptor for OS & swap, one Raptor for apps (& also swap?). I'm running Vista 32-bit and am primarily after the 'snappiest' desktop experience, though game performance (level loading) will be a plus. What would you guys recommend? And why? :-)
  24. Thanks mate, but what about swap files? Should I have swap on both drives or only one? Also, should I use different block sizes, i.e. large blocks for the OS and smaller for the apps? Cheers