h4lf

Member
  • Content Count: 24
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About h4lf
  • Rank: Member
  1. Performance looks great; it'll be interesting to see the comparison with the M5P in the enterprise suite. Are Link_A_Media responsible for the Seagate Pulsar controllers?
  2. Good improvement overall. Maybe it's now starting to show some of the promise of the "special" (new?) Marvell controller. I can't understand why they released the drive with the older/poor firmware in the first place, though; it doesn't make any sense.
  3. Excellent review, great to see a full gamut of drives and controllers. Another typo: you write the Plextor M3P as having a capacity of 256MB instead of GB. Suggestion: I think you should mention that the Octane 4 comes with some DDR3 chips for caching, what looks like 512MB to me? I presume this helps significantly with its strong write performance. All said and done, I think this looks like a solid drive, but I can't help but feel the review really highlights how strong the Plextor M3P is for every workload!
  4. Exciting, the previous-gen controller looked really solid. If they can improve steady-state, non-compressible performance, it sounds excellent. Eagerly awaiting the Crucial m5(?), Plextor M4(?) et al.
  5. SanDisk Extreme SSD Review Discussion

    Thanks for the review. Is a SanDisk X100 incoming too?

    To be a bit more critical, however, I'm really not sure the new vertical bar graphs are a step forward. I think horizontal bars (as before), like http://techreport.com/articles.x/22470/5, would improve clarity. The problem with the old SR format (http://www.storagereview.com/samsung_ssd_830_review_256gb) IMHO was the labelling. I suggest:
    1) move drive labels next to the bars;
    2) use the legend only to distinguish between, say, "read" or "write", and use a pattern (e.g. diagonal lines);
    3) colour-code bars according to controller, but use some outer glow or a slightly darker colour shade to highlight the drive(s) currently being compared/evaluated.

    To go into more detail, I think figures like http://www.storagereview.com/images/sandisk_extreme_ssd_240gb_crystaldiskmark_500mbtest_fast.png are silly. Instead it should be a single consolidated graph with horizontal bars. To that end I suggest:
    1) the vertical axis should be labelled with categories: "sequential", "random 4K", "random 512K";
    2) use the legend only to distinguish between, say, "read" or "write", and use a pattern (e.g. diagonal lines).

    For the IOMeter graphs, I think the style is already great, but it could be improved by:
    1) making the legend use columns so it spreads wider rather than taller;
    2) using the freed vertical space to increase the height of the graph, and then making the lines clearer with greater separation.

    I assume you have scripts in place to automatically generate all the graphs, so I would expect such changes should not be too difficult. Anyway, please don't take my suggestions as an attack; I love what you guys are doing, so everything is IMHO only.
  6. IO's and Throughput on Linux

    For the first part, benchmarking:
    - To get a baseline on the theoretical IOPS/seeks I like to use seeker or ioping.
    - To get a baseline on sequential throughput: hdparm, or simply dd with flags to disable caching.
    - For some more comprehensive benchmarking: bonnie++ and fio.

    For the second part, observing load:
    - You could simply use top and watch for disk wait time (%wa). You can use something like sar to record it (along with other metrics) every X mins and review it later.
    - For a more detailed look into what is happening with the IO subsystem, it's hard to go past iotop.
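    As a minimal sketch of the dd-based sequential baseline mentioned above (the scratch-file path and sizes here are illustrative, not from the original post; real runs would target the device under test with a much larger size):

    ```shell
    # Rough sequential-throughput baseline with dd.
    TESTFILE=/tmp/seq_baseline.bin

    # Sequential write: conv=fdatasync forces a flush at the end so the
    # page cache doesn't inflate the reported rate.
    dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

    # Sequential read of the same file (the page cache will still help here;
    # iflag=direct or dropping caches gives a truer number where permitted).
    dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1

    # A comparable random-4K run with fio would look roughly like this
    # (shown commented out, since fio may not be installed):
    #   fio --name=rand4k --filename="$TESTFILE" --rw=randread --bs=4k \
    #       --direct=1 --runtime=30 --time_based --iodepth=32

    # Clean up the scratch file.
    rm -f "$TESTFILE"
    ```

    hdparm -t gives a similar read figure straight from the device, but needs root and a block device rather than a file.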
  7. Looks interesting. I just read the product brochure. Can't wait for the benchmarks to see what effect, if any, this has. Is this drive based on the Marvell controller?
  8. Samsung SSD 830 RAID Review Discussion

    "Slow" garbage collection really means that garbage collection is "delayed" until the drive is idle. Garbage collection cleans up the drive in order to maintain it at peak performance. By delaying garbage collection you can increase performance *right now*, but if you leave it too long you get a build-up of garbage and the performance turns to rubbish, excuse the pun. The SR steady-state tests illustrate this concept. (Note: for a client/consumer drive this doesn't really matter unless you use your SSD for long sustained periods.)
  9. I think the "point" is that 1) this is their own new controller design, and 2) the performance claims are for incompressible (worst-case) data, albeit still using client workloads.
  10. Samsung SSD 830 RAID Review Discussion

    Would love to see steady state results for Samsung 830 single-drive and RAID. Would be very interesting as I heard garbage collection is quite slow in this model.
  11. I understand your frustration, but you are making rather large accusations about this site and its editors. In my opinion this has no place here unless you have some kind of "proof"; otherwise it's simply unnecessary defamation. I think they are doing a great job, and responding to both criticism and suggestions from here. Well, I think there are clear speed differences with different workloads, but I do think any sort of long-term testing would be interesting. I think you'll need to be more specific about what you want to see here, though. If it's simply endurance or durability, you can write 24/7 to the drives for days -- though I'm not sure exactly how useful that would be with a sample size of only a few drives, besides the point that most manufacturers provide TBW ratings now (I think?). If you are talking about long-term performance testing, well, I guess the SR Steady State IOMeter tests address this to some extent, though I think SR could improve this by disclosing their methodology in more detail, and I would love to see before-and-after HD Tach plots like those for the Samsung 830 on AnandTech.
  12. That would be a wonderful line-up to see thoroughly tested (though I would like to see an Intel 320 included too), particularly with a steady-state comparison. My interest is high performance for low-end servers (i.e. pricing dictates MLC drives), so that would certainly be valuable for me, and for others I'm sure.
  13. Thanks for the update, the numbers look good. Very similar to the Plextor M2P; I guess the firmware must be near identical, like the hardware?
  14. That isn't a typo. This drive uses 34nm NAND while the Plextor M3S uses 25nm. However, the Plextor M2P uses 34nm NAND just like the Corsair drive, and if you look at the review (http://www.storagereview.com/plextor_pxm2p_ssd_review_256gb) the power consumption is very similar. So I suppose you can chalk the power difference up to the difference in flash process size.
  15. Aww, no Enterprise IOMeter test? Corsair claim they have advanced garbage collection and that these drives are very suitable for RAID, so I would love to see how that transpires in the steady-state benchmarks!