
OCZ Vector SSD Review Discussion


4 replies to this topic

#1 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,440 posts

Posted 27 November 2012 - 11:58 AM

The OCZ Vector is a new client SSD designed to appeal to mainstream and high-performance enthusiasts. The Vector is built around OCZ's Barefoot 3 controller and firmware, finally giving OCZ a near end-to-end in-house solution, which means improved reliability and support for consumers. This is OCZ though, and if we know anything about the company, it's that they like to ensure their high-end SSDs have top-tier performance. The Vector is no exception, bringing burst sequential reads and writes of 550MB/s and 530MB/s to the table, along with random read and write IOPS of 100,000 and 95,000 respectively. OCZ calls this scale of performance the "fastest sustained computing experience there is," a claim that may be hard to argue with as we dive into this review.

OCZ Vector SSD Review



#2 triodak

triodak

    Member

  • Member
  • 1 posts

Posted 05 December 2012 - 07:02 AM

I have a comment about this graph from the review:

Posted Image
We see two 'blips' here, at 4T/16Q and 8T/16Q. In my personal (and fully subjective) opinion, there is no way this is poor SSD response to such a load, especially since it happens on all the SSDs despite their different internals (NAND, controller, firmware). Those blips are tied to the test platform components (hardware + drivers + testing software); one component, or some interaction between two of them, could be the source of such a strange blip (wrong timing? suboptimal thread overlapping? etc.).

I am quite sure that if this test were reproduced on different hardware, with a different version of the SATA drivers, or with some other change, we would get a more natural flat line in that part of the graph.

I do not know what testing software is used, but I think it is worth trying to run it on a different machine and checking how big the difference is. If it is around 20%, that is nothing unusual, but it is possible that 8T/16Q would finish in around 5 ms rather than 45 ms. Alternatively, try running 4 or 8 copies of the software with the same load pattern (in case it is software related).
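Since the review does not name its testing software, here is a hypothetical miniature of a "T threads × Q queue depth" random-read load, sketched in Python against a scratch file. It uses buffered reads rather than direct I/O, so it only mimics the load pattern for experimentation, not real SSD latency:

```python
# Hypothetical miniature of a "T threads x Q queue depth" random-read
# load, run against a scratch file instead of a real SSD. All numbers
# and structure are illustrative; the review's actual tool is not named.
import os
import random
import tempfile
import threading
import time

def run_load(path, threads=8, queue_depth=16, reads_per_thread=50, bs=4096):
    """Average latency of batches of `queue_depth` random reads,
    issued concurrently from `threads` worker threads."""
    size = os.path.getsize(path)
    latencies = []
    lock = threading.Lock()

    def worker():
        with open(path, "rb") as f:
            for _ in range(reads_per_thread):
                # Approximate queue depth by issuing a batch of reads
                # back-to-back before recording one latency sample.
                start = time.perf_counter()
                for _ in range(queue_depth):
                    f.seek(random.randrange(0, size - bs))
                    f.read(bs)
                with lock:
                    latencies.append(time.perf_counter() - start)

    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return sum(latencies) / len(latencies)

# Usage: create a 4 MB scratch file and run an 8T/16Q-style load.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4 * 1024 * 1024))
avg = run_load(tmp.name, threads=8, queue_depth=16)
print(f"average batch latency: {avg * 1000:.2f} ms")
os.remove(tmp.name)
```

Running several copies of a script like this (or varying the thread count) would give a crude way to see whether latency anomalies follow the software or the drive.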



The other thing is that on many graphs the coloured lines all sit on top of one another, which makes it very hard to see any difference; they read more like a general trend for a good SSD. I think a different Y-axis resolution would allow for better differentiation.

It would also be good to add firmware version information, but that is not critical.

That is all from me :)

#3 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,440 posts

Posted 05 December 2012 - 01:53 PM


Right now all of our current enterprise synthetic tests are performed in a Linux CentOS 6.2 environment through the LSI 9211 HBA, using stock settings and drivers across the board. We actually have a total of four identical servers with this configuration, and those spikes come up on all of them. Our main goal is to show performance in a static environment, which is what many buyers might encounter if they also set up a plain Linux environment. We are currently investigating including Windows performance data as well, using the same testing software, to show driver differences across the two environments, but right now we are still in the early stages of collecting that data. With machine availability being the main hangup (each SSD takes 48 hours per OS to go through that process), it will still be a while before we can introduce that data.
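Since the spikes reproduce on all four identical servers, one quick sanity check when comparing platforms is to flag sweep points whose average latency jumps far above the rest of the sweep. A hypothetical sketch (the latency numbers below are made up for illustration, not the review's measurements):

```python
# Hypothetical helper for flagging latency outliers in a thread/queue
# sweep. The (threads, queue depth) -> latency values are illustrative.

def find_blips(latencies, factor=3.0):
    """Return sweep points whose latency exceeds `factor` times the
    median of the other points -- a crude platform-anomaly check."""
    blips = []
    for key, value in latencies.items():
        others = sorted(v for k, v in latencies.items() if k != key)
        median = others[len(others) // 2]
        if value > factor * median:
            blips.append(key)
    return blips

sweep = {
    (2, 16): 4.8,   # (threads, queue depth): avg latency in ms (made up)
    (4, 16): 45.0,  # suspicious blip
    (8, 16): 44.0,  # suspicious blip
    (16, 16): 5.2,
}
print(find_blips(sweep))  # -> [(4, 16), (8, 16)]
```

Comparing the flagged points across the Linux and (eventual) Windows runs would show whether the anomaly follows the driver stack or the drive.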



#4 gabe75

gabe75

    Member

  • Member
  • 1 posts

Posted 11 December 2012 - 08:55 PM

I would like to see how these drives (not only this one, but all of them) perform in steady state. I know you did this in the Intel SSD 520 Enterprise Review, but I would like to see it in other reviews too. And in the consumer SSD reviews, maybe more Windows-oriented programs like PCMark or others.
Your reviews look great, and I really enjoy your graphs and the simplicity of the results and information. Thanks.

#5 Kevin OBrien

Kevin OBrien

    StorageReview Editor

  • Admin
  • 1,440 posts

Posted 12 December 2012 - 09:25 AM

Doing this testing on all drives is one of the goals we hope to hit in 2013. We are currently transitioning testing platforms and making optimizations in the lab that will greatly streamline the process. It also doesn't hurt to have seven new primary systems and countless others now ;)
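For anyone curious what such a steady-state check could look like, here is a hypothetical sketch loosely modeled on the SNIA-style idea that a drive is "steady" once recent measurement rounds stay within a band around their average. The window size and tolerance here are assumptions, not StorageReview's actual criteria:

```python
# Hypothetical steady-state check for an SSD preconditioning run.
# Window size and tolerance are illustrative assumptions.

def reached_steady_state(iops_per_round, window=5, tolerance=0.20):
    """True if the last `window` rounds all fall within +/- tolerance
    of that window's average IOPS."""
    if len(iops_per_round) < window:
        return False
    recent = iops_per_round[-window:]
    avg = sum(recent) / window
    return all(abs(x - avg) <= tolerance * avg for x in recent)

# Fresh-out-of-box burst followed by settling (illustrative numbers):
history = [95000, 60000, 38000, 31000, 29500, 29000, 28800, 29100]
print(reached_steady_state(history))  # -> True
```

A loop like this around the preconditioning workload is one way to decide automatically when a drive has settled enough to start recording steady-state numbers.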



