OCZ Vertex 2 Pro Preview
Brian

Interesting. All that hashing / encrypting / dedup / compression going on within the drive itself. Good for a single-drive, general-use case. It might not work so well in a RAID where the incoming data gets distributed to more than one drive (that will kill dedup unless the stripe size is much larger than the dedup chunk size).

And it's worse than useless for enterprise workloads like databases or content distribution, where everything is (hopefully) already deduplicated and/or compressed.
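
A toy sketch of that dedup point, with made-up numbers (4 KB dedup chunks, a 64 KB stripe unit, four RAID-0 members) and a per-drive dedup table; none of this reflects the Vertex 2 Pro's actual internals:

```python
# Toy model only: each drive dedups just the chunks it receives, so duplicates
# that land on different RAID-0 members are never matched. Chunk size, stripe
# size, drive count and file size are all made-up numbers.
import os
from hashlib import sha256

CHUNK  = 4 * 1024      # hypothetical in-drive dedup chunk
STRIPE = 64 * 1024     # hypothetical RAID-0 stripe unit
DRIVES = 4

def dedup_ratio(stream):
    """Fraction of incoming chunks a single drive's dedup table eliminates."""
    seen, total, dupes = set(), 0, 0
    for i in range(0, len(stream), CHUNK):
        digest = sha256(stream[i:i + CHUNK]).hexdigest()
        total += 1
        if digest in seen:
            dupes += 1
        seen.add(digest)
    return dupes / total if total else 0.0

# Two identical copies of a "file" that is 17 stripe units long.
payload  = os.urandom(17 * STRIPE)
workload = payload + payload

# Single drive: the second copy is pure duplicates -> ratio ~0.5.
print("single drive:", dedup_ratio(workload))

# RAID 0: stripe units go round-robin to the members. Because the file length
# is not a multiple of DRIVES * STRIPE, copy #2 lands on different drives than
# copy #1 and every per-drive dedup table finds nothing.
members = [bytearray() for _ in range(DRIVES)]
for unit, off in enumerate(range(0, len(workload), STRIPE)):
    members[unit % DRIVES] += workload[off:off + STRIPE]
print("striped:", [round(dedup_ratio(bytes(m)), 2) for m in members])
```

On a single drive the second copy dedups completely; once striped, the duplicate stripe units land on different members and the per-drive tables find nothing.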

I wonder about an SSD that just uses its erase-block size as the LBA size, e.g., 64 KB. Forget all the page-mapping stuff. I don't remember the upper limit on sector size in recent Windows/Linux versions; they'll handle 4K, but I'm not sure about 64K.
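
For what it's worth, on Linux you can at least see what sector sizes the kernel currently reports for a device via sysfs; this shows the current values, not the upper limit, and "sda" is just an example name:

```python
# Read the logical and physical block sizes the Linux block layer exposes.
from pathlib import Path

dev = "sda"
q = Path("/sys/block") / dev / "queue"
logical  = int((q / "logical_block_size").read_text())
physical = int((q / "physical_block_size").read_text())
print(f"{dev}: logical sector = {logical} B, physical sector = {physical} B")
```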


Well, they're not really going for enterprise with this, but the progression is nice to see, especially from someone other than the name brands. Should make for a nice user-upgrade drive or for those building their own machines.


Usually, enterprise databases like Oracle, which run superbly on reasonable SSDs, don't use compression, especially not when huge DML statements occur. And for pure enterprise storage, SSDs are a waste of money.

Edited by sasfan


Well, what I meant was more that (ideally) the DB isn't very compressible, as it's been organized to reduce redundancy. Then again, redundancy often creeps in as specialized indices needed for performance.

Anyway, from my POV: we use a lot of white-box hardware and just write/architect things to make the best use of the inherent properties of those devices. These SSDs would get in the way of that. I would love to have a 640GB SSD-on-PCI card where the controller had absolutely no wear-leveling and just exported its erase blocks as large sectors. But I understand there's no market for that. Hmm, maybe we should just build it :-)

> Interesting. All that hashing / encrypting / dedup / compression going on within the drive itself. Good for a single-drive, general-use case. It might not work so well in a RAID where the incoming data gets distributed to more than one drive (that will kill dedup unless the stripe size is much larger than the dedup chunk size).

> And it's worse than useless for enterprise workloads like databases or content distribution, where everything is (hopefully) already deduplicated and/or compressed.

> I wonder about an SSD that just uses its erase-block size as the LBA size, e.g., 64 KB. Forget all the page-mapping stuff. I don't remember the upper limit on sector size in recent Windows/Linux versions; they'll handle 4K, but I'm not sure about 64K.

RAID may not be that important anymore, at least not when you talk about RAID Level 0. With the transfer rates achieved by those drives, you pretty much saturate any Core 2-based Intel system already, because the FSB cannot keep up due to snooping. In the case of nVidia chipsets it is even worse because they don't do overlapped snooping of busmaster transfers, so you are effectively limited to some 650 to 800 MB/sec, which is the sustained read transfer rate of 2-3 drives, no matter how wide your interface is (PCIe x8).

With the newer systems (X58, P55 or any of the AMD systems) you have more headroom, but if you look at those numbers and add arbitration latencies, you may actually negate the benefits of SSDs unless you are going ultra high-end on the RAID controller (including a large cache).
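
Rough arithmetic for the point above, using an assumed ~260 MB/s sustained read per SSD and ballpark ceilings; the numbers are illustrative, not measurements:

```python
# Back-of-the-envelope check: how many SSDs it takes to hit a given host-side
# ceiling. The per-drive rate and the ceilings are illustrative assumptions.
from math import ceil

PER_DRIVE_MB_S = 260   # assumed sustained sequential read per SSD

ceilings_mb_s = {
    "nVidia chipset (no overlapped snooping)": 700,   # middle of the 650-800 range above
    "Core 2, FSB-limited":                     800,
    "X58/P55-class platform":                  1600,
}

for name, limit in ceilings_mb_s.items():
    print(f"{name}: saturated by ~{ceil(limit / PER_DRIVE_MB_S)} drives")
```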


With 3Gbps SATA, you need 3 or 4 SSDs to saturate the bus like you're talking about. A RAID card capable of that should be under $500.
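
Back-of-the-envelope math behind the "3 or 4 SSDs" figure, assuming 8b/10b line coding and a ~1 GB/s host-side ceiling (that ceiling is my assumption, not a measured number):

```python
# Rough math: 3 Gb/s SATA minus 8b/10b coding is about 300 MB/s per port.
SATA_GBPS       = 3.0
ENCODING        = 8 / 10                       # 8b/10b line coding
HOST_CEILING_MB = 1000                         # assumed host-side limit, MB/s

port_mb_s = SATA_GBPS * 1000 / 8 * ENCODING    # ~300 MB/s per SATA-2 port
print(f"~{port_mb_s:.0f} MB/s per port; "
      f"~{HOST_CEILING_MB / port_mb_s:.1f} drives to fill {HOST_CEILING_MB} MB/s")
```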

RAID is necessary if you need more capacity from a single volume than current SSDs. You could just span the volumes using NTFS or whatever, I suppose.

And, RAID0 isn't RAID ;-)

> With 3Gbps SATA, you need 3 or 4 SSDs to saturate the bus like you're talking about. A RAID card capable of that should be under $500.

Depends on the interface; if you add protocol overhead, you saturate the bus on a 4-lane card already with 2 drives, and even an 8-lane card will not get you much further on the Core 2 platform because of the FSB bottleneck. PCI-X maxes out at around 650 MB/s.
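
For reference, a ballpark of the PCIe gen1 x4 ceiling once 8b/10b coding and an assumed 20% packet/arbitration overhead are taken off; the overhead figure is a guess, not a spec value:

```python
# Ballpark PCIe gen1 x4 ceiling: raw lane rate, minus 8b/10b coding, minus an
# assumed protocol overhead.
LANES         = 4
LANE_GBPS     = 2.5                 # PCIe gen1 per-lane signalling rate
PROTOCOL_LOSS = 0.20                # assumed TLP/DLLP + arbitration overhead

coded_mb_s  = LANES * LANE_GBPS * 1000 / 8 * (8 / 10)   # 1000 MB/s after 8b/10b
usable_mb_s = coded_mb_s * (1 - PROTOCOL_LOSS)
print(f"PCIe gen1 x{LANES}: ~{usable_mb_s:.0f} MB/s usable")
```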

On the current AMD / Intel Core ix platforms, soft-RAID is actually fastest because you use system memory as cache. To get anywhere near that you need pipeline burst SRAM on the cards; SDRAM is getting too slow.

> RAID is necessary if you need more capacity from a single volume than current SSDs. You could just span the volumes using NTFS or whatever, I suppose.

It is not that easy, because you throw off the wear leveling by hitting the drives sequentially instead of across the entire volume.

> And, RAID0 isn't RAID ;-)

Really? I didn't know :P


You can get some pretty amazing numbers with commercially available HBAs that are 6 Gb/s compatible and are coming down the channel. The real problem is not the capability of the bus to handle the load; it is the capability of the RAID card's IOP to have sufficient power to handle it. There isn't a 3 Gb/s RAID card out there that can get over a GB/sec in sequential reads. The LSI 6 Gb/s 9260 and 9211, however, can smoke!!

> You can get some pretty amazing numbers with commercially available HBAs that are 6 Gb/s compatible and are coming down the channel. The real problem is not the capability of the bus to handle the load; it is the capability of the RAID card's IOP to have sufficient power to handle it. There isn't a 3 Gb/s RAID card out there that can get over a GB/sec in sequential reads. The LSI 6 Gb/s 9260 and 9211, however, can smoke!!

Actually, that's not true; with the LSI 3801 and four Vertex drives I easily got over 1 GB/sec in sequential reads. The problem was that, for example, ATTO only returns "0" values once you hit 7-digit numbers. With 2 cards I came out at about 540 MB/s.


That is an HBA, not a RAID card; its IOP is not doing the processing. Are you using soft RAID? That card has no hardware RAID capability, thus no bootable operating system.

If you are using ATTO to bench a hardware RAID card and SSDs, then that is a problem in and of itself. My statement about the IOPs on the previous-gen cards remains correct. That card does not use an IOP; it uses the CPU for processing. You should use IOMeter and Everest with SSDs.

Edited by Computurd

> You can get some pretty amazing numbers with commercially available HBAs that are 6 Gb/s compatible and are coming down the channel. The real problem is not the capability of the bus to handle the load; it is the capability of the RAID card's IOP to have sufficient power to handle it. There isn't a 3 Gb/s RAID card out there that can get over a GB/sec in sequential reads. The LSI 6 Gb/s 9260 and 9211, however, can smoke!!

If you mean GB/sec (gigabytes per second), then you may be right (I don't know, but I've never seen 8 Gbps from a single card). If you mean Gbps (gigabits per second), then there are many SATA-2/SAS RAID cards that will do 1 Gbps sequential. The Adaptec 5x series has done 500 MB/sec (4 Gbps) on random reads (14 SATA drives) for me.
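
The bits-vs-bytes point in one line, using the 500 MB/sec Adaptec figure quoted above:

```python
# Gigabits vs gigabytes: 500 MB/s is 4 Gb/s but only 0.5 GB/s.
mb_per_s = 500   # the Adaptec 5x random-read figure quoted above
print(f"{mb_per_s} MB/s = {mb_per_s * 8 / 1000:.1f} Gb/s, "
      f"but only {mb_per_s / 1000:.1f} GB/s")
```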

> That is an HBA, not a RAID card; its IOP is not doing the processing. Are you using soft RAID? That card has no hardware RAID capability, thus no bootable operating system.

> If you are using ATTO to bench a hardware RAID card and SSDs, then that is a problem in and of itself. My statement about the IOPs on the previous-gen cards remains correct. That card does not use an IOP; it uses the CPU for processing. You should use IOMeter and Everest with SSDs.

Actually, I flashed it with the RAID firmware, and yes, it can be used as bootable OS RAID in that configuration. I have also used IOMeter, Sandra, Everest, CrystalMark, WB and whatever other benchmarks are out there. I am not a fan of IOMeter because it doesn't really correlate with real-world performance, one of the reasons why Intel has stopped supporting it. PCMark Vantage is an excellent tool, and there is iPeak for selective trace playback, as well as SCSI Toolbox.


I agree with Vantage being a great benchmark... I am currently number ten in the HALL OF FAME.

Well, that is cool that you have had great results with that in a bootable array configuration. What score in Vantage did you get? I am curious as to its real-world performance. Also, any bench runs you could show us?

Edited by Computurd

> I agree with Vantage being a great benchmark... I am currently number ten in the HALL OF FAME.

> Well, that is cool that you have had great results with that in a bootable array configuration. What score in Vantage did you get? I am curious as to its real-world performance. Also, any bench runs you could show us?

Unfortunately, most of the stuff is under non-disclosure. What I can tell you, though, is that in PCMark Vantage the Vertex scores lower in RAID (Level 0) than as a single drive because you get additional latencies from the arbitration. That said, if you have a card with a large on-board memory buffer, you can get better performance (some of the HighPoint controllers or higher-end LSI versions). Typical scores with the Vertex were around 23k (single drive), 19-20k dual and 16-18k with four drives (depending on the age and firmware).

What is also interesting is that if you configure the cards to run as "RAID" cards and make them bootable OS, you lose the "Windows soft cache" and performance takes a "minor" hit regardless of which benchmark you are running. I also ran a few comparisons with the Summit drives; they are slower in single-drive configurations but marginally faster in multiple-drive configurations. I am not sure where this comes from, but it seems that the actual burst transfers of the Vertex have a small onset latency (which shows up as a screwball result in HDTach, for example, where the burst speed is lower than the sustained speed :P )

iPeak... the problem is that I can't say where I got it from :P and it is not publicly available. I know that some of the original SR staff had a copy; that would probably be the easier way to approach this.

I am also working with Futuremark (Oliver has been a good personal friend for over 10 years) and SiSoft (Adrian is pretty receptive to suggestions) to get benchmarks more consistent and to get rid of some of the old stuff that pops up in articles over and over again, where the paradigms are so off that the results are not only useless but plain and simply wrong. This is, BTW, one of the things that I have high hopes for with the new SR. I hope I answered most of your questions, but that's about all I can disclose.

BTW, congrats on your Hall of Fame placement. Feels good, doesn't it? :-) I was in the graphics top 10 some 8 years ago with a hand-modded el-cheapo FIC RADEON.

Edited by unregistered

> Usually, enterprise databases like Oracle, which run superbly on reasonable SSDs, don't use compression, especially not when huge DML statements occur. And for pure enterprise storage, SSDs are a waste of money.

SSD storage can be quite useful in the enterprise space. The consumer SSD manufacturers are not the only players in town. There are a number of ways in which SSDs can augment or replace traditional mechanical storage to realize huge performance gains that bring the ROI time down to acceptable levels.

For example, if you configure one or two large SSDs as hot spares, you can dramatically decrease the time required for a large array rebuild, substantially shrinking the "window of vulnerability" (MTTR) for a degraded array and thereby increasing your MTTDL.
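
A purely illustrative comparison of the rebuild window, with made-up capacities and rebuild rates, just to show how a faster-rebuilding spare shrinks the MTTR:

```python
# Illustration only: made-up numbers showing how a faster-rebuilding spare
# shrinks the window of vulnerability (MTTR) for a degraded array.
def rebuild_hours(capacity_gb, rebuild_mb_s):
    return capacity_gb * 1024 / rebuild_mb_s / 3600

for name, rate_mb_s in [("HDD hot spare", 60), ("SSD hot spare", 250)]:
    hours = rebuild_hours(1000, rate_mb_s)       # rebuilding a 1 TB member
    print(f"{name}: ~{hours:.1f} h exposed to a second failure")
```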

If you have a large customer-facing application, you can place the "hot" tables on an SSD (or an array of them) to increase performance, or configure temp space to use it.

