mattsimis

Cheapest RAID controller that can do 6-7GB/s sustained in RAID0?


> 2478 / 5 = 495.6 MB/second average
>
> That's REAL GOOD!
>
> CONGRATULATIONS!

Thanks, hopefully the last SSD for the set arrives in the post and I'll have 6x SSDs rocking RAID0 (and some sort of nightly backup to HDD, of course!).


How many SFF-8087 fan-out ports do you have now? You must have at least 2 ports, fanning out to 8 SSDs, because you plan to have 6 RAID-0 members.

If your RAID controller is running in PCIe 3.0 mode, the x8 edge connector gives you 8 lanes @ 8G / 8.125 = 7.88 GB/second MAX HEADROOM.

Assuming your RAID-0 array has 8 SSD members, and also assuming the above average stays constant, then:

8 SSDs @ 495.6 MB/second = 3,964.8 MB/second max predicted speed (perfect scaling)

Thus, your original goal -- 6-7 GB/s -- is far above the ceiling imposed by the max speed of each RAID-0 member. At that average speed per SSD, you would need 13 of those to reach 6,000 MB/second:

6,000 / 495.6 = 12.1 SSDs

13 SATA SSDs in turn require a controller with 4 fan-out ports, because 3 fan-out ports are not quite enough.

It would be useful, at this point, to predict the speed of a RAID-0 array with SAS members transmitting at 12 Gb/sec instead of SATA members transmitting at 6 Gb/sec. I seem to recall that SAS SSDs hover around 750 MB/second READ speed, but I would need to surf the Internet to locate more accurate measurements. But the extra speed of SAS SSDs comes with a much higher price.

Using a parametric approach: 6,000 / 750 = 8 SSDs (or 2 fan-out cables)

Thus, if you can find SAS SSDs with average READs of 750 MB/second, you might reach your goal of 6 GB/sec with only 8 x SAS SSDs in RAID-0 (again, assuming perfect scaling).

So, compare 8 SAS SSDs at a much higher price with 13 SATA SSDs at a much lower price.
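(If anyone wants to redo that arithmetic with their own numbers, here is a small Python sketch of the same calculation. The 495.6 MB/second SATA average and the rough 750 MB/second SAS guess are just the figures assumed above, and perfect RAID-0 scaling is assumed throughout.)

import math

def members_needed(target_mb_s, per_ssd_mb_s):
    # RAID-0 members required, assuming perfect scaling
    return math.ceil(target_mb_s / per_ssd_mb_s)

def fanout_ports_needed(members, drives_per_port=4):
    # each SFF-8087 fan-out cable serves 4 drives
    return math.ceil(members / drives_per_port)

for label, per_ssd in (("SATA SSD @ 495.6 MB/s", 495.6),
                       ("SAS SSD @ 750 MB/s", 750.0)):
    n = members_needed(6000, per_ssd)
    print(label, "->", n, "members,", fanout_ports_needed(n), "fan-out ports")

# SATA SSD @ 495.6 MB/s -> 13 members, 4 fan-out ports
# SAS SSD @ 750 MB/s -> 8 members, 2 fan-out ports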


Not accounting for overhead or changes to the bit encoding between PCI Express and the chipset:

PCIe 1.x is 250MB/s per lane (8x = 2 GB/s)

PCIe 2.0 is 500MB/s per lane (8x = 4 GB/s)

PCIe 3.0 is 1GB/s per lane (8x = 8 GB/s)

SATA2 .. up to 300 MB/s

SATA3 .. up to 600 MB/s

Each 6Gb SAS port supports 4 SATA3 devices (assuming max speeds):

6 ports, 24 devices, 14.4 GB/s

4 ports, 16 devices, 9.6 GB/s

2 ports, 8 devices, 4.8 GB/s

1 port, 4 devices, 2.4 GB/s

DMI 1.0 (Intel software RAID) is 1.16 GB/s

DMI 2.0 (Intel software RAID) is 2 GB/s

DMI 3.0 (Intel software RAID) is 3.93 GB/s (only supported on Skylake boards)
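(The same back-of-the-envelope table as a Python sketch, so the port counts and the DMI ceilings can be recomputed under different assumptions; it uses the same max-speed figures as above -- 4 devices per SAS fan-out port and a flat 600 MB/s per SATA3 device.)

SATA3_MB_S = 600          # per-device max, ignoring overhead
DEVICES_PER_PORT = 4      # one SAS fan-out port serves 4 SATA3 devices
DMI_GB_S = {"DMI 1.0": 1.16, "DMI 2.0": 2.0, "DMI 3.0 (Skylake)": 3.93}

for ports in (1, 2, 4, 6):
    devices = ports * DEVICES_PER_PORT
    total_gb_s = devices * SATA3_MB_S / 1000
    print(f"{ports} port(s): {devices} devices, {total_gb_s:.1f} GB/s")
# 1 port(s): 4 devices, 2.4 GB/s ... 6 port(s): 24 devices, 14.4 GB/s
# any of these dwarfs the DMI_GB_S ceilings that onboard (software) RAID sits behind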

------

Software HBA RAID cards are usually cheaper than hardware RAID cards. Are there any specific hardware RAID cards that do not cause slow PC reboots? I understand that the delay is typically because of checking all the hard drives in the array.

Edit: What's your CPU utilization like on that 9341-8i RAID card?

Edited by LikesFastBicycles


> PCIe 1.x is 250MB/s per lane (8x = 2 GB/s)

The latter is exactly correct, because each lane oscillates at 2.5 GHz, using the 8b/10b "legacy frame":

2.5G / 10 bits per byte = 250 MB/second

> PCIe 2.0 is 500MB/s per lane (8x = 4 GB/s)

The latter is exactly correct, because each lane oscillates at 5.0 GHz, using the 8b/10b "legacy frame":

5.0G / 10 bits per byte = 500 MB/second

> PCIe 3.0 is 1GB/s per lane (8x = 8 GB/s)

The latter is almost exactly correct, because each lane oscillates at 8 GHz, using the 128b/130b "jumbo frame":

8G / 8.125 bits per byte = 984.6 MB/second

8x = 984.6 x 8 = 7.88 GB/second

130 bits / 16 bytes = 8.125 bits per byte, PCIe 3.0 "jumbo frame"
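(The same three calculations in one compact Python sketch, keeping the framing above of 8b/10b and 128b/130b as 10 and 8.125 "bits per byte":)

GENERATIONS = {
    "PCIe 1.x": (2.5e9, 10.0),   # 2.5 GT/s, 8b/10b  -> 10 bits per byte
    "PCIe 2.0": (5.0e9, 10.0),   # 5.0 GT/s, 8b/10b
    "PCIe 3.0": (8.0e9, 8.125),  # 8.0 GT/s, 128b/130b -> 130 / 16 = 8.125
}
for gen, (line_rate, bits_per_byte) in GENERATIONS.items():
    per_lane = line_rate / bits_per_byte / 1e6        # MB/s per lane
    print(f"{gen}: {per_lane:7.1f} MB/s per lane, x8 = {per_lane * 8 / 1000:.2f} GB/s")
# PCIe 1.x:   250.0 MB/s per lane, x8 = 2.00 GB/s
# PCIe 2.0:   500.0 MB/s per lane, x8 = 4.00 GB/s
# PCIe 3.0:   984.6 MB/s per lane, x8 = 7.88 GB/s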


> Software HBA RAID cards are usually cheaper than hardware RAID cards. Are there any specific hardware RAID cards that do not cause slow PC reboots? I understand that the delay is typically because of checking all the hard drives in the array.
>
> Edit: What's your CPU utilization like on that 9341-8i RAID card?

About 10-30% on the LSI 9341 (flashed to the generic SAS9300 firmware to avoid INT15 issues). It was 60-75% with onboard software RAID0 with just 3 drives; I imagine it gets even worse the more you add.

With INT13 (a different interrupt) boot support disabled, the controller doesn't add much to reboots... about 20 sec maybe.

MRFS, the reason I wanted a controller that might do 6-7GB/s is that I wanted to future-proof with 12Gb/s support later, so I could use cheap drives now and in a couple of years upgrade to hopefully cheap excess enterprise 12Gb drives. On a card like the 9341, it's 4 channels x 12Gb x 2 ports, right? So ignoring the PCIe interface, its max connectivity would be limited to 2x 6GByte/s..?

Edited by mattsimis


If this is your 9341:

http://www.newegg.com/Product/Product.aspx?Item=9SIA2F83Z94176&Tpk=9SIA2F83Z94176

It supports 2 x mini-SAS SFF-8643 internal connectors (horizontal mount), e.g.:

http://www.newegg.com/Product/Product.aspx?Item=9SIA7363KE6759&Tpk=9SIA7363KE6759

Each of those connectors "fans out" to 4 separate data cables, each capable of 12 Gb/second. So, there are 8 data channels, total.

I'm not sure if the data channels on that controller also use the 128b/130b "jumbo frame" that is true of the PCIe 3.0 chipset and the x8 edge connector. So, let's compute it both ways.

(1) withOUT jumbo frames, theoretical max connectivity is:

8 channels @ 12 Gb/second / 10.0 bits per byte = 9.6 GB/second (exactly 1.2 GB/s x 8)

(2) with 128b/130b jumbo frames, theoretical max connectivity is:

8 channels @ 12 Gb/second / 8.125 bits per byte = 11.82 GB/second

You should contact Tech Support at the manufacturer and ask them to confirm the frame layout of each 12G SAS data channel (i.e. speed over cable, NOT over edge connector): is it the 8b/10b "legacy frame" as in the PCIe 2.0 standard, or the 128b/130b "jumbo frame" as in the PCIe 3.0 standard? (I believe that 12G SAS merely doubled the transmission clock speed, but did NOT change the 8b/10b legacy frame over the data cables.)

The edge connector is the limiting factor, however, because each PCIe 3.0 lane oscillates at 8G, even though each lane uses the 128b/130b jumbo frame:

x8 lanes @ 8 GHz / 8.125 = 7.88 GB/second MAX HEADROOM (upstream bandwidth)

Conclusion: your edge connector has enough raw upstream bandwidth to reach your primary objective -- 6-7 GB/second sustained in RAID-0 mode -- provided that you populate your RAID-0 array with enough fast SSDs to reach that objective.
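(Putting both cases and the PCIe ceiling side by side in a Python sketch; the 8 data channels at 12 Gb/s and the x8 PCIe 3.0 edge connector are the card specs discussed above, and which encoding the SAS side really uses is exactly the question for Tech Support.)

CHANNELS = 8                      # 2 x SFF-8643 connectors, 4 data channels each
SAS_BITS_PER_SEC = 12e9           # 12 Gb/s per channel
cable_8b10b    = CHANNELS * SAS_BITS_PER_SEC / 10.0  / 1e9   # legacy-frame assumption
cable_128b130b = CHANNELS * SAS_BITS_PER_SEC / 8.125 / 1e9   # jumbo-frame assumption
edge_x8        = 8 * 8e9 / 8.125 / 1e9                       # PCIe 3.0 upstream ceiling
print(f"cable side, 8b/10b   : {cable_8b10b:.2f} GB/s")      # 9.60
print(f"cable side, 128b/130b: {cable_128b130b:.2f} GB/s")   # 11.82
print(f"x8 PCIe 3.0 edge     : {edge_x8:.2f} GB/s")          # 7.88
print(f"effective ceiling    : {min(cable_8b10b, cable_128b130b, edge_x8):.2f} GB/s")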

Edited by MRFS


> PCIe 3.0 is 1GB/s per lane (8x = 8 GB/s)
>
> [...] 128b/130b "jumbo frame":
>
> 8G / 8.125 bits per byte = 984.6 MB/second
>
> 8x = 984.6 x 8 = 7.88 GB/second [...]

I did preface that I'm not interested in trying to calculate overhead from bit encoding; it's way beyond my interest, which is just working out how to get past software-RAID DMI 2.0 (a paltry ~2 GB/s) on an X99 chipset with a Broadwell Extreme CPU. (Only 6 SATA ports are RAID-able.)

> About 10-30% on the LSI 9341 (flashed to the generic SAS9300 firmware to avoid INT15 issues). It was 60-75% with onboard software RAID0 with just 3 drives; I imagine it gets even worse the more you add.
>
> With INT13 (a different interrupt) boot support disabled, the controller doesn't add much to reboots... about 20 sec maybe.

See, for a while the Intel 750 card I had was adding about 18 seconds to boot up. I was going to change it over to the Samsung SM961 1TB drive. However, Intel released a new driver that considerably sped up the 750 boot time; for my PC it's about 12 seconds now.

So you're saying there's no way for the X99 BIOS to recognize the Dell OEM LSI 9341 unless I change the firmware to the generic 9300 ROM? I bought the same card you bought on eBay for $108.

> 2 x mini-SAS SFF-8643 internal connector (Horizontal mount)
>
> e.g.:
>
> http://www.newegg.com/Product/Product.aspx?Item=9SIA7363KE6759&Tpk=9SIA7363KE6759

That's a nice cable. The cheapest I found was around $19.

> Each of those connectors "fans out" to 4 separate data cables, each capable of 12 Gb/second.
>
> [...]
>
> (1) withOUT jumbo frames, theoretical max connectivity is:
>
> 8 channels @ 12 Gb/second / 10.0 bits per byte = 9.6 GB/second (exactly 1.2 GB/s x 8)
>
> (2) with 128b/130b jumbo frames, theoretical max connectivity is:
>
> 8 channels @ 12 Gb/second / 8.125 bits per byte = 11.82 GB/second
>
> [...] Conclusion: your edge connector has enough raw upstream bandwidth to reach your primary objective -- 6-7 GB/second sustained in RAID-0 mode -- provided that you populate your RAID-0 array with enough fast SSDs to reach that objective.

Enough fast SSDs? For his current criteria, he would need expensive 12Gb/s SAS SSDs, or, like you previously mentioned, 13 SSDs to get to 6GB/s. For 12Gb/s SSDs, I found a Seagate 200GB 12Gb/s SAS SSD for $500. For $216 on Newegg, I found the 6Gb/s OCZ Trion 150 (960GB) drive he's likely using.

I was curious: if the hardware/firmware support 12Gb/s, technically the SAS side should support 8 SATA3 devices per 12Gb/s SFF-8643 port. I suppose the problem is that the 12Gb/s protocol doubles the frequency, not the number of mechanical connections.

In my use case, I'm trying to get a large amount of SSD space with fast transfer speeds, without buying SAS SSDs like the Sandisk 4TB 6Gb/s for ~$2,500.

Edited by LikesFastBicycles


> I'm trying to get a large amount of SSD space with fast transfer speeds, without buying SAS SSDs like the Sandisk 4TB 6Gb/s for ~$2,500.

I understand. Our situation today would be very different if SSD manufacturers had not cooperated like an oligopoly, and had instead endorsed a new SATA-IV standard that "synced" with the 8G clock in PCIe 3.0 and used the 128b/130b jumbo frame also found in the PCIe 3.0 standard.

oligopoly -- noun -- a state of limited competition, in which a market is shared by a small number of producers or sellers.

By comparison, USB 3.1 adopted both -- by increasing the transmission clock to 10G and implementing a 128b/132b jumbo frame.

I believe the BEST WAY forward is to standardize pre-set variable clock speeds, e.g. 6G, 8G, 12G and 16G, with an extra option to implement jumbo frames too. The industry has supported CPU and DRAM overclocking for a very long time now:

http://supremelaw.org/patents/SDC/Overclocking.Storage.Subsystems.Version.3.pdf

Edited by MRFS


I just got that $108 Dell OEM LSI 9341 card, and I used the cables you suggested on the ASUS X99 Deluxe II board. When I put it in the first slot and set "SLI" to 2X on the switch, everything worked right away. (I did not need to flash the LSI firmware.) The card booted with no issues using the 0401 BIOS. (Recently upgraded to the 0601 BIOS.) I'm waiting for my graphics card to come in; it's still on order. One thing I noticed is that there are very few options on this HBA card.

I could boot Windows off of this drive, but I was having some issues with 8TB, as Windows 7 insisted that the drive use MBR (2TB max) instead of GPT. Won't matter anyway, as I plan on migrating the boot drive to an Intel 750 drive at a later date. These are the preliminary numbers I got:

CrystalDiskMark:

[CrystalDiskMark screenshot]

ATTO:

[ATTO screenshot]


So, if I'm reading your numbers correctly, with ATTO your best READ was 3,834 MB/second / 8 SSDs = 479.25 MB/second per SSD average. Correct? If so, I think you're very close to MAX HEADROOM with that setup.

Remember from above, with 5 SSDs your average was 495.6 MB/second, and one should never expect perfect scaling as more array members are added (from 5 to 8 RAID-0 members).
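(As a quick sanity check on that scaling, a few lines of Python, using only the two numbers quoted above -- the 495.6 MB/second five-drive average and the 3,834 MB/second ATTO READ:)

five_drive_avg = 495.6        # MB/s per SSD, measured with 5 RAID-0 members
atto_read_8 = 3834            # MB/s, best ATTO READ with 8 members
per_ssd_8 = atto_read_8 / 8                 # 479.25 MB/s per SSD
scaling = per_ssd_8 / five_drive_avg        # ~0.967, i.e. only ~3% lost going from 5 to 8
print(f"{per_ssd_8:.2f} MB/s per SSD, {scaling:.1%} of the 5-drive average")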

My experiments with ATTO also showed WRITEs jumping around, so I suspect that behavior is an artifact of ATTO's programming, not your hardware configuration.

p.s. Please elaborate on the meaning of "SLI to 2X".


Here's how I extrapolate your measured overhead to an NVMe RAID controller with an x16 edge connector:

479.25 / 600  =  ~ 0.80 or 20% aggregate controller overhead in RAID-0 mode

1 x U.2 port = 8G x 4 / 8.125 bits per byte  =  3,938.4 MB/second per NVMe SSD

(PCIe 3.0 uses a 128b/130b jumbo frame, or 130 bits / 16 bytes = 8.125 bits per byte)

4 x U.2 ports @ 3,938.4 MB/second  =  15,753.6 MB/second MAX HEADROOM

Highpoint calculated 15,760 (almost identical):
http://highpoint-tech.com/PDF/RR3800/RocketRAID_3840A_PR_16_08_04.pdf

15,753.6 x 0.80  =  12,602 MB/second predicted MAX READ speed

(assuming 20% average controller overhead)
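(Here is that extrapolation as a Python sketch; the ~20% overhead is the figure inferred from the ATTO run above, and the four U.2 ports match the configuration of a card like the RocketRAID 3840A -- a prediction, not a measurement.)

PCIE3_LANE_MB = 984.6                    # 8G / 8.125 bits per byte
measured_per_ssd = 479.25                # MB/s per SATA SSD, from the ATTO run
overhead = round(1 - measured_per_ssd / 600, 2)   # ~0.20 vs the 600 MB/s SATA3 ceiling
u2_port_mb = 4 * PCIE3_LANE_MB           # x4 lanes per U.2 port = 3,938.4 MB/s
headroom_mb = 4 * u2_port_mb             # 4 ports = 15,753.6 MB/s MAX HEADROOM
predicted_mb = headroom_mb * (1 - overhead)       # ~12,603 MB/s predicted READ
print(f"overhead ~{overhead:.0%}, headroom {headroom_mb:,.1f} MB/s, predicted {predicted_mb:,.0f} MB/s")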


One other thing: the X99 chipset uses a DMI 2.0 link:

4 PCIe 2.0 lanes @ 5G / 10 bits per byte = 2.0 GB/second MAX HEADROOM

So, if those 2 x U.2 ports are downstream of that DMI link, a RAID-0 array with those 2 x U.2 ports won't get you very far.

The exact same problem has been replicated several times with multiple M.2 NVMe SSDs, all of which were downstream of the DMI 3.0 link in the Z170 chipset:

4 PCIe 3.0 lanes @ 8G / 8.125 bits per byte  =  3.938 GB/second MAX HEADROOM

(this is the exact same bandwidth of a single M.2 NVMe slot)
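(In code form, for checking whether a given array would even fit through a DMI link; the 3.834 GB/s figure assumed below is just the 8-SSD ATTO READ from earlier in the thread, used as an example target.)

def dmi_headroom_gb_s(lanes, gbps_per_lane, bits_per_byte):
    # DMI is effectively a narrow PCIe link shared by everything on the chipset
    return lanes * gbps_per_lane / bits_per_byte

dmi2 = dmi_headroom_gb_s(4, 5.0, 10.0)      # X99  (DMI 2.0): 2.00 GB/s
dmi3 = dmi_headroom_gb_s(4, 8.0, 8.125)     # Z170 (DMI 3.0): 3.94 GB/s
array_gb_s = 3.834                          # example: the 8-SSD ATTO READ
for name, ceiling in (("DMI 2.0", dmi2), ("DMI 3.0", dmi3)):
    verdict = "bottleneck" if array_gb_s > ceiling else "fits"
    print(f"{name}: {ceiling:.2f} GB/s -> {verdict} for a {array_gb_s} GB/s array")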

So, you're on the right track: exploit the raw upstream bandwidth of your x16 PCIe 3.0 slots, ideally with an NVMe RAID controller like Highpoint's RocketRAID 3840A (or others, as soon as they are available).


Here's an idea I was exploring earlier this week: rather than pay a premium for Intel's 2.5" model 750 NVMe SSD, here's a neat 2.5" enclosure designed for M.2 SSDs:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817801139&Tpk=N82E16817801139

... and it comes with thermal material, to help with heat dissipation.

Then, you can wire them "straight in" just like SATA SSDs with this type of U.2 cable:

http://www.newegg.com/Product/Product.aspx?Item=9SIAA6W3YY8665&Tpk=9SIAA6W3YY8665

This arrangement eliminates the need for an NVMe backplane, assuming your chassis has room for more 2.5" drives.


Here's a simplified version of
our bandwidth comparison with DDR3-1600:

Assume:
DDR3-1600 (parallel bus)
1,600 MHz x 8 bytes per cycle  =  12,800 MB/second
(i.e. exactly TWICE PC2-6400 = 800 x 8)

Now, serialize with PCIe 3.0
(8G transmission clock + 128b/130b jumbo frames):

1 x NVMe PCIe 3.0 lane
= 8 GHz / 8.125 bits per byte  =  984.6 MB/second

4 x NVMe PCIe 3.0 lanes
= 4 x 984.6 MB/second  =  3,938.4 MB/second

4 x 2.5" NVMe SSDs in RAID-0 (zero controller overhead)
= 4 x 3,938.4  =  15,753.6 MB/second

Compute aggregate overhead:
1.0 - (12,800 / 15,753.6)  =  18.7% total overhead

Highpoint calculated 15,760 (almost identical):
http://highpoint-tech.com/PDF/RR3800/RocketRAID_3840A_PR_16_08_04.pdf

Conclusion:  
assuming aggregate controller overhead of 18.7%,
four 2.5" NVMe SSDs in RAID-0
exactly equal the raw bandwidth
of DDR3-1600 DRAM.
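(The same comparison in Python, so the DRAM figure or the overhead assumption can be swapped out; the numbers are the ones above, with perfect RAID-0 scaling assumed. The break-even overhead comes out to 18.75%, which is the 18.7% figure above, rounded.)

ddr3_1600_mb = 1600 * 8            # MT/s x 8 bytes per transfer = 12,800 MB/s
nvme_lane_mb = 984.6               # PCIe 3.0 lane: 8G / 8.125 bits per byte
nvme_ssd_mb  = 4 * nvme_lane_mb    # x4 lanes per 2.5" NVMe SSD = 3,938.4 MB/s
raid0_mb     = 4 * nvme_ssd_mb     # four SSDs, zero-overhead RAID-0 = 15,753.6 MB/s
overhead = 1 - ddr3_1600_mb / raid0_mb
print(f"break-even aggregate overhead: {overhead:.2%}")   # 18.75%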


I have an affiliation with Intel, so I get their hardware cheaper than retail pricing. I'm aware of the internal 2x U.2 ports & the vertical M.2 slot (all support 4x NVMe), but I avoid them because, as far as I know, the U.2 ports borrow PCIe lanes from the PCIe slots. The other complication is, again, price. Any NVMe drive is typically much more expensive than an ordinary SSD. I need lots of slots because I plan on shoving in an ASUS ThunderboltEX-3 card, 2 Titan X Pascal cards, the 9341-8i RAID card, and an Intel 750 card.

The fastest cost-effective 1TB M.2 SSD, the Samsung PM961, is very difficult to buy anywhere. I'm using an Enthoo Evolv ATX case (there are 0 drive cages, "per se"). But I'm still waiting for parts before I finalize anything. I also have a water block for the 750 drive; however, I don't really think it will change the thermals of the card by much. As a temporary fix, I'm using rubber bands between 2 sets of 4 SSDs. In fact, the only thing that gets heat build-up is the RAID card itself. Where the GPUs end up will determine what kind of cooling solution I use on the 9341 RAID card.

Interestingly enough, boot-up time is about 5 seconds with the RAID card (by itself), much faster than the PERC cards (~30 seconds) I use in my rack servers. Either that's because SSDs report to the RAID card faster, or they optimized the firmware on the RAID card. I selected the LSI card specifically because of their track record of providing dependable software/firmware, whereas I've read online about complications with Highpoint or Adaptec. Kinda hard to compete with a $100 8x SATA RAID card though.

Edit: I will probably run the tests again and see what sort of CPU utilization I get with CrystalMark/ATTO (the RAID is software-based, from what I understand) on the Broadwell Extreme 6850K 6-core I selected. (For single-threaded performance: 3.6GHz stock, 3.8GHz burst.)

Edit 2: I looked at numerous ways of using M.2 drives instead of SSDs, but typically M.2 drives have reduced performance, and there's a lot less competition on pricing. You're basically looking at Samsung, Crucial, Sandisk, or OCZ. The most price-competitive M.2s I found were at 480GB, which would require double the number of drives.

Edit 3: I did find this page from another enthusiast who bought pre-holed sheet metal and used slices of it to create a "drive cage": https://forums.servethehome.com/index.php?threads/anyone-with-4-x-samsung-840-pros-on-raid5-with-lsi-card.1610/page-3

Edited by LikesFastBicycles


> the only thing that gets heat build-up is the RAID card itself.

We use a cheap twin-fan cooler card that plugs in immediately adjacent to our RocketRAID 2720SGL. This does require an empty PCI slot, however:

http://www.newegg.com/Product/Product.aspx?Item=N82E16835888112&Tpk=N82E16835888112

(This is one of the most trusted members of our FAN CLUB! LOL!! :)

MANY THANKS for the detailed updates.

Edited by MRFS


Here is something fascinating. When I checked the 9x read tests in CrystalMark, there was a small impact: about 20% on one of the 12 hyperthreaded cores, and about 85% utilization on the 9x write tests. So I went into the settings and set CrystalMark to use 4 threads; that's when I got these numbers:

[CrystalDiskMark screenshot]

While the sequential read speed suffered a little, the write speed is way faster. It's pretty obvious that if I want to maximize the R/W with a PCIe RAID card, I'll need to buy 8 more SSDs. Perhaps in about 4 months I'll consider upgrading to a 16-port 6Gb/s card. Very interesting. Note: on the Broadwell Extreme 40-lane 6850K, the processor tops out at about 15% overall across the 12 virtual cores using Windows 10 Professional. Not bad. :)


> the write speed is way faster.

No kidding! 2,390 to 3,409 is a BIG DIFFERENCE.

And, that one change was attributable to the CDM setting of 4 threads? Does CDM default to 1 thread? Did you get 2,390 with CDM set to 1 thread? (Your first ATTO graph does not specify the thread count.)


Teaming? What happens if/when you install two identical RAID controllers in matching PCIe 3.0 slots, assigned the same number of PCIe lanes?

2 @ x8 = ~x16 (effectively)

I'm very curious now what sort of performance you will see if you could install 2 x RAID controllers in matching PCIe 3.0 slots, and wire half of the SSDs to one and the other half to the other, e.g.:

http://www.newegg.com/Product/Product.aspx?Item=9SIA67S46V0318&Tpk=9SIA67S46V0318
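(Back-of-the-envelope, here is what teaming two x8 cards buys in raw upstream bandwidth -- a Python sketch only; it says nothing about how well OS-level striping across two controllers actually scales.)

pcie3_lane_mb = 984.6                    # 8G / 8.125 bits per byte
one_card_gb  = 8 * pcie3_lane_mb / 1000  # one x8 card:  ~7.88 GB/s upstream
two_cards_gb = 2 * one_card_gb           # 2 @ x8:      ~15.75 GB/s, i.e. ~x16 worth
print(f"one x8 card: {one_card_gb:.2f} GB/s, two x8 cards: {two_cards_gb:.2f} GB/s")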

 

This one user review is very revealing:

[begin quote]

Cons: The bad news is that you get exactly what you see in the picture here- Two HighPoint 2720SGL raid cards that normally cost $160 each. The only difference between purchasing this or purchasing two separate 2720SGL's is that this is $100 more. This is a good option if you have an extra $100 that you are looking to throw away. LOL

Other Thoughts: The description for this product claims that this kit will allow you to run both of these 2720SGL's in the same server and combine all 16 ports to create a single Raid array. What they don't tell you is you can set this same Raid array up even if you purchase these cards separately. We've been doing this same thing for several years in our other storage servers, which also run two independent cards each. When we opened this package and dumped it out, we saw two 2720 retail boxes taped together and a fuzzy, hard to read photo copied sheet of paper that showed how to plug both cards into our computer.

[end quote]

Edited by MRFS

On 8/19/2016 at 0:18 PM, MRFS said:

And, that one change was attributable to the CDM setting of 4 threads?

Does CDM default to 1 thread?

Did you get 2,390 with CDM set to 1 thread?

(your first ATTO graph does not specify the thread count)

CDM defaults to 1 thread; it was configurable from the "settings" tab. I didn't try different numbers of threads; I can try that too, though. I have 12 logical cores, so technically up to 12 threads would benefit from multi-threading support. I'll retry ATTO again; I don't know if there's a thread count setting on it.

On 8/19/2016 at 3:10 PM, MRFS said:

Teaming? What happens if/when you install two identical RAID controllers in matching PCIe 3.0 slots, assigned the same number of PCIe lanes?

2 @ x8 = ~x16 (effectively)

I'm very curious now what sort of performance you will see if you could install 2 x RAID controllers in matching PCIe 3.0 slots, and wire half of the SSDs to one and the other half to the other e.g.:

http://www.newegg.com/Product/Product.aspx?Item=9SIA67S46V0318&Tpk=9SIA67S46V0318

This one user review is very revealing:

[begin quote]

Cons: The bad news is that you get exactly what you see in the picture here- Two HighPoint 2720SGL raid cards that normally cost $160 each. The only difference between purchasing this or purchasing two separate 2720SGL's is that this is $100 more. This is a good option if you have an extra $100 that you are looking to throw away. LOL

Other Thoughts: The description for this product claims that this kit will allow you to run both of these 2720SGL's in the same server and combine all 16 ports to create a single Raid array. What they don't tell you is you can set this same Raid array up even if you purchase these cards separately. We've been doing this same thing for several years in our other storage servers, which also run two independent cards each. When we opened this package and dumped it out, we saw two 2720 retail boxes taped together and a fuzzy, hard to read photo copied sheet of paper that showed how to plug both cards into our computer.

[end quote]

I haven't reached the bandwidth limitation of one PCIe slot. I could buy an older 9260-16i; however, it's going for around $490 on eBay. (Ouch!) The problem with using two 8-port RAID cards is that I'd have to use Windows software RAID on top of the cards' RAID 0, which means if I ever wanted to use the RAID 0 array as a boot drive, Windows would frown and say no. Also, I still don't know how many free slots I'll have (40 lanes max on Broadwell-Extreme). However, if I was running a FreeNAS server, it would make sense to use multiple cards. 8TB is not bad (e.g. $180 Ultra II 1TB SSDs, $1,440 total -- about 18 cents/GB) with the $100 RAID card and roughly $30 of cables.

I think I mentioned this before: since I bought a 12Gb/s card, I'm still confused why it doesn't support 8x 6Gb/s channels per port, each drive taking "one direction" of the bidirectional interface. I could use a SAS backplane, but that would make the backplane itself the bottleneck. I read online that a SAS3 backplane will make SAS2 drives run "30% faster". Probably has to do with the bit encoding you mentioned earlier.

I need the big SSD makers to get more competitive on pricing for high-speed SSDs/NVMe drives before I can go even higher capacity/faster speeds. The current "budget" oriented 1.2TB Intel 750 2.5-inch drive on Newegg is ~$700, and the rare Samsung M.2 SM961 1TB is ~$521. Limited competition is fantastic for manufacturers. (Even the "cheap" Samsung drive is still 2.89x more expensive than a cheap Sandisk Ultra II 1TB SSD. Although... you're comparing 550MB/s read/write vs. 3.2GB/s read and 1.8GB/s write.)
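(For the record, the cost arithmetic behind that 8TB figure, as a Python sketch; the prices are the ones quoted in this post.)

drive_cost = 8 * 180               # eight 1TB Ultra II SSDs = $1,440
extras = 100 + 30                  # RAID card plus roughly $30 of cables
capacity_gb = 8 * 1000             # 8 TB of raw RAID-0 capacity
print(f"drives only: {100 * drive_cost / capacity_gb:.1f} cents/GB")                     # 18.0
print(f"with card and cables: {100 * (drive_cost + extras) / capacity_gb:.1f} cents/GB")  # 19.6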

