LikesFastBicycles

  1. CDM defaults to 1 thread; it's configurable from the "Settings" tab. I didn't try different thread counts, but I can try that too. I have 12 threads (6 cores with Hyper-Threading), so technically up to 12 threads could benefit from multi-threading support. I'll retry ATTO as well; I don't know whether it has a thread-count setting.

     I haven't reached the bandwidth limit of one PCIe slot. I could buy an older 9260-16i, however... it's hitting around $490 on eBay. (Ouch!) The problem with using two 8-port RAID cards is that I'd have to layer Windows software RAID on top of each card's RAID 0, which means if I ever wanted to use the RAID 0 array as a boot drive, Windows would frown and say no. Also, I still don't know how many free slots I'll have (40 lanes max on Broadwell-E). However, if I were running a FreeNAS server, it would make sense to use multiple cards. 8TB is not bad (e.g., $180 Ultra II 1TB SSDs, $1,440 total, roughly 18 cents/GB) with the $100 RAID card and roughly $30 of cables.

     I think I mentioned this before: since I bought a 12Gb/s card, I'm still confused why it doesn't support 8x 6Gb/s channels per port, with each drive taking "one direction" of the bidirectional interface. I could use a SAS backplane, but that would make the backplane itself the bottleneck. I read online that a SAS3 backplane will make SAS2 drives run "30% faster"; that probably has to do with the bit encoding you mentioned earlier.

     I need the big SSD makers to get more competitive on pricing for high-speed SSDs/NVMe drives before I can go even higher in capacity or speed. The current "budget" oriented 1.2TB Intel 750 2.5-inch drive on Newegg is ~$700, and a rare Samsung M.2 SM961 1TB is ~$521. Limited competition is fantastic for manufacturers. (Even the "cheap" Samsung drive is still 2.89x more expensive than a cheap SanDisk Ultra II 1TB SSD, although you're comparing 550MB/s read/write against 3.2GB/s read / 1.8GB/s write.)
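
A quick sanity check on the cost figures in that post (a minimal Python sketch; only the prices quoted above go in, the cents/GB and price ratio fall out):

```python
# Back-of-the-envelope cost check for the 8x 1TB Ultra II array described above.
# Prices are the ones quoted in the post; everything else is simple arithmetic.
drive_price = 180        # USD per SanDisk Ultra II 1TB
drive_count = 8
raid_card = 100          # USD, the 8-port RAID card
cables = 30              # USD, breakout cables

drives_only = drive_price * drive_count          # $1,440
total = drives_only + raid_card + cables         # $1,570
capacity_gb = drive_count * 1000                 # 8 TB, decimal GB

print(f"drives only: {100 * drives_only / capacity_gb:.0f} cents/GB")   # ~18 cents/GB
print(f"with card  : {100 * total / capacity_gb:.1f} cents/GB")         # ~19.6 cents/GB
print(f"SM961 ($521) vs Ultra II ($180): {521 / 180:.2f}x")             # ~2.89x
```
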
  2. Here is something fascinating. When I watched the 9x read tests in CrystalDiskMark, there was a small CPU impact, about 20% on one of the 12 hyperthreaded cores, and about 85% utilization on the 9x write tests. So I went into the settings and set CrystalDiskMark to use 4 threads, and that's when I got these numbers: while sequential write speed suffered a little, the overall write speeds are way faster. It's pretty obvious that if I want to maximize the R/W with a PCIe RAID card, I'll need to buy 8 more SSDs. Perhaps in about 4 months I'll consider upgrading to a 16-port 6Gb/s card. Very interesting. Note: on the Broadwell-E 40-lane 6850K, the processor tops out at about 15% overall across the 12 virtual cores under Windows 10 Professional. Not bad.
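
If it helps, here is a minimal sketch of how I'd watch per-core load while one of these benchmarks runs (assumes the third-party psutil package; the benchmark itself runs separately):

```python
# Sample per-core CPU utilization once a second while a disk benchmark
# (CrystalDiskMark, ATTO, ...) runs in another window, to see the
# "one busy hyperthread vs. low overall load" pattern described above.
import psutil

for _ in range(30):                                   # ~30 one-second samples
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    overall = sum(per_core) / len(per_core)
    print(f"busiest core: {max(per_core):5.1f}%   overall: {overall:5.1f}%")
```
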
  3. I have an affiliation with Intel, so I get their hardware cheaper than retail pricing. I'm aware of the internal 2x U.2 ports and the vertical M.2 slot (all support x4 NVMe), but I avoid them because, as far as I know, the U.2 ports borrow PCIe lanes from the PCIe slots. The other complication is, again, price: any NVMe drive is typically much more expensive than an ordinary SSD. I need lots of slots because I plan on shoving in an ASUS ThunderboltEX 3 card, 2 Titan X Pascal cards, the 9341-8i RAID card, and an Intel 750 card. The fastest cost-effective 1TB M.2 SSD, the Samsung PM961, is very difficult to buy anywhere.

     I'm using an Enthoo Evolv ATX case (there are zero drive cages, per se), but I'm still waiting for parts before I finalize anything. I also have a water block for the 750 drive; however, I don't really think it will change the thermals of the card by much. As a temporary fix, I'm banding the SSDs into two sets of 4 with rubber bands. In fact, the only thing that builds up heat is the RAID card itself. Where the GPUs go will determine what kind of cooling solution I use on the 9341 RAID card.

     Interestingly enough, boot-up time is about 5 seconds with the RAID card (by itself), much faster than the PERC cards (~30 seconds) I use in my rack servers. Either the SSDs report to the RAID card faster, or the firmware on this card is better optimized. I specifically selected the LSI card because of their track record of dependable software/firmware, whereas I've read online about complications with HighPoint and Adaptec. Kind of hard to compete with a $100 8x SATA RAID card, though.

     Edit: I will probably run the tests again and see what sort of CPU utilization I get in CrystalDiskMark/ATTO (the RAID is software-based, from what I understand) with the Broadwell-E 6850K 6-core I selected (for single-threaded performance: 3.6GHz stock, 3.8GHz turbo).

     Edit 2: I looked at numerous ways of using M.2 drives instead of SSDs, but M.2 drives typically have reduced performance and there's a lot less competition on pricing. You're basically looking at Samsung, Crucial, SanDisk, or OCZ. The most price-competitive M.2 drives I found were 480GB, which would require double the number of drives.

     Edit 3: I did find this page from another enthusiast who bought pre-holed sheet metal and used slices of it to create a "drive cage": https://forums.servethehome.com/index.php?threads/anyone-with-4-x-samsung-840-pros-on-raid5-with-lsi-card.1610/page-3
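
Since the free-slot question keeps coming up, here is a rough lane-budget tally for that parts list (a sketch with assumed link widths; the real allocation depends on the board's slot wiring, and chipset/DMI-attached devices aren't counted):

```python
# Hypothetical CPU PCIe lane budget for the build listed above (40 lanes on the
# 6850K). Link widths below are illustrative assumptions, not measured values.
cpu_lanes = 40

desired_links = {
    "Titan X Pascal #1": 16,
    "Titan X Pascal #2": 16,
    "LSI 9341-8i RAID card": 8,
    "Intel 750 add-in card": 4,
    "ASUS ThunderboltEX 3": 4,
}

wanted = sum(desired_links.values())
print(f"lanes wanted: {wanted} / available: {cpu_lanes}")
if wanted > cpu_lanes:
    print("over budget: something drops down (e.g. one GPU at x8, still ~8 GB/s on PCIe 3.0)")
```
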
  4. I just got that $108 Dell OEM LSI 9341 card, and I used the cables you suggested on the ASUS X99 Deluxe II board. When I use it in the first slot and set "SLI" to 2X on the switch, everything worked right away. (I did not need to flash the LSI firmware.) The card booted with no issues on the 0401 BIOS (recently upgraded to 0601). I'm waiting for my graphics card to come in; it's still on order. One thing I noticed is that there are very few options on this HBA card. I could boot Windows off of this array, but I was having some issues with 8TB, as Windows 7 insisted the drive use MBR (2TB max) instead of GPT. It won't matter anyway, as I plan on migrating the boot drive to an Intel 750 drive at a later date. These are the preliminary numbers I got (CrystalDiskMark and ATTO screenshots):
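
The Windows 7 MBR complaint is just the partition format's addressing limit; a quick sketch of the arithmetic:

```python
# MBR stores partition sizes as 32-bit sector counts, so with 512-byte sectors
# the largest addressable volume is about 2.2 TB (2 TiB). GPT uses 64-bit
# counts, which is why an 8TB RAID 0 volume has to be GPT.
sector_bytes = 512
mbr_max_bytes = (2**32) * sector_bytes

print(f"MBR ceiling: {mbr_max_bytes / 1e12:.2f} TB ({mbr_max_bytes / 2**40:.0f} TiB)")
print("8 TB array : needs GPT")
```
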
  5. I did preface that I'm not interested in trying to calculate the overhead from bit encoding; it's way beyond my point, which was just how to get past the DMI 2.0 bottleneck (a paltry ~2 GB/s) that software RAID sits behind on an X99 chipset with a Broadwell-E CPU. (Only 6 SATA ports are RAID-able.)

     See, for a while the Intel 750 card I had was adding about 18 seconds to boot-up, and I was going to change it over to the Samsung SM961 1TB drive. However, Intel released a new driver that considerably sped up the 750's boot time; for my PC it's about 12 seconds now.

     So you're saying there's no way for the X99 BIOS to recognize the Dell OEM LSI 9341 unless I change the firmware to the generic 9300 ROM? I bought the same card you bought on eBay for $108. That's a nice cable; the cheapest I found was around $19.

     Enough fast SSDs? For his current criteria, he would need expensive 12Gb/s SAS SSDs, or, like you previously mentioned, 13 SSDs to get to 6 GB/s. For 12Gb/s SSDs, I found a Seagate 200GB 12Gb/s SAS SSD for $500. For $216 on Newegg, I found the 6Gb/s OCZ Trion 150 (960GB) drive he's likely using. I was curious: if the hardware/firmware support 12Gb/s, technically SAS should support 8 SATA3 devices per 12Gb/s SFF-8643 port. I suppose the problem is that the 12Gb/s protocol doubles the signaling rate, not the number of physical connections. In my use case, I'm trying to get a large amount of SSD space with fast transfer speeds without buying SAS SSDs like the SanDisk 4TB 6Gb/s for ~$2,500.
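
To put numbers on the "13 SSDs" point (a sketch; the ~470 MB/s per-drive figure is my assumption for a budget SATA SSD's real-world sequential throughput, not a number from the thread):

```python
import math

per_drive_mb_s = 470     # assumed real-world sequential speed of one budget SATA SSD
target_gb_s = 6.0        # the aggregate throughput being discussed
dmi2_gb_s = 2.0          # X99 chipset uplink, i.e. the software-RAID ceiling

print("drives for ~6 GB/s aggregate :", math.ceil(target_gb_s * 1000 / per_drive_mb_s))  # 13
print("drives that already fill DMI2:", math.ceil(dmi2_gb_s * 1000 / per_drive_mb_s))    # 5
```
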
  6. Not accounting for overhead or changes to the bit encoding between PCI Express and the chipset:

     PCIe 1.x is 250 MB/s per lane (x8 = 2 GB/s)
     PCIe 2.0 is 500 MB/s per lane (x8 = 4 GB/s)
     PCIe 3.0 is ~1 GB/s per lane (x8 = 8 GB/s)

     SATA2 .. up to 300 MB/s
     SATA3 .. up to 600 MB/s

     Each SAS 6Gb/s port supports 4 SATA3 devices; assuming max speeds:
     6 ports, 24 devices, 14.4 GB/s
     4 ports, 16 devices, 9.6 GB/s
     2 ports, 8 devices, 4.8 GB/s
     1 port, 4 devices, 2.4 GB/s

     DMI 1.0 (Intel software RAID) is 1.16 GB/s
     DMI 2.0 (Intel software RAID) is 2 GB/s
     DMI 3.0 (Intel software RAID) is 3.93 GB/s (only supported on Skylake boards)

     ------

     Software HBA RAID cards are usually cheaper than hardware RAID cards. Are there any specific hardware RAID cards that do not cause slow PC reboots? I understand the delay is typically caused by checking all the hard drives in the array.

     Edit: What's your CPU utilization like on that 9341-8i RAID card?
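
The port math in that list can be recomputed in a few lines (same assumptions as above: 4 SATA3 drives per x4 wide port, 600 MB/s per drive, no encoding overhead):

```python
# Aggregate throughput per number of SAS wide ports, assuming each x4 port
# fans out to 4 SATA3 drives and each drive sustains the full 600 MB/s.
sata3_mb_s = 600
drives_per_port = 4

for ports in (1, 2, 4, 6):
    drives = ports * drives_per_port
    aggregate_gb_s = drives * sata3_mb_s / 1000
    print(f"{ports} port(s): {drives:2d} devices, {aggregate_gb_s:4.1f} GB/s")
```
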
  7. Wouldn't you technically need an x16 PCIe 3.0 card to get that kind of performance?
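
In rough numbers (a sketch; the 24-drive aggregate is the best case from the table in the post above, and ~1 GB/s per PCIe 3.0 lane ignores protocol overhead):

```python
# Does the 24-drive best case fit through an x8 PCIe 3.0 slot, or does it
# really take x16? Theoretical peaks only.
pcie3_lane_gb_s = 0.985            # ~1 GB/s per lane after 128b/130b encoding
aggregate_gb_s = 24 * 0.6          # 24 SATA3 drives at 600 MB/s each

for lanes in (8, 16):
    slot_gb_s = lanes * pcie3_lane_gb_s
    verdict = "enough" if slot_gb_s >= aggregate_gb_s else "bottleneck"
    print(f"x{lanes:<2}: {slot_gb_s:5.1f} GB/s slot vs {aggregate_gb_s:.1f} GB/s of drives -> {verdict}")
```
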