Using flash memory and/or flash storage would make this graphics WS even more expensive - but it's certainly worth thinking about twice.
In this scenario PCIe Rev. 3 would really pay off - so YES, we will take this into more serious consideration.
But then funds are not unlimited.
So we have to propose solutions for two scenarios: one for 'current optimal' and one for 'low cost'.
What's your take?
Found the block diagram for the ASUS P9X79E-WS: http://www.anandtech.com/show/5089/sandy-bridgee-and-x79-the-asus-p9x79-pro-review
According to ASUS, slots 1, 2, 3, 5, 6, and 7 are all PLX-routed: one PLX for slots 1-3, another for slots 5-7. Slot 4 connects directly to the CPU.
>> infusing any flash into the rig?
I don't quite understand. Do you mean flash memory, like an SSD?
What exactly do you mean?
>> target use case for ... the creative space
Exactly - you hit the nail on the head!
Yes, we are considering the ASUS P9X79E-WS.
Apparently this board does support PCIe Version 3.0!
That's in contrast to Intel's specification for the X79 chipset: http://www.intel.com/content/www/us/en/chipsets/performance-chipsets/x79-express-chipset.html
I find this rather confusing.
So far I have not found a block diagram for this MoBo and wonder how ASUS actually implemented PCIe Version 3 on it.
I'd assume that ASUS (just like Gigabyte, ASRock, EVGA, and MSI) is tapping the Sandy Bridge-E and/or Ivy Bridge-E processors' 40 PCIe 3.0 lanes via a multiplexer like the PEX8747 chip.
We're aiming for a chipset that fully supports PCIe Version 3.0, but I'm afraid that doesn't exist yet.
(Hope we're missing something.)
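For what it's worth, here's a quick back-of-the-envelope check (Python; the 40-lane CPU figure is from above, the slot widths are our planned build) of why the CPU's native lanes should actually cover our configuration even without a PEX8747:

```python
# Rough PCIe lane budget for the planned build.
# Assumption: 40-lane Sandy Bridge-E / Ivy Bridge-E CPU (LGA2011).
CPU_LANES = 40

slots = {
    "graphics controller (x16)": 16,
    "LSI MegaRAID 9270-8i #1 (x8)": 8,
    "LSI MegaRAID 9270-8i #2 (x8)": 8,
}

used = sum(slots.values())
print(f"Lanes used: {used} of {CPU_LANES} -> {CPU_LANES - used} spare")
# Lanes used: 32 of 40 -> 8 spare
```

So a PEX8747 switch should only come into play if more (or wider) slots have to be active at the same time.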
>> Lian Li PC-343B Case $500 shipped
I am very interested!
I assume the 5-drive-in-3-bay 3.5" SATA hot-swap cages are included - correct?
I've got the same case already but need another one. Though the new D600 is also quite tempting.
Does your shipping price cover shipment to Zurich, Switzerland, as well?
Let me know please.
I need to build a powerful graphics workstation (Windows 7 Professional 64-bit) for video editing.
The WS shall be equipped with one professional graphics controller (PCIe x16) and two LSI MegaRAID 9270-8i controllers (both at PCIe x8), resulting in two RAID5 arrays of 8 drives each.
The workload is as follows: large files of 2-6 GB+ (sometimes several at once) are read from either of the two LSI MegaRAIDs while (often at the same time) video sequences are cut, pasted, rendered, etc.
No other PCIe slots will be used.
I'm finding it hard to locate a suitable motherboard that supports PCIe Version 3 without bandwidth limitations on any of the PCIe lanes or slots.
Which Motherboard(s) would conform to (or best meet) the setup described?
Chipset Z87 supports PCIe Ver 3, but most boards I've seen multiplex the lanes with a PEX8747 chip.
AFAIK Chipset X79 supports PCIe Ver 2.
However, I would think the primary bottleneck might be the disk write speed (even with an 8-drive RAID5 array). Therefore a MoBo with PCIe Ver 2 would probably do the job as well.
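To back that up, a rough estimate (Python; the ~150 MB/s per-drive sequential write speed is my assumption for typical 7200 rpm drives):

```python
# Is PCIe 2.0 x8 a bottleneck for an 8-drive RAID5 array?
PCIE2_PER_LANE = 500   # MB/s per PCIe 2.0 lane (after 8b/10b encoding)
LANES = 8              # the LSI 9270-8i uses a PCIe x8 link
DRIVES, PARITY = 8, 1  # RAID5 spends one drive's worth of capacity on parity
DRIVE_WRITE = 150      # MB/s, assumed sustained sequential write per drive

link_bw = PCIE2_PER_LANE * LANES            # ~4000 MB/s
array_bw = (DRIVES - PARITY) * DRIVE_WRITE  # ~1050 MB/s best case
print(f"PCIe 2.0 x8 link: {link_bw} MB/s vs. RAID5 array: ~{array_bw} MB/s")
```

So even in the best case the disks deliver only about a quarter of what a PCIe 2.0 x8 link can carry - the drives, not the slot, set the limit.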
But which one?
Any other thoughts or suggestions?
Your advice is highly appreciated.
You might try this one: the PEXSAT34 from www.startech.com.
Works fine here - it uses a PCIe 2.0 x4 slot.
Had a similar issue with an ASUS P8P67-Deluxe: couldn't get the LSI 9260-4i properly recognized by the UEFI BIOS.
After a few days of trial and error, LSI support, research, etc., I eventually gave up on this and now use an SSD as the boot device.
Once a week I pull an image of that SSD with Acronis True Image (the restore procedure has been successfully tested, so I'm safe).
I suspect that the ASUS UEFI BIOS and the LSI 9260 for some reason just don't like each other.
Hope this helps.
I am using the WD SE drives instead and they run just fine - with write speeds around 450 MB/s.
I had the same issue when using the REDs.
The REDs are now being used in several Lian Li EX-503 enclosures.
Hope this helps.
I've got an LSI 9270-8i to build two RAID5 arrays (VD0 and VD1) out of 2x 4 WD RE 1.8 TB drives (four drives per array).
I'm curious to learn the best way to wire the 8 drives to the 2 ports for the most efficient operation.
Here are my thoughts:
The LSI 9270-8i has two ports: p0-3 and p4-7.
In order to use both ports efficiently no matter whether VD0 or VD1 is being accessed, I should connect as follows:
2 drives of VD0 to p0-1 and the other 2 drives of VD0 to p4-5
2 drives of VD1 to p2-3 and the other 2 drives of VD1 to p6-7
This should give balanced use of both ports at any time.
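A tiny sanity check (Python; the pN labels just mirror the controller's phy numbering) that tallies the proposed mapping:

```python
# Proposed cabling: which phy each VD's drives hang off.
wiring = {
    "VD0": ["p0", "p1", "p4", "p5"],
    "VD1": ["p2", "p3", "p6", "p7"],
}

for vd, phys in wiring.items():
    # Port 1 carries phys 0-3, port 2 carries phys 4-7.
    on_port1 = sum(1 for p in phys if int(p[1]) < 4)
    on_port2 = len(phys) - on_port1
    print(f"{vd}: {on_port1} drives on p0-3, {on_port2} on p4-7")
# VD0: 2 drives on p0-3, 2 on p4-7
# VD1: 2 drives on p0-3, 2 on p4-7  -> balanced either way
```

That said, if I'm not mistaken, with drives cabled directly (no expander) each phy is its own 6 Gb/s link anyway, so the split across the two connectors shouldn't change throughput much.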
What do you think?
Are there better ways to achieve balanced use of the ports?
What are the disadvantages of the setup described?
I know the best option would be to create a single VD in RAID6 mode (correct?).
But for valid reasons we do need 2 separate arrays.
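For comparison, a quick tally (Python; using the 1.8 TB drive size from above) of what we give up versus one big RAID6:

```python
# One 8-drive RAID6 vs. two 4-drive RAID5 arrays.
DRIVE_TB = 1.8

raid6_usable = (8 - 2) * DRIVE_TB       # two parity drives
raid5_usable = 2 * (4 - 1) * DRIVE_TB   # one parity drive per array

print(f"RAID6, 8 drives:     {raid6_usable:.1f} TB usable, survives ANY 2 failures")
print(f"2x RAID5, 4 drives:  {raid5_usable:.1f} TB usable, survives 1 failure PER array")
```

Usable capacity comes out the same (10.8 TB); the difference is that RAID6 survives any two simultaneous failures, while the two RAID5s each survive only one failure apiece.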
Thanks for feedback and sharing your thoughts.