Ron_Jeremy

Initial SLI #'s look promising


The mighty FX-55 becomes the bottleneck! All in all, an interesting read. Although I knew about the PCB doohickey that physically bridges the cards, I was unaware of the SO-DIMM-type card that's also needed.

Another tidbit I found interesting was:

Remember that despite the fact that there are two x16 slots on the motherboard, there are still only 16 total lanes allocated to them at most - meaning that each slot is electrically still only a x8, but with a physical x16 connector. While having a x8 bus connection means that the slots have less bandwidth than a full x16 implementation, the real world performance impact is absolutely nothing. In fact, gaming performance doesn't really change down to even a x4 configuration; the performance impact of a x1 configuration itself is even negligible.


I'd recommend a dualie for SLI cards.

With that said, I'm not impressed. The cost-to-benefit ratio doesn't sway me, especially since my native refresh is 60 Hz. Also, I'd rather the companies work out the remaining issues with SLI before I invest in something like this.

I'd recommend a dualie for SLI cards.

With that said, I'm not impressed. The cost-to-benefit ratio doesn't sway me, especially since my native refresh is 60 Hz. Also, I'd rather the companies work out the remaining issues with SLI before I invest in something like this.

I agree with the dualie thing.

As for the rest of it, I think they are going for the top 1% of the enthusiast market with this one. I can also see this having great benefits for professional engineers doing CAD/CAM work, as well as 3D graphic artists. Combine this with a Softimage certification or full compatibility with Maya, and we may see companies like Pixar picking up a few.

Thank you for your time,

Frank Russo


I don't know about y'all, but I am totally excited about SLI.

Why?

It's simple:

By most accounts, two 6600 GTs are faster than one 6800 Ultra and about $140 cheaper…

Can't beat that.

I don't know about y'all, but I am totally excited about SLI.

Why?

It's simple:

By most accounts, two 6600 GTs are faster than one 6800 Ultra and about $140 cheaper…

Can't beat that.

Actually two 6600s are slower than a single GT. Also the GT is cheaper.


I'm a little thrilled as well by the possibilities of SLI and the initial numbers Anand got ... as a gamer, that is.

Dual CPU for SLI? I don't think so ... at least not for regular desktop usage plus gaming needs. I only see a dual-CPU option as interesting when you just can't stop your number-crunching client while gaming ("whatever"@home), which keeps one CPU occupied while you run your games on the second one.

Otherwise games won't see any big improvements from a dual-CPU setup, since they're not multithreaded.

SLI, on the other hand, is quite interesting: buy an nForce4 SLI mobo today with one 6800GT and you have an awesome gaming setup ... and in a year or two, when your latest games run kind of slow, just add a second 6800GT, which will be much more affordable by then and will give you a decent performance boost. It'll also give NVIDIA time to work out the last kinks in their SLI drivers.

At least that's the combo I'm waiting on to become available so I can replace my 3-year-old hardware; Half-Life 2 is just around the corner and I really want to play it with full eye candy.

Dual CPU only makes sense for some professional apps that are multithreaded, but those probably won't make much use of SLI. So it's really about your needs.

And don't get me started on the cost/performance value of dual-CPU mobos for gaming needs ;)


I've read that it's possible to run two cards in non-SLI mode and therefore have four monitors running from two cards.

What I'm really curious to know is if it will be possible to run a video card in one slot and a 4X PCIe Hardware RAID controller in the other slot?

I want to set up a RAID 5 system for my PC. I've already got 4 HDDs of files I don't want to lose and I'll need to buy more disks again soon; RAID 0+1 looks like a pretty expensive option in my case (even if it is free on the motherboard).

Write performance isn't such a big issue, as I'll have a separate (non-RAIDed) drive for my operating system and another for my temp files, swap space, and other frequently written-to data.

Has anyone heard anything along these lines?

I don't know about y'all, but I am totally excited about SLI.

Why?

It's simple:

By most accounts, two 6600 GTs are faster than one 6800 Ultra and about $140 cheaper…

Can't beat that.

Actually two 6600s are slower than a single GT. Also the GT is cheaper.

Actually, two 6600 GTs are faster than a single 6800 Ultra. Ultra, not GT.

Per AnandTech:

1600x1200, 4xAA

6600 GT SLI
Halo: 58.58 fps
3DMark05: 5698

6800 Ultra (single)
Halo: 57.21 fps
3DMark05: 5211

There are two benchmarks to prove it.
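
Those two figures can be put in quick perspective; here is a small sketch (Python, using only the numbers quoted above) of the percentage lead the 6600 GT SLI pair holds:

```python
# Percent lead of the 6600 GT SLI pair over a single 6800 Ultra,
# from the quoted 1600x1200 4xAA figures.
results = {
    "Halo (fps)": (58.58, 57.21),  # (6600 GT SLI, 6800 Ultra)
    "3DMark05":   (5698, 5211),
}

for name, (sli, ultra) in results.items():
    lead_pct = (sli / ultra - 1) * 100
    print(f"{name}: SLI ahead by {lead_pct:.1f}%")
```

The Halo lead works out to about 2.4%, while 3DMark05 shows closer to 9.3%, so "faster" here is a narrow win in the actual game test.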


Yes, the somewhat older Halo seems to get more out of the 6600GT SLI, and so does the synthetic benchmark 3DMark05.

But there are also some new numbers from other games on prerelease hardware.

Here are some newer games that stress the hardware features a little more than Halo. I listed the 6600GT SLI and 6800GT single numbers for quick comparison (ah yes, I could've just linked to the Anand article, but ...). So here we go:

6600GT / 6600GT SLI / 6800GT / 6800GT SLI

Doom3
1024x768 HQ 4xAA: 41.7 / 71.1 / 71.8 / 104.8
1280x1024 HQ 4xAA: 27.1 / 48.3 / 50.7 / 85
1600x1200 HQ 4xAA: 17.3 / 32 / 38 / 66.6

Counter-Strike: Source Visual Stress Test
1024x768 HQ 4xAA: 97.2 / 144 / 149.3 / 178.3
1280x1024 HQ 4xAA: 54.1 / 78.8 / 89.9 / 147
1600x1200 HQ 4xAA: 25.9 / 42 / 72.5 / 123.2

Far Cry 1.1
1024x768 HQ 4xAA: 58.3 / 87 / 94.5 / 129.5
1280x1024 HQ 4xAA: 38 / 58.1 / 68.6 / 117.1
1600x1200 HQ 4xAA: 22.2 / 36.4 / 51 / 105

Those numbers tell me that 6600GT SLI is about equal to one 6800GT at medium resolutions. The reason is most likely that these newer titles stress the video cards a little more, and the 6600GT's disadvantage of having fewer pipelines than the 6800GT starts to show. The performance gap widens as you increase the resolution while keeping the eye candy and 4xAA enabled.

We'll have to wait for final hardware/drivers and the tests that follow to get a more accurate overview, but those numbers shouldn't be too far from the truth.
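
Out of curiosity, here is a rough sketch (Python, using only the figures listed above) of how well the second 6600GT scales over a single card in these runs:

```python
# SLI scaling factor (6600 GT SLI over a single 6600 GT) from the
# numbers above, at 1024x768 / 1280x1024 / 1600x1200, all HQ 4xAA.
benchmarks = {
    "Doom3":      [(41.7, 71.1), (27.1, 48.3), (17.3, 32.0)],
    "CS:Source":  [(97.2, 144.0), (54.1, 78.8), (25.9, 42.0)],
    "Far Cry":    [(58.3, 87.0), (38.0, 58.1), (22.2, 36.4)],
}

for game, runs in benchmarks.items():
    factors = ", ".join(f"{sli / single:.2f}x" for single, sli in runs)
    print(f"{game}: {factors}")
```

Scaling improves as the resolution rises (roughly 1.45x to 1.85x here), which matches the observation that the gap widens at higher resolutions with the eye candy on.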


I'm not planning to buy SLI myself. If a customer demands it, maybe, but a single 6800GT looks like a cheaper overall option once you factor in the cost of the motherboard and everything else.

I can see SLI as an easy upgrade option for someone with money: go with a 6800 Ultra now, and in 6 months, when you're really sure you want better performance, add a second one for a very nice performance jump.

...two 6600 GTs are faster than one 6800 Ultra and about $140 cheaper…

Actually two 6600s are slower than a single GT. Also the GT is cheaper.

Yeah, but other than that....

-- Rick

I've read that it's possible to run two cards in non-SLI mode and therefore have four monitors running from two cards.

What I'm really curious to know is if it will be possible to run a video card in one slot and a 4X PCIe Hardware RAID controller in the other slot?

I want to set up a RAID 5 system for my PC. I've already got 4 HDDs of files I don't want to lose and I'll need to buy more disks again soon; RAID 0+1 looks like a pretty expensive option in my case (even if it is free on the motherboard).

Write performance isn't such a big issue, as I'll have a separate (non-RAIDed) drive for my operating system and another for my temp files, swap space, and other frequently written-to data.

Has anyone heard anything along these lines?

No offense, but you clearly don't know how a computer works, and on top of that, how PCIe works. First of all, you commonly hear about PCIe lanes. That's because in the standard, EACH lane has a dedicated 250Mb/s (500Mbits/s full duplex (upstream and downstream combined)). A PCIe x1 slot has 1 lane, a PCIe x4 slot has 4 lanes, etc. So your 4X HD controller will have plenty of bandwidth for a whole bunch of drives (in fact a total of 2Gb/s, which is enough to handle 13 individual SATA devices, with an extra 50Mb/s unused in the total allocated bandwidth for the x4 slot). Hope this answers your question.

On another subject, I am incredibly amused by how NVIDIA implemented SLI. SLI takes the 16 lanes for the first PCIe x16 slot and splits them across the two x16 slots, each getting 8 PCIe lanes when SLI is activated. Now, how the hell does that allow the graphics cards to even work? Well, first of all, the truth is graphics cards don't need the full x16 bandwidth; in fact, they don't need the current 8X AGP. AGP 4X will keep any of the cards happy. The reason for the extra bandwidth (that none of them use) is memory access. However, you can logically deduce that with up to 256MB of on-card RAM, they really don't need access to main memory; the bandwidth is actually used for texture/pixel loading. It's not a bad thing to "waste" all that bandwidth, it's just unnecessary, and if x16 ever does become needed for the most powerful card ever, then you'll have the bandwidth. Cool, huh?

No offense, but you clearly don't know how a computer works, and on top of that, how PCIe works.

Hi. No offense taken; I'm always happy to hear other people's opinions, but I think in this case you've been far too quick to judge me. I've been building computers for myself and others for 10 years, and I've done a lot of reading on PCI Express. I assure you I know how a computer, and PCI Express, work.

It looks like your first post, so maybe this is a troll, but if not... here goes...

First of all, you commonly hear about PCIe lanes. That's because in the standard, EACH lane has a dedicated 250Mb/s (500Mbits/s full duplex (upstream and downstream combined)).

Actually, it's 250Mbytes/second (500Mbytes/s full duplex). 250Mbps is slower than the maximum sustained transfer rate of a modern hard drive.

A PCIe x1 slot has 1 lane, a PCIe x4 slot has 4 lanes, etc. So your 4X HD controller will have plenty of bandwidth for a whole bunch of drives (in fact a total of 2Gb/s, which is enough to handle 13 individual SATA devices, with an extra 50Mb/s unused in the total allocated bandwidth for the x4 slot). Hope this answers your question.

I understand how PCI Express works with multiple serial channels (lanes). I don't understand why you think otherwise??? Unless you misinterpreted my post as thinking that I thought a PCIe x4 card has four SATA ports??? That would be quite silly. (You've assumed I must be stupid and you're so much smarter; assumptions like that can get you into a lot of trouble... pool sharks must love you :) )

The reason I'm so interested in x4 cards is that most of the hardware RAID controller manufacturers are looking at PCIe x4 cards, as PCIe x1 can potentially be bandwidth-limited with 4 drives. This is a (major marketing) problem, as the manufacturers like to be able to say there is plenty of bandwidth to go around... they also like to use the same layout for 4-, 8-, and 12-drive cards, making the PCIe x4 slot a logical choice. New Intel-based server motherboards usually come with PCIe x4 slots on board.
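
For anyone following the bandwidth back-and-forth, here is a small sketch (Python) of the PCIe 1.x arithmetic, using the corrected figure of 250 Mbytes/s per lane per direction; the ~60 MB/s sustained per-drive number is just an illustrative assumption for drives of that era:

```python
# PCIe 1.x bandwidth arithmetic: 2.5 GT/s per lane with 8b/10b
# encoding gives 250 MB/s of payload per direction per lane.
GT_PER_LANE = 2.5e9      # raw transfers per second on one lane
ENCODING = 8 / 10        # 8b/10b: 8 payload bits per 10 line bits
BITS_PER_BYTE = 8

per_lane = GT_PER_LANE * ENCODING / BITS_PER_BYTE   # bytes/s, one direction
x4_slot = 4 * per_lane

print(f"per lane: {per_lane / 1e6:.0f} MB/s each way")
print(f"x4 slot:  {x4_slot / 1e6:.0f} MB/s each way")

# Assumption: ~60 MB/s sustained per circa-2004 drive (illustrative only).
drives_before_saturation = x4_slot / 60e6
print(f"~{drives_before_saturation:.0f} drives before an x4 link saturates")
```

That works out to 250 MB/s per lane and 1000 MB/s per direction for an x4 slot, which is why an x4 RAID card has headroom for quite a few spindles.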

On another subject, I am incredibly amused by how NVIDIA implemented SLI. SLI takes the 16 lanes for the first PCIe x16 slot and splits them across the two x16 slots, each getting 8 PCIe lanes when SLI is activated. Now, how the hell does that allow the graphics cards to even work?

You've asked a question you already think you know the answer to? And now you're going to answer your own question???

I'm not sure why you find this amusing; as you yourself point out below, x4 is plenty for current video cards, so NVIDIA's implementation makes perfect sense.

Well, first of all, the truth is graphics cards don't need the full x16 bandwidth; in fact, they don't need the current 8X AGP. AGP 4X will keep any of the cards happy. The reason for the extra bandwidth (that none of them use) is memory access. However, you can logically deduce that with up to 256MB of on-card RAM, they really don't need access to main memory; the bandwidth is actually used for texture/pixel loading. It's not a bad thing to "waste" all that bandwidth, it's just unnecessary, and if x16 ever does become needed for the most powerful card ever, then you'll have the bandwidth. Cool, huh?

I'm sure most people reading this forum would have known all that.

If you're interested, though, check out these links about using a GPU as an audio effects processor. It will potentially need the bandwidth provided by PCIe (and particularly its full-duplex nature).

Older Theory Article:

http://www-sop.inria.fr/reves/publications...4/posterGP2.pdf.

Modern Practical example:

http://www.bionicfx.com/

You didn't actually answer my question, though, which was: is it possible to run a PCIe x4 RAID controller in the 2nd PCIe x16 slot on an nVidia SLI-based motherboard whilst using a PCIe x16 video card (electrically running as x8) in the 1st PCIe x16 slot? I'll be starting out with 3 (or 4, depending on budget) 400GB drives in a RAID 5 array for redundant storage, adding extra drives as my storage needs increase (and my budget allows, up to a maximum of 8 drives). Currently, to do this I have to buy a server motherboard that supports PCI-X (as opposed to PCIe), or go with an Intel Xeon board and give up having a PCIe video card. I'm sure someone will suggest software RAID, but I still don't trust it (especially not software RAID 5), and besides, I dual-boot between Linux and Windows and I want my data drive available from both operating systems.
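
For what it's worth, here is a tiny sketch (Python) of the usable-capacity arithmetic behind that plan; `raid5_usable_gb` is just an illustrative helper, not anything from a real controller API:

```python
# Usable capacity under RAID 5: one drive's worth of space holds parity,
# so usable = (n - 1) * drive_size.
def raid5_usable_gb(n_drives: int, drive_gb: int) -> int:
    assert n_drives >= 3, "RAID 5 needs at least three drives"
    return (n_drives - 1) * drive_gb

# The starting sets mentioned above, plus the stated 8-drive maximum.
for n in (3, 4, 8):
    print(f"{n} x 400 GB -> {raid5_usable_gb(n, 400)} GB usable")

# For contrast, RAID 0+1 yields only n/2 drives of usable space:
# 4 x 400 GB gives 800 GB there, versus 1200 GB under RAID 5.
```

That capacity gap is exactly why RAID 0+1 looks expensive for bulk storage compared to RAID 5.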

I want to be able to run a PCIe RAID card (and it looks like those are going to be PCIe x4 cards) and a PCIe video card (which are mostly PCIe x16, physically, for marketing reasons) on the same motherboard.

Would you care to have another go at answering my question, Snyper (or anyone else, for that matter)? Eagerly awaiting any knowledge/news you guys might have on this topic.


I just had a thought. If HP servers go dual Opteron with PCI Express, I might as well grab a machine from work and write off two 6900 Golds (or whatever).

Then hopefully Neverwinter Nights will run faster than 24fps (geez, even Doom3 gives me better performance).

... answer my question, though, which was: is it possible to run a PCIe x4 RAID controller in the 2nd PCIe x16 slot on an nVidia SLI-based motherboard whilst using a PCIe x16 video card (electrically running as x8) in the 1st PCIe x16 slot?

I don't see any reason why not. We know the card would work in a normal x16 or x8 slot. According to Anand's article, the nForce4 motherboards will use a little card to switch 8 of the lanes between the two slots. The way it sounds to me, there are no special tricks needed to make the PCIe part of SLI work. That is, I think this is all still within spec for PCIe, so any card plugged into the second physical x16 slot would act just as if it had been plugged into an x8 slot.

I can't say that I've heard of anybody that has tried it though...

-JoeTD

That is, I think this is all still within spec for PCIe, so any card plugged into the second physical x16 slot would act just as if it had been plugged into an x8 slot.

Thanks, I think it will work as well. I hope someone gives this a try. I like to play games, so a PCIe video card would be nice in my next PC, but RAID 5 is much more important to me for safely storing all my data.

What I'm really curious to know is if it will be possible to run a video card in one slot and a 4X PCIe Hardware RAID controller in the other slot?

I'm also wondering if this is possible.


I'm waiting on some SLI action... unfortunately, it seems there are only going to be around 100 Ultra Extreme PCI-E cards brought into AU, so it's going to be an expensive bitch of a project.

