Vega's *Heavyweight* display and computer; edition 2012

Got bored waiting for my third 680 to arrive for my 3x FW900 setup, so I tested a single FW900 at 2560x1600 with all settings maxed in each applicable game.


580vs6802560x1600.jpg



The most VRAM use I saw was 1930 MB in BF3, 1971 MB in Crysis 2, and 2028 MB in Skyrim, all with no slow-downs on the 680s. The other games were well under 2GB of usage.

I am not sure if the limit of the 8x/8x PCI-E 2.0 slots was hurting the 680s more than the 3GB 580s, but the 680s in SLI only ended up an average of 24% faster than the 580s. A bit lower than I was expecting.

There was an anomaly I was able to reproduce: over 100% scaling in Skyrim with 680 SLI. If you include that result, SLI scaling is a perfect 100%. If you exclude it and go off the other four games, the SLI scaling is 90%. Still pretty darn good. As for the reason Skyrim scaled so incredibly in SLI, it might be a driver issue with a single card, as in both instances the GPUs were at max utilization.
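
For anyone wondering how I'm figuring those percentages, here is a minimal sketch; the fps numbers below are made up for illustration, not my actual benchmark results:

```python
# SLI scaling: 100% means SLI exactly doubles the single-card fps.
# The fps figures here are hypothetical examples, not my test data.
def sli_scaling_pct(single_fps: float, sli_fps: float) -> float:
    """Percent gain of SLI over a single card."""
    return (sli_fps / single_fps - 1.0) * 100.0

print(sli_scaling_pct(40.0, 76.0))  # ~90  -> typical of the other games
print(sli_scaling_pct(40.0, 82.0))  # ~105 -> the Skyrim-style anomaly, most
                                    # likely the single card underperforming
```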
 
Hmm, since I am at a decision point with a new motherboard, what would you guys do? This MB will be for my full-on 4-way GTX 680 setup.

Just go ahead and purchase an X79 board now, like the Rampage IV Extreme, MSI Big Bang XPower II, or ASRock Fatal1ty X79, that can do native PCI-E 3.0 at 16x/8x/8x/8x with a 3820/3930K/3960X? Or get a temporary Z68 MB to run 3-way GTX 680 and then purchase the Gigabyte Z77 Sniper 3 when it launches with a 3770K, which with a PLX chip can do 8x/8x/8x/8x? I wonder if that PLX chip will lower performance compared to the native speeds of X79.

I am kinda wary about the SB-E chips barely being capable of 5GHz even under chilled water, and then regretting the purchase if IB launches and can do something like 5.5GHz under chilled water. Thoughts?

Or I could be really crazy and wait for the ASRock Extreme11 X79 that has two PLX chips for 16x/16x/16x/16x, but that won't release for another couple of months, and I'm not sure how much help that super bandwidth will be over native X79 PCI-E 3.0 speeds.
 
Go for the Asus X79 and 3930K. You will be sure to have a very solid multi-card setup then; the four 16x slots on the ASRock will do nothing for you apart from trouble. :S I think, anyway. On the very first Z68 Extreme4 the PLX PEX8608 chip gave me problems with multiple 5850s. Well, that's what I put the blame on, as my X58 board didn't have the same faults. Just like the NF200 chip did on the Extreme7: it would run fine on a single screen but not in Eyefinity.

I see you have the Extreme IV. How do you like it?

I was just reading on Newegg and the average review is only 3 out of 5 eggs (stars). That seems kinda low for a flagship MB.
 
5th GTX 680 inbound (mission-critical apps like Surround BF3 need a backup replacement in case of failure). Couldn't let the 4-way SLI grid be down for too long. :D
 
Sweet, got a Rampage IV Extreme and a nice-clocking 3960X inbound, and the other three GTX 680s are in the mail. Now I just need to find some of that 2666 MHz DDR3 in 4x 4GB. Or do you guys think 2400 MHz will suffice?

I might have to re-think my cooling loop on the CPU side to give it more breathing room now. ;)
 
Out of curiosity, how come you chose this picture?

I live in that town o_0.

Loving what you have done so far!

You live in that town, seriously?

It was my attempt at a picture puzzle: ivy on the house with the bridge in front of it (Ivy Bridge, lol). I like the English landscape; on my last trip we visited Warwick Castle and it was pretty cool.
 
I do indeed. It's a little market town in North Wales called Llanrwst.

The little house is called Tu Hwnt i'r Bont.

Nice. I love that about Europe. You could have taken a picture of that house two hundred years ago and it would have looked the same. ;)

Where I live this is considered an old building:

walmart_supercenter.jpg
:eek:




Well I broke down and have four of these inbound:




I can't help it, EK water blocks draw me in like crack to a crack-whore! :D

I am going to hold off on the Rampage IV Extreme motherboard VRM and chipset liquid blocks and the four RAM blocks until I am sure that is the route I want to take for my permanent setup.
 
Hi Vega, I've just done some benchmarks with 4 and 8 sticks of memory. Well, I'm surprised, but 8 sticks have improved my fps.

4 sticks with stock cards at 1000 MHz gets 82 fps average in the DiRT benchmark @ 5760x1200, max settings.

8 sticks gets 89.9 fps.

So if you're going X79, best get 8 sticks. This was at 1600 MHz, so I'm going to clock the cards and RAM tomorrow and see what she does :)

Something doesn't sound right there! I take it you went from 16GB to the 32GB in your sig? I bet if you load up just Windows and run the benchmark under the performance monitor, you won't use more than 6GB of RAM total. Increasing a very large amount of RAM to an absurd amount of RAM shouldn't affect performance when size is not the limiting factor.

Are you sure the speed of the RAM didn't change? Sounds like there are other factors at play. ;)

Usually, the fewer RAM sticks, the easier it is on the processor's memory controller and the higher the RAM overclock. Did you previously have the four RAM sticks installed in all of the red slots? That would give full quad-channel speed. Installing four more sticks in the four black slots would keep the same quad-channel speed and just increase the total memory amount.
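
A quick back-of-envelope sketch of why I'd expect no bandwidth change (these are textbook DDR3 figures, not your test results):

```python
# Theoretical peak memory bandwidth depends on channel count and data rate,
# not on how many DIMMs populate those channels.
def ddr3_peak_gbs(channels: int, mts: int, bytes_per_channel: int = 8) -> float:
    """Peak GB/s = channels x mega-transfers/s x 8 bytes per 64-bit channel."""
    return channels * mts * 1e6 * bytes_per_channel / 1e9

# X79 quad channel at DDR3-1600: 4 sticks (1 per channel) and 8 sticks
# (2 per channel) both populate all 4 channels, so the peak is identical.
print(ddr3_peak_gbs(4, 1600))  # 51.2 GB/s either way
```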
 
Working on the ambient side of the loop today. Setting up to test the 3960X and RIVE under water.


SANY0014-3.jpg




Mocking up components to test routing of liquid lines. 1/2" ID - 3/4" OD Norprene tubing is hard to bend and fit in tight places.

SANY0015-2.jpg




The Team Group 2400 9-11-11-28 RAM came in (4x 4GB).

SANY0017-2.jpg




Working on some of the supply/return valve systems.

SANY0018-1.jpg




Made a custom stand out of some old Ikea speaker stands.

SANY0011-4.jpg




Reservoir up top with a couple silver kill coils.

SANY0012-3.jpg




Liquid line routing. The open-ended valves that are shut will attach to the geothermal section of the cooling loop.

SANY0019-1.jpg




Testing out the loop and checking for leaks.

SANY0020-2.jpg




Getting rid of air in the system has been a huge PITA. I am going to have to come up with some sort of custom pump setup to force water through the system and flush all the air out under pressure. The Iwaki RD-30 is a beast of a pump in a closed system, but if there is some air in the lines it has a hard time getting going. The system already used a gallon of distilled water and I ran out, so I wasn't able to fire the rig up. Tomorrow is another day.
 
I'm sorry, but you are just completely nuts.

I admire you for the dedication if nothing else, but you're still completely crazy :)

:D



I am thinking of re-arranging the cooling loop so that:

Branch #1 = CPU > X79 > VRM/MOSFET

Branch #2 = 680 > 680 > 2x RAM Sticks

Branch #3 = 680 > 680 > 2x RAM Sticks

I think the balance between those would be fairly close. The VRM/MOSFET coolers are really low restriction and would pretty much balance out the resistance of the 2x RAM sticks, so essentially it is one CPU block versus two GPU blocks to balance. Does anyone think the resistance balance would be way off in the above configuration? (It doesn't need to be perfect.)
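
Here's a rough sanity check of the split, using a simple quadratic pressure-drop model with made-up relative restriction values (not measured figures for these specific blocks):

```python
# Rough parallel-flow sketch, assuming dP = R * Q^2 per branch. The R values
# are invented relative restrictions, purely for illustration.
from math import sqrt

R = {"cpu": 1.0, "x79": 0.3, "vrm": 0.2,  # branch 1 blocks
     "gpu": 0.5, "ram": 0.15}             # branch 2/3 blocks

branches = {
    "CPU > X79 > VRM":      R["cpu"] + R["x79"] + R["vrm"],  # series adds
    "680 > 680 > 2x RAM a": 2 * R["gpu"] + 2 * R["ram"],
    "680 > 680 > 2x RAM b": 2 * R["gpu"] + 2 * R["ram"],
}

# Parallel branches see the same pressure drop, so Q_i ~ 1/sqrt(R_i).
flows = {name: sqrt(1.0 / r) for name, r in branches.items()}
total = sum(flows.values())
for name, q in flows.items():
    print(f"{name}: {100 * q / total:.0f}% of total flow")  # ~32/34/34
```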
 
Got my RAID 0 set up (boy, that was a nightmare on X79) and Win 7 installed. Games are downloading. Got three GTX 680s now. This is why I love nVidia:

FW900Surround.jpg


Even something as complicated as running three CRTs in portrait is a snap. Install the driver, hit configure Surround displays, bam: organize the screens and you're done. Even these early drivers work really well. Thankfully nVidia allows each card to use its own RAMDAC for each FW900, something AMD cannot do.


The setup is kind of a mess while I install and test stuff:

SANY0001-17.jpg



The 3960X at stock settings under maximum Intel Burn Test load only reaches a max core temp of 41°C using the ambient radiator. I used Thermaltake Chill Factor III paste this time around and it appears to be doing quite well.
 
Expect PCI-E 3.0 vs. 2.0 tests, tests of what is required to reach 2GB of VRAM and what happens when that limit is hit, etc. So far in BF3 the results are pretty bad news once memory use reaches 2048 MB! (Although it takes quite a bit to surpass 2GB of VRAM, even at extremely high resolutions and settings.) More to follow...
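
For a feel of where that memory goes, here is a crude render-target estimate; the resolution, MSAA level, and buffer count are hypothetical, and it ignores textures, geometry, and driver overhead entirely:

```python
# Crude framebuffer math, illustrative only: color (4 B/px) plus
# depth/stencil (4 B/px), times MSAA samples, across buffered frames.
def render_targets_mb(width: int, height: int, msaa: int = 4,
                      buffers: int = 3) -> float:
    return width * height * (4 + 4) * msaa * buffers / 2**20

# ~10 MP portrait Surround (3x 1440x2304 FW900s) with 4x MSAA:
print(f"{render_targets_mb(4320, 2304):.0f} MB in render targets alone")  # ~911
```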

I love this new nVidia Surround. It keeps the desktop taskbar only on the center monitor, and when I maximize windows they only maximize on the center screen. Awesome features! With the simple registry edit, I've got all of the cards running at PCI-E 3.0. In the BIOS I can switch between 1.0/2.0/3.0 at will, so this will make for some nice tests.

Who wants to bet there will be appreciable differences on my setup? ;)
 
Nothing that advanced. Just an extended desktop, as it's my work machine and as such games aren't installed *officially... ;)*

They are amazingly good monitors though; I've been using them for years now. Until someone comes up with an LCD/LED screen that trumps these, they will not be going anywhere.

By the nature of the tech, LCD cannot surpass CRT. The only things that will be able to are OLED, possibly laser projectors, and Sony's direct-view Crystal LED.


On another note:

Well, I tried the registry fix to get my GTX 680s running at PCI-E 3.0 with my X79 MB, and in Surround mode Windows fails to boot. So the article was right that there can be issues using that registry key.

So hopefully nVidia properly enables 3.0, or I will have to go back to the launch drivers, which I think were 300.83?
 
Well, the results of my PCI Express 2.0 versus 3.0 tests on my 4-way SLI GTX 680 FW900 Surround setup are in. The results are so incredible I had to start the tests over from scratch and run them multiple times for confirmation! :eek:

Test setup:

3960X @ 5.0 GHz (temporary slow speed)
Asus Rampage IV Extreme with PCI-E slots running 16x/8x/8x/8x
(4) EVGA GTX 680s running 1191 MHz core, 3402 MHz memory
nVidia driver 301.10 with the PCI-E 3.0 registry adjustment turned on and off for each applicable test
GPU-Z 0.6.0

SANY0003-12.jpg



After the PCI-E setting was changed, it was confirmed with GPU-Z:

PCI-E20.gif


PCI-E30.gif



All settings in the nVidia control panel, in-game, in the benchmarks, and in EVGA Precision are UNTOUCHED between benchmark runs. The only setting adjusted is PCI-E 2.0 to 3.0, back and forth for confirmation (with reboots, obviously, for the registry edit).


PCI-ETests.jpg



I kid you not, that is how much PCI-E 2.0 running at 16x/8x/8x/8x bottlenecks BF3 and Heaven 2.5 at these resolutions versus PCI-E 3.0. I attribute this to the massive bandwidth being transferred over the PCI-E bus. We are talking 4-way SLI at up to 10 megapixels in alternate frame rendering. Entire frames at high FPS are being swapped, and PCI-E 2.0 falls on its face.
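
To put rough numbers on that claim (these are textbook link rates and an assumed frame rate, not measurements from my runs):

```python
# In AFR, rendered frames have to cross the PCI-E bus; at ~10 MP that
# traffic alone eats a big chunk of a PCI-E 2.0 x8 link.
LANE_GBS = {"2.0": 0.500,   # 5 GT/s with 8b/10b encoding
            "3.0": 0.985}   # 8 GT/s with 128b/130b encoding

frame_gb = 10e6 * 4 / 1e9   # ~10 MP at 4 bytes/pixel ~= 0.04 GB per frame
fps = 60                    # assumed frame rate for illustration
traffic = frame_gb * fps    # ~2.4 GB/s of raw frame data alone

for gen, per_lane in LANE_GBS.items():
    link = 8 * per_lane     # the x8 slots in a 16x/8x/8x/8x layout
    print(f"PCI-E {gen} x8: {link:.1f} GB/s link vs {traffic:.1f} GB/s frames")
# 2.0 x8 = 4.0 GB/s: frame swaps plus normal draw traffic can saturate it.
# 3.0 x8 = 7.9 GB/s: roughly the same headroom as a 2.0 x16 slot.
```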

The interesting part was that while running PCI-E 2.0, GPU utilization dropped way down, as would typically be seen when you are "CPU" limited. In this instance I am neither CPU limited nor GPU limited. We are really at a point now where you can be PCI-E limited unless you go PCI-E 3.0 8x (equivalent to PCI-E 2.0 16x) or faster on all GPUs in the system. GPU utilization dropped into the ~50% range due to PCI-E 2.0 choking the cards to death. As soon as I enabled PCI-E 3.0, GPU utilization skyrocketed to 95+% on all cards. I was going to run more benchmarks and games, but the results are such blow-outs it seems pretty pointless to do any more. This may interest those out there running these new PCI-E 3.0 GPUs who think they are CPU limited (below 95% GPU utilization) yet might actually have PCI-E bandwidth issues.

Down to the nitty-gritty: if you run a single GPU, yes, a single 16x PCI-E 2.0 slot will be fine. When you start to run multiple GPUs and/or run these new cards at 8x speed, especially in Surround/Eyefinity, make sure to get PCI-E 3.0. ;)
 
Do you think this would apply to 7970s in CrossFire?

Like I said, I am being just about as stressful as possible on the PCI-E bus with my setup. How many 7970s you run, what chipset you use, what speed slots you run them in, and what resolution/displays you use will all factor in.

Interesting results.

You're obviously at the extreme end of the spectrum running 4 GTX 680s.

The vast majority of users running multiple GPUs will be using no more than 2, so the question for them is whether that sort of difference exists in that scenario.

Perhaps you could take out 2 cards and run the same tests again.

It seems like the most popular request would be 3600x1200 with 2-way SLI. That would run at 16x/8x at both 2.0 and 3.0. I could just switch out the 4-way SLI bridge for the 2-way bridge and re-arrange the video cables. Are those the settings you guys want to see?

EDIT: Just remembered it is impossible to run three CRTs with fewer than 3 GTX 680s due to RAMDAC output limitations.
 