Vega's *Heavyweight* display and computer, 2012 edition

Associate
OP
Joined
12 Dec 2010
Posts
1,837
Location
Washington D.C.
Working on the ambient side of the loop today. Setting up to test the 3960X and RIVE under water.


SANY0014-3.jpg




Mocking up components to test the routing of the liquid lines. 1/2" ID / 3/4" OD Norprene tubing is hard to bend and fit into tight places.

SANY0015-2.jpg




The Team Group 2400 9-11-11-28 RAM came in (4x 4GB).

SANY0017-2.jpg




Working on some of the supply/return valve systems.

SANY0018-1.jpg




Made a custom stand out of some old Ikea speaker stands.

SANY0011-4.jpg




Reservoir up top with a couple silver kill coils.

SANY0012-3.jpg




Liquid line routing. The open-ended valves that are currently shut will attach to the geothermal section of the cooling loop.

SANY0019-1.jpg




Testing out the loop and checking for leaks.

SANY0020-2.jpg




Getting rid of air in the system has been a huge PITA. I am going to have to come up with some sort of custom pump setup to force water through the loop and flush all the air out under pressure. The Iwaki RD-30 is a beast of a pump in a closed system, but if there is air in the lines it has a hard time getting going. The system has already taken a full gallon of distilled water and I ran out, so I wasn't able to fire the rig up. Tomorrow is another day.
 
Associate
OP
Joined
12 Dec 2010
Posts
1,837
Location
Washington D.C.
I'm sorry, but you are just completely nuts

I admire you for the dedication if nothing else, but you're still completely crazy :)

:D



I am thinking of re-arranging the cooling loop so that:

Branch #1 = CPU > X79 > VRM/MOSFET

Branch #2 = 680 > 680 > 2x RAM Sticks

Branch #3 = 680 > 680 > 2x RAM Sticks

I think the balance between those would be fairly close. The VRM/MOSFET coolers are really low restriction and would pretty much balance out the resistance of the two RAM sticks, so essentially it is one CPU block versus two GPU blocks to balance the resistance. Does anyone think the resistance balance would be way off with the above configuration? (It doesn't need to be perfect.)
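To put rough numbers on that, here's a minimal sketch of the flow split under a simple linear pressure-drop model (dP = R * Q). All of the restriction values are illustrative guesses of mine, not measured figures for the actual blocks:

```python
# Rough flow-split sanity check for the three proposed parallel branches.
# Assumes a linear pressure-drop model (dP = R * Q); all restriction
# values below are made-up relative units, not measured block data.

R_CPU, R_X79, R_VRM = 5.0, 2.0, 1.0   # branch 1 components (assumed)
R_GPU, R_RAM = 2.5, 1.0               # per GPU block / per RAM stick (assumed)

# Series components within a branch add up.
branches = {
    "CPU > X79 > VRM/MOSFET": R_CPU + R_X79 + R_VRM,
    "680 > 680 > 2x RAM (A)": 2 * R_GPU + 2 * R_RAM,
    "680 > 680 > 2x RAM (B)": 2 * R_GPU + 2 * R_RAM,
}

# Parallel branches see the same pressure drop, so each branch's flow
# is proportional to 1/R; normalize to get the split.
total = sum(1.0 / r for r in branches.values())
for name, r in branches.items():
    share = (1.0 / r) / total * 100
    print(f"{name}: R = {r:.1f}, flow share = {share:.1f}%")
```

With those guesses the split lands near even thirds, which matches the "doesn't need to be perfect" goal; swapping in real pressure-drop numbers for the actual blocks would firm it up.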
 
Associate
OP
Joined
12 Dec 2010
Posts
1,837
Location
Washington D.C.
Got my RAID 0 set up (boy, that was a nightmare on X79) and Win 7 installed. Games are downloading. Got three GTX 680's now. This is why I love nVidia:

FW900Surround.jpg


Even something as complicated as running three CRT's in portrait is a snap. Install the driver, hit configure Surround displays, bam: organize the screens and you're done. Even these early drivers work really well. Thankfully nVidia allows each card to use its own RAMDAC for each FW900, something AMD cannot do.


Setup is kind of a mess while I install and test stuff:

SANY0001-17.jpg



The 3960X at stock settings under Intel Burn Test (maximum) only reaches a max temp of 41°C on the cores using the ambient radiator. I used Thermaltake Chill Factor III this time around and it appears to be doing quite well.
 
Associate
OP
Joined
12 Dec 2010
Posts
1,837
Location
Washington D.C.
Expect PCI-E 3.0 vs 2.0 tests, plus tests of what it takes to reach 2GB of VRAM and what happens once that limit is hit. So far in BF3 the results are pretty bad news once memory usage reaches 2048MB! (Although it takes quite a bit to surpass 2GB of VRAM, even at extremely high resolutions and settings.) More to follow...

I love this new nVidia Surround. It keeps the desktop taskbar on the center monitor only, and when I maximize windows they maximize on the center screen only. Awesome features! With the simple registry edit, I've got all of the cards running at PCI-E 3.0. In the BIOS I can switch between 1.0/2.0/3.0 at will, so this will make for some nice tests.

Who wants to bet there will be appreciable differences on my setup? ;)
 
Man of Honour
Joined
18 Oct 2002
Posts
40,191
Do you have them set up in Eyefinity/Surround or separate?

Nothing that advanced. Just an extended desktop, as it's my work machine and as such games aren't installed *officially... ;)*

They are amazingly good monitors though; I've been using them for years now. Until someone comes up with an LCD/LED screen that trumps these, they will not be going anywhere.
 
Associate
OP
Joined
12 Dec 2010
Posts
1,837
Location
Washington D.C.
Nothing that advanced. Just an extended desktop, as it's my work machine and as such games aren't installed *officially... ;)*

They are amazingly good monitors though; I've been using them for years now. Until someone comes up with an LCD/LED screen that trumps these, they will not be going anywhere.

By the nature of the tech, LCD cannot surpass CRT. The only things that will be able to are OLED, possibly laser projectors, and Sony's direct-view Crystal LED.


On another note:

Well, I tried the registry fix to get my GTX 680's running at PCI-E 3.0 on my X79 motherboard, and with Surround mode enabled Windows fails to boot. So the article was right that there can be issues using that registry key.

So hopefully nVidia properly enables 3.0 in a driver, or I will have to go back to the launch drivers, which I think were 300.83?
 
Associate
OP
Joined
12 Dec 2010
Posts
1,837
Location
Washington D.C.
Well, the results of my PCI Express 2.0 versus 3.0 tests on my 4-way SLI GTX 680 FW900 Surround setup are in. The results are so incredible I had to start the tests over from scratch and run them multiple times for confirmation! :eek:

Test setup:

3960X @ 5.0 GHz (temporarily at this slower speed)
Asus Rampage IV Extreme with PCI-E slots running 16x/8x/8x/8x
(4) EVGA GTX 680's running 1191 MHz core, 3402 MHz memory
nVidia driver 301.10, with the PCI-E 3.0 registry adjustment turned on and off for each applicable test
GPU-Z 0.6.0

SANY0003-12.jpg



After PCI-E settings changed, confirmed with GPU-Z:

PCI-E20.gif


PCI-E30.gif
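
(For anyone who wants to script this check rather than eyeball GPU-Z: on reasonably recent drivers the negotiated link state can also be queried per GPU via nvidia-smi. A minimal sketch, assuming those query fields are available on your driver version:)

```python
# Query the current PCI-E link generation/width per GPU via nvidia-smi.
# Assumes nvidia-smi is on PATH and the driver exposes these query fields.
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,pcie.link.gen.current,pcie.link.width.current",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```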



All settings in the nVidia control panel, in-game, in the benchmarks, and in EVGA Precision are UNTOUCHED between benchmark runs. The only setting adjusted is PCI-E 2.0 vs 3.0, back and forth for confirmation (with reboots for the registry edit, obviously).


PCI-ETests.jpg



I kid you not, that is how much PCI-E 2.0 running at 16x/8x/8x/8x bottlenecks BF3 and Heaven 2.5 at these resolutions compared to PCI-E 3.0. I attribute this to the massive bandwidth being transferred over the PCI-E bus. We are talking 4-way SLI at up to 10 megapixels in alternate frame rendering. Entire frames at high FPS are being swapped, and PCI-E 2.0 falls on its face.

The interesting part was that while running PCI-E 2.0, GPU utilization dropped way down, as is typically seen when you are CPU limited. In this instance I am neither CPU limited nor GPU limited. We are really at a point now where you can be PCI-E limited unless you go PCI-E 3.0 8x (roughly equivalent to 16x PCI-E 2.0) or faster on all GPU's in the system. GPU utilization dropped into the ~50% range due to PCI-E 2.0 choking the cards to death. As soon as I enabled PCI-E 3.0, GPU utilization skyrocketed to 95+% on all cards. I was going to run more benchmarks and games, but the results are such blow-outs it seems pretty pointless to do more. This may interest those out there running these new PCI-E 3.0 GPU's who think they are CPU limited (below 95% GPU utilization) yet might actually have PCI-E bandwidth issues.

Down to the nitty gritty: if you run a single GPU, yes, a single 16x PCI-E 2.0 slot will be fine. When you start to run multiple GPU's and/or run these new cards at 8x speed, especially in Surround/Eyefinity, make sure to get PCI-E 3.0. ;)
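
For a sense of scale, here's a back-of-envelope sketch of the frame-traffic argument. The 4 bytes/pixel and one-copy-per-frame assumptions are simplifications of mine; real AFR traffic (textures, readbacks, protocol overhead) is messier:

```python
# Back-of-envelope: framebuffer traffic in Surround AFR vs. PCI-E link budget.
# Assumptions: ~10-megapixel desktop, 32-bit (4-byte) pixels, and each
# rendered frame crossing the bus once -- simplifications, not measured data.

PIXELS = 10e6          # ~10-megapixel Surround desktop
BYTES_PER_PIXEL = 4    # 32-bit framebuffer
FPS = 60               # assumed frame rate

frame_traffic_gbs = PIXELS * BYTES_PER_PIXEL * FPS / 1e9  # GB/s

# Effective per-lane rates after encoding overhead (see the encoding
# discussion later in the thread): 2.0 ~ 0.5 GB/s, 3.0 ~ 0.985 GB/s.
gen2_x8 = 0.5 * 8
gen3_x8 = 0.985 * 8

print(f"Frame traffic @ {FPS} fps: {frame_traffic_gbs:.1f} GB/s")
print(f"PCI-E 2.0 x8 budget: {gen2_x8:.1f} GB/s | 3.0 x8: {gen3_x8:.2f} GB/s")
```

Even this crude estimate puts display traffic alone at a meaningful slice of a 2.0 x8 link before textures and the rest of the workload are added, which is consistent with the utilization drop observed above.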
 
Caporegime
Joined
5 Sep 2010
Posts
25,572
Down to the nitty gritty: if you run a single GPU, yes, a single 16x PCI-E 2.0 slot will be fine. When you start to run multiple GPU's and/or run these new cards at 8x speed, especially in Surround/Eyefinity, make sure to get PCI-E 3.0. ;)

Interesting results.

You're obviously at the extreme end of the spectrum running 4 GTX 680's.

The vast majority of users running multiple GPU's will be using no more than two, so the question for them is whether that sort of difference exists in that scenario.

Perhaps you could take out 2 cards and run the same tests again.
 
Associate
OP
Joined
12 Dec 2010
Posts
1,837
Location
Washington D.C.
Do you think this would apply to 7970's in CrossFire?

Like I said, I am putting just about as much stress on the PCI-E bus as possible with my setup. How many 7970's you run, what chipset you use, what speed slots you're running, and what resolution/displays you're using will all factor in.

Interesting results.

You're obviously at the extreme end of the spectrum running 4 GTX 680's.

The vast majority of users running multiple GPU's will be using no more than two, so the question for them is whether that sort of difference exists in that scenario.

Perhaps you could take out 2 cards and run the same tests again.

It seems like the most popular request would be 3600x1200 @ 2-way SLI. That would run at 16x/8x, at both 2.0 and 3.0. I could just switch out the 4-way SLI bridge for the 2-way bridge and re-arrange the video cables. Are those the settings you guys want to see?

EDIT: Just remembered it is impossible to run three CRT's with fewer than three GTX 680's, since each card's RAMDAC can drive only one CRT.
 
Associate
OP
Joined
12 Dec 2010
Posts
1,837
Location
Washington D.C.
Hitting the VRAM limit, plus PCI-E 2.0 vs 3.0 battle video tests. Make sure to watch in 480P rather than lower (I forgot to set the camera back to 720P and I don't feel like recording everything all over again, lol). Sorry about the video quality, but it is still viewable.

http://www.youtube.com/watch?v=S0-x...DvjVQa1PpcFOpwHlB70YlOHjBhzZ8mtcWAKgkFdz3LBo=

http://www.youtube.com/watch?v=tkZz...DvjVQa1PpcFOpwHlB70YlOALsTk3pZxTxt4tbGt5H9Yk=
 
Associate
Joined
4 Jul 2009
Posts
1,008
Incredibly interesting! I am intrigued to know what it's like in multiplayer with those settings; no video necessary, but is VRAM an issue at all (Aero disabled) with HBAO on/off?

Thanks for this! It does seem to put a lot of VRAM myths to bed!
 
Associate
OP
Joined
12 Dec 2010
Posts
1,837
Location
Washington D.C.
Nice, glad to see someone else backing up my tests. Although the difference isn't as dramatic as mine due to the lower resolution and card count, those are still very impressive differences. :thumb:


10b499c8_DIFFERENCE.jpeg


From user: psikeiro.
 
Caporegime
Joined
26 Dec 2003
Posts
25,666
The difference here is Vega is *truly* hitting the VRAM limit with the insane resolution and settings he's running at. Most people in the GPU forum simply look at MSI Afterburner, see 1500/1500 usage, and then think they too have hit the limit, when in actual fact there's probably plenty of space available on their card once old data/textures are cleared out to make room.

Vega's cards have reached a point where everything in memory (and more) is required simply to display what's onscreen, so performance is tanking as expected while data is constantly streamed. All of the VRAM fearmongers in the GPU forum claiming 1GB not to be enough have yet to show the same thing happening at normal resolutions (1080P etc.).

PCI-E 3.0 has less overhead and is more efficient than PCI-E 2.0, so I think it was to be expected that it would be a bit faster overall even at lower resolutions.

The PCIe 2.0 bit rate is specified at 5GT/s, but with the 20 percent overhead of the 8b/10b encoding scheme, the delivered bandwidth is actually 4Gbps per lane. PCIe 3.0 removes the requirement for 8b/10b encoding and instead uses a more efficient 128b/130b encoding scheme at 8GT/s.
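
Putting numbers to that, a quick worked calculation of per-lane and per-slot bandwidth under the two encoding schemes (straight arithmetic from the quoted rates, nothing else assumed):

```python
# Effective bandwidth after encoding overhead, per direction.
gen2_lane = 5e9 * (8 / 10)      # PCI-E 2.0: 5 GT/s, 8b/10b   -> 4.0 Gbps/lane
gen3_lane = 8e9 * (128 / 130)   # PCI-E 3.0: 8 GT/s, 128b/130b -> ~7.88 Gbps/lane

for lanes in (8, 16):
    g2 = gen2_lane * lanes / 8e9   # convert bits/s to GB/s
    g3 = gen3_lane * lanes / 8e9
    print(f"x{lanes}: 2.0 = {g2:.1f} GB/s, 3.0 = {g3:.2f} GB/s")
```

So a 3.0 x8 slot roughly matches a 2.0 x16 slot, which lines up with Vega's rule of thumb above about needing 3.0 8x (16x PCI-E 2.0) or faster per card.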
 