
Possible Radeon 390X / 390 and 380X Spec / Benchmark (do not hotlink images!!!!!!)

Also, if it sips power in such a frugal way, why has it been suggested that it will use a hydro cooling solution, as a lot of previous leaks have stated?

Design it with watercooling from the outset/ground up, unlike any other GPU thus far (to my knowledge), and who knows what the benefits are across the board?
 
Apple left the sapphire glass people out to dry, maybe AMD will do the same with Asetek? :D

Of course that cooler was just a rumour. I do not think the details of the deal were known beyond "major contract", and I'm not sure it was even confirmed that AMD were the other party.
 
Design it with watercooling from the outset/ground up, unlike any other GPU thus far (to my knowledge), and who knows what the benefits are across the board?

Just how is someone supposed to run four in QuadFire then, if each has its own 120mm rad? And don't forget whatever cooling you will have on the CPU.

Even with a case like this you would have trouble fitting them all in.

YOUR BASKET
1 x Lian Li PC-343B Cube HPTX Housing - Black: £284.99
Total: £284.99 (includes shipping).

 
No, as it's a 4-GPU benchmark, basically 4 Titans vs 4 GTX 680s. It's not completely accurate as clock speeds are different. I would say the Titans at boost clocks would be slightly ahead. I may be wrong about that, as I am not sure if Kaap is listing the Titans at normal or boost clocks, and the same for the GTX 690s.

Taking actual boost speeds into account, they are running at near enough the same clock speeds, which is just below 1200MHz for both of them.

And all 4 GPUs are most definitely in play on the 690s.:D
 
When not bottlenecked by the CPU/settings, going from GK104 to GK110 scales almost perfectly in line with the number of SMX modules used. In the example below with the GTX 690s and Titans it is 8 vs 14 SMX modules.



The above table is taken from the Heaven 4 1600p bench.

Using this setting is enough to remove any CPU bottleneck once you reach about 4.4ghz.
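The SMX arithmetic above works out as follows; a quick sketch in Python (the SMX counts are the well-known GK104/GK110 specs, and near-perfect scaling is the poster's claim, not a measurement):

```python
# Near-perfect SMX scaling, as claimed above: performance grows
# roughly in proportion to the number of SMX modules in use.
GK104_SMX = 8    # GTX 680 / one half of a GTX 690
GK110_SMX = 14   # original Titan (14 of 15 SMX enabled)

expected_scaling = GK110_SMX / GK104_SMX
print(f"Expected Titan vs 680 scaling: {expected_scaling:.2f}x")  # 1.75x
```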

The 690 you have listed there ^^^ is only running one GPU, i.e. it's a 680. Heaven is also very shader- and Nvidia-friendly; in a mixture of games it doesn't look like that. Unigine is pretty unique in that respect.

It does not matter if it is NV friendly as we are only comparing NV GPUs.

The table above is for 4 GPU setups as well so they are all in use.:D
 
Kaap, you're not suggesting that Unigine is the go-to for all-round gaming performance? ^^^^ You know better than that.
------------

The hydro cooler rumours started before Maxwell; there may have been a Hawaii XTX in the works which has since been dropped because of Maxwell.
 

No but it is a good guide to GPU performance.

What I am saying is it shows the difference in scaling between different NV GPUs very accurately.

Here is another one with the GTX 980s coming unstuck against the ancient Titans @4K this time.

2160p

4 GPUs

  1. Score 1759, GPU nvTitan @981/1788, CPU 3930k @4.8, Kaapstad Link
  2. Score 1702, GPU 980 @1472/1962, CPU 5960X @4.0, Kaapstad Link
  3. Score 1682, GPU 290X @1230/1500, CPU 4930k @4.8, Kaapstad Link
  4. Score 1382, GPU 290X @1000/1250, CPU 3970X @4.9, AMDMatt Link

The 980s have not run out of VRAM, they have run out of silicon @4K.:D

I have posted the above as a word of caution about upcoming next gen cards, what works @1080 with very high clockspeeds does not always work @4K with a very heavy workload.:D
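For what it's worth, the gaps in the score list above can be worked out directly; a quick sketch using only the numbers quoted in the list (no other data assumed):

```python
# Heaven 4 @ 2160p, 4-GPU scores from the list above.
scores = {
    "nvTitan @981": 1759,
    "980 @1472":    1702,
    "290X @1230":   1682,
    "290X @1000":   1382,
}

# Titans vs 980s: the old cards lead by only a few percent.
titan_lead = (scores["nvTitan @981"] - scores["980 @1472"]) / scores["980 @1472"] * 100
print(f"Titan lead over 980: {titan_lead:.1f}%")  # 3.3%

# The two 290X entries scale almost linearly with core clock:
score_ratio = scores["290X @1230"] / scores["290X @1000"]
clock_ratio = 1230 / 1000
print(f"290X score ratio {score_ratio:.2f} vs clock ratio {clock_ratio:.2f}")
```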
 

So you're saying that to judge its overall gaming performance we should ignore overall gaming performance and instead concentrate only on the one thing that isn't even a game, Unigine?

1x 780 Ti (full-fat GK110) vs 1x 770 (full-fat GK104)

+25% @ 1080p, +31% @ 1440p



 
!!!!!Breaking news!!!!!

The previously mentioned leak is accurate: it really is that fast while using only that tiny amount of power.

Introducing AMD's new Uber mode / low power mode switch. ;)

[image: 390x-power-switch.jpg]
 
So you're saying that to judge its overall gaming performance we should ignore overall gaming performance and instead concentrate only on the one thing that isn't even a game, Unigine?

Not at all.

What I am saying is that to judge a GPU's true performance we need to use tools that do not suffer from CPU bottlenecks or from poor multi-card scaling due to bad drivers (games suffer from both of these problems). Heaven 4 is one such tool, but there are a lot of others; some of them are games too, like TR at very high settings, which works well.
 
!!!!!Breaking news!!!!!

The previously mentioned leak is accurate: it really is that fast while using only that tiny amount of power.

Introducing AMD's new Uber mode / low power mode switch. ;)

[image: 390x-power-switch.jpg]

Somehow I doubt that using the BIOS from what is very likely a completely different GPU to reduce the power consumption of Hawaii will have the desired effect.
 
I'm confused... Samsung's 14nm is the same one that GF are using, right? The mobile low-power one that can't do big and fast? Same goes for 20nm too.

Nope, there is no mobile-only stuff; you can make any chip on any process. Almost every process has "great for mobile" slapped all over it, and seeing as mobile is the highest-volume market, that shouldn't be surprising. 28nm was a "mobile" process; they all are. What the heck is "mobile" anyway: a chip that runs at 2.5GHz, or a chip that runs at 1.5GHz? Clock range and voltage are more what a specific process is tuned to, not die size; transistors don't change characteristics based on the number of transistors in a chip or the die size. Overall power consumption and leakage increase with a bigger die, but not power or leakage per transistor.

The fastest-running and highest-power parts of any modern GPU are in the memory controller. GDDR5 works at significantly higher clock speeds than HBM anyway, i.e. the interface on a GDDR5 GPU has to have part of it running at the clock speed of the memory; with HBM this will be 1GHz.

Depending on the chip there will be a best process for it; maybe one tuned for high power will give 5% lower leakage, but it will also be bigger. There are trade-offs. There is effectively no chance of any version of 20nm offering worse power/performance/size/leakage characteristics than any 28nm process. You might lose 10% going for a process tuned for a specific range of speeds, but you're gaining 50% performance instead of 60% by moving from 28nm.

It's a non issue, however HBM can certainly be helpful in making the fastest/leakiest parts of the core less fast and less leaky.
 
Not at all.

What I am saying is that to judge a GPU's true performance we need to use tools that do not suffer from CPU bottlenecks or from poor multi-card scaling due to bad drivers (games suffer from both of these problems). Heaven 4 is one such tool, but there are a lot of others; some of them are games too, like TR at very high settings, which works well.

You're adding in far too many variables; you're actually making it far more likely to give a completely inaccurate picture by citing a 4-GPU SLI setup in a benchmark that pretty much exclusively scales with shaders.

You're introducing the likelihood of CPU bottlenecking, SLI scaling issues and abstract reasoning.

If we are to understand how they scale then we need to look at real-world scaling, not some narrow abstract concept.

One GPU against one GPU, so no likely CPU bottlenecking and no SLI issues, over a mixture of actual games.
 

I am actually taking out variables, as every game listed in those graphs you linked behaves differently.

I am saying that if we were to use something like Heaven 4, for example, we have the option of comparing multi-GPU setups if we want to, as they scale very well on both AMD and NV.

There are also no driver problems with either AMD or NV.

Where the fps are quite low at high resolution, this also removes another variable, as there is no CPU bottleneck.

There are other tools we can use as well; like I said earlier, some of these are games. Another one that is very good is SEV2 at high resolutions, and there are plenty more. Valley also works, but if using more than a single card you need to run @4K to remove the CPU bottleneck.
 

It's a best-case scenario, perhaps not even that; perhaps unique to Unigine, and possibly Futuremark.

I said shaders alone do not scale 100%; they don't, not even in Unigine. And I wasn't talking about Unigine, I was talking about real-world use.

If the GTX 980 Ti gets its +40% shaders over the GTX 980, it will no doubt scale 30 or 35% in Unigine, best-case scenario.
In about 80% of games that scaling is likely to be around 20% (as with GTX 770 to GTX 780 Ti scaling levels), given that most games are games as opposed to benchmarking tools and scale with shaders, ROPs and memory bandwidth.
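The scaling argument can be put into numbers; a rough sketch using the public CUDA-core counts for the two cards and the in-game percentages quoted earlier in the thread:

```python
# Public CUDA-core counts for the two cards compared earlier.
gtx770_shaders = 1536    # full-fat GK104
gtx780ti_shaders = 2880  # full-fat GK110

shader_increase = (gtx780ti_shaders / gtx770_shaders - 1) * 100
print(f"Shader increase: {shader_increase:.1f}%")  # 87.5%

# Observed in-game gains quoted above: +25% @1080p, +31% @1440p,
# i.e. well under half of what shader count alone would suggest.
for res, gain in [("1080p", 25), ("1440p", 31)]:
    print(f"{res}: +{gain}% observed, {gain / shader_increase:.0%} of the shader increase")
```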
 
1x 780 Ti (full-fat GK110) vs 1x 770 (full-fat GK104)

+25% @ 1080p, +31% @ 1440p

That isn't reflective of the percentage increase that the 780 Ti has over the 770. Maths fail.
 
If the leak holds true, either way you look at it I'm disappointed.

If it is 20nm, those performance results are dire.

If it's 28nm and circa 200W, gaining 10% over GM204, then I can honestly say I'd prefer a 300W monster battering a 980 by 30%+.

I know power efficiency has been the buzzword as of late, but at the end of the day, deep down, we all just want higher frame rates, not a few pence off the leccy bill each year. To me, efficiency in power vs performance is most relevant to the highest-performing part and what can be squeezed out of it.
 