Possible Radeon 390X / 390 and 380X Spec / Benchmark (do not hotlink images!!!!!!)

If the leak holds true then either way you look at it I'm disappointed.

IF it is 20nm, those performance results are dire.

IF it's 28nm and circa 200W, gaining 10% over GM204, then I can honestly say I'd prefer a 300W monster battering a 980 by 30%+.

I know power efficiency has been the buzzword of late, but at the end of the day, deep down, we all just want higher frame rates, not a few pence off the leccy bill a year. To me, power efficiency is most relevant to the highest-performing part and what can be squeezed out of it.
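For a rough sense of the money involved, here's a back-of-envelope sketch; the hours played and the electricity tariff below are assumptions for illustration, not figures from the leak:

```python
# Annual electricity cost difference between a ~200W card and a
# "300W monster". Hours/day and tariff are assumed illustrative values.

hours_per_day = 2             # assumed average gaming time
days_per_year = 365
tariff_gbp_per_kwh = 0.14     # assumed UK tariff, roughly 2014 prices

extra_watts = 300 - 200
extra_kwh = extra_watts / 1000 * hours_per_day * days_per_year
extra_cost_gbp = extra_kwh * tariff_gbp_per_kwh

print(f"Extra energy: {extra_kwh:.0f} kWh/year")      # ~73 kWh
print(f"Extra cost:   ~£{extra_cost_gbp:.2f}/year")   # ~£10
```

Under those assumptions the difference really is on the order of ten pounds a year, not something that should decide a flagship purchase.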

Basically, performance matters more to us, and if you have a decent PSU then power usage is well down the list. The thing is, it looks like Nvidia are cashing in on the 980/970, which performance-wise is pretty disappointing. People seem to be lapping up small gains and opening their wallets, so I would imagine it's an open invitation for this to be the way things go in the future.

When I was big into gaming it was all about the performance upgrade and how much it was going to cost me. I've just ordered a 290 PCS+ OC, which should feel like a decent upgrade from my 6870. This is the kind of upgrade that was expected every couple of years at most, yet I have waited a good bit longer this time. The kind of money being spent these days is crazy for a 20% performance boost. The 290 should feel like money well spent when I play the latest games.
 
It's the 380X. And an engineering sample (ES) at that, without drivers or even final silicon.

Wish I hadn't posted it now; people are just jumping on the hate train / assuming it's the 390X without even reading.
 
True. If it is indeed the mid-to-high-range card then it's a good step forward. I have seen other sites repeating the info but using the 390 name instead of the 380, which could be muddling what has been said.
 
The more you post the more it shows you have no clue what you're talking about.

I'll give you a hint: for three generations Nvidia had huge problems with yields on 500mm^2+ cores built on new processes, while the entire rest of the industry regarded making big cores on a new process as a bad idea. After the 480 fiasco Nvidia themselves finally cottoned on and started 28nm with a ~300mm^2 core. 20nm has even lower yields, is even more expensive, and is even slower at producing chips, which makes a large chip even less viable on 20nm than it was on 28nm.

You won't see GK110- or Hawaii-sized cores among the first 20nm chips (16/14nm maybe, because that's really 20nm with FinFET, though it's still quite likely we won't see a big core straight off even then). This is a given: the entire industry stopped doing huge dies on new, low-yield processes 6+ years ago, and even Nvidia stopped 3 years ago. It's not going to happen, and this is widely known.

The first 20nm chips will be your 7970s/GTX 680s: dies in the 250-350mm^2 range, the midrange cards of the next generation, not the high end.

Next gen, AMD's midrange is going to be "more" midrange than last gen; 350mm^2 is too big to be really profitable in the midrange, and 250-300mm^2 makes a lot more sense. The only issue is that 20nm has nowhere near the power reduction of a typical new process node; it's 20-25% off ideal (which will be clawed back by FinFET), so your normal 150W midrange part becomes a 200W part.
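To make the yield argument concrete, here's a minimal sketch using a simple Poisson defect model; the defect density and die sizes are assumed illustrative values, not real foundry figures:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude candidate count: wafer area over die area,
    minus an edge-loss correction term."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

d0 = 0.5  # assumed defects/cm^2 on an immature process
for area in (300, 550):  # ~GK104-class vs ~GK110-class die
    candidates = dies_per_wafer(area)
    y = poisson_yield(area, d0)
    print(f"{area}mm^2: {candidates} candidates, "
          f"{y:.0%} yield, ~{candidates * y:.0f} good dies/wafer")
```

With these assumptions the ~300mm^2 die gets you roughly 44 good dies per wafer against about 6 for the ~550mm^2 die, which is the whole economic argument in two lines of output.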
 
It's a best-case scenario, perhaps not even that; perhaps unique to Unigine and possibly Futuremark.

I said shaders alone do not scale 100%, and they don't, not even in Unigine. And I wasn't talking about Unigine; I was talking about real-world use.

If the GTX 980 Ti gets its +40% shaders over the GTX 980 it will no doubt scale 30 or 35% in Unigine as a best-case scenario.
In about 80% of games that scaling is likely to be around 20% (similar to GTX 770 to GTX 780 Ti scaling levels), given that most games are games, as opposed to benchmarking tools, and scale with shaders, ROPs and memory bandwidth.
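That reasoning can be written down as a simple Amdahl-style estimate; the shader-bound fractions below are assumptions for illustration, not measured numbers:

```python
def fps_gain(shader_fraction, shader_uplift=0.40):
    """Amdahl-style estimate: only the shader-bound fraction of
    frame time speeds up with the extra shaders."""
    new_time = (1 - shader_fraction) + shader_fraction / (1 + shader_uplift)
    return 1 / new_time - 1

for f in (1.0, 0.9, 0.6):
    print(f"{f:.0%} shader-bound -> ~{fps_gain(f):.0%} faster")
# 100% -> 40% (ideal), 90% -> ~35% (Unigine-like), 60% -> ~21% (typical game)
```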

The bottom line is I have tested this for myself, and not just in Heaven 4 but in games too. Providing there are no artificial barriers like a CPU bottleneck or rubbish drivers, a single GK110 chip scales very well compared to a GK104, with near-perfect results. The same applies to multi-GPU setups, comparing anything up to 4 vs 4 GPUs.

I am not the only one who has found this; there are plenty of Titan owners who upgraded from GTX 680s who could tell you the same.

Comparing Hawaii and Tahiti GPUs works the same way, as they are very closely related. If you remember, I predicted the score I would get in the Valley bench at a given clock speed with a 290X long before I even got my hands on the card, and I was practically spot on. That too was done by using Heaven 4 to work out the likely performance difference.
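A minimal sketch of that kind of prediction, assuming the benchmark scales roughly linearly with shader count times clock speed; the base score below is a placeholder, not the poster's actual figure:

```python
def predict_score(base_score, base_shaders, new_shaders,
                  base_clock_mhz, new_clock_mhz):
    """Extrapolate a benchmark score by shader count and clock ratio.
    Works best between closely related GPUs (e.g. Tahiti -> Hawaii)."""
    return (base_score * (new_shaders / base_shaders)
            * (new_clock_mhz / base_clock_mhz))

# e.g. extrapolating a Tahiti (2048-shader 280X) result to a
# Hawaii (2816-shader 290X) at the same clock:
print(predict_score(2500, 2048, 2816, 1000, 1000))  # ~3437.5
```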
 
How far off is big Maxwell, when Nvidia has only just launched the GK210?

One of the main reasons Nvidia can make large-die top-tier GPUs is that commercial sales help subsidise the R&D.

Unless Nvidia were to have TWO large GPUs serving the compute segment, it might be a while until we see it, and I somewhat doubt that (since the current iteration of Maxwell does not appear to be a massive improvement in compute workloads regarding power consumption, unlike in gaming).

Edit:

It appears the GK210 was delayed by six months. I suspect the GM200/GM210 is going to be either a very expensive 16nm/20nm card or a huge 28nm one, which might explain the delay and Nvidia needing to release a stop-gap GK210 to serve its commercial customers.
 
The GTX 980 is top end and as good as it gets for single-chip GeForce cards.
It's really not. The 970 and 980 are really just midrange Maxwell, as big Maxwell is still not out yet.

Since the 670/680, Nvidia has simply split the release of the 60/60 Ti and the 70/80 cards over two generations instead of one, calling the 60/60 Ti-class chips the 70/80 in one generation, and the 70/80-class chips the 80/80 Ti in the next...
 
How can anyone call a GTX 980 midrange?

It's high end, and whatever comes next from Nvidia will be niche enthusiast.

I think people have taken what I said previously completely wrong.

The GTX 980 is currently Nvidia's high-end GPU.
The GTX 980 GPU is mid-range Maxwell. There's still a lot to come yet, including full-fat Maxwell and a die shrink.
 