
980 Ti vs 1080 - WHAT???

Soldato · Joined 27 Jan 2009 · Posts 6,563
Until Kepler that is exactly what happened: the high end GPUs were near the upper end of what was possible without going completely silly, and the cores that didn't make the grade were used in the rest of the lineup. High end and volume don't generally go together - as soon as you are talking volume you know you are talking a mid range product and not high end when it comes to GPUs. A "volume high end product" is not the definition of a high end GPU.

There is a huge gulf between the GP100 and the 1080 - easily enough space to make something that is more like high end, in the region of the TX stats. 16nm is more expensive than some previous processes but not to the extent you are portraying, and the process itself quickly hit a good level of maturity once it was out of risk production - nVidia is pushing the clock speeds a bit more than the process was really envisioned for, which has had some impact on availability.

If you look at the ~300mm2 cores that came in around the same performance as, or faster than, the previous high end on the node before them - we are talking 8800GT, GTS 250, GTX460, etc. - you have to completely rewrite the rules to make the 1080 high end. Meanwhile nVidia is laughing all the way to the bank off those who buy into it.

So you have again completely ignored the possibility that the technical problems of the more recent 'nodes' may have changed the economics of GPUs so that releasing bigger dies up front might not make the best sense. Your 'high end' metric has already been rubbished; there are other limiting factors than net power draw for a GPU. Net power draw may have been more of a limiting issue for NVIDIA at and before Fermi's release, not so much now.

Up until Kepler GPUs were, to an extent, at the limit of what was feasible due to their power draw; it's just that the limiting factors have most likely shifted since then.

Why is it so hard to understand that something being technically feasible does not mean it would make sense economically?

The ever-reducing number of companies (fabs) making computer components suggests that the ongoing costs of successive node shrinks are becoming prohibitive.

http://www.extremetech.com/computin...its-of-smaller-processes-and-new-foundry-tech
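The ~300mm2 framing in the quoted post can be sanity-checked against approximate die areas. A minimal sketch in Python - the figures are rounded numbers commonly cited for these chips, so treat them as illustrative rather than authoritative:

```python
# Approximate die areas in mm^2, as commonly cited for these GPUs.
# Rounded figures for illustration - not official measurements.
midrange_sized = {
    "8800GT (G92)": 324,
    "GTX460 (GF104)": 332,
    "GTX1080 (GP104)": 314,
}
big_die = {
    "GTX280 (GT200)": 576,
    "GTX480 (GF100)": 529,
    "Tesla P100 (GP100)": 610,
}

avg_mid = sum(midrange_sized.values()) / len(midrange_sized)
avg_big = sum(big_die.values()) / len(big_die)

print(f"~300mm2-class average: {avg_mid:.0f} mm^2")
print(f"big-die average:       {avg_big:.0f} mm^2 ({avg_big / avg_mid:.2f}x larger)")
```

On these numbers the GP104 in the 1080 sits squarely in the historical ~300mm2 bracket, while each generation's biggest die has been roughly 1.7-1.9x larger - that size gap is what both sides of the argument are really disputing.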
 
Man of Honour · Joined 13 Oct 2006 · Posts 91,177
So you have again completely ignored the possibility that the technical problems of the more recent 'nodes' may have changed the economics of GPUs so that releasing bigger dies up front might not make the best sense. Your 'high end' metric has already been rubbished; there are other limiting factors than net power draw for a GPU. Net power draw may have been more of a limiting issue for NVIDIA at and before Fermi's release, not so much now.

That doesn't make it a true high end GPU, just the best they can manage.

A high end GPU doesn't need to be economical - that can be left to other GPUs like the one that normally sits in the position the 1080 is sitting in.

I guess if I'd spent many £100s on what is basically this generation's GTX460 I'd try and convince myself it was high end too - I know it made me wince enough spending £360 on what in any other generation would have been the card one down from the middle of the line up.
 
Soldato · Joined 27 Jan 2009 · Posts 6,563
That doesn't make it a true high end GPU, just the best they can manage.

A high end GPU doesn't need to be economical - that can be left to other GPUs like the one that normally sits in the position the 1080 is sitting in.

Your high end is a figment of your imagination. The 1080 was in short supply for a good while after launch. This went beyond artificial scarcity to drive up the price, as supply of some of the higher end SKUs is still lagging demand. Any attempt to make more of the 1080 would have exacerbated these issues. The 1080 already isn't economical (for the consumer), which is why the 1070 offers much better bang for the buck, and ditto down the line to a degree.

And, excluding some 'halo' products to drive a brand, a product had better be 'economical' for the producer, as they won't get very far otherwise!
 
Man of Honour · Joined 13 Oct 2006 · Posts 91,177
Your high end is a figment of your imagination.

Not really - in any other generation the Pascal Titan X would have come out with 6-8GB VRAM and been the GTX780, GTX470, GTX260, etc. of this generation at around £450-480, and so on. Dunno why people are so eager to buy into the games nVidia are playing this round.

And, excluding some 'halo' products to drive a brand, a product had better be 'economical' for the producer, as they won't get very far otherwise!

It's the lower end and mid-range cards that have been the profit drivers - the GT430, GTX460, 740/750, GTX970, etc. If we use your metric, then should we call the RX480 high end as well?

EDIT: The cost of the P100 has little bearing on what a larger core on a GeForce card would cost - there are many more factors in play there, including the current status of that kind of HBM2 configuration.
 
Soldato · Joined 27 Jan 2009 · Posts 6,563
It's the lower end and mid-range cards that have been the profit drivers - the GT430, GTX460, 740/750, GTX970, etc. If we use your metric, then should we call the RX480 high end as well?

The mid range may be where the bulk of sales are, and hence the bulk of the profit, but NVIDIA can't afford to make a loss at the top end.

It's quite simple: you contend that there's no reason that NVIDIA can't supply big-die GPUs as flagships early on in a design process cycle like they did previously.

I contend that moving to smaller fabrication processes to dramatically increase the transistor count, especially since 40nm, has meant that it is no longer feasible/economical to work like this.

But you don't just have to take it from me... the chief architect of AMD's Radeon Technologies Group agrees:

'With changes in Moore’s Law and the realities of process technology and processor construction, multi-GPU is going to be more important for the entire product stack, not just the extreme enthusiast crowd. Why? Because realities are dictating that GPU vendors build smaller, more power efficient GPUs'

http://www.pcper.com/news/Graphics-...past-CrossFire-smaller-GPU-dies-HBM2-and-more

The RX480 is not high end because of its PERFORMANCE, nothing else. A card that can only just beat a GTX980 some of the time isn't high end now.
 
Man of Honour · Joined 13 Oct 2006 · Posts 91,177
The mid range may be where the bulk of sales are, and hence the bulk of the profit, but NVIDIA can't afford to make a loss at the top end.

It's quite simple: you contend that there's no reason that NVIDIA can't supply big-die GPUs as flagships early on in a design process cycle like they did previously.

I contend that moving to smaller fabrication processes to dramatically increase the transistor count, especially since 40nm, has meant that it is no longer feasible/economical to work like this.

But you don't just have to take it from me... the chief architect of AMD's Radeon Technologies Group agrees:

'With changes in Moore’s Law and the realities of process technology and processor construction, multi-GPU is going to be more important for the entire product stack, not just the extreme enthusiast crowd. Why? Because realities are dictating that GPU vendors build smaller, more power efficient GPUs'

Not suggesting they make a loss - but mid range is where you aim volume and profit. 16nm might not be the cheapest to design or produce a product on, but it isn't so significantly more expensive as to require shifting all the product tiers as they effectively have - the fact they can even consider producing a 600mm2 industry-grade card from the word go tends to indicate there isn't too much wrong with 16nm that would otherwise additionally increase costs.

We still aren't seeing a dramatic effect from Moore's law with 16nm FF+; it is still possible to build those 450-500mm2 cores, and while there is an increase in cost it isn't at a level that would dictate design considerations in any significant way.
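The die-size economics the two sides keep arguing over can be sketched with a textbook dies-per-wafer estimate and a simple Poisson yield model. This is a toy calculation: the wafer cost and defect density below are assumed round figures chosen purely for illustration, not actual TSMC 16nm numbers.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> float:
    """Gross die candidates per wafer: area term minus an edge-loss correction."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2: float, defects_per_cm2: float = 0.1) -> float:
    """Simple Poisson yield model: Y = exp(-D0 * A)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

WAFER_COST = 6000  # assumed cost per wafer in dollars - illustrative only

for name, area in [("GP104-sized (~314mm2)", 314), ("GP100-sized (~610mm2)", 610)]:
    good = dies_per_wafer(area) * yield_fraction(area)
    print(f"{name}: {good:.0f} good dies/wafer, ${WAFER_COST / good:.0f} per good die")
```

With these assumed inputs the ~610mm2 die costs close to 2.8x as much per good die as the ~314mm2 one, despite being only ~1.9x the area, because a bigger die loses on both dies-per-wafer and yield. That asymmetry is the heart of the "bigger dies up front no longer make sense" position; whether 16nm defect density is actually bad enough to force it is exactly what is in dispute here.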
 
Soldato · Joined 22 Jun 2012 · Posts 3,732 · Location UK
Not suggesting they make a loss - but mid range is where you aim volume and profit. 16nm might not be the cheapest to design or produce a product on, but it isn't so significantly more expensive as to require shifting all the product tiers as they effectively have - the fact they can even consider producing a 600mm2 industry-grade card from the word go tends to indicate there isn't too much wrong with 16nm that would otherwise additionally increase costs.

We still aren't seeing a dramatic effect from Moore's law with 16nm FF+; it is still possible to build those 450-500mm2 cores, and while there is an increase in cost it isn't at a level that would dictate design considerations in any significant way.

But from what I have read, they are pretty much releasing Volta on 16nm now and are concentrating on that instead of releasing a huge 600mm2 Pascal... why keep releasing massive cards when there is no competition, is my question to you.
 
Soldato · Joined 27 Jan 2009 · Posts 6,563
We still aren't seeing a dramatic effect from Moore's law with 16nm FF+; it is still possible to build those 450-500mm2 cores, and while there is an increase in cost it isn't at a level that would dictate design considerations in any significant way.

Possible does not equate to practical....

As per my earlier referenced quote, a chief architect at AMD does not support your claim.

'With changes in Moore’s Law and the realities of process technology and processor construction, multi-GPU is going to be more important for the entire product stack, not just the extreme enthusiast crowd. Why? Because realities are dictating that GPU vendors build smaller, more power efficient GPUs'

I'll leave it to forum members to decide whether they take the opinion of a random forum member over that of a chief architect involved in GPU design...
 
Man of Honour · Joined 13 Oct 2006 · Posts 91,177
I'll leave it to forum members to decide whether they take the opinion of a random forum member over that of a chief architect involved in GPU design...

Oh we are going down that road are we....

Slightly flippant, but we are talking about someone who is struggling to compete with nVidia at the same density - how seriously should we really take them? :p

But from what I have read, they are pretty much releasing Volta on 16nm now and are concentrating on that instead of releasing a huge 600mm2 Pascal... why keep releasing massive cards when there is no competition, is my question to you.

I think people are going to be surprised by the realities of Volta - everyone's mindsets seem to be stuck in something that Volta is very much not.
 
Soldato · Joined 22 Jun 2012 · Posts 3,732 · Location UK
Oh we are going down that road are we....

Slightly flippant, but we are talking about someone who is struggling to compete with nVidia at the same density - how seriously should we really take them? :p



I think people are going to be surprised by the realities of Volta - everyone's mindsets seem to be stuck in something that Volta is very much not.

What do you mean? I think Volta will be the usual 30% gain over the last generation, with a few new things like possibly better DX12 support and async compute support etc. What do you think people are thinking? That it will be some magical 200% better? I do not think it will be more than the standard 30%, but still, that is 30%, which would be TXP performance from an 1170/1180 etc.
 
Man of Honour · Joined 13 Oct 2006 · Posts 91,177
Techspot - 6 Generations of Geforce

Take a good look at these charts. Anyone expecting massive gains between generations is dreaming.

The amount of time on 28nm kind of skews that a bit.

What do you mean? I think Volta will be the usual 30% gain over the last generation, with a few new things like possibly better DX12 support and async compute support etc. What do you think people are thinking? That it will be some magical 200% better? I do not think it will be more than the standard 30%, but still, that is 30%, which would be TXP performance from an 1170/1180 etc.

The whole focus and approach with Volta is different to what people generally think. Volta is not what people think it is.
 
Soldato · Joined 9 Nov 2009 · Posts 24,846 · Location Planet Earth
What do people expect?? Nvidia made a brilliant marketing move with the re-organisation of the card tiers.

For example if Fermi followed the same naming scheme today:

GTX580 = Titan X MK2
GTX570 = GTX1080TI
GTX560TI = GTX1080
GTX560 = GTX1070
GTX550TI = GTX1060

This is how the naming should be:

GTX1080 = Titan X MK2
GTX1070 = GTX1080TI
GTX1060/GTX1060TI = GTX1080
GTX1060/GTX1050TI = GTX1070
GTX1050TI/GTX1050 = GTX1060


GTX980 = Titan X MK1
GTX970 = GTX980TI
GTX960/GTX960TI = GTX980
GTX950/GTX950TI = GTX970
GTX950TI/GTX950 = GTX960

The GTX1080 looks meh compared to a GTX980TI since it's a small chip and more like a midrange chip of old.

Hence a GTX1080 compared to a GTX980 or GTX970 is impressive. However, when compared to the higher end chip, it looks less impressive. But if this was badged as a midrange series part it would look awesome, like some sort of HD4870 or 8800GT.

It's impressive, but the pricing is out of whack for such a small chip (even if you consider extra costs elsewhere).
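The proposed re-badging can be captured as a plain lookup from the Pascal names nVidia actually used to the tier names the post argues they should carry (the slash entries are hypothetical alternatives taken directly from the list above):

```python
# Marketed Pascal name -> the tier name the post argues it should have had.
# The suggested names (including slash alternatives) are hypothetical,
# copied verbatim from the list above.
should_have_been = {
    "Titan X MK2": "GTX1080",
    "GTX1080TI": "GTX1070",
    "GTX1080": "GTX1060/GTX1060TI",
    "GTX1070": "GTX1060/GTX1050TI",
    "GTX1060": "GTX1050TI/GTX1050",
}

for marketed, suggested in should_have_been.items():
    print(f"{marketed:>11} would be sold as {suggested}")
```

Read this way, every marketed Pascal card sits one to two tiers above where this scheme would put it, which is the pricing complaint in a nutshell.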
 
Man of Honour · Joined 13 Oct 2006 · Posts 91,177
What do people expect?? Nvidia made a brilliant marketing move with the re-organisation of the card tiers.

For example if Fermi followed the same naming scheme today:

GTX580 = Titan X MK2
GTX570 = GTX1080TI
GTX560TI = GTX1080
GTX560 = GTX1070
GTX550TI = GTX1060

This is how the naming should be:

GTX1080 = Titan X MK2
GTX1070 = GTX1080TI
GTX1060/GTX1060TI = GTX1080
GTX1060/GTX1050TI = GTX1070
GTX1050TI/GTX1050 = GTX1060

In terms of power use and transistor count (assuming commensurate performance for those) versus the max you'd ever be likely to see for a node, the Pascal Titan X is more like GTX570 territory, albeit with more VRAM than that would likely have; the 1080 v 980ti is more like GTX560 v GTX285.

EDIT: Should be using the 400 series really, but aside from some slight gains from 480 to 580, and the 570/560ti 448 model being basically 480 performance, it isn't a hugely different story. Can you imagine the outcry if nVidia had wanted ~£400 for the 460 768MB/550ti, heh.
 
Soldato · Joined 9 Nov 2009 · Posts 24,846 · Location Planet Earth
In terms of power use and transistor count (assuming commensurate performance for those) versus the max you'd ever be likely to see for a node, the Pascal Titan X is more like GTX570 territory, albeit with more VRAM than that would likely have; the 1080 v 980ti is more like GTX560 v GTX285.

Good point, but it seems a lot of enthusiasts sadly want to accept the way things are going without questioning it, and you are more likely to be jumped on if you try to question the status quo.

I am hardly even a high end card purchaser, but I have resigned myself either to paying a lot for a midrange card (60-series Nvidia or the AMD equivalent, and I expect the RX580 and GTX2060 to be even more expensive) with less longevity, or to waiting longer and longer for one that lasts.

The marketing has done its job and people are just accepting it now. Even if AMD were to have competitive products higher up, I don't think they want to push things too much price-wise now, since it is easier to have a two-horse cartel.

The issue is that as time progresses, consoles will start to become more of an alternative if performance updates on the PC are being drawn out. One of the major draws of the PC is better graphics, and if the midrange is getting more and more anaemic, it might explain why so many devs are targeting consoles as lead platforms.

Planetside 2 is now on console - for me that is a big deal, as it was a major PC-only franchise.

Edit!!

If you go back to the Nvidia financials thread, I did some digging - they are raking it in. 60% margins, $5 billion in the bank, etc.

The move has worked brilliantly.
 
Man of Honour · Joined 13 Oct 2006 · Posts 91,177
Good point, but it seems a lot of enthusiasts sadly want to accept the way things are going without questioning it, and you are more likely to be jumped on if you try to question the status quo.

I am hardly even a high end card purchaser, but I have resigned myself either to paying a lot for a midrange card (60-series Nvidia or the AMD equivalent, and I expect the RX580 and GTX2060 to be even more expensive) with less longevity, or to waiting longer and longer for one that lasts.

The marketing has done its job and people are just accepting it now. Even if AMD were to have competitive products higher up, I don't think they want to push things too much price-wise now, since it is easier to have a two-horse cartel.

The issue is that as time progresses, consoles will start to become more of an alternative if performance updates on the PC are being drawn out. One of the major draws of the PC is better graphics, and if the midrange is getting more and more anaemic, it might explain why so many devs are targeting consoles as lead platforms.

Planetside 2 is now on console - for me that is a big deal, as it was a major PC-only franchise.

Edit!!

If you go back to the Nvidia financials thread, I did some digging - they are raking it in. 60% margins, $5 billion in the bank, etc.

The move has worked brilliantly.

If it wasn't for the GP100 being possible pretty much from the word go, and how soon the Titan X has shown up, I might buy the other argument a bit more. But the recent financial report shows they aren't exactly struggling to make money from this node, and while 16nm is a fair bit more expensive than 40nm, 28nm, etc., it isn't stupidly more expensive. It is obvious they are trickling the cards out and laughing to the bank, and people are actually wanting to accept it :(
 
Soldato · Joined 9 Nov 2009 · Posts 24,846 · Location Planet Earth
If it wasn't for the GP100 being possible pretty much from the word go, and how soon the Titan X has shown up, I might buy the other argument a bit more. But the recent financial report shows they aren't exactly struggling to make money from this node, and while 16nm is a fair bit more expensive than 40nm, 28nm, etc., it isn't stupidly more expensive. It is obvious they are trickling the cards out and laughing to the bank, and people are actually wanting to accept it :(

Yep, I agree entirely, and I have tried explaining it, but it's not worth the effort, as you then get made to look like the bad person for pointing it out - because you don't put the company's financials first and, beyond being a consumer, have some expectations beyond what marketing wants to tell you (that includes many of the tech channels, which quite happily play along too).
 
Associate · Joined 14 Oct 2010 · Posts 331 · Location Birmingham
Yep, I agree entirely, and I have tried explaining it, but it's not worth the effort, as you then get made to look like the bad person for pointing it out - because you don't put the company's financials first and, beyond being a consumer, have some expectations beyond what marketing wants to tell you (that includes many of the tech channels, which quite happily play along too).

Yup, the 980 Ti will stay in the rig; next week it will be on OC. No point upgrading.
I did the same with a GTX 580 3GB VRAM. I OC'd it to the max, and even Rise of the Tomb Raider is happy, as are Doom, Dying Light and other games, running with high or mixed high and very high settings at 1080p or 1440p. I don't care about marketing.

All I care about is support, support and support.
 