Nvidia Ampere might launch as GeForce GTX 2070 and 2080 on April 12th

Hence why I included the 1080/1080ti Nvidia statements in anticipation of this point.

The point remains that lots of people label the 680 as a mid-range (non-flagship) card due to its die size and memory bus width, not its performance.

As do I if we are taking the entire Kepler range into account.

The flagship Kepler card was the Titan Black.

Having said that even the Titan Black was a fraction slower than a GTX 690.
 

I don't disagree if you want to add a qualifier like 'for the whole Kepler range' (an assessment made in retrospect)
 
Surely "mid-range" and "top-end" must by necessity be a comparison across the whole range?

And by "range" the overwhelming majority will be referring to architecture - i.e. "the Pascal range", "the Kepler range", etc.
 

You can only say that in retrospect, and only with a qualifier... i.e. the Titan X (Maxwell) card was the flagship of the Maxwell range of cards.

Nvidia had no problem calling cards like the GTX 680/980/1080 'high end' and/or 'flagships' on release.

Here's part of the press release for the GTX 970/980...

 
Yes but two points in reply to that...

1. It's marketing.
2. We know that the full range in every case will include some variant of the full-fat chip. (In every case so far).

nV will naturally call their xx80 card the "flagship"... until the Ti is released. They want you to pay top dollar for it. It will even be (partly) true for a period of a few months.

But sure as night follows day (where I live), the xx04 chip/xx80 card is only the best-performing card of that range for a few months before it's de-throned. First by the Titan, then a (relatively) short time later by the Titan-owner's pet hate - the vastly cheaper but similar performing Ti card.

So yes... nV call the xx80 their "flagship" card for a while. It's in their interests to big it up so they sell loads before the inevitable Ti comes along.

Basic marketing. You wouldn't get so much excitement for the xx80 card if you promoted it as "Our soon-to-be 3rd best card - if you'd like to wait a short while for the full fat chips."

There's marketing and then there's truth. Sometimes they are slightly entangled.
 
Rubbish analogy... the number of bottles in a crate and the alcohol content matter to a customer, unlike the die size of a GPU and the width of its memory bus...

Unless you have some unusual philia for bus widths and die sizes on GPUs.

Full-fat chips have more cores than cut-down chips, so offer better performance.
 
Demonstrably total nonsense.....

The GTX 980ti has 2816 CUDA cores.

The GTX 1080 has 2560 CUDA cores.

The GTX 1080 is almost universally a better card (gaming-performance-wise) than the 980ti.

If I buy a crate of beer, the number of bottles in the crate, the amount of beer in each bottle, the taste and the alcohol content all matter.

If I am buying a GPU, more cores do not automatically equate to better performance.
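As a rough sanity check on that point: peak FP32 throughput is approximately 2 × cores × clock (one fused multiply-add per core per cycle), and the 1080's much higher clock more than offsets its lower core count. A minimal sketch - the boost clocks below are approximate reference-card figures, so treat the outputs as ballpark only:

```python
# Peak FP32 throughput sketch: each CUDA core retires one FMA
# (2 floating-point ops) per cycle, so TFLOPS ~ 2 * cores * GHz / 1000.
# Boost clocks are approximate reference-card figures (assumption).
def tflops(cores, boost_ghz):
    return 2 * cores * boost_ghz / 1000

gtx_980_ti = tflops(2816, 1.075)  # more cores, lower clock
gtx_1080 = tflops(2560, 1.733)    # fewer cores, much higher clock

print(f"980ti: ~{gtx_980_ti:.1f} TFLOPS, 1080: ~{gtx_1080:.1f} TFLOPS")
```

Despite having ~10% fewer cores, the 1080 comes out with substantially more peak throughput under these assumed clocks - consistent with the gaming results described above.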

The 980ti and the Titan X were Nvidia's flagship cards on their release; the 1080 was the 'mainstream' flagship on its release (with the ultimate flagship, at the time, being the previously released initial Titan X Pascal).

The release of the 1080ti many months later doesn't stop the 1080 having been the mainstream flagship GPU at release.

The last time Nvidia tried to release a 'big' chip on a new architecture for a volume consumer GPU launch (the GTX 480), it was heavily delayed and had issues with power consumption and heat dissipation, resulting in the next generation's flagship (GTX 580) just being a refined version of the same chip...

NVIDIA learnt their lesson... It makes good business sense to release a smaller chip on a new architecture before releasing a larger one. This ensures that yields don't cause as many issues for the bigger chips, as the process can be somewhat refined during production of the smaller chips, and it allows sufficient 'big' dies to be stockpiled for the release of the bigger-die product(s).
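The yield argument can be sketched with a standard Poisson defect model, where the fraction of good dies falls exponentially with die area. The defect density used here is purely an illustrative assumption, not a real foundry figure:

```python
import math

# Poisson yield model: yield ~ exp(-area * D0), with area in cm^2.
# D0 (defects per cm^2) is an illustrative assumption, not foundry data.
def die_yield(area_mm2, d0_per_cm2=0.2):
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

print(f"314mm2 die (1080-class):  ~{die_yield(314):.0%} good dies")
print(f"610mm2 die (GP100-class): ~{die_yield(610):.0%} good dies")
```

Under these assumed numbers the smaller die yields getting on for twice as many good parts per wafer area, which is why leading a new process with the smaller chip (and stockpiling big dies as the process matures) makes business sense.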

It's pointless to obsess over die size, memory bus width or core count... The main metrics that matter when comparing GPUs are performance and price, with lesser considerations being heat dissipation, size, and the range of cards available direct/from OEMs.

If you buy a 'full'(ish)-fat 1080ti today, it may well be surpassed (performance-wise) by what some people would call a 'mid-range' 'GTX 2080', likely with a smaller die and potentially fewer CUDA cores...

A 2080ti and/or another Titan will probably follow, and the cycle repeats...
 
I think he meant to say, again, "within the same range/same architecture".

Let's explore this a bit and see how far the rabbit warren goes, OK?

Let's say the new 1170 beats a 1080 by 10%. The 1170 is a cut-down Gx104. nVidia now decide that because it's better than a 1080 the 1170 will become the 1180 (costing £500). What would have been the 1180 (Gx104 full chip) now becomes the Ti (or whatever).

Next gen, the 1280 is the Gy106 chip, because it beats the cut-down Gx104 chip by 10%. So now the 1280 is a Gy106 chip costing £500. The "Titan" is now the full Gy104 chip. The Gy102 is now the Super Titan and costs £5000.

How far would you accept 10% gains each gen, knowing that you are no longer getting anything "for free"? What do I mean by "for free"? The idea that each gen you get more for your money. It's a well-established norm by now, only really being challenged very recently.

When you don't get anything "for free", you need to spend more each new gen to get upgrades. Have a £500 card this gen? You need to spend £800 next gen. Then £1000. Then...
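That spending spiral can be put into a toy formula - assuming, purely for illustration, that performance scales linearly with price within a generation and that a given price tier now only gains ~10% per gen instead of the old "free" uplift:

```python
# Toy model (illustrative numbers only): if the old norm was ~30% more
# performance per generation "for free" at the same price, but a tier now
# only improves by gen_gain, the shortfall has to be bought with cash.
def price_for_uplift(start_price, target_uplift, gen_gain=0.10):
    # Assumes performance scales linearly with price within a generation.
    return start_price * (1 + target_uplift) / (1 + gen_gain)

needed = price_for_uplift(500, 0.30)
print(f"Matching a 30% uplift now costs ~{needed:.0f} instead of 500")
```

Compounded over a few generations of hypothetical 10% tiers, that gap is exactly the £500 → £800 → £1000 escalation described above.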
 

It's completely irrelevant to say 'within the same architecture', as this is an assessment that can only be made in retrospect, after all the GPUs in a range have been released...

An xx80 GPU is released, then an xx80ti, then another xx80 GPU (with Titans and other GPUs interspersed). You can't 'play' a memory bus width, die size or CUDA core count.

You might want to make a generalised statement that xx80ti cards offer the best price/performance/longevity proposition at the top end, but that has no bearing on whether the xx80 cards are 'high end' or 'flagships' upon release...

This gen the GTX 1080 has offered what I would suggest is a pretty good performance/longevity/price position (at launch) compared to previous xx80ti cards like the 780ti.

GPUs, much like CPUs, are starting to approach the physical limitations of what is likely possible using current silicon processes... It's hardly surprising, therefore, that we are not seeing the large generation-on-generation gains at the same price point that we saw previously.

NVIDIA doesn't owe you or anyone else ****. They produce their products; it's up to consumers whether they buy them.

If their new xx80 card offers a performance uplift over last gen with a chip the size of the previous gen's xx60, I could not care less. Price and performance are the main considerations.
 
So you'd be fine if nVidia did this...

Call the Ampere/Turing cut-down Gx104 the 1180. It's 10% faster than the 1080. Call the full Gx104 the 1180 Ti. Reserve the Gx102 for something else.

Then the gen after Ampere/Turing, call the Gy106 the 1280 - it's 10% faster than the 1180. Call it the flagship. Release the Gy104 as a "separate line" of ultra-enthusiast cards that aren't GeForce but are called something else. This "extreme" range uses a different naming scheme. So whilst the Gy106 is the 1280, the Gy104 is the Titan Red, the Gy102 is the Titan Black.

The Gy106 is the "flagship" of the GeForce range. The Titan Black is the "flagship" of the Titan range.

Naturally the Gy106 in the 1280 costs £500. The Titan Red costs £1500. The Titan Black costs £2500. So now all you get for your £500 is the 6th best card in the architecture/range, but it's OK so long as they call it the flagship of the GeForce range...
 
Nvidia are selling beer - 20 bottles in the luxury 470 pack, and a 24-bottle crate in the deluxe 480 pack.

You buy the luxury 20-pack with the 470 stamp on it and are happy; you know there is a better pack with 24 bottles called the deluxe 480, but you're happy with your 20 bottles.

Next year they increase the alcohol content and call it the 570 twenty-bottle luxury pack and the 580 twenty-four-bottle deluxe pack. Feeling good about the beer, you decide to go for the 24-bottle deluxe pack as it's the top pack you can get. You know they can't fit any more bottles into the crate, so you go for it.

The following year Nvidia again increase the alcohol content and release the 20-bottle 670 pack and the 'deluxe' 680 pack... with only 22 bottles in it, and call it the flagship 680 pack. You're like... hang on a minute, you can still fit another 2 bottles into the crate of 22 to make 24. 24 bottles is what the top pack always was in your experience, so this 22-bottle pack isn't the flagship; the 24-bottle pack is the flagship pack to buy.

This is what Nvidia have done: renamed the beer pack with the high-end branding as if it had the highest bottle count you can get in the crate... but it doesn't.

480 WAS the full chip
580 WAS the full chip
680 WAS NOT the full chip but the naming would lead you to believe it was.

My comments aren't about timescales; they are about the core count - the full-usage chip. It's Nvidia's sneaky use of the branding that bugs me. They put the 680 out knowing it wasn't ever going to be the top card of that chipset, when in fact the 680 should really have been the 670, or possibly the 670Ti. Sleight-of-hand renaming...

The 680 was a full chip? It wasn't until the original Titan and then the 780 that they started using a larger chip, which was originally a pro chip with double-precision floating point, whatever it was.

Demonstrably false ....

NVIDIA referred to their own GTX 680 as a flagship product on release, despite releasing a faster card (the 690) on the same architecture afterwards...



And how did they refer to the 690 on its release....



They also called their GTX 1080 (non-ti) their 'flagship' card on release, despite subsequently releasing the 1080ti... (just in case anyone wants to try and make silly points about the 690 being an SLI card and therefore not comparable to the 680)




I don't think I need to tell you how they referred to their 1080ti on release do I?


Are Nvidia 'livid'... at themselves, for describing the 680 and 1080 as flagships on release, because they subsequently released higher-performing cards in the same series?

They obviously didn't call their first Maxwell card, the 750ti, a flagship, as it wasn't one performance-wise.



The 690 used two 680 chips with 2GB of memory each, which is completely different to the 1080/1080ti scenario. Even though there was a 690, the 680 was still the flagship card for gaming until the GK110 cards released. You need to differentiate between cards with two GPUs and cards with one; it was no different with the 290X and the 295X2. They are more than happy to call both flagships because they're different products - it's just marketing.

It depends on whether the 690 was on the said roadmap at the same time as the 660, 670 and 680, though. If it wasn't, then referring to the 680 as a flagship would be correct.

Even with the 690 they're still able to call the 680 their flagship card, because it is the flagship chip. They don't care about accuracy; if called out on it they'll reply, "Yes, but one's a dual GPU, putting it in a different category." It's promotional bull, but that's business for you.
 
I suppose in reality Nvidia can call whichever chip they like whatever they like, seeing as they are the ones making them. Of course we don't have to buy them, but many of us will anyway.
 

If the GPU is faster I would assess the price/performance proposition and decide whether I wanted to buy. The size of the die, the memory bus width and the number of cores would not otherwise interest me.
 

Thing is, though, semiconductor manufacturing is a pretty known quantity these days - once ~300mm2 dies are doable with reasonable power and frequency characteristics, the rest pretty much falls into place, and it is pretty much inevitable that bigger cores are coming, business decisions aside. With Pascal and Volta the big cores were even ready first - GP-100 (610mm2) and GV-100 (815mm2) were launched ahead of any consumer offering - so to represent cores half that size as anything resembling high end is BS.
 

In nothing like the volume of a consumer GPU release like the 1080, and at nowhere near the price. It's not that Nvidia could not lead new generations with a large-die consumer GPU on a new process... it just doesn't make sound business sense.
 

Pull the other one - there is a huge, huge gulf between the 314mm2 die in the 1080 and the 610mm2 die that is the GP-100. Sure, 20nm planar-based nodes are a bit more expensive (~30%) than 28nm for design and production, but not that much more expensive. There is absolutely no reason they couldn't have released a die around the traditional high-end GPU spot of 450-520mm2 at around the price the 1080 was launched at, other than milking their customers and playing on people who'll just open up their wallets or even justify such behaviour.

Semiconductor manufacturing hasn't intrinsically changed - sure, yields go down and bigger cores get harder to make as you increase the die size, but even with very problematic nodes they managed to produce a 529mm2 die with only one SM disabled, and they didn't exactly struggle for money during the Fermi years.
 
This (die) size obsession is a bit tiring... the 1080 was a suitably faster card than the previous gen's mainstream offering. A smaller die makes it easier to run the GPU at a faster clock, negating the need for a bigger die to reach the same performance point.

You (or I) can't authoritatively state that Nvidia could just have released a '1080' GPU with a 60+% bigger die without making a potential loss once R&D had been accounted for. They certainly would not have made as much profit (an absolutely undeniable fact), and despite what certain whining forum members would have you believe, Nvidia don't owe accountability to their customers... rather, they do to their shareholders. AMD are no strangers to charging top dollar when they have had the fastest available product in a market segment, much like a lot of other companies.
 

Only up to a point - otherwise there would be no point in having the 1080ti with its bigger core at all, if you could just keep increasing the frequency on a smaller core to get identical performance.
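That "only up to a point" caveat follows from dynamic power scaling: dynamic power ≈ C·V²·f, and once you are past the efficient part of the voltage/frequency curve, voltage has to rise roughly in step with frequency, so power grows roughly with the cube of the clock. A toy sketch with illustrative constants:

```python
# Toy model: dynamic power ~ C * V^2 * f. Assuming (illustratively) that
# voltage must scale linearly with frequency past the sweet spot, relative
# power grows as freq_scale^3 - why clocking a small die has hard limits.
def rel_power(freq_scale):
    volt_scale = freq_scale  # assumption: V rises in step with f
    return volt_scale ** 2 * freq_scale

for f in (1.0, 1.2, 1.5):
    print(f"{f:.1f}x clock -> {rel_power(f):.2f}x dynamic power")
```

Under this assumption a 50% clock bump costs roughly 3.4x the dynamic power, which is why a bigger, wider, lower-clocked die like the 1080ti still earns its place.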

Sure, nVidia are a business, but that doesn't mean customers should just let themselves be fleeced by falling in line and opening their wallets when nVidia pull stuff like this - I can't even understand the mentality of justifying it. There is no way, not even a remote chance, that nVidia would have been making a loss with a ~450-500mm2 Pascal core at around the price the 1080 was released at - with inflation and the extra cost of 16FF production I'd give them a pass for some increase in price or other slight re-jigging. I do some electronics on the side, including having looked into having my own custom ASIC produced via TSMC's shuttle service, so I have a vague idea of the ballpark costs.
 
Maybe for fabrication costs... R&D costs are eye-wateringly expensive and require a robust product release/pricing strategy to ensure a reasonable prospect of payback.

The R&D costs are going to be incurred anyhow - especially as they had the very large GP-100 core in production, done and dusted, so it isn't like that would have any implications on the difference between a ~300mm2 and a ~400-500mm2 consumer card. A very slightly longer return, but nothing close to the difference between profit and a loss.

EDIT: As before, there might be some considerations there, as you often reach a point towards the high end where those last few percent can incur hideously out-of-proportion increases in cost, but there is a massive amount of room between 300mm2 and where you start to encounter that.

(To go from 600mm2 Pascal to 800mm2 Volta is basically like throwing space shuttle budget and development at a Veyron).
 