• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

So, 19th March GDC Nvidia keynote

Yeah, anything could happen :)

I just don't like being accused of making things up.
Lawd, you have no idea what consistent reasoning is, do you?

First you say it should logically be the 1100 series, because reasons, then you say "it could be anything". Whatever, man. This is tiresome.

There's no "research" to be done here. If nV's marketing dept like 2000 series better they'll go with it. There's no way to arrive at a "correct" answer. I've said this all along.

It'll be a decision based on marketing and nothing else.
 
Hopefully they actually bump up the xx60 card. (performance not price lol)

Don't want to see another 760 -> 960. :o

Looking for an 80-100% increase from my r9 290. I can dream anyways.
Same, I won’t pay more than the £320 I paid for my r9-290 on release, still waiting for a GPU to meet that criteria
 
Same, I won’t pay more than the £320 I paid for my r9-290 on release, still waiting for a GPU to meet that criteria

Yeah £300 is my max.
£300 for a xx60 would still be mad imo. Depends on its performance though I suppose.
I remember paying about £140 maybe for a gtx460 when that came out. Granted that is a few years ago now, but still...
 
It isn't.

The 8800 and 9800 (G92) are referred to as having "128 CUDA cores". G80 supported CUDA v1.

There is no science to this "11th generation" claim at all.

Yes. 8800 was 1st gen. CUDA

2nd gen CUDA was called 2xx series (9800 was rebranded to reflect on that)

Volta is 11th gen
 
Yes. 8800 was 1st gen. CUDA

2nd gen CUDA was called 2xx series (9800 was rebranded to reflect on that)

Volta is 11th gen
The 200 series was made up of at least two different architectures. How can the entire 2 series be "2nd gen"?

https://en.wikipedia.org/wiki/GeForce_200_series#GeForce_200_Series

And what about the nV 300 series? Were they 3rd gen? Because they were just rebrands of the 200 series for the OEM market.

You see how much nonsense this "series name corresponds to CUDA generation" claim really is?

Unless you do what melmac did and define generation as the number of releases since the last naming scheme change, then it's not true. And if you do what melmac did, then the "generation" is essentially meaningless.
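For what it's worth, here's a rough Python sketch of the compute capability numbers Nvidia publishes per chip (values from their public CUDA docs, purely illustrative). The 200 series alone spans two of them, and Volta sits at 7.0, nowhere near an "eleventh" anything:

```python
# Rough sketch: marketing series name vs. NVIDIA's published CUDA compute capability.
# Values taken from NVIDIA's public CUDA documentation; treat them as illustrative.
compute_capability = {
    "G80 (8800 GTX)":               "1.0",
    "G92 (8800 GT / 9800 GTX)":     "1.1",
    "G92b (GTS 250, '200 series')": "1.1",  # rebrand sold under the 200-series name
    "GT200 (GTX 260 / 280)":        "1.3",
    "Fermi (GTX 400 / 500)":        "2.x",
    "Kepler (GTX 600 / 700)":       "3.x",
    "Maxwell (GTX 900)":            "5.x",
    "Pascal (GTX 10x0)":            "6.x",
    "Volta (Titan V)":              "7.0",
}

for chip, cc in compute_capability.items():
    print(f"{chip:32} compute capability {cc}")
```

If "generation" meant anything rigorous it would track these numbers, and the series names clearly don't.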
 
Arguments over a naming scheme..

The correlation between the rise of graphics card prices and Gibbo's motor..

This thread, man...
 
Arguments over a naming scheme..

The correlation between the rise of graphics card prices and Gibbo's motor..

This thread, man...
If you're looking for peace, love and harmony, then the GPU subforum will be one endless stream of disappointment :p No matter when you visit :p
 
https://www.anandtech.com/show/12345/sk-hynix-lists-gddr6-memory-as-available-now


GDDR6 available now. I wonder if we will have the same situation as with Pascal, with some of the same crowd shouting that there can't be a new Nvidia card because the memory won't be available, despite public announcements that it was, and even ahead of schedule.

Heh - also companies like nVidia don't necessarily have to wait until volume production to be able to start building and bringing to market products based around it - albeit that might mean reduced capabilities and/or risk production, etc.
 
Honestly, who really cares. Games are not bandwidth stricken. The performance will speak for itself, not the memory IC.

Vega is starved for bandwidth; Pascal/Maxwell love it too but aren't held back as much.
My 980 Ti got bumps in performance from RAM OC close to the core OC.
Both GPU vendors have been pushing Z compression a lot to make better use of memory bandwidth.

I don't know how much GPU memory bandwidth affects compute functions, if at all, but it's still something worth considering.
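If anyone wants to sanity-check the OC side of it, theoretical bandwidth is just effective data rate × bus width / 8. A quick Python sketch using the reference 980 Ti figures (384-bit bus, 7 Gbps effective GDDR5); the 10% memory overclock is only an example figure:

```python
# Back-of-the-envelope: theoretical memory bandwidth (GB/s)
# = effective data rate (Gbps per pin) * bus width (bits) / 8.
def bandwidth_gb_s(effective_gbps: float, bus_width_bits: int) -> float:
    return effective_gbps * bus_width_bits / 8

# Reference 980 Ti: 384-bit bus, 7 Gbps effective GDDR5.
stock = bandwidth_gb_s(7.0, 384)        # ~336 GB/s at stock
oc = bandwidth_gb_s(7.0 * 1.10, 384)    # example ~10% memory OC -> ~370 GB/s

print(f"stock: {stock:.0f} GB/s, +10% mem OC: {oc:.0f} GB/s")
```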
 
Vega is starved for bandwidth; Pascal/Maxwell love it too but aren't held back as much.
My 980 Ti got bumps in performance from RAM OC close to the core OC.
Both GPU vendors have been pushing Z compression a lot to make better use of memory bandwidth.

I don't know how much GPU memory bandwidth affects compute functions, if at all, but it's still something worth considering.

Vega is starved for performance in general, though. Talking about GDC here ;).

Not sure where you're seeing these gains from memory bandwidth, either. At least, ones that aren't negligible.
 
Lawd, you have no idea what consistent reasoning is, do you?

First you say it should logically be the 1100 series, because reasons, then you say "it could be anything". Whatever, man. This is tiresome.

What the hell are you talking about? Can you really be that bad at following a conversation? We are both making guesses as to what the naming scheme will be. You are guessing 2000 because 20 is bigger than 11. I am guessing 11 based on the current naming scheme and 11 is the next number in line.

But, the post you quoted was my reply to Rroff, where he said they might go with a completely different naming scheme and in that sense, yes, anything could happen. It still doesn't change my reasoning for what the next line of cards will be called.

There's no "research" to be done here.

You need to do a lot of research. The marketing department pick the names, sure, but they don't just pick names/numbers off the top of their heads. The numbers mean something. Throughout this whole conversation you have refused to accept that the numbers stand for anything at all. I even pointed out that if it was purely for psychology and bigger numbers catch the eye, they would have gone 10800 instead of back to 100.

The 200 series was made up of at least two different architectures. How can the entire 2 series be "2nd gen"?

https://en.wikipedia.org/wiki/GeForce_200_series#GeForce_200_Series

And what about the nV 300 series? Were they 3rd gen? Because they were just rebrands of the 200 series for the OEM market.

I have to believe that you are being deliberately obtuse now. I explained this to you already. They were rebrands. It's consistency across the naming scheme. They couldn't really have some cards called 2xx and others called 9800, could they?

The 9800 GT is called a ninth series GPU, even though it was a rebrand of the 8800 GT.

Basically simple marketing, a word that you use when it suits your argument but conveniently forget about when it doesn't.

As for the 300 series cards, remember Nvidia had trouble back then, and so did TSMC. Nvidia were six months late with Fermi. They released the 300 series cards to OEMs, mainly rebrands with some on the new 40nm process, hence the rename. Nvidia decided to skip 300 and go straight to 400 for Fermi.

Nvidia got a lot of stick around that time for all the rebranding and renaming they did.
 
You need to do a lot of research. The marketing department pick the names, sure, but they don't just pick names/numbers off the top of their heads. The numbers mean something. Throughout this whole conversation you have refused to accept that the numbers stand for anything at all. I even pointed out that if it was purely for psychology and bigger numbers catch the eye, they would have gone 10800 instead of back to 100.
The numbers are part of the name. They have meaning, but the *only* meaning they have is conveying which card is "newer" and which card is "more powerful".

The actual number chosen by nVidia could be anything. They don't even have to choose numbers if they decided not to.

It could be nVidia A-Bob, nVidia A-Gareth, nVidia A-Paul. Next year moving on to the B-series: B-Jasmine, B-Rachel, B-Lilly. That would be a terrible naming scheme btw because I don't immediately know if Bob is more powerful than Paul. But we could make it alphabetical.

But it highlights nicely why numbers are chosen as part of the name. And that is that we consumers have come to understand/expect higher numbers mean better, mean newer.

The numbers in 1080, 1070, 1060 are chosen, not calculated or deduced; not linked to CUDA cores or phase-plasma-induction-coils, or anything at all. They are assigned numbers chosen by nVidia to differentiate between chronological releases, and to differentiate between relative card performance.

Literally nothing else. Thus the marketing team can have free rein to devise any scheme they like which meets this purpose (of differentiating cards).

Ergo, nVidia could choose to name their new cards the 1100 series. They could also choose to name them the 2000 series. Unlike you, I'm not claiming that the latter would be a "violation of their naming scheme" or something completely daft like that. Naming schemes come and go. They are not set in stone and they are not governed by anything other than what help sells cards.

e: Let's think about this a different way. Let's assume that for the next four years nV name their cards like this:

1100 series, 1200 series, 1300 series, 1400 series.

Now let's assume that after the 1400 series their next architecture is radically different, with an amazing jump in performance. Rather than call it the 1500 series, they want to give it a new name which reflects the massive leap they've made.

So they call their next cards the 2000 series.

My question to you in this scenario is: what would this behaviour prove?
 
Honestly, who really cares. Games are not bandwidth stricken. The performance will speak for itself, not the memory IC.
Whether or not next gen GPUs need more bandwidth is irrelevant (they do); the fact is next gen Nvidia GPUs are rumoured to use GDDR6, SK Hynix reports that they have a major GPU customer for GDDR6 in Q1 2018, and now we know GDDR6 is indeed in production

GDC should be interesting
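Rough numbers for what that could mean in practice: a quick Python sketch comparing a 256-bit bus on 10 Gbps GDDR5X (the GTX 1080 launch spec) with the same bus on 14 Gbps GDDR6, which is the headline per-pin rate from the SK Hynix listing rather than a confirmed spec for any card:

```python
# Theoretical bandwidth: effective rate (Gbps per pin) * bus width (bits) / 8.
# 10 Gbps GDDR5X is the GTX 1080 launch spec; 14 Gbps is the headline GDDR6 rate
# from the SK Hynix listing -- not a confirmed spec for any upcoming card.
def bandwidth_gb_s(effective_gbps: float, bus_width_bits: int) -> float:
    return effective_gbps * bus_width_bits / 8

gddr5x = bandwidth_gb_s(10, 256)   # 320 GB/s
gddr6 = bandwidth_gb_s(14, 256)    # 448 GB/s
print(f"GDDR5X: {gddr5x:.0f} GB/s, GDDR6: {gddr6:.0f} GB/s (+{gddr6 / gddr5x - 1:.0%})")
```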
 
Whether or not next gen GPUs need more bandwidth is irrelevant (they do),

No, they don't. Elaborate on why you think they do? This is why this section is vitriolic; people spread misinformation. Note there is a big distinction between having ample memory bandwidth and being bandwidth starved.
 
Maybe the wrong thread, but apart from what kind of numeric designation the cards will have, does anyone know anything about the performance compared to a 1080 Ti? And no, I don't have that card, but with Gibbo's and OcUK's generous offer I'm thinking maybe, all depending on what the new cards might bring to the table. So yeah, just curious. :o
 
Maybe the wrong thread, but apart from what kind of numeric designation the cards will have, does anyone know anything about the performance compared to a 1080 Ti? And no, I don't have that card, but with Gibbo's and OcUK's generous offer I'm thinking maybe, all depending on what the new cards might bring to the table. So yeah, just curious. :o

Nobody really knows anything to be honest. My guess is a 25-30% ish increase in performance over the last generation, but it could be closer to 20%, or could be more like 40% like it has been in the past. Obviously I hope for the latter.
 
https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=http://www.3dcenter.org/news/nvidia-ampere-geruechtekueche-gp102-produktion-bereits-eingestellt-ga104-seit-februar-produktio&edit-text=&act=url

Nvidia has discontinued GP102 production; the Titan X (Pascal), Titan Xp and 1080 Ti have now reached EOL.

Nvidia GA104 has reportedly been in production since February, probably since its last week.

Nvidia to unveil the GA104 chip based on the Ampere architecture and announce GTX 2070 and 2080 cards at GTC 2018, 26-29 March 2018.

Ampere GTX 2070 and GTX 2080 with GDDR6 launch date 12 April 2018?
 
Nobody really knows anything to be honest. My guess is a 25-30% ish increase in performance over the last generation, but it could be closer to 20%, or could be more like 40% like it has been in the past. Obviously I hope for the latter.
Thanks. Considering I am on a Fury X, if the new cards are around 20% better than a 1080 Ti then I think I'll hold off and wait. I understand it is just pure guesswork, but even if it is 15% it'd be better than a Ti now, or so is my assumption at least. Appreciate your reply. :)
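Purely as a back-of-the-envelope check on my own reasoning (both factors below are assumptions for illustration, not measurements), chaining the uplifts in a quick Python sketch:

```python
# Back-of-the-envelope only: both factors below are assumptions, not measured data.
fury_x_to_1080ti = 1.75  # assumed: a 1080 Ti very roughly ~75% ahead of a Fury X
new_card_uplift = 1.20   # assumed: the new card ~20% ahead of a 1080 Ti

print(f"new card vs Fury X: ~{fury_x_to_1080ti * new_card_uplift:.2f}x")  # ~2.10x
```

Even on pessimistic numbers the jump from a Fury X would be big, which is why I'm tempted to wait.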
 