NVIDIA Volta with GDDR6 in early 2018?

https://translate.google.com/transl...104-seit-februar-produktio&edit-text=&act=url

Nvidia has discontinued GP102 production; the Titan X Pascal, Titan Xp and 1080 Ti have now reached EOL.

Nvidia's GA104 has reportedly been in production since February, probably starting in its last week.

Nvidia is to unveil the GA104 chip, based on the Ampere architecture, and announce GTX 2070 and 2080 cards at GTC 2018 (26-29 March 2018).

Ampere GTX 2070 and GTX 2080 with GDDR6, launch date 12 April 2018?


Poor Volta and Vega if this is the case.
 
I didn't post it yesterday, but I came across an article saying Navi is going to flop as well.

How they know this is anyone's guess.... :o

I don't really see GCN as "geriatric" as per the article - there are some questionable implementation decisions/directions, but it's not a bad architecture or really outdated at all.
 
Only rumours, but I would rather buy a 2080 Ti than an 1180 Ti - it just sounds better. You are quite correct though, and a couple of people got massively upset when someone suggested 2080/70 :D
It still could be the 1170, 1180. Who knows :) I was massively guilty in that other thread of just not letting the argument go. Must learn to put the keyboard down and back away from the PC :p
 
It still could be the 1170, 1180. Who knows :) I was massively guilty in that other thread of just not letting the argument go. Must learn to put the keyboard down and back away from the PC :p
Indeed it could well be. Not fussed either way in truth and it will be whatever NVidia decide.
 
I don't really see GCN as "geriatric" as per the article - there are some questionable implementation decisions/directions, but it's not a bad architecture or really outdated at all.

GCN hasn't scaled well, though. Vega and Fiji offer very similar performance.
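
To put some rough numbers on that (specs quoted from memory, so treat them as approximate): Fiji and Vega 10 both carry 4096 shaders, so on paper the generational gain comes almost entirely from clock speed.

\[
\begin{aligned}
\text{FP32 throughput} &= \text{shaders} \times \text{clock} \times 2\ \text{(FMA)} \\
\text{Fury X: } 4096 \times 1.05\ \text{GHz} \times 2 &\approx 8.6\ \text{TFLOPS} \\
\text{Vega 64: } 4096 \times 1.55\ \text{GHz} \times 2 &\approx 12.7\ \text{TFLOPS}
\end{aligned}
\]

Clock for clock the two are nearly identical, and even Vega's big clock bump only translated into roughly 25-30% more gaming performance at launch, well short of the ~47% extra throughput on paper.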
 
GCN hasn't scaled well, though. Vega and Fiji offer very similar performance.

There is a certain stubbornness in approach that is holding the architecture back: AMD has spent a long time holding out for games to make optimal use of its future vision for the architecture, and it simply isn't happening. Even with the hardware in the consoles it isn't coming around the way people would like to see, and with nVidia so dominant it just ain't happening.

With a change of focus in the implementation, so that it is better loaded up and less under-utilised by the type of processing it has to deal with here and now (and on a better node than GF 14nm), it would still compete with anything out today, or likely even the next generation.
 
So guys, you think it's really coming? The 2070/2080?

1. What's the probable % increase in performance of the 2080 over the 1080 Ti?
2. Is there any legit info (legit rumours) that it is really coming in March 2018 or later?
3. I guess it's not worth buying a 1080 Ti now, even with discount codes, right?
:)
 
I fully expect to see 1080 Ti-beating, maybe even Titan-beating, performance from the GA104, probably with a lot of the compute functionality stripped out, just like the 900 series.
With any luck the xx70 card will beat the 1080 Ti.
Of course pricing will be all-important, but I fully expect it to be only $50 or so more than the last generation at launch.
They will be announced at GTC in March, with availability a few weeks later.
 
I fully expect to see 1080 Ti-beating, maybe even Titan-beating, performance from the GA104, probably with a lot of the compute functionality stripped out, just like the 900 series.
With any luck the xx70 card will beat the 1080 Ti.
Of course pricing will be all-important, but I fully expect it to be only $50 or so more than the last generation at launch.
They will be announced at GTC in March, with availability a few weeks later.
I hope that perf is attainable.

I've made an appointment in my diary to pop down to Boots sometime before release. With a little preparation, I should be ready for the new prices.
 
There is a certain stubbornness in approach that is holding the architecture back: AMD has spent a long time holding out for games to make optimal use of its future vision for the architecture, and it simply isn't happening. Even with the hardware in the consoles it isn't coming around the way people would like to see, and with nVidia so dominant it just ain't happening.

With a change of focus in the implementation, so that it is better loaded up and less under-utilised by the type of processing it has to deal with here and now (and on a better node than GF 14nm), it would still compete with anything out today, or likely even the next generation.

Congrats on the MOH. :)

Did it come as a surprise when you logged in? :)
 
There is a certain stubbornness in approach that is holding the architecture back: AMD has spent a long time holding out for games to make optimal use of its future vision for the architecture, and it simply isn't happening. Even with the hardware in the consoles it isn't coming around the way people would like to see, and with nVidia so dominant it just ain't happening.

With a change of focus in the implementation, so that it is better loaded up and less under-utilised by the type of processing it has to deal with here and now (and on a better node than GF 14nm), it would still compete with anything out today, or likely even the next generation.


But that is the point. AMD designed an architecture that doesn't scale well with the APIs and game engines that actually exist. AMD have repeatedly tried to change this, first with the failed Mantle API, then with a heavily AMD-influenced DX12 API that very few developers care about at all. AMD scaled GCN by adding more and more shaders across its four shader engines, trying to use brute force to increase throughput, and that simply hasn't scaled in the current environment. By most accounts Nvidia has much more technology to drive efficiency: each generation doesn't simply up shader counts, but significantly extends or improves how the extra compute resources can be utilized and properly loaded. Nvidia have taken a lot of care over bottlenecks, while AMD's approach seems to be to tell developers to ignore the bottlenecks and instead use async compute to throw a load of work at the under-utilized shaders while the GPU is waiting.

GCN is now like an American muscle car: very powerful and fast on the straights, terrible in the corners. Nvidia's designs are like German performance cars: less raw power, but better refined. AMD's approach is to make their cars faster on the straights while ignoring the horrible corner handling, with the advice to developers being to put their foot down on the straights.
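
For anyone wondering what "use async compute to fill the gaps" actually looks like from the developer's side, here's a minimal D3D12 sketch (my own illustration, nothing official; error handling omitted): the application creates a second, compute-only queue alongside the normal graphics queue, and the GPU's scheduler is then free to run work from both at once, which is exactly the mechanism AMD is counting on to keep those under-utilized shaders busy.

    // Minimal async-compute sketch: two queues plus a fence.
    // Assumes Windows with the D3D12 SDK; link against d3d12.lib.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    int main() {
        // Default adapter; feature level 11_0 is the D3D12 minimum.
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        // Graphics queue: renders the frame as usual.
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> gfxQueue;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

        // Compute-only queue: independent work (post-processing, particles,
        // lighting) that the hardware may overlap with the graphics work.
        D3D12_COMMAND_QUEUE_DESC compDesc = {};
        compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> computeQueue;
        device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

        // A fence makes the graphics queue wait only at the point where it
        // actually consumes the compute results, not serialise everything.
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

        // ... record and ExecuteCommandLists() on both queues here ...

        computeQueue->Signal(fence.Get(), 1); // compute work finished
        gfxQueue->Wait(fence.Get(), 1);       // graphics blocks here only
        return 0;
    }

Whether that overlap actually gains anything is down to the hardware: GCN's asynchronous compute engines can feed idle CUs from the second queue, which is why AMD pushes the technique so hard, while on Pascal the measured gains are generally much smaller.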
 