NVIDIA Volta with GDDR6 in early 2018?

Why does the xx80 have to be faster than the 1080ti? And why does the xx70 have to be faster than the 1080?
Nothing has to be anything. It will be what Nvidia want it to be. But going by historical figures, that is what tends to happen. Doesn't mean it's guaranteed to happen, however.
 
That wasn't true for the 700 series though, was it? The 770 was more or less a 680.
I thought Nvidia said it was a big architectural change bringing lots of performance? Like Maxwell did? Not to mention a pretty big node jump.

Personally I am expecting the xx70 to match the 1080Ti or come within reach of it. But it could obviously end up like a 1080, though that would be silly if it does. We will see soon enough :D
 
I thought Nvidia said it was a big architectural change bringing lots of performance? Like Maxwell did? Not to mention a pretty big node jump.

Personally I am expecting the xx70 to match the 1080Ti or come within reach of it. But it could obviously end up like a 1080, though that would be silly if it does. We will see soon enough :D
AFAIK they've only stated a large performance jump in compute tasks. And from what we've been reading, a lot of the architectural changes in Volta are aimed at faster compute/AI/deep learning, etc. (e.g. Tensor cores).

A Volta gaming card without those new features could be basically an improved/tweaked Pascal.

From what we know about Volta so far, all bets are off. I wouldn't be surprised if gains in gaming were much more modest.
 
AFAIK they've only stated a large performance jump in compute tasks. And from what we've been reading, a lot of the architectural changes in Volta are aimed at faster compute/AI/deep learning, etc. (e.g. Tensor cores).

A Volta gaming card without those new features could be basically an improved/tweaked Pascal.

From what we know about Volta so far, all bets are off. I wouldn't be surprised if gains in gaming were much more modest.

This is what I am expecting too.
 
I thought Nvidia said it was a big architectural change bringing lots of performance? Like Maxwell did? Not to mention a pretty big node jump.

Personally I am expecting the xx70 to match the 1080Ti or come within reach of it. But it could obviously end up like a 1080, though that would be silly if it does. We will see soon enough :D

Well, if the 2070 only ends up at 1080 performance then that is really disappointing, considering it will most likely cost ~£400. Another couple of years of stagnation on price/performance.

Unless they go back to sane prices and have the 70 card at or below £300 (considering 70 cards are now mid-range).

At £400 it has to match the 1080Ti.
 
Well, if the 2070 only ends up at 1080 performance then that is really disappointing, considering it will most likely cost ~£400. Another couple of years of stagnation on price/performance.

Unless they go back to sane prices and have the 70 card at or below £300 (considering 70 cards are now mid-range).

At £400 it has to match the 1080Ti.
Yep. This is why I am expecting the 2070 to be at worst 10% slower than a 1080Ti, but running much cooler and more efficiently. Some are saying it won't, but I doubt that very much.
 
We should really always talk about the dollar price; whatever that converts to in £ is then the more accurate figure. Nvidia has little choice in UK prices being £400 or £300: in dollars those could be the exact same launch price, with one at an exchange rate of 1.3 and the other at 1.7.

Of the two scenarios I would guess 1.3 currently, but I'd expect better performance to be possible. When is 7nm likely?

https://www.pcgamesn.com/nvidia/nvidia-volta-gpu-specifications
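The exchange-rate point above can be put into quick arithmetic. Note the $450 launch price is purely a hypothetical figure for illustration, not a known number; UK retail prices also include 20% VAT, which is folded in here:

```python
# Sketch: the same hypothetical USD launch price lands very differently
# in GBP depending on the exchange rate (plus 20% UK VAT).
usd_msrp = 450.0  # hypothetical dollar launch price (assumption)
vat = 1.20        # UK VAT of 20%

for rate in (1.7, 1.3):  # GBP/USD exchange rates from the post
    gbp = usd_msrp / rate * vat
    print(f"${usd_msrp:.0f} at a rate of {rate} is roughly £{gbp:.0f} inc. VAT")
```

Which lines up with the rough ~£300 vs ~£400 split being discussed: the same dollar price at 1.7 lands near £318, and at 1.3 near £415.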
 
I think at the moment, the only company offering anything other than stagnation is Nvidia.

I meant in price/performance. The under-£300 market has not really changed in the past 3 years, and if the 2070 is 1080 performance then we can say that ~£400 performance has stagnated.

It wasn't only the exchange rate that messed it up, it was also the Founders Edition crap. No cards came in at the actual MSRP; they were all at Founders MSRP or higher. If it was just the exchange rate, then the 1070 should have been around ~£330, as the 970 was £260-270 at launch.
 
AFAIK they've only stated a large performance jump in compute tasks. And from what we've been reading, a lot of the architectural changes in Volta are aimed at faster compute/AI/deep learning, etc. (e.g. Tensor cores).

A Volta gaming card without those new features could be basically an improved/tweaked Pascal.

From what we know about Volta so far, all bets are off. I wouldn't be surprised if gains in gaming were much more modest.
I'm not saying you're wrong, but basing it on the fact that we haven't heard about gaming optimisations is a little premature. Volta as it exists now is a compute card, so obviously Nvidia have talked about its compute abilities rather than its gaming chops.
 
I don't get your calculations. GV100 is already a larger die than GP100 and gets 50% better perf/W. No idea how you're getting to 70% from that. The other point is that only the NVLink GV100 is 50% better in perf/W; the PCI-E version is only 40% better. So I'll take that as a starting point and expect the 2080 to be 40% faster than the 1080, which makes it ~10% faster than a 1080Ti. That's way more realistic than your stuff.

That's only because the NVLink one is 300W, so it's going well over the sweet spot for the card, i.e. the 25% increase from 250W to 300W doesn't give you 25% more performance because it's into diminishing returns.

The 1070 die is only 314mm2, and cut-down too, and 150W TDP. It's very small and power efficient at the moment.

If the 2070 die is a cut-down ~400mm² at around 200W TDP, it is highly plausible to expect more than a 50% gain out of that, because you should expect roughly a 50% performance uplift even if it stayed at 150W TDP.
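The perf/W argument in this exchange can be sketched as back-of-envelope arithmetic. The 40% perf/W figure and the 200W TDP are the thread's assumptions, not measured numbers, and this deliberately ignores the diminishing-returns caveat above:

```python
# Back-of-envelope: relative performance scales with the perf/W
# improvement times the power budget, before diminishing returns.
perf_per_watt_gain = 1.40  # PCIe GV100 vs GP100, per an earlier post
tdp_1070 = 150.0           # watts, actual 1070 TDP
tdp_2070 = 200.0           # watts, hypothetical 2070 TDP (assumption)

iso_power = perf_per_watt_gain                            # +40% at the same 150 W
raised_power = perf_per_watt_gain * (tdp_2070 / tdp_1070)

print(f"At 150 W: ~{(iso_power - 1) * 100:.0f}% faster than a 1070")
print(f"At 200 W: ~{(raised_power - 1) * 100:.0f}% faster, ignoring diminishing returns")
```

Under those assumptions a 200W part comes out around 87% ahead of a 1070 on paper, which is why the "more than 50%" claim isn't unreasonable, even allowing for real-world losses.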


Why does the xx80 have to be faster than the 1080ti? And why does the xx70 have to be faster than the 1080?

It would be more accurate to say it HAS to have higher perf/w and perf/mm2, otherwise shareholders and investors will see the company unfavourably.

It has to show progress, and/or be cheaper from Nvidia to manufacture the same/better performance. To drive future profit.

And this basically translates into it very very likely being the case the 2080 will be faster than the 1080 Ti.

Also just think about it mathematically. Unless the 2080 has a very small die size, around 350mm2 or lower, it will be faster anyway unless Volta isn't a very good arch. The 1080 Ti is only 471mm2 (cut-down), that's not that big.

So if the 2080 is about 400mm2 (full die), it'll be close to the same true size as the 1080 Ti, but then made on a better process with a better (supposedly) architecture. How could it not be faster?
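The "true size" comparison can be made concrete. The GP102 shader counts are the known figures for the 1080 Ti and the full die; the 400mm² for a hypothetical 2080 die is just the post's guess:

```python
# "True size" of the 1080 Ti: scale the full GP102 die area by the
# fraction of shaders actually enabled on the cut-down part.
gp102_area = 471.0  # mm^2, full GP102 die
enabled = 3584      # CUDA cores on the 1080 Ti (cut down)
full = 3840         # CUDA cores on the full GP102

effective_area = gp102_area * enabled / full
hypothetical_2080 = 400.0  # mm^2, assumed fully enabled Volta die (guess)

print(f"1080 Ti effective area: ~{effective_area:.0f} mm^2")
print(f"Hypothetical 2080 die:   {hypothetical_2080:.0f} mm^2")
```

That puts the 1080 Ti's enabled silicon at roughly 440mm², so a fully enabled ~400mm² die on a better process and architecture would indeed be playing in the same size class.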


That wasn't true for the 700 series tho was it. 770 was a 680 more or less.

That one doesn't really count, since the 770 was literally an overclocked rebadged 680.

I'm talking about examples where the xx70 of the next generation has either undergone an arch change, or process change, or both.

680/670 -> 770 was neither, and 1070 -> 2070 is both.
 
680/670 -> 770 was neither, and 1070 -> 2070 is both.
Do we know that for sure? I read somewhere that GeForce Volta could be on 16nm still, with only GV100 being on the new "12nm" process.

Basically that nV have completely split their gaming and compute lines now, and that what holds true for compute isn't automatically going to hold true for gaming cards.
 
Do we know that for sure? I read somewhere that GeForce Volta could be on 16nm still, with only GV100 being on the new "12nm" process.

Basically that nV have completely split their gaming and compute lines now, and that what holds true for compute isn't automatically going to hold true for gaming cards.

I'm not sure they've 100% confirmed they're using 12nm for their consumer cards, but we can be 99.9% sure for 2 reasons.
  1. Because of the need to make progress I covered before. They're already near the power and performance limits they can get out of Pascal on 16nm, so if they didn't use 12nm for Volta they'd be relying entirely on the architecture for all of the performance gain.
  2. Cost. It should actually be cheaper overall for them to build Volta on 12nm, because otherwise they'd effectively have had to make 2 versions of Volta, 1 for 12nm and 1 for 16nm. You can never simply shrink/enlarge dies for different processes any more. So while splitting Volta between compute on 12nm and consumer on 16nm might lower the manufacturing cost of the dies slightly, it would increase the R&D budget, likely making the net overall cost greater.
There is also a slightly tangential third reason, and that's time. Volta will be competing with AMD's 7nm product for around a third of its product lifecycle (i.e. AMD will have a process advantage for something like 9 months, and a substantial one, as 7nm is massively better than 14/16nm), so it needs to be as good as possible on the off chance AMD's Navi is very good. This also points towards 12nm, as it's the best they can get right now.
 
I'm not sure they've 100% confirmed they're using 12nm for their consumer cards, but we can be 99.9% sure for 2 reasons.
  1. Because of the need to make progress I covered before. They're already near the power and performance limits they can get out of Pascal on 16nm, so if they didn't use 12nm for Volta they'd be relying entirely on the architecture for all of the performance gain.
  2. Cost. It should actually be cheaper overall for them to build Volta on 12nm, because otherwise they'd effectively have had to make 2 versions of Volta, 1 for 12nm and 1 for 16nm. You can never simply shrink/enlarge dies for different processes any more. So while splitting Volta between compute on 12nm and consumer on 16nm might lower the manufacturing cost of the dies slightly, it would increase the R&D budget, likely making the net overall cost greater.
There is also a slightly tangential third reason, and that's time. Volta will be competing with AMD's 7nm product for around a third of its product lifecycle (i.e. AMD will have a process advantage for something like 9 months, and a substantial one, as 7nm is massively better than 14/16nm), so it needs to be as good as possible on the off chance AMD's Navi is very good. This also points towards 12nm, as it's the best they can get right now.

Unlike 28nm to 16FF, it's a lot easier to directly shrink 16FF to 12FF; depending on the product and the libraries used, only some areas might need reworking.

The incarnation of Volta we have now was originally designed for sub-16FF, so it should be relatively possible to take it to 10nm or below.

I can't tell you what, but nVidia is amongst the TSMC clients testing products on 7nm - IIRC 25 major clients including nVidia will have test samples back by the end of this year - while 7nm at GF is yet again not producing inspirational results according to some, so I'll be surprised if AMD pull a 7nm advantage out of the bag.
 
TSMC's 12nm process is also custom-made for Nvidia. It should be much more cost-effective for Nvidia to use that 12nm process since they already paid so much for a bespoke node. This will also give them a half-node size advantage over AMD's Vega 20 next year.
 
TSMC's 12nm process is also custom-made for Nvidia. It should be much more cost-effective for Nvidia to use that 12nm process since they already paid so much for a bespoke node. This will also give them a half-node size advantage over AMD's Vega 20 next year.

Hmm, dunno how sure the Vega 20 claim is, as from everything I've seen it's 7nm and being made by Samsung, so AMD are jumping right from 14 to 7. With that there's the rumour, which I'm not totally sure on due to heat, that there's going to be a dual Vega 10. And Nvidia has already stated the main hold-up for Volta is cost, and that Volta is out in the wild, so I have a hinky feeling Volta is not going to be cheap or here anytime soon. And after Hynix basically screwed AMD over on HBM2, so AMD had to go with Samsung for it at the last minute, if Samsung are doing the GPU and the HBM that could push the price down on Vega 20.

And just as an aside, there are rumours Volta is going to be on GDDR6 and GDDR5X. I highly doubt it's going to be both, so that could be from Nvidia not even being sure, due to pricing.
 
Unlike 28nm to 16FF, it's a lot easier to directly shrink 16FF to 12FF; depending on the product and the libraries used, only some areas might need reworking.

The incarnation of Volta we have now was originally designed for sub-16FF, so it should be relatively possible to take it to 10nm or below.

I can't tell you what, but nVidia is amongst the TSMC clients testing products on 7nm - IIRC 25 major clients including nVidia will have test samples back by the end of this year - while 7nm at GF is yet again not producing inspirational results according to some, so I'll be surprised if AMD pull a 7nm advantage out of the bag.

Of course they're going to have 7nm products, just after AMD, due to their release cycle and due to TSMC being later.

TSMC's first 7nm process is only for low-power chips, so their good yields and being ~3 months ahead of GloFo only apply to mobile processors. It also uses a very different track height, so it's a different density, and it's unlikely Nvidia will use that process even for a pipe-cleaner like the 750 Ti was.

TSMC's high-performance 7nm will be available possibly as much as 6 months after GloFo's, since GloFo is going high-performance first.
 