Why does the xx80 have to be faster than the 1080ti? And why does the xx70 have to be faster than the 1080?

Nothing has to be anything. It will be what Nvidia want it to be. But going by historical figures, that is what tends to happen. That does not mean it is guaranteed to happen, however.
xx70 has matched/beaten the xx80 Ti from the previous generation since the 600 series at least.

That wasn't true for the 700 series though, was it? The 770 was more or less a 680.
I thought Nvidia said it was a big architectural change bringing lots of performance, like Maxwell did? Not to mention a pretty big node jump.
AFAIK they've only stated a large performance jump in compute tasks. And from what we've been reading, a lot of the architectural changes in Volta are aimed at faster compute/AI/deep learning, etc. (e.g. Tensor cores).
A Volta gaming card without those new features could be basically an improved/tweaked Pascal.
From what we know about Volta so far, all bets are off. I wouldn't be surprised if gains in gaming were much more modest.

Personally I am expecting the xx70 to be 1080Ti level or within reach. But it could obviously end up more like a 1080, though that would be silly if it did. We will see soon enough!
Well, if the 2070 only ends up at 1080 performance then that is really disappointing, considering it will most likely cost ~£400. Another couple of years of stagnation on price/performance.

Yep. This is why I am expecting the 2070 to be at worst 10% slower than a 1080Ti, but running much cooler and more efficiently. Some are saying it won't, but I doubt that very much.
Unless they go back to sane prices and have the 70 card at or below £300 (considering 70 cards are now mid-range).
At £400 it has to match the 1080Ti.
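As a rough illustration of the price/performance point, here is a minimal sketch; every price and performance figure in it is an assumption for the sake of the example, not an announced spec.

```python
# Illustrative only: prices in GBP and performance relative to a 1080 Ti are
# all assumed figures, not announced specs.
cards = [
    # (name, assumed price in GBP, assumed perf relative to a 1080 Ti)
    ("GTX 1080",                        500, 0.75),
    ("GTX 1080 Ti",                     700, 1.00),
    ("2070 if it matches the 1080 Ti",  400, 1.00),
    ("2070 if it only matches a 1080",  400, 0.75),
]

baseline = 1.00 / 700  # perf-per-pound of the assumed 1080 Ti

for name, price, perf in cards:
    value = (perf / price) / baseline
    print(f"{name:33s} {value:.2f}x the 1080 Ti's perf-per-pound")
```

On those assumptions, a £400 2070 that matched a 1080Ti would be a genuine step forward in perf-per-pound, while one that only matched a 1080 would be a much smaller one, which is the stagnation being complained about above.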
I think at the moment, the only company offering anything other than stagnation is Nvidia.
I'm not saying you're wrong, but basing it on the fact that we haven't heard about gaming optimizations is a little premature. Volta as it exists now is a compute card, so obviously Nvidia have talked about its compute abilities rather than its gaming chops.
I don't get your calculations. GV100 is already a larger die than GP100 and gets 50% better perf/W; no idea how you're getting to 70% from that. The other point is that only the NVLink GV100 is 50% better in perf/W; the PCI-E version is only 40% better. So I'll take that as the starting point and expect the 2080 to be 40% faster than the 1080, which makes it ~10% faster than the 1080Ti. That's way more realistic than your numbers.
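To make that arithmetic explicit, a minimal sketch: the 40% figure is the PCI-E GV100 perf/W gain quoted above, while the ~1.30x gap between a 1080Ti and a 1080 is an assumption used purely for illustration, not a measured result.

```python
# Rough extrapolation, assuming the PCI-E GV100 perf/W gain carries over to a
# gaming card at roughly the same board power, and assuming a 1080 Ti is about
# 1.30x a 1080 (an approximation, not a benchmark figure).
PERF_PER_WATT_GAIN = 1.40      # PCI-E GV100 vs GP100, per the post above
GTX_1080_TI_VS_1080 = 1.30     # assumed relative performance

perf_2080_vs_1080 = PERF_PER_WATT_GAIN                       # same power budget
perf_2080_vs_1080_ti = perf_2080_vs_1080 / GTX_1080_TI_VS_1080

print(f"2080 vs 1080:    +{(perf_2080_vs_1080 - 1) * 100:.0f}%")     # +40%
print(f"2080 vs 1080 Ti: +{(perf_2080_vs_1080_ti - 1) * 100:.0f}%")  # roughly +8%
```

Which lands in the same ballpark as the ~10% figure in the post above.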
680/670 -> 770 was neither a node shrink nor a new architecture, and 1070 -> 2070 is both.

Do we know that for sure? I read somewhere that GeForce Volta could be on 16nm still, with only GV100 being on the new "12nm" process.
Basically that nV have completely split their gaming and compute lines now, and that what holds true for compute isn't automatically going to hold true for gaming cards.
I'm not sure they've 100% confirmed they're using 12nm for their consumer cards, but we can be 99.9% sure, for two reasons:
- Because of the need to make progress I covered before. They're already near the power limit and performance limit they can get out of Pascal on 16nm, so if they didn't use 12nm for Volta they'd be relying entirely on the architecture for all of the performance gain.
- Cost. It should actually be cheaper overall for them to build Volta on 12nm, because otherwise they'd effectively have had to make two versions of Volta, one for 12nm and one for 16nm. You can never simply shrink/enlarge dies for different processes any more. So while splitting Volta between compute on 12nm and consumer on 16nm might lower the manufacturing cost of the dies slightly, it would increase the R&D budget, likely making the net overall cost greater.
There is also a slightly tangential third reason, and that's time. Volta will be competing with AMD's 7nm product for around a third of its product lifecycle (i.e. AMD will have a process advantage for something like nine months, and a substantial one, as 7nm is massively better than 14/16nm), so it needs to be as good as possible on the off chance AMD's Navi is very good. That also points towards using 12nm, as it's the best they can get right now.
TSMC's 12nm process is also custom-made for Nvidia. It should be much more cost-effective for Nvidia to use that 12nm process since they already paid so much for a bespoke node. This will also give them a half-node advantage over AMD's Vega 20 next year.
Unlike the move from 28nm to 16FF, it's a lot easier to directly shrink 16FF to 12FF; depending on the product and the libraries used, only some areas might need reworking.
The incarnation of Volta we have now was originally intended for a sub-16FF process, so it should be relatively straightforward to take it to 10nm or below.
I can't tell you what, but nVidia is amongst the TSMC clients testing products on 7nm. IIRC 25 major clients, including nVidia, will have test samples back by the end of this year, while 7nm at GF is yet again not producing inspiring results according to some, so I'll be surprised if AMD pull an advantage on 7nm out of the bag.