
NVIDIA Volta with GDDR6 in early 2018?

Man of Honour | OP | Joined: 21 May 2012 | Posts: 31,940 | Location: Dalek flagship
They run it 'ok'. They're borderline if you want 60 fps in nearly everything with good settings. In some games you're going to have to make more notable compromises.

And games will get more demanding. The two cards you mention might still be 'ok' for a couple of years, but for those who really want to push 4K *comfortably*, we definitely need more. Extra horsepower will obviously be the biggest thing, but if super-high-bandwidth HBM offers notable benefits at these high resolutions, I would imagine consumers, especially enthusiast consumers like those here, would want that. I find it absolutely bizarre that some of y'all are pushing against this.

Combined with GDDR6 for the x70/x80 cards, I think it would create a pretty great 'high resolution-capable' lineup.

Maybe GDDR6 will still be used, though. Of course it will still be 'enough', even GDDR5X will be 'enough', but who pays top dollar for 'enough', ya know? A bit of overkill, à la the Kepler and Maxwell Titans, is not necessarily a bad thing.

Something most people don't even consider when talking about memory capability is that GDDR5X is more than fast enough at 2160p for 4-way SLI with a stack of Titans. Don't forget that in SLI the memory is mirrored on all cards, and with 4 cards in play the fps even at 2160p is very high. The biggest problem I find with mGPU is not the memory but how the cards are connected when using 4 cards: AMD's solution using the PCI-E slot is bad, as it has to share that bandwidth with the normal GPU traffic, and NVidia are not much better, as there are no HB 4-way SLI bridges.
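
To make the mirroring point concrete, here's a minimal Python sketch (the VRAM size, fps figure and scaling factor are made-up placeholders, not benchmarks) of why 4-way SLI can add frame rate but never adds usable VRAM:

```python
# Rough illustration of the point above: in SLI/AFR each card holds a full
# copy of the frame data, so VRAM capacity does not add up, while frame
# rate can (ideally) scale with card count. All numbers are placeholders.

def sli_effective_vram_gb(per_card_gb: float, num_cards: int) -> float:
    # Memory is mirrored, so the usable pool is still one card's worth,
    # no matter how many cards are installed.
    return per_card_gb

def ideal_afr_fps(single_card_fps: float, num_cards: int, scaling: float = 0.8) -> float:
    # Alternate-frame rendering with an assumed per-extra-card scaling factor.
    return single_card_fps * (1 + (num_cards - 1) * scaling)

print(sli_effective_vram_gb(12, 4))  # 12 GB usable, not 48 GB
print(ideal_afr_fps(40, 4))          # 136.0 fps, if scaling really were ~80%
```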
 
To put it in perspective:

256-bit at 14 Gbps = 448 GB/s bandwidth

That's pretty impressive for 256-bit.
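
For anyone who wants to check the arithmetic, a quick sketch (Python; the 384-bit case is just a hypothetical for comparison, not a rumoured spec):

```python
# Peak GDDR bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8 bits per byte.
def gddr_bandwidth_gb_s(bus_width_bits: int, per_pin_gbps: float) -> float:
    return bus_width_bits * per_pin_gbps / 8

print(gddr_bandwidth_gb_s(256, 14))  # 448.0 GB/s, the figure quoted above
print(gddr_bandwidth_gb_s(384, 14))  # 672.0 GB/s for a hypothetical 384-bit card
```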

Though I wonder if this is enough for 4K60. We haven't had the chance to see any 600+ GB/s cards yet, to see if extreme memory bandwidth really makes a big difference at current compute power.

Volta might also have better memory efficiency, though.


Yes we have, and they are very fast. :)

[attached image: Uw8h1Xy.jpg]
 
OK. Realistically, how long before we see Volta? And more importantly, how long till we see the "2080 Ti" or whatever it will be? That, for me, I think will be the first no-compromises 4K card. That's what I am betting on, assuming the "2080" is about on par with the 1080 Ti, as is usually the case.

Probably sometime around March 2018 for the mid-range cards and August 2018 for the Titan. The Ti will probably turn up about six months after that.

This is based on what NVidia did with Pascal.

This also means that there won't be any big jumps in performance until the arrival of the Titan.
 
The Titan is off the cards for me. I don't see it as good value compared to the Ti. In fact... god-awful value.

If someone has already got a 1080 Ti, I don't think they will see anything noticeably faster with Volta until the Titan turns up in 12 months' time or the Volta Ti in about 18 months' time.

The best anyone is likely to get with the Volta mid-range cards is about 25% over the current 1080 Ti, which makes upgrading not very appealing.
 
I think that would be a push? Was the 1080 not equivalent to a decent OC 980 Ti?

It's complicated.

If you overclock both the 980 Ti and 1080 the Pascal card is about 25% faster.

If you run a Kingpin 980 Ti on LN2 it will beat any 1080 on LN2.

Maxwell is more efficient per clock cycle than Pascal.

For normal use, though, a 1080 is about 25% faster than a GTX 980 Ti.
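
As a back-of-envelope for that 25% figure: a clock-speed advantage can outweigh a per-clock deficit. The clocks and the per-clock ratio below are illustrative assumptions, not measured numbers:

```python
# Relative performance ~= (clock ratio) x (per-clock efficiency ratio).
def relative_perf(clock_a_mhz: float, clock_b_mhz: float, per_clock_a_vs_b: float) -> float:
    return (clock_a_mhz / clock_b_mhz) * per_clock_a_vs_b

# Hypothetical: 1080 OC at ~2050 MHz vs 980 Ti OC at ~1500 MHz, with Pascal
# assumed ~8% weaker per clock than Maxwell.
print(relative_perf(2050, 1500, 0.92))  # ~1.26, i.e. roughly 25% faster
```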
 
I can't see this happening.

AMD's cores badly need the power savings (to budget more to the core) and the extra memory bandwidth (Vega's design is starved of it). AMD's GPUs need an R300- or Tesla-like shift in design to fix this and get back on top.

Which I hope Navi will be, so they can use more cost-effective GDDR5X/GDDR6.

There is no reason to stop a GDDR5(X)/GDDR6 card from using a bit more power than an HBM-based one. There are also advantages in moving the memory away from the GPU core, as it makes cooling easier and higher clock speeds more achievable.

If the next round of AMD cards do use GDDR6 and consume an extra 20 watts, who cares, if it means the cards are easier to manufacture and there are plenty of them on the shelves?
 
It all depends. 20 W on a top-tier 250 W card is not much in the grand scheme of things, but when your memory controller and RAM are sucking up a third of the board's TDP, like with the RX 480/RX 580, it makes you wonder how much performance is being held back to get inside the typical board TDPs of 250/150/75 W.

I think it would be better for AMD to come out with a 5970-type product, i.e. not quite taking the performance crown but offering 90-95% of the cutting-edge performance on a smaller, cheaper-to-produce chip with much less power consumption.

It's happened in the past: when NVidia (NV30/Fermi) or AMD (Rage 128/R600/Vega) have come out with big, hot, power-hungry stinkers, they had to start from scratch to get back in the "game", so to speak.

I think design is more important than how many watts a card pulls; the GTX 480 was just a very poorly designed card, as it used less power than a lot of today's top-end cards yet ran very hot.
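
A rough sanity check on the quoted point about the memory subsystem eating around a third of a mid-range board's power budget (the wattage figures below are assumptions for illustration, not measurements):

```python
# Share of board TDP taken by the memory controller + DRAM chips.
def memory_share_of_tdp(memory_subsystem_w: float, board_tdp_w: float) -> float:
    return memory_subsystem_w / board_tdp_w

# Hypothetical RX 480-class card: 150 W board TDP, ~50 W for memory.
print(f"{memory_share_of_tdp(50, 150):.0%}")  # 33% of the budget

# The same ~50 W on a hypothetical 250 W top-tier card is a smaller slice.
print(f"{memory_share_of_tdp(50, 250):.0%}")  # 20%
```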
 
There is a certain stubbornness in approach that is holding the architecture back - holding out for a long time for games to make optimal use of a future vision for the architecture, and it simply isn't happening. Even with the hardware in consoles it isn't coming around like people would like to see, and with nVidia so dominant it just ain't happening.

With a change of focus in the implementation, so that it is better loaded up and less under-utilised by the type of processing it has to deal with here and now (and on a better node than GF 14nm), it would still compete with anything out today, or likely even the next generation.

Congrats on the MoH. :)

Did it come as a surprise when you logged in? :)
 