Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
I forget how insular the community is here. Completely oblivious to actual market trends and realities.
They run it 'ok'. They're borderline if you want to do 60fps in nearly everything with good settings. Some games you're gonna have to make more notable compromises.
And games will get more demanding. These two cards you mention might still be 'ok' for a couple years, but for those who really want to push 4k *comfortably*, we definitely need more. Extra horsepower will obviously be the biggest thing, but if super high bandwidth HBM offers notable benefits at these high resolutions, I would imagine consumers, especially enthusiast consumers like those here, would want that. I find it absolutely bizarre that some of y'all are pushing against this.
Combined with GDDR6 for the x70/x80 cards, I think it would create a pretty great 'high resolution-capable' lineup.
Maybe GDDR6 will still be used, though. Of course it will still be 'enough'; even GDDR5X will be 'enough'. But who pays top dollar for 'enough', ya know? A bit of overkill à la the Kepler and Maxwell Titans is not necessarily a bad thing.
I think the market for 4-way SLI, or even 2-way SLI for that matter, is so small it's just not worth their time.
To put it in perspective:
256-bit at 14 Gbps = 448 GB/s bandwidth
That's pretty impressive for 256-bit.
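The figure above follows from the standard peak-bandwidth formula: bus width in bits, divided by 8 to get bytes, times the per-pin data rate. A minimal sketch (the 352-bit/11 Gbps second example is the GTX 1080 Ti's published configuration, added here for comparison):

```python
def memory_bandwidth_gbps(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(memory_bandwidth_gbps(256, 14))  # 448.0 GB/s -- the 256-bit figure above
print(memory_bandwidth_gbps(352, 11))  # 484.0 GB/s -- e.g. GTX 1080 Ti
```

Note this is a theoretical peak; real-world effective bandwidth is lower and depends on access patterns and compression.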
Though I wonder if this is enough for 4K60. We haven't had the chance to see any 600+ GB/s cards yet, to see if extreme memory bandwidth really makes a big difference at current compute power.
Volta might also have better memory efficiency, though.
Come 2019, a Volta Ti will do everything at 4K, but it will be a fairly large chip and will be expensive.
Why have Nvidia stopped doing refreshes?
The 500 series was a 400 refresh
The 700 series was a 600 refresh
Both the 900 and 1000 series introduced new tech
Shiny is so hard to resist at times, it just has to be bought even if it isn't needed
OK. Realistically, how long before we see Volta? And more importantly, how long till we see the "2080 Ti" or whatever it will be? That, for me, will be the first no-compromises 4K card. That's what I am betting on, assuming the "2080" is about on par with the 1080 Ti, as is usually the case.
The Titan is off the cards for me. I don't see it as good value compared to the Ti. In fact... god-awful value.
I think that would be a push? Was the 1080 not equivalent to a decent OC 980 Ti?
You mean like the way we heard about the Titan Xp?
I can't see this happening.
AMD's cores badly need the power savings (to budget more to the core) and the extra memory bandwidth (Vega's design is starved of it). AMD's GPUs need an R300- or Tesla-like shift in design to fix this and get back on top.
Which I hope Navi will be, so they can use more cost-effective GDDR5X/GDDR6.
It all depends. 20W on a top-tier 250W card is not much in the grand scheme of things, but when your memory controller and RAM are sucking up a third of the board's TDP, as with the RX 480/RX 580, it makes you wonder how much performance is being held back to stay inside the typical board TDPs of 250/150/75W.
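To put the budget point in numbers, here's a tiny illustrative sketch. The 50W memory-subsystem figure is an assumption for a GDDR5 controller plus chips, not a measured value; the point is only that the same memory power costs proportionally more on a mid-range board:

```python
def memory_share(board_tdp_w, memory_subsystem_w):
    """Fraction of the board TDP consumed by the memory subsystem (assumed figures)."""
    return memory_subsystem_w / board_tdp_w

# ~50W of memory power on a 150W RX 480-class board is a third of the budget:
print(f"{memory_share(150, 50):.0%}")  # 33%
# The same 50W on a 250W flagship board is only a fifth:
print(f"{memory_share(250, 50):.0%}")  # 20%
```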
I think it would be better for AMD to come out with a 5970-type product, i.e. not quite taking the performance crown, but offering 90-95% of the cutting-edge performance on a smaller, cheaper-to-produce chip with much less power consumption.
It's happened in the past: when Nvidia (NV30/Fermi) or AMD (Rage 128/R600/Vega) have come out with big, hot, power-hungry stinkers, they had to start from scratch to get back in the "game", so to speak.
https://www.gamersnexus.net/guides/...marks-async-future-is-bright-for-volta/page-2
In the games that do not use asynchronous compute, the Titan V is barely any faster than a Titan Xp.
Lies!
There is a certain stubbornness in the approach that is holding the architecture back: for a long time they've been holding out for games to make optimal use of a future vision for the architecture, and it simply isn't happening. Even with the hardware in consoles it isn't coming around like people would like to see, and with nVidia so dominant it just ain't happening.
With a change of focus in the implementation, so that it is better loaded up and less under-utilised by the type of processing it has to deal with here and now (and on a better node than GF 14nm), it would still compete with anything out today, or likely even the next generation.
Most likely. It's a lot of money but then I can stop thinking about it. It's not like I have any other expensive hobbies.