The high-refresh 4K panels are two grand right now anyway, aren't they?
Not too much more than a 2080 Ti, then.

Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
And far too many people who have some weird fascination with people who are spending their money as they want on their hobby. Weird.
It isn't just about having addressable memory or expanded CUDA support. With current architectures, when processing game data each GPU either has to wait until the other GPU has finished whatever it is doing, or they have to do some performance-costly semaphoring (costly when you are measuring things in milliseconds or less). The only way around that with current architectures is for a developer to implement their game using explicit multi-adapter: with intimate knowledge of how their game engine operates and what the rendering workload is comprised of, they can avoid anything costly and farm the work out in the most efficient manner.
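A toy CPU-side analogy of that semaphoring (not real GPU code — just two workers where the second must block on the first's result, which serialises work that could otherwise overlap; the names and timings are made up for illustration):

```python
import threading
import time

result = {}
done = threading.Event()  # plays the role of the cross-GPU semaphore

def gpu0_work():
    time.sleep(0.05)            # pretend to render GPU 0's share of the frame
    result["gpu0"] = [1, 2, 3]  # hypothetical intermediate data
    done.set()                  # signal: GPU 0's portion is finished

def gpu1_work():
    done.wait()                 # GPU 1 stalls here until GPU 0 signals --
                                # this wait is the per-frame cost described above
    result["gpu1"] = [x * 2 for x in result["gpu0"]]

t0 = threading.Thread(target=gpu0_work)
t1 = threading.Thread(target=gpu1_work)
start = time.perf_counter()
t0.start(); t1.start()
t0.join(); t1.join()
elapsed = time.perf_counter() - start

print(result["gpu1"])   # the dependent work only ran after the signal
print(elapsed >= 0.05)  # the dependency serialised the two workers
```

Explicit multi-adapter is the developer sidestepping exactly this: knowing the workload well enough that the two queues rarely, if ever, have to wait on each other.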
Being able to access memory much more directly and quickly might make it possible to get some kind of SLI scaling (somewhere around average percentage gains) out of some games that traditionally haven't worked at all, because the memory operations destroyed any performance boost. The ability to shortcut transactions between the GPUs does add a small performance bonus on top of that, but it would be in the region of single-digit percentages.
Having a solution that didn't require specific software to realise the gains would be ideal. If you could effectively daisy-chain them together and the software just saw a doubling, tripling or quadrupling of memory and compute power, I would be buying a couple of 2080 Ti FEs for my VR rig.
But I guess this would also work against NVidia's interests.
Are these cards anticipated to last 2 years like the 1000 series before the next release or are we expecting new cards in 2019?
NVLink does not work like this. Whilst you could model the whole memory pool of two cards as one, the speed of NVLink is far, far short of that of local memory. It is, in effect, better SLI, but with the opportunity to use the cards in better ways. It's still AFR-based.
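To put rough numbers on that gap (the bandwidth figures below are assumed from Nvidia's published 2080 Ti specs, and the per-frame working set is a made-up illustration, so treat the result as back-of-envelope only):

```python
# Assumed figures -- check against current spec sheets before relying on them.
local_bw = 616    # GB/s, RTX 2080 Ti GDDR6 local memory bandwidth (published spec)
nvlink_bw = 50    # GB/s per direction over the 2080 Ti's NVLink bridge (published spec)

ratio = local_bw / nvlink_bw
print(f"Local memory is roughly {ratio:.1f}x faster than the NVLink bridge")

frame_data_gb = 0.5  # hypothetical per-frame working set, in GB
local_ms = frame_data_gb / local_bw * 1000
nvlink_ms = frame_data_gb / nvlink_bw * 1000
print(f"Reading {frame_data_gb} GB locally:     {local_ms:.2f} ms")
print(f"Reading {frame_data_gb} GB over NVLink: {nvlink_ms:.2f} ms")
```

At a 60 fps budget of ~16.7 ms per frame, an order-of-magnitude penalty on remote reads is why treating the two pools as one flat memory space doesn't work for games.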
Probably a stop-gap. But I wouldn't put it past Nvidia to eke these out for as long as they can possibly get away with, just as we saw with Pascal to Turing.
It's a slightly different link; it's not identical to SLI. Right now it's 'better SLI', but in future potentially much more.
According to the engineers this is inherently a bad idea for gaming, as the software would not account for the slight latency on one of the cards while it communicates with the other to deliver frames.
There is a lot of potential for future developments, as this is a memory-to-memory integration versus the traditional display integration. How that gets used will take some time to come to light, but it has the potential to improve things greatly. It should bring smoother experiences right away, though, given the massive bandwidth.
Cue all of the reviewers who told us not to buy the RTX 2080 Ti now changing their tune and dribbling over it thanks to 30-50% performance increases over the 1080 Ti... Cost will just fade into the background as normal and everyone will take it up the behind again.
As it's a proprietary interface, why couldn't Nvidia improve it to the point where, if the interface was connected, it disabled the PCIe bus between the cards and used their own link instead, running at the required speed and bandwidth? I would argue that technically it would be possible, but maybe too costly.
Essentially turning the second card into an upgrade, like the old memory upgrades on Matrox cards from times gone by.
If you wanted the two cards to be seen separately instead, just disconnect the NVLink between them.
Weird? Are you serious? Then explain to all of us weird people why you really must have one of these cards so badly. Nvidia haven't given you a single good reason that can be backed up by proof or vouched for by a reputable source. Furthermore, I'll be the first to snap up a shiny 2080 Ti if they're any good, when they're actually on the shelves... Christmas, maybe. The claims NV are making will take years to come to fruition; unfortunately I think the card will be a bit of a dud, a big disappointment.
Current-gen enthusiast boards don't support these features, either.