
Poll: Will you be buying a 2080Ti/2080/2070?

Which card will you be buying?


  • Total voters
    1,201
  • Poll closed.
And there are far too many people who have some weird fascination with people who are spending their money as they want on their hobby. Weird.

Couldn't agree more. There's some major saltiness about people spending money on what they want to - on an item that's already a luxury and could never be classed as something essential in life. Worse is the wilful avoidance of the fact that there is no risk - no money goes anywhere till they ship, reviews will be out before they do, and you can cancel a preorder (or return one) easy peasy in the UK.
 
It isn't just about having addressable memory or expanded CUDA support. With current architectures, when processing game data, each GPU either has to wait until the other GPU has finished whatever it is doing, or they have to do some semaphoring that is costly in performance terms (when you are processing things in milliseconds or less). The only way around that with current architectures is if a developer implements their game using explicit multi-adapter: with intimate knowledge of how their game engine operates and what the rendering workload is comprised of, they can avoid anything costly and farm out the work in the most efficient manner.
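
To make the "farm out the work explicitly" idea concrete, here is a minimal CUDA sketch of splitting independent work across two GPUs so neither blocks on the other. It illustrates the principle rather than DirectX 12's explicit multi-adapter API, and the kernel name processHalf is hypothetical:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical per-GPU kernel: each device processes its own half of the work.
__global__ void processHalf(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;   // stand-in for real per-frame work
}

int main() {
    const int n = 1 << 20;        // elements per GPU
    float *buf[2];

    // Explicitly split the workload: one independent allocation and
    // kernel launch per device, so the GPUs never wait on each other.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaMalloc(&buf[dev], n * sizeof(float));
        processHalf<<<(n + 255) / 256, 256>>>(buf[dev], n);
    }

    // Synchronise each device only once, at the end of the "frame".
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(buf[dev]);
    }
    printf("both GPUs done\n");
    return 0;
}
```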

Being able to access the other card's memory much more directly and quickly might make it possible to get some kind of SLI scaling (somewhere around the average percentage gains) out of some games that traditionally haven't worked at all, because the memory operations destroyed any performance boost. The ability to shortcut transactions between the GPUs does add a small performance bonus too, but that would be in the region of single-digit percentages.
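
For reference, this kind of direct GPU-to-GPU memory access is what CUDA exposes as peer access (over NVLink when the bridge is fitted, otherwise over PCIe). A minimal sketch, assuming two peer-capable cards and omitting error checking:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int canAccess = 0;
    // Check whether device 0 can read/write device 1's memory directly.
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) { printf("no peer access between GPU 0 and GPU 1\n"); return 1; }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   // second argument (flags) must be 0

    const size_t bytes = 64 << 20;      // 64 MiB test buffer
    float *src, *dst;
    cudaSetDevice(1);
    cudaMalloc(&src, bytes);
    cudaSetDevice(0);
    cudaMalloc(&dst, bytes);

    // Copy straight from GPU 1's memory into GPU 0's - no host bounce buffer.
    cudaMemcpyPeer(dst, 0, src, 1, bytes);
    cudaDeviceSynchronize();
    printf("peer copy complete\n");
    return 0;
}
```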


Having a solution that didn't require specific software to realise the gains would be ideal. If you could effectively daisy-chain them together and the software just saw a doubling, tripling or quadrupling of memory and compute power, I would be buying a couple of 2080Ti FEs for my VR rig.

But I guess this would also work against NVidia's interests.
 
Having a solution that didn't require specific software to realise the gains would be ideal. If you could effectively daisy-chain them together and the software just saw a doubling, tripling or quadrupling of memory and compute power, I would be buying a couple of 2080Ti FEs for my VR rig.

But I guess this would also work against NVidia's interests.

NVLink does not work like this - whilst you could model the whole memory pool of two cards as one, the speed of NVLink is far, far below that of local memory (tens of GB/s per direction over the link, versus hundreds of GB/s from local GDDR6). It is, in effect, better SLI, but with the opportunity to use the cards in better ways. It's still AFR based.
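
The gap is easy to demonstrate: the sketch below (again assuming two cards, and ignoring error checking) times a copy within one GPU's local memory against a peer copy between the two GPUs, using CUDA events. On a 2080 Ti-class card the local figure would land in the hundreds of GB/s, the link figure in the tens:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Time one copy of `bytes` with CUDA events; returns milliseconds.
static float timedCopy(void *dst, int dstDev, const void *src, int srcDev, size_t bytes) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    // cudaMemcpyPeer works whether or not peer access is enabled;
    // without it the copy is staged through host memory.
    cudaMemcpyPeer(dst, dstDev, src, srcDev, bytes);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    const size_t bytes = 256ull << 20;   // 256 MiB test buffers
    float *a0, *b0, *a1;
    cudaSetDevice(0);
    cudaMalloc(&a0, bytes);
    cudaMalloc(&b0, bytes);
    cudaSetDevice(1);
    cudaMalloc(&a1, bytes);

    cudaSetDevice(0);
    // Local copy: GPU 0 -> GPU 0 (full GDDR6 bandwidth).
    float local = timedCopy(b0, 0, a0, 0, bytes);
    // Peer copy: GPU 1 -> GPU 0 (limited by NVLink or PCIe).
    float peer = timedCopy(a0, 0, a1, 1, bytes);

    printf("local: %.1f GB/s  peer: %.1f GB/s\n",
           bytes / local / 1e6, bytes / peer / 1e6);
    return 0;
}
```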
 
Are these cards anticipated to last 2 years like the 1000 series before the next release or are we expecting new cards in 2019?

Probably a stopgap. But I wouldn't put it past Nvidia to eke these out for as long as they can possibly get away with too, just as we saw with the gap from Pascal to Turing.
 
NVLink does not work like this - whilst you could model the whole memory pool of two cards as one, the speed of NVLink is far, far below that of local memory (tens of GB/s per direction over the link, versus hundreds of GB/s from local GDDR6). It is, in effect, better SLI, but with the opportunity to use the cards in better ways. It's still AFR based.

It’s a slightly different link; it’s not identical to SLI. Right now it’s ‘better SLI’, but in future potentially much more.

Having a solution that didn't require specific software to realise the gains would be ideal. If you could effectively daisy-chain them together and the software just saw a doubling, tripling or quadrupling of memory and compute power, I would be buying a couple of 2080Ti FEs for my VR rig.

But I guess this would also work against NVidia's interests.

According to the engineers, this is inherently a bad idea for gaming, as the software would not account for the slight latency on one of the cards while it communicates with the other to deliver the frames.

There is a lot of potential for future developments, as this is a memory-to-memory integration rather than the traditional display integration. How that gets used will take some time to come to light, but it has the potential to improve things greatly. It should bring smoother experiences right away, though, given the massive bandwidth.
 
Probably a stopgap. But I wouldn't put it past Nvidia to eke these out for as long as they can possibly get away with too, just as we saw with the gap from Pascal to Turing.

Fortunately for us consumers, it's all but guaranteed that they won't sit on this stopgap for more than 12 months - it could even be less, as AMD are releasing 7nm next year. Also, Turing chips are expensive to make because of their size; Nvidia will be able to make much more profit (and we know how much they like money) by transitioning over to 7nm cards as soon as possible.
 
Cue all of the reviewers who told us not to buy the RTX 2080Ti now changing their tune and dribbling over it due to 30%-50% performance increases over the 1080Ti... Cost will just fade into the background as normal and everyone will take it up the behind again.
 
It’s a slightly different link; it’s not identical to SLI. Right now it’s ‘better SLI’, but in future potentially much more.

According to the engineers, this is inherently a bad idea for gaming, as the software would not account for the slight latency on one of the cards while it communicates with the other to deliver the frames.

There is a lot of potential for future developments, as this is a memory-to-memory integration rather than the traditional display integration. How that gets used will take some time to come to light, but it has the potential to improve things greatly. It should bring smoother experiences right away, though, given the massive bandwidth.

As it's a proprietary interface, why couldn't NVidia improve it to the point where, if the interface was connected, it disabled the PCIe bus between the cards and used their own link instead, running at the required speed and bandwidth? I would argue that technically it would be possible, but maybe too costly.

Essentially turning the second card into an upgrade, like the old memory upgrades on Matrox cards from times gone by :D

If you wanted the two cards to be seen separately instead, just disconnect the NVidia link between them.
 
Cue all of the reviewers who told us not to buy the RTX 2080Ti now changing their tune and dribbling over it due to 30%-50% performance increases over the 1080Ti... Cost will just fade into the background as normal and everyone will take it up the behind again.

I think a 30%-50% performance increase would be incredible, considering how much of a beast the 1080Ti already is. Unlikely, but we can only hope :D
 
As it's a proprietary interface, why couldn't NVidia improve it to the point where, if the interface was connected, it disabled the PCIe bus between the cards and used their own link instead, running at the required speed and bandwidth? I would argue that technically it would be possible, but maybe too costly.

Essentially turning the second card into an upgrade, like the old memory upgrades on Matrox cards from times gone by :D

If you wanted the two cards to be seen separately instead, just disconnect the NVidia link between them.

That exact thing is what NVFabric is, but as yet it's only on the £60,000 deep learning board with the IBM CPU.

That system does allow applications to see it as a single lump of memory - however, the latency still exists: the application sees it as one memory bank, but technically it isn't. That's not an issue in compute environments, but it would most likely cause input lag in games, which is why they discourage people from assuming it's a great thing that could deliver huge textures.

All interesting though!
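
CUDA's managed memory gives a flavour of that "one lump of memory" model on ordinary multi-GPU boxes - a minimal sketch, with the caveat that the transparent page migration it relies on is exactly where the hidden latency comes from:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void touch(float *p, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *p;
    // One allocation visible to every GPU (and the CPU). The driver
    // migrates pages on demand - that migration is the hidden latency.
    cudaMallocManaged(&p, n * sizeof(float));
    for (int i = 0; i < n; ++i) p[i] = 0.0f;   // CPU initialises the same pointer

    cudaSetDevice(0);
    touch<<<(n + 255) / 256, 256>>>(p, n);
    cudaDeviceSynchronize();                   // finish on GPU 0 first

    cudaSetDevice(1);
    touch<<<(n + 255) / 256, 256>>>(p, n);     // same pointer, second GPU
    cudaDeviceSynchronize();

    printf("p[0] = %f\n", p[0]);               // prints 2.0; the CPU reads it too
    cudaFree(p);
    return 0;
}
```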
 
And there are far too many people who have some weird fascination with people who are spending their money as they want on their hobby. Weird.

Weird? Are you serious? Then explain to all of us weird people why you really must have one of these cards so badly. Nvidia haven't given you a single good reason that can be backed up by proof or vouched for by a reputable source. Furthermore, I'll be the first to snap up a shiny "2080ti" if they're any good, when they're actually on the shelves... Christmas maybe. The claims NV are making will take years to come to fruition; unfortunately I think the card will be a bit of a dud, a big disappointment.
 
Weird? Are you serious? Then explain to all of us weird people why you really must have one of these cards so badly. Nvidia haven't given you a single good reason that can be backed up by proof or vouched for by a reputable source. Furthermore, I'll be the first to snap up a shiny "2080ti" if they're any good, when they're actually on the shelves... Christmas maybe. The claims NV are making will take years to come to fruition; unfortunately I think the card will be a bit of a dud, a big disappointment.

Maybe we didn’t have to have one; we wanted to buy it, so we did - is that OK with you?

Take your head out of the keynote video and read up - there are plenty of reasons why these are interesting and a huge change in technology... But obviously it’s easier not to bother reading all the technical documentation or the other interviews and demos, and just **** off everyone who wanted to give it a go.

Current-gen enthusiast boards don't support these features, either.

Be fun if they do in future though ;)
 
Weird? Are you serious? Then explain to all of us weird people why you really must have one of these cards so badly. Nvidia haven't given you a single good reason that can be backed up by proof or vouched for by a reputable source. Furthermore, I'll be the first to snap up a shiny "2080ti" if they're any good, when they're actually on the shelves... Christmas maybe. The claims NV are making will take years to come to fruition; unfortunately I think the card will be a bit of a dud, a big disappointment.

I can only speak for myself but I don't need one "so badly" - I just want one. I don't need to justify it (hey, it's my money) but my reasoning (in my case) is simple - I skipped the 1080ti (I'm running 2 x 1080s) so my expectation (we'll find out in 15 days) is that a 2080ti will be faster than a 1080ti and therefore faster than a 1080. It'll certainly be faster than 1080SLI (IMHO) and more consistently so (as SLI support has grown spotty). I'm buying it through the business and selling my old cards - therefore my outlay is reasonable and I can easily afford it. If, in 15 days, we find out the 2080ti isn't worth it - cancel preorder. Option 2 - send it back when it arrives under DSR. There is no risk therefore.

To turn your argument on its head - there is no proof the card won't be good, but *I* would rather not wait till Xmas for one. I think I made a sound decision given the Ti sold out so quickly. To reiterate - I don't need anything; I have the disposable income to spend on something I enjoy. I wouldn't berate you for not buying one, so I find it weird that people are riled up against those that are, given we're on a forum for enthusiasts who basically waste money anyway :D
 