
GeForce GTX 1180/2080 Speculation thread

What will you use 16GB for? :confused:

A 4GB R9 Fury X is still pretty decent.

If the Fury X had HBCC like Vega has, it wouldn't be an issue.
Some games that use the HBM memory compression are fine with the Fury X.
Other games, however (especially gimpworks ones), don't support memory compression and cripple the card's performance.
 

It is bad because Nvidia wastes precious commodity resources. Memory shouldn't be wasted for nothing.
:(
 

Actually, they force their users into buying more expensive cards by driving up memory requirements.
You will see: those RTX 2080/2070 cards will be completely inadequate when proper ray tracing hits because of the VRAM, forcing their owners to switch to the Ti or the next generation.
 
If I'm not mistaken, I recall that Nvidia came out first with memory compression techniques that also include caching game files to make them load faster. Actually, they are quite good IMO. It's very hypocritical not to use them through GameWorks and DX11 just to rationalize the need for a wasteful amount of VRAM.

Can anyone really believe that we need 8-16GB of VRAM just to game at a level that hasn't changed that much over the last 5 years, where you need a magnifying glass to tell the difference between a game on PC vs console? Mod packs are mod packs, a niche, and due to their nature they only require more memory because they aren't optimized for the game. We accept it and move on. However, with all things reaching some level of equilibrium in graphics, innovation and the desire to be as close to the metal as possible, when will we wake up and see that VRAM isn't bacon?

To be clear, I like bacon...It does fix a bunch of stuff.
 

They don't use the tech at all on consumer cards, only on professional GPUs. There are plenty of videos from back in the day comparing the Fury X and the 98x cards, showing the former using less VRAM than the latter.
Sometimes 2/3 of the VRAM the 98x cards were using.
 
Has there been nothing to suggest Nvidia is going to adopt asynchronous shaders?

With more and more games being developed with DX12/Vulkan, now would seem to be the time to adopt the tech?
 

Who knows -_- On consoles most games do support them, and direct ports have to be patched to disable it on PC because of Nvidia cards -_-
They are holding back innovation, like with ray tracing: AMD has had it since 2013 under GPUOpen, so everyone could use it with no strings attached.
 
Well, you've opened a can of worms with the async shader question. That goes all the way back to 3DMark Time Spy's use of asynchronous compute. Async compute was originally intended for executing instructions in parallel.

Nvidia's hardware, Maxwell/Pascal, can do it, but at a much slower rate, or it chokes, which can cause crashes in the OS. Many thought Nvidia couldn't execute instructions in parallel, but it always could. So they came up with a DX11 way of doing it: concurrent execution. Wow, you've got me going back in time to that huge debate. Some of it can be found here:
https://steamcommunity.com/app/223850/discussions/0/366298942110944664/

When the Time Spy code was scrutinized, popular opinion suggested that 3DMark was favoring Nvidia hardware and forcing AMD hardware to use concurrent execution (which it was not intended for). Eventually they posted their own article about its use. Mind you, this debate happened on several other forums, not just on Steam.

What does that mean now? Very good question indeed. IMO Turing should be able to execute instructions in parallel with a level of efficiency greater than or equal to Vega's. So keep your eye on games/benchmarks like Ashes of the Singularity.
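To make the terminology concrete, here is a minimal C++/D3D12 sketch of what "async compute" means at the API level (the device pointer and function name are illustrative, not taken from any particular game or engine): the application creates a second, compute-only queue alongside the graphics queue, and whether work on the two queues actually overlaps is entirely up to the hardware and driver.

```cpp
// Minimal sketch, assuming a valid ID3D12Device* already exists.
// "Async compute" at the API level just means creating a separate
// compute-only queue: work submitted there is *allowed* to overlap with
// the graphics queue, but whether it truly runs in parallel is down to the
// hardware/driver -- the crux of the Maxwell/Pascal vs. GCN debate.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateGraphicsAndComputeQueues(ID3D12Device* device,
                                    ComPtr<ID3D12CommandQueue>& graphicsQueue,
                                    ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Graphics ("direct") queue: accepts draw, compute and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Separate compute queue: command lists recorded as
    // D3D12_COMMAND_LIST_TYPE_COMPUTE go here and may execute concurrently
    // with the graphics queue, synchronised via fences.
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));
}
```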
 
As I vaguely seem to remember, asynchronous shaders/compute was the main tech behind Mantle/DX12, and from what you're both saying, because of Nvidia the tech has been gimped, even though in games that make use of it, such as DOOM, the performance gain is substantial.

Nvidia can't ignore DX12 forever; next-generation game engines targeting the new consoles and Windows 10 are surely going to be based on it.
 
lol, I meant that it's only 8% better than the 1080 Ti. I hope that's not true?
Oh, lol. Well, that is what is being mentioned by some "anonymous source" in the video posted here. I also hope it is a pack of lies, as 8% would be pathetic after all this wait.
 


Even if it is 8%, you'll still get people "upgrading" from their 1080 Ti cards just to have a new toy to unbox. Remember the 1080 FE and all the 980 Tis in MM despite there not being much between them?
 

You are correct about async compute.
Nvidia hardware doesn't have true HARDWARE async compute per the specification. Everything goes through the driver, hence it is slow. Twice games have shipped with it and had to remove it (Wolfenstein, for example) because it crippled performance on NV cards.

AMD, on the other hand, has supported hardware async compute all the way back to five years ago, when the Hawaii GPUs came out (R9 290), and on their console GPUs.
The latter in particular was a demand from Sony for the PS4, and we now see some really impressive engines on PS4 doing wonders with the full hardware, especially on the PS4 Pro, with amazing-looking 60fps 4K stuff (albeit Sony restricts them to 30fps 4K to keep it in line with the capabilities of the normal PS4). And yet the GPU there sits between an RX 470 and an RX 480 in terms of performance, running on a CPU with roughly the grunt of an Intel Atom (even if made by AMD).

True, Time Spy is supposed to be a DX12 benchmark, yet it has been exposed since 2016 how badly it favours Nvidia's software solution of concurrent execution, which is not in the DX12 spec and is not true async compute. That especially affects Game Test 2, crippling AMD GPU performance by a flat 15%. It has the weird effect that comparing a GTX 1080 @ 2190 vs a Vega 64 @ 1727 on the same CPU gives exactly the same FPS in Game Test 1, but the latter is 15% slower in Game Test 2 because of that different, non-standard execution path.

The funny bit is that DX12 games are a better indication and better benchmark tools than the dedicated benchmark tool, because they stick to the DX12 spec (albeit many without hardware async compute).

However, the next generation of consoles are both going to be PC-like, using Zen+Navi. So direct porting is going to be easier, and let's hope the software companies leave all the AMD optimizations in the games, removing only those that won't work when Nvidia cards are installed in the system, rather than the blanket removal of all optimizations that is happening now.
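On the "true hardware async compute" point, here is a rough Vulkan-side sketch in C++ (names are illustrative) of how an application can at least see whether a GPU exposes a dedicated compute queue family. Note the hedge: exposing a compute-only family lets a title submit compute work beside the 3D work, but it doesn't by itself prove the hardware overlaps the two, which is the distinction being argued in this thread.

```cpp
// Rough sketch: enumerate the queue families a physical device exposes and
// flag any compute-only family as an async compute candidate.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

void ReportComputeQueueFamilies(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const bool graphics = families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT;
        const bool compute  = families[i].queueFlags & VK_QUEUE_COMPUTE_BIT;
        if (compute && !graphics)
            std::printf("family %u: compute-only, %u queue(s) -- async compute candidate\n",
                        i, families[i].queueCount);
        else if (compute)
            std::printf("family %u: graphics+compute, %u queue(s)\n",
                        i, families[i].queueCount);
    }
}
```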
 
I agree the PS4 Pro is a nice piece of kit (and the performance I had hoped for from the launch console), but "Sony restricting the PS4 Pro to 30fps 4K" is blatantly false. The only Sony restriction is that a game has to run on both machines.
I have a Pro, so this is no console bashing... but take The Last of Us.
That runs 1080p 60 (as opposed to 30 like on the base machine). It does 4K 30... and this is simply because that is all it can manage, because the third option it gives is a halfway house at (from memory, so may be wrong) 1600p with a target of 60fps. It manages it for the most part (and is how I play), but it drops occasionally, and this is simply because the hardware is maxed out.
I can't comment on the rest of your post... Honestly, it would not surprise me if NV are greasing some palms somewhere to make themselves look good... However, given that one part of your post is demonstrably not true, it does make one question the rest.
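To put some rough numbers on the mode comparison above, here is a quick back-of-the-envelope pixel-throughput calculation in C++. The ~1600p figure is the poster's from-memory guess, and the 2844-pixel width (16:9 at 1600 lines) is an assumption on my part, so treat these as illustrative numbers rather than measurements.

```cpp
// Back-of-the-envelope pixel throughput for the three The Last of Us modes
// described above. The 2844x1600 mode is an assumed 16:9 resolution for the
// "~1600p" option, which was quoted from memory.
#include <cstdio>

int main()
{
    struct Mode { const char* name; double w, h, fps; };
    const Mode modes[] = {
        { "1080p @ 60",           1920, 1080, 60 },
        { "4K @ 30",              3840, 2160, 30 },
        { "~1600p @ 60 (target)", 2844, 1600, 60 },
    };
    for (const Mode& m : modes)
        std::printf("%-22s ~%.0f Mpix/s\n", m.name, m.w * m.h * m.fps / 1e6);
    // Prints roughly 124, 249 and 273 Mpix/s: 4K30 pushes about twice the
    // pixels per second of 1080p60, and the 1600p60 target is higher still,
    // which fits the point that the hardware, not a Sony policy, is what
    // caps the 4K mode at 30fps.
    return 0;
}
```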
 
Developers don't like using async anyway. It's cantankerous to implement and the gains are negligible. When that's coming from the horse's mouth (IO Interactive), you just have to take it as it is.
 
Too much geeky talk in here :P Why don't we all do the sensible thing for once and just not buy the cards when they are released? The prices would soon come down a month later and Nvidia would get the message. But NOOOO, we are all just sheep, huh?
 