
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
Hmm, I don't know if you've been following the reviews.
The 3090 is aimed at semi-professionals; they have no option but to buy it, since they can't be buying the 12GB variant.
Further, there might be some driver optimisations (or rather debottlenecking) for 2D performance on the 3090.

Edit: I am speaking in dollars, so it's more like $600.
I think it was confirmed that the 3090 doesn't have the driver optimisations that the previous Titan cards had.
 
If they did that (with SLI/NVLink), the 3090 would be dead to prosumers (which is why it is priced so high in the first place): you'd get the same amount of RAM, with significantly faster rendering performance, for slightly more than the cost of a single 3090.

I don't think all critical prosumer applications out there can do memory pooling; some of them scale worse on SLI.
And there's 2D performance, which will be gimped on the 12GB variant for sure.
I feel it looks reasonable, maybe $999.
 
@KentMan we need a favour.

When the shipments are coming, can you get the boys to wave them through the queues so the AMD lorry can reach OcUK et al? :D

Haha, I don't work near the coast. :) I'll keep an eye on the nearby motorways for you, though! They have been terrible lately, with quite a few early-morning crashes causing delays of up to half a day. :(
 
I don't think all critical prosumer applications out there can do memory pooling; some of them scale worse on SLI.
And there's 2D performance, which will be gimped on the 12GB variant for sure.
I feel it looks reasonable, maybe $999.

Yes, but they will do it retroactively, like they did with one of those consumer SKUs when the Vega 64/Frontier Edition was released.
It's a perfectly plausible reaction to competitive pressure.

Good points.
They may muddy the water (in terms of product naming) if they add software optimisations to the RTX line, even if only for the 3090. Let's see if AMD forces their hand.
 
Oh, we have some leaked prices? :)

As a consumer I hope there are two versions of the top-dog Navi:
  • 32 GB for $1199, aimed at semi-professionals
  • 16 GB for $799-899, aimed at gamers
128 ROPs is a huge load even for 4K.
(That looks a bit high on sodium, now that I have regained my senses after reading that leak.)

They may muddy the water (in terms of product naming) if they add software optimisations to the RTX line, even if only for the 3090. Let's see if AMD forces their hand.

Worse, they'd make the 3090 EOL and replace it with those rumoured 48 GB Ampere Titans. These are the broad approaches available to NVIDIA if AMD pips the BFGPU in rasterization performance.
 
Is it incorrect that doubling the TFLOP count, memory bandwidth, texture rate and pixel rate of a GPU is likely to result in twice the theoretical performance?

If we assume there's no change in IPC or shader utilization, I mean.
 
Is it incorrect that doubling the TFLOP count, texture rate and pixel rate of a GPU is likely to result in twice the theoretical performance?

If we assume there's no change in IPC or shader utilization, I mean.


That doesn't even work within the same architecture. For example, if you were to double the 5700 XT, literally making everything about it 2x, it still wouldn't be 2x the performance; I think it would get very close, but there would be scaling issues.

If we are moving from RDNA 1 to RDNA 2, everything goes out of the window; they are not at all comparable. Ampere has 2x the shaders and with them 2x the FP32 throughput, but it's nowhere near 2x the performance in games. That is because Ampere is much more compute-focused relative to gaming than Turing was, whereas RDNA 2 is focused on getting more gaming performance out of the same amount of hardware; an RDNA 2 GPU with identical specs to an RDNA 1 part will be faster.
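To make that concrete, here's a toy model of why doubling raw specs rarely doubles game performance. The 0.85 per-doubling utilisation factor is an illustrative assumption, not a measured figure for any real GPU.

```python
import math

def scaled_performance(base_perf, spec_multiplier, utilisation=0.85):
    """Toy estimate of relative performance when raw specs are multiplied.

    Each doubling of raw throughput is assumed to deliver only a fraction
    (`utilisation`) of its theoretical gain, standing in for scheduling
    and occupancy losses. The 0.85 default is made up for illustration.
    """
    doublings = math.log2(spec_multiplier)
    return base_perf * (2 * utilisation) ** doublings

# Doubling every raw spec of a hypothetical 100-point baseline:
print(round(scaled_performance(100, 2)))  # 170 — well short of the "ideal" 200
```

Under this (admittedly crude) model, quadrupling the specs lands at about 289 points rather than 400, which is the diminishing-returns shape described above.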
 
Is it incorrect that doubling the TFlop count, memory bandwidth, texture rate and pixel rate of a GPU, is likely to result in twice the theoretical performance?

If we assume there's no change in IPC, or shader utilization, I mean.

Hmm, theoretically yes, at the same clocks.
But the last part, regarding no change in utilisation, requires innovation in the scheduling and pipelining logic.
That's why you'd often hear that Polaris had a hard constraint on the number of CUs.
 
Hmm, theoretically yes, at the same clocks.
But the last part, regarding no change in utilisation, requires innovation in the scheduling and pipelining logic.
That's why you'd often hear that Polaris had a hard constraint on the number of CUs.

That was due to the GCN architecture having those limits, whereas with RDNA we don't know if there is an upper limit on how many CUs it can handle. Comparing with CDNA, though, there's Arcturus with 120 CUs, so they've at least vastly improved the limit on the compute side of things.
 
Unfortunately, like Jon Snow, I know nothing of such things.

You should read the whitepaper on Navi; it compares Navi 10 with Vega.
That should shed some light on how the pipelining/issuing/scheduling logic had to change to improve utilisation beyond 64 CUs.
I'm no expert in microprocessor tech either; I'm more of a maths guy who understands queuing systems, but it should be intuitive nonetheless.
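For reference, the 64-CU ceiling falls out of GCN's layout: a maximum of 4 shader engines with 16 CUs each, while RDNA instead issues work to dual-CU work-group processors. The figures below are public spec numbers; nothing beyond that arithmetic is implied.

```python
# GCN topped out at 4 shader engines with 16 CUs per engine.
GCN_SHADER_ENGINES = 4
GCN_CUS_PER_ENGINE = 16
print(GCN_SHADER_ENGINES * GCN_CUS_PER_ENGINE)  # 64 — the ceiling hit by Fiji and Vega

# RDNA groups CUs in pairs called work-group processors (WGPs).
NAVI10_WGPS = 20
CUS_PER_WGP = 2
print(NAVI10_WGPS * CUS_PER_WGP)  # 40 CUs on Navi 10 (RX 5700 XT)
```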
 
pipelining/issuing/scheduling logic had to change to improve utilisation beyond 64 CUs.

In your opinion, can AMD now create RDNA 2 GPUs which can effectively utilize over 64 Compute Units? Let's say up to 80, as this is the number being tossed around.

Or is this still unknown at this point?
 
In your opinion, can AMD now create RDNA 2 GPUs which can effectively utilize over 64 Compute Units? Let's say up to 80, as this is the number being tossed around.

Or is this still unknown at this point?

You have almost answered your own question. If they could not benefit beyond 64 CUs, they would cap it there until they could. If people are reporting leaks of above 64, then I would take it that they have found some way to get past the previous limitation.
 
In your opinion, can AMD now create RDNA 2 GPUs which can effectively utilize over 64 Compute Units? Let's say up to 80, as this is the number being tossed around.

Or is this still unknown at this point?

Unless they've made some significant changes or breakthroughs elsewhere, they pretty much must have done. Getting to the level of performance over the 5700 XT that they've shown takes more than any realistic frequency bump or architecture refinement alone could achieve without around 4608 shaders (a number chosen for convenience).
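As a sanity check on that 4608 figure: RDNA packs 64 stream processors per CU, so shader counts map directly onto the CU counts being discussed. Only the 80-CU part is a rumour; the rest is straightforward arithmetic.

```python
SP_PER_CU = 64  # stream processors per compute unit in RDNA

print(40 * SP_PER_CU)  # 2560 — Navi 10 / RX 5700 XT
print(72 * SP_PER_CU)  # 4608 — the convenient figure quoted above
print(80 * SP_PER_CU)  # 5120 — the rumoured 80-CU top part
```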

Is it incorrect that doubling the TFLOP count, memory bandwidth, texture rate and pixel rate of a GPU is likely to result in twice the theoretical performance?

If we assume there's no change in IPC or shader utilization, I mean.

This gets very complex, but take the 5700 XT for instance: simply doubling everything, with no other changes, is highly unlikely to result in double the performance in games, as you are well into the territory of diminishing returns due to utilisation issues, etc. (You can see the same in NVIDIA architectures and older AMD GPUs; there comes a point where you need either a new or refined architecture, or a massive frequency uplift.)
 