
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

To read this thread, anyone would think Nvidia were the performance underdogs rather than the benchmark to beat.
You could have said the same thing about AMD CPUs, and could still say it. But my money is on them beating Intel in gaming with Zen 3 and Nvidia with RDNA2.
Having performance leadership isn't Intel's or Nvidia's God-given right, as history shows in abundance.
 
Maybe it's the fact that Nvidia rushed their launch, have zero stock, cards almost at their limit out of the box, prices reduced hugely, and little difference between their top end and the 3080.

On top of that, AMD showed performance within 10% without actually stating it's their top-end product.

I think AMD spooked Nvidia; coupled with Samsung's node being junk, AMD has closed the gap.
+1. The anecdotal evidence that AMD are bringing something very decent and powerful (as you say: rushed launch, little stock, cards at their limit, and huge price reductions at a given performance level), plus reputable leakers saying they expect RDNA2/Big Navi to be very powerful, suggests AMD are going to push Nvidia to the wire.
 
Colin, you know he just made up those completely illogical prices based on no factual information whatsoever... right? Right?

I expect AMD to price these cards as high as possible. They wanted to sell the 5700XT at $449 after all, and on top of that there's the increase in Zen 3 pricing. Gone are the days in which ~$500 would have bought you about the same performance as the top Nvidia card at double the price. Mainstream these days goes for high-end prices.
 
You could have said the same thing about AMD CPUs, and could still say it. But my money is on them beating Intel in gaming with Zen 3 and Nvidia with RDNA2.
Having performance leadership isn't Intel's or Nvidia's God-given right, as history shows in abundance.

It's like the lazy man's marketing at places like PC World: "This machine has an Intel CPU and Nvidia graphics - the best you can get for gaming". The truth is, unless you know what you're going to be using the machine for and the applications you run, these stereotypes are completely inaccurate, even more so now than they ever were.
 
Some armchair analysis on RT. Nvidia's gigarays metric is opaque, so we have to work with broad assumptions.

RTX 3080: 10 gigarays per second (assumed as peak)
BVH 8 wide, 10 deep = 80 boxes per ray (peak), under the assumption that the BVH is mutually exclusive at all levels
= 800 giga ray-box intersections per second

Big Navi: 4 ray-box intersections per CU per clock
= 320 ray-box intersections per clock (80 CUs)
At 2.2 GHz boost = 704 giga ray-box intersections per second

Note that Navi's CUs can do either RT ops or texture ops in a given clock, as it's mixed-mode hardware, so I'm assuming it would be deployed for RT 30-40% of the time. That works out to 211-282 giga effective ray-box intersections peak, which is 26-35% of the 3080.

Edit: though I have assumed a 30-40% allocation, it's really implicit in AMD's pipeline scheduling logic, where things can only be ascertained through simulations.
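
For anyone who wants to check the arithmetic, here's a minimal Python sketch of the same back-of-envelope estimate. The gigarays figure, the 8x10 BVH shape and the 30-40% RT duty cycle are the assumptions from the post above, not published specifications.

# Back-of-envelope recreation of the estimate above. All inputs are the
# post's assumptions, not published specs.
rtx_gigarays = 10                        # assumed peak gigarays/s for the 3080
boxes_per_ray = 8 * 10                   # BVH 8 wide, 10 deep = 80 ray-box tests per ray
rtx_gbox = rtx_gigarays * boxes_per_ray  # 800 giga ray-box intersections/s

navi_per_cu_per_clock = 4                # assumed ray-box intersections per CU per clock
navi_cus = 80
navi_clock_ghz = 2.2
navi_gbox = navi_per_cu_per_clock * navi_cus * navi_clock_ghz  # 704 giga/s

# CUs share RT and texture work, so apply the assumed 30-40% RT duty cycle
for duty in (0.30, 0.40):
    effective = navi_gbox * duty
    print(f"{duty:.0%} RT duty: {effective:.0f} G intersections/s "
          f"({effective / rtx_gbox:.0%} of the 3080 estimate)")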
 
I expect AMD to price these cards as high as possible. They wanted to sell the 5700XT at $449 after all, and on top of that there's the increase in Zen 3 pricing. Gone are the days in which ~$500 would have bought you about the same performance as the top Nvidia card at double the price. Mainstream these days goes for high-end prices.
Yes, up to a point, but AMD potentially have an opportunity to own all-AMD PCs for the next two years if they price their GPUs right. The new CPUs are excellent value, and stealing a huge chunk of market share from their two biggest competitors is within reach with well-priced, performant GPU products as well. From initial impressions the 5900X CPU is way better than Intel's equivalent and far better value despite Intel's price cuts, although we'll have to wait for benchmarks to be sure.
 
CPUs have never mattered at 4k before.

I suggest you read through the 3080 and 3090 reviews, specifically those that test 4K CPU performance scaling. In short, it does matter now that the GPUs are so fast. Obviously not as much as at outdated 1080p or bargain-basement 1440p, but still by a measurable amount.

A 4770K won't get the most out of a 3080 at 4K, for example. My 6700K won't get the most out of a 3090 at 4K, hence why I'm upgrading to Zen 3. Even the Zen 2700X will not get the most out of a 3080 at 4K.
 
Some armchair analysis on RT. Nvidia's gigarays metric is opaque, so we have to work with broad assumptions.

RTX 3080: 10 gigarays per second (assumed as peak)
BVH 8 wide, 10 deep = 80 boxes per ray (peak), under the assumption that the BVH is mutually exclusive at all levels
= 800 giga ray-box intersections per second

Big Navi: 4 ray-box intersections per CU per clock
= 320 ray-box intersections per clock (80 CUs)
At 2.2 GHz boost = 704 giga ray-box intersections per second

Note that Navi's CUs can do either RT ops or texture ops in a given clock, as it's mixed-mode hardware, so I'm assuming it would be deployed for RT 30-40% of the time. That works out to 211-282 giga effective ray-box intersections peak, which is 26-35% of the 3080.

Edit: there's also the angle of pipeline scheduling, where things can only be ascertained through simulations... can't do much armchair analysis there.

The whole idea that the CUs can do everything but only one thing at a time - meaning that when RT is in use it would tie up a certain number of CUs and therefore still cause a drop in performance - has been debunked on Twitter. Judging from what various people have said, AMD's CUs can do everything at the same time.
 
Some armchair analysis on RT. Nvidia's gigarays metric is opaque, so we have to work with broad assumptions.

RTX 3080: 10 gigarays per second (assumed as peak)
BVH 8 wide, 10 deep = 80 boxes per ray (peak), under the assumption that the BVH is mutually exclusive at all levels
= 800 giga ray-box intersections per second

Big Navi: 4 ray-box intersections per CU per clock
= 320 ray-box intersections per clock (80 CUs)
At 2.2 GHz boost = 704 giga ray-box intersections per second

Note that Navi's CUs can do either RT ops or texture ops in a given clock, as it's mixed-mode hardware, so I'm assuming it would be deployed for RT 30-40% of the time. That works out to 211-282 giga effective ray-box intersections peak, which is 26-35% of the 3080.

Edit: there's also the angle of pipeline scheduling, where things can only be ascertained through simulations... can't do much armchair analysis there.

One of the things AMD has aimed to do is alleviate or work around some of the traditional inefficiencies of ray tracing on general compute architectures - I suspect things like shading ray-traced results and some of the memory operations will be faster, with less penalty, than a synthetic look at it would suggest.

Something that people often miss, though, is that there are limits to concurrency due to the serialised nature of the overall workload - on paper AMD can bring some impressive numbers to the ray tracing pipeline while doing other work as well, but in reality actually leveraging that will be much harder and the gains much smaller in the real world - probably more like 2080 Ti performance.

The whole idea that the CUs can do everything but only one thing at a time - meaning that when RT is in use it would tie up a certain number of CUs and therefore still cause a drop in performance - has been debunked on Twitter. Judging from what various people have said, AMD's CUs can do everything at the same time.

The problem is what you can actually do concurrently - if you have no work you can process at that time, you end up with a lot of potential performance sitting there unused. Overall, a game rendering pipeline is quite serial despite individual parts of it being very parallel - there are limits to how much you can jump ahead to work that's waiting without the results of the next operation, and/or how much you can defer other work to where the GPU would be more free, etc.
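
A rough way to picture that ceiling is Amdahl's law: if only part of a frame's work can actually overlap with the RT work, the overall gain is capped by the serial remainder. The Python sketch below uses made-up fractions purely for illustration, not measured workloads.

def amdahl_speedup(parallel_fraction, parallel_speedup):
    # Overall speedup when only `parallel_fraction` of the frame benefits
    # from running concurrently, and that part gets `parallel_speedup` faster.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / parallel_speedup)

# Suppose overlapping RT with other work makes the overlappable portion 2x
# faster, but only 30%, 50% or 70% of the frame can overlap at all.
for frac in (0.3, 0.5, 0.7):
    print(f"{frac:.0%} of the frame overlappable -> {amdahl_speedup(frac, 2.0):.2f}x overall")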
 
The whole idea that the CUs can do everything but only one thing at a time - meaning that when RT is in use it would tie up a certain number of CUs and therefore still cause a drop in performance - has been debunked on Twitter. Judging from what various people have said, AMD's CUs can do everything at the same time.

I have looked at the Xbox slides, and it's clear in this regard: it's either TMU ops or RT ops.

But that's not bad, since RT stalls the final output anyway, going by the frame rate drops, so overall this approach improves hardware utilisation.
 
I suggest you read through the 3080 and 3090 reviews, specifically those that test 4K CPU performance scaling. In short, it does matter now that the GPUs are so fast. Obviously not as much as at outdated 1080p or bargain-basement 1440p, but still by a measurable amount.

A 4770K won't get the most out of a 3080 at 4K, for example. My 6700K won't get the most out of a 3090 at 4K, hence why I'm upgrading to Zen 3. Even the Zen 2700X will not get the most out of a 3080 at 4K.

From Tom's Hardware.
https://cdn.mos.cms.futurecdn.net/5fAK6z2suUNXHuuPqEuxm7-2560-80.png

From Hardware Canucks.
https://hardwarecanucks.com/wp-content/uploads/NVIDIA-GeForce-RTX-3080-AMD-CPU-Scaling-20.jpg CoD: MW
https://hardwarecanucks.com/wp-content/uploads/NVIDIA-GeForce-RTX-3080-AMD-CPU-Scaling-22.jpg CS:GO
https://hardwarecanucks.com/wp-content/uploads/NVIDIA-GeForce-RTX-3080-AMD-CPU-Scaling-24.jpg Doom Eternal
https://hardwarecanucks.com/wp-content/uploads/NVIDIA-GeForce-RTX-3080-AMD-CPU-Scaling-28.jpg Horizon Zero Dawn
https://hardwarecanucks.com/wp-content/uploads/NVIDIA-GeForce-RTX-3080-AMD-CPU-Scaling-30.jpg Rainbow Six: Siege
https://hardwarecanucks.com/wp-content/uploads/NVIDIA-GeForce-RTX-3080-AMD-CPU-Scaling-32.jpg Jedi: Fallen Order
https://hardwarecanucks.com/wp-content/uploads/NVIDIA-GeForce-RTX-3080-AMD-CPU-Scaling-34.jpg Red Dead Redemption 2
https://hardwarecanucks.com/wp-content/uploads/NVIDIA-GeForce-RTX-3080-AMD-CPU-Scaling-36.jpg Overwatch

So as you can see, in some titles it doesn't matter at all and in others it matters a little, so we'll just need to see exactly how much improvement there is when going from the 3900X to the 5900X.
 
Yes, up to a point, but AMD potentially have an opportunity to own all-AMD PCs for the next two years if they price their GPUs right. The new CPUs are excellent value, and stealing a huge chunk of market share from their two biggest competitors is within reach with well-priced, performant GPU products as well. From initial impressions the 5900X CPU is way better than Intel's equivalent and far better value despite Intel's price cuts, although we'll have to wait for benchmarks to be sure.

This is exactly what I'm saying: everything now depends on price. The CPU prices have increased; that could be due to performance leadership or wafer prices, but in reality it's likely both.

I expect GPU prices to be only slightly cheaper than Nvidia's, especially if the performance is there - again due to wafer prices and the performance they will offer.

I personally hope, though, that they go disruptive and come in low to sweep up sales!
 
This is exactly what I'm saying: everything now depends on price. The CPU prices have increased; that could be due to performance leadership or wafer prices, but in reality it's likely both.

I expect GPU prices to be only slightly cheaper than Nvidia's, especially if the performance is there - again due to wafer prices and the performance they will offer.

I personally hope, though, that they go disruptive and come in low to sweep up sales!
If they can actually provide a ruddy card, I would pay more than for the 3080s; Nvidia have really screwed up the 3000 series launch. I'm hoping that AMD have a decent supply of cards, and I'd be happy to pay a slight increase over what I've paid to OC to sit on a very long wait list for a Strix 3080.
 
Ironically, the high-end CPUs are for low-resolution gamers :p

Yeah, and the usual consumer playing at 1080p doesn't buy high-end CPUs, but somehow they think having a powerful GPU is going to make their frame rates go boom...

+1. The anecdotal evidence that AMD are bringing something very decent and powerful (as you say: rushed launch, little stock, cards at their limit, and huge price reductions at a given performance level), plus reputable leakers saying they expect RDNA2/Big Navi to be very powerful, suggests AMD are going to push Nvidia to the wire.

The main reason Nvidia priced the cards as they did is that they knew the day of reckoning had come.
 
It is an interesting phenomenon; I hope people who want to game at 1080p are not buying 3080s when a card much further down the stack is ideal for it.
 