AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Soldato
Joined
6 Aug 2009
Posts
7,071
"Analysis" based on guess work and rumours, mostly pish from the sounds of it.

Have you watched it? It all seems logical and, when applied to already known information, ties up well. If you've got some extra insight I'd be interested to hear it. Barring that, Jim's "analysis" is the best guess I've heard so far. After all, an analysis is just a detailed look at information; whether that information is good or just the best we have to go on is immaterial.
 
Caporegime
Joined
18 Oct 2002
Posts
39,299
Location
Ireland
Have you watched it? It all seems logical and, when applied to already known information, ties up well. If you've got some extra insight I'd be interested to hear it. Barring that, Jim's "analysis" is the best guess I've heard so far. After all, an analysis is just a detailed look at information; whether that information is good or just the best we have to go on is immaterial.

I'd prefer to just wait on the cards hitting shelves instead of listening to rambling YouTube vids that are stupidly long.
 
Soldato
Joined
6 Aug 2009
Posts
7,071
Even if he's the best of a bad bunch, he's by far the best analyst / speculator / discusser of leaks on YouTube in my opinion.

I like that he shows his working, so anyone can then pick apart his argument. I'm waiting to see what those more knowledgeable than me think of his speculation. He may well be wrong, and if so I'd like to know why and learn something :)
 
Soldato
Joined
8 Jun 2018
Posts
2,827
Have you ever driven down a road and noticed a clearing where there's a line of trees indicating a forest behind it?
Those few trees don't actually hide the forest unless you concentrate solely on them. So don't become distracted.

RDNA2 and RDNA3 (rumored to arrive around the DDR5 debut in 2022) should provide some insight into AMD's direction. That Navi 23 chip is a peculiarity I've not heard much more about yet. But I digress.
I don't believe, so far, that RDNA2 performance is the whole picture. What will also be of interest is the number of dies and the margins per wafer vs Nvidia.



Die Size 251 mm² (Navi 10, RX 5700 XT) ........................ Die Size 445 mm² (TU106, RTX 2070)

It takes Nvidia a larger chip (the 2070) and specific game optimizations to actually be competitive. When they lost that competitive edge, they went with an even larger die!


Die Size 545 mm² (TU104, RTX 2070 Super)

The 2070 Super was Nvidia's plan to counter the 5700 XT. Think about that for a minute. They simply threw money at it. This is, and has always been, their strategy: brute force.
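(For a rough sense of what those die sizes mean for dies per wafer, here's a quick back-of-the-envelope sketch. It uses the standard dies-per-wafer approximation on an assumed 300 mm wafer, so treat the numbers as illustrative only, not anything from a leak.)

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard dies-per-wafer approximation (ignores scribe lines and defects)."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2                            # dies by raw area
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)   # partial dies lost at the edge
    return int(gross - edge_loss)

# Die areas taken from the comparison above
for name, area in [("Navi 10, 251 mm²", 251), ("TU106, 445 mm²", 445), ("TU104, 545 mm²", 545)]:
    print(f"{name}: ~{dies_per_wafer(area)} candidate dies per 300 mm wafer")

# Roughly 239 vs 127 vs 101, so the small die gets nearly twice as many chips
# out of every wafer before yield even enters the picture.
```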


However, that's not a winning strategy in the long term, as it's very similar to Intel's market strategy (because they too dominate the market). Cheerleading "the cause" won't change or improve that market strategy. Turing has shown us that Nvidia reached an equilibrium between cost and price, and the consumer market isn't bearing it at all when compared to Pascal.




Some were not aware of the similarities in uarch between the 5700 and the 2070. The biggest difference, pardon the pun, is the die size. So all we needed to do was clock both at the same speeds and see the results.

But hold your horses. Let's not forget that the 2070 is a much bigger chip, a whopping 194 mm² larger, which you paid a premium for. As you can see, the 5700 is still more efficient even when averaging titles completely optimized for Nvidia. So in order for Nvidia to clearly beat AMD, they had to use a chip that is 294 mm² larger!

Are the red flags flapping about yet? They should be. Now, granted, this isn't on the same node, but that's not the point I'm making. The point is what "you" bought in the past two years: you paid more for Nvidia's older uarch/node to get a competitive card. That's the correlation. Furthermore, rumor has it that Nvidia is using their Titan Ampere to compete against Navi 2x. We shall see.

It also gives you a slight glimpse of what you can expect out of Navi 2x.
 
Last edited:
Soldato
Joined
6 Aug 2009
Posts
7,071
Have you ever driven down a road and noticed a clearing where there's a line of trees indicating a forest behind it?
Those few trees don't actually hide the forest unless you concentrate solely on them. So don't become distracted.

RDNA2 and RDNA3 (rumored to arrive around the DDR5 debut in 2022) should provide some insight into AMD's direction. That Navi 23 chip is a peculiarity I've not heard much more about yet. But I digress.
I don't believe, so far, that RDNA2 performance is the whole picture. What will also be of interest is the number of dies and the margins per wafer vs Nvidia.



Die Size 251 mm² (Navi 10, RX 5700 XT) ........................ Die Size 445 mm² (TU106, RTX 2070)

It takes Nvidia a larger chip (the 2070) and specific game optimizations to actually be competitive. When they lost that competitive edge, they went with an even larger die!


Die Size 545 mm² (TU104, RTX 2070 Super)

The 2070 Super was Nvidia's plan to counter the 5700 XT. Think about that for a minute. They simply threw money at it. This is, and has always been, their strategy: brute force.


However, that's not a winning strategy in the long term, as it's very similar to Intel's market strategy (because they too dominate the market). Cheerleading "the cause" won't change or improve that market strategy. Turing has shown us that Nvidia reached an equilibrium between cost and price, and the consumer market isn't bearing it at all when compared to Pascal.




Some were not aware of the similarities in uarch between the 5700 XT and the 2070. The biggest difference, pardon the pun, is the die size. So all we needed to do was clock both at the same speeds and see the results.

But hold your horses. Let's not forget that the 2070 is a much bigger chip, a whopping 194 mm² larger, which you paid a premium for. As you can see, the 5700 XT is still more efficient even when averaging titles completely optimized for Nvidia. So in order for Nvidia to clearly beat AMD, they had to use a chip that is 294 mm² larger!

Are the red flags flapping about yet? They should be. Now, granted, this isn't on the same node, but that's not the point I'm making. The point is what "you" bought in the past two years: you paid more for Nvidia's older uarch/node to get a competitive card. That's the correlation. Furthermore, rumor has it that Nvidia is using their Titan Ampere to compete against Navi 2x. We shall see.

It also gives you a slight glimpse of what you can expect out of Navi 2x.

I have been wondering if AMD have got more room to play with on price reductions. That's where they are killing Intel at the moment: offering good performance that is affordable. If, as you say, their dies are very small and they'll soon be on a comparable node to Nvidia, then Nvidia won't have the cheaper-node advantage. That leaves die size and economies of scale to affect costs. I'd say the best performance for the price is most people's main criterion. Less so on this forum, where outright performance tends to win over.

I'd say it's all about the pricing: if AMD get that right they'll sell. Assuming no PR issues!
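To put some very rough numbers on the die-size and economies-of-scale point (a sketch only: the wafer price and defect density below are assumptions picked for illustration, not known figures):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost_usd, defects_per_cm2):
    """Simple Poisson yield model: the bigger the die, the more likely a defect lands on it."""
    yield_fraction = math.exp(-defects_per_cm2 * die_area_mm2 / 100)  # convert mm² to cm²
    good_dies = dies_per_wafer(die_area_mm2) * yield_fraction
    return wafer_cost_usd / good_dies

# Assumed inputs (illustrative only): $9,000 per wafer, 0.1 defects/cm²
for area in (251, 445, 545):
    print(f"{area} mm² die: ~${cost_per_good_die(area, 9000, 0.1):.0f} per good die")

# With these guesses the 251 mm² die costs roughly a third of what the 545 mm² die
# does per working chip; different inputs shift the numbers, but the gap stays large.
```

That relative gap between a small and a large die is where any pricing headroom would come from.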
 
Soldato
Joined
8 Jun 2018
Posts
2,827
I have been wondering if AMD have got more room to play with on price reductions. That's where they are killing Intel at the moment: offering good performance that is affordable. If, as you say, their dies are very small and they'll soon be on a comparable node to Nvidia, then Nvidia won't have the cheaper-node advantage. That leaves die size and economies of scale to affect costs. I'd say the best performance for the price is most people's main criterion. Less so on this forum, where outright performance tends to win over.

I'd say it's all about the pricing: if AMD get that right they'll sell. Assuming no PR issues!
I'm still not completely sold on the rumor that Navi 23 is a Navi 10 refresh, when they could use Navi 22's harvested dies to do the same thing. It doesn't add up, from a cost perspective, to create a separate new uarch/chip just as a Navi 10 replacement. I do wonder if it is a 5 nm chip (rumor). Remember:
Navi 21 is Big Navi.
Navi 23 is supposed to be the Nvidia Killer.
 
Associate
Joined
25 May 2012
Posts
161
I have been wondering if AMD have got more room to play with on price reductions. That's where they are killing Intel at the moment: offering good performance that is affordable. If, as you say, their dies are very small and they'll soon be on a comparable node to Nvidia, then Nvidia won't have the cheaper-node advantage. That leaves die size and economies of scale to affect costs. I'd say the best performance for the price is most people's main criterion. Less so on this forum, where outright performance tends to win over.

I'd say it's all about the pricing: if AMD get that right they'll sell. Assuming no PR issues!
Let's hope their release drivers are more stable than the RX 5700/5700 XT drivers. Stability comes first; no point having all the bells and whistles if the card cannot even function reliably. Not hating on AMD, I have an AMD GPU myself, but sometimes you have to be a bit critical of flaws in product releases that are totally unacceptable at the price point we are paying for them.
 
Soldato
Joined
6 Aug 2009
Posts
7,071
Let's hope their release drivers are more stable than the RX 5700/5700 XT drivers. Stability comes first; no point having all the bells and whistles if the card cannot even function reliably. Not hating on AMD, I have an AMD GPU myself, but sometimes you have to be a bit critical of flaws in product releases that are totally unacceptable at the price point we are paying for them.

Agreed, but I didn't have any issues myself. Could be because I'm on X570? Maybe they prioritised the new platform first?
 
Associate
Joined
17 Sep 2018
Posts
1,431
Have you watched it? It all seems logical and, when applied to already known information, ties up well. If you've got some extra insight I'd be interested to hear it. Barring that, Jim's "analysis" is the best guess I've heard so far. After all, an analysis is just a detailed look at information; whether that information is good or just the best we have to go on is immaterial.

I'm watching it now. He says Big Navi will have to be underclocked to keep power under control. I can't see them downclocking, and if they did it would be a huge overclocker. But as an enthusiast part that could claim the throne, they'll keep the clock speed up even if it draws 400 watts imo.

Also, he says the PS5 has a clock of 1800 MHz and a boost clock of 2200 MHz, so he predicts RDNA2 will be 2300 MHz. Whereas the PS4 Pro is 300 MHz less than a 580/Vega, and both of those overclock 700 MHz higher than a PS4 Pro. So that sounds conservative.

With Nvidia he doesn't have anything concrete, and he just ends up saying they'll win because they're launching a Titan, which isn't necessarily true. He says Nvidia can launch a larger die, which is true, though it goes against their business philosophy of profit maximisation. I think it's highly likely Big Navi will be very close to a 3080 Ti and might beat it. Nvidia can point to its features.
 
Soldato
Joined
6 Aug 2009
Posts
7,071
I'm watching it now. He says Big Navi will have to be underclocked to keep power under control. I can't see them downclocking, and if they did it would be a huge overclocker. But as an enthusiast part that could claim the throne, they'll keep the clock speed up even if it draws 400 watts imo.

Also, he says the PS5 has a clock of 1800 MHz and a boost clock of 2200 MHz, so he predicts RDNA2 will be 2300 MHz. Whereas the PS4 Pro is 300 MHz less than a 580/Vega, and both of those overclock 700 MHz higher than a PS4 Pro. So that sounds conservative.

With Nvidia he doesn't have anything concrete, and he just ends up saying they'll win because they're launching a Titan, which isn't necessarily true. He says Nvidia can launch a larger die, which is true, though it goes against their business philosophy of profit maximisation. I think it's highly likely Big Navi will be very close to a 3080 Ti and might beat it. Nvidia can point to its features.

I mostly agree. It's extrapolation, predicting future behaviour based on past behaviour. It's not a bad idea to be a bit conservative given the way the internet behaves! The best I'd go for is that it looks competitive, and it will be far more about pricing than a few FPS here or there.
 
Associate
Joined
17 Sep 2018
Posts
1,431
I mostly agree. It's extrapolation, predicting future behaviour based on past behaviour. It's not a bad idea to be a bit conservative given the way the internet behaves! The best I'd go for is that it looks competitive, and it will be far more about pricing than a few FPS here or there.

For most of us it's more about being an interested spectator, because we won't be buying flagship GPUs. The fact AMD are claiming a 50% improvement in performance per watt is massive, and will be very interesting and promising. Does that mean its 36-compute-unit card will get a jump in performance similar to the RX 580 to the RX 5700, as both have a similar number of compute units? If that happened you'd get 2080 Ti performance at the mid-range. Then we'd probably get the 5700/5600 series going end of life and being sold off cheap. Personally I'll hang onto my Vega until 6700 cards are end of life and being sold off.
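(Just to spell out the arithmetic behind a "+50% performance per watt" claim; the baseline figures here are placeholders I've made up, not measurements.)

```python
# Toy arithmetic only: what "+50% performance per watt" could mean at a fixed board power.
baseline_perf = 100.0      # arbitrary performance index for today's 36 CU card
baseline_power_w = 180.0   # assumed board power

perf_per_watt = baseline_perf / baseline_power_w
new_perf_per_watt = perf_per_watt * 1.5            # the claimed +50%

# Same power budget -> performance scales with the efficiency gain
print(f"At {baseline_power_w:.0f} W: index {new_perf_per_watt * baseline_power_w:.0f} vs {baseline_perf:.0f}")

# Same performance -> power drops by a third
print(f"For index {baseline_perf:.0f}: ~{baseline_perf / new_perf_per_watt:.0f} W vs {baseline_power_w:.0f} W")
```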
 
Man of Honour
Joined
13 Oct 2006
Posts
91,027
Have you ever driven down a road and noticed a clearing where there's a line of trees indicating a forest behind it?
Those few trees don't actually hide the forest unless you concentrate solely on them. So don't become distracted.

RDNA2 and RDNA3 (rumored to arrive around the DDR5 debut in 2022) should provide some insight into AMD's direction. That Navi 23 chip is a peculiarity I've not heard much more about yet. But I digress.
I don't believe, so far, that RDNA2 performance is the whole picture. What will also be of interest is the number of dies and the margins per wafer vs Nvidia.



Die Size 251 mm² (Navi 10, RX 5700 XT) ........................ Die Size 445 mm² (TU106, RTX 2070)

It takes Nvidia a larger chip (the 2070) and specific game optimizations to actually be competitive. When they lost that competitive edge, they went with an even larger die!


Die Size 545 mm² (TU104, RTX 2070 Super)

The 2070 Super was Nvidia's plan to counter the 5700 XT. Think about that for a minute. They simply threw money at it. This is, and has always been, their strategy: brute force.


However, that's not a winning strategy in the long term, as it's very similar to Intel's market strategy (because they too dominate the market). Cheerleading "the cause" won't change or improve that market strategy. Turing has shown us that Nvidia reached an equilibrium between cost and price, and the consumer market isn't bearing it at all when compared to Pascal.




Some were not aware of the similarities in uarch between the 5700 XT and the 2070. The biggest difference, pardon the pun, is the die size. So all we needed to do was clock both at the same speeds and see the results.

But hold your horses. Let's not forget that the 2070 is a much bigger chip, a whopping 194 mm² larger, which you paid a premium for. As you can see, the 5700 XT is still more efficient even when averaging titles completely optimized for Nvidia. So in order for Nvidia to clearly beat AMD, they had to use a chip that is 294 mm² larger!

Are the red flags flapping about yet? They should be. Now, granted, this isn't on the same node, but that's not the point I'm making. The point is what "you" bought in the past two years: you paid more for Nvidia's older uarch/node to get a competitive card. That's the correlation. Furthermore, rumor has it that Nvidia is using their Titan Ampere to compete against Navi 2x. We shall see.

It also gives you a slight glimpse of what you can expect out of Navi 2x.

Turing has a bunch of extra space taken up with additional features - Tensor cores and additional RT functionality, amongst other things. Turing is a forward-looking architecture.
 
Associate
Joined
14 Aug 2017
Posts
1,195
Have you watched it? It all seems logical and, when applied to already known information, ties up well. If you've got some extra insight I'd be interested to hear it. Barring that, Jim's "analysis" is the best guess I've heard so far. After all, an analysis is just a detailed look at information; whether that information is good or just the best we have to go on is immaterial.

If the information is crap, the analysis is pointless.
 
Soldato
Joined
26 Sep 2010
Posts
7,152
Location
Stoke-on-Trent
Turing has a bunch of extra space taken up with additional features - Tensor cores and additional RT functionality, amongst other things. Turing is a forward-looking architecture.
Pfft, none of the additional features of the Turing architecture are required on a gaming card; they are solutions looking for problems, there to circumvent Nvidia's penny-pinching and justify inflated profit margins. Tensor-powered DLSS would not be required if Nvidia hadn't cheaped out on the RT capabilities in the first place. None of the AI in games is sophisticated enough that it can't be done on the CPU. And now Ampere is doubling down on this. Tensor-accelerated memory compression? Really? Or how about just putting the correct amount of VRAM on the card to begin with?
 
Man of Honour
Joined
13 Oct 2006
Posts
91,027
Pfft, none of the additional features of the Turing architecture are required on a gaming card; they are solutions looking for problems, there to circumvent Nvidia's penny-pinching and justify inflated profit margins. Tensor-powered DLSS would not be required if Nvidia hadn't cheaped out on the RT capabilities in the first place. None of the AI in games is sophisticated enough that it can't be done on the CPU. And now Ampere is doubling down on this. Tensor-accelerated memory compression? Really? Or how about just putting the correct amount of VRAM on the card to begin with?

None of what you said actually makes sense.

DLSS's applications go far beyond RT (personally I'm not a fan of it, but that is another story).
None of the AI-related features is purely for gaming (or in-game monster AI) - but just because games don't require it now doesn't mean it couldn't be used to encourage more sophisticated use of AI in games.
Memory compression techniques potentially go far beyond simple storage - depending on context, compression can achieve ratios of throughput and storage that raw hardware alone can't reach, useful for things like streaming from large resource sets, etc.
 
Last edited: