RDNA 3 rumours Q3/4 2022

Status
Not open for further replies.
In other news today, an investment company says both AMD and Nvidia should be worried, because Intel is going to overtake TSMC in having the highest-transistor-density node by the end of next year.

Doubt that will mean anything on the GPU side of things. But AMD may fall behind again quite a bit in CPUs.
 
Tldr?

In other news today, an investment company says both AMD and Nvidia should be worried, because Intel is going to overtake TSMC in having the highest-transistor-density node by the end of next year.
Clever scaling. AMD are more able to max out shader numbers on a given reticle with their "chiplet" design. Cache etc. can move to the cheap nodes to make room on the GPU die for the horsepower.
 
Tldr?

In other news today, an investment company says both AMD and Nvidia should be worried, because Intel is going to overtake TSMC in having the highest-transistor-density node by the end of next year.

Like in 2016, 2017, 2018, 2019, 2020...... Intel's return to node leadership is just around the next corner.

Some people are deluded, like Jon Peddie: the same Jon Peddie telling MLID that Intel will sell ARC en masse just because it's Intel. MLID himself postured that ARC will be a huge success, because it's Intel, and if you don't believe that you're stupid, because it's Intel.

And then there's me, stupid, perhaps, but with a firm grip on reality....
 
Clever scaling. AMD are more able to max out shader numbers on a given reticle with their "chiplet" design. Cache etc. can move to the cheap nodes to make room on the GPU die for the horsepower.
They weren't able to overtake Nvidia or Intel with a 1.5-node and 2-node advantage respectively, and miraculously they'll overtake at a disadvantage?
 
They weren't able to overtake Nvidia or Intel with a 1.5-node and 2-node advantage respectively, and miraculously they'll overtake at a disadvantage?
They seem to be doing OK in sales to me ;) I'd say go and watch the whole AdoredTV video and then decide if you think it sounds reasonable. Adds up to me, but what do I know :cry:
 
Like in 2016, 2017, 2018, 2019, 2020...... Intel's return to node leadership is just around the next corner.

Some people are deluded, like Jon Peddie: the same Jon Peddie telling MLID that Intel will sell ARC en masse just because it's Intel. MLID himself postured that ARC will be a huge success, because it's Intel, and if you don't believe that you're stupid, because it's Intel.

And then there's me, stupid, perhaps, but with a firm grip on reality....

Oh... on this, Intel are known to have told AMD that they would be happy to make their CPUs.

I mean, OK, fine, but you are a direct product competitor, and manufacturing their chips gives you access to their technology secrets.
 
Tldr?

In other news today, an investment company says both AMD and Nvidia should be worried, because Intel is going to overtake TSMC in having the highest-transistor-density node by the end of next year.
AMD's dies are approximately 33% cheaper than Nvidia's.
Navi 31 is most likely shader limited, not bandwidth limited.
A 130% increase in shader count; based on this info there is a chance that AMD will beat Nvidia in raster this generation.

Summary of the last segment: Nvidia are in trouble. They are nearing the maximum number of shaders they can fit into a node. With AMD's design they can fit a lot more shaders (double, per the current rumour) on the same node.
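To see why "nearing the maximum" matters, here is a back-of-the-envelope sketch of the reticle argument. Every number below is an assumption made up for the illustration (the reticle limit, area per shader block, cache/I/O area), not a leaked spec:

```python
RETICLE_LIMIT_MM2 = 850   # rough single-exposure reticle limit (assumed round number)
MM2_PER_SHADER_BLOCK = 4  # hypothetical area per shader block on the leading node
CACHE_AND_IO_MM2 = 250    # hypothetical area for cache + memory I/O if kept on-die

# Monolithic: shaders must share the reticle with cache and I/O.
mono_blocks = (RETICLE_LIMIT_MM2 - CACHE_AND_IO_MM2) // MM2_PER_SHADER_BLOCK

# Chiplet: cache/I/O move to separate (cheaper) dies; the compute die is all shaders.
chiplet_blocks = RETICLE_LIMIT_MM2 // MM2_PER_SHADER_BLOCK

print(mono_blocks, chiplet_blocks)  # 150 vs 212 blocks: ~40% more on the same reticle
```

The exact figures are arbitrary, but the shape of the argument holds: a monolithic design pays for cache and I/O out of a fixed reticle budget, while a chiplet design spends nearly the whole budget on compute.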
 
AMD's dies are approximately 33% cheaper than Nvidia's.
Navi 31 is most likely shader limited, not bandwidth limited.
A 130% increase in shader count; based on this info there is a chance that AMD will beat Nvidia in raster this generation.

Summary of the last segment: Nvidia are in trouble. They are nearing the maximum number of shaders they can fit into a node. With AMD's design they can fit a lot more shaders (double, per the current rumour) on the same node.

I hope so. I want to see Lisa lay the smack down on her uncle.
 
AMD's dies are approximately 33% cheaper than Nvidia's.
Navi 31 is most likely shader limited, not bandwidth limited.
A 130% increase in shader count; based on this info there is a chance that AMD will beat Nvidia in raster this generation.

Summary of the last segment: Nvidia are in trouble. They are nearing the maximum number of shaders they can fit into a node. With AMD's design they can fit a lot more shaders (double, per the current rumour) on the same node.
Yep, it's the same issue Intel are looking at, monolithic dies only scale so far. AMD's approach looks like it will sidestep those limitations to a point.
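The "monolithic dies only scale so far" point is also a yield story, and the standard first-order Poisson defect model makes it concrete. The defect density below is an assumed illustrative value, not foundry data:

```python
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """First-order Poisson model: probability a die of the given area has zero fatal defects."""
    return math.exp(-area_mm2 * defects_per_mm2)

D0 = 0.001  # assumed defect density in defects/mm^2 -- illustrative only

# Same total logic area: one 600 mm^2 monolithic die vs six 100 mm^2 chiplets.
mono = poisson_yield(600, D0)
chiplet = poisson_yield(100, D0)

print(f"monolithic 600 mm^2 die yield: {mono:.1%}")    # roughly 55%
print(f"single 100 mm^2 chiplet yield: {chiplet:.1%}")  # roughly 90%
# Each defect kills far less silicon on a small die, so the cost per
# *good* mm^2 falls as a big design is split into chiplets.
```

This ignores packaging cost and the cross-die interconnect penalty, which is the trade chiplet designs are betting they can win.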
 
I still maintain the R9 290 was one of the best GPUs I've ever had; on the other side, I like the 2070 Super I have now just as much.

It's all good stuff.
The 290 was a very good card. I had one alongside a 295X2 before going quadfire. It was a noisy blighter, but as fast as a Titan when overclocked, for less than half the price.
 
They weren't able to overtake Nvidia or Intel with a 1.5-node and 2-node advantage respectively, and miraculously they'll overtake at a disadvantage?
On the versus Intel figures, I think you are making the classic mistake of seeing Intel doing well in ST and gaming with a brute-force monstrously huge P-core and therefore concluding that they are ahead.
In servers Intel are well behind both AMD and ARM (for those cloud providers big enough to do their own server chips using ARM's designs) in terms of perf/watt, perf/area, and scalability.

Intel really need a Xeon design using E cores for density, but that risks performing really poorly.
 
Like in 2016, 2017, 2018, 2019, 2020...... Intel's return to node leadership is just around the next corner.

Some people are deluded, like Jon Peddie: the same Jon Peddie telling MLID that Intel will sell ARC en masse just because it's Intel. MLID himself postured that ARC will be a huge success, because it's Intel, and if you don't believe that you're stupid, because it's Intel.

And then there's me, stupid, perhaps, but with a firm grip on reality....

Funnily enough, their claim is based on TSMC's forecasts, not Intel's: TSMC has delayed its production plans for nodes more advanced than what it can produce today.
 
And this is where AMD can do to Nvidia what they have done to Intel.

Nvidia, like Intel, have nowhere to go with monolithic designs. AMD could double the number of shaders on the GPU if they wanted to, and if they split the shaders into chiplets, as they already have with CDNA, the sky is the limit.

Jim predicts that even if AMD don't beat Nvidia this time round (and they might), they will with the next one, and the next one, and the next one...

And, like Intel, there is nothing Nvidia can do about it.

 
A lot of what Jim covers makes sense. The harrowing info was not the AMD stuff but the narrowing of Nvidia's options. After they were hacked, nothing special leaked about their upcoming improvements, so he may be on to something.
 
Yup, they are taking the **** but, as pointed out, it's no surprise why they did what they have done with Ada; the Ampere lineup pricing was beyond silly. I mean, in whose right mind did it make sense to charge an additional £750 for the next card up when the performance difference is <15%?

Sorry, I forgot, 24GB of VRAM and FC 6 make up for that extra cost, right @TNA

:cry: :cry: :cry:

What will be interesting to see is, when Ampere stock is gone, will Nvidia drop Ada prices substantially at some point next year...
Wow, I still can't believe you're arguing the 15% increase... The 3090/Ti was aimed at people who use the cards for work, or for sims that can use a lot of VRAM; as a gamer you were never the target for such a card. Also, the 3090/3090 Ti are the last of the real cards that can be used for proper work, as they had NVLink, which is worth more than the 15% to professionals: it lets us use 48GB of VRAM and double the CUDA cores via NVLink pooling.

I thought this silly argument was over with now?

You can't compare any 3080/3070/3060/3050-class card to a 3090-class card, as they are a totally different type of card. Anyone who purchased a 3090 just to game on needed their head examining, or had a use beyond gaming, or used sims that can easily use more than 8GB/10GB/12GB of VRAM. :rolleyes:

This argument is getting old, really...
 
Reading AMD's Q3 report and Q&A earlier... no good news for graphics imo and they're projecting weakly into '23. To me that reads like RDNA 3 will be iterative on RDNA 2, as I've always suspected, but even just pricing-wise it seems unlikely that they will rock the boat; I expect the same situation as with RDNA 2. I guess it makes sense going into a clear recession year but it also means Nvidia's not going to feel threatened before RDNA 4 or w/e they call the next one. We'll soon find out.
 
Reading AMD's Q3 report and Q&A earlier... no good news for graphics imo and they're projecting weakly into '23. To me that reads like RDNA 3 will be iterative on RDNA 2, as I've always suspected, but even just pricing-wise it seems unlikely that they will rock the boat; I expect the same situation as with RDNA 2. I guess it makes sense going into a clear recession year but it also means Nvidia's not going to feel threatened before RDNA 4 or w/e they call the next one. We'll soon find out.
If the architecture is actually chiplet-based, I would consider that much more than an "iterative" change.
 