
It looks like the 'real'/affordable RDNA3 + next-gen NV desktop cards won't launch until September. Thoughts?

Soldato
Joined
6 Feb 2019
Posts
17,868
AMD says it could have competed with the RTX 4090 if it wanted to, but it doesn't want to make a 600 W GPU that costs over $1,000. AMD says it wants to keep making GPUs priced no higher than $999, as this is the mainstream sweet spot.

It's nice of AMD to confirm that $999 USD is mainstream pricing for GPUs

 
Soldato
Joined
28 Oct 2011
Posts
8,481
Also further confirmation (as if any were needed) that scalpers, chip shortages and viruses of unspecified origin were just excuses for insane pricing...

Get stuffed AMD and NV - I'm very much enjoying the new TV I bought instead. Next up, a new fridge freezer and possibly a new bike (analogue).
 
Soldato
Joined
17 Aug 2009
Posts
10,782
AMD says it could have competed with the RTX 4090 if it wanted to, but it doesn't want to make a 600 W GPU that costs over $1,000. AMD says it wants to keep making GPUs priced no higher than $999, as this is the mainstream sweet spot.

It's nice of AMD to confirm that $999 USD is mainstream pricing for GPUs


Ehhhh, they said $999 was the upper limit; I dunno why the word 'mainstream' is being used. Yoinked a translation of the interview:

Technically, it is possible to develop a GPU with specs that compete with theirs (NVIDIA's). However, a GPU developed that way would come to market as a "graphics card with a TDP (thermal design power) of 600 W and a reference price of $1,600". After considering whether general PC gaming fans would accept that, we chose not to adopt such a strategy.

The RDNA 3-based GPU "Radeon RX 7900 XTX" released this time is targeted at $999, which we consider the upper price that high-end users among general PC gaming fans will accept. The "Radeon RX 7900 XT" below it is $699.

The pricing strategy is the same as for the previous RDNA 2 generation (Radeon RX 6000 series), with the top-end "Radeon RX 6900 XT" and "Radeon RX 6800 XT" targeting $999 and $699, respectively. However, the target price changes with each GPU generation.

We take this strategy to fit into the mainstream infrastructure (hardware environment) used by today's PC gaming enthusiasts. While demanding high performance, the card should run on an existing, "common sense" power supply unit, be coolable inside a typical case, and be installable without requiring an extremely large case. The Radeon RX high-end product group was designed with these things in mind.

If I may say a few words: we at AMD do develop and release ultra-high-performance GPUs. For example, two years ago we announced the "Instinct MI200 series" as the world's first multi-die GPU. The top model of the series, the "Instinct MI250X", marked the world's fastest theoretical FP32 (single-precision floating-point) performance at the time, at about 48 TFLOPS. Since it is an Instinct series product, it is not a GPU for gaming.

However, if you look at the Instinct series, you can see that AMD can develop (ultra-high-end GPUs) if it wants to. We just don't think such GPUs are suitable for consumer use.

It was certainly hot back then. However, the performance range, development costs, and manufacturing costs of high-end GPUs are quite different now from what they were in the past.

We haven't planned a "$1,600 GPU" like the competition's (NVIDIA's) for PC gaming fans in recent years. Instead, we are focusing on GPUs in the $1,000 class with a good balance between performance and cost.

There is a difference of about $600 from them (laughs), but I think that putting that $600 toward other parts, such as the CPU, will lead to a better gaming experience.

The chiplet architecture can be used to realize high-performance processors while reducing manufacturing costs. As you know, current high-end GPU cores contain more than 10,000 arithmetic cores (floating-point units), over 1,000 times the core count of a CPU. If you try to interconnect GPU dies at that scale, the number of connection points becomes enormous and reliable electrical signal transmission cannot be guaranteed. So, at the moment, it is difficult, not only in engineering effort but also in cost, to connect GPU dies with the same "glue" used between CPU dies.

It's not that it can't be done, but rather than doing so, it is currently more efficient and less costly to build the GPU (core) as one large die. Therefore, in the current generation (RDNA 3 architecture), we decided to postpone a "multi-GPU die" design.

In the semiconductor business, we must constantly assess the trade-off between performance and manufacturing cost. We will continue to challenge ourselves in this field (multi-GPU die), and I hope you will look forward to it in the future.

I don't see a problem with flagships being $999, personally.

It is interesting that they say they cannot currently make a multi-die GPU as efficient as a monolithic one.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,876
Here we go, a bit later than I would have liked:

RTX 4070 available to buy on 13th April - This info appears to be genuine.

No sign of Navi32, I'd guess they want to launch these alongside FSR3 (which there could be more info on this month at GDC).

No word on price. Personally, I would find between £550-£575 acceptable, considering this is intended to be a successor to the RTX 3070 (and taking inflation into account).

My impression is that the rumoured 5,888 shaders sound rather low. Nvidia could only make up for that by clocking these GPUs higher.

To exceed 30 TFLOPS, a clock rate of 2,600 MHz would be needed; 2,800 MHz would get them to 32.9 TFLOPS.
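The arithmetic above follows from the usual peak-throughput formula: shaders × 2 FLOPs per clock (one fused multiply-add) × clock speed. A quick sketch using the rumoured 5,888-shader figure (the shader count and clocks here are the post's assumptions, not confirmed specs):

```python
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical FP32 throughput, assuming one FMA (2 FLOPs) per shader per clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(round(peak_tflops(5888, 2600), 1))  # 30.6 -> just over 30 TFLOPS
print(round(peak_tflops(5888, 2800), 1))  # 33.0 (32.97 before rounding)
```

The quoted 32.9 figure is the same calculation with the result truncated rather than rounded.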

It looks like there is going to be a RTX 4070 FE, which is always something to watch out for.
 
Soldato
Joined
6 Feb 2019
Posts
17,868
Makes sense to launch cheap cards alongside FSR3, because then you can have deceptive PowerPoint slides showing super-high performance versus the previous generation thanks to FSR3 frame generation
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,876
Makes sense to launch cheap cards alongside FSR3, because then you can have deceptive PowerPoint slides showing super-high performance versus the previous generation thanks to FSR3 frame generation
Deceptive or realistic? I personally think AMD should aim for a universal driver option that can enable FSR3 (or at least the frame generation part) in all DX11/12 games. Probably not going to happen though.

Any marketing material should really point out that achieving 4K (natively) at smooth framerates is still very difficult in the most demanding games.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,876
It's notable that you don't see really high clock rates (at stock) on AD102 and AD103 GPUs... they manage about 2,600 MHz.

It's kind of what I was saying before in another thread: the best-value cards for consumers are often the ones with the highest clock rates (relative to other cards in the same generation). They are also generally cheaper to produce than the highest-end models.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,876
It's not a bad increase from the last gen, is it?

EDIT - Well, I was wrong about the RTX 4070 maybe having more than 5,888 shaders... Hopefully AMD will take advantage of that.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,876
So, I thought this was worth mentioning:

(image attachment)


A new professional Navi31 based card was released recently and it looks powerful and quite power efficient.

I hope there's a consumer version with less VRAM - Ideally at some point in Q2 2023.

Looking back at Navi21, all of the workstation cards had N21 consumer equivalents, all with 16GB of VRAM:
https://www.techpowerup.com/gpu-specs/amd-navi-21.g923
 
Associate
Joined
3 May 2021
Posts
1,232
Location
Italy
So, I thought this was worth mentioning:

(image attachment)


A new professional Navi31 based card was released recently and it looks powerful and quite power efficient.

I hope there's a consumer version with less VRAM - Ideally at some point in Q2 2023.

Looking back at Navi21, all of the workstation cards had N21 consumer equivalents, all with 16GB of VRAM:
https://www.techpowerup.com/gpu-specs/amd-navi-21.g923
Halve the RAM, up the clocks and wattage a bit, and we've got the 7800 XT imho.
 
Associate
Joined
29 Jan 2015
Posts
361
Halve the RAM, up the clocks and wattage a bit, and we've got the 7800 XT imho.

I see that, or a highly clocked N32, as the 7800 XT options, and both will probably perform around 6950 XT levels. N32 would give AMD a bit more pricing freedom because the die is 33% smaller than N31's, but if they run similar power envelopes then board costs are probably a tie, so the overall BOM probably won't be that different. If they released this at around £550-600 it would be a better option than the 4070 IMO.
 
Associate
Joined
29 Jan 2015
Posts
361
Navi33 is already out in laptops, so it seems AMD is more interested in getting rid of the RX 6600 series first.

N33 is drop-in compatible with N23 board designs, so in theory AIBs can switch their current 6600 designs over to 7600 cards with very little hassle and very little side-by-side selling.
 
Soldato
Joined
9 Nov 2009
Posts
24,929
Location
Planet Earth
N33 is drop-in compatible with N23 board designs, so in theory AIBs can switch their current 6600 designs over to 7600 cards with very little hassle and very little side-by-side selling.

It appears to draw less power too, so the cards should be cheaper to make. It will be interesting to see what performance uplift RDNA 3 delivers as a uarch, because the specs look almost identical.
 