RX 7900XT, 15,360 cores, MCM, Tapeout Q4

Still a bit far out for credible rumours, I think.

As regards power usage, does anyone have figures for an underclocked 6900XT and RTX 3090?

If chiplets work, I don't see them having any choice but to run them at or below the perf/watt sweet spot for the process, as cranking them as hard as a 3090 or 6700XT won't work.
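A rough way to see why the sweet spot matters (a toy model, not real 6900XT or 3090 measurements): dynamic power scales roughly with frequency times voltage squared, and voltage has to climb with clocks, so power rises much faster than performance near the top of the curve. Something like:

```python
# Toy model: dynamic power ~ f * V^2. The -15% clock / -10% voltage figures
# below are illustrative assumptions, not measured 6900XT or 3090 numbers.

def relative_power(freq_ratio, volt_ratio):
    """Dynamic power relative to stock, assuming P ~ f * V^2."""
    return freq_ratio * volt_ratio ** 2

stock = relative_power(1.00, 1.00)
underclocked = relative_power(0.85, 0.90)  # assumed -15% clocks, -10% voltage

print(f"Performance kept:        ~{0.85:.0%}")                   # ~85%
print(f"Power relative to stock: ~{underclocked / stock:.0%}")   # ~69%
```

In that toy model you keep roughly 85% of the performance for about 69% of the power, which is the kind of trade-off a chiplet card running below the sweet spot would be banking on.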
 
It would surely have a lower TDP.

Even for server use, 600 W+ is not sustainable.

The 3x core count 7900XT would probably have its TDP lowered per chiplet, and probably its clock speeds too (so the 7800XT would have higher clock speeds than the 7900XT).

Servers these days are in the multi-kilowatt range now; check the video with the 5+ kilowatt server just for GPUs, and there are some even higher than that. 600 W per GPU is nothing in a server environment, and servers are now getting exotic cooling and water cooling in rooms that are chilled. Also, 500 W+ GPUs are already available for servers; there are even versions of the A100 that can run at 500 W in some servers, and there have been GPUs before that used more than that.

Check this server with 8 x A100 at 400 W each. It has 4 x 3 kW PSUs in it.


https://www.servethehome.com/inspur-nf5488a5-8x-nvidia-a100-hgx-platform-review-amd-epyc/
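Just to put those figures together (the GPU and PSU numbers are the ones quoted above and in the review; the 2+2 redundancy split is an assumption, not something the review states):

```python
# Power-budget check for the quoted Inspur NF5488A5 figures: 8x A100 at 400 W
# each, fed by 4x 3 kW PSUs. The 2+2 redundancy split is an assumption.

gpu_count, gpu_tdp_w = 8, 400          # 8x A100 at 400 W, as quoted above
psu_count, psu_rating_w = 4, 3000      # 4x 3 kW PSUs, as quoted above

gpu_power_w = gpu_count * gpu_tdp_w            # 3200 W for the GPUs alone
installed_w = psu_count * psu_rating_w         # 12000 W installed capacity
usable_w = (psu_count - 2) * psu_rating_w      # 6000 W if run 2+2 redundant

print(f"GPU power draw:                     {gpu_power_w} W")
print(f"Installed PSU capacity:             {installed_w} W")
print(f"Usable with assumed 2+2 redundancy: {usable_w} W")
```

Even with half the PSUs held in reserve, the usable 6 kW is nearly double the GPUs' 3.2 kW, leaving plenty for CPUs, memory and fans, so a 600 W GPU really is not a problem in that environment.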

See, my issue with the 7900XT-type card for home and workstation use is that they cannot keep using more power than they do right now. It's getting silly to run 500 W GPUs in a PC case, and then people like me need two or more of them to do our work; my system right now is basically a heater, with 2 x 3090s that can pull 470 W each.

Still a bit far out for credible rumours, I think.

As regards power usage, does anyone have figures for an underclocked 6900XT and RTX 3090?

If chiplets work, I don't see them having any choice but to run them at or below the perf/watt sweet spot for the process, as cranking them as hard as a 3090 or 6700XT won't work.


A good example would be this: https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet.pdf

Nvidia takes the same A100 GPU and makes an SXM A100 and a PCIe A100: the same GPU in two different form factors and power envelopes.

The SXM version is 400 W and the PCIe version is 250 W.

The SXM is the 100% performance level, while the PCIe version is about 90% of the SXM. So by reducing the power by 150 W they have lost about 10% of the performance.


So if the 7900XT is going to use, say, 600 W in an SXM-style version, the PCIe version would be 375 W if we take off the same amount Nvidia did, basically 75 W off each 200 W. That would still be a 375 W card in PCIe form, and let's say its performance again drops by about 10% compared to the SXM 7900XT.

I don't know; they could round it down to 350 W or 300 W and lose some more performance, but fit it into a power envelope that would work for a PCIe-form GPU card.
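Putting that rule of thumb into a quick sketch (the 75 W off each 200 W ratio just comes from the A100 numbers above; the 600 W SXM-style 7900XT is purely hypothetical):

```python
# Rule of thumb from the A100 numbers above: the SXM (400 W) to PCIe (250 W)
# drop works out to ~75 W shaved off every 200 W. The 600 W input is a purely
# hypothetical SXM-style 7900XT, not a spec or leak.

def pcie_tdp(sxm_tdp_w, cut_per_200w=75):
    """Scale an SXM-style TDP down by the A100's observed ratio."""
    return sxm_tdp_w - (sxm_tdp_w / 200) * cut_per_200w

print(pcie_tdp(400))  # 250.0 -> reproduces the real A100 SXM-to-PCIe drop
print(pcie_tdp(600))  # 375.0 -> the hypothetical 600 W card lands at 375 W
```

Whether the performance hit stays at roughly 10% at that scale is a separate assumption; the A100 only gives us one data point.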

BUT I still think that if that GPU design is right, it would end up in a server compute card, and the PCIe version would be half as big, with more of its silicon used for gaming-type functions like RT or AI features. We have to wait and see what comes out in the end, but making GPUs with what looks like a four-chiplet design with a single GPU each, or a dual-chiplet design with two GPUs on each chiplet, is going to scale poorly. For anyone who knows how scaling works on a GPU, it's not really the same as on a CPU. So I have doubts that a design like that would work well for gaming; the scaling will not be great, and while the theoretical TFLOPS will look huge, that's compute, and in the real world gaming will not work that way.

Look at the 3090 as an example: it has 2x the CUDA cores of a 2080 Ti, but in games an overclocked 3090 is only about 50% faster, while in compute workloads it is 2x+ the performance of a 2080 Ti, and that is where the 3090 really shines. People are buying these cards for gaming and basically not getting the 2x leap in performance in their games; for business use, hobby rendering and compute-based workloads they are an amazing leap. We will have to wait and see, but I think the first gaming card with chiplets will have two GPU cores to start with; the design shown for the 7900XT is really four GPUs on one card, and so far there is no idea how many chiplets it is based on.
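A quick way to see the scaling gap, using only the figures quoted above (approximate forum numbers, not benchmarks):

```python
# Scaling efficiency from the figures quoted above: ~2x the CUDA cores of a
# 2080 Ti, ~50% faster in games (overclocked), ~2x in compute workloads.
# All three numbers are the rough ones from the post, not measured results.

core_ratio = 2.0        # 3090 vs 2080 Ti CUDA core count
gaming_uplift = 1.5     # ~50% faster in games
compute_uplift = 2.0    # ~2x in compute

print(f"Gaming scaling efficiency:  {gaming_uplift / core_ratio:.0%}")   # ~75%
print(f"Compute scaling efficiency: {compute_uplift / core_ratio:.0%}")  # ~100%
```

If a four-GPU chiplet card scaled at the gaming end of that range, a huge theoretical TFLOPS figure would translate into a much smaller real-world uplift.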
 
Was watching the MLID video; he said that both AMD and NVIDIA want out of the sub-$400 market. Again, add salt to taste, but this wouldn't surprise me.

That's because every node shrink now costs double the previous one: 7nm to 5nm is double the price to manufacture, and then 5nm to 3nm is higher again.

The only way I see them making GPUs for under $400 now is if they use an old, mature node to keep costs down, and those cards will not be all-singing, all-dancing like the higher end on the newer nodes. The lower end will end up being cards used basically just for display output, without a lot of performance for things like high-end games but good enough for media playback, light gaming and general home/office workloads; basically something slightly better than an iGPU.
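A toy illustration of why the doubling hurts at the low end (the wafer prices, die size and yield below are placeholder assumptions; only the doubling per node comes from the point above):

```python
# Toy die-cost model. Wafer prices, die size and yield are assumptions made up
# for illustration; the only thing taken from the post is the price doubling
# from node to node.

import math

def gross_dies(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross-die count for a round wafer (ignores edge loss and scribe lines)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

die_area_mm2 = 200                                        # assumed budget-GPU die size
yield_rate = 0.8                                          # assumed
wafer_cost = {"7nm": 9000, "5nm": 18000, "3nm": 36000}    # assumed $/wafer, doubling per node

for node, cost in wafer_cost.items():
    good_dies = gross_dies(die_area_mm2) * yield_rate
    print(f"{node}: ~${cost / good_dies:,.0f} per good die")
```

The silicon cost per die doubles right along with the wafer price, and on a sub-$400 card there is far less margin to absorb that than on a flagship.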
 
Was watching the MLID video; he said that both AMD and NVIDIA want out of the sub-$400 market. Again, add salt to taste, but this wouldn't surprise me.
Quick, easy, unoptimised console ports, here we come!
There is no way developers will take PC gaming seriously if the cheapest entry-level card costs as much as a whole console.
AMD at least get some revenue from console hardware, but if there is anything to this rumour then neither AMD nor Nvidia have thought this through properly, have they?

Or is it the classic short-term gain (higher margins) vs long-term pain (killing the PC gaming golden goose)?
 
Quick, easy, unoptimised console ports, here we come!
There is no way developers will take PC gaming seriously if the cheapest entry-level card costs as much as a whole console.
AMD at least get some revenue from console hardware, but if there is anything to this rumour then neither AMD nor Nvidia have thought this through properly, have they?

Or is it the classic short-term gain (higher margins) vs long-term pain (killing the PC gaming golden goose)?

Those end of year bonuses won't get fatter without higher margins!
 
I expect people happily running dual 3090s would be interested. Not because of the lower power draw, but because of the performance that would be expected.

For me, no, because I require CUDA, as do many who use their cards for compute. There are great uses for AMD's compute too (when you enable it, because it's off by default), but a lot of applications these days are written with CUDA in mind, and even rendering now is CUDA and OptiX.
 
Quick, easy, unoptimised console ports, here we come!
There is no way developers will take PC gaming seriously if the cheapest entry-level card costs as much as a whole console.
AMD at least get some revenue from console hardware, but if there is anything to this rumour then neither AMD nor Nvidia have thought this through properly, have they?

Or is it the classic short-term gain (higher margins) vs long-term pain (killing the PC gaming golden goose)?

It's because they know Intel are coming, with their own fabs, and over time they will eliminate AMD's and Nvidia's fat margins in this market segment. Intel will be very competitive in the <6600XT area of the market.
 
It's because they know Intel are coming, with their own fabs, and over time they will eliminate AMD's and Nvidia's fat margins in this market segment. Intel will be very competitive in the <6600XT area of the market.

Intel :cry:... Come on, fat margins are Intel's speciality. Intel and GPUs is going to be comical for a good half-decade at least. The only market they will be in is GPUs for general use; the gaming market is another matter, and they will not be doing too well in that for a good while.
 
While Intel love fat margins and probably have a huge number of people who get big bonuses from those fat margins, it is not like they have never dumped stuff.
Their Atom panic-mode 'contra revenue' bill ran into billions.
And going way back, the reason the whole workstation market eventually went x86 was that Intel's margins were a lot lower than Silicon Graphics', Sun's, etc. (obviously having huge scale helped, something Intel forgot when they neglected Atom and other lower-margin parts in the first place).

I thought part of the problem was supplying AMD's cores with enough data, so you couldn't just keep adding them? What changed, I wonder.

Hazarding a guess, I suspect RDNA and RDNA 2 laid the groundwork for any changes they needed to make. And then Infinity Cache etc. gave them some experience on how to tie things together.

Still, I would have thought that going chiplet requires a lot of driver rewriting to keep splitting work up, etc.

All very interesting, and totally outside my maximum GPU purchase price, but I would imagine v1.0 will have some issues which hopefully a later driver fixes.
 
If only there had been multi-core GPUs in the past to help us in this discussion. Would be good if they were called something like GTX 590 or R9 295X2?
 