
Poll: Ryzen 7950X3D, 7900X3D, 7800X3D

"Will you be purchasing the 7800X3D on the 6th?" (191 voters; poll closed)
Mine is actually misbehaving. I think because I'm using a very old Windows install (previously I had a Ryzen 2700X, then a 5800X and now the 7950X3D), some of the settings are messed up. Notably, I think the power plan isn't set up correctly, and something I installed may have modified it along the way.

I installed the Game Bar and Game Mode and installed the chipset drivers, but the parking isn't working correctly (I've got RTSS running so I can see per-core utilisation, and they're all in the high numbers). I used a program called ParkControl to enable core parking, which seems to have worked, but I'm not convinced I'm getting the full performance.
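If you want to check what's actually configured without third-party tools, here's a minimal sketch using Windows' own powercfg, assuming an elevated prompt; CPMINCORES is the setting alias that, as far as I know, utilities like ParkControl adjust:

    # Minimal sketch: inspect the active power plan and the normally hidden
    # core-parking setting via powercfg. Assumes Windows and an elevated
    # prompt. The -attributes line only unhides the setting so /q will list it.
    import subprocess

    def run(cmd: str) -> None:
        print(">", cmd)
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        print(result.stdout or result.stderr)

    run("powercfg /getactivescheme")  # X3D scheduling expects the Balanced plan
    run("powercfg -attributes SUB_PROCESSOR CPMINCORES -ATTRIB_HIDE")  # unhide it
    run("powercfg /q SCHEME_CURRENT SUB_PROCESSOR")  # look for the "core parking min cores" entry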

EDIT:
I did notice there's a setting in the BIOS (I've got a ROG Strix X670E-F Gaming WiFi) called "x3d core flex gaming", so I might try enabling that and see what happens. Has anyone else tried it?
From my understanding the power plan needs to be on Balanced. I personally always do a fresh install of Windows, just so that if I do have any issues, I know it's not that.
 
Just to back up my post above…
Weirdly, my conclusion is different to yours: if you want the best performance, get a 6000CL30 kit, because there is a difference (albeit not a massive one). I feel like if you're getting the best of the best (aka a 7950X3D and a 4090), you might as well shell out for the 6000CL30, which isn't even that expensive? I paid 170 for mine.
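For context on why the CL30 kit measures slightly better: first-word latency in nanoseconds works out to 2000 * CL / MT/s, so 6000CL30 lands at 10 ns flat. A quick sketch comparing it against a couple of slower kits (the comparison kits are illustrative, not recommendations):

    # First-word latency: ns = 2000 * CAS latency / transfer rate (MT/s).
    # The comparison kits are illustrative examples only.
    kits = [("DDR5-6000 CL30", 6000, 30),
            ("DDR5-6000 CL36", 6000, 36),
            ("DDR5-5600 CL40", 5600, 40)]
    for name, mts, cl in kits:
        print(f"{name}: {2000 * cl / mts:.2f} ns first-word latency")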
 
You clipped two lines and pretended the rest didn't exist. Which is a lie.

That's why I quoted everyone, including you.

Despite what you're saying, the entire time it was about claiming gaming crowns and the poor value in that.

Launch happened, and since then I have spoken against upgrading from an already decent CPU without a specific, beneficial use case in mind, because, as predicted, there's not enough in it.
I clipped two lines because that’s what you claimed I couldn’t quote.


The gaming crown I agree with. To claim such a silly thing, the CPU would have to be ahead in all games, in my opinion, not just 3 out of 6, for example. There is no clear winner, just pros and cons for each, and it comes down to what's more important for your use case in the end: price to performance, just gaming, or other workloads as well as gaming? There is something for everyone. Also, the previous 3D CPU has dropped in price and is still a very good option.

I posted that video as a sort of "have you seen what this idiot is saying now"; I didn't intend for people to think I was posting it as a source of valued information. I do think the majority of people could see it for what it was. I will, however, refrain from giving that guy any more attention, because I don't want him to get what he clearly wants, which is more of it.
 
Jufes/Framechasers exists because the mainstream tech reviewers don't try. He's carved out a niche of focusing on system tuning and then presenting it as a "CoD bro" to an edgy audience. In terms of the data itself, it's pretty good on the Intel side and OK on the AMD side.

One big flaw of his is disabling E-cores and enabling HT, which is the opposite of what you do on Intel for max gaming performance.
 
Jufes/Framechasers exists because the mainstream tech reviewers don't try. He's carved out a niche of focusing on system tuning and then presenting it as a "CoD bro" to an edgy audience. In terms of the data itself, it's pretty good on the Intel side and OK on the AMD side.

One big flaw of his is disabling E-cores and enabling HT, which is the opposite of what you do on Intel for max gaming performance.
He's very good at getting people fired up. He takes a hot topic, adds in a small percentage of fact, and then goes completely over the top to get attention. I suppose just testing the way people in here would prefer to see wouldn't be entertaining, so he chooses the clickbait style instead.
 
Wanted to back up @Bencher here with his statement regarding power draw, which people were skeptical about. He's right that RPL uses very low power and is efficient when doing normal tasks.

I wanted to capture the full screen so it's all very transparent. You can see Chrome with 6 tabs (one of them streaming Twitch), Outlook, Excel with a macro-heavy file loaded, Teams and Discord. The CPU is running at 11 W total, averages 16 W over the 3 minutes of runtime, and barely has a mini spike to 38 W.

The 16 W average is very good no matter how you spin it.

[attached screenshot: image.png]


This is what a typical desktop usage scenario would look like for me.
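If anyone wants to reproduce that 3-minute average themselves, here's a minimal sketch, assuming you've exported per-second package-power samples to CSV from a monitoring tool; the file name and column header are assumptions, so adjust them to match your logger's export:

    # Sketch: average and peak package power from a logged CSV.
    # "power_log.csv" and the column name are assumptions; adjust to
    # whatever headers your monitoring tool writes.
    import csv

    samples = []
    with open("power_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            try:
                samples.append(float(row["CPU Package Power [W]"]))
            except (KeyError, ValueError, TypeError):
                continue  # skip malformed or summary rows

    print(f"{len(samples)} samples: "
          f"avg {sum(samples) / len(samples):.1f} W, peak {max(samples):.1f} W")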
This is a dilemma. We know that at full load RPL is far, far less efficient than Zen4, and especially Zen4 3D.

However, AMD's chiplet strategy isn't great at idle or low loads.

Now that AMD are so huge, they really should stop being stubborn and do a decent monolithic 16C part for desktop, without compromises like the huge reduction in cache they made with their previous APUs. A monolithic 16C/32T Zen4 part with 3D cache should be pretty much unbeatable.

It will be interesting to see how Intel's idle power fares once they go multi-chiplet with their tiles. There's no such thing as a free lunch, and going off-chip is always going to cost extra power, but maybe they have come up with something clever - after all, Meteor Lake is expected to be mobile-first.
 
Jufes/Framechasers exists because the mainstream tech reviewers don't try. He's carved out a niche of focusing on system tuning and then presenting it as a "CoD bro" to an edgy audience. In terms of the data itself, it's pretty good on the Intel side and OK on the AMD side.

One big flaw of his is disabling E-cores and enabling HT, which is the opposite of what you do on Intel for max gaming performance.

He's an idiot with no clue what he's talking about; everything I have seen from him so far is forehead-slappingly stupid, manufactured clickbait drama.

It's quite cancerous. That video had this thread whipped up in a frenzy over his utter BS.
 
This is a dilemma. We know that at full load RPL is far, far less efficient than Zen4, and especially Zen4 3D.

However, AMD's chiplet strategy isn't great at idle or low loads.

Now that AMD are so huge, they really should stop being stubborn and do a decent monolithic 16C part for desktop, without compromises like the huge reduction in cache they made with their previous APUs. A monolithic 16C/32T Zen4 part with 3D cache should be pretty much unbeatable.

It will be interesting to see how Intel's idle power fares once they go multi-chiplet with their tiles. There's no such thing as a free lunch, and going off-chip is always going to cost extra power, but maybe they have come up with something clever - after all, Meteor Lake is expected to be mobile-first.
You seriously think that AMD should go backwards. Jeez!
 
You seriously think that AMD should go backwards. Jeez!
Do you not think a monolithic chip with the same amount of cache and an on-die IMC would perform better?

Or do you think improving performance is going backward?
 
This is a dilemma. We know that at full load RPL is far, far less efficient than Zen4, and especially Zen4 3D.

However, AMD's chiplet strategy isn't great at idle or low loads.

Now that AMD are so huge, they really should stop being stubborn and do a decent monolithic 16C part for desktop, without compromises like the huge reduction in cache they made with their previous APUs. A monolithic 16C/32T Zen4 part with 3D cache should be pretty much unbeatable.

It will be interesting to see how Intel's idle power fares once they go multi-chiplet with their tiles. There's no such thing as a free lunch, and going off-chip is always going to cost extra power, but maybe they have come up with something clever - after all, Meteor Lake is expected to be mobile-first.

You're talking about 15 watts; what about the 80-watt difference vs Intel when it's actually being used?
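To put those two numbers side by side, a back-of-envelope sketch: the 15 W and 80 W deltas are the figures from this exchange, while the hours per day are assumptions you should swap for your own.

    # Back-of-envelope yearly energy difference. The deltas come from the
    # posts above; the duty-cycle hours are assumptions.
    idle_delta_w = 15   # Intel's advantage at idle / light load
    load_delta_w = 80   # AMD's advantage under full load
    idle_hours, load_hours = 6.0, 2.0   # assumed hours per day in each state

    net_wh_per_day = load_delta_w * load_hours - idle_delta_w * idle_hours
    print(f"Net AMD saving: {net_wh_per_day * 365 / 1000:.0f} kWh/year "
          f"(a negative number would favour Intel)")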

Let me tell you something about Intel and their monolithic CPUs: Intel are not making any money on them, nothing; they are giving them away at cost, and it's killing them. Intel's market value right now is $104 billion because they are not making any money. AMD are making money; their market value is $127 billion.
 
Weirdly, my conclusion is different to yours: if you want the best performance, get a 6000CL30 kit, because there is a difference (albeit not a massive one). I feel like if you're getting the best of the best (aka a 7950X3D and a 4090), you might as well shell out for the 6000CL30, which isn't even that expensive? I paid 170 for mine.
I somewhat agree, tbf. However, with the second CCD physically disabled in the BIOS, the benefits are reduced even further. My suggestion for most people would still be to buy the cheaper kit and tune it yourself.
 
Do you not think a monolithic chip with the same amount of cache and an on-die IMC would perform better?

Or do you think improving performance is going backward?
The other way to think about this is:
Are chiplets a good thing for the consumer, or for the manufacturer?
This is similar to asking "who gains the most from Intel's hybrid approach?"

In both cases the primary reason was that it's cheaper to manufacture: in AMD's case because back then they didn't have the budget, in Intel's case because the P cores are simply too big*.

And in both cases consumers do gain something: without chiplets AMD would probably not be back in the game, and for Intel some workloads do gain from the E-cores.

But back to my original question: I think AMD are now large enough to offer a big monolithic die. And there is little doubt that a 16C monolithic part with full cache and the IMC on-die would perform better and consume less power when idling.

* Yet, as @humbug says, Intel are - at least in the server market - giving away their CPUs despite the space saving of the E-cores (although to be strict: Intel's hybrid hasn't made it to servers yet).
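As a footnote to "cheaper to manufacture": yield falls steeply with die area, which is most of the cost argument for chiplets. A rough sketch using a simple Poisson defect model; the defect density and die areas are illustrative assumptions:

    # Rough Poisson yield model: P(zero defects) = exp(-area * D0).
    # D0 and the die areas are illustrative assumptions.
    from math import exp

    D0 = 0.002  # assumed defect density, defects per mm^2

    def yield_rate(area_mm2: float) -> float:
        return exp(-area_mm2 * D0)

    mono = yield_rate(280)   # hypothetical 16C monolithic die
    ccd = yield_rate(72)     # roughly CCD-sized 8C die
    print(f"280 mm^2 monolithic: {mono:.1%} good dies")
    print(f"72 mm^2 chiplet:     {ccd:.1%} good dies; "
          f"two good CCDs: {ccd ** 2:.1%}")

Even in this toy model the small dies come out ahead, and in practice the gap is bigger still, since the two good chiplets don't have to come from the same spot on the wafer and partially defective dies can be binned down to lower-core parts.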
 
The other way to think about this is:
Are chiplets a good thing for the consumer, or for the manufacturer?
This is similar to asking "who gains the most from Intel's hybrid approach?"

In both cases the primary reason was that it's cheaper to manufacture: in AMD's case because back then they didn't have the budget, in Intel's case because the P cores are simply too big*.

And in both cases consumers do gain something: without chiplets AMD would probably not be back in the game, and for Intel some workloads do gain from the E-cores.

But back to my original question: I think AMD are now large enough to offer a big monolithic die. And there is little doubt that a 16C monolithic part with full cache and the IMC on-die would perform better and consume less power when idling.

* Yet, as @humbug says, Intel are - at least in the server market - giving away their CPUs despite the space saving of the E-cores (although to be strict: Intel's hybrid hasn't made it to servers yet).

They are not making any money on any of their CPUs.
 
Do you not think a monolithic chip with the same amount of cache and an on-die IMC would perform better?

Or do you think improving performance is going backward?
Those monolithic dies are gone from design for many reasons.
They take time to design, and big dies mean costly wafers and high error rates.
They make no financial sense, and not everything in a chip needs to be reworked every generation.
Intel lost leadership a decade ago when they tried that 10nm fiasco with big cores.

AMD is crushing them in all areas with a small-die design.
It can scale, it's cheaper to make, and the die-critical areas are faster to design.

It's why Intel is slow, late and power-hungry.

It's why small-die designs are now in GPUs too, and next generation AMD will benefit massively, whereas Nvidia has to redesign things to make it work.

If you ran AMD as a company, it would be bankrupt by now. Just saying.

It's why I buy AMD: it's run by people who know things.
 