Poll: Ryzen 7950X3D, 7900X3D, 7800X3D

Will you be purchasing the 7800X3D on the 6th?


  • Total voters: 191 (poll closed)
I think you’re missing the point, beyond trying to use Gamers Nexus numbers to prove one (when in his conclusion he states Blender isn’t reflective of production apps). It’s simply an analysis. Someone purchasing the product would never run eco mode; even he doesn’t recommend it.

You’d have to constantly flip between eco and non-eco to optimise for all the use cases you may face. The only justifiable reasons to use it are if you’re constrained by a small form factor, so thermal considerations matter, or if you’re running it in, say, a dedicated server or workstation with high uptime where multi-threaded workloads are its predominant (if not only) job.

Let’s go back to my original statement… the 7950X is a step backwards for AMD, so why is that the case?

- The 3950X was more power efficient, cooler and had greater performance than the Threadripper 2950X.

- The 5950X was more power efficient, cooler and had greater performance than the 3950X.

A pattern emerges… Of course they’ll always hit a wall before a large architectural change, but that efficiency has been the big “boon” for AMD’s products.

Then we get the 7950X, a product which uses significantly more power, runs significantly hotter and is less power efficient.

How that can be anything less than a setback is beyond me, especially when Intel has been chastised (and rightly so) for pushing power to achieve generational performance increases in the same manner.

Is it a bad product? No. Is it a disappointment from a company that was showing Intel up by delivering power- and thermally-efficient CPUs with a sizeable performance gain? Absolutely.
I have a 7950X and think it’s a nice step forward. It beats a 5950X at 65W and at much lower temps. It gets 30K in Cinebench with a max temp of 45°C at 65W; at 105W it gets 35K at under 65°C. It also includes DDR5, an iGPU, PCIe 5.0, AVX-512, 20/40Gbps USB-C and more PCIe lanes. It’s not perfect, but it’s better than the last gen, so it’s not a step back. We won’t see big power drops again unless there is a big breakthrough in process tech, as most of the big performance-per-watt improvements come from die shrinks.
I agree that they jacked the power up at the last minute; they even said that since the competition had, they must too. But it’s easy to change, so for me it’s not a big deal. I also run mine at 65W, and I get the “why get a 16-core and run it at 65W?” comments, as if 16 cores at 4-4.5GHz under full load were a POS.
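For what it’s worth, here’s a rough points-per-watt comparison from those figures (a quick Python sketch; the wattages are the nominal eco-mode TDPs rather than measured package power, so treat it as illustrative only):

# Cinebench multi-core scores quoted above, divided by the nominal
# eco-mode TDP. Actual package power (PPT) is higher than TDP, so these
# are ballpark efficiency figures, not measurements.
modes = {"65W eco": (30_000, 65), "105W eco": (35_000, 105)}

for name, (score, tdp_w) in modes.items():
    print(f"{name}: ~{score / tdp_w:.0f} Cinebench points per nominal watt")

# Prints roughly 462 points/W at 65W and 333 points/W at 105W, which is
# why the 65W setting looks so good on efficiency despite the lower score.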
 
I almost agree. I think AMD are looking to the future. They went from a max of 142W with AM4 to 230W with AM5. That’s the difference here: the architecture is more power hungry.

They had already made a decision to allow for more power. They won’t get this right until one or two revisions from now.

For now we have significantly more performance.
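As a side note on where those figures come from: AMD’s socket power limit (PPT) on recent Ryzen desktop parts is roughly 1.35x the rated TDP, which is consistent with the 142W and 230W numbers above (a quick sketch; treat the multiplier as an assumption if your board vendor deviates from stock limits):

# PPT (package power tracking) is roughly 1.35x the rated TDP on recent
# Ryzen desktop CPUs, which matches the AM4 and AM5 maximums quoted above.
def ppt_from_tdp(tdp_w: float, multiplier: float = 1.35) -> int:
    return round(tdp_w * multiplier)

print(ppt_from_tdp(105))  # 142W -> top AM4 parts such as the 5950X
print(ppt_from_tdp(170))  # 230W -> top AM5 parts such as the 7950X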
 
I think you’re missing the point, beyond trying to use Gamers Nexus numbers to prove one (when in his conclusion he states Blender isn’t reflective of production apps). It’s simply an analysis. Someone purchasing the product would never run eco mode; even he doesn’t recommend it.

You’d have to constantly flip between eco and non-eco to optimise for all the use cases you may face. The only justifiable reasons to use it are if you’re constrained by a small form factor, so thermal considerations matter, or if you’re running it in, say, a dedicated server or workstation with high uptime where multi-threaded workloads are its predominant (if not only) job.

Let’s go back to my original statement… the 7950X is a step backwards for AMD, so why is that the case?

- The 3950X was more power efficient, cooler and had greater performance than the Threadripper 2950X.

- The 5950X was more power efficient, cooler and had greater performance than the 3950X.

A pattern emerges… Of course they’ll always hit a wall before a large architectural change, but that efficiency has been the big “boon” for AMD’s products.

Then we get the 7950X, a product which uses significantly more power, runs significantly hotter and is less power efficient.

How that can be anything less than a setback is beyond me, especially when Intel has been chastised (and rightly so) for pushing power to achieve generational performance increases in the same manner.

Is it a bad product? No. Is it a disappointment from a company that was showing Intel up by delivering power- and thermally-efficient CPUs with a sizeable performance gain? Absolutely.

I don't know where he says "Blender isn’t reflective of production apps". I do know he says that about Cinebench, but he also says that's why he uses Blender, so are you sure about that? Cinebench is actually a performance-verification tool for Maxon Cinema 4D, a path-traced artistry application, among other things; Blender is a similar thing. His own GN logo, the animated one in his videos, is made using Blender, and all of the products he sells are designed using Blender, so if he said Blender was not representative of a production application, that would be asinine.

Now, you don't have to flip between power modes: set it to 105W Eco Mode in the BIOS and forget about it.
 
I don't know where he says "Blender isn’t reflective of production apps". I do know he says that about Cinebench, but he also says that's why he uses Blender, so are you sure about that? Cinebench is actually a performance-verification tool for Maxon Cinema 4D, a path-traced artistry application, among other things; Blender is a similar thing. His own GN logo, the animated one in his videos, is made using Blender, and all of the products he sells are designed using Blender, so if he said Blender was not representative of a production application, that would be asinine.

Now, you don't have to flip between power modes: set it to 105W Eco Mode in the BIOS and forget about it.

Go to the conclusion. He talks about Blender and how it’s not typical of normal workloads; it’s rare to have production apps that just max out all cores and threads to do their job. He gives some examples of why the Blender score is not indicative of typical results: it’s a great benchmark, but beyond the likes of a render workload it should never be used as a sole indicator of performance.

Hence why I said earlier that you really only maximise the benefit if you have a very niche usage: either constrained by thermals due to, say, a small form factor, or a dedicated server or workstation with high uptime for render workloads.
 
I don't see the problem here.

I didn't go from a 3600 to a 5800X and think "oh no, this is really bad, it uses 60% more power".

It's also a much better CPU, as is Zen 4. The first thing I thought on testing it was "damn, this thing is quick".

Go to the conclusion. He talks about Blender and how it’s not typical of normal workloads; it’s rare to have production apps that just max out all cores and threads to do their job. He gives some examples of why the Blender score is not indicative of typical results: it’s a great benchmark, but beyond the likes of a render workload it should never be used as a sole indicator of performance.

Hence why I said earlier that you really only maximise the benefit if you have a very niche usage: either constrained by thermals due to, say, a small form factor, or a dedicated server or workstation with high uptime for render workloads.

Then he should stick to the day job, because the reality is that almost anything you do production-wise is core heavy. I mean, he says the same thing about Cinebench, thinking it's just a benchmarking application; he has no clue.
He probably thinks the world revolves around Adobe software, like many people with a hobby or maybe even a small business who think that's what everyone uses.
I have Photoshop, have had it since 2005, and it's identical now in 2023 to what it was then, designed for Pentium 4 CPUs. Adobe is junk. I'll tell you what I use it for: an Nvidia plug-in that allows me to convert images into CryTiff format, and for creating masks in foliage textures. That's it, the only two things it's good for, and if I wasn't so lazy I have no doubt I could learn to use other applications that are far, far better for even those very simple jobs.
 
I don't see the problem here.

I didn't go from a 3600 to a 5800X and think "oh no, this is really bad, it uses 60% more power".

It's also a much better CPU, as is Zen 4. The first thing I thought on testing it was "damn, this thing is quick".



Then he should stick to the day job, because the reality is that almost anything you do production-wise is core heavy. I mean, he says the same thing about Cinebench, thinking it's just a benchmarking application; he has no clue.
He probably thinks the world revolves around Adobe software, like many people with a hobby or maybe even a small business who think that's what everyone uses.
I have Photoshop, have had it since 2005, and it's identical now in 2023 to what it was then, designed for Pentium 4 CPUs. Adobe is junk. I'll tell you what I use it for: an Nvidia plug-in that allows me to convert images into CryTiff format, and for creating masks in foliage textures. That's it, the only two things it's good for, and if I wasn't so lazy I have no doubt I could learn to use other applications that are far, far better for even those very simple jobs.

As I’ve said multiple times, it’s not a bad product; it’s simply a step back. Performance alone isn’t the only factor, otherwise we’d all have thought the 3090 was god mode.

Also, thinking that ratcheting up the power is “OK” is really not the approach we should take as consumers, just like when Intel started doing it. It leads to less pressure for architectural innovation in future, and it increases prices long term as yields suffer due to the increased power and thermal requirements. Why worry about pushing the limits of innovation when, at the last minute or late in your development cycle, you can just make the “CPU go brrrrrr”?

Personally I judge my purchases on “is it better than before?”, but it doesn’t stop me buying them. I purchased a 3090 (which, from a product perspective, was also a step back) and have a 4090… but I do look objectively at any tech, otherwise we just end up defending it out of bias when the facts suggest otherwise.

Production app workloads are core heavy, but they don’t purely max out all threads and cores to do their job (which is what Blender does; it’s just simulating a maxed-out render). Steve gives some examples of this in the conclusion.
 
Have read some rumours that the 7800X3D will be able to achieve higher clocks on its 3D-stacked cores (all 8 of them) vs the 8 on the 7950X3D, due to the issues around heat and dissipation from the non-X3D CCD on the latter CPU.

A reason for AMD delaying the launch of the 8-core part may be that the eight-core part outperforms the 16-core in some (mostly gaming) scenarios?
 
Have read some rumours that the 7800X3D will be able to achieve higher clocks on its 3D-stacked cores (all 8 of them) vs the 8 on the 7950X3D, due to the issues around heat and dissipation from the non-X3D CCD on the latter CPU.

A reason for AMD delaying the launch of the 8-core part may be that the eight-core part outperforms the 16-core in some (mostly gaming) scenarios?
Just rumours at the moment, and one of the articles that said that also used the leaked benchmarks as proof, which is meaningless.
 
The 7950X3D runs at lower clocks under load than the 7800X3D.

Also released: here is the algorithm for how the 7950X3D and 7900X3D decide which CCD to allocate a thread to. It seems you can also modify the algorithm yourself in the BIOS, and you can even enable it for non-X3D CPUs.

Update to the latest AGESA BIOS, and then in the BIOS you can find all the settings under "X3D Core Flex Gaming Preset".

You can even disable it if you want, which will be interesting for testing, to see how much the algorithm improves gaming performance over the standard Windows scheduler for dual-CCD X3D CPUs.
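To illustrate the idea (this is not AMD's actual driver code, just a minimal sketch of the decision it's described as making, with hypothetical names): the preset essentially steers game threads onto the cache CCD and other heavy workloads onto the higher-clocked CCD.

# Minimal illustrative sketch of the dual-CCD steering described above:
# games are preferred on the V-Cache CCD, everything else on the frequency
# CCD. The real logic lives in AMD's chipset driver / BIOS preset; this
# function and the game list are hypothetical.
def preferred_ccd(process_name: str, known_games: set) -> str:
    if process_name.lower() in known_games:
        return "cache CCD (3D V-Cache, lower clocks)"
    return "frequency CCD (no stacked cache, higher clocks)"

games = {"cyberpunk2077.exe", "factorio.exe"}
print(preferred_ccd("Cyberpunk2077.exe", games))  # -> cache CCD
print(preferred_ccd("blender.exe", games))        # -> frequency CCD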


 
Definitely waiting for the 7800X3D reviews before buying anything, so glad I have the 7700X to tide me over!

Only care about gaming, so if they are holding it back to try and sell the 7950X3D to gamers… happy to wait!

Also gives a little extra time for the BIOS and drivers to be debugged!
 
I don’t believe the leak that the 7950X3D would have lower frequencies and be worse in gaming than the 7800X3D, but we won’t know until April. AMD already released info on the base and boost clocks for the new CPUs anyway, unless this has changed.
 
I don’t believe the leak that the 7950X3D would have lower frequencies and be worse in gaming than the 7800X3D, but we won’t know until April. AMD already released info on the base and boost clocks for the new CPUs anyway, unless this has changed.
I do, because I don't think I'm going to be able to wait until April.
Planning to build a whole system, so I don't want to buy everything except the CPU in case something needs to be sent back, but I also don't want to buy a temporary CPU, as it seems a waste of money and I imagine it could be tricky to sell on afterwards, since a lot of people will be trying to shift the cheaper CPUs (which won't be a drop-in upgrade for anyone, as they'll need a new motherboard, so less demand?).
 
I don’t believe the leak that the 7950X3D would have lower frequencies and be worse in gaming than the 7800X3D, but we won’t know until April. AMD already released info on the base and boost clocks for the new CPUs anyway, unless this has changed.

They haven't shown the boost clocks of the cache CCD on the 7950X3D or 7900X3D.

The boost clocks advertised for these are for the non-cached CCD.
 
I don’t believe the leak that the 7950X3D would have lower frequencies and be worse in gaming than the 7800X3D, but we won’t know until April. AMD already released info on the base and boost clocks for the new CPUs anyway, unless this has changed.
We should know next week with the 7800X3D, and that one is 5.0GHz. The odd thing is that the info so far says they all boost to 5.0GHz, which is why this rumour is extremely sus. We’ll know the answer soon.
 
Have read some rumours that the 7800X3D will be able to achieve higher clocks on its 3D-stacked cores (all 8 of them) vs the 8 on the 7950X3D, due to the issues around heat and dissipation from the non-X3D CCD on the latter CPU.

A reason for AMD delaying the launch of the 8-core part may be that the eight-core part outperforms the 16-core in some (mostly gaming) scenarios?
Ha ha. That would be a marketing nightmare for them. The only early adopters for these CPUs (let’s remember that for most people not on 1080p it makes little difference in gaming) are AMD stalwarts; I’d probably be counted as one of them. My last CPUs have been a 1700X, 3700X, 5800X and now a 5800X3D. My daughter had a 3600 and at Christmas I upgraded her to a 5600X.
 
We should know next week with the 7800X3D, and that one is 5.0GHz. The odd thing is that the info so far says they all boost to 5.0GHz, which is why this rumour is extremely sus. We’ll know the answer soon.

Where is the information to confirm the cache CCD boost clocks are 5GHz on the 7950X3D and 7900X3D?

There's no information about it on AMD's official site; the 5.7GHz boost is for the non-cached CCD.
 
Where is the information to confirm the cache CCD boost clocks are 5GHz on the 7950X3D and 7900X3D?

There's no information about it on AMD's official site; the 5.7GHz boost is for the non-cached CCD.
It’s not confirmed. We’ll have to wait until next week for that. This was the general consensus on the AMD Reddit channel about what the clocks would be. However I’m not sure people know yet (aside from the reviewers using the chip right now).
 