• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

AMD Zen 2 (Ryzen 3000) - *** NO COMPETITOR HINTING ***

Soldato · Joined 11 Jun 2003 · Posts 5,075 · Sheffield, UK
Until a GPU hits the PCIe 3 limit it makes no difference.

but... when reviewing, one has to use the best available hardware. That's AMD now? :D

If this is the case how do first gen boards fit in the info for the third gen cpus?

That was the point, some of the first gen boards are not going to be supporting the full range.

[Attached image: gUnrgt7.jpg]
 
Associate · Joined 11 Dec 2016 · Posts 2,018 · Oxford
New video on Gamers Nexus - the takeaway point for me is that the chipset runs constantly at 11W (peaking at 14-15W) and doesn't downclock.
For me this puts the X570 in whatever category is the opposite of premium and desirable.
Why would I pay more for a board that consumes triple the power and doesn't downclock?
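Out of curiosity, that 11W constant draw can be put in yearly-energy terms. A quick sketch - the 24/7 uptime and the 5W figure for an older-generation chipset are my own assumptions for comparison, not from the video:

```python
# Rough annual energy of a chipset that never downclocks.
# Assumptions (mine, not from the thread): machine on 24/7,
# and an older chipset drawing ~5 W for comparison.
HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts, hours=HOURS_PER_YEAR):
    """Convert a constant power draw in watts to kWh per year."""
    return watts * hours / 1000

x570_kwh = annual_kwh(11)  # ~11 W constant, per the Gamers Nexus video
old_kwh = annual_kwh(5)    # assumed older ~5 W chipset

print(round(x570_kwh, 1))            # ~96.4 kWh/year
print(round(x570_kwh - old_kwh, 1))  # ~52.6 kWh/year extra
```

So even "triple the power" for the chipset is on the order of tens of kWh a year - noticeable on the spec sheet, small next to the CPU and GPU.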
 
Soldato · Joined 11 Jun 2003 · Posts 5,075 · Sheffield, UK
For me this puts the X570 in whatever category is the opposite of premium and desirable.
Why would I pay more for a board that consumes triple the power and doesn't downclock?

PCIe 4 and bigger power delivery.
You know... bandwidth and overclocking, only two key bits of most people's performance wishlists...

Oh no, the 11W constant draw. The horror! :)
That's already pretty average at worst, as power draw goes.

If it puts you off... there are alternatives :)
 
Soldato · Joined 18 Oct 2002 · Posts 14,114 · West Midlands
For me this puts the X570 in whatever category is the opposite of premium and desirable.
Why would I pay more for a board that consumes triple the power and doesn't downclock?

An interesting concept - why would anyone pay for features they cannot currently get anywhere else?

It reminds me of Apple: every year they used to brag about how their phones got thinner and thinner, until they didn't; then the phones got fatter, but they neglected to mention that. Was the fatter phone better or worse than the previous one? Sometimes an irrelevant feature or drawback is literally that: irrelevant.
 
Joined 2 Jan 2019 · Posts 617
I'd say air will struggle at that, and you'd need a decent loop.

The 3800X with its 105W TDP has a 3.9GHz base clock with potential for 1-2 cores at 4.5GHz. There will be some silicon binning, but at best, if you double the cores to a 3950X, I'd say you'll need to dissipate ~200W at 4.0-4.1GHz, and that's where air starts to run out of headroom unless you add a couple of Deltas. I'd even speculate the limit on water will be ~4.5GHz, perhaps a little more if we get blocks that feed the water in the right place; current blocks assume a central core.

My plan is to go full water and then keep the CPU nice and cool so PBO can do its thing on a 3900X. I think in most general applications a few cores at high turbo plus some cores with lower boost will be decent enough, and I'm not sure I need an extra few %.

The 3950X will be great for encoding etc.; even at its 3.5GHz base it's like having a 7GHz 3800X. But frequency is a killer: the closer to 5GHz we get, the more power we need and the better the cooling has to be, hence these crazy VRMs that have been spec'd for the X570 platform.
Can't say that I agree with this.
There was a leak at Computex that showed a 3950X at 4.25GHz all-core, with each core consuming 5.5W at 100% load. Sure, there's power required for the IF and IO die, but it certainly isn't anywhere near the numbers that you suggest.
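For what it's worth, the arithmetic on that leaked figure is easy to check. The 20W allowance for the IO die and Infinity Fabric below is my own guess for illustration, not a leaked number:

```python
# Sanity-check the leaked figure: 16 cores at 5.5 W each at 4.25 GHz,
# plus an assumed (not leaked) allowance for the IO die / Infinity Fabric.
cores = 16
per_core_w = 5.5
io_die_w = 20.0  # assumption for illustration only

core_total_w = cores * per_core_w          # 88.0 W for the cores alone
package_total_w = core_total_w + io_die_w  # 108.0 W for the whole package
print(core_total_w, package_total_w)       # 88.0 108.0
```

If the 5.5W/core figure is genuine, the whole package would land close to the 105W TDP; the dispute further down the thread is whether 5.5W/core is plausible at 4.25GHz in the first place.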
 
Soldato · Joined 16 May 2007 · Posts 3,220
Can't say that I agree with this.
There was a leak at Computex that showed a 3950X at 4.25GHz all-core, with each core consuming 5.5W at 100% load. Sure, there's power required for the IF and IO die, but it certainly isn't anywhere near the numbers that you suggest.

It'll be interesting to see the real-life reviews and people's experiences with this. For me, the point where temps and voltage need to rise rapidly for little gain is where I stop.
 
Permabanned · Joined 2 Sep 2017 · Posts 10,490
I couldn't care less what the chipset uses in power. Jesus, you lot - think about all the silly 5GHz clocks on multi-core CPUs out there with huge volts squeezed through them.
Man up and start cooling stuff better.

This chipset is the same chip as the IO die on the processors themselves, and is effectively massive overkill for 99% of the users out there.
I don't know why AMD didn't design something else for the chipset.

Because of it, the mobos have become so expensive.

I don't understand why two 75 sq. mm dies plus one a little bit larger should be sold for $750.
It boggles my mind how small and cheap these chiplets are.
 
Associate · Joined 9 Jan 2019 · Posts 885
What? Why don't you ask your paymasters, next time you pick up the cheque for trolling from them, why they fancy charging 2k plus for their big consumer CPUs, or why for years they charged absolute shedloads of money for low-% increments on quad-core crap?

Maybe it's because of the R&D involved; maybe it's because that's a high-margin product at the top of the stack that helps pull up the lower-margin ones further down. Maybe you should look at the bottom of the stack: it boggles my mind that AMD can charge so little for CPU performance that, just a dozen or so months back, was costing much, much more.
 
Permabanned · Joined 2 Sep 2017 · Posts 10,490
They, along with Intel, are in large debt to us.
Intel brought a miserable 42% improvement going from the 2600K to the 7700K (that's 2010 through 2016, remember), and only in 2017 did you get a six-core mainstream chip for the first time.
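As an aside, spreading that 42% over the span works out to a pretty small yearly gain. A rough sketch - treating it as six yearly generational steps is my own simplification:

```python
# 42% total improvement compounded over six yearly steps (2010-2016):
# compound annual growth rate = (1 + total) ** (1 / years) - 1.
total_gain = 0.42
years = 6
annual = (1 + total_gain) ** (1 / years) - 1
print(round(annual * 100, 1))  # ~6.0 (% per year)
```

Roughly 6% a year compounded - which matches the "low-% increments" complaint a couple of posts up.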
 
Permabanned · Joined 15 May 2006 · Posts 4,107 · London
[Attached image: 2fboyqt3ri431.jpg]


https://www.computerbase.de/2019-06/amd-mainboard-x590-chipsatz-sockel-am4/
 
Soldato · Joined 24 Jan 2006 · Posts 2,541
Can't say that I agree with this.
There was a leak at Computex that showed a 3950X at 4.25GHz all-core, with each core consuming 5.5W at 100% load. Sure, there's power required for the IF and IO die, but it certainly isn't anywhere near the numbers that you suggest.

I'd love to see that kind of power usage in reality; however, I seriously doubt it based on the last 20 years of CPU and GPU progression.

See here: a 4.2GHz 2700X is around 200W at 1.4V, and they had to switch to a much more powerful cooler to stop it throttling.
https://www.gamersnexus.net/guides/3290-exponential-ryzen-voltage-frequency-curve

The uncore die is a distraction; the PCIe and memory buses were never overclocked with the core frequency - they were just inconveniently on the same silicon. Now they have been centralised in the same 12nm process used for Zen+, so the main power saving is in the die shrink of the cores.

Intel have made little change in power usage from 32nm to 14nm+++++++: a 2600K @ 4.7GHz is around 100W, and a 9900K at 5GHz with twice the cores is around 200W, so per-core consumption is fairly static. You can argue there are small improvements versus IPC, but no significant step change, so the evidence suggests that smaller processes do not provide dramatic power savings at high frequency.

Smaller processes increase the number of dies per wafer and help costs, while allowing some improvement in a combination of power usage and/or frequency. AMD have held TDP static and made modest increases in frequency across the 6- and 8-core SKUs, with the higher-core SKUs taking a hit in base clocks.

What you suggest is that AMD now have double the cores of a 2700X at 7nm, running at the same ~4.2GHz frequency on 25% of the 12nm per-core power budget.
I think I'm being very generous at 200W, which is 50% of the power budget and way over the 3.9GHz all-core frequency AMD spec for the 3800X @ 105W TDP.

I can absolutely believe that a 16-core 3950X at its base frequency of 3.5GHz can run at 5.5W per core, but power usage rises steeply with frequency once voltage has to go up, so at 4.2GHz it will need decent cooling and will dissipate way more than 105W. At 4.2GHz - no chance of 5.5W per core.
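That frequency/power argument is usually modelled with the standard dynamic-power relation, P ≈ C·V²·f: linear in frequency, quadratic in voltage. A sketch - the 1.0V and 1.3V operating points are my own illustrative guesses, not leaked figures:

```python
def scaled_power(base_w, f0, v0, f1, v1):
    """Scale dynamic power (P ~ C * V^2 * f) from one operating point to another."""
    return base_w * (f1 / f0) * (v1 / v0) ** 2

# Assumed operating points for illustration only:
# 5.5 W/core at 3.5 GHz / 1.0 V, pushed to 4.2 GHz needing 1.3 V.
per_core_w = scaled_power(5.5, 3.5, 1.0, 4.2, 1.3)
print(round(per_core_w, 1))       # ~11.2 W per core
print(round(per_core_w * 16, 0))  # ~178 W for 16 cores, before the IO die
```

Under those assumed voltages, 5.5W/core at base frequency balloons to roughly double per core at 4.2GHz - consistent with the ~200W ballpark above, and with the observation that voltage, not frequency alone, is what blows the budget.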
 
Joined 2 Jan 2019 · Posts 617
The 2700X was on a low-power-optimised node at GF; using it as the basis for your 3950X views is flawed. I'm pretty sure the 3950X leak that I referred to was posted in this thread: an image showing precisely what I've said. Maybe the screenshot I saw posted was fake, but then AMD did go and announce the 3950X just two weeks later...
 
Soldato · Joined 18 Oct 2002 · Posts 14,114 · West Midlands
I can absolutely believe that a 16-core 3950X at its base frequency of 3.5GHz can run at 5.5W per core, but power usage rises steeply with frequency once voltage has to go up, so at 4.2GHz it will need decent cooling and will dissipate way more than 105W. At 4.2GHz - no chance of 5.5W per core.

I think you must be confused, since with the right CPU you could effectively run it at the same voltage from, let's say, 3.5GHz to 4.5GHz, if the silicon was good enough and the process supported those frequencies; then the power increase would only be linear with the frequency, as you have not increased the voltage coming in. Obviously, if you have to start putting 1.4V in instead of 1.1V, then we are talking silly power usage, but no one here knows how well the final silicon responds to voltage/frequency increases, or indeed what voltage the CPUs even ship at.
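The linear-versus-quadratic point can be shown with the usual dynamic-power relation (P ≈ C·V²·f); the 100W baseline and the voltage figures below are placeholders of mine, not measurements:

```python
def power_at(base_w, f0, f1, v0, v1):
    # Dynamic power ~ f * V^2: linear in frequency, quadratic in voltage.
    return base_w * (f1 / f0) * (v1 / v0) ** 2

# Same 1.1 V held from 3.5 to 4.5 GHz: only the linear frequency term applies.
same_voltage_w = power_at(100, 3.5, 4.5, 1.1, 1.1)    # ~128.6 W (+29%)
# Voltage also pushed from 1.1 V to 1.4 V: the quadratic term dominates.
raised_voltage_w = power_at(100, 3.5, 4.5, 1.1, 1.4)  # ~208.3 W (more than double)
print(round(same_voltage_w, 1), round(raised_voltage_w, 1))
```

So both posters can be right: the overclock itself is cheap in power terms if the silicon holds the voltage, and ruinous if it doesn't.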
 
Soldato · Joined 24 Jan 2006 · Posts 2,541
I think you must be confused, since with the right CPU you could effectively run it at the same voltage from, let's say, 3.5GHz to 4.5GHz, if the silicon was good enough and the process supported those frequencies; then the power increase would only be linear with the frequency, as you have not increased the voltage coming in. Obviously, if you have to start putting 1.4V in instead of 1.1V, then we are talking silly power usage, but no one here knows how well the final silicon responds to voltage/frequency increases, or indeed what voltage the CPUs even ship at.

I'm not confused: 4.1GHz @ 1.428V suggests they stoked the fire.
 