14th Gen "Raptor Lake Refresh"

I dunno - I think the CPUs meet Intel's spec but have too little overhead to be stable with the motherboard settings, especially on some of the boards that are a bit more slapdash about how they implement their 'enhancements'.

Interestingly, on my 14700K the AI Assist actually recommends bumping the power limit substantially :o

The owner of the PC store I use told me that if a CPU is unstable with the motherboard's default profile, they can get the CPU RMA'd.
 
The information on this whole situation is a mess - I'm seeing some reports of it on Gigabyte boards etc., but overwhelmingly the issue is happening on Asus boards, and I don't think that's down to market-share differences or actually a fault with Intel CPUs per se. Something similar happened with AMD's 7000 series a year ago, again mostly on Asus, especially the X3D chips which were burning up.
 
You can't put this much power into these chips. Adding more voltage is certainly not the right thing to do, unless you're trying to start a fire. :p

Better cooling is just sidestepping the matter.
 
If for no other reason than so that Intel and motherboard manufacturers take this seriously.

Although RMA can mean weeks without a PC!

I'm not sure Intel is to blame here despite the accusations from some, though to be fair I've only skimmed through the longer content on it.
 
Well, the main focus so far is mostly Asus. And Asus have a decades-long history of default overclocking just to win benchmarks (anything from running a 100MHz FSB at 100.5MHz or so, to applying all the Turbo enhancements by default, etc.).

However, as the benchmarks show (I still think Hardwareluxx.de are the only ones to have reviewed this so far), at saner stock settings Intel will lose a good few benchmarks they currently win. And Intel know this. That's why I don't buy the 'Intel are blameless' narrative here.

I wonder if Intel have sent out complete review samples for the 14900K / 14900KS, and if they have, what motherboard did they supply? Intel PR could have supplied an auto-overclocking motherboard by mistake, but a huge corporation like Intel making exactly the kind of mistake which wins them even better reviews? A bit unlikely IMO.

(The actual good news - that at saner stock settings Intel's CPUs aren't as inefficient as most reviews make out - gets buried.)
 

Interesting.

Intel have software they give to motherboard vendors to tune the CPU to get more out of it.

The thing about that is CPUs have a higher voltage curve by default to allow for degradation over time: if they offer a 3-year warranty, they are guaranteeing the CPU has enough voltage headroom for 3 years of degradation. It's why you can undervolt CPUs and GPUs.

Intel are allowing motherboard vendors to take that headroom away so they can decrease power consumption and/or increase performance, effectively allowing motherboard vendors to sell you a board that undervolts the CPU out of the box. Inevitably, when they're messing about with the CPU like that, pushing it close to the line of stability, or beyond it, even in its brand-new state, after a few months of use it may have degraded enough to push it over the line, if it wasn't already over it.
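To put some numbers on that idea, here's a minimal sketch of how an aging guardband works. The voltages and degradation rate are entirely made-up illustrative values, not Intel's actual figures:

```python
# Illustrative only: invented numbers, not Intel's real voltage curves.
STOCK_GUARDBAND = 0.050        # volts of extra margin baked in for aging (assumed)
DEGRADATION_PER_MONTH = 0.002  # volts/month rise in minimum stable voltage (assumed)

def is_stable(v_supplied: float, v_min_required: float) -> bool:
    """A CPU stays stable while the supplied voltage covers its minimum requirement."""
    return v_supplied >= v_min_required

v_min_new = 1.200                      # minimum stable voltage when brand new
v_stock = v_min_new + STOCK_GUARDBAND  # what the default voltage curve supplies
v_undervolted = v_min_new + 0.005      # board "optimisation" eats most of the margin

for month in range(0, 37, 6):
    v_min = v_min_new + DEGRADATION_PER_MONTH * month
    stock = "OK" if is_stable(v_stock, v_min) else "FAIL"
    uv = "OK" if is_stable(v_undervolted, v_min) else "FAIL"
    print(f"month {month:2d}: stock {stock:4s} | undervolted {uv}")
```

On these invented numbers the stock curve stays stable for about two years, while the 'optimised' board goes unstable within months, which is exactly the failure pattern being described.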

Is it just me or does anyone else find the 'inventive' ways Intel come up with to remain competitive increasingly desperate and stupid?
 
Those numbers are just observations.

The load line is not created.

When voltage goes up, current tends to go up, all else being equal.

Electrons have a tendency to degrade when you increase their power levels.

It’s a product of electromagnetism, which in turn is also called heat.

There's a particle-wave duality.

You need to turn the power down.
 
When voltage goes up, current tends to go up, all else being equal.
You need to turn the power down.

I'm not really clued into the context with LLC here, but this is the crux of it really, and where impedance comes into play: at a given power requirement, if the voltage drops, the current goes up, which can destroy components rated for higher voltages but without sufficient current-carrying capacity.
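As a quick sanity check on that relationship, here's the arithmetic for a fixed power draw (the 250W figure is just an assumed round number, not a measurement from any particular board):

```python
# At a fixed power draw, current scales inversely with voltage: I = P / V.
power_w = 250.0  # assumed package power in watts

for vcore in (1.30, 1.20, 1.10):
    amps = power_w / vcore
    print(f"{power_w:.0f} W at {vcore:.2f} V -> {amps:.0f} A")

# 250 W at 1.30 V -> 192 A
# 250 W at 1.20 V -> 208 A
# 250 W at 1.10 V -> 227 A
```

The same power at a drooping voltage pushes noticeably more current through the VRM and the package power rails.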
 
Heat is what degrades a CPU: the hotter it gets, the more the molecules in the circuit move, and with that the more resistance you get in the circuit; in turn, the more voltage you need, which in turn pushes the heat up.

That resistance is the copper breaking down: the moving molecules get in the path of the electrons and are destroyed, causing the material to break down, and the more it breaks down the more resistance it develops. That's the degradation.

The perfect conductor, a superconductor, has zero resistance: when you cool the chip to absolute zero (-273°C) the molecules stop moving entirely, and you have a superconductor. The holy grail is a room-temperature superconductor; the first person to figure that out will be set for life, a very wealthy one.

When you have a relatively small piece of silicon, like a ~200mm² 14900K, pulling 250 watts, it's very difficult to cool. 80 to 90°C is a very high temperature; despite it being "rated", it's not good, not at all. If you want a CPU or GPU to last for many years without degrading much, you want to keep it at around 60°C, and no more than 70°C.
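For what it's worth, the resistance-versus-temperature part of this is easy to put numbers on. Copper's resistivity rises roughly linearly with temperature, with a coefficient of about 0.393% per °C, so as a rough sketch:

```python
# Copper resistance vs temperature: R(T) ≈ R20 * (1 + alpha * (T - 20)),
# a linear approximation that holds well near room temperature.
ALPHA_CU = 0.00393  # per °C, temperature coefficient of copper resistivity

def r_copper(r20: float, temp_c: float) -> float:
    """Resistance at temp_c, given the resistance r20 measured at 20 °C."""
    return r20 * (1 + ALPHA_CU * (temp_c - 20))

for t in (20, 60, 90):
    print(f"{t:3d} °C -> {r_copper(1.0, t):.3f}x the 20 °C resistance")

# 20 °C -> 1.000x
# 60 °C -> 1.157x
# 90 °C -> 1.275x
```

Note this rise is the metal's reversible behaviour at temperature, which is separate from any permanent damage.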
 
@Rroff Very good point. It’s the same with sound. Depends on positive and negative etc. I don’t like going into minute details because you lose context. The minute details are not clearly understood, so they change every week.

The load line is just the observed gradient. You shouldn't change that, or you can affect everything else.
 
Heat is what degrades a CPU: the hotter it gets, the more the molecules in the circuit move, and with that the more resistance you get in the circuit; in turn, the more voltage you need, which in turn pushes the heat up.

That resistance is the copper breaking down: the moving molecules get in the path of the electrons and are destroyed, causing the material to break down, and the more it breaks down the more resistance it develops. That's the degradation.

The perfect conductor, a superconductor, has zero resistance: when you cool the chip to absolute zero (-273°C) the molecules stop moving entirely, and you have a superconductor. The holy grail is a room-temperature superconductor; the first person to figure that out will be set for life, a very wealthy one.

When you have a relatively small piece of silicon, like a ~200mm² 14900K, pulling 250 watts, it's very difficult to cool. 80 to 90°C is a very high temperature; despite it being "rated", it's not good, not at all. If you want a CPU or GPU to last for many years without degrading much, you want to keep it at around 60°C, and no more than 70°C.

Normally I'm happy to observe here even when conjecture and pseudoscience are in full swing, but this is quite a big goof.

1. Anything that pulls more than the specification current through the die accelerates degradation.
2. Current is the biggest factor in degradation. However, the relationship between voltage, frequency and current needs to be understood.
3. ASUS HQ managed to degrade a 5900X in a day running Prime95 with AVX instructions. This is because overshoot is harmful and more prevalent when the CPU is being fed significant current, as in these types of workloads, and the short-term transients increase depending on the VRM and the applied loadline setting.

In short, simply running a CPU at a higher temperature, such as the maximum the vendor white paper stipulates, isn't an issue, whilst feeding it 2x to 3x the rated TDP is. Whilst it's fair to say temperature and thermal stress speed up the process of electromigration, it's not likely to impact the usable lifespan unless it's the direct result of high-current workloads.
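For anyone who wants the standard first-order model behind points 1 and 2, Black's equation for electromigration lifetime puts current density in a power law and temperature in an exponential. A small sketch, using typical textbook parameter values (n ≈ 2, Ea ≈ 0.8 eV) rather than figures for any specific CPU:

```python
import math

# Black's equation: MTTF = A * J**(-n) * exp(Ea / (k * T))
# A: process constant (cancels out in ratios), J: current density, T: kelvin.
K_EV = 8.617e-5  # Boltzmann constant, eV/K
N = 2.0          # current-density exponent (typical assumption)
EA = 0.8         # activation energy in eV (typical assumption)

def relative_mttf(j_rel: float, temp_c: float,
                  j_ref: float = 1.0, temp_ref_c: float = 60.0) -> float:
    """Lifetime relative to a reference current density and temperature."""
    t, t_ref = temp_c + 273.15, temp_ref_c + 273.15
    return (j_rel / j_ref) ** (-N) * math.exp(EA / K_EV * (1 / t - 1 / t_ref))

print(f"same current, 90 °C vs 60 °C: {relative_mttf(1.0, 90):.3f}x lifetime")
print(f"2x current at 60 °C:          {relative_mttf(2.0, 60):.3f}x lifetime")
print(f"2x current at 90 °C:          {relative_mttf(2.0, 90):.3f}x lifetime")
```

On these assumed parameters both current density and temperature shorten the modelled lifetime steeply, and they compound, which is the worst case a hot board feeding 2x-3x TDP creates.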
 
You're saying what I wrote is a big goof, but nothing you went on to explain disagrees with me.

Did you copy and paste this from somewhere?
 
That depends - did you read yours on a cereal packet lol?

Both are needed to rapidly accelerate the process. Your statement is wholly incorrect: it starts out by saying temperature alone degrades a CPU faster, which simply isn't true.

High current, as a result of voltage and frequency, results in a quicker flow of electrons, which is what can lead to component breakdown, with the resulting temperature accelerating the process. In the absence of high current all you're left with, ironically, is a stock AMD CPU, which by all accounts will happily run at 80-90°C out of the box with a reasonably low current draw. Unless you're also implying that a stock Ryzen CPU will degrade quickly?
 
It's disingenuous to say that. I didn't say heat was the only thing that degrades a CPU, but it does matter, and it is the primary thing you can control to help the longevity of the CPU/GPU. I explained the mechanics of it: why, as an example, extreme cooling allows a CPU/GPU to operate under conditions that would fry it instantly at room temperature.

I mean, as an enthusiast one instinctively tries to keep one's CPU/GPU as cool as possible; while one might not understand the mechanics of it, one knows it makes the CPU/GPU more stable and longer-lasting. I simply explained the mechanics of why you do that.
 
Reducing the temperature mitigates the effects, yes. You said heat degrades a CPU - that statement was incorrect, which was what needed clarifying, as you neglected to mention current at all. When you understand that electromigration is the movement of atoms driven by the flow of current, it starts to make more sense. When we increase the flow of current the temperature rises, causing the atoms to move faster. Electromigration is still happening when the CPU is under LN2, but the cold mitigates the impact of high current that would otherwise leave the logic gates looking like a mammoth's tubular tract lol.

Digression over, anyway. Nobody is expected to know everything. :)
 
Not looking to get into a game of straw-clutching; it just needed clarifying. It's been seen multiple times on both AMD and Intel CPUs.

In short, you need to factor current draw into electromigration, and you did not. The fact you also neglected to respond to my comment regarding the operating temperature of a stock Ryzen CPU is quite telling. No need to go further down the rabbit hole with this one.
 