Maybe the refresh is to reduce power consumption rather than increase performance.

It will do both. The 14900k will consume less power for the same amount of performance as the 13900k.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Oh, don't be so pessimistic; after all, it's the same architecture on the same manufacturing process. Nextlake will be +/-5% lastlake. Just a hunch I have, not sure why…
Too good to be true. The i9 12900k ends up worse than an i5 in the span of 2 years. That's a huge leap if true.
Joking aside, I'm asking the optimistic folks seriously, what can Intel magically do with the same architecture on the same manufacturing process? Do you really think that DLVR (if implemented in the RPL refresh) will drastically reduce power consumption at such high clock speeds? And they depend on high clock speeds because IPC won't increase. Reducing power consumption is a challenge even for much more successful companies like Nvidia and AMD, so can Intel make such a significant leap with just one technology? It's hard for me to be optimistic, but no one will be happier than me if I'm proven wrong.
Huge leap in power consumption, compared to the competition.

You need some help on using your own power limits? Send me a PM and I'll help you out, it's quite easy.
How I understand it, Intel could reduce the IPC to allow the ring bus to operate with more Skylake cores and improve efficiency. The downside is that it means adding silicon and increasing costs, but the chips probably wouldn't look so dire.

More drastically, Intel could start turning parts of its chips off to drop the power consumption and sacrifice specific types of performance. Intel would essentially be choosing between rocks and hard places. This would probably be less costly, but also less effective.

Also adding more Atom cores. Decent for some work types but poor for others. They do scale really well though.

The easiest technique to drop power consumption is... to power limit the chips. Doesn't take a rocket scientist. I mean, AMD did it with the 7950X3D: it's the same chip as the 7950X but way, way more efficient, because it has lower power limits. So... what?
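Since the thread keeps coming back to "just power limit the chip", here is a minimal sketch of what that looks like on Linux through the powercap/intel-rapl sysfs interface. It is an illustration under assumptions, not a recipe: the intel-rapl:0 path and the constraint numbering can differ between systems, writes need root, the BIOS or firmware may clamp whatever is written, and on Windows you would use the BIOS or a tool like Intel XTU instead.

# Minimal sketch: read (and optionally lower) the package power limits (PL1/PL2)
# via the Linux powercap/intel-rapl sysfs interface. Assumes the intel_rapl
# driver is loaded and the script runs as root; firmware may still override.
from pathlib import Path
import sys

PKG = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain (path may vary)

def read_uw(name: str) -> int:
    """Read a sysfs value expressed in microwatts."""
    return int((PKG / name).read_text().strip())

def show_limits() -> None:
    # constraint_0 is typically the long-term limit (PL1), constraint_1 the short-term (PL2)
    for idx in (0, 1):
        label = (PKG / f"constraint_{idx}_name").read_text().strip()
        watts = read_uw(f"constraint_{idx}_power_limit_uw") / 1_000_000
        print(f"{label}: {watts:.0f} W")

def set_pl1(watts: float) -> None:
    """Set the long-term package power limit (PL1), in watts."""
    (PKG / "constraint_0_power_limit_uw").write_text(str(int(watts * 1_000_000)))

if __name__ == "__main__":
    show_limits()
    if len(sys.argv) > 1:  # e.g. `sudo python3 rapl_limit.py 150`
        set_pl1(float(sys.argv[1]))
        show_limits()

Dropping PL1 from an effectively unlimited board default to something like 150 W is the kind of change the posts above have in mind: a large cut in package power for a comparatively small loss in multi-threaded throughput.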
Sure, just sell T parts.

I don't see the point of the T parts honestly, but people seem to really care about out-of-the-box power draw, so yeah, for these people T and non-K parts are great. Most efficient CPUs in existence out of the box.
In all seriousness, regarding gaming efficiency, there is no way in hell they can contest the 3D chips. Best case, they will be a little bit better than 13th gen (if they keep the same clock speeds); worst case, they will consume the same or even more if they push clock speeds.
In productivity, 13th gen is already great (assuming you don't run them power-unlimited, 4096 W, unlimited tau, etc.), despite what AMD fans keep saying. The midrange parts especially were already easily winning against AMD in efficiency, so if the leaks about the extra cores on the i5 and i7 are true, the gap in productivity efficiency between i5 vs R5 and i7 vs R7 will get even bigger.
I don't get why you keep trolling after being suspended once already? 13th gen is extremely power hungry in all applications. Ryzen 7000X3D is much more efficient in all applications, this is a simple known fact. I'm comparing stock vs stock here.

Every CPU is extremely power hungry if you don't limit the power draw. If you care about efficiency, why would you run it at 400 watts? It doesn't make sense; the only people who run the 13900k at 400 W are those who want to complain that it's not efficient. Going from 150 W to 400 W gets you less than 10% more performance, so why are you doing that if you actually care about efficiency? I think you just want to complain that it's not efficient. Well, keep complaining, I guess.
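To put the 150 W versus 400 W point in concrete terms, here is a quick back-of-the-envelope calculation. The "less than 10%" gain is the post's own figure rather than a measured benchmark, and the 100-point baseline score is just a normalisation chosen for illustration.

# Back-of-the-envelope check of the claim above: if raising the power limit from
# 150 W to 400 W buys less than 10% extra performance, perf-per-watt collapses.
# The scores are normalised, hypothetical numbers taken from the post, not benchmarks.
def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

limited = perf_per_watt(100, 150)    # score 100 at a 150 W limit
unlimited = perf_per_watt(110, 400)  # +10% score at 400 W (the claimed best case)

print(f"150 W: {limited:.3f} pts/W")                        # ~0.667
print(f"400 W: {unlimited:.3f} pts/W")                      # ~0.275
print(f"perf-per-watt retained: {unlimited / limited:.0%}")  # ~41%

The numbers just make explicit what the post is claiming: roughly 60% of the perf-per-watt is given up for less than 10% more performance.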
Considering how much the current generation consumes, even if it were just that scenario, it would still be a great success for Intel, lol. However, most customers logically won't be satisfied with that, just like they weren't satisfied with Nvidia's 4000 series, which consumes significantly less than the previous generation. Customers also want a leap in performance when they buy something new, no matter how affordable the product may be.

From 12th gen and below it will be a leap if customers have any of those; from 13th gen it will most likely be very little and not worth an upgrade. It may be a more noticeable leap with AMD, as they take an age to bring something new to the table and they don't always get it right. Intel, on the other hand, are constantly making improvements, albeit small, and they sell them, so why wouldn't you keep producing new CPUs? It keeps the cash rolling in. I'll be getting 14th gen unless it's a complete disaster, even though I have a 13900KS; I like tweaking/overclocking, and usually the Intel platform is the most fun for this kind of thing.
It seems to me like you've been sleeping for the past few years. That statement held true during the Skylake era, but not since Zen 1. AMD is synonymous with success, innovation, and sticking to their plan. They work diligently and quietly, while Intel recently seems to fail at whatever they undertake. They talk a lot, but everything remains on paper. With Meteor Lake, they will lay the foundation for that chiplet/tile architecture, and I hope that better days are ahead for them.