Intel Core Ultra 9 285K 'Arrow Lake' Discussion/News ("15th gen") on LGA-1851

But since 14th gen is the same price as 13th gen, you might as well get 14th gen. There are also some benefits with Intel's APO.

The 14700K also has additional E cores, for what that's worth. Sadly I've so far seen very little benefit from APO; it only supports a small number of titles.
 



Z890 leaked

No DDR4 support (previously known)
16x PCI-E Gen 5 lanes from CPU to PCI-E slots (for GPU)
4x PCI-E Gen 5 lanes from CPU (for SSD)
4x PCI-E Gen 4 lanes from CPU (for SSD)
Native Wi-Fi 7
Native Thunderbolt 4

Solid upgrade for Z890 over Z690/Z790 - though it would have been nice to see the chipset's x8 DMI 4.0 link upgraded to DMI 5.0.
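Tallying the leaked figures above, the direct-to-CPU lane budget works out like this (a quick sketch based on the leak, not official specs; the chipset's x8 DMI link sits on top of these):

```python
# Hypothetical tally of the leaked Z890 direct-to-CPU lane budget.
z890_cpu_lanes = {
    "gpu_gen5": 16,  # x16 PCIe 5.0 to the PCI-E slots (GPU)
    "ssd_gen5": 4,   # x4 PCIe 5.0 for one NVMe drive
    "ssd_gen4": 4,   # x4 PCIe 4.0 for a second NVMe drive
}

total_usable = sum(z890_cpu_lanes.values())
print(total_usable)  # 24 usable lanes, plus the x8 DMI 4.0 chipset link
```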
 



Z890 leaked

No DDR4 support (previously known)
16x PCI-E Gen 5 lanes from CPU to PCI-E slots (for GPU)
4x PCI-E Gen 5 lanes from CPU (for SSD)
4x PCI-E Gen 4 lanes from CPU (for SSD)
Native Wi-Fi 7
Native Thunderbolt 4

The extra 4 lanes from CPU to SSD are a solid upgrade and put the Z890/LGA-1851 platform ahead of AM5 in terms of IO.

AM5 (Ryzen 7000 series) has a total of 24 PCIe 5 lanes on the CPU: 16x for the GPU, 4x for NVMe, and 4x for the chipset link. The chipset lanes aren't usable for devices, so that's 20 usable PCIe 5 lanes: one GPU at 16x, one NVMe at 4x.

Yes, it appears that Z890 has the same 20 usable PCIe 5 lanes on the CPU, but in addition another 4 PCIe 4 lanes for a second direct-to-CPU NVMe, so long as it's PCIe 4.
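Putting the two counts side by side (figures as quoted in this thread, not taken from official spec sheets):

```python
# Usable direct-to-CPU lanes as counted in this thread (chipset-link
# lanes excluded as "not usable" for devices).
am5  = {"gpu_gen5": 16, "nvme_gen5": 4}
z890 = {"gpu_gen5": 16, "nvme_gen5": 4, "nvme_gen4": 4}

print(sum(am5.values()))                       # 20
print(sum(z890.values()))                      # 24
print(sum(z890.values()) - sum(am5.values()))  # 4: Z890's extra Gen 4 x4 NVMe
```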
 
Still a bit pathetic on the PCI-e provisioning though. I'd like to see Intel push the boat out a bit with an X79-type platform for the desktop: one that isn't stupidly expensive at entry level, and that significantly increases feature support - 32x PCI-e Gen 5 lanes, USB4 and OCuLink support, quad-channel memory, etc.
 
Still a bit pathetic on the PCI-e provisioning though. I'd like to see Intel push the boat out a bit with an X79-type platform for the desktop: one that isn't stupidly expensive at entry level, and that significantly increases feature support - 32x PCI-e Gen 5 lanes, USB4 and OCuLink support, quad-channel memory, etc.
How much die space do PCIe lanes cost, and are a lot of them really necessary on mainstream platforms?

On my B550 I'm running 16x PCIe 4 for the GPU and 4x PCIe 4 for an NVMe from the CPU; from the chipset I'm running another NVMe at 4x PCIe 3.

That suits me fine for fast storage; for mass storage I stuff a bunch of SATA SSDs in there...

What's not good is when you only have 16 of the fastest PCIe lanes: if you use a PCIe 5 GPU and a PCIe 5 NVMe, the NVMe takes the lanes it needs from the GPU - see Raptor Lake.
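That sharing behaviour can be sketched roughly like this (hypothetical helper; real boards implement it by bifurcating the x16 slot to x8/x8, so the GPU drops to x8 rather than an odd x12):

```python
def gpu_lanes(gen5_nvme_installed: bool, total_gen5: int = 16) -> int:
    """Sketch of the Raptor Lake style sharing described above: with only
    16 Gen 5 lanes on the CPU, a Gen 5 NVMe has to borrow from the GPU.
    Boards do this by bifurcating the x16 slot to x8/x8."""
    return total_gen5 // 2 if gen5_nvme_installed else total_gen5

print(gpu_lanes(False))  # 16: GPU keeps the full x16
print(gpu_lanes(True))   # 8: GPU runs at x8 once a Gen 5 NVMe is in use
```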
 
6.2GHz for a 1x600 chip?
And 20% over a 14900 is great. Are Intel forced to provide a better value proposition this time - maybe not by reducing price, but by bumping up the performance uplift?
 
I'd like to see Intel push the boat out a bit with an X79-type platform for the desktop: one that isn't stupidly expensive at entry level, and that significantly increases feature support - 32x PCI-e Gen 5 lanes, USB4 and OCuLink support, quad-channel memory, etc.

They have that platform, but motherboard manufacturing costs are simply higher today.
 
HT can be worth ~15% performance, so that's a pretty significant deficit for clock speed to make up.

HT can be anywhere from ~15% to ~95% depending on the application - I'm not sure what the average is; I think Intel quotes about 30%. So for some stuff it is a massive amount to make up. One of the game development tools I use regularly for packing data files gains 70-90% performance from HT.

EDIT: Intel quotes 30% as both a max and an average in different places; reviews come up with slightly different results depending on what they are testing.
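To get a rough sense of why even a 15-30% HT uplift is hard for clock speed alone to cover, here's a back-of-the-envelope sketch (hypothetical numbers, and it assumes throughput scales linearly with clock, which it doesn't quite in practice):

```python
def clock_needed_to_match_ht(base_clock_ghz: float, ht_uplift: float) -> float:
    """How fast a non-HT part would have to clock to match an HT part
    running at base_clock_ghz, assuming throughput scales linearly with
    clock. ht_uplift is the fractional HT gain (0.15 for ~15%, etc.)."""
    return base_clock_ghz * (1 + ht_uplift)

# A 6.0GHz HT part with a ~15% or ~30% HT gain:
print(round(clock_needed_to_match_ht(6.0, 0.15), 2))  # 6.9 GHz to match
print(round(clock_needed_to_match_ht(6.0, 0.30), 2))  # 7.8 GHz to match
```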
 
Random Arrow Lake chip benched: 20% faster than the 14900KS in single core, and multi-core around the same as a 13700K.

This chip could be a 15600K or something.


GG if true. This is Intel's first chiplet attempt on desktop CPUs; I expected it to be inferior in every way to Zen 5.
 
HT can be anywhere from ~15% to ~95% depending on the application - I'm not sure what the average is; I think Intel quotes about 30%. So for some stuff it is a massive amount to make up. One of the game development tools I use regularly for packing data files gains 70-90% performance from HT.

EDIT: Intel quotes 30% as both a max and an average in different places; reviews come up with slightly different results depending on what they are testing.
Pre Meltdown/Spectre, or in unpatched environments, maybe - but that's cheating extra performance out of HT, and Intel can't market that any longer.

I think Intel need to drop HT, draw back their power use, and save Windows from tripping over itself dealing with three types of processing pipes and two different chip architectures. Ultimately, with Intel's power use and drive to smaller nodes, something has to give, because we've reached the point where the silicon is degrading pretty rapidly.
 
Pre Meltdown/Spectre, or in unpatched environments, maybe - but that's cheating extra performance out of HT, and Intel can't market that any longer.

I think Intel need to drop HT, draw back their power use, and save Windows from tripping over itself dealing with three types of processing pipes and two different chip architectures. Ultimately, with Intel's power use and drive to smaller nodes, something has to give, because we've reached the point where the silicon is degrading pretty rapidly.

That varies hugely by processor generation and application - mostly the 20-30% impact comes from synthetic benchmarks, which don't represent real-world use; a lot of applications only see a 2-5% decrease in HT efficiency from the mitigations. I might be wrong, but IIRC it is Skylake-based CPUs that see the bigger hit; most generations before and after tend not to see as big an impact (I assume more recent processors, developed after these mitigations, have some hardware tweaks to reduce the impact).
 
That varies hugely by processor generation and application - mostly the 20-30% impact comes from synthetic benchmarks, which don't represent real-world use; a lot of applications only see a 2-5% decrease in HT efficiency from the mitigations. I might be wrong, but IIRC it is Skylake-based CPUs that see the bigger hit; most generations before and after tend not to see as big an impact.

The P core is still essentially rehashed Skylake at its core. Pun intended.

HT is probably seen as a hindrance and a liability now.
 
The P core is still essentially rehashed Skylake at its core.
You can keep repeating that as often as you like, but it doesn't make it true.

Golden Cove had significant improvements:
 