Rocket Lake Review: A waste of sand...

You needed to buy a much more expensive motherboard and triple- or quad-channel RAM for those, which has never been useful for gaming. HEDT (High End Desktop) platforms, as they were called, were not intended for gaming or general everyday PC use.

Dual-channel RAM running at higher frequencies with tighter timings outperforms quad-channel in most situations, particularly gaming.
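Just to put rough numbers on that (purely illustrative, back-of-the-envelope figures, not benchmarks of any real kit): quad-channel still wins on raw theoretical bandwidth; the point is that games tend to care far more about latency, so the higher clocks and tighter timings of a fast dual-channel kit usually matter more. A quick sketch of the maths, assuming example DDR4-3600 dual-channel and DDR4-2666 quad-channel configurations:

    # Rough back-of-the-envelope sketch (not a benchmark): theoretical peak
    # bandwidth for a couple of illustrative DDR4 configurations. The transfer
    # rates and channel counts below are assumptions chosen purely as examples.

    def peak_bandwidth_gb_s(channels: int, transfer_rate_mt_s: int, bus_width_bytes: int = 8) -> float:
        """Peak theoretical bandwidth in GB/s: channels * MT/s * 8 bytes per transfer."""
        return channels * transfer_rate_mt_s * bus_width_bytes / 1000

    configs = {
        "dual-channel DDR4-3600 (tight timings)": (2, 3600),
        "quad-channel DDR4-2666 (typical HEDT)": (4, 2666),
    }

    for name, (channels, rate) in configs.items():
        print(f"{name}: ~{peak_bandwidth_gb_s(channels, rate):.1f} GB/s peak")

    # Quad-channel still has more raw bandwidth (~85 GB/s vs ~58 GB/s here);
    # games are mostly latency-sensitive, which is why the faster dual-channel
    # kit usually comes out ahead in practice.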


I think people forget that until recently the general desktop market didn't need more than 4c/4t. In fact, I'd hazard a guess that 99% of people buying an average desktop PC still don't need more than 4c/4t. At the time, the 6-core+ Intel HEDT CPUs were aimed at hardcore enthusiasts, content creators, professionals and so on, and arguably the platform offered a lot more than just higher core-count CPUs.

At that time AMD did indeed offer 6-core+ desktop CPUs; the problem was that those CPUs had rubbish single-threaded performance that wasn't really compensated for by their multi-threaded performance either.

Going back to Rocket Lake: the performance uplift for a new architecture is a bit underwhelming, and there's a regression in core count on the i9, which has been expected for a long time because it's a backport. But Comet Lake aside, it's not a poor CPU by any stretch of the imagination; it will offer blisteringly fast gaming performance and do everything most desktop users will ask of it. The problem is that it's competing with last-gen Comet Lake and AMD at the same time, and PCIe 4.0 is a bit of a non-starter, not yet worth upgrading for on any platform.

I think it's a great time to be a PC enthusiast (aside from the shortages). Both AMD and Intel offer silly-fast CPUs, so whether you're just gaming, just benching or doing some sort of professional work, both have you covered and neither will give you a sub-par experience.
 
But isn't this because developers need to design their games around the specs most gamers already have? Developers had to stick to 4c/8t at most as that was the norm, similar to how little VRAM graphics cards used to have - for a long time Nvidia only put up to 1.5 GB on their highest-end cards.

If a developer had wanted to make a game that required 8 cores and 4 GB of VRAM, hardly anyone would have been able to play it, so it wouldn't have sold. Hardware isn't meant to be built around what current software demands; the point is that as better hardware becomes the norm, software developers can then make use of it.
 
You could put it that way, I guess. VRAM is a little more complicated: as GPUs have got faster, VRAM has increased accordingly. There isn't much point in adding VRAM unless you have the GPU processing power to actually make use of it. In all my years of PC gaming (my first 'gaming' PC was a 486SX-25) I've never had a problem with a graphics card's VRAM (or lack of it).

As for whether Intel held back gaming or productivity with lower core-count CPUs? I think it's up for debate. But even today, with desktop CPUs offering up to 16 cores, most software scales poorly (outside of synthetic benchmarks), games that scale well beyond 4c/8t are still few and far between, and even in the days when 4c/8t CPUs were king very few games made full use of them - which in itself is quite telling.
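To put a rough illustration behind the scaling point (a toy Amdahl's-law calculation with made-up parallel fractions, not measurements of any real game): if only part of a frame's work can be spread across cores, the gains from going past 4c/8t flatten out very quickly.

    # Toy Amdahl's-law illustration (assumed numbers, not measurements): if only
    # a fraction p of a game's per-frame work can run in parallel, the speedup
    # from extra cores levels off fast.

    def amdahl_speedup(p: float, cores: int) -> float:
        """Ideal speedup with parallel fraction p spread over `cores` cores."""
        return 1.0 / ((1.0 - p) + p / cores)

    for p in (0.5, 0.7, 0.9):   # assumed parallel fractions
        row = ", ".join(f"{cores}c: {amdahl_speedup(p, cores):.2f}x" for cores in (4, 8, 16))
        print(f"parallel fraction {p:.0%} -> {row}")

    # Even at a 70% parallel fraction, going from 4 to 16 cores only improves
    # things from ~2.1x to ~2.9x - roughly why extra cores show so little in
    # most games unless the engine is heavily threaded.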
 
Also, something I forgot to address: games have for many, many years been the driving force behind ever more powerful graphics cards. Whenever I get a new GPU, I often go back to older games that I used to have to lower the details on to get reasonable performance, so I can play them 'maxed out'. Another driving force is ever-increasing resolutions and better monitor technology.
 
Indeed, but I think everything else hardware-wise increased in power much more quickly than the time it took Intel to go beyond 4c/8t in their non-HEDT lineups.

I had a couple of 6c/12t HEDT setups before, and the motherboard cost and the number of RAM sticks aren't something I ever want to go back to. RAM-wise, two sticks running faster is better in the vast majority of cases; I don't even know which professional workloads actually make use of quad-channel.

My current hope is that Intel or AMD will add 24 PCIe 4.0 lanes to future mainstream CPUs (enough for one GPU and two M.2 drives) - maybe AMD already does - while Intel is still only offering 20 lanes.
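The lane arithmetic behind that wish, for anyone counting (the device split below is just the usual illustrative one, not a description of any specific board):

    # Quick lane-budget arithmetic behind the 20 vs 24 lane point above.
    # The device allocations are illustrative assumptions.

    cpu_lanes_available = 20   # current mainstream Intel figure quoted above

    devices = {
        "GPU (x16 slot)": 16,
        "M.2 NVMe drive #1 (x4)": 4,
        "M.2 NVMe drive #2 (x4)": 4,
    }

    wanted = sum(devices.values())
    print(f"lanes wanted: {wanted}, lanes available: {cpu_lanes_available}, "
          f"shortfall: {max(0, wanted - cpu_lanes_available)}")
    # -> 24 wanted vs 20 available, hence the wish for 24 CPU lanes so a GPU
    #    plus two CPU-attached M.2 drives don't have to share.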
 
Seven years? Intel has had 6 cores since how long ago? Oh, that's right - 2010. The funny thing is that out of AMD bias you're trying to defend AMD when... instead of raging about it, think for a minute: even if you are AMD-biased, you'll soon be getting your favourite AMD CPU a lot cheaper because of Intel. So, as I said, thank you Intel - if it weren't for this pricing you'd be getting gouged and scalped by retailers for much, much longer.

Oh right, so we're bringing HEDT into this now? When is Intel bringing the cost of the Threadripper parts down, then?

You can't see the wood for the trees, can you? I call out utter tripe when I see it, and you try to hide behind shouting others down, using 'AMD bias' as your defence, but really you're the one with the problem. You can't see it, and that's even worse.

Tell me, back in 2017, when did you complain that Intel 8700Ks were being scalped because they were 'the best', and then thank AMD for moving the desktop market forward with a 6c/12t part at 30% of the cost? Of course you didn't. Your comments were along the lines of 'if you want the fastest, you have to pay for it'. Go back and refresh your memory if you've forgotten.
 
I've been doing some testing, and I would for sure buy Ryzen today, assuming it's a system built from scratch or you'd still need to buy a new board to go Intel anyway.

After I upgraded to Win10, I noticed the I/O performance of my system has regressed very noticeably - I don't mean things like benchmark scores, but rather I/O latency. This is the area where, as I understand it, CPU microcode updates have been hurting Intel, along with the OS-level patches.

On a Windows 8 build from the month before the first CPU patches hit, my 9900K can compete with my 2600X no problem - it actually wins out, as one would expect. Since moving to Win10 1809 LTSC, even with the Spectre/Meltdown mitigations disabled, the effect of all the other patches is visible in my day-to-day usage. I can now see why various Intel CPU owners have kept themselves on Win10 1607; Intel is definitely feeling the pain.

Some examples.

A Macrium backup of my FF7 mods folder on Win 8.1 took 4 minutes on average; on Win10 1809, using the same hardware and with the mitigations mentioned above disabled, it takes a whopping 22 minutes on average. Most of the time is spent scanning the files for changes, as on most backup runs there isn't much to add to the incremental backup. The files are stored on my 960 EVO SSD, and if you watch Task Manager the I/O on the SSD is very low - it's all CPU overhead.

Some mods in FF7 can cause large stutters when loading assets. On Win 8, or on Win10 on my Ryzen system, it's only a tiny stutter; since upgrading this machine to Win10 it lasts for about a second while the modded assets load in. These are two examples of I/O-bound operations.
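For anyone who wants to sanity-check the file-scanning overhead on their own machine, here's a minimal timing sketch along those lines (this is not the test I ran, just a rough way to time the 'scanning for changes' style metadata work on a folder; the path is a placeholder):

    # Minimal sketch (not the actual Macrium test): time how long it takes just
    # to walk a folder and stat every file, which roughly mirrors the
    # "scanning for changes" phase of an incremental backup. If this dominates
    # while disk I/O stays low in Task Manager, the cost is per-file/syscall
    # overhead rather than raw SSD throughput. The path below is a placeholder.

    import os
    import time

    FOLDER = r"C:\path\to\ff7-mods"   # placeholder - point at the folder being backed up

    start = time.perf_counter()
    files = 0
    total_bytes = 0
    for root, _dirs, names in os.walk(FOLDER):
        for name in names:
            st = os.stat(os.path.join(root, name))   # one metadata syscall per file
            files += 1
            total_bytes += st.st_size
    elapsed = time.perf_counter() - start

    print(f"stat'ed {files} files ({total_bytes / 1e9:.1f} GB) in {elapsed:.1f} s "
          f"({files / max(elapsed, 1e-9):.0f} files/s)")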

In various games it isn't noticeable, and certain things feel lightning fast, such as MS Office. It seems only certain workloads are impacted, and the Ryzen doesn't seem affected at all. Overall the 9900K is still faster, but then the 2600X is a much lower-specced CPU from before Ryzen sorted out its single-threaded performance; I expect a 5800X would be completely dominant.

On my laptop the effect is far more visible - also an Intel CPU, but a low-end Broadwell part. It isn't just specific I/O; the whole OS is visibly laggier than a pre-CPU-patch Win8 build. When I tried to install the CPU patches on Win8 on the laptop it was completely unusable, so the Win10 install on there is actually better optimised than the patches were on Win8.

So to me it's clear the bottlenecks caused by the patches can be mitigated by a combination of a high-end enough part and software optimisations to work around them, but it's also clear that certain I/O paths are extremely heavily hit on Intel, which explains why servers, as well as older legacy code, get hit especially hard.
 
I'd imagine they'll go with N6 EUV and leave the original N7 DUV for the lower-end parts if they have split capacity. Given the limitations on TSMC's capacity, utilising both will allow them more wafers per month, and it makes absolute sense to use the higher-performing node for the newer parts that will benefit most.
As pointed out, dropping the X from the 5600 isn't an issue, and making the 5600XT (or whatever) actually faster means it needs to be a genuine improvement if they want to keep the MSRP, and therefore margins, high. Why dilute sales with a $199 5600 when no one would then buy the X(T), there being no real gain? N6 will allow them better clocks or lower power if desired, and so the $299 5600XT is born, faster than the X and therefore more desirable.

I could potentially see two new 5600 SKUs, if AMD actually wanted to bother: say, a ~45W '5600' sold cheap would be interesting - still able to boost decently in single- and lightly-threaded applications, while the power cap stops it competing with the 5600X. Then a 105W '5600XT' would be the obvious higher-end chip, allowing it to boost to and maintain higher clocks when loaded. If they priced the XT at the current X price and dropped the X accordingly, it would fit, I guess.

But they don't really need to... They're happily selling every Zen 3 chip they make, and still not actually making enough of the 5900X/5950X (the former using the same 6-core dies that any extension of the 5600X range would need, of course), so what's the benefit of producing another SKU at a lower price?

If they lose a sale to Intel it's a low-margin chip so not a huge issue, and it's not like the lost sale 'locks' the buyer into Intel beyond this generation. They might do it in a few months because they can, or they may just release Zen3+ instead.
 
They might do it in a few months because they can, or they may just release Zen3+ instead.

Yes, that is what anything on N6 would be. I thought that was obvious; it's up to AMD whether they call it a 6xxx series, though, rather than 5xxx XT or similar.
 
Not a big fan of the angle that Intel makes what people want.

For a start, they're not immune to being lazy, getting stuck and failing. They've been dragging their 10nm process out for years and have no answer to AMD making core-count monsters that sell out even while being overpriced.

Clearly it sucks to rely on one company to push things along.

It's no bad thing that they're now taking their turn scrapping for the mainstream market with value-for-money CPUs.
 
$170 for this. It's the cheapest one, and it's $170 for this bare-bones piece of crap. What is it with Intel and the high price of their motherboards that vendors have to strip them down to something that resembles a $70 board just to get them sub-$200?

 
VRMs and power delivery. Getting something in place to feed the top-end CPUs without melting isn't going to be cheap. Of course, the price precedent has been set now. Even if Alder Lake comparatively sips power, so there's no need for humungous VRMs and power delivery, board prices won't come down accordingly, because people have been "happily" paying stupid money for a few generations.
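To put some very rough, purely illustrative numbers on that (assumed figures, not any specific board or CPU): the current the vcore phases have to share at full load is what drives the VRM cost.

    # Rough, illustrative numbers (assumptions, not measurements) for why
    # top-end boards need substantial VRMs: the current per phase at full load.

    cpu_power_w = 250        # assumed sustained package power for a top-end chip
    core_voltage = 1.3       # assumed load voltage
    phases = 12              # e.g. the vcore phases of a "12+2" design

    total_current = cpu_power_w / core_voltage
    print(f"~{total_current:.0f} A total, ~{total_current / phases:.0f} A per phase across {phases} phases")
    # -> roughly 190 A total, ~16 A per phase; raise the power draw or cut the
    #    phase count and each stage (and its cooling) has to work much harder,
    #    which is where the board cost goes.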

Exactly like we saw with B550 going in with huge markups over previous gen simply because X570 was stupid money to start with.
 
I paid £110 for the highest-model B550 mobo from Gigabyte, which, other than the PCIe lanes, has all the features of the mid-level X570 boards at double the price - which seems pretty reasonable tbh.

But yeah prices are silly for a lot of mobos.
 
My board has a 12+2-phase VRM - the same one as on the $240 Z590 Gigabyte Gaming X - and it also has a stack of USB ports, including USB-C, 2.5Gb LAN (Realtek) and Intel Wi-Fi 6.

It was £160.

https://www.gigabyte.com/Motherboard/B550-AORUS-ELITE-AX-V2-rev-10#kf
 
You get the feeling Intel is charging motherboard vendors a small fortune for the chipsets to offset the decreasing price of their CPUs.
 