Current generation CPU longevity estimates

This longevity situation is what's holding me back but I really want/need to upgrade. My 6700K is still holding its own with a 3080 on an AW3821DW. Sure, it can get pinned to 100% in some titles, but most games run absolutely fine. The only issues I've had have been instability from my own mild OC more than anything else. I'm also stuck on PCIe 3.0, and the board refuses to run at anything other than x8.

I like to dabble with 3D rendering, however, and 4c/8t is really showing its age. I could really do with a 12700K or 5900X at a minimum, but given I keep things for a long time, a 12900K or 5950X would be much better, which makes upgrading now a fair outlay given I'll likely have the core components for a good 5 years.

I really want to upgrade, but I don't feel like AL + DDR4 is worth investing in, and AL + DDR5 doesn't sound like a decent investment right now either. I'm banking on RL having a better memory controller, so I'm waiting for that. Not sure if getting a DDR5 AL setup now will allow an upgrade path, or if they'll bring more compatible boards with RL. Wah!

I might be swayed by a decent discounted used AL setup, mind you, but it feels daft investing in AM4 given it's on its way out very shortly. Now is just an awkward time to spend money on core hardware.
 
When was the last time dave gamed on anything but the latest Intel CPU :cry:? I used an X5650 @ 4.4GHz on X58 up until about a year and a half ago; it was only then that the single-core performance was not really adequate for online FPS games.
Trouble is, the reason you notice a big leap when upgrading is that you took so long to upgrade. You could have upgraded in half that time and still got a good boost.

Do you think the issue will be lack of cores, IPC, frequency, PCIe version...?
I would say IPC, frequency, latency etc. We're getting what, an easy 15%+ per generation now?

It's a waste going for the highest core count unless you really need it; better to go for the best balance of cores. The speed of them is what's most important, at least IMO.
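As a rough sketch of how per-generation gains stack up over a typical keep-it-for-years cycle (the flat 15% figure and the one-generation-per-year cadence are illustrative assumptions only, not measured data):

```python
# Rough sketch: how ~15% per-generation single-core gains compound.
# Both the 15% figure and the yearly cadence are assumptions for illustration.
gain_per_gen = 1.15

for gens in range(1, 6):
    cumulative = gain_per_gen ** gens
    print(f"after {gens} generation(s): ~{cumulative:.2f}x single-core performance")

# At ~15% per generation, a CPU kept for 5 generations ends up roughly half as
# fast per core as the newest parts (1.15 ** 5 ≈ 2.01).
```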
 
This longevity situation is what's holding me back but I really want/need to upgrade.

If longevity is your primary concern, you would be best waiting for AM5 later in the year. Current AM4 and LGA1700 (six months left) are both EOL.

The last decade, marking the transition from single-threaded to multithreaded software along with rather poor IPC increases over the same period, helped the longevity of older processors, but I don't think the next decade will be so kind.
 
The PS5 and Xbox have an 8c/16t Zen 2 CPU. That's the spec that PC games will be designed for. The last console generation lasted 7 years, so expect that spec of PC to last until 2027.

Also, the Nvidia 4090 is supposed to be 100% faster than the 3090, and the 5090 will probably be 50% faster than the 4090. If that's the case, the 5090 will be 3x as fast as the 3090.

So you guys who mentioned the 5600X: do you honestly think it will keep up with a GPU that's 3x as fast as a 3090? We're only talking about September 2024 until September 2026.
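A quick sanity check of that compounding, treating the rumoured uplifts purely as placeholder numbers:

```python
# Sanity check of the compounded uplift; the rumoured figures are placeholders.
uplift_4090_over_3090 = 2.0   # "100% faster" => 2.0x
uplift_5090_over_4090 = 1.5   # "50% faster"  => 1.5x

uplift_5090_over_3090 = uplift_4090_over_3090 * uplift_5090_over_4090
print(f"5090 vs 3090: {uplift_5090_over_3090:.1f}x as fast, "
      f"i.e. {(uplift_5090_over_3090 - 1) * 100:.0f}% faster")
# => 3.0x as fast, i.e. 200% faster, if both rumours pan out.
```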

Those GPUs will be for ultra high resolution: 4K, 6K and 8K, where the GPU will do most of the grunt work, not the CPU.
 
Those GPUs will be for ultra high resolution: 4K, 6K and 8K, where the GPU will do most of the grunt work, not the CPU.

That's irrelevant because surely the whole stack will be twice as fast. So the 4060 for 1080p will be twice as fast as a 3060, and then the 5060 will be 3x as fast as a 3060. I can't see a 5600 keeping up unless the leaks are fake.
 
That's irrelevant because surely the whole stack will be twice as fast. So the 4060 for 1080p will be twice as fast as a 3060, and then the 5060 will be 3x as fast as a 3060. I can't see a 5600 keeping up unless the leaks are fake.

Game logic is still written for console CPUs, so 8 thermally throttled Zen 2 cores should be able to handle 60-120fps of game logic for the foreseeable future. Sure, with new GPUs, if you run the game at 720p hoping to get 2000fps then a 5600 might be the bottleneck pushing those frames, but it shouldn't be a struggle to push 120fps.
 
Modern mid-high end CPUs (5600/12500 and above) will last 5+ years aside from a few niche cases where you need more cores (in which case, get the ones with more cores).

Obviously if you want bleeding-edge performance then you upgrade before then, but for the majority of people CPU longevity has improved massively this last decade. It used to be that you needed to upgrade your CPU every 2 years, but that's not really been the case since Sandy Bridge.
 
Obviously if you want bleeding-edge performance then you upgrade before then, but for the majority of people CPU longevity has improved massively this last decade. It used to be that you needed to upgrade your CPU every 2 years, but that's not really been the case since Sandy Bridge.

You needed an upgrade every 2 years because every 2 years CPU performance was doubling so upgrades always felt significant. It's not the longevity that was improved, it's that CPU performance stagnated in the last decade so year-on-year improvements were in low single digit percentages, if that.

Now that we're going back to more aggressive performance improvements every generation and proper competition between at least four major players in the CPU market, differences are becoming more significant, so upgrades are also more noticeable.
 
It's not the longevity that was improved, it's that CPU performance stagnated in the last decade
To me the two things are related. CPU performance improvements have stagnated, meaning that CPUs last longer, because there is greatly reduced benefit from upgrading and developers aren't drastically increasing the CPU requirements.

In other words, CPU longevity is essentially driven by how performance requirements change over time. And performance requirements tend to follow what hardware is available - if CPUs don't get twice as fast, then developers don't produce games (or whatever) needing double the CPU performance.
 
To me the two things are related. CPU performance improvements have stagnated, meaning that CPUs last longer, because there is greatly reduced benefit from upgrading and developers aren't drastically increasing the CPU requirements.

In other words, CPU longevity is essentially driven by how performance requirements change over time. And performance requirements tend to follow what hardware is available - if CPUs don't get twice as fast, then developers don't produce games (or whatever) needing double the CPU performance.

It's semantics now, but the way you phrased it ("CPU longevity has improved massively") makes it seem like a good, desirable thing, when in reality it's been a disaster of a decade, after years and years of stagnation and lack of progress.
 
The problem is that performance-per-core improvements from gen to gen have significantly accelerated; this means faster obsolescence, and it matters more than core counts.

@HangTime beat me to it.

If the question is because you're wondering whether to wait for AM5 or not, then don't; it's best to buy CPUs at the end of a generation, especially when there's also a chipset and RAM change coming. I would rather save tons of money than have an extra year or two of longevity.
 
I guess I'm just trying to answer the question posed without worrying too much about whether the drivers for that change are good or bad.

I do think it could in a roundabout way be considered a 'good' thing for people who aren't in a position to upgrade their CPU regularly, in the sense that they can continue playing new games coming out without doing so. In the 90s, you were screwed: performance was terrible and you needed to upgrade to get better performance even in existing games (e.g. Quake), never mind newer ones. Of course, we needed faster hardware more then, so it would have been bad if it had stagnated. I look at the situation in the 20s, with Zen 3 and Alder Lake, and I don't feel like we need massive gains in CPU performance like we used to, aside from a few edge cases.
To rephrase this a bit: in the 90s, a 5yr old CPU was absolute trash. In the 20s, a 5yr old CPU is really not that bad (assuming it was good to start with, obviously).

One way of looking at it is that CPU demands relative to hardware have stopped growing so rapidly; it's a bit of a chicken-and-egg situation. On the one hand, if we had better hardware, we might get more demanding games/software. On the flipside, you could argue we don't need big improvements in CPUs because the use cases are getting more optimised to work with current/old hardware, and it is GPU tech where the focus is now.
 
I guess I'm just trying to answer the question posed without worrying too much about whether the drivers for that change are good or bad.

Fair enough.

I do think it could in a roundabout way be considered a 'good' thing for people who aren't in a position to upgrade their CPU regularly, in the sense that they can continue playing new games coming out without doing so. In the 90s, you were screwed: performance was terrible and you needed to upgrade to get better performance even in existing games (e.g. Quake), never mind newer ones. Of course, we needed faster hardware more then, so it would have been bad if it had stagnated. I look at the situation in the 20s, with Zen 3 and Alder Lake, and I don't feel like we need massive gains in CPU performance like we used to, aside from a few edge cases.
To rephrase this a bit: in the 90s, a 5yr old CPU was absolute trash. In the 20s, a 5yr old CPU is really not that bad (assuming it was good to start with, obviously).

If no new CPU comes out at all, your 5yr old CPU is still not only very good, it's the best thing out there; it will run everything! That's not a good thing.

It's only "good" as far as you believe stagnation is good. Like a 5yr old car being the best is good, or...

One way of looking at it is that CPU demands relative to hardware have stopped growing so rapidly; it's a bit of a chicken-and-egg situation.

No. If anything, it's grown even more rapidly. In every application out there, from basic web development all the way to code compiling, the software ideas for progress are not only there, they're implemented, but they're impractical. The problem? We can't run them because CPUs are too freaking slow.

On the one hand, if we had better hardware, we might get more demanding games/software. On the flipside, you could argue we don't need big improvements in CPUs because the use cases are getting more optimised to work with current/old hardware, and it is GPU tech where the focus is now.

Oh boy we do. We absolutely do. Lots of aspects of gaming are limited by single-threaded CPU performance, the most important/limiting being game AI. If we literally implemented the current ideas in game AI, most current CPUs would only manage 10fps at best.

The idea that current CPUs are as good as we'd ever need, for any workload, is just absurd. This isn't a both-sides-are-valid argument; with better hardware we'd absolutely, 100%, get better software. We haven't reached peak CPU performance, or peak processing requirements, yet. We haven't even started yet.
 
The idea that current CPUs are as good as we'd ever need, for any workload, is just absurd.
I completely agree. What I'm saying is that (for gaming at least) CPU limitations impacted framerates (or rather, suppressed framerates to unacceptable levels) in the 90s a lot more than they do now. Mainstream games are either not being developed in a way that demands significantly more power (with some exceptions), or they end up being GPU-limited at the sort of settings people tend to game at these days. That doesn't mean to say that devs couldn't take advantage of more power, I'm sure they could, but the difference now is they just aren't creating those demands even if theoretically they would like to, in contrast to the 90s where you'd have games running incredibly slowly even on the best CPUs. And this is why I think the longevity of current CPUs will be greater than it used to be: not because theoretically more power is unwarranted, but because practically those demands are unlikely to rapidly become mainstream.

Like I said, it's a bit of a chicken-and-egg scenario: in the old days, devs could code for future performance improvements, i.e. build a game in the knowledge that performance would have risen by the time of release and risen again within a year. People forget how badly 90s FPS games performed on the hardware of the time, but they also typically scaled very well as hardware improved. Maybe they don't do that now, because they know CPU performance won't have doubled in 2yrs or whatever.

Admittedly, I'm primarily talking about gaming here (given the OP mentioned consoles); for non-gaming workloads it's clear we can benefit significantly from more performance.

To summarise/repeat: I'm not saying that better CPUs wouldn't unlock more potential in software, I'm saying I see less evidence of games being developed at a rate that warrants replacing CPUs as frequently as in years gone by.
 
Game logic is still written for console CPUs, so 8 thermally throttled Zen 2 cores should be able to handle 60-120fps of game logic for the foreseeable future. Sure, with new GPUs, if you run the game at 720p hoping to get 2000fps then a 5600 might be the bottleneck pushing those frames, but it shouldn't be a struggle to push 120fps.

But it will still push beyond that. CPUs on the older consoles were weak, but there were plenty of games that pushed much further than what console CPUs offered, so I wouldn't put much weight in that.

Oh boy we do. We absolutely do. Lots of aspects of gaming are limited by single-threaded CPU performance, the most important/limiting being game AI. If we literally implemented the current ideas in game AI, most current CPUs would only manage 10fps at best.

Why would AI be limited by single-threaded performance? AMD demonstrated waaaay back with the HD4xxx series, more than a decade ago, that you can accelerate AI on the GPU. Also, even in newer games, when AI comes into play, the usage of the CPU goes up all around, not just on a single thread.
 
How long will current CPUs last for web browsing? I can feel the sluggishness with a Q6600 these days. Dunno if web pages or browsers are getting more demanding, or if it's the OS bloat or something else...
 
@HangTime no disagreements.

But it will still push beyond that. CPUs on the older consoles were weak, but there were plenty of games that pushed much further than what console CPUs offered, so I wouldn't put much weight in that.

Not really. PCs and consoles run the same logic, and the compute workload per frame is generally the same. Last-gen console games were OK being 30fps, so developers had more leeway with game logic, which allowed 60/120fps on PCs with more powerful CPUs. With 60fps now being the minimum accepted fps on consoles, developers generally need to limit their games so that Zen 2 cores can offer 60fps, which is what I'd call adequate gaming. That's 16.6ms per frame; all game AI will be written so that a Zen 2 core can process a frame's worth of compute in 16.6ms, until we get new consoles.
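To make that budget explicit (trivial arithmetic, included only for reference):

```python
# Frame-time budget at each target framerate: game logic (AI, physics, etc.)
# has to finish a frame's worth of work inside this window.
for target_fps in (30, 60, 120):
    budget_ms = 1000 / target_fps
    print(f"{target_fps} fps -> {budget_ms:.1f} ms per frame")
# 30 fps -> 33.3 ms, 60 fps -> 16.7 ms, 120 fps -> 8.3 ms
```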

Why would AI be limited by single-threaded performance? AMD demonstrated waaaay back with the HD4xxx series, more than a decade ago, that you can accelerate AI on the GPU. Also, even in newer games, when AI comes into play, the usage of the CPU goes up all around, not just on a single thread.

You've been able to accelerate AI on the GPU ever since there have been GPUs, just not every type of AI. What works there are high-throughput, latency-insensitive problems such as parameter optimisation or ML model training, generally matrix operations, and these do not happen during gameplay. Game AI runs on the CPU in every single game. Now, if we're talking chess or Go, those are different because the nature of the problem completely changes: it's no longer latency-sensitive and the solutions are matrix operations, so those can run on GPUs, even though inference still happens on CPUs in 99.99% of cases.

Generally, games use 6-8 main threads these days doing the bulk of the compute; most of the AI workload is in one or two, depending on the type of game. The nature of game AI is that it's a sequential problem, so it's very ill-suited to GPU processing, and the added latency of copying data to GPU memory would just make it unfeasible. In the end, game AI is generally not a parallelisable problem, so multithreading doesn't help (that's why adding more cores beyond 6-8 doesn't give you more fps).
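As a toy sketch of that sequential dependency (the World/Agent structure below is invented purely for illustration, not taken from any real engine):

```python
# Toy illustration of why per-frame game AI tends to be sequential:
# each agent's decision depends on the world state already updated by the
# agents processed before it, so the steps can't simply run in parallel.
# The World/Agent structure is a made-up example.
from dataclasses import dataclass, field

@dataclass
class World:
    noise_level: int = 0                       # e.g. gunshots heard this frame
    alerted: list = field(default_factory=list)

@dataclass
class Agent:
    name: str

    def decide(self, world: World) -> None:
        # Decision depends on what earlier agents did *this frame*.
        if world.noise_level > 0:
            world.alerted.append(self.name)
        if self.name == "guard_1":
            world.noise_level += 1             # this agent fires, changing the
                                               # state later agents react to

def run_ai_frame(agents: list, world: World) -> None:
    # Must run in order: agent N reads state written by agents 0..N-1.
    for agent in agents:
        agent.decide(world)

world = World()
run_ai_frame([Agent("guard_1"), Agent("guard_2"), Agent("guard_3")], world)
print(world.alerted)   # ['guard_2', 'guard_3'] - processing order mattered
```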
 
@HangTime With 60fps now being the minimum accepted fps on consoles, developers generally need to limit their games so that Zen 2 cores can offer 60fps, which is what I'd call adequate gaming.
While there have been a fair few games with 60fps performance modes on consoles, that's not going to remain consistent once the hardware is really pushed.

I'd be surprised if the large majority of games developed on Unreal Engine 5 run at 60fps.
 
While there have been a fair few games with 60fps performance modes on consoles, that's not going to remain consistent once the hardware is really pushed.

I'd be surprised if the large majority of games developed on Unreal Engine 5 run at 60fps.

There's definitely a chance that consoles revert to 30fps towards the end of their cycle; in that case, you should also expect a Zen 2 CPU to mirror that on PC as well.
 