Discussion in 'CPUs' started by almoststew1990, May 28, 2019.
Yeah, that article says Intel is the underdog too
Yeah, this is what I've noticed too - not necessarily running bucketloads faster in games, but much smoother overall.
Last weekend I replaced my 4770K with a Z390 MEG Ace and a 9700K. It's paired with a 2070 Super, which I bought last year
You seeing good performance gains upgrading?
Definitely. In Shadow of the Tomb Raider with ray tracing on high (1440p) I'm getting 54 fps. I used to get motion sickness playing this, but it stopped after the upgrade
What % improvements are you seeing roughly though? Like what FPS did Tomb Raider used to run before you upgraded?
As cjdavid says above, I found a big upgrade going from a 3770K to a modern processor (a Ryzen 7 3800X). It wasn't so much the max or avg FPS that made the difference, but the change in the 1% FPS lows that you can 'feel'. You want to be GPU-bound: on your current CPU the GPU might show in the 85-100% range; change to a modern processor and GPU load will be consistently 98-100%.
I was skeptical myself, and judging this thread purely on MAX and avg FPS results, the uplift doesn't look worth it. But I changed to a modern processor, and I'd guess a fair amount of the performance is down to the RAM too. It really gives a boost to how your current GPU works: much smoother, an experience you can 'feel'. If your current CPU is anywhere near 50% across the cores when gaming, then it's too busy to feed instructions to the GPU regularly enough to feel smooth.
You may think a 10% increase in max fps isn't worth the cost of a platform upgrade, but it's the 1% lows that make the difference.
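To make the "1% lows" point concrete, here's a minimal sketch (not from the thread, with made-up illustrative frametimes) of how average FPS and 1% low FPS are typically derived from per-frame frametimes:

```python
def fps_stats(frametimes_ms):
    """Return (average fps, 1% low fps) from a list of frametimes in ms."""
    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    # 1% low: average FPS over the slowest 1% of frames
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, len(worst) // 100)
    one_percent_low = 1000.0 / (sum(worst[:n]) / n)
    return avg_fps, one_percent_low

# A mostly smooth ~60 fps capture with a handful of 50 ms stutter spikes:
frames = [16.7] * 990 + [50.0] * 10
avg, low = fps_stats(frames)
# avg stays near 59 fps, but the 1% low collapses to 20 fps;
# that gap is the stutter you "feel" even when the average looks fine.
```

This is why two CPUs with near-identical average FPS can feel completely different in play.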
This is what is tempting me to upgrade too. I don't necessarily need better max/average FPS, but I want to smooth out some of the stutters I'm getting from the 1% lows. It's not just the CPU change but the whole platform shift as well; I'd be going from 1866 DDR3 to 3600 DDR4, PCIe 2 to PCIe 3, and SATA 3 to NVMe, all while adding an extra 2/4 cores and more threads at the same time...
I've found this as well. Some games have had a decent increase in frame rate, but for others it has been marginal. Overall, however, everything just seems a lot smoother and there is much less slowdown. Also, by the time I'd factored in selling my old CPU, board and RAM, it only cost me about £200 in total to upgrade.
I think my 2600K is nearing the end.
I'm eyeing up one of these new Ryzen 1600 AF chips, but for the life of me I can't see that I'll get a huge improvement in performance chip for chip;
it's going to come from the change to DDR4 and an NVMe drive rather than the CPU being any better.
That said, I might just hold off for another couple of months, get an 8-core/16-thread part instead, and see if I can get the same sort of lifespan the current Sandy has enjoyed.
I'm in the same boat.
Had my 2600K and P67 mobo for nine years now, running a mild 4GHz all-core overclock.
Managed to fry my motherboard the other week, so I've now cannibalised an H77 from elsewhere, which means I can't even overclock the CPU anymore.
I'm running a 1080Ti on an ultrawide and the CPU is starting to become a serious bottleneck in most games, with very few pushing the GPU past around 60% before the CPU tops out.
I know I need to upgrade pronto but I'm hanging on for Comet Lake. I've discounted Ryzen (don't go there fanboys, I'm not going to change my mind) as Intel are still faster for gaming (which is the only demanding task I run) and the fans on the X570 boards are a major blocker for me too.
I know Comet Lake isn't going to offer much benefit over the current 9th gen but it just seems daft to upgrade now when the next gen is only a couple of months away (hopefully).
It's a tough one. The Sandy chips have been legendary; as you say, 8/9 years on and they still cut the mustard. I'm fortunate enough that mine is currently sitting at 4.8GHz under water, so it isn't really a major bottleneck yet, but when you start looking at £85 CPUs and seeing them beat yours, it's time to look at moving on.
Can't bear to part with it though; it might just have to get moved under the stairs with a pile of storage attached as a Plex server.
I still have my 2700K paired with a GTX 1060. At 4.5GHz it is about as fast as my R5 1600 at 3.9GHz.
Yeah, I play Borderlands 3 with some friends who both have 8700Ks, and they're averaging around 25% CPU usage when I'm at 90%+. Cinebench shows theirs at literally 4x the raw performance of mine (at stock).
Dunno what I'm gonna do with mine; frame it and hang it on the wall maybe (I'm serious). Already have a Dell PowerEdge in the garage running Plex.
Max fps shouldn't even be mentioned anywhere.
And average isn't that useful without including the amount of variation, especially at the lower end.
Years ago on a Finnish PC forum, a few users who had both a 6-core previous-gen workstation platform and a "faster" 4-core Skylake said that in more demanding games the 6-core CPU gave smoother gameplay.
No doubt because of "fewer valleys between the high spots".
There's only a certain amount of memory speed any particular CPU can use to full effect.
Any CPU architecture is designed to work with a certain memory bandwidth, to avoid spending resources uselessly.
Execution units capable of a lot more are simply wasted resources and power consumption if they can't be fed with instructions and data.
A CPU is a very complex thing: lots of things have to be in balance, and performance increases now demand improvements in many more areas at once.
That's why increases in the transistor budget haven't brought the same automatic performance improvements they did 20 years ago:
after a certain point it simply became harder and harder to improve the execution units.
Improved caching, prefetching etc. are also always needed to get more out of the execution units,
because RAM is always so much slower than the CPU's internal operation.
Where memory bandwidth helps automatically is with an increasing number of simultaneous worker threads, meaning core count.
(That's why GPUs need massive memory bandwidth, even at the expense of some latency.)
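As a back-of-envelope illustration of the bandwidth side of this (not from the thread; it assumes the DDR3-1866 to DDR4-3600 jump mentioned earlier, dual channel, and a 64-bit bus per channel):

```python
def peak_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    """Theoretical peak in GB/s: transfers/s x bytes/transfer x channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

ddr3 = peak_bandwidth_gbs(1866)  # ~29.9 GB/s
ddr4 = peak_bandwidth_gbs(3600)  # ~57.6 GB/s
# Roughly 1.9x the theoretical peak: useful headroom for more cores,
# though real gains depend on latency and how bandwidth-bound the game is.
```

Peak numbers like these are upper bounds; actual sustained bandwidth is lower and latency matters at least as much for gaming.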
Again, mass storage speed only truly comes into the equation when working memory runs out.
During gameplay the CPU handles data only from RAM.
And NVMe is no substitute for enough RAM, because flash memory is orders of magnitude slower than DRAM.
In fact NVMe brings very little improvement over SATA SSDs to game, or even Windows, loading times, because there's so much else going on besides raw drive accesses.
The only next gen we have any release information on is Zen 3.
Comedy Lake is just another rebrand of 2015's 6th-gen Skylake, on the equally many-times-rebranded 14nm++++++ node.
Intel's real next gen has been in development hell for years, along with the 10nm node.
You'll then also have the usual limited availability of the high-end models for the first few months. It could be the better part of a year before you can get your hands on a 10700/10900, so you should consider whether you'd rather upgrade now to a 9700/9900 and get more enjoyment out of the games you're currently playing.
The next generation won't offer anything significant other than two extra cores on the top-end model. By the time that actually makes a difference in gaming, I'd say it's a fair bet that you'll want to upgrade to a platform with DDR5, PCIe 4, etc., and a much more efficient 10+ core CPU.
I am truly starting to believe that the various older products, including the Xeon replacement for the i7 920 I'm running, are still competitive and have lasted so long in games purely because Intel hasn't pushed the envelope at all.
I feel that AMD creating the products for the next generation of consoles is actually what has brought us forward. Intel was happy to plod along doing nothing; without innovation the gaming industry didn't really change, and any upgrades it forced were purely on the basis of GPU updates, when SO MUCH MORE was possible.
Intel need to be punished badly for this. I'd suggest steering away from any Intel chip until a couple of generations' time, when they've created something innovative rather than reactive and actually generated some competition.
I had a Xeon E3 1230 V2 / Core i7 3770 myself, and due to a couple of problems I upgraded earlier than I wanted, in late 2018, and got a Ryzen 5 2600. One of the worst games for Ryzen is Fallout (especially if modded, which my game was), but the difference was far more noticeable than some reviews led me to believe. It wasn't just the minimums that went up much higher, it was the frametime consistency... and that was on a SATA SSD too; the NVMe SSD I got later made things a bit better still.
On the face of it, the differences are there but don't look major. But when you look at the frametimes, it's a different story.
The Xeon E3 1230 V2 would cause stutters if I panned quickly, but the Ryzen 5 2600 was much smoother with less stuttering. The data recorded was from walking through the two areas on a defined path, but as I said, quick movements were much better on the Ryzen 5 2600.
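A simple way to quantify that kind of panning stutter from a frametime trace: count frames that take more than twice the median frametime. This is a rough common heuristic, sketched here with made-up numbers rather than the poster's actual data:

```python
from statistics import median

def count_stutters(frametimes_ms, factor=2.0):
    """Count frames whose frametime exceeds factor x the median frametime."""
    m = median(frametimes_ms)
    return sum(1 for ft in frametimes_ms if ft > factor * m)

# A trace that averages fine but has a burst of spikes during a quick pan:
trace = [16.7] * 500 + [45.0] * 8 + [16.7] * 492
print(count_stutters(trace))  # 8 spikes flagged
```

Comparing this count between two CPUs on the same walk path captures the "smoother with less stuttering" difference better than average FPS does.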
I think part of the problem is that all the new mitigations have affected aspects of I/O etc., which Ryzen isn't as badly affected by.
Well, just look above at what I saw in a game which is poor for Ryzen. This is a game which can only use up to 6 threads, and prioritises the first two, so it surprised me TBH!!