So would you say an RTX 3070 paired with a Ryzen 7 2700X is why my graphics card handles games like a piece of poo?
Ideally I want to game at 4K, so would a 6700 XT be up to the job? I would happily take the recommendation of the 6600, but after using the 8GB 3070 I've noticed some games are right on the 8GB mark when it comes to VRAM usage!
Buy a console; graphics cards today are a rip-off, period.
A £1200 graphics card plus £1000 for the rest of the PC, and it runs Watch Dogs at 1080p/medium with a 112 FPS average, when £450 consoles run it at 4K/60fps? Not to mention no monitors have good HDR. I'd rather have that than minor texture upgrades and other subtle effects that tank performance for very little visual benefit.
I am quite sure that the CPU utilisation never even approaches anything remotely close to 100%.
The problem lies with Nvidia's software engineers, and maybe Microsoft.
Nice. We just built my bro's PC last week. I didn't fancy telling him to pick up a 5800X.
Hmm, looks like I will stick to my GTX 780, seeing as you're saying it's faster than a 3090.
Hmm, yeah, interesting this. I'm about to make the jump from a 6700K @ 4.5GHz to either a 5800X or 5900X. Wonder if I'll see a decent jump with a 2070S. Mainly hoping for better performance in MSFS.
You don't really need the 5800X with the 2070S; a 5600X will be more than enough, unless you're planning to upgrade to a 3080 at some point with the new CPU.
In general though, I would seriously consider upgrading from the 6700K anyway; it's probably on the limit with most things even with the 2070S.
Thanks. I've spent the last couple of weeks getting up to speed with new hardware, and I agree: for purely gaming below 4K the 5600X looks the pick of the bunch, but I would be looking at upgrading the GPU in a year or so. That's how I've often upgraded in the past. I must admit I'm still pretty pleased with how well it handles the vast majority of my games, and I'm more just satisfying an itch.
It worked out well for Nvidia back when games were very main-thread heavy, a problem with the DX10/11 APIs, which dumped everything onto one thread.
Nvidia got around it with a scheduler built into the driver that split the worker thread into four, first seen with the GTX 700 series. The problem with that is you're using the CPU to do that work.
AMD took a different approach: they built a thread scheduler into the GPU itself, the "ACE" units or Asynchronous Compute Engines, first seen in GCN 1.2 (Polaris), again four of them. The GPU itself is doing all the work.
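To put rough numbers on it, here's a toy Python model I made up purely to illustrate the trade-off - none of these figures come from real drivers or hardware, they're just assumptions:

[code]
# Toy model of driver submission cost per frame. All numbers are made up for illustration.

DRAW_CALLS_PER_FRAME = 2000
CPU_US_PER_CALL = 5.0      # assumed CPU cost to validate/translate one draw call (microseconds)
WORKER_THREADS = 4         # what the software scheduler splits the work across

def single_thread():
    """DX10/11 style: one thread does all the submission work."""
    total_us = DRAW_CALLS_PER_FRAME * CPU_US_PER_CALL
    return total_us, total_us                 # (critical-path time, total CPU time burned)

def software_scheduler():
    """Driver splits the work across worker threads: the critical path shrinks,
    but the CPU still burns the same total time doing it."""
    total_us = DRAW_CALLS_PER_FRAME * CPU_US_PER_CALL
    return total_us / WORKER_THREADS, total_us

def hardware_scheduler():
    """GPU-side scheduling (ACE-style): in this toy model the CPU pays ~nothing."""
    return 0.0, 0.0

for name, fn in [("single thread", single_thread),
                 ("software scheduler", software_scheduler),
                 ("hardware scheduler", hardware_scheduler)]:
    path_us, cpu_us = fn()
    print(f"{name:<19} critical path {path_us / 1000:5.1f} ms, CPU time {cpu_us / 1000:5.1f} ms per frame")
[/code]

The point being: splitting the submission work across worker threads shortens the critical path, but the total CPU time burned per frame doesn't go away, whereas a hardware scheduler takes that work off the CPU entirely.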
Did I read that right, in that it suggests a Ryzen 5 3600 is sufficient if paired with GPUs up to the RTX 3070, after which a 5600X would be a better match? And once an appropriate CPU/GPU match is made, the difference in performance between AMD and Nvidia is fairly minimal?
Yeah you might be right.
Are you serious?
OK, let's ignore the fact that we know the difference in schedulers: what would you suggest is causing the 20-30% higher CPU load on Nvidia GPUs?
They are aware of it; up to this point at least they haven't wanted to create a hardware scheduler. Why that is, only they know, but it would increase the die size and increase power consumption.
And if most people don't know about this, why should they increase their costs and, up until Ampere vs RDNA2 at least, risk their power-efficient GPU reputation? Nvidia only care about how it looks, not what it actually does, and up until now no one has talked about this. Even now, other than Hardware Unboxed, no one is, just like the fact that no one is talking about the extra input latency that DLSS causes.
They wouldn't drop performance; at worst they may not gain performance.
The HUB slides are very deliberate in making the point with the most extreme example. Take the slide in this post: the CPU with the 5700 XT is also running at about 90% of its highest level of performance for that GPU. In this case the 5700 XT is 17% faster than the RTX 3090 because the CPU is actually about 50% slower than the 3090 needs, and only about half of that is down to the software scheduler on the Nvidia card; the other half is just a lack of CPU oomph. Hardware Unboxed have a habit of deliberately exaggerating to make the point. That's not to say it isn't true; it absolutely is. With the 2600X the 3090 is only capable of 93 FPS in that game, in that particular part of it, and from a 3090 you would want a shed load more than that. You would probably get about 150 from a 5600X, because that CPU has the oomph to deal with the software scheduler and the extra headroom needed, though you might get even more with a 5800X.
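Back-of-envelope version of that as a toy Python sketch - the per-frame millisecond figures are assumptions I've picked to roughly echo the HUB numbers, not measurements:

[code]
# Back-of-envelope: frame rate is capped by whichever of CPU or GPU takes longer per frame.
# Every number below is an assumption chosen to roughly echo the HUB scenario, not a measurement.

def fps(cpu_ms, gpu_ms):
    """You only get a new frame once both the CPU and the GPU have finished theirs."""
    return 1000.0 / max(cpu_ms, gpu_ms)

GPU_MS_3090 = 1000.0 / 150       # assume the 3090 could do ~150 FPS here if it were never CPU-bound
DRIVER_OVERHEAD = 0.30           # the ~30% extra CPU load attributed to the software scheduler

GAME_CPU_MS_2600X = 8.0          # assumed game-side CPU work per frame on a 2600X
GAME_CPU_MS_5600X = 5.0          # assumed game-side CPU work per frame on a 5600X

print(f"2600X + 3090: {fps(GAME_CPU_MS_2600X * (1 + DRIVER_OVERHEAD), GPU_MS_3090):.0f} FPS (CPU-bound)")
print(f"5600X + 3090: {fps(GAME_CPU_MS_5600X * (1 + DRIVER_OVERHEAD), GPU_MS_3090):.0f} FPS (GPU-bound)")
[/code]

Same GPU in both cases; the faster CPU simply absorbs the software scheduler's overhead and stops being the thing that caps the frame rate.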
This is a problem for modern gaming because games now lean quite heavily on the CPU, and thank #### for high-core-count, high-IPC CPUs, because without them Nvidia's GPUs are not getting any faster.
As for what the real issue is, if it's not as simple as a lack of will on Nvidia's part, I have no idea, but AMD created a scheduler for the modern age and Nvidia have not yet replicated it. There is such a thing as intellectual property rights.
They might, but do Nvidia want to pay AMD, or Intel for licensing technology?
Leather Jacket Man is proud, and by proud I mean narcissistic.
Even Intel are not that proud.
From previous tests, the Nvidia software driver overhead seems significant.
I remember it well: back when I owned my 290X and Vega 56, many a fanboy's comments were about how much better Nvidia's power delivery was, along with general sniping and sneering. As you say, it's pushing the narrative, loudly and consistently, so that the mindshare absorbs it and dances along like the Pied Piper!