"Probably need to wait a couple of months for drivers and stability to settle down too?"
Soon.
Always a good practice, though likely less necessary this time round.
Really struggling with the wait on this.
Really want to upgrade.
I'm sooooo looking forward to it. That level of performance will mean so much to me. Even if it's not quite as fast, I guarantee we'll get more enjoyment than most 9900K owners!
That is what I did: a 2600 and an X470 Strix, hopefully enough for a 3700X.
My question is, I have an ultrawide AW3418DW - will I see any difference between a 2600 and a 3700X if I play at ultrawide resolutions, or will it be bottlenecked by the graphics card 99% of the time?
You're afraid that the top Ryzen 3000 will still be somewhat slower than the i9-9900K?
Well, I think we will see at least two Ryzen 7s and Ryzen 9s which will be faster. If not Ryzen 5s as well.
New Ryzen 5 faster than i9-9900K.
The problem with that is the 3600 is the mid-range CPU, with mid-level silicon as a result. It's very likely the binning by AMD will be pretty accurate and the higher-grade silicon will go to the more expensive parts. If the 3600 does equal or even beat the 9900K, then the better parts are likely to beat it by a wider margin even in single core.
A lot depends on whether the 8-core is a single or dual chiplet CPU.
Let's be honest here, AMD wouldn't be sticking the bad 8c dies on the top end 16-core mainstream processors. The ones with a weak core would likely become the 16c non-X version, and the ones with 8 good cores across the CCXs would be the top end '3850X' or whatever they want to call it.
With a single chiplet there wouldn't be much chance of getting high-clocking core dies.
Then again, four active cores per chiplet would allow using otherwise good dies that contain multiple faulty/weaker cores.
Of course, if there's mostly little variation between the cores of a die, then dies with four high-clocking cores are rarer in the lower bins.
Product segmentation could also affect a lot how high boost clocks are pushed.
Though long-term, AMD could only win by not holding clocks back too much.
Even if a buyer then took only the 8-core CPU instead of a more expensive model, that's always better than losing the sale to Intel.
12-core models should be interesting in that they would get top-bin dies with one or two faulty/weak cores.
Probably need to wait a couple of months for drivers and stability to settle down too?
AMD tend to be a bit sloppy with first-gen drivers but VERY quickly learn which knobs to adjust. You can see it in the graphics drivers too: AMD cards are still seeing improvements versus their at-release Nvidia rivals.
Surely that means NVIDIA get it closer to perfect right at the start, whereas AMD take ages to get the full performance out of their drivers? Rather than getting it fixed very quickly as you suggest?
Not really the case. Nvidia rather revealed their hand with their reaction to the "gimping" debate surrounding Kepler a few years ago, around the time The Witcher 3 came out. Kepler cards performed like absolute arse in that game, leading people to accuse Nvidia of intentionally sabotaging Kepler to encourage upgrades. In response, Nvidia quickly rushed out a "fixed" driver that mysteriously found an extra 10-20% of performance down the back of the sofa for Kepler cards. Of course, the original suggestion that Nvidia were actively "gimping" Kepler wasn't really accurate. There was no active effort to sabotage it - just no active effort to support it either. Nvidia cards benefit hugely from driver tweaks and optimisations, but Nvidia's policy is to essentially drop previous architectures as soon as something new comes along, leading to those cards falling away in terms of performance when they could otherwise still be very competitive with current products. Nvidia drivers aren't some perfect, unchanging thing that "just works" from day one. They require constant, ongoing effort.
Of course, it makes perfectly good business sense to encourage users to upgrade, and it's ultimately up to consumers to decide how they feel about it all. There's also the argument to be made that AMD only support their cards for far longer because they've been recycling the GCN architecture for eight years now, which means they can apply some blanket performance tweaks "for free" to older cards simply through work to support newer ones, despite the architecture obviously having evolved through the years. Even Navi later this year is still going to be GCN-based. Meanwhile, Nvidia have been through Fermi, Kepler, Maxwell, Pascal and now Turing during that time, most of which have far more radical differences between them, making supporting older cards much more costly.
Well yeah, obviously the 9900K's direct competitor, the Ryzen 9, will completely annihilate the 9900K. So will the Ryzen 7. But the Ryzen 5 is where it matters for a true 8c16t comparison, to show that there's more to AMD than high core counts. I really do hope they can do it.