
AMD Zen 2 (Ryzen 3000) - *** NO COMPETITOR HINTING ***

I'm sooooo looking forward to it. That level of performance will mean so much to me. Even if it's not quite as fast, I guarantee we'll get more enjoyment than most 9900K owners :)

Are you afraid that the top Ryzen 3000 will still be somewhat slower than the i9-9900K?
Well, I think we will see at least two Ryzen 7s and Ryzen 9s that will be faster, if not Ryzen 5s as well.

New Ryzen 5 faster than i9-9900K.
 
That is what I did: a 2600 and an X470 Strix, hopefully enough for a 3700X.

My question is: I have an ultrawide AW3418DW. Will I see any difference between a 2600 and a 3700X playing at ultrawide resolutions, or will the graphics card be the bottleneck 99% of the time?
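A rough way to reason about that question: each frame takes as long as the slower of the CPU's work and the GPU's work, and raising resolution mostly raises the GPU's share. A toy model (all the per-frame timings below are invented purely for illustration, not measured figures for these parts):

```python
def fps(cpu_frame_ms: float, gpu_frame_ms: float) -> float:
    """Frame rate is limited by whichever side takes longer per frame."""
    return 1000.0 / max(cpu_frame_ms, gpu_frame_ms)

# Hypothetical numbers: at 3440x1440 the GPU needs ~14 ms/frame, while the
# CPUs need ~9 ms (2600) vs ~7 ms (3700X) of work per frame.
print(fps(9.0, 14.0))  # 2600 at ultrawide: GPU-bound
print(fps(7.0, 14.0))  # 3700X at ultrawide: same fps, the CPU upgrade is invisible
print(fps(9.0, 6.0))   # 2600 at a low resolution: now CPU-bound, the CPU matters
```

In the GPU-bound case both CPUs land on the same frame rate, which is why ultrawide benchmarks often show little difference between mid-range and high-end CPUs.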


Unless you need multiple graphics cards, X470 isn't needed. The MSI Pro Carbon is a brilliant motherboard, and it's B450.

I bought one.
 

Well yeah, obviously the 9900K's direct competitor, the Ryzen 9, will completely annihilate the 9900K. So will the Ryzen 7. But the Ryzen 5 is where it matters, for a true 8c/16t comparison to show that there's more to AMD than high core counts. I really do hope they can do it.
 
The problem with that is the 3600 is the mid-range CPU, with mid-level silicon as a result. It's very likely AMD's binning will be pretty accurate and the higher-grade silicon will go to the more expensive parts. If the 3600 does equal or even beat the 9900K, then the better parts are likely to beat it by a wider margin, even in single-core.
 
A lot depends on whether the 8-core is a single- or dual-chiplet CPU.
With a single chiplet there wouldn't be much chance of getting high-clocking dies.
On the other hand, four active cores per chiplet would allow using otherwise good dies that have multiple faulty or weak cores.
Of course, if there's generally little variation between the cores of a die, then dies with four high-clocking cores will be rarer in the lower bins.
Product segmentation could also have a big effect on how high the boost clocks are pushed.

Long term, though, AMD can only win by not holding clocks back too much.
Even if the buyer then takes only the 8-core CPU instead of a more expensive model, that's still better than losing the sale to Intel.

The 12-core models should be interesting in that they would get top-bin dies with one or two faulty/weak cores.
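The salvage-binning idea above can be sketched as a toy model. Everything here is invented for illustration (the clock distribution, the 3.8 GHz "weak core" threshold, and the bin names are assumptions, not AMD's real binning rules):

```python
import random

def make_die(rng: random.Random) -> list:
    """A hypothetical 8-core chiplet: each core gets a max stable clock in GHz."""
    base = rng.gauss(4.3, 0.2)  # die-level quality varies die to die
    return [base + rng.gauss(0, 0.05) for _ in range(8)]

def bin_die(cores: list, weak_threshold: float = 3.8) -> str:
    """Classify one die roughly the way the post describes:
    - all 8 cores good        -> full 8-core part
    - 6-7 good cores          -> 6-core part (weak cores disabled)
    - 4-5 good cores          -> salvage as half of a dual-chiplet part
    - fewer                   -> scrap
    """
    good = sum(1 for c in cores if c >= weak_threshold)
    if good == 8:
        return "8-core"
    if good >= 6:
        return "6-core"
    if good >= 4:
        return "4-core-salvage"
    return "scrap"

# Simulate a wafer's worth of dies and see how the bins fill up.
rng = random.Random(42)
counts = {}
for _ in range(10_000):
    label = bin_die(make_die(rng))
    counts[label] = counts.get(label, 0) + 1
print(counts)
```

The point of the sketch is the middle branches: a die with one or two weak cores isn't waste, it becomes a lower-core-count or dual-chiplet SKU, which is exactly why the 12-core parts could end up with otherwise top-bin silicon.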
 
Let's be honest here, AMD wouldn't be sticking bad 8-core dies on the top-end 16-core mainstream processors. The ones with a weak core would likely become the 16-core non-X version, and the ones with 8 good cores on each CCX would be the top-end '3850X' or whatever they want to call it.

There may also be some scope for the lower-core-count CPUs to overclock well, with fewer active cores per CCX giving a nice benefit to temperatures and therefore potentially higher OC headroom.

The binning for first- and second-gen Ryzen was pretty on point, and Threadripper a grade up again. I don't think they'll drop the ball for Ryzen 3xxx or Threadripper 3xxx.
 
Probably need to wait a couple of months for drivers and stability to settle down too?

I found the day-one drivers hosted on AMD's website were absolutely golden. The stuff on the included disc was terrible and massively out of date; I ended up needing a (second) full Windows reinstall to sort it.
AMD tend to be a bit sloppy with first-gen drivers but VERY quickly learn which knobs to adjust. You can see it in the graphics drivers too: there are still improvements coming to AMD cards versus their at-release Nvidia rivals.
 

Surely that means NVIDIA get it closer to perfect right from the start, whereas AMD take ages to get the full performance out of their drivers? Rather than fixing things very quickly, as you suggest?

;)
 
Nope. I'd rather suggest that Nvidia cares mostly about the current generation and doesn't work as intensely as AMD on performance optimisations and improvements over the lifecycle of their products. Hence, the AMD ones get faster after a couple of years.
 
And after that, performance degrades. When I got my GTX 1080 in 2016, I saw performance degrade after the September drivers, especially in benchmarks.
That is why my GTX 1080 benchmark scores on this forum, using the August 2016 drivers, were only surpassed by the faster-RAM GTX 1080s made after February 2017.

And only by a couple, at that. The rest couldn't beat my scores at the same speeds or higher.
 
Not really the case. Nvidia rather revealed their hand with their reaction to the "gimping" debate surrounding Kepler a few years ago, around the time The Witcher 3 came out. Kepler cards performed like absolute arse in that game, leading people to accuse Nvidia of intentionally sabotaging Kepler to encourage upgrades. In response, Nvidia quickly rushed out a "fixed" driver that mysteriously found an extra 10-20% of performance down the back of the sofa for Kepler cards. Of course, the original suggestion that Nvidia were actively "gimping" Kepler wasn't really accurate. There was no active effort to sabotage it, just no active effort to support it either. Nvidia cards benefit hugely from driver tweaks and optimisations, but Nvidia's policy is to essentially drop previous architectures as soon as something new comes along, leading to those cards falling away in performance when they could otherwise still be very competitive with current products. Nvidia drivers aren't some perfect, unchanging thing that "just works" from day one. They require constant, ongoing effort.

Of course, it makes perfectly good business sense to encourage users to upgrade, and it's ultimately up to consumers to decide how they feel about it all. There's also the argument to be made that AMD only support their cards for far longer because they've been recycling the GCN architecture for eight years now, which means they can apply some blanket performance tweaks "for free" to older cards simply through work to support newer ones, despite the architecture obviously having evolved through the years. Even Navi later this year is still going to be GCN-based. Meanwhile, Nvidia have been through Fermi, Kepler, Maxwell, Pascal and now Turing during that time, most of which have far more radical differences between them, making supporting older cards much more costly.
 
It's a good explanation.
I would add that Nvidia wouldn't lose anything financially by developing drivers for the older generations. It's the users who lose a lot, and Nvidia tends to lose standing in users' eyes.
 
How can you make statements like this? And annihilate at what? I still think that in games they're going to be behind, but I'll leave that until actual benchmarks happen. Even if they are faster, it won't be annihilation; it will be close performance either way.
 