• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

AMD Zen 2 (Ryzen 3000) - *** NO COMPETITOR HINTING ***

We still know nothing about how the memory changes will affect this, right?

I assume it could be similar to last gen, where memory speed was directly linked to performance, but their changes make it seem like they know this and that the benefits of going higher will be muted.

I wonder if they're trying something with HBM and their architecture...
 
https://youtu.be/MkO4R10WNUM?t=252 shows a 16-core chip under a custom loop hitting 4.25 GHz all-core. Maybe they're delaying the 16-core chip to see if they can get higher clocks.

Buildzoid also mentioned that BCLK OC'ing is back on the menu, so there will be more to these chips than bumping up the multiplier.

Excellent, as my 2700X can hit almost 4.5 GHz effortlessly 24/7 with a BCLK boost.
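For anyone unfamiliar with how a BCLK boost stacks with the multiplier, here is a minimal sketch of the arithmetic; the 103 MHz BCLK and 43x multiplier below are illustrative values, not measurements from these chips:

```python
# Minimal sketch: the effective core clock is simply BCLK x multiplier.
# 100 MHz x 42.5 gives the 4.25 GHz all-core figure mentioned above.
# Note that on many platforms raising BCLK also drags other clocks
# (PCIe, SATA) along with it, which is where the risk comes from.

def core_clock_mhz(bclk_mhz: float, multiplier: float) -> float:
    return bclk_mhz * multiplier

print(core_clock_mhz(100.0, 42.5))  # 4250.0 MHz, multiplier-only OC
print(core_clock_mhz(103.0, 43.0))  # 4429.0 MHz, small BCLK bump on top
```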
 
Quite something when you think that just three years back, pondering eight cores or more in the mainstream meant spending £1,400 or £2,000 on something with an Intel logo on it...

Except that we had the 6-core Phenom II X6 1055T back in 2010, and the 8-core FX-8150 one year later in 2011. 8 cores in 2019, really?

If the 8-core ones do 4.5 GHz, the 5 GHz 9900KS will be dead on arrival.

Nope. With equal IPC but lower performance in Intel-favouring software (which is a lot of it), I think you are just being too optimistic.
 
The base clock is only 0.1 GHz different, too. The boost depends on your cooling, single- vs dual-channel memory, and ambient temperature, and you can always make up the difference by manually overclocking the cheaper CPU. There is no need to pay $70 for virtually nothing.

More importantly, it needs to be a better, more efficient chip to start with, and that's what you're paying for. Just like the previous gens.
 
Yeah, stolen from Reddit

Reading through this, with my awareness of who a couple of these people are (yeah.. I get around)...

The takeaway seems to be this:

  • 4.8 GHz is achievable on all cores
  • ~4.4 GHz performs similarly to a 5 GHz 9900K in Cinebench (not surprising)
  • 5.0 GHz is doable, but it's a challenge
  • Overclock for overclock, Ryzen 3000 is still faster (in Cinebench?)
  • A 5 GHz boost isn't infeasible
  • 5 GHz all-core is pretty much a no-go
  • 1.35 V for 4.5 GHz all-core (THIS is amazing)
  • Memory is being run very loose and slow to ensure stability for testing
That's the information from people who actually have samples.

The memory statement is a big one... we've seen that the memory used in the leaks so far has been rated at CL20 or CL22 (or even CL24) by default. If that is what was being used, then memory latency has not regressed from Pinnacle Ridge in any notable way. That would be welcome.
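To put those "loose and slow" timings in perspective: first-word CAS latency in nanoseconds is just the CL count divided by the memory command clock, which is half the transfer rate. A quick sketch, using illustrative kits rather than the confirmed leak configurations:

```python
# First-word CAS latency: a DDR4 transfer rate of R MT/s runs on a
# command clock of R/2 MHz, so latency_ns = CL / (R/2) * 1000
#                                         = CL * 2000 / R.

def cas_latency_ns(cl: int, transfer_rate_mts: int) -> float:
    return cl * 2000 / transfer_rate_mts

for cl, rate in [(14, 3200), (20, 3600), (22, 3200), (24, 3000)]:
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(cl, rate):.2f} ns")
# DDR4-3200 CL14:  8.75 ns  (a typical tuned kit on Pinnacle Ridge)
# DDR4-3600 CL20: 11.11 ns  (the sort of loose timings in the leaks)
```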

I wish the industry would move away from Cinebench. I will await actual game testing, as Cinebench is more suited to predicting encoding performance than gaming.
 
Except that we had the 6-core Phenom II X6 1055T back in 2010, and the 8-core FX-8150 one year later in 2011. 8 cores in 2019, really?

Well ok, I had a 1090T, a great chip, but still a descendant of the old Athlon architecture; it couldn't compete with Lynnfield. I had one of those too, an i7 930. I also had an FX-9590, but I'm not counting it; it's the forgotten AMD family member, or at least it should be...

Nope. With equal IPC but lower performance in Intel-favouring software (which is a lot of it), I think you are just being too optimistic.

I don't quite understand that. Are you saying that even with the jump in IPC, Ryzen 3000 is still only at Coffee Lake IPC levels?
 
I wish the industry would move away from Cinebench. I will await actual game testing, as Cinebench is more suited to predicting encoding performance than gaming.

Not so much a prediction as an actual 2D/3D creation tool, Maxon Cinema 4D, in the same category as Blender, ZBrush, 3ds Max...

It's completely valid, and many aspects of its performance dependencies are similar to games; it has the same sort of FP render workload.
 
Rare? A lot of people play that game. It was that way even in BF3.

Maybe once we have 12-core CPUs, HT/SMT won't matter anymore for a while. Today, nah. If a player with a 6-core without HT suddenly wants to play Battlefield, he/she has no HT to turn on.

I think this video has a hidden agenda. Covering for you know who.

By rare I mean the percentage of games. I don't even know what the game is, as it wasn't named, but if it's a popular game (BF5?) it may be a lot of players; by rare I mean not many games, rather than player count. Even so, the performance effect in that one game is a hard sell for the Intel HT tax, which is pretty big (and there's no effect if you cap at 60 fps like most gamers). But of course it's a non-issue for AMD, as SMT is standard.

I don't know what the agenda would be, and it's actually a moot point on AMD chips as they mostly all have SMT anyway; I perhaps should have posted my comment in an Intel CPU thread instead.
 
I just realised that AMD are trolling us. There is no way in hell the 3700 can become the 3800 with only 100 MHz more and a 60-65% TDP increase.
The 3700 will simply throttle its boost clocks back a lot faster than the 3800 when more threads get loaded.
Just like the current major difference between the 2700 and 2700X, where the 2600 actually beats the 2700 in clocks under 5- to 12-thread loads:
https://www.techpowerup.com/reviews/AMD/Ryzen_5_2600/16.html
https://www.techpowerup.com/reviews/AMD/Ryzen_7_2700/16.html

The higher TDP gives more room for that.
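A toy model of that throttling argument, purely for illustration: if package power scales roughly with active cores times f^3 (voltage rising with frequency), a fixed TDP budget forces the lower-TDP part to shed clocks faster as threads load up. The constants here are made up, not Zen 2 measurements:

```python
# Toy model: power ~ k * cores * f^3, so the highest sustainable clock
# under a TDP budget is the cube root of TDP / (k * cores), capped at
# the chip's single-core boost ceiling fmax. k and fmax are invented
# illustrative constants, not real Zen 2 data.

def sustained_clock_ghz(tdp_w, active_cores, k=0.2, fmax=4.4):
    return min(fmax, (tdp_w / (k * active_cores)) ** (1 / 3))

for cores in (1, 4, 8):
    print(f"{cores} core(s): 65 W -> {sustained_clock_ghz(65, cores):.2f} GHz,"
          f" 105 W -> {sustained_clock_ghz(105, cores):.2f} GHz")
# At 1 core both parts sit at fmax; the gap only opens under all-core load.
```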
 
Cinebench is an excellent benchmark for gaming, as long as you can extrapolate correctly.

It's really not.

Cinebench fully loads all cores, including logical ones.

98% of games do not.

So they're different types of workloads based on that alone.

This is even more apparent for AMD, as XFR adjusts clock speeds based on temperature and power load: if you're fully loading all cores, the clocks will be lower, so Cinebench pushes out different clock speeds on AMD chips than typical games do.

E.g. my 2600X clocks up to 4.15-4.25 GHz in every game I've thrown at it, but sits at 4.05 GHz in Cinebench.
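If you want to reproduce that observation yourself, here is a rough sketch for Linux using psutil (an assumption: it needs to be installed, and the reported clocks are only as good as what the OS exposes). It loads an increasing number of cores and samples the reported core clocks:

```python
# Load N worker processes and watch the reported core clocks drop as
# the thread count approaches an all-core (Cinebench-style) load.
import multiprocessing as mp
import time
import psutil  # pip install psutil

def burn():
    while True:
        pass  # spin to keep one core fully loaded

if __name__ == "__main__":
    for n_workers in (1, 4, 12):  # light game-like load vs all-core load
        procs = [mp.Process(target=burn, daemon=True) for _ in range(n_workers)]
        for p in procs:
            p.start()
        time.sleep(5)  # let boost/XFR settle
        freqs = [f.current for f in psutil.cpu_freq(percpu=True)]
        print(f"{n_workers} workers: max core clock {max(freqs):.0f} MHz")
        for p in procs:
            p.terminate()
```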
 
I wish the industry would move away from Cinebench. I will await actual game testing, as Cinebench is more suited to predicting encoding performance than gaming.

Cinebench is a very good (and fair) indication of CPU performance. A gaming benchmark will be biased towards the CPU maker who sponsored it (the slower chip could have the best score in that situation). The industry needs a 100% unbiased test that doesn't suffer from being poorly optimised.
 