Updated AMD roadmap (rumours)

Soldato
Joined
6 Jun 2008
Posts
11,618
Location
Finland
Zen4 and RDNA3 coming in Q3 2022 sounds a bit late. AMD will get 5nm production capacity before the end of this year, and I doubt they'll sit on it for basically a year without releasing anything, unless they're planning to use it for APUs.
It's an entirely different thing to make lower-clocked phone/tablet parts on a new node than high-performance desktop parts.
(Bad clock scaling on 10nm is the reason Intel is still on 14nm.)

And so far DDR5 latencies are sucktastic, and I don't expect that to change fast.
The first DDR5 modules might well bring a performance regression for many consumer uses because of that.
Also, with servers hogging capacity and bandwidth, I can see initial DDR5 prices being crazy.
 
Associate
Joined
18 Oct 2009
Posts
589
Initially, indeed you do. It always seems to be the same when a new memory standard is released: it sucks in comparison to the previous one.
My rule is never to go for a new standard when it's first released.
 
Soldato
OP
Joined
6 Feb 2019
Posts
17,624
Do I remember correctly that DDR3 was actually better than early DDR4?

Yes it was

If history repeats, DDR5 will be slower than DDR4 for gamers at release, but DDR5 will rapidly improve over time. That makes being an early adopter a double whammy: not only can you get worse performance, but because it's new tech you will most definitely be paying a big premium for it, and because of the pace of innovation your expensive RAM will quickly become worthless (ask anyone who has tried to sell their launch-day 8GB DDR4-2133 kits).
 
Soldato
Joined
30 Jun 2019
Posts
7,875
Seems legit. Wonder if I jumped the gun by getting a 10700KF? Pretty happy with it right now though.

If Warhol exists, I'd have thought it would get support for higher-speed DDR4. 6nm EUV should be a nice upgrade for many, if true.
 
Soldato
Joined
30 Jun 2019
Posts
7,875
Yeah, probably right :D.

It mostly depends on high-end RAM prices really, and how long they take to become affordable.

This old (inaccurate) roadmap is from similar leakers too, I presume:
https://hardforum.b-cdn.net/data/attachment-files/2020/10/390346_EkRhT8IXkAEuDoa.png

Never thought Warhol at 7nm made much sense, with Rembrandt on 6nm EUV...

I'm just wondering if Zen3+ has any meaningful differences, or would it just be a simple BIOS upgrade for people with Zen 3 capable boards...
 
Associate
Joined
27 Dec 2008
Posts
404
It's an entirely different thing to make lower-clocked phone/tablet parts on a new node than high-performance desktop parts.
(Bad clock scaling on 10nm is the reason Intel is still on 14nm.)

And so far DDR5 latencies are sucktastic, and I don't expect that to change fast.
The first DDR5 modules might well bring a performance regression for many consumer uses because of that.
Also, with servers hogging capacity and bandwidth, I can see initial DDR5 prices being crazy.

TSMC's 5nm in its current state is already superior to 7nm in terms of performance and power consumption. Intel's problems came from gambling on new tech that failed, resulting in poor yields and performance. Also I'm sure those bad DDR5 timings that were recently posted on the internet were just early samples; we haven't seen any production DDR5 tested yet with a proper high-performance memory controller. It will most likely still be worse than DDR4, though, as is usually the case with newer memory types, since they focus on bandwidth increases, which normally come at the cost of increased latency.
 
Soldato
Joined
6 Jun 2008
Posts
11,618
Location
Finland
TSMC's 5nm in its current state is already superior to 7nm in terms of performance and power consumption
Also I'm sure those bad DDR5 timings that were recently posted on the internet were just early samples
The only products out on 5nm are low-clock-speed mobile parts, whose numbers don't automatically mean anything for high-clock desktop parts.
In fact, a node that's excellent for them can be horrible for pushing desktop-level clocks.
And it's certain that AMD works very closely with TSMC to know which parts it makes sense to start producing, and when.


Those numbers were from mass-production modules, not some barely functional prototypes.
And that Alder Lake latency test with its ludicrous 110+ ns was even with modules clocked at 6400MHz.
 
Associate
Joined
27 Dec 2008
Posts
404
The only products out on 5nm are low-clock-speed mobile parts, whose numbers don't automatically mean anything for high-clock desktop parts.
In fact, a node that's excellent for them can be horrible for pushing desktop-level clocks.
And it's certain that AMD works very closely with TSMC to know which parts it makes sense to start producing, and when.


Those numbers were from mass-production modules, not some barely functional prototypes.
And that Alder Lake latency test with its ludicrous 110+ ns was even with modules clocked at 6400MHz.

The main reason AMD isn't on 5nm yet is that TSMC's capacity is being taken by Apple for the A14 and M1 SoCs. This will change at the end of the year as TSMC increases capacity and Apple is expected to move to 4/3nm. 5nm isn't a mobile node or only for low-clock parts; the node itself has been in mass production for a year. If AMD somehow managed to shrink Zen3 onto 5nm right now, I guarantee it would have better clocks and power efficiency than on the current 7nm.

Again with the latency, wait for final parts. I remember the same thing with early DDR4: it didn't seem much faster than the fastest DDR3, but the benefits became obvious after a short while, when higher-clocked DDR4 became available for less than slower or equivalently clocked DDR3. If you look at the JEDEC spec, they don't expect DDR5 to be much worse than DDR4 in terms of latency, only a few ns, but you get the benefit of much higher clocks. DDR5 will be capable of hitting 8GHz+, which will bring a significant bandwidth improvement and allow for faster CPUs with more cores and higher IPC. Saying DDR5 will be bad just because of some early test with poor latency is completely the wrong way to look at things.
 
Soldato
OP
Joined
6 Feb 2019
Posts
17,624
The main reason AMD isn't on 5nm yet is that TSMC's capacity is being taken by Apple for the A14 and M1 SoCs. This will change at the end of the year as TSMC increases capacity and Apple is expected to move to 4/3nm. 5nm isn't a mobile node or only for low-clock parts; the node itself has been in mass production for a year. If AMD somehow managed to shrink Zen3 onto 5nm right now, I guarantee it would have better clocks and power efficiency than on the current 7nm.

Again with the latency, wait for final parts. I remember the same thing with early DDR4: it didn't seem much faster than the fastest DDR3, but the benefits became obvious after a short while, when higher-clocked DDR4 became available for less than slower or equivalently clocked DDR3. If you look at the JEDEC spec, they don't expect DDR5 to be much worse than DDR4 in terms of latency, only a few ns, but you get the benefit of much higher clocks. DDR5 will be capable of hitting 8GHz+, which will bring a significant bandwidth improvement and allow for faster CPUs with more cores and higher IPC. Saying DDR5 will be bad just because of some early test with poor latency is completely the wrong way to look at things.


Isn't JEDEC DDR5-5400 CL40 lol (pushing 100ns in AIDA64)? How is that only slightly worse than DDR4?
 
Associate
Joined
27 Dec 2008
Posts
404
Isn't JEDEC DDR5-5400 CL40 lol? How is that only slightly worse than DDR4?

The raw latency isn't much worse than DDR4; it's typically around 3-5 ns more with CL40 at the higher frequencies. JEDEC timings are usually quite loose as well, e.g. the JEDEC spec for 3200MHz DDR4 has CL22 timings.

So that's why I said to wait for final production units, because they could well be capable of much lower timings and latency, similar to DDR4.
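For anyone wanting to sanity-check the "only a few ns" claim, here's a rough back-of-envelope sketch of first-word CAS latency (2000 × CL ÷ data rate in MT/s) for the kits mentioned in this thread. It covers CAS latency only, not the full round-trip figure AIDA64 reports, and the DDR5-4800 CL40 row is an assumed, JEDEC-style bin included purely for illustration.

```python
# First-word CAS latency estimate: the memory clock runs at half the transfer
# rate, so one cycle lasts 2000 / data_rate ns, and CAS latency is CL cycles.
def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    return 2000 * cl / data_rate_mts

kits = [
    ("DDR4-3200 CL22 (JEDEC)", 3200, 22),
    ("DDR5-4800 CL40 (assumed JEDEC-style bin)", 4800, 40),
    ("DDR5-5400 CL40", 5400, 40),
    ("DDR5-6400 CL40", 6400, 40),
]

for name, rate, cl in kits:
    print(f"{name:42s} {cas_latency_ns(rate, cl):6.2f} ns")

# Works out to 13.75, 16.67, 14.81 and 12.50 ns respectively - a few
# nanoseconds either side of DDR4-3200 CL22, not a doubling.
```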
 
Soldato
Joined
6 Jun 2008
Posts
11,618
Location
Finland
Again with the latency, wait for final parts. I remember the same thing with early DDR4: it didn't seem much faster than the fastest DDR3
You're remembering only the marketing.
There were DDR3 modules commonly available that were faster in both clock speed and latency than anything DDR4 offered during its initial time on the market.
Hence it took some time for DDR4 to get to the level of actually offering any benefit instead of a regression.

And while DDR5's clock speed/bandwidth is now higher, the timings have gone backwards by more.
These modules follow the final DDR5 spec, which was delayed and tweaked for many years, during which manufacturers were demoing work-in-progress chips and modules.
Actually, that CL40 at 6400MHz is already faster than the JEDEC spec and still only comparable to a miserable CL20 at 3200MHz (see the quick check below).
And the rest of the timings beyond CAS are probably just as much worse.
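A quick check of that comparison, using the same rough first-word CAS-latency estimate as the sketch earlier in the thread (my arithmetic, not the poster's):

```python
# Same 2000 * CL / data_rate estimate, for the two kits being compared here.
print(2000 * 40 / 6400)  # DDR5-6400 CL40 -> 12.5 ns to first word
print(2000 * 20 / 3200)  # DDR4-3200 CL20 -> 12.5 ns, i.e. identical CAS latency
```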

And try to remember that AMD's ~60ns memory latency is considered very average (compared to Intel's sub-50ns), and AMD had to go for bigger caches etc. to mitigate that.
Now Alder Lake, using those already-overclocked 6400MHz modules, clocked in at 110ns memory latency!
https://www.legitreviews.com/ddr5-6400mhz-memory-benchmarks-shown-on-intel-alder-lake-s_226693
Intel must have their hands full trying to prevent a performance drop in games...

Because higher bandwidth is really a must only in things that continuously move big amounts of data, like video encoding.
Games and typical single/lightly threaded home use are mostly about small burst transfers.
And those don't tolerate high latencies.

DDR5's time will come, but that's still firmly in the future, not now.


And you're very misguided about manufacturing complex CPUs.
It's an entirely different thing to make mobile chips that top out at 3GHz than desktop chips capable of 5GHz, with the difficulty rising exponentially between them.
Such clocks require a very mature process and lots of tuning and tweaking.
If it were easy to get high clocks, Intel wouldn't still be on 14nm!

As another example, console APUs have CPU cores clocked barely over 3½ GHz despite a very mature and well-known node.
 
Soldato
OP
Joined
6 Feb 2019
Posts
17,624
As per Dr Ian Cutress, this is the full list of DDR5 frequencies and timings that have been approved by JEDEC.

Each frequency has three sets of timings; the first set is the "tight" timings, to be used for lower-capacity desktop applications. The looser timings are for high-capacity server kits.

Anything outside of these numbers is considered overclocking. The reference range therefore starts as low as 3200 CL22 and goes as high as 6400 CL56.

DDR5 has a theoretical frequency soft limit of 8400MHz, so it's expected that in time JEDEC will approve new frequencies above 6400MHz.
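To put the endpoints of that approved range in perspective, here's a rough sketch using only the figures quoted above; the per-DIMM bandwidth assumes the usual 64-bit data width, and the latency figure is the simple 2000 × CL ÷ data-rate estimate rather than a measured, AIDA64-style result.

```python
# Endpoints of the quoted JEDEC reference range: first-word CAS latency and
# peak per-DIMM bandwidth (64-bit data width, i.e. 8 bytes per transfer).
def cas_latency_ns(data_rate_mts, cl):
    return 2000 * cl / data_rate_mts      # CL cycles at 2000 / data_rate ns each

def peak_bandwidth_gbs(data_rate_mts):
    return data_rate_mts * 8 / 1000       # MT/s * 8 bytes -> GB/s

for name, rate, cl in [("DDR5-3200 CL22", 3200, 22),
                       ("DDR5-6400 CL56", 6400, 56)]:
    print(f"{name}: {cas_latency_ns(rate, cl):.2f} ns CAS, "
          f"{peak_bandwidth_gbs(rate):.1f} GB/s peak")

# DDR5-3200 CL22: 13.75 ns, 25.6 GB/s; DDR5-6400 CL56: 17.50 ns, 51.2 GB/s.
# The 8400 MT/s soft limit would work out to 8400 * 8 / 1000 = 67.2 GB/s per DIMM.
```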

 
Soldato
Joined
30 Jun 2019
Posts
7,875
I think DDR5 will be adopted across all motherboards in 2023-2024... That's when Intel will design a new microarchitecture for 7nm EUV CPUs.

I think Raptor Lake (or whatever odd name Intel gives Alder Lake's successor) will be a refresh of Alder Lake, maybe with an improved version of Intel's 10nm process. I don't think this will arrive until early 2023, since Alder Lake won't be available to buy until the first half of 2022.
 
Associate
Joined
9 Jul 2009
Posts
1,008
First I've heard of TSMC 6nm; does anybody know any info about it? I'm assuming it's just a tweaked 7nm that they're calling 6nm to differentiate it, but it will be interesting to see how Zen3+ shapes up on it. A slight clock bump across the board?
 
Associate
Joined
27 Mar 2010
Posts
1,468
Location
Denmark
First I've heard of TSMC 6nm; does anybody know any info about it? I'm assuming it's just a tweaked 7nm that they're calling 6nm to differentiate it, but it will be interesting to see how Zen3+ shapes up on it. A slight clock bump across the board?
We don't know much yet, except that TSMC's 6nm process began mass production in August 2020. It uses extreme ultraviolet lithography (EUV) for up to 5 layers (5nm will use many more) to further increase the density of the chip, so we basically get faster CPUs.
 
Associate
Joined
27 Dec 2008
Posts
404
You're remembering only the marketing.
There were DDR3 modules commonly available that were faster in both clock speed and latency than anything DDR4 offered during its initial time on the market.
Hence it took some time for DDR4 to get to the level of actually offering any benefit instead of a regression.

That's essentially what I said. DDR4 was overpriced on release, but after a short time you could get DDR4 that was higher capacity and faster than equivalently priced DDR3. It will be the same with DDR5: by the time Zen4 comes out next year, we'll probably be able to get 32GB of 5000MHz DDR5 for less than 16GB of 3600MHz DDR4.

Because higher bandwidth is really a must only in things that continuously move big amounts of data, like video encoding.
Games and typical single/lightly threaded home use are mostly about small burst transfers.
And those don't tolerate high latencies.

Higher latency is of course expected, but it is nowhere near as bad as you're trying to make it out to be. Higher bandwidth will enable higher-IPC cores, and that will provide a net gain in gaming performance despite possibly higher memory latencies. Zen4 is expected to bring another 30-40% increase in IPC over Zen3, which is huge.

And you're very misguided about manufacturing complex CPUs.
It's an entirely different thing to make mobile chips that top out at 3GHz than desktop chips capable of 5GHz, with the difficulty rising exponentially between them.
Such clocks require a very mature process and lots of tuning and tweaking.
If it were easy to get high clocks, Intel wouldn't still be on 14nm!

You're repeating this like a broken record, but you do know that 5nm has been ready for mass production since the end of 2019? In comparison, 7nm was ready in early/mid 2018 and was used by AMD for the Radeon VII, a high-clocking GPU, at the end of 2018, and finally in mid-2019 it was used for Zen2, which could clock to 4.7GHz.

If TSMC's 7nm could already produce an 8-core chiplet clocking to 4.7GHz only a year after mass manufacturing started on that node, then 5nm should by all means be able to do the same right now, especially as TSMC said multiple times last year that 5nm was ahead of 7nm in terms of yield at the same point in their cycles.

Where Intel flopped was being late to EUV and betting on cobalt interconnect technology that didn't work out, whereas TSMC bet early on EUV and didn't use cobalt interconnects, so they didn't run into the same problems as Intel. Plus you have to consider that Intel's 14nm is better than TSMC's 10nm in terms of performance, so finding clock-speed gains while also increasing density would always have been hard for them without more advanced tech like EUV and better interconnects.
 