Apple M1 Pro and M1 Max

Soldato
Joined
6 Oct 2009
Posts
3,998
Location
London
Apple announced the M1 Pro and M1 Max: same CPU, but different GPU and I/O:
  • Same uarch as the M1 (i.e. A14-based, not A15)
  • 5nm (unclear if N5 or N5P)
  • Likely higher frequencies
  • 8 performance cores
  • 2 efficiency cores
  • Up to 64GB RAM
  • Display support for three 6K displays plus another 4K (as well as the laptop screen)
  • 3 Thunderbolt 4 ports
  • PCIe 4.0 SSD
Apple's own charts, so take them with a grain of salt until benchmarks are out:
[three chart images from Apple's presentation]
 
Soldato
Joined
28 May 2007
Posts
18,257
Did you want to post this in the laptop section?

I heard this was going to be disruptive. Looks as if Apple delivered. Very likely my next notebook.
 
Soldato
Joined
6 Feb 2019
Posts
17,589
As for not trusting Apple charts, put it this way: when Apple announced the iPhone 13 in September they provided numbers for CPU and GPU performance, and those numbers ended up understating reality. Once reviewers did their tests, the figures they got were 10 to 20% higher than Apple claimed. So it's unlikely that Apple will overestimate here; Apple tends to quote conservative numbers.


Now going by those charts it's very impressive. They've improved multithreaded CPU performance by around 70%, which means it will be faster than any 8-core x86 laptop that money can buy; nothing from Intel or AMD is faster. And the GPU perf/watt is crazy good. The new MacBooks will once again be the battery life kings, lasting many hours longer between charges than x86 laptops.
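
If you want rough numbers on that 70% claim, here's a quick sketch in Python. The M1 baseline and the x86 laptop scores are assumed ballpark Geekbench 5 figures (not anything official), so treat the output as illustrative only:

```python
# Rough sanity check of the "~70% faster multithreaded" claim using Geekbench 5
# multi-core scores as a stand-in. All baseline figures below are assumed
# ballpark numbers, not measurements from this thread.

M1_MULTI = 7500          # assumed typical M1 (4P+4E) GB5 multi-core score
CLAIMED_UPLIFT = 0.70    # Apple's claimed multithreaded CPU improvement

# Assumed ballpark scores for current 8-core x86 laptop chips, for comparison
X86_8_CORE_LAPTOP = {
    "Ryzen 9 5900HX": 8000,
    "Core i9-11980HK": 9500,
}

projected = M1_MULTI * (1 + CLAIMED_UPLIFT)
print(f"Projected M1 Pro/Max multi-core: ~{projected:.0f}")
for name, score in X86_8_CORE_LAPTOP.items():
    print(f"  vs {name} ({score}): {projected / score:.2f}x")
```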
 
Soldato
OP
Joined
6 Oct 2009
Posts
3,998
Location
London
Did you want to post this in the laptop section?

I heard this was going to be disruptive. Looks as if Apple delivered.

Considered it, but these will likely show up in desktops as well, like the M1 did. Even though announced with laptops, it's not solely a laptop part.

As for not trusting Apple charts, put it this way: when Apple announced the iPhone 13 in September they provided numbers for CPU and GPU performance, and those numbers ended up understating reality. Once reviewers did their tests, the figures they got were 10 to 20% higher than Apple claimed. So it's unlikely that Apple will overestimate here.

They are usually conservative with performance claims, but I had to post the obvious disclaimer that they're Apple's own charts.
 
Associate
Joined
1 Jun 2019
Posts
449
They claim the M1 Max is 2x faster than their last 8-core MacBook Pro in Xcode compilation, which is what I have, and 2x the rendering speed in DaVinci compared to a Mac with a 5600M.

I'll try and get it replaced for work if I can. No more Intel to make it go whirrrrrrr. I know Apple are a slick marketing team, but when it comes to spending company money, I'm all for it.
 
Soldato
OP
Joined
6 Oct 2009
Posts
3,998
Location
London
They claim the M1 Max is 2x faster than their last 8-core MacBook Pro in Xcode compilation, which is what I have, and 2x the rendering speed in DaVinci compared to a Mac with a 5600M.

I'll try and get it replaced for work if I can. No more Intel to make it go whirrrrrrr. I know Apple are a slick marketing team, but when it comes to spending company money, I'm all for it.

The M1 was already beating the 8-core i9 Intel Macs at code compilation, even on independent compilation tasks that maximise multithreading. These M1 Pro/Max ones will just be on a whole new level.
 
Soldato
Joined
28 May 2007
Posts
18,257
Considered it, but these will likely show up in desktops as well, like the M1 did. Even though announced with laptops, it's not solely a laptop part.



They are usually conservative with performance claims, but I had to post the obvious disclaimer that they're Apple's own charts.

I don't think this flavour of chip will be in the desktop Macs personally.
 
Soldato
Joined
28 May 2007
Posts
18,257
They claim the M1 Max is 2x faster than their last 8-core MacBook Pro in Xcode compilation, which is what I have, and 2x the rendering speed in DaVinci compared to a Mac with a 5600M.

I'll try and get it replaced for work if I can. No more Intel to make it go whirrrrrrr. I know Apple are a slick marketing team, but when it comes to spending company money, I'm all for it.

TBF these laptops are a lot of bang for buck. The GPU performance is mental.
 
Soldato
Joined
10 Apr 2013
Posts
3,745
The M1 was already beating the 8-core i9 Intel Macs at code compilation, even on independent compilation tasks that maximise multithreading. These M1 Pro/Max ones will just be on a whole new level.

Assuming performance scales, doubling the number of performance cores would mean these M1 Pro/Max chips are potentially faster than any of Intel's and AMD's top-end x86 i7/i9 and Ryzen 59xx series chips, which is an impressive feat. It'll be interesting to see how the benchmarks play out in real-world testing.

https://appleinsider.com/articles/2...-than-m1-in-supposed-benchmark?utm_medium=rss
"single-core score of 1749 and a multi-core score of 11542" (unconfirmed Geekbench 5)

Edit:

Maybe not quite as impressive as I'd expected based on Apple's claims, but still seriously good for a mobile chip.
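
To see why the leaked score looks a touch lower than the headline claims, here's a rough sketch in Python comparing it against a naive perfect-scaling ceiling; the M1 baseline and the E-core weighting are my own assumptions, only the leaked numbers come from the link above:

```python
# Quick look at the leaked (unconfirmed) Geekbench 5 numbers quoted above,
# compared against a naive perfect-scaling ceiling. The M1 baseline and the
# E-core weighting are assumptions for illustration.

LEAKED_SINGLE = 1749
LEAKED_MULTI = 11542

P_CORES, E_CORES = 8, 2
E_CORE_WEIGHT = 0.3      # assumption: one efficiency core ~30% of a P core
M1_MULTI = 7500          # assumed typical M1 (4P+4E) multi-core score

# Ceiling if every core contributed its full share with no shared-resource loss
ideal_multi = LEAKED_SINGLE * (P_CORES + E_CORES * E_CORE_WEIGHT)

print(f"Perfect-scaling ceiling: ~{ideal_multi:.0f}")
print(f"Leaked multi-core reaches {LEAKED_MULTI / ideal_multi:.0%} of that ceiling")
print(f"Uplift over the assumed M1 score: {LEAKED_MULTI / M1_MULTI - 1:.0%}")
```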
 
Soldato
Joined
9 Nov 2009
Posts
24,841
Location
Planet Earth
Yet ultimately most of the market is still going to be made up of lower-performance products, because what Apple is doing looks a very expensive way of doing things. With the SoC being 432mm² for the M1 Max (with twice the transistors of the GA102) and 245mm² for the M1 Pro, both on 5nm, the production costs are going to be insane (and yields on the bigger SoC are probably not great). The SoCs also use a ton of expensive LPDDR5 memory, literally soldered next to the SoC. This is what happens when you make massive dies and throw transistors at the problem on a cutting-edge process node. The question is how many more years this is going to be viable for.

The issue is that Apple is relying on jumping onto new nodes as quickly as they can, and if there is any hiccup they are going to be affected worse than many of their competitors.

Both AMD and Intel have progressed further along the chiplet/heterogeneous-node manufacturing route (especially as they are far more experienced in packaging), because relying on new nodes (and chucking tons of transistors at the problem) is going to become harder and harder as the shrinks get harder too. It's why GPU chiplets are going to be a thing soon, and even why AMD went that way with their CPUs. A Ryzen 9 5950X, for example, is made of two 80mm² 7nm chiplets and a 125mm² I/O die on an ancient 12nm/14nm node, using cheap DDR4. AMD has also proven that with 3D packaging they can get notable performance and efficiency improvements by simply stacking more chiplets on top (which is cheaper than making even bigger chiplets).

It's why Intel Lakefield was more notable for how it was made than for the final product, and things like delinking production from process nodes are increasingly going to be important. So is putting R&D into low-power connectivity (the I/O fabric) between the various parts. AMD and companies like Fujitsu have put a lot of effort into power reduction in that area.

Yet if you look at both Zen 2 and RDNA1, AMD managed to get decent gains on the same node, and Nvidia did the same. I really want to see how Apple does if they end up having to stay on a node for more than one generation (they had to once and it wasn't pretty, IIRC). ATM it seems more a case of chucking more and more transistors and die area at the problem (and using exotic memory standards).
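
To illustrate the yield point with some toy numbers, here's a rough sketch in Python using a simple Poisson defect model; the defect densities are assumed values, only the die areas come from the figures above:

```python
import math

# Very rough illustration of the yield argument above: one big monolithic die
# vs the 5950X-style chiplet split described in the post. The defect densities
# and the simple Poisson yield model are assumptions for illustration only.

def poisson_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies expected to have zero random defects."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D0_LEADING = 0.10   # assumed defects/cm^2 on a leading-edge (5nm/7nm-class) node
D0_MATURE = 0.05    # assumed defects/cm^2 on a mature 12/14nm-class node

# Monolithic M1 Max-sized die (area figure from the post)
print(f"432 mm^2 monolithic die: {poisson_yield(432, D0_LEADING):.0%} yield")

# 5950X-style split (figures from the post): two 80 mm^2 CCDs + a 125 mm^2 IOD
ccd = poisson_yield(80, D0_LEADING)
iod = poisson_yield(125, D0_MATURE)
print(f"80 mm^2 CCD: {ccd:.0%} yield, 125 mm^2 IOD: {iod:.0%} yield")
```

The other half of the argument is that small chiplets can be tested and binned before packaging, so defective ones never end up in a finished product.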
 
Soldato
Joined
28 May 2007
Posts
18,257
Assuming performance scales, doubling the number of performance cores would mean these M1 Pro/Max chips are potentially faster than any of Intel's and AMD's top-end x86 i7/i9 and Ryzen 59xx series chips, which is an impressive feat. It'll be interesting to see how the benchmarks play out in real-world testing.

https://appleinsider.com/articles/2...-than-m1-in-supposed-benchmark?utm_medium=rss
"single-core score of 1749 and a multi-core score of 11542" (unconfirmed)

Edit:

Maybe not quite as impressive as I'd expected based on Apple's claims, but still seriously good for a mobile chip.

Should beat everything at 60 watts.

The performance jump in mobile GPU power is impressive. A quick search says this beats a GeForce RTX 3080 Mobile.

Yet ultimately most of the market is still going to be made up of lower-performance products, because what Apple is doing looks a very expensive way of doing things. With the SoC being 432mm² for the M1 Max (with twice the transistors of the GA102) and 245mm² for the M1 Pro, both on 5nm, the production costs are going to be insane (and yields on the bigger SoC are probably not great). The SoCs also use a ton of expensive LPDDR5 memory, literally soldered next to the SoC. This is what happens when you make massive dies and throw transistors at the problem on a cutting-edge process node. The question is how many more years this is going to be viable for.

The issue is that Apple is relying on jumping onto new nodes as quickly as they can, and if there is any hiccup they are going to be affected worse than many of their competitors.

Both AMD and Intel have progressed further along the chiplet/heterogeneous-node manufacturing route (especially as they are far more experienced in packaging), because relying on new nodes (and chucking tons of transistors at the problem) is going to become harder and harder as the shrinks get harder too. It's why GPU chiplets are going to be a thing soon, and even why AMD went that way with their CPUs. A Ryzen 9 5950X, for example, is made of two 80mm² 7nm chiplets and a 125mm² I/O die on an ancient 12nm/14nm node, using cheap DDR4. AMD has also proven that with 3D packaging they can get notable performance and efficiency improvements by simply stacking more chiplets on top (which is cheaper than making even bigger chiplets).

It's why Intel Lakefield was more notable for how it was made than for the final product, and things like delinking production from process nodes are increasingly going to be important. So is putting R&D into low-power connectivity (the I/O fabric) between the various parts. AMD and companies like Fujitsu have put a lot of effort into power reduction in that area.

Yet if you look at both Zen 2 and RDNA1, AMD managed to get decent gains on the same node, and Nvidia did the same. I really want to see how Apple does if they end up having to stay on a node for more than one generation (they had to once and it wasn't pretty, IIRC). ATM it seems more a case of chucking more and more transistors and die area at the problem (and using exotic memory standards).

The RTX 3080 Mobile die is 392 mm² alone. 432 mm² for the complete Apple APU is very impressive.
 
Soldato
OP
Joined
6 Oct 2009
Posts
3,998
Location
London
Yet ultimately most of the market is still going to be made up of lower-performance products, because what Apple is doing looks a very expensive way of doing things. With the SoC being 432mm² for the M1 Max (with twice the transistors of the GA102) and 245mm² for the M1 Pro, both on 5nm, the production costs are going to be insane (and yields on the bigger SoC are probably not great). The SoCs also use a ton of expensive LPDDR5 memory, literally soldered next to the SoC. This is what happens when you make massive dies and throw transistors at the problem on a cutting-edge process node. The question is how many more years this is going to be viable for.

The issue is that Apple is relying on jumping onto new nodes as quickly as they can, and if there is any hiccup they are going to be affected worse than many of their competitors.

Both AMD and Intel have progressed further along the chiplet/heterogeneous-node manufacturing route (especially as they are far more experienced in packaging), because relying on new nodes (and chucking tons of transistors at the problem) is going to become harder and harder as the shrinks get harder too. It's why GPU chiplets are going to be a thing soon, and even why AMD went that way with their CPUs. A Ryzen 9 5950X, for example, is made of two 80mm² 7nm chiplets and a 125mm² I/O die on an ancient 12nm/14nm node, using cheap DDR4. AMD has also proven that with 3D packaging they can get notable performance and efficiency improvements by simply stacking more chiplets on top (which is cheaper than making even bigger chiplets).

It's why Intel Lakefield was more notable for how it was made than for the final product, and things like delinking production from process nodes are increasingly going to be important. So is putting R&D into low-power connectivity (the I/O fabric) between the various parts. AMD and companies like Fujitsu have put a lot of effort into power reduction in that area.

Yet if you look at both Zen 2 and RDNA1, AMD managed to get decent gains on the same node, and Nvidia did the same. I really want to see how Apple does if they end up having to stay on a node for more than one generation (they had to once and it wasn't pretty, IIRC). ATM it seems more a case of chucking more and more transistors and die area at the problem (and using exotic memory standards).

You use what's available. Apple is in a position to make huge dies and absorb the cost because that's still cheaper than buying chips from Intel/AMD. If they can afford it, they're going to use the die space to improve the product. Apple doesn't need to make a profit on the chip itself; it's the final product that they sell. Intel's and AMD's priorities are very different: they have to make their margins on the chips and sell them at a profit, then the manufacturer has to put the CPU into a device and still sell that at a profit on top. This gives Apple a lot of flexibility in their microarchitecture designs.

If/when node improvements stop, they may not be able to pack in more transistors to make the cores wider as they've been doing in recent years, and they can't run them faster, so they'll have to come up with new ideas. Intel certainly hasn't managed that well, but AMD has recently. This year (A14 -> A15) Apple didn't add a meaningful number of transistors to the A15 P-cores, and still managed to improve efficiency by 20% despite increasing frequency on essentially the same node (N5 -> N5P is a minor improvement).

The performance jump in mobile GPU power is impressive. A quick search says this beats a GeForce RTX 3080 Mobile.

I don't think it's going to beat RTX 3080 Mobile. That's way too optimistic.
 
Associate
Joined
31 Dec 2011
Posts
815
Looking to see if more 32GB configurations come out. It's currently over £3k for the M1X chip on the 16-inch with 32GB. I'll be looking to pay around £2.5k for a 32GB machine sometime next year. Currently running a 6-core Intel MacBook Pro and a 4-core MacBook Air, so I'm happy to wait it out.
 
Soldato
Joined
9 Nov 2009
Posts
24,841
Location
Planet Earth
The RTX 3080 Mobile die is 392 mm² alone. 432 mm² for the complete Apple APU is very impressive.

The Apple SoC is a huge chip made on a 5nm node, which not only costs more but also yields worse. Nvidia is using a much less dense and less power-efficient Samsung 10nm-class node, which is purported to cost less than TSMC 7nm. Apple is on second-generation TSMC 5nm! Nvidia did it because it was cheaper and supply was less constrained than TSMC 7nm, not because it was better. JHH is not as foolish as many thought on here last year.

Apple is throwing tons of transistors at the problem because they can, thanks to the high RRPs of their products. The M1 Max has double the transistor count of the GA102 GPU in the RTX 3090. AMD's and Nvidia's dGPU uarchs simply do more with fewer transistors (especially as those GPUs also handle machine learning and RT operations). What AMD did with its chiplets in Zen 2/Zen 3, and what Intel demonstrated with Lakefield, is where things are headed. AMD understood years ago that large monolithic dies are not the way forward. Intel started to realise that more recently, and so will Nvidia. Apple is just throwing the kitchen sink at things, so you would expect that with so many transistors on a cutting-edge node the performance would be there (but so are the costs and the yield problems).
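
To put rough numbers on that density gap, a quick sketch in Python using the commonly quoted transistor counts and die areas (the M1 Max area is the figure used earlier in the thread; treat them all as approximate):

```python
# Rough density comparison behind the "double the transistors of GA102" point.
# Transistor counts and die areas are the commonly quoted approximate figures.

chips = {
    "M1 Max":            (57.0e9, 432.0),   # transistors, die area in mm^2
    "GA102 (RTX 3090)":  (28.3e9, 628.0),
}

for name, (transistors, area_mm2) in chips.items():
    print(f"{name}: ~{transistors / area_mm2 / 1e6:.0f} M transistors per mm^2")
```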


You use what's available. Apple is in a position to make huge dies and absorb the cost because that's still cheaper than buying chips from Intel/AMD. If they can afford it, they're going to use the die space to improve the product. Apple doesn't need to make a profit on the chip itself; it's the final product that they sell. Intel's and AMD's priorities are very different: they have to make their margins on the chips and sell them at a profit, then the manufacturer has to put the CPU into a device and still sell that at a profit on top. This gives Apple a lot of flexibility in their microarchitecture designs.

If/when node improvements stop, they may not be able to pack in more transistors to make the cores wider as they've been doing in recent years, and they can't run them faster, so they'll have to come up with new ideas. Intel certainly hasn't managed that well, but AMD has recently. This year (A14 -> A15) Apple didn't add a meaningful number of transistors to the A15 P-cores, and still managed to improve efficiency by 20% despite increasing frequency on essentially the same node (N5 -> N5P is a minor improvement).

Yes, but it's quite clear that AMD, Nvidia and even Intel are moving through transitional phases, and the whole industry will have to as well. The issue is that the other companies will have to work through all the problems of multi-chip systems at a later date, and if this is hard for very experienced companies like AMD, Intel and Nvidia, it's not going to be easy for the others either. Fujitsu with its A64FX (and a long history of server CPU designs) had to invest a lot of effort into developing a low-power I/O fabric, and AMD had to really work on dropping power too.

Apple is also hampered by having to maintain very high margins, which is why it lost a huge amount of smartphone market share worldwide to Android. Apple cares far more about margins than volume.

Intel has been stung by the failures of its fab arm, but since it can still output enough volume it is still doing OK (AMD being limited by volume). However, the US government is not going to want to give TSMC/Samsung free rein forever (hence the money being funnelled into Intel now), so once their node cadence gets closer to the competition they will rebound IMHO, especially if they simply have more experience with heterogeneous manufacturing by then.

This video from Ian Cutress (of AT fame) was quite interesting:
https://www.youtube.com/watch?v=oaB1WuFUAtw

It's less about the Intel products themselves, and more about the changes behind the scenes.

AMD identified this years ago; it's quite possible that if AMD had gone with a large 7nm monolithic design, Zen 2 and Zen 3 would have been better overall, and possibly lower power (chiplet designs do have power penalties). But that's the thing: it would be less efficient to manufacture. It's like some of the ARM server CPUs which have been tested recently; they look really solid, but again they're huge monolithic dies. Eventually costs and yields will be the problem, not the performance.
 
Soldato
Joined
28 May 2007
Posts
18,257
You use what's available. Apple is in a position to make huge dies and absorb the cost because that's still cheaper than buying chips from Intel/AMD. If they can afford it, they're going to use the die space to improve the product. Apple doesn't need to make a profit on the chip itself; it's the final product that they sell. Intel's and AMD's priorities are very different: they have to make their margins on the chips and sell them at a profit, then the manufacturer has to put the CPU into a device and still sell that at a profit on top. This gives Apple a lot of flexibility in their microarchitecture designs.

If/when node improvements stop, they may not be able to pack in more transistors to make the cores wider as they've been doing in recent years, and they can't run them faster, so they'll have to come up with new ideas. Intel certainly hasn't managed that well, but AMD has recently. This year (A14 -> A15) Apple didn't add a meaningful number of transistors to the A15 P-cores, and still managed to improve efficiency by 20% despite increasing frequency on essentially the same node (N5 -> N5P is a minor improvement).



I don't think it's going to beat RTX 3080 Mobile. That's way too optimistic.

Apple's charts are a little odd, but it looks like it.
 
Soldato
OP
Joined
6 Oct 2009
Posts
3,998
Location
London
Yes, but it's quite clear that AMD, Nvidia and even Intel are moving through transitional phases, and the whole industry will have to as well. The issue is that the other companies will have to work through all the problems of multi-chip systems at a later date, and if this is hard for very experienced companies like AMD, Intel and Nvidia, it's not going to be easy for the others either. Fujitsu with its A64FX (and a long history of server CPU designs) had to invest a lot of effort into developing a low-power I/O fabric, and AMD had to really work on dropping power too.

Apple is also hampered by having to maintain very high margins, which is why it lost a huge amount of smartphone market share worldwide to Android. Apple cares far more about margins than volume.

Intel has been stung by the failures of its fab arm, but since it can still output enough volume it is still doing OK (AMD being limited by volume). However, the US government is not going to want to give TSMC/Samsung free rein forever (hence the money being funnelled into Intel now), so once their node cadence gets closer to the competition they will rebound IMHO, especially if they simply have more experience with heterogeneous manufacturing by then.

This video from Ian Cutress (of AT fame) was quite interesting:
https://www.youtube.com/watch?v=oaB1WuFUAtw

It's less about the Intel products themselves, and more about the changes behind the scenes. AMD identified this years ago; it's quite possible that if AMD had gone with a large 7nm monolithic design, Zen 2 and Zen 3 would have been better overall, and possibly lower power (chiplet designs do have power penalties). But that's the thing: it would be less efficient to manufacture. It's like some of the ARM server CPUs which have been tested recently; they look really solid, but again they're huge monolithic dies. Eventually costs and yields will be the problem, not the performance.

It'd be interesting to see how Apple manages scaling up. As for Ampere Altra being a huge monolith, they are actually moving to a chiplet design with their Siryn uarch (which comes next year). We'll see if Apple follows this trend, but they do get a lot more leeway with their chip costs, so their considerations are very different from Intel's/AMD's/Ampere's. I think Apple will prefer to keep the monolithic design, but I'm just speculating.
 
Soldato
Joined
9 Nov 2009
Posts
24,841
Location
Planet Earth
It'd be interesting to see how Apple manages scaling up. As for Ampere Altra being a huge monolith, they are actually moving to a chiplet design with their Siryn uarch (which comes next year). We'll see if Apple follows this trend, but they do get a lot more leeway with their chip costs, so their considerations are very different from Intel's/AMD's/Ampere's. I think Apple will prefer to keep the monolithic design, but I'm just speculating.

By then AMD will have had a few iterations of IF, and Ampere will be on their first generation. Plus, is this going to be just homogeneous chiplets, or heterogeneous chiplets? That's what I find interesting about what AMD is doing (or even Intel with Lakefield), because they are targeting different parts at different process nodes. Some components in a CPU/SoC don't benefit massively from the power reductions of a new node, so it makes more sense to make those on a lagging-edge one. I honestly thought many of these ARM-based designs would do this before AMD or Intel TBH, as the ARM reference cores are technically designed to be made on different nodes. Even with the new M1 designs, I would have thought it made more sense to build them as chiplets for laptop/desktop systems: a CPU chiplet with the 10 CPU cores (and maybe the I/O), and a 16-core GPU chiplet. So the M1 Pro would be two chiplets, and the M1 Max three chiplets. Having the RAM on package would also have helped in this regard.
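
To make that hypothetical split concrete, here's a toy sketch in Python; the chiplet composition is purely the speculation from the paragraph above, not anything Apple has announced (both parts actually ship as monolithic dies):

```python
from dataclasses import dataclass

# Toy model of the hypothetical chiplet split described above: one CPU+I/O
# chiplet plus one or two 16-core GPU chiplets. Speculative illustration only.

@dataclass
class Chiplet:
    name: str
    p_cores: int = 0
    e_cores: int = 0
    gpu_cores: int = 0

CPU_IO = Chiplet("CPU + I/O", p_cores=8, e_cores=2)
GPU_16 = Chiplet("16-core GPU", gpu_cores=16)

packages = {
    "hypothetical chiplet M1 Pro": [CPU_IO, GPU_16],
    "hypothetical chiplet M1 Max": [CPU_IO, GPU_16, GPU_16],
}

for name, parts in packages.items():
    gpu = sum(c.gpu_cores for c in parts)
    print(f"{name}: {len(parts)} chiplets, 8P+2E CPU, {gpu} GPU cores")
```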
 
Soldato
Joined
6 Feb 2019
Posts
17,589
Mining didn't work well on the M1, but the M1 Max has faster memory, so maybe. The cost is probably just too high though: I checked the Apple site and I could buy two RTX 3080 Tis for the price of one 64GB M1 Max MacBook, and I'm sure the 3080 Tis would mine better.

Rough back-of-the-envelope calculation: the M1 Max would need to mine at least half the output of a desktop GPU, but if it could do that it would be something like 1000% faster than the M1 at mining, and that seems unlikely.
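
Here's that back-of-the-envelope arithmetic as a sketch in Python; every price and hash rate in it is a placeholder assumption of mine, not a measured figure:

```python
# Back-of-the-envelope version of the comparison above. Every figure here is a
# placeholder assumption purely to show the arithmetic -- substitute real prices
# and measured hash rates before drawing any conclusion.

MACBOOK_PRICE = 3800.0   # assumed price (GBP) of a 64GB M1 Max MacBook Pro
GPU_PRICE = 1900.0       # assumed street price (GBP) of one RTX 3080 Ti
GPU_HASHRATE = 100.0     # placeholder MH/s for one desktop GPU
M1_HASHRATE = 2.0        # placeholder MH/s for the original M1

# The "at least half a desktop GPU" bar from the post, and the stricter bar of
# matching the hash rate you'd get for the same money spent on discrete cards
half_gpu_bar = GPU_HASHRATE / 2
cost_parity_bar = GPU_HASHRATE * (MACBOOK_PRICE / GPU_PRICE)

print(f"Half-a-GPU bar: {half_gpu_bar:.0f} MH/s "
      f"(~{half_gpu_bar / M1_HASHRATE:.0f}x the placeholder M1 figure)")
print(f"Cost-parity bar vs two cards: {cost_parity_bar:.0f} MH/s")
```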
 