Apple M1 Pro and M1 Max

Apple announced the M1 Pro and M1 Max: the same CPU but different GPU/IO:
  • Same uarch as M1 (i.e. A14-based uarch, not A15)
  • 5nm (unclear if N5 or N5P)
  • Likely higher frequencies
  • 8 performance cores
  • 2 efficiency cores
  • Up to 64GB RAM
  • Display support for three 6K displays plus another 4K (in addition to the laptop screen)
  • 3 Thunderbolt 4 ports
  • PCIe 4.0 SSD
Apple's own charts, so take them with a grain of salt until benchmarks are out:
[Apple's performance charts: three images]
 
Did you want to post this in the laptop section?

I heard this was going to be disruptive. Looks as if Apple delivered.

Considered it, but these will likely show up in desktops as well, like the M1 did. Even though announced with laptops, it's not solely a laptop part.

As for not trusting Apple's charts, put it this way: when Apple announced the iPhone 13 in September, they provided CPU and GPU performance numbers. Those numbers turned out to be understated; once reviewers ran their own tests, the results were 10 to 20% higher than Apple claimed. So it's unlikely Apple is overestimating here.

They're usually conservative with performance claims, but I had to post the obvious disclaimer that these are Apple's own charts.
 
They claim the M1 Max is 2x faster at Xcode compilation than their last 8-core MacBook Pro, which is the machine I have, and 2x the rendering speed in DaVinci compared to a Mac with a 5600M.

I'll try and get it replaced for work if I can. No more Intel to make it go Whirrrrrrr. I know Apple are a slick marketing team, but when it comes to spending company money, I'm all for it.

The M1 was already beating the 8-core i9 Intel Macs at code compilation, even when the build was split into independent compilation tasks to maximise multithreading. These M1 Pro/Max parts will be on a whole new level.
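As a rough illustration of what splitting a build into independent compilation tasks looks like (a hypothetical sketch; the file names, compiler and job count are placeholders, not the actual project or build setup being discussed):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical translation units; each compiles independently of the others,
# so the work spreads across however many cores the machine has.
sources = [f"module_{i}.c" for i in range(32)]

def compile_one(src):
    # Each job is a separate compiler process, so this scales with core count.
    return subprocess.run(["cc", "-c", src, "-o", src.replace(".c", ".o")]).returncode

with ThreadPoolExecutor(max_workers=8) as pool:  # e.g. one job per performance core
    results = list(pool.map(compile_one, sources))

print("all OK" if all(code == 0 for code in results) else "some jobs failed")
```

This is the workload shape where both per-core throughput and core count matter, which is why the compile-time comparisons are interesting.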
 
Yet ultimately most of the market is still going to be made up of lower-performance products, because what Apple is doing looks like a very expensive way of doing things. With the SoC being 432mm² for the M1 Max (with twice the transistors of the GA102) and 245mm² for the M1 Pro, both on 5nm, the production costs are going to be very high (and yields on the bigger SoC are probably not great). The SoCs also use a ton of expensive LPDDR5 memory, literally soldered next to the SoC. This is what happens when you make massive dies and throw transistors at the problem on a cutting-edge process node. The question is how many more years this is going to be viable for.

The issue is that Apple is relying on jumping onto new nodes as quickly as they can, and if there is any hiccup, they are going to be affected worse than many of their competitors.

Both AMD and Intel have progressed further along the chiplet/heterogeneous manufacturing route (especially as they are far more experienced in packaging), because relying on new nodes (and throwing tons of transistors at the problem) is going to become harder and harder as the shrinks get harder too. It's why GPU chiplets are going to be a thing soon, and even why AMD went that way with their CPUs. A Ryzen 9 5950X, for example, is made of two 80mm² 7nm chiplets and a 125mm² I/O die on an ancient 12nm/14nm node, using cheap DDR4. AMD has also proven, using 3D packaging, that they can get notable performance and efficiency improvements by simply stacking more chiplets on top (which is cheaper than making even bigger chiplets).

It's why Intel's Lakefield was more notable for how it was made than for the final product, and things like decoupling production from process nodes are increasingly going to be important. So is putting R&D into low-power connectivity (the I/O fabric) between the various parts; AMD and companies like Fujitsu have put a lot of effort into power reduction in that area.

Yet if you look at both Zen 2 and RDNA1, AMD managed to get decent gains on the same node, and Nvidia did the same. I really want to see how Apple does if they end up having to stay on a node for more than one generation (they had to once and it wasn't pretty, IIRC). At the moment it seems more a case of throwing more and more transistors and die area at each SoC (and using exotic memory standards).

You use what's available. Apple is in a position to make huge dies and absorb the cost, because that's still cheaper than buying chips from Intel/AMD. If they can afford it, they will certainly use the die space to improve the product. Apple doesn't need to make a profit on the chip itself; it's the final product that they sell. Intel's and AMD's priorities are very different: they have to make their margins on the chips and sell them for a profit, then the manufacturer has to put the CPU into a device and still sell it at a profit on top of that. This gives Apple a lot of flexibility in their microarchitecture designs.

If/when node improvements stop, they may not be able to keep packing in more transistors to make the cores wider as they've been doing in recent years, and they can't just run them faster, so they'll have to come up with new ideas. Intel certainly hasn't handled staying on a node well, but AMD has done it recently. This year (A14 -> A15) Apple didn't add a meaningful number of transistors to the A15 performance cores, and still managed to improve efficiency by 20% despite increasing frequency on essentially the same node (N5 -> N5P is a minor improvement).
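As a rough illustration of the die-cost point in the quote above, here's a minimal back-of-the-envelope sketch (the defect density is my own assumption purely for illustration; real N5 yield figures aren't public) comparing a big monolithic die against a small chiplet on a 300mm wafer:

```python
import math

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.1  # assumed for illustration only

def dies_per_wafer(die_area_mm2):
    """Classic approximation: wafer area / die area, minus an edge-loss term."""
    d = WAFER_DIAMETER_MM
    return math.floor(math.pi * (d / 2) ** 2 / die_area_mm2
                      - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2):
    """Simple Poisson yield model: exp(-die area * defect density)."""
    return math.exp(-(die_area_mm2 / 100) * DEFECT_DENSITY_PER_CM2)

for name, area in [("M1 Max, 432mm2", 432), ("M1 Pro, 245mm2", 245), ("Zen 3 chiplet, 80mm2", 80)]:
    candidates = dies_per_wafer(area)
    good = candidates * poisson_yield(area)
    print(f"{name}: ~{candidates} candidates/wafer, ~{good:.0f} good ({poisson_yield(area):.0%} yield)")
```

Under those assumed numbers the big die loses disproportionately more to defects and to wafer-edge waste, which is exactly the cost/yield argument being made for chiplets.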

The performance jump in mobile GPU power is impressive. A quick search says this beats a GeForce RTX 3080 Mobile.

I don't think it's going to beat RTX 3080 Mobile. That's way too optimistic.
 
Yes, but it's quite clear that AMD, Nvidia and even Intel are moving through transitional phases, and the whole industry will have to as well. The other companies will have to work through all the issues with multi-chip systems at a later date, and if this is problematic for very experienced companies like AMD, Intel and Nvidia, it's not going to be easy for the others either. Fujitsu with its A64FX (and a long history of server CPU designs) had to invest a lot of effort in developing a low-power I/O fabric, and AMD had to work hard on dropping power too.

Apple is also hampered by having to maintain very high margins, which is why it lost a huge amount of worldwide smartphone market share to Android. Apple cares far more about margins than volume.

Intel has been stung by the failures of its fab arm, but since it can still output enough volume it is still doing OK (AMD being limited by volume). However, the US government is not going to allow TSMC/Samsung free rein forever (hence the money being funnelled into Intel now), so once their node cadence gets closer to the competition they will rebound IMHO, especially if they simply have more experience with heterogeneous manufacturing by then.

This video from Ian Cutress (of AnandTech fame) was quite interesting:
https://www.youtube.com/watch?v=oaB1WuFUAtw

It's less about the Intel products themselves and more about the changes behind the scenes. AMD identified this years before: it's quite possible that if AMD had gone with a large 7nm monolithic design, Zen 2 and Zen 3 would have been better overall, and possibly lower power (chiplet designs do have power penalties). But that's the thing: it would be less efficient to manufacture. It's like some of the ARM server CPUs tested recently; they look really solid, but again, huge monolithic dies. Eventually costs and yields will be the problem, not performance.

It'll be interesting to see how Apple manages scaling up. As for Ampere Altra being a huge monolith, they are actually moving to a chiplet design with their Siryn uarch (which comes next year). We'll see if Apple follows this trend, but they do get a lot more leeway with their chip costs, so their considerations are very different to Intel's/AMD's/Ampere's. I think Apple will prefer to keep the monolithic design, but I'm just speculating.
 
Apart from the fact that there are next to no games in their ecosystem. That's not going to change with the current bar for entry being 2K+.

Yeah, what they've made a mockery of are high-end workstations with Quadro GPUs, e.g. Dell Precision, HP Elitebook, Lenovo P series.
 
From Mark Gurman, the Bloomberg journalist Apple always uses to leak info to the media to build/manage expectations:


~40 CPU cores, ~128 GPU cores coming next for Mac Pro.

@CAT-THE-FIFTH Unless we expect Apple to pack ~230 billion transistors into a single monolithic chip, these will be chiplets or even multi-CPU designs. Or maybe they'll just split the CPU and GPU into separate chips (finally); they have to take RAM off-package anyway to be able to offer 1TB+ of RAM for the Mac Pro.
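For context on where a figure like ~230 billion transistors comes from, a minimal scaling sketch (assuming the count scales roughly linearly from the M1 Max's published ~57 billion transistors and 32 GPU cores; this ignores shared uncore/IO, so treat it as a rough ceiling rather than a design):

```python
# Published M1 Max figures
M1_MAX_TRANSISTORS = 57e9
M1_MAX_GPU_CORES = 32

# Rumoured top-end Mac Pro configuration
TARGET_GPU_CORES = 128

scale = TARGET_GPU_CORES / M1_MAX_GPU_CORES           # 4x the GPU cores
estimate = M1_MAX_TRANSISTORS * scale
print(f"~{estimate / 1e9:.0f} billion transistors")   # ~228 billion
```

Four M1 Max-sized slices of silicon is also why chiplets, multiple dies or separate CPU/GPU packages look much more plausible than one monolithic chip.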
 
Apple have said they want a slice of the mobile graphics market and you can always run Windows on this.

You can't run Windows games on ARM.

Since it won't be monolithic, will they be gluing 2 to 4 M1 Max chips together with some interface? Or, more likely, will it just be a custom board with multiple "sockets"? Since most tasks Mac users do are not latency sensitive, having physically separate SoCs isn't a problem and gives you a huge boost in multitasking. Imagine having 30 8K video timelines open at once to try and fill your 256GB of memory.

I think splitting CPU and GPU is more likely and a cleaner design, with their own separate SRAMs and the unified DRAM located in between. But again, all of this is speculation. Only people inside Apple know how they're approaching this.
 
I doubt it personally. Especially if you're stuck installing Windows and having to run games through Rosetta 2 (if that even works with Windows).

I could understand Apple pushing native gaming but that will take time and investment to try and get going.

Rosetta doesn't even exist on Windows, there are no Apple GPU drivers for Windows, and Microsoft doesn't even sell Windows on ARM licences for MacBooks. Apple's focus on GPU power on Macs is on compute for the time being. If/when they want to focus on gaming, it will be on macOS via Metal, not on Windows.
 
What games are there?

Native - the Valve catalogue, Civ 6 and a load of indie titles. iOS titles are now supported.

Parallels - patchy coverage, but some games work quite well; lots of performance issues, anything with anti-cheat is broken, and DX12 isn't supported from what I can see.

Am I missing something? This doesn't sound like the gaming nirvana that's going to win over gamers.

That guy is very misguided. You're 100% right: the idea that you're going to run Windows games on these through virtualisation, on top of emulation, is just wrong; it's not possible in any way, shape or form. He's confusing the potential these chips have for gaming, which people are accurately pointing out, with the idea that they're gaming supermachines right now.

So the new M1 Max is like PS5 APU performance?

Good. Hopefully this will spur AMD to make something similar available for desktops, or even laptops, and they won't cost £5000.

I'd love to see that as well, but clearly they're in no rush to do it given that there's no real competitor (and these laptops won't concern AMD a lot), and AMD can basically sell every chip they can make right now anyway.

There are technical limitations as well. Modern GPU architectures generally require huge memory bandwidth, and with APUs you can't get that from typical system memory. And AMD can't make the memory bus as wide as Apple did with these chips on their consumer platforms. That's why the PS5 and Xbox Series X use GDDR6 rather than DDR4.
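To put rough numbers on that bandwidth point, a minimal sketch (the bus widths and data rates below are the commonly quoted specs and are assumptions on my part, not measured figures):

```python
def peak_bandwidth_gb_s(bus_width_bits, transfer_rate_mt_s):
    # Peak theoretical bandwidth = bytes per transfer * transfers per second
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

configs = {
    "Dual-channel DDR4-3200 (typical PC APU)": (128, 3200),
    "M1 Pro, 256-bit LPDDR5-6400": (256, 6400),
    "M1 Max, 512-bit LPDDR5-6400": (512, 6400),
    "PS5, 256-bit GDDR6 at 14 Gbps": (256, 14000),
}
for name, (width, rate) in configs.items():
    print(f"{name}: ~{peak_bandwidth_gb_s(width, rate):.0f} GB/s")
```

That ~51 GB/s versus ~400-450 GB/s gap is why a conventional DDR4-fed APU can't feed a big GPU, and why the consoles moved to GDDR6.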
 
I thought the poster sounded familiar:

https://forums.overclockers.co.uk/threads/gaming-laptop.18936713/#post-35073434

What gaming laptop? MacBook, of course!

Bit of a disconnect from reality going on.

Hahaha :cry:

They might just be using multiple M1 Max SoCs together? It will be interesting to see the power penalty from the I/O links they need to use. AFAIK, Fujitsu spent a good amount of time tweaking that for the A64FX.

That's one way to do it. I think separating the CPU and GPU is a simpler and more flexible solution; it also lets them build machines with multiple GPUs easily.
 
Apple have made those types of laptop look silly now. Clearly sales will be lost to Apple and prices need adjusting.

It's a SoC design. I can't see Apple building a CPU and a separate GPU. Unless Apple have another strategy. I did hear the Apple workstation chip will be a little different from the mobile chip.

All modern chips are SoCs, and have been for a very long time. If they split CPU and GPU, they will still be SoCs.
 
It's marketing ********, guys.

1) You can't compare TFLOPS directly across different architectures. It's nonsensical (a worked sketch follows below).

RTX 3090: 35.6 TFLOPS, RX 6900 XT: 20.6 TFLOPS. A roughly 40% difference, yet the real performance difference is 3-10% depending on the game/task.

2) They didn't even show what exactly was tested when they compared GPU performance. Show us some games running with the same settings and the same resolution! Ah, wait, there are no games.

3) Forget about AAA games on M1.

Metal API + ARM => no proper gaming.

The Metal API essentially killed Mac gaming even on the x86 architecture (Boot Camp excluded). Going with Metal (in typical Apple "we know what's best for you, no matter the facts" fashion) instead of Vulkan was a huge mistake.

I have no doubt the M1 Max will be a great laptop for video editing and the like... but if you're thinking about getting one in the hope of running proper non-mobile games on it at good graphics settings, resolution and performance, then think twice...
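To illustrate point 1 in the quote above, here's a rough sketch of where those TFLOPS figures come from (the clock values are the published boost/game clocks and are approximate; FP32 FMA counts as two operations per shader per clock):

```python
def fp32_tflops(shader_cores, clock_ghz):
    # Theoretical peak: shaders * 2 FP32 ops (FMA) * clock
    return shader_cores * 2 * clock_ghz / 1000

print(f"RTX 3090 (10496 shaders @ ~1.70 GHz boost):  ~{fp32_tflops(10496, 1.70):.1f} TFLOPS")
print(f"RX 6900 XT (5120 shaders @ ~2.02 GHz game):  ~{fp32_tflops(5120, 2.015):.1f} TFLOPS")
```

The theoretical peaks differ by roughly 40%, yet real game performance is close, which is the point: peak TFLOPS say very little across different architectures.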

Apart from some idiots online, nobody really claimed these are good gaming machines. They could theoretically become that, if Apple sorts out the software issues and AAA games arrive, but that doesn't exist right now.

Apple made no gaming claims either; the focus was clearly on GPU compute tasks. There's only one user here who's delusional about gaming on these Macs and has been posting nonsensical stuff ever since they were announced; everyone else gets it :D

Big whoop… it’s not going to be the most amazing thing for gaming.

The people who actually care about getting things done will not mind that it can't run Battlefield.

Exactly. Nobody serious expected it to be a gaming machine either, apart from those whose entire understanding of a GPU's use case is "game game game".
 
Apple has zero chance of breaking into the gaming market with their current stance. They are refusing to use Vulkan, DX12 etc., so good luck getting anything other than App Store games to work on their new MacBooks!

The gaming industry doesn't like that Metal is solely under Apple's control: all the API and driver development is done in-house, and Apple won't be tweaking either to optimise for specific games, something Nvidia and AMD do all the time. And Metal is quite different from DX/Vulkan, so it's not that easy to port to.

Apple are full Khronos Group members etc.

Apple really has been going their own way versus Khronos for the better part of the last decade. They deprecated their support for OpenCL, OpenGL and OpenGL ES in favour of Metal (good decisions) and will drop WebGL when WebGPU is ready (also a good decision). But not supporting Vulkan means game development targeting Metal is expensive for studios, and porting isn't simple.
 
The M1 Max has 3.5 times the GPU performance of the M1. The M1 Max's 69K Geekbench 5 GPU score is on par with the desktop 5700 XT.

I'm guessing that's the 24-core version, because the M1's GPU was memory-bandwidth limited and that's massively improved now, so a ~3.5x improvement from 3x the cores seems reasonable. If that's the case, we should expect ~90K from the 32-core version.
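A minimal sketch of that estimate (assuming the 69K result is the 24-core part, that the score now scales roughly linearly with core count since bandwidth is no longer the bottleneck, and an approximate ~20K Geekbench 5 GPU score for the base M1):

```python
M1_SCORE_APPROX = 20_000        # assumed approximate M1 GB5 GPU score
M1_MAX_24_CORE_SCORE = 69_000   # reported score, assumed to be the 24-core bin

per_core = M1_MAX_24_CORE_SCORE / 24
print(f"Estimated 32-core score: ~{per_core * 32 / 1000:.0f}K")              # ~92K
print(f"Scaling vs base M1: ~{M1_MAX_24_CORE_SCORE / M1_SCORE_APPROX:.1f}x")  # ~3.5x
```

If the 69K result turns out to be the 32-core part instead, the per-core scaling is noticeably worse and the ~90K estimate doesn't hold.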
 
Anyone remember the Apple Bandai Pippin?

If Apple were to revisit the console market, that's when they'd have to bring Metal to feature parity with the other APIs (or bring over Vulkan).

Until then, well, the M1 Max is technically impressive, but why spend so much silicon on a GPU which almost nobody will be able to use?

Seems very unbalanced.

If Apple have some specific video editing or similar program in mind, fixed-function hardware would be even more efficient.

Apple are very stingy, so they must have a reason, but what is it? If they want to run their own data centres on their hardware, again, fixed-function hardware would make more sense.

Apple SoCs are full of fixed-function hardware, significantly more so than anyone else's; usually 25-30% of the chip is fixed-function blocks that aren't the typical stuff you see on CPUs/GPUs. But the chips are also very powerful at general-purpose compute across the CPU, GPU and tensor processing (i.e. the Neural Engine). Not everything warrants a fixed-function unit on the chip itself, and that's where the focus on a compute-oriented GPU is warranted. Think more along the lines of Tesla or Quadro cards; that's what Apple is competing against right now. That's why they put so much emphasis on the GPU's access to 64GB of RAM. Despite what some spammers on forums/Reddit think, that's not for gaming, lol. We've seen no evidence that Apple cares about serious gaming on Macs.

Not my area of expertise but this is what I could find online:

https://www.phoronix.com/scan.php?page=news_item&px=Apple-M1-GPU-More-Bits

"For all the visible hardware features, it’s equally important to consider what hardware features are absent. Intriguingly, the GPU lacks some fixed-function graphics hardware ubiquitous among competitors. For example, I have not encountered hardware for reading vertex attributes or uniform buffer objects. The OpenGL and Vulkan specifications assume dedicated hardware for each, so what’s the catch? Simply put – Apple doesn’t need to care about Vulkan or OpenGL performance. Their only properly supported API is their own Metal, which they may shape to fit the hardware rather than contorting the hardware to match the API. Indeed, Metal de-emphasizes vertex attributes and uniform buffers, favouring general constant buffers, a compute-focused design. The compiler is responsible for translating the fixed-function attribute and uniform state to shader code."

Metal is more compute-oriented, with the assumption that anything can be built on top of it. In fact there is an implementation of Vulkan on top of Metal: https://github.com/KhronosGroup/MoltenVK

It doesn't implement the entire Vulkan API, but it is now officially supported by Valve and Dota 2 uses it. It's nowhere near stable or feature-rich enough for AAA game development, though.

The simple reality is that there just isn't the ROI for developers to spend time and money releasing macOS games. Maybe one day that changes, but that day is not today. And Apple themselves are not making any claims about macOS gaming. I think we should ignore the trolls at this point.
 
Looks like live playback in Premiere is where the M1 Max really shines, which is understandable considering its massive memory bandwidth. It only falls short against systems with far more cores in exporting, where those extra cores come into play.

I think a lot of video editors will love it. And when the Mac Pros come out they'll be the best video editing machines outright. I dunno how they'll glue the silicon together, but these will cost an absolute bomb.

Premiere supports ProRes as well, so that's where the dedicated accelerators in the M1 Pro/Max shine, and those are apparently faster than a standalone £2,000 Afterburner card!

We have more benchmarks, and the M1 Max is looking very strong compared to that first Geekbench GPU score.

Puget Workstation benchmark:

M1 Max: 1168 points https://www.pugetsystems.com/benchmarks/view.php?id=60176

5900HX + RTX3080: 888 points https://www.pugetsystems.com/benchmarks/view.php?id=51362


The GPU is no slouch either, with the M1 Max achieving the same graphics GPU score as the laptop RTX 3080.

Nice benchmarks. This is really the sort of use case these devices are aimed at, so these results are quite interesting.
 
The Apple GPU IP is licensed from UK-based Imagination Technologies. However, Apple, a few years ago, tried to bankrupt them and hire away their engineers. Then a Chinese-backed entity bought them up, and Apple went back to licensing their GPU technology.

What happened to Imagination is a sad story. They used to make the best mobile GPUs; they were in iPhones and the PS Vita, and even Intel used their GPUs for its higher-end integrated graphics. But Apple wanted something different, Imagination didn't play ball, and Apple decided to just open a new chip design centre in St Albans and hire away their key engineers. Then progress halted, they were almost bankrupt, and they were sold off to Chinese investors.

Apple never stopped licensing Imagination patents, though. Their original agreement ended at the end of 2019 and they signed a broader one which came into effect at the beginning of 2020, with the understanding that Apple just needs their patents, not the microarchitecture, which has been done in-house for several years now.

Imagination are now focusing their designs on high-performance computing, primarily for the automotive industry. Their IMG-B architecture is geared towards multi-GPU implementations to maximise compute density. This is very different to how NVIDIA and AMD do multi-GPU, and it apparently avoids the software compatibility issues by moving away from alternate frame rendering towards more compute-focused workload sharing, where one big GPU decides how to distribute work among many smaller GPUs, similar to how execution engines parallelise workloads across a big cluster of workers.
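As a loose analogy for that "one primary GPU farming work out to many smaller GPUs" model (purely conceptual; this is not Imagination's actual scheduling logic, and the tile/worker counts are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile_id):
    """Stand-in for a compute/render kernel executed on one small GPU."""
    return f"tile {tile_id} done"

def primary_dispatch(num_tiles, num_worker_gpus):
    # The "primary" splits a single frame/workload into tiles and distributes
    # them across the workers, rather than alternating whole frames (AFR).
    with ThreadPoolExecutor(max_workers=num_worker_gpus) as workers:
        return list(workers.map(render_tile, range(num_tiles)))

print(primary_dispatch(num_tiles=16, num_worker_gpus=4))
```

Sharing one workload this way sidesteps the frame-pacing and compatibility problems that made AFR-style multi-GPU so fragile.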

Their C-series, which is going to be announced soon, will support ray tracing.
 