Apple M1 Pro and M1 Max

Anandtech review is out:

CPU:

Single-threaded performance is unchanged from M1 (as expected).
Multithreaded performance scaled in line with the extra cores, except for FP, which went insane on the M1 Max.

[Image: SPEC CPU benchmark chart]

The 8 performance cores alone are roughly equal to the desktop 5800X in SPECint and beat it in SPECfp, so it's basically core-for-core with Zen 3 (which, again, is what we expected). The full 10-core chip beats the 5800X in almost every SPEC test. In fact, the M1 Max is basically at the same level as the 5950X in SPECfp, despite having half the cores. This is because of the huge memory bandwidth.

These are insane, insane performance levels.

The GPU is also interesting:

[Image: Premiere Pro benchmark chart]

The Premiere Pro results are scaled to a desktop RTX 3080, i.e. an RTX 3080 gets a score of 1000. The M1 Max gets 85-95% of the way there. The CPU plays a role here too, but the baseline 1000 score is with an AMD 5800X anyway.
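For anyone unsure how to read the chart, the scoring is a simple normalisation. A minimal sketch of my reading of it (not Puget/Anandtech's exact formula; the function name and numbers are just for illustration):

```python
def normalised_score(candidate_result: float, reference_result: float) -> float:
    """Scale a raw benchmark result so the reference system scores 1000."""
    return 1000 * candidate_result / reference_result

# A machine achieving 90% of the RTX 3080 + 5800X reference's throughput
# lands at 900, i.e. "90% of the way there" on the chart.
print(normalised_score(90.0, 100.0))  # -> 900.0
```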

[Image: DaVinci Resolve benchmark chart]

DaVinci Resolve is more GPU-bound. This is roughly on par with a desktop AMD RX 6700 XT.

Gaming performance, as expected, isn't as strong and doesn't compare as well to higher-end AMD/NVIDIA GPUs once you account for the emulation and trickery needed to run games at all, but gaming was never the point of this machine anyway.
 
These cherry-picked benchmarks are pretty wild; they are not near desktop Ryzen 5000 series in performance and are still behind the top-end mobile GPUs as well lol.

https://www.anandtech.com/show/17024/apple-m1-max-performance-review

This Anandtech article shows the rough level it's at. In some cases it will outperform bigger, more powerful chips, but in the general case it's not as powerful as people were claiming.

It's nowhere near mobile 3080 and desktop Ryzen performance, really.

What are you talking about? I posted the average multithreaded SPEC benchmarks; by definition, an average is not cherry-picked. I also posted 100% of the compute benchmarks that Anandtech did. Again, posting 100% of the benchmarks is not cherry-picking.

It's actually more powerful than people were expecting, because multithreaded FP scaling was significantly better than expected. You're claiming literally the opposite of what Anandtech concluded in the review you linked:

>> The M1 Pro and M1 Max change the narrative completely – these designs feel like truly SoCs that have been made with power users in mind, with Apple increasing the performance metrics in all vectors. We expected large performance jumps, but we didn't expect some of the monstrous increases that the new chips are able to achieve.

>> On the CPU side, doubling up on the performance cores is an evident way to increase performance – the competition also does so with some of their designs. How Apple does it differently, is that it not only scaled the CPU cores, but everything surrounding them. It’s not just 4 additional performance cores, it’s a whole new performance cluster with its own L2. On the memory side, Apple has scaled its memory subsystem to never before seen dimensions, and this allows the M1 Pro & Max to achieve performance figures that simply weren’t even considered possible in a laptop chip. The chips here aren’t only able to outclass any competitor laptop design, but also competes against the best desktop systems out there, you’d have to bring out server-class hardware to get ahead of the M1 Max – it’s just generally absurd.

^^^ That is what Anandtech says.

You seem to have not read the article.
 
AT answers the gaming question: it's not quite there yet, even in the best-case scenario. Realise also that AT is not testing a lot of Zen 3-based laptops either; you can get 8-core Zen 3-based laptops with an RTX 3060 for as low as £900 if you look on HUKD.

https://images.anandtech.com/graphs/graph17024/126685.png

https://images.anandtech.com/graphs/graph17024/126683.png

https://images.anandtech.com/graphs/graph17024/126681.png

https://images.anandtech.com/graphs/graph17024/126682.png

Also, some tests in a YT video today:
https://www.youtube.com/watch?v=IhqCC70ZfDM
https://imgur.com/a/YdErGtK

https://i.imgur.com/kqKBEjl.png

It's solid, but as usual the PCMR crowd has gotten way too overhyped about these things, and people have to realise the media gets in on the hype too. The Asus G15 has a Ryzen 9 5900HS, which is made on an old 7nm process, and AMD is already moving over to newer designs over the next 12 months.

So as usual, the Macs work well if you are already in that ecosystem, and TBH you would already be using a Mac irrespective of what CPU/GPU was in there.

Honestly, people who expected these to be gaming machines are going to be disappointed, and for good reason. We spent a week telling people that these aren't gaming machines, and they didn't listen :D
 
It's no different than all those compute-focused GPUs AMD/Nvidia have made which never did that well in gaming! Although one could argue that for an ARM-based SoC, these are the quickest so far. It does make you wonder what AMD could do if someone commissioned them to do a laptop-focused Zen 3 SoC.


I do hope it means AMD/Intel actually get on and try to push their laptop CPUs a bit more. The Zen 3 SoCs are only ~180mm²; I am hoping that once DDR5 comes along, AMD can try to push more cores.

There really isn't much they can do; they've pushed as far as they've been able to within power and thermal constraints. Adding more cores on a laptop means running them at lower clocks to keep within power and thermal limits, which diminishes returns while the chip gets bigger and more expensive. There is always a viability range, and for Zen 3 the higher-end chips are already at the top of it.


The huge performance gains due to the memory system make me wish we had that on PC.

8-channel DDR5 really isn't something we're going to see on PC anytime soon. That kind of bandwidth only exists on server-grade products, and those lack the single-threaded performance.
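To put rough numbers on that, here's a back-of-envelope sketch using published peak figures (theoretical maxima; real-world bandwidth is lower):

```python
def peak_bw_gb_s(bus_bits: int, transfers_mt_s: int) -> float:
    """Peak theoretical bandwidth: bus width in bytes x transfer rate."""
    return (bus_bits / 8) * transfers_mt_s / 1000  # GB/s

print(peak_bw_gb_s(512, 6400))  # M1 Max, 512-bit LPDDR5-6400:      ~409.6 GB/s
print(peak_bw_gb_s(128, 4800))  # dual-channel DDR5-4800 desktop:    ~76.8 GB/s
print(peak_bw_gb_s(512, 4800))  # hypothetical 8-channel DDR5-4800: ~307.2 GB/s
```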
 
Decided to post here rather than make a new thread,

Considering what Apple has done with their M1 chips, is there a chance that future CPUs will have significantly larger "RAM"/cache on, or next to, the processor? This would be alongside the standard DDR4/5 arrangement we have now.
I'm talking 1GB on a Ryzen 3 and more the higher up you go: say a Ryzen 7 sporting 4GB, and Threadrippers with 16GB on the low end and 64GB on the high end. How much would computational speed benefit from such an approach? Would we need to change how software is programmed to take advantage of this?

What about accelerators, will we see more HW accelerators for professional work?

It's generally expected that cache will increase in future CPUs; that's quite uncontroversial and has been the trend for decades. If a computational task is memory-intensive or requires core-to-core communication, it could benefit from more and/or faster cache. If it isn't, there's no difference.

It will be quite a while before we see many GBs of cache, though; that requires a huge amount of die area and would be incredibly expensive. To put it in context, the highest-end EPYC has 256MB of cache. We're a very long way from seeing the likes of 1GB of cache on a low-end CPU, likely decades away.
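As a rough illustration of when cache helps (a sketch, not a proper benchmark; numbers vary wildly by machine), the effective throughput of a simple reduction drops once the working set no longer fits in the last-level cache:

```python
import time
import numpy as np

# Sum arrays of growing size; once the array spills out of the last-level
# cache, effective bandwidth falls from cache speed to DRAM speed.
for mb in (1, 4, 16, 64, 256, 1024):
    a = np.ones(mb * 1024 * 1024 // 8)  # float64 array occupying ~mb MB
    t0 = time.perf_counter()
    for _ in range(10):
        a.sum()
    dt = time.perf_counter() - t0
    print(f"{mb:5d} MB working set: {10 * a.nbytes / dt / 1e9:6.1f} GB/s effective")
```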
 
Just had a proper look, and apparently the RAM on the M1 is a separate chip on the same package, so I worded my question wrong.

Will we see RAM added onto the same package as the rest of the CPU?

Let's take AMD: stick RAM off to the side and attach it with IF or something else. For desktop purposes, the onboard RAM would be in addition to your standard DDR4/5, not a replacement. As an example, you would have a Ryzen 7 with 8GB of onboard RAM and 32GB of DDR4.

Is this something that could happen? Would windows and/or software need to be coded differently to take advantage of this?

No, we won't see that, because it makes no sense, and it wouldn't make any difference anyway. Putting DRAM on the same package as an SoC just adds manufacturing complexity and takes flexibility away from OEMs. Intel and AMD don't make DRAM. Some laptops have soldered RAM as well as SO-DIMM slots, but that RAM is not on the same package as the SoC itself; being on the package makes no difference to performance or software anyway.

The only reason Apple does this is because it's space efficient and reduces complexity of the boards, as they make both the CPU and the end-product, something Intel/AMD don't do. There are no performance benefits to this approach for Intel/AMD. But for Apple, it makes perfect sense.
 
It could; it's technically possible, but you would need a lot of control circuitry to decide what data is stored in that local RAM as opposed to the sticks. And that cost/complexity likely outweighs the benefits.

As fantastic as the M1 stuff is, I can't help but think that the Apple route will plateau due to die size, unless they can manage to split it up and go down the chiplet-type route.

M1's DRAMs are not on the die and make no difference to die size.
 
Agreed; my point was more towards the cores and the GPU die aspect of it. How far can that scale?

It really can't scale much beyond what we've seen with the M1 Max; they will have to split things up when they go larger. I can see them doing CPU, DRAM, and GPU dies all on one package.
 
I do laugh at this gaming criticism. Is it designed as a gaming platform? Nope. Is it marketed as a gaming platform? Nope. Is there an ecosystem around it for gaming? Nope. Is it good at gaming? Nope.

Is it designed as a production platform? Yup. Is it marketed as a production platform? Yup. Is there an ecosystem around it for production? Yup. Is it good at production? Yup.

So why there is a fixation on gaming is completely beyond me.

Certain idiots blew the expectations for these chips way out of proportion, specifically for gaming, despite Apple, as you say, making no claims about gaming. There are some people whose understanding of a GPU is "game game game", so when they heard "powerful GPU" their minds automatically went to "must be awesome for gaming". Then there's a user here who made asinine claims about how these are the best gaming laptops, and who has been quiet ever since the reviews came out :D

There's also another crew who basically hate everything Apple does or makes, so they got their bone and can't stop talking about how uncompetitive these are at gaming. They literally run Windows through Parallels, without any GPU drivers, then use Windows' emulation to run some x86 Windows games, again without drivers, to show that these chips can't get good fps even at 720p, so they can come and say they suck at gaming and the GPU sucks or is the same as a 2050 Ti or something.

Now both sides blame the other for their own stupidity and claim all they're doing is refuting the other side; they've got their excuses to keep going.

Apple for some reason really brings out the worst in people, fans and haters alike. It brings the idiot inside all of us to the surface :cry:
 
The main problem with Apple seems not to actually be Apple themselves, but the people who are massive Apple fanboys! A lot of them talk so much crap and are usually very condescending, which in turn brings out people who try to smack Apple down as their way of arguing, and vice versa.

If people were just normal and didn't fanboy over specific companies (not just Apple), the tech world would be a better place. It would be much better in general if people were just fans of specific products and technologies, regardless of where they come from or who made them, but sadly I don't think that's ever going to happen.

It was always obvious to anyone who uses Macs that these chips weren't going to be any good for gaming, for the simple reason that there are no AAA games to even play on them, no matter the hardware! It doesn't get simpler than that. Apple has not taken Mac gaming seriously in any way for a very long time, and made no mention of it during the launch anyway.

Hopefully this gaming meme will die off now that every reviewer has been clear about it: nobody should get these chips for gaming, as if it wasn't obvious to begin with. You can spend half the price, get 2x the gaming performance, and get access to hundreds of titles that you can't even play on the MacBooks regardless of performance.
 
It's totally going to kill sales of gaming laptops with the performance and battery life on offer. The installed base of Mac users and the ability to run Windows will be a tempting market for games developers, and that will drive more sales of the M1.

Lol, at this point I think you're trolling. GPU, games availability, emulation, etc. aside, the screens on these MacBooks have awful response times and ~4-5 frames of trailing, as mentioned by Anandtech's reviewer on Twitter. Very similar to previous Apple IPS screens: they heavily prioritised accuracy over speed, for good reason given their use case.
 
You can't see developers wanting to reach a new market with a large user base and top-level hardware?

I don't know why I'm bothering, maybe for the benefit of someone who's reading. But here it is:

First, it takes several years for this user base to be meaningful in numbers. It's not a large user base now, and it won't be for a couple of years either. Apple is still selling high-end Intel Macs with AMD GPUs and will continue to do so into 2022. What you describe (a large Apple silicon user base with powerful GPUs) is several years away. By then, these current products will be 3-5 years old, and quite likely obsolete.

Second, Apple will need to develop the software side of things to allow this to happen. So far, they haven't done so, and they've shown no indication that they're even interested. What they've done for gaming in Metal so far has been heavily focused on iOS gaming rather than desktop AAA games for Macs. Maybe this changes in the future, but it won't change overnight. It will take years, and again, by then the current generation of products will be old and obsolete.

Third, the rest of the hardware is just not gaming-optimised, e.g. the screens have very high response times, because Apple cares about colour accuracy and sharpness rather than speed. So to compete with gaming laptops, Apple needs new hardware too, and by then there will be new chips.

Fourth, assuming we get feature parity between Metal and Vulkan/DX12, these GPUs are, in their microarchitecture, heavily optimised for compute. There's a reason AMD can match Nvidia in gaming but not in compute, and there's a reason Apple is easily catching up to AMD in compute but not in gaming: the microarchitectural priorities are different. That means even if all of the above happens, Apple will still need to prioritise gaming in their future GPU microarchitectures, i.e. products that haven't been released yet and won't be for a few years.

So to make it clear, these current products, the M1 Pro/Max included, won't make a difference to the gaming industry and won't be gaming laptops. Whether things will change in the future is irrelevant; these products won't be part of that future even if Apple becomes the king of gaming.
 
I'm really looking forward to the M2. I can see Apple adding even more dedicated processing units, who knows what for, but it's cool to see them have so much control over what they deem important.

I've just gotten into Final Cut a bit more seriously, and would love some hardware-accelerated ProRes encode/decode, but for now my M1 MBP is still pretty snappy!

Assuming the M2 uses the A15's uarch and node, we should expect a ~10% faster CPU with ~20% better power efficiency, and a ~25% faster GPU.
 
All of which would be appreciated, although they still won't let us play Cyberpunk at 4K ultra just yet... That being said, what if in the background they are working on ray tracing cores? Or DLSS-type acceleration? Maybe to also put into Apple TVs and iPads? I guess if these cores can also be helpful outside gaming, this is more likely...

Well, Cyberpunk isn't even released for macOS anyway; if they released a supercomputer GPU it would make no difference. AMD's FidelityFX works on M1 GPUs. As for ray tracing, there is no hardware support, only software ray tracing on Metal.

Maybe Apple is doing stuff behind the scenes, who knows. There's potential here, but that's all there is for now.

In a more practical sense, I could see them adding some sort of AR-boosting capabilities (they might already be covered by the general ML cores). I can see Apple's next push into AR expanding what we expect computers can do. Something about processing spatial info in real time, but extremely power-efficiently?

These chips are already a winner for Apple in wearables; just look at how successful the Apple Watch has been compared to the competition, and the chip is a big part of it. For AR wearables, it will be even more so. There's a reason Apple is focusing so much on the efficiency cores of their silicon, which has gone under the radar: this year they made them 25% faster, and the A15's E-cores are now 3.5x faster than ARM's A55 efficiency cores. This performance-per-watt advantage in both CPU and GPU is critical for AR devices, which have to be lightweight and battery-operated to become mainstream products.

So no doubt their AR chips will have dedicated silicon in there to boost some AR-specific features, e.g. greater spatial awareness. The ML cores are effectively TPUs, so any tensor computations can be offloaded there, which again is helpful for AR.
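For context, that kind of offload is already exposed to developers today. A minimal sketch using coremltools on an Apple silicon Mac (the model file name is a placeholder I've made up, and input names depend entirely on the model):

```python
import coremltools as ct

# "model.mlmodel" is a placeholder for any converted Core ML model.
# ComputeUnit.ALL lets the OS schedule supported ops on the Neural Engine
# (the "ML cores"), falling back to the GPU or CPU where needed.
model = ct.models.MLModel("model.mlmodel", compute_units=ct.ComputeUnit.ALL)

# Input names and shapes depend on the model, e.g.:
# prediction = model.predict({"image": pil_image})
```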
 
There is zero chance that the M1 GPU software and chip hardware teams are not at least talking to each other about ray tracing. It can also be used for special effects generation for film/movies. I can see a big announcement in a few years about how fast it is in Motion or another effects program, tail-ended with improvements in games as well. Who knows, maybe the types of calculations used for ray tracing are also super useful elsewhere, like calculating how WiFi or sound waves bounce through a room, and you pair it with LIDAR scanning to give new options.

Wish I could jump in a time machine to see where Apple is in ten years... I guess until then I will entertain myself watching people get mad about benchmark charts.

Always more exciting to see it in action every year than to jump 10 years ahead and see the result :D
 
News time: if you take the M1 Max chip, turn it upside down, and photograph it (with the right tools), you can see that the chip has a die-to-die interconnect along one edge, a detail that was never shown in any of the marketing materials.

So Apple has already been playing with interconnects, and it's possible that the M2/M3 will be MCM CPUs capable of 2x to 4x the performance of the M1 Max.

Yeah, they're going to put multiple M1/M2 Max into the same machine for iMac Pro/Mac Pro.

Just catching up on this thread. Amazing chips from Apple. They will be the go-to choice for many looking at video editing and other specific use cases.

A side note:
I love watching people make a complete **** of themselves. Cheers jigger.

Let's see if jigger ever returns to this topic :D
 
This is actually what stops me from buying into macOS. For work (development), macOS would be great, but I'd still need my Windows PC for games. Right now I just dual-boot Linux and Windows, which meets both requirements.

That's what I used to do before I got my first Mac. Then decided to just have a separate gaming PC.
 