Apple M1 CPU

He couldn't be more wrong if he tried. 64-bit x86 computing is designated `amd64` in build architectures because AMD released the first 64-bit x86 parts and Intel had to copy those instructions.
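(For illustration, a minimal sketch of how that naming shows up in practice. This assumes GCC/Clang, where `__amd64__` and `__x86_64__` are predefined synonyms for the same AMD-designed ISA; Debian and the BSDs call it "amd64", `uname -m` reports "x86_64".)

```c
/* Architecture naming in practice, via compiler-predefined macros. */
#include <stdio.h>

int main(void)
{
#if defined(__amd64__) || defined(__x86_64__)
    puts("64-bit x86, i.e. AMD64 (Intel later adopted it as Intel 64)");
#elif defined(__aarch64__)
    puts("64-bit ARM, i.e. AArch64");
#else
    puts("some other architecture");
#endif
    return 0;
}
```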

Intel's strategic 64-bit direction at the time was Itanium, which turned out not to captivate the market quite how they'd hoped. (I played with Itanium machines around ... 2007ish? They were OK; weirdly, HP-UX on Itanium was missing some POSIX thread primitives that were available almost everywhere else.)
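(A hedged aside: the post doesn't say which primitives were missing, but barriers are the classic example of an *optional* POSIX threads feature, which is why portable code has to probe for them. A minimal sketch, built with `-pthread`:)

```c
/* POSIX barriers are an optional feature, guarded by _POSIX_BARRIERS
   in <unistd.h>; exactly this kind of primitive has historically been
   missing on some Unixes.                                             */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#if defined(_POSIX_BARRIERS) && _POSIX_BARRIERS > 0
static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    printf("thread %ld waiting\n", (long)arg);
    pthread_barrier_wait(&barrier);  /* all four threads rendezvous here */
    printf("thread %ld released\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    pthread_barrier_init(&barrier, NULL, 4);
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}
#else
int main(void) { puts("no POSIX barriers on this platform"); return 0; }
#endif
```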
 
He couldn't be more wrong if he tried. 64-bit x86 computing is designated `amd64` in build architectures because AMD released the first 64-bit x86 parts and Intel had to copy those.

While the Innovators were pushing Itanium ;) At the end of the day, x86-64 is all about that massively long-term backward compatibility on instructions. With what Apple are doing, you can almost leave the backward compatibility to a software layer while transitioning native apps. With x86-64 you can't rely on that, because of its absolutely massive footprint. The reason x86-64 is still here is that the world pretty much develops targeting it.
 
You've moved the goalposts now, and you're absolutely wrong about everything you said in these last few pages. How many times do you need people to correct you before you stop and think that maybe you have no idea what you're talking about?

I am right - to my knowledge, there is no ARM architecture developed by AMD right now that is popular enough.
And this is coming from a person who reads a lot.

I am not going to argue - history will prove who was right or who was wrong.

For now the facts speak for themselves - everyone should drop x86-64. And they are doing it.
 
Do you understand that RISC architectures are much more power-efficient, and that it's in their very basics?

Where did this bizarre "idea" of a free lunch ever come from?
Free lunch means something for free. Where do you see anything for free?
It's OK to admit you don't understand something. There's no need to be aggressive or angry about it.

"Free lunch" is a reference to Heinlein's work, The Moon Is a Harsh Mistress. "There ain't no such thing as a free lunch" was a core concept in the book, and has been used in a variety of ways in economics since. It wasn't original to him, of course, but he did much to keep the idea and concepts in the modern vernacular.

Its relevance here is opportunity cost. If this performance had always been trivially available on the table, someone would already have done this. It's not a new technique or revolutionary change - just a refinement of existing tech. Potentially a very well-executed one, but still not actually new.

As such, there must be a cost paid to get that extra performance. In the past we've seen this bought a number of ways. Sometimes the cost is moving to a new process node, more power, more transistors/hardware, more complexity in the architecture, and so on. It might even be, in this case, just several hundred million spent on really good optimisations.

But there is a cost, somewhere, that will have been paid - something different that they have done or are doing. Nothing game-changing happened.

So - I'm looking for what that cost is. We can't just have a free lunch, or free performance. Somewhere there is a price.

Or it would already have been done, for profit, by someone else.

In the case of lunch: if anyone outside of friends/family offers you lunch, what they are buying is your time or goodwill. Timeshare salesmen used to use it - they feed you, but you have to listen to them. Casinos feed you... so you don't stop gambling your money away.

There ain't no such thing as a free lunch.
 

OK, but there is no more performance in ARM; there is less power consumption.
Why is there less power consumption? It comes from the basics of ARM, which turns on the needed transistors only when it needs to execute code, while x86-64 can never achieve this because it's full of legacy and an unnecessarily high number of transistors and instructions which overload the designs.

"Free lunch" may be applicable to something that is perfect. x86-64 is mediocre at best.
 

Did you even read what I explained to you in this post?

https://www.overclockers.co.uk/forums/posts/34212900/

I laid it out for you as clearly as possible.

And what you said about transistors turning on and off, just laughable. Stop embarrassing yourself.
 
For now the facts speak for themselves - everyone should drop x86-64. And they are doing it.

Lol facts.

I think you should take a look at the server market. ARM has made small inroads, but is still a long way from any significant change, and that's with the leading server OS being Linux and a variety of open-source software that could easily be recompiled for ARM.

If there was a huge market for ARM, Intel and AMD would be all over it.
 
Why is there less power consumption? It comes from the basics of ARM, which turns on the needed transistors only when it needs to execute code, while x86-64 can never achieve this because it's full of legacy and an unnecessarily high number of transistors and instructions which overload the designs.

Just lol, x86 can't turn off transistors?

Step away from the keyboard, you're done.
 
If there was a huge market for ARM, Intel and AMD would be all over it.

The market is for cheap, powerful processing, with an emphasis on power consumption. If ARM processors can do that better than x86 processors, the market will grow. Linux runs well on ARM, and software folks don't really care what they're targeting as long as the tools are present.

Intel had an ARM product for years; I had an Intel ARM board in about 2005. They gave it up. Turns out that might have been part of a pattern of errors on their part.
 
How much of the performance is due to the on-package memory? That clearly isn't cost-effective to scale up, hence only 8GB and 16GB parts being released so far.
They seem to be using LPDDR4X-4266, which is the same as Intel use in the current generation, and possibly AMD too! It's fast, but it's not that big a deal.
Being on-package doesn't make it any faster, but the fact that they are using a Unified Memory Architecture will help a lot where multiple parts of the chip are using the same data, as it doesn't need to be copied. So tasks that heavily use both the CPU and GPU can see significant gains.
They could add 32GB of LPDDR4X-4266 off-package, but I'm not sure how that would impact their UMA! Is there space on the package for 32GB of RAM?
I suspect they will move to DDR5 and off-package RAM for the bigger designs.
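(For scale, a quick back-of-envelope check on the "it's fast but not that big a deal" claim. The bus width is an assumption here - the post doesn't state one; 128-bit is used purely for the arithmetic.)

```c
/* Peak bandwidth estimate for LPDDR4X-4266.
   4266e6 transfers/s * 16 bytes/transfer ~= 68 GB/s. */
#include <stdio.h>

int main(void)
{
    double mt_per_s = 4266e6;          /* mega-transfers per second */
    int    bus_bits = 128;             /* ASSUMED total bus width   */
    double bytes_per_transfer = bus_bits / 8.0;
    printf("peak bandwidth: %.1f GB/s\n",
           mt_per_s * bytes_per_transfer / 1e9);
    return 0;
}
```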
 
It might even be, in this case, just several hundred million spent on really good optimisations.

Apple is one of the richest companies on the planet; I think it's quite likely this is it.

And remember, this doesn't come out of nowhere - they've been refining their chip designs in their tablets and phones for several years. It really looks like they're just executing better than Intel right now.
 
I think you should take a look at the server market. ARM has made small inroads, but is still a long way from any significant change, and that's with the leading server OS being Linux and a variety of open-source software that could easily be recompiled for ARM.
If there was a huge market for ARM, Intel and AMD would be all over it.
It's not in their interest to promote ARM on servers, as currently they have an almost total monopoly - so why would they want to open the door to competition?
Intel and AMD are happy staying with x86/AMD64.
 
It might even be, in this case, just several hundred million spent on really good optimisations.

More like tens of billions of dollars.

They seem to be using LPDDR4X-4266, which is the same as Intel use in the current generation, and possibly AMD too! It's fast, but it's not that big a deal.
Being on-package doesn't make it any faster, but the fact that they are using a Unified Memory Architecture will help a lot where multiple parts of the chip are using the same data, as it doesn't need to be copied. So tasks that heavily use both the CPU and GPU can see significant gains.
They could add 32GB of LPDDR4X-4266 off-package, but I'm not sure how that would impact their UMA! Is there space on the package for 32GB of RAM?
I suspect they will move to DDR5 and off-package RAM for the bigger designs.

That's exactly it. They need 2 chips per SoC, and there's just no viable 16GB LPDDR4X-4266 package on the market right now. And they can't do 4 packages on the SoC, so they have to move off-package if they want to offer more than 16GB, which they'll eventually have to do anyway. They will likely add a large on-chip shared L3 cache (across big and little cores, plus the GPU and maybe even other elements) and move DRAM off-package for their bigger chips that come later.
 
Did you even read what I explained to you in this post?

https://www.overclockers.co.uk/forums/posts/34212900/

I laid it out for you as clearly as possible.

And what you said about transistors turning on and off, just laughable. Stop embarrassing yourself.

Fair enough - maybe I am not exactly right in my statement. What you want to say is that if there had been other companies doing x86, maybe it would have been more competitive.
This means that Intel should open up the licence.

The thing is that even your post shows 10% higher performance on the AArch64 part.

None of these is specific to ARM. As for being "RISC": you can run any workload and count the instructions that x86-64 and AArch64 require for those tasks, and the results come up within 10% of each other. Anandtech ran this benchmark a while back as well.
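(For anyone who wants to reproduce that kind of instruction count on Linux, a minimal sketch using the `perf_event_open` syscall. It builds and runs the same on x86-64 and AArch64; the loop here is a stand-in workload, not the SPEC suite, and you may need to relax `/proc/sys/kernel/perf_event_paranoid`.)

```c
/* Count retired instructions for a workload via Linux perf events. */
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type           = PERF_TYPE_HARDWARE;
    attr.size           = sizeof(attr);
    attr.config         = PERF_COUNT_HW_INSTRUCTIONS;
    attr.disabled       = 1;   /* start stopped; enable around workload */
    attr.exclude_kernel = 1;
    attr.exclude_hv     = 1;

    /* no glibc wrapper for this syscall */
    int fd = (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* stand-in workload: replace with the task you want to measure */
    volatile uint64_t sum = 0;
    for (uint64_t i = 0; i < 10000000; i++) sum += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t count;
    if (read(fd, &count, sizeof(count)) != sizeof(count)) {
        perror("read");
        return 1;
    }
    printf("instructions retired: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}
```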

Since the late 1990s almost all x86 processors have been internally RISC. In simple terms, x86 processors have an instruction cache that creates a queue, which is fed into micro-op decoders (simpler instructions are decoded directly, more complex ones through a microcode engine), and these are passed on to buffers and run through the internal RISC core. This process is generally responsible for about 10-15% of the power consumption in the CPU. ARM CPUs do similar things as well, by the way. This isn't to say the ISA does not matter - it does, and certain x86 design decisions have contributed to the stagnation we're currently seeing by making progress more difficult - but it's only a minor aspect of modern CPUs.
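(A concrete example of that cracking, assuming typical gcc -O2 output; exact register names will vary by compiler version.)

```c
/* One C statement; compare what the two ISAs make of it.
   x86-64 (gcc -O2):   add DWORD PTR [rdi], esi
     - a single "CISC" read-modify-write instruction that the core
       cracks internally into RISC-like micro-ops: load, add, store.
   AArch64 (gcc -O2):  ldr w2, [x0]
                       add w2, w2, w1
                       str w2, [x0]
     - the same three steps, just exposed as three instructions.  */
void add_in_place(int *p, int x)
{
    *p += x;
}
```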

10 years ago, CPUs with the ARM ISA had no performance-per-watt advantage over Intel/AMD ones with x86; in fact, they were behind by a good margin. 5 years ago, they caught up and became on par. Now, with Apple's (and ARM's own) designs, ARM-based CPUs are significantly ahead. This is not inherent to the ARM or x86 ISAs, but because Intel stagnated for a decade and AMD has only just woken up in the last few years, only completely surpassing Intel in the latest Zen 3 generation.

Some extra reading material:
https://community.arm.com/developer...rinsically-more-power-efficient?pi353792392=2

Edit:

In case someone is still somehow concerned by the CISC vs RISC stuff that hasn't been relevant for decades, this is from Andrei (of AnandTech) on Twitter:

[Twitter image: chart comparing SPEC instruction counts, A12 vs 9900K]


Basically, it compares instruction counts of the A12 (AArch64) versus the 9900K (x86) across SPEC sub-benchmarks, and instruction counts are only 9.84% higher on AArch64.
 
"Free lunch" may be applicable to something that is perfect. x86-64 is mediocre at best.

Doing something perfectly is hard. It takes a lot of time, patience, development and money - which was one of the options I gave above: millions of pounds of development funds. The cost of the "lunch" can be paid in so many ways.

But always watch for it - there always is one.
 
Apple is one of the richest companies on the planet; I think it's quite likely this is it.

And remember, this doesn't come out of nowhere - they've been refining their chip designs in their tablets and phones for several years. It really looks like they're just executing better than Intel right now.
Yes, it's a good option. But I'm still looking for other tradeoffs - because there doesn't have to be just one!
 
Intel's strategic 64-bit direction at the time was Itanium, which turned out not to captivate the market quite how they'd hoped. (I played with Itanium machines around ... 2007ish? They were OK; weirdly, HP-UX on Itanium was missing some POSIX thread primitives that were available almost everywhere else.)

Yes, quite true. I do recall, though, that IA64/EPIC was a better-late-than-never Hail Mary against SPARC and DEC Alpha - I think it might have been the fastest Intel part at the time, but the developer tooling and compilers were awful, which meant it was DOA. I guess that's why they had to copy AMD's x86-64 instruction set.
 
Apple is one of the richest companies on the planet; I think it's quite likely this is it.

And remember, this doesn't come out of nowhere - they've been refining their chip designs in their tablets and phones for several years. It really looks like they're just executing better than Intel right now.

The full, free, in-perpetuity licence they have as an ARM founder probably helps a little too ;)
 