
The Apple M1

Anurag Mukherjee
Published in SIGMA XI VIT
7 min read · Jan 29, 2021

Why it made headlines, and why it matters

For years, Apple's personal computers have run microprocessors sourced from Intel, even though their smartphone lineup uses Apple's own home-grown chips. So when Apple announced their next-gen chip for PC usage, dubbed the M1, it didn't come as too big of a shock to the community. Rather, it was met with the fervor deserving of an Apple product. However, when its specifications were released, and then when it was finally launched, it smashed all expectations and then some. Let's take a look at what made this little fish turn into a soaring dragon.

The Legacy…and the Architecture

Most modern-day Intel and AMD processors are based on the x86 platform, utilizing the Complex Instruction Set Computer (CISC) architecture. Simply put, it is a design in which a single instruction can execute several low-level operations, such as a memory load, an arithmetic operation, and a memory store. This platform has been in widespread use since the late 1970s. In the early 1980s, however, research projects at Berkeley and Stanford introduced the Reduced Instruction Set Computer (RISC) architecture, which led to the modern-day Advanced RISC Machines (ARM) platform.

The CISC approach attempts to minimize the number of instructions per program at the cost of more cycles per instruction. RISC does the opposite, reducing the cycles per instruction at the cost of more instructions per program. In layman's terms, CISC prioritizes raw power, RISC prioritizes efficiency. Because of this lower power consumption, better performance-per-watt, and longer battery life, most modern smartphone chips are RISC-based ARM designs.
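The trade-off above can be made concrete with the classic CPU performance equation: execution time equals instructions per program, times cycles per instruction (CPI), divided by clock rate. A minimal sketch, using made-up instruction counts and CPI values purely for illustration (they are not real chip data):

```python
# Illustrative only: the classic CPU performance equation,
#   time = (instructions per program) * (cycles per instruction) / clock rate.
# The numbers below are hypothetical, chosen to show the CISC/RISC trade-off.

def exec_time(instructions, cpi, clock_hz):
    """Seconds to run a program under the classic performance equation."""
    return instructions * cpi / clock_hz

# Hypothetical CISC machine: fewer, more complex instructions, higher CPI.
cisc = exec_time(instructions=1_000_000, cpi=4.0, clock_hz=3e9)

# Hypothetical RISC machine: more, simpler instructions, CPI close to 1.
risc = exec_time(instructions=2_500_000, cpi=1.2, clock_hz=3e9)

print(f"CISC: {cisc * 1e6:.0f} us, RISC: {risc * 1e6:.0f} us")
```

Which side wins depends entirely on how far each factor moves: neither design is inherently faster, which is exactly why the two philosophies have coexisted for decades.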

Given this contrast, these platforms represent two extreme ends of the spectrum. Before we delve into what they have to do with the M1 being the M1, let's differentiate them a bit more clearly.

ARM vs x86–64

Till now, we have been referring to the CISC-based architecture as purely x86. However, 'x86–64' is a bit more technically appropriate. The '64' here refers to the system being 64-bit, a major architectural feature; simply put, x86–64 is the 64-bit version of the original x86 instruction set. Since almost all systems nowadays are 64-bit, the two names have become synonymous, so all mentions of x86 here actually refer to the x86–64 instruction set.

Coming back to the topic at hand, the core difference between these is that ARM instructions operate only on registers, with a few dedicated instructions for loading and saving data from/to memory, while x86 can operate directly on memory as well. Thus, ARM is a simpler architecture, leading to a smaller silicon area and simpler instructions. But simple instructions come at a cost: more instructions are required to do the same task, which increases memory consumption and execution time. ARM processors make up for the longer instruction streams with pipelining and simpler, faster-running cores, and the net result is far lower power consumption.
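A toy model makes the load/store distinction tangible. The sketch below invents two mini instruction sets (the opcode names are hypothetical and match no real ISA) and performs the same `mem[a] += mem[b]` operation on each — one instruction on the register-memory style, four on the load/store style:

```python
# Illustrative only: a toy load/store (ARM-style) machine vs a
# register-memory (x86-style) machine computing mem["a"] += mem["b"].
# Opcode names are invented for this example; neither is a real ISA.

# Register-memory style: one instruction may read and write memory directly.
x86_style = [
    ("ADD_MEM", "a", "b"),        # mem[a] = mem[a] + mem[b]
]

# Load/store style: only LOAD/STORE touch memory; ADD works on registers.
arm_style = [
    ("LOAD",  "r0", "a"),         # r0 = mem[a]
    ("LOAD",  "r1", "b"),         # r1 = mem[b]
    ("ADD",   "r0", "r1"),        # r0 = r0 + r1
    ("STORE", "r0", "a"),         # mem[a] = r0
]

def run(program, mem):
    """Interpret a tiny program against a dict-based 'memory'."""
    regs = {}
    for op, x, y in program:
        if op == "ADD_MEM":
            mem[x] += mem[y]
        elif op == "LOAD":
            regs[x] = mem[y]
        elif op == "ADD":
            regs[x] += regs[y]
        elif op == "STORE":
            mem[y] = regs[x]
    return mem

print(run(x86_style, {"a": 10, "b": 32}))  # same result, 1 instruction
print(run(arm_style, {"a": 10, "b": 32}))  # same result, 4 instructions
```

Each of the four simple instructions is cheap and easy to pipeline, while the single complex one packs more work (and more decoding logic) into one step — the whole CISC/RISC argument in miniature.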

https://www.esa-automation.com/en/arm-or-x86-esa-automation-suggestions-to-choose-the-best-processor/

x86 cores consume a lot more power than ARM cores due to their increased complexity. But this complexity is what enables them to deliver better execution times and thus better overall performance. This, along with the emphasis on internal logic and richer instruction sets, makes the cores power-hungry beasts.

In general, a PC has more multitasking needs and does not suffer from power limitations, while handheld devices need to maintain a performance-power balance. So, CISC x86 for PCs and RISC ARM for mobiles. Perfect…until, that is, Apple announced the ARM-based M1 for PCs. Note, PCs, not mobiles. ARM-based laptops were very niche before this. Apple's intentions were clear: the Macs were going to be streamlined for efficiency, not raw power. However, the very advantage a PC offers is that of raw power. Something feels off, doesn't it? Well, here is where Apple went really out of the box in creating something new.

The Integration

Modern-day PC performance depends on a lot of inter-related factors. It's not just the microprocessor and RAM anymore. Even GPUs have lately been advancing in leaps and bounds, to the point where a single desktop GPU costs as much as two or three high-end processors; case in point, the recent RTX 3090. So what did Apple do? …well, they simply combined everything.

The M1 is Apple's first System on a Chip (SoC) designed for the Mac. It packages the actual CPU, the RAM, the GPU and, for the heck of it, even a Neural Engine for machine-learning acceleration onto a single unit. And to top it all off, this entire thing is fabricated using a 5 nm process. For context, AMD introduced its latest generation of 7 nm processors just last year. So, relatively speaking, the M1 can be made smaller than competing chips. Or, it can fit way more transistors than its competitors…and that is just what it does. With a whopping 16 billion transistors, the M1 is packed to the core, and this is what makes such insane integration possible.

https://www.apple.com/in/mac/m1/
The M1 SoC

Packaging the RAM alongside the processor like this is a first for the Mac lineup. The unified memory architecture uses two LPDDR4 SDRAM* packages running at 3733 MHz, allowing both the CPU and the GPU to access the same memory at high speed.

And then, just because they could, they also put in an image signal processor, a Non-Volatile Memory Express (NVMe) storage controller, Thunderbolt 4 controllers, and a Secure Enclave. No point in wasting valuable real estate now, is there?

So, put simply, the M1 gives up little in punching power, yet is far more efficient while delivering it. What more do you want from a chip? Let an ARM chip run x86 software? Well, the M1 can do even that, thanks to Rosetta 2, macOS's dynamic binary translation layer.
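Conceptually, dynamic binary translation works by converting source-ISA instructions into target-ISA instructions and caching the result so each piece of code is translated only once. The following is a heavily simplified sketch of that idea — the two mini-ISAs and their opcodes are invented for illustration and bear no relation to Rosetta 2's actual internals:

```python
# Illustrative only: a toy dynamic binary translator. Instructions from an
# invented "source ISA" are translated into an invented "target ISA" once,
# cached, and executed. This is the general technique, not Apple's design.

translation_cache = {}

def translate(src_program):
    """Translate a tuple of source instructions, memoizing by program."""
    if src_program not in translation_cache:
        target = []
        for op, *args in src_program:
            if op == "INC":                          # source-only opcode
                target.append(("ADDI", args[0], 1))  # target equivalent
            elif op == "ADDI":
                target.append(("ADDI", *args))       # passes through
        translation_cache[src_program] = tuple(target)
    return translation_cache[src_program]

def execute(target_program, regs):
    """Run translated code on the 'target' machine."""
    for op, reg, imm in target_program:
        if op == "ADDI":
            regs[reg] = regs.get(reg, 0) + imm
    return regs

src = (("INC", "r0"), ("ADDI", "r0", 5))
print(execute(translate(src), {}))   # second translate() call hits the cache
```

The cache is the key to performance: after the one-time translation cost, subsequent runs execute native target-ISA code directly, which is why translated software can still run fast.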

The Benchmarks

One thing must be made clear here. The thought process behind the M1 is that of efficiency, not power, even though it doesn't lack in the latter. The Macs that will run these chips will be used for daily productivity, not for intensive rendering or video editing. So we are not supposed to compare the M1 with the likes of a Ryzen Threadripper paired with an RTX GPU. But an i5 with a basic GPU, as found in comparable non-Apple PCs or previous-gen Macs? Now that is something worth checking out.

To begin with, the M1 is an eight-core processor (four performance cores plus four efficiency cores), while most i5s are quad- or six-core chips. So the M1 is already running at an advantage.

https://www.apple.com/in/mac/m1/

In the CPU tests, the M1 chip wins hands down, being on average 25% faster than the Core i5 CPU. The M1 is also not much slower than the RX 580X in the GPU scores. This is exciting for Apple, as it clearly demonstrates that the M1's ARM-based architecture can go toe to toe with previous-gen Intel x86 chips in performance. In the past, ARM was great for attaining power efficiency and long battery life. Now Apple is demonstrating that we can have the best of both worlds: high performance and long battery life.

The GPU used in the Apple M1 has eight cores and takes up just a bit more space on the chip than the eight CPU cores. Apple claims the GPU can deliver 2.6 TFLOPS. To put this in perspective, Nvidia's GeForce GTX 1050 Ti from 2016 manages about 2.1 TFLOPS. That's a desktop graphics card with 3.3 billion transistors that draws up to 75 W of power, beaten by integrated graphics on a passively cooled MacBook Air.
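As a quick sanity check on those figures, the throughput gap works out to roughly a quarter in the M1's favor:

```python
# Quick arithmetic on the throughput figures quoted above:
# Apple's claimed 2.6 TFLOPS for the M1 GPU vs ~2.1 TFLOPS for the GTX 1050 Ti.
m1_tflops = 2.6
gtx_1050ti_tflops = 2.1

lead = (m1_tflops - gtx_1050ti_tflops) / gtx_1050ti_tflops
print(f"M1 GPU throughput lead: {lead:.0%}")  # ~24%
```

Keep in mind that TFLOPS is peak theoretical throughput, not delivered game or application performance, so this is a rough comparison rather than a benchmark.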

What it means for what it does

To cut a very long story short, Apple made a remarkably powerful yet ultra-efficient processor with cutting-edge technology. If they hadn't done it, somebody else would have in a couple of years. That's the beautiful thing about technology: stick with the same concept for a prolonged period of time and someone is bound to come along and completely turn everything on its head.

So the important thing is not that Apple did it. No, it's something far simpler and way more profound: they did it FIRST. As with the original iPhone way back in 2007, Apple has proved that something new can be done, and by doing so has started a new trend. A trend in which they now have a considerable head-start. What is more interesting is how the other giants respond to it. Especially with Intel's new CEO Pat Gelsinger recommitting to chip manufacturing and AMD going super strong with its performance-oriented Ryzen series, things are about to heat up. And that is always a good thing, for it means more development at relatively lower cost in shorter periods of time. Perfect for the end consumer. Unless, of course, you are like me, pottering away on an old Core 2 Duo…for me, it is hopeless regardless of what they do.

If you want to check out the various pieces of hardware mentioned, use these links:

  1. Ryzen™ Threadripper™ | Desktop Processor | AMD
  2. GeForce RTX 3090 Graphics Card | NVIDIA
  3. Apple M1 Chip — Apple (IN)
  4. Intel® Core™ i5 Processors

Want to know more about RISC, CISC and microchip architecture? Follow on:

  1. RISC vs. CISC (stanford.edu)
  2. Computer Organization | RISC and CISC — GeeksforGeeks

Want to check out the benchmarks yourself…here you go:

  1. http://bit.ly/3offIqf
  2. http://bit.ly/2KHVqYz
  3. http://bit.ly/3a3Bc48

*LPDDR4 SDRAM: Low-Power Double Data Rate Synchronous Dynamic Random Access Memory…yes, everyone thinks it's a mouthful.

Adieu then…until next time.


Just another IT, electronics, research and anime enthusiast…weird combination, isn't it?