Apple recently announced a transition in the Mac’s hardware architecture that promises to be the most significant in the product’s history. But how will this affect musicians and audio engineers?
“Let’s talk about transitions,” remarked Steve Jobs at the company’s annual Worldwide Developers Conference (WWDC) back in 2005. He was about to announce that, after much speculation, Apple would begin a two‑year process of replacing the PowerPC chips at the heart of the company’s Mac computers with processors developed by rival chipmaker Intel. “Now, why are we going to do this?” quipped Jobs. “Isn’t the business great right now? Why do we want another transition?”
In recent years, there have undoubtedly been those wary of contemplating such a question again. For some time, it’s been widely rumoured Apple would switch away from Intel processors to ARM‑based alternatives, similar to those used in the company’s other devices like the iPad. So when Tim Cook announced during this year’s WWDC keynote in June that Apple would embark on another two‑year transition, re‑engineering Macs to be based on what the company is referring to as Apple Silicon, it was perhaps the least surprising surprise.
While it may seem like yesterday for long‑time Mac aficionados that Apple began the move from PowerPC to Intel processors, it’s worth remembering this was now 15 years ago. And the period since then has been the Mac’s longest relationship with a hardware architecture in the product’s 36‑year history. Apple’s original Macs were based around Motorola’s 68k family of chips (starting with the 68000 processor in 1984), and the final 68k Mac in production was 1995’s PowerBook 190, which used a 68LC040 — a cheaper derivative of the 68040 that lacked a floating‑point unit!
The first PowerPC‑based Macs — the PowerMac 6100, 7100, and 8100 — were endowed with a PowerPC 601 processor and appeared in March 1994. And the final PowerPC Mac was introduced 11 years later in November 2005: a PowerMac G5 that used the PowerPC 970MP chip, the first dual‑core PowerPC, essentially two 970FX processors on a single piece of silicon.
Cook remarked that Apple still have some Intel‑based products in the pipeline, as we’ve seen with the recent 27‑inch iMac update. So we might not have seen the final Intel Mac just yet, since Apple have always released Macs that overlapped architectural transitions. Indeed, PowerPC‑based Macs were still shipping when Apple announced the first Intel‑based Macs in January 2006: the iMac and MacBook Pro, with Jobs noting “we’re kind of done with Power!”
Apple transitioned to Intel not because PowerPC processors weren’t performant at the time — indeed, the Xbox 360 and the PS3 consoles both used one or more PowerPC cores to drive performance in the living room. Rather, Apple’s motivation was that Intel chips achieved better performance per Watt, allowing them to run cooler using less energy without sacrificing performance. And for the last 15 years, Intel’s processors have largely enabled Apple to introduce some pretty great products, such as the category‑defining MacBook Air, the aluminium iMacs and iMac Pro, and the latest Mac Pro.
The move to Apple Silicon is once again motivated largely by thermals, although this time for slightly different reasons, due to the effective end of two laws in computer architecture and the implications of another: Moore’s law, Dennard scaling, and Amdahl’s law. And, at the risk of appearing sciolistic, it’s worth briefly noting the significance of these laws, and why their diminishment has led Apple to decide the Mac needs such a radical brain transplant at this moment in its history.
Moore’s law is perhaps the best known of the three. Named after Intel co‑founder Gordon Moore, it states that the number of transistors on a chip will double approximately every two years. This observation paved the way for an exponential performance increase in general‑purpose computer chips, although Moore himself has joked that “all exponentials come to an end, it’s just a question of when”.
The slowing of this trend in turn saw the limits of a related scaling law being reached. While Dennard scaling sounds like a Blade Runner reference (it’s actually named after IBM researcher Robert Dennard), it describes how power density remains constant as transistors become smaller. Simply put, if the transistor density doubles from one generation to another, the power consumption for a given area remains the same even though there are now twice the number of transistors. So, Moore’s law gave you performance, whilst Dennard scaling kept the thermals in check.
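The arithmetic behind Dennard scaling can be sketched numerically. In the classic formulation, each process generation shrinks linear dimensions and supply voltage by a factor 1/k while raising clock frequency by k; since dynamic power per transistor goes as C·V²·f, it falls by 1/k², exactly offsetting the k² rise in transistor density. A minimal sketch in C, using relative units (the structure and figures here are illustrative, not from the article):

```c
#include <stddef.h>

/* Relative (unitless) measures for one process generation. */
typedef struct {
    double density;              /* transistors per unit area */
    double power_per_transistor; /* dynamic power per transistor */
} Generation;

/* Classic Dennard shrink by factor k: dimensions and voltage scale
 * by 1/k, frequency by k. Capacitance C scales 1/k, V^2 scales
 * 1/k^2, f scales k, so power per transistor scales 1/k^2, while
 * density rises by k^2. */
Generation shrink(Generation g, double k)
{
    Generation next;
    next.density = g.density * k * k;
    next.power_per_transistor = g.power_per_transistor / (k * k);
    return next;
}

/* Power density = density x power per transistor: constant across
 * generations while Dennard scaling held. */
double power_density(Generation g)
{
    return g.density * g.power_per_transistor;
}
```

With k ≈ 1.4 (so density doubles, as in the text), the power density after a shrink comes out the same as before — which is precisely the property that broke down as transistors approached atomic scales, leaving the extra transistors with nowhere to put their heat.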
However, where these laws had made it relatively straightforward to increase processor performance by scaling up vertically, their limits meant that to make use of an ever‑larger number of smaller transistors, it became necessary to scale processors horizontally instead. Therefore, over the last 15 years single‑core chips gave way to multicore implementations, essentially shifting the performance problem from hardware to software. Where developers were accustomed to their applications running faster on newer chips without much effort, it was now necessary to parallelise as much code as possible to run on multiple homogeneous cores. Such software engineering is a non‑trivial task, especially when it comes to scaling native audio engines.
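To make the parallelisation problem concrete, here’s a minimal sketch in C of the ‘one worker per channel strip’ idea using POSIX threads. The channel count, block size, and per‑channel gain processing are all hypothetical stand‑ins for a real engine’s DSP:

```c
#include <pthread.h>
#include <stddef.h>

#define NUM_CHANNELS 8   /* illustrative mixer size */
#define BLOCK_SIZE   256 /* samples per processing block */

/* Work item for one channel strip: a block of samples and a gain. */
typedef struct {
    float *samples;
    float  gain;
} ChannelJob;

/* Per-channel DSP (here just a gain); channels are independent,
 * so they can run concurrently on separate cores. */
static void *process_channel(void *arg)
{
    ChannelJob *job = (ChannelJob *)arg;
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        job->samples[i] *= job->gain;
    return NULL;
}

/* Process one block of all channel strips in parallel, then wait
 * for every channel to finish before the mix bus can proceed. */
void process_block(float buffers[NUM_CHANNELS][BLOCK_SIZE],
                   const float gains[NUM_CHANNELS])
{
    pthread_t  threads[NUM_CHANNELS];
    ChannelJob jobs[NUM_CHANNELS];
    for (int c = 0; c < NUM_CHANNELS; c++) {
        jobs[c].samples = buffers[c];
        jobs[c].gain    = gains[c];
        pthread_create(&threads[c], NULL, process_channel, &jobs[c]);
    }
    for (int c = 0; c < NUM_CHANNELS; c++)
        pthread_join(threads[c], NULL);
}
```

Real audio engines don’t spawn threads per block like this — they keep persistent, real‑time‑priority worker pools sized to the core count and hand work over via lock‑free queues to meet hard deadlines — which is part of why scaling a native audio engine is the non‑trivial task described above.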
The rise of multicore processors took us into the realm of Amdahl’s law, named after the computer scientist Gene Amdahl, whose career had begun at IBM before he started his own company. One application of Amdahl’s law describes how the speed‑up you’ll see in parallel parts of an application by adding more cores (such as processing channel strips in an audio mixer) is limited by the parts that can’t be parallelised. Simply put, this means you can’t just keep adding cores to see a linear increase in performance. And you can see an example of this in the Mac Pro review in the May 2020 issue, where the performance improvement of different audio applications was plotted against the number of cores.
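Amdahl’s law itself is a one‑line formula: if a fraction p of the work can be parallelised, the speedup on n cores is 1 / ((1 − p) + p/n). A small sketch (the 90 percent figure below is purely illustrative):

```c
/* Amdahl's law: overall speedup on n cores when a fraction p
 * (0 <= p <= 1) of the work can be parallelised. The serial
 * fraction (1 - p) caps the speedup at 1 / (1 - p), however
 * many cores are added. */
double amdahl_speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / (double)n);
}
```

With p = 0.9, for example, eight cores yield roughly a 4.7x speedup, and even an unlimited number of cores can never exceed 10x — which is why plots of audio performance against core count flatten out rather than climbing linearly.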
Apple Silicon aims to overcome these limitations in traditional computer architecture by adopting a more heterogeneous approach to processing on a single chip known as a system on a chip (SoC). An SoC makes it possible to implement a full computer on a single chip, rather than requiring a larger chipset and motherboard to handle different tasks such as input and output controllers, graphics, and so on.
In the WWDC keynote, Apple’s Senior Vice President of Hardware Technologies, Johny Srouji, explained that the company have shipped over two billion SoCs in the last 10 years, since the introduction of the A‑series chips in 2010. The iPad and iPhone 4 were the first of the company’s products to be powered by its very own A4 silicon, which used CPU cores based on the ARM instruction set that’s proved popular in mobile devices thanks to its traditionally high‑performance, low‑power‑consumption design. And while any company can license designs from ARM, Apple are one of a few companies whose licence permits them to develop custom, hardware‑level implementations of the ARM instruction set.
Such licensing explains why Apple’s ARM‑based chips tend to outperform off‑the‑shelf solutions. And over the last decade, Apple’s A‑series silicon has seen a 100x improvement in CPU performance, with a focus on performance per Watt.
However, when one considers an SoC, and Apple Silicon in particular, we’re no longer just discussing a chip with homogeneous, general‑purpose CPU cores, whether based on ARM, Intel, or any other instruction set. Instead, the industry is moving towards so‑called Domain Specific Architectures, which Apple Silicon packs on to a single heterogeneous chip: quite a bit of terminology for a single sentence.
An example of a heterogeneous chip would be one that includes both CPU and GPU cores for general‑purpose and graphics‑specific processing, and we’ve seen the rise of such architectures during the last 20 years, such as those Macs that rely on an Intel processor’s ‘integrated’ graphics rather than having a dedicated GPU.
Apple’s own A‑series SoCs have always incorporated graphics processing, often accompanied by variants appended with the letter ‘X’, offering twice the graphics performance with wider memory subsystems. The first of these was the A5X, necessitated by the third‑generation, so‑called Retina iPad, which needed to process twice the number of pixels per frame. And, since then, such chips with the ‘X’ suffix have been used in all high‑end iPads, including the A12Z in the latest iPad Pro (which is based on the A12X used in the previous generation). If you thought Apple’s CPU performance increase was impressive, over the same 10‑year time span there has been a staggering 1000x increase in GPU power.
However, graphics are just one example of the many types of domain‑specific processing that Apple Silicon handles. If you look at the slide from the WWDC keynote that gives an overview of Apple Silicon, you’ll notice parts of the chip dedicated to tasks like machine learning, cryptography, low‑power video playback, high‑performance video editing, and even high‑efficiency audio processing. And because Apple Silicon will scale across all of the company’s devices, it will provide a common architecture that leverages the advantage of tighter integration between hardware and software.
While Apple Silicon sounds great in theory, how will Apple make this transition easy and transparent to users in practice? As you might expect, the answer is software: the operating system — Mac OS — and compatibility for both first‑ and third‑party applications. For the operating system, Apple Silicon runs the forthcoming Mac OS Big Sur release, which was also unveiled at WWDC, and is deemed significant enough for Apple to baptise it with the first major new version number since Mac OS 10.0 was introduced back in 2001 — yes, Mac OS is going to 11 on both Intel and future Apple Silicon‑based systems.
Native binaries will enable applications to take full advantage of Apple Silicon, and Apple already have their own catalogue running natively, including Pro Apps like Final Cut Pro and, of more significance to those reading this magazine, Logic Pro. The concept of Universal Binaries first seen by Mac users in the transition from PowerPC to Intel is being resurrected, allowing native binaries for both Intel and Apple Silicon to be included in a single bundle. And although applications will be the most common Universal Binary, plug‑ins, app extensions and more can be delivered using the same approach.
For those applications where extensive work will be required to create native binaries, Apple have a technology called Rosetta 2 that allows software created for Intel‑based Macs to run as is on Apple Silicon. The original Rosetta was deployed during the previous transition, making it possible to use PowerPC applications on Intel Macs, and worked quite well for the majority of general‑purpose software, despite the inevitable trade‑offs in terms of performance.
When Apple moved from 68k to PowerPC‑based Macs, the Mac operating system included an emulator for the newer Macs to run older applications, and this remained part of the classic Mac OS until the very end. However, in the transition to Intel Macs, rather than emulating PowerPC‑based Macs, Rosetta used a technique known as dynamic binary translation that translated existing PowerPC code for the Intel architecture in the background during execution. This led to Apple describing Rosetta as “the most amazing software you’ll never see” on the company’s web page.
Such translation was largely possible because, despite being compiled for a different processor, a PowerPC application was still a Mac application written for the same operating system using established application programming interfaces (APIs). And although not every PowerPC instruction was supported, such as those specific to the G5, a cleanly written PowerPC application ‘just worked’ so far as most end users were concerned.
Rosetta 2 takes the ideas of Rosetta to the next level, this time translating Intel code to ARM. Unlike its predecessor, Rosetta 2 translates as much code as possible when an application is installed, rather than leaving it all until runtime. As before, though, not every Intel instruction is supported: Intel’s Advanced Vector Extensions (AVX), for example, which are often used by audio applications when optimising DSP code, are not translated.
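This is one reason well‑written DSP code detects CPU features at runtime rather than assuming them. A hedged sketch in C (GCC/Clang x86 intrinsics; the trivial sum kernel stands in for real DSP): under Rosetta 2 the AVX check reports false, so the application quietly takes the scalar path instead of crashing on untranslated instructions.

```c
#include <stddef.h>

#if defined(__x86_64__) || defined(__i386__)
#include <immintrin.h>

/* AVX kernel: sums eight floats per iteration. Compiled for AVX via
 * a target attribute so the rest of the file needs no -mavx flag. */
__attribute__((target("avx")))
static float sum_avx(const float *x, size_t n)
{
    __m256 acc = _mm256_setzero_ps();
    size_t i = 0;
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3]
            + lanes[4] + lanes[5] + lanes[6] + lanes[7];
    for (; i < n; i++)   /* tail samples left over from the vector loop */
        s += x[i];
    return s;
}
#endif

/* Portable scalar fallback -- the path taken under Rosetta 2. */
static float sum_scalar(const float *x, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

typedef float (*sum_fn)(const float *, size_t);

/* Pick a kernel once at startup, based on what the (possibly
 * translated) CPU actually reports. */
sum_fn select_sum(void)
{
#if defined(__x86_64__) || defined(__i386__)
    if (__builtin_cpu_supports("avx"))
        return sum_avx;
#endif
    return sum_scalar;
}
```

Applications that dispatch this way simply run a little slower under translation; those that assume AVX is always present are the ones that will need native updates before they behave on Apple Silicon.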
Transitioning the Mac to Apple Silicon will undoubtedly be a giant leap forward for the platform. However, given the first models will be arriving in the very near future [Apple announced M1-equipped models on November 10, 2020 - Ed.], the one question quivering on the lips of every Mac‑based musician and audio engineer I’ve spoken to is: “I need to purchase a new Mac; should I buy an Intel model or wait for the new Apple Silicon‑based systems?” This is, of course, a perfectly reasonable, sensible, and obvious question to ask, although I think one only needs to look at the previous transition to deduce an answer since, as the French would say, plus ça change!
Firstly, it’s never a good idea to base any future purchase decisions on products that don’t yet exist, and while Apple Silicon is the next big thing for Apple, it’s going to take some time before it becomes the next big thing for the majority of creative professionals. As with the original implementation of Rosetta, Rosetta 2 will solve initial application compatibility for most users. However, offering legacy application compatibility using techniques like emulation and translation will always incur a performance penalty; and even though Rosetta 2 is faster than its predecessor, it’s still not ideal for running demanding music and audio software that will get the most from a system.
Logic Pro is already running natively on Apple Silicon, as mentioned, though users of other software will probably have to wait slightly longer, even after Apple begin shipping Macs with the new architecture. I would posit that cross‑platform applications, such as Cubase, Nuendo, Live, Reason and so on, may have the hardest time embracing Apple Silicon, since, despite running different operating systems, both Intel Macs and Windows‑based computers have used similar hardware. And if history tells us anything, Pro Tools is likely to be one of the last applications to fully make the transition. However, applications represent just one part of the music and audio software ecosystem.
Plug‑ins will also need adapting to run natively on Apple Silicon, and, given that plug‑ins are generally simpler than host applications, this is likely to be less of an issue — except where plug‑ins are no longer supported. One caveat is that while native applications can only run native plug‑ins, Intel applications running under Rosetta 2 can only run plug‑ins designed for Intel Macs in the same process. This means we could see a return of various ‘bridging’ solutions, where a native plug‑in streams audio from a non‑native plug‑in running in a separate host process via Rosetta 2. Although anyone who remembers such solutions from the more recent 32‑ to 64‑bit transition will likely recall this approach with a grimace.
Finally, there is, of course, the inevitable subject of specialised hardware. Copy protection devices like iLoks and eLicensers will require new drivers for Mac OS Big Sur on Apple Silicon, whereas audio hardware will run the gamut from ‘just working’ to requiring new or updated drivers. Any USB audio or MIDI device that is class compliant should, in theory at least, continue to work as before without any additional software. But any audio or MIDI device requiring its own drivers will require new, native software to function on Apple Silicon‑based Macs. Again, it will be interesting to see how Avid handle this transition, since more powerful Pro Tools systems rely heavily on hardware for both audio I/O as well as DSP to implement voices, mixing and processing tasks such as algorithms provided by plug‑ins.
The next two years will be an interesting time for Mac users, and the potential of using custom silicon is clearly immense as Apple boldly embrace the future. We have, after all, already seen what’s possible with this approach on the iPhone and iPad, although it will be creative professionals like musicians and audio engineers who will be amongst those to suffer the most turbulence in the transition. However, as previous shifts have shown, whether in hardware as the platform moved from 68k to PowerPC to Intel processors, or in software from Classic Mac OS to what is now referred to simply as ‘macOS’, Mac users always end up better off once the silicon dust has settled.
Just as Apple released a Developer Transition Kit (DTK) for developers to test their applications with a preview of Mac OS 10.4.1 running on Intel, the company announced an equivalent DTK for approved developers to get started on Apple Silicon. Where the Intel prototype cost $999 and used a 3.6GHz Pentium 4 processor in a PowerMac chassis — those were the days! — the new DTK costs just $500, comes housed in a Mac mini case and is based around an A12Z (the same used in the 2020 iPad Pro) with 16GB memory and 512GB of storage running the Mac OS Big Sur preview.
It’s important to stress, as Apple did during the keynote, that the DTK is solely a development platform and not a product, and is therefore not representative in any way, shape, or form of the Macs Apple will eventually ship using their own silicon. The idea is to provide something for developers to test their existing products with and begin the transition to native applications.
Beyond this overview, there are two reasons why I can’t write about anything that wasn’t disclosed in the keynote. Firstly, I don’t have a DTK; and secondly, even if I did, Apple prohibit the publication of any information gleaned from using it. However, the threat of losing access to the Universal App Quick Start Program doesn’t seem to have prevented certain developers from leaking Geekbench results to various Mac rumour sites. For example, 9to5Mac cited a tweet with a screenshot showing single‑ and multicore scores of 833 and 2582 (with four cores) respectively.
If these numbers are correct, they are remarkable for several reasons. For one thing, Geekbench (5.2.0) would have been running under Rosetta 2 using translated code for the tests, without support for Intel’s AVX instructions, as mentioned in the main text, that were implemented in Geekbench 5.1. And, even more notable at this early stage, the results seem to trample over an ARM‑based system running Windows 10. As an example, Microsoft’s Surface Pro X scores single‑ and multicore results of 746 and 3025 (with eight cores) basically executing the same tests using native ARM code!
I’ll repeat myself by stressing again the DTK is not a product and, as such, isn’t a simulacrum of the Apple Silicon‑based Macs to come, although the leaked numbers do of course invite comparisons with other Macs. The single‑ and multicore performance is equivalent to a 2013 ‘trashcan’ Mac Pro with a 3GHz Xeon E5 and a 2012 Retina MacBook Pro with a 2.3GHz Core i7 respectively. And with the current midrange, i5‑based MacBook Air (reviewed in the August issue) scoring 1023 and 2742, a future Mac occupying this spot in the product line‑up using Apple Silicon should be rather interesting.