We undertake the most comprehensive look at Apple’s M1‑based Macs from a musician and audio engineer’s perspective.
When Apple CEO Tim Cook led the keynote for the company’s Worldwide Developers Conference last June, during which the Mac’s transition to Apple Silicon was announced, he promised that the first Mac with the company’s custom silicon would arrive by the end of the year. Ever a man to keep his word, Cook ensured expectations were justifiably high when Apple announced it would hold an event last November dubbed One More Thing — honouring the immortal words Steve Jobs used when unveiling Apple’s latest creations. And the company rose to the occasion by announcing not one, but three new Macs based on Apple Silicon: a MacBook Air, a 13‑inch MacBook Pro and, unexpectedly, a new Mac Mini. Speculation had suggested the first Apple Silicon‑based Macs might use a variant of the A14 Bionic system‑on‑a‑chip used in the iPad Air, but instead, Apple announced the M1. The number 1 suggests that this is but the first in a new series, while the M likely stands for Macintosh and shouldn’t be confused with Apple’s earlier M‑series motion co‑processors.
Apple recently added two further devices using the M1 chip at the company’s first event of 2021, Spring Loaded, on April 20th. Joining the Mac family was a brand‑new, completely redesigned iMac, which is thinner and available in a rainbow of different colours that hark back to earlier models. The second device raised a few eyebrows: Apple have also put an M1 chip in the 2021 iPad Pro, which further blurs the line between Mac OS‑ and iPad OS‑based devices. This article covers the M1 from the perspective of the Macintosh, so I’ll leave commenting on the new M1‑based iPad Pros for now.
From London To Leeds
As described in the Apple Silicon article in the December 2020 issue (www.soundonsound.com/sound-advice/apple-silicon), the M1 is an SoC, or system‑on‑a‑chip, much like the A‑series chips that have been at the heart of every iPhone since the iPhone 4. Rather than relying on a number of dedicated chips to provide different aspects of the overall functionality, SoCs are like the computing equivalent of Wash & Go. Take a CPU, GPU, T2 security chip and a Thunderbolt controller into the shower? Not me, I just want to SoC and go! Placing system components under one roof in such close proximity brings performance gains in terms of latency, and also improves power efficiency. Improving the ratio between performance and power — usually expressed in terms of performance per Watt — was the driving factor behind the Mac’s transition to Intel, and it’s one of the main reasons for the transition to Apple Silicon. The importance of this balance cannot be overstated, as we shall see, with Apple claiming that the M1 offers the “World’s best CPU performance per Watt”.
The M1 is arguably an evolution of the A14, and could have been called the A14X, as many predicted. Amongst the similarities, both are manufactured with TSMC’s 5nm process — the A14 was the first commercial chip to be built on this node — which TSMC refers to as N5 technology. This technology alone yields a chip that’s around 15‑percent faster whilst consuming about 30‑percent less power than TSMC’s N7 7nm node, which was used to fabricate Apple’s previous A12‑series and A13 Bionic chips.
While the process or node terminology used to describe the manufacturing of a chip has become technically diluted, the 5nm process node roughly means that the smallest feature in silicon on the M1 (or A14) measures five nanometres. To put this in perspective, a human fingernail grows approximately one tenth of a millimetre every day, on average, which equates to around 1.16nm/sec. So, you could say that one of your fingernails grows to the size of the smallest component on an M1 in about five seconds! Consequently, Apple have been able to pack 16 billion transistors into the M1: the most the company have ever put into a single chip.
Little & Large
One of the most important features of the M1 can be traced back conceptually to Apple’s A10 Fusion chip, which debuted in the iPhone 7 and introduced the idea of a heterogeneous architecture. As the term suggests, this contrasts with a homogeneous CPU configuration where all cores are identical, as was the case with previous A‑series chips and virtually every multicore CPU used in previous Macs and beyond. So, while the A10 Fusion was a four‑core chip, two of the cores were designed with high performance in mind while the other two were engineered for maximum energy efficiency, code‑named Hurricane and Zephyr respectively.
This concept was based on ARM’s appropriately named big.LITTLE technology; using an Apple‑designed performance controller, the A10 would switch between the set of performance or efficiency cores as appropriate. This, in effect, made the chip function like a dual‑core CPU, since only one of the two ‘clusters’ could be used at a time. This behaviour would change with the A11 Bionic’s second‑generation performance controller, which allowed all the cores — two so‑called Monsoon performance cores, and four efficiency‑oriented Mistral cores — to be used simultaneously. Apple refer to this as Asymmetric Multi‑Processing (AMP) rather than heterogeneous multi‑processing, with the alternative being Symmetric Multi‑Processing, where all cores are the same.
Like the A14, the M1 features Apple’s latest Firestorm performance cores and Icestorm efficiency cores. However, while the A14 has a six‑core CPU with a maximum clock speed of around 2.99GHz, the M1 features four Firestorm and four Icestorm cores, running at a maximum clock speed of around 3.2GHz. Apple don’t bother to talk about clock speed in their specifications for the M1‑based Macs, which makes sense given that this value no longer has much significance.
The Matrix Reloaded
Since Apple Silicon uses ARM for its CPU architecture, ARM’s Neon instruction set is naturally included as part of the micro‑architecture. This is generally used to speed up matrix maths operations, which are particularly useful for DSP algorithms (see ‘Vector Victor’ box). However, Apple’s engineers have gone one step further and implemented additional, custom and undocumented instructions, providing even greater performance for such operations. These were first referred to as ‘AMX blocks’ during the introduction of the A13 Bionic, allowing that chip’s Lightning performance cores to handle matrix multiplication six times faster than the A12’s Vortex performance cores.
The A14 introduced the next generation of these instructions, which were also briefly mentioned during the M1’s unveiling, although Apple now refer to them as Machine Learning Accelerators. Because these accelerators are undocumented, developers can’t address this part of the CPU directly, as is possible with similar extensions that are documented. Instead, Apple encourage the use of the Accelerate framework, which provides high‑level functions for vector and matrix computation, DSP, lossless compression and more, which, in turn, utilise the accelerators on the CPU as appropriate. This is presumably so that Apple can make hardware changes without having to worry about breaking compatibility: all a developer need do is recompile their apps using the latest version of the Accelerate framework, which seems like a fair trade‑off.
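The shape of that trade‑off can be sketched in a few lines of Python. To be clear, this is a generic illustration rather than Apple’s API: the hypothetical dot() routine stands in for an Accelerate‑style framework function, whose callers depend only on a stable, high‑level signature while the library remains free to reroute the implementation to whatever hardware sits underneath.

```python
# A stand-in for a framework-level vector routine (hypothetical, not Apple's
# API). Callers use the stable, high-level function; the library owner can
# later reroute the body to SIMD units or matrix accelerators without
# breaking anyone -- callers simply rebuild against the new version.
def dot(a, b):
    # Today: a plain multiply-accumulate loop.
    # Tomorrow: the same signature could dispatch to dedicated hardware.
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y
    return acc

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

The point is simply that the contract lives at the function boundary, not at the instruction level — which is exactly why undocumented accelerators don’t break third‑party software.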
To give you some idea of what all of this CPU technology makes possible in terms of raw performance, I turned to Primate Labs’ trusty Geekbench. The latest version (5.3.1 at the time of writing) runs natively on Apple Silicon and the scoring system allows for meaningful comparisons to other architectures, of which Intel’s is obviously the most germane to this discussion. I ran the CPU test on the M1‑based Mac Mini, MacBook Air and 13‑inch MacBook Pro, which resulted in single and multicore scores of 1742, 1744, 1743 and 7579, 7678 and 7563 respectively. This gave the M1 average single and multicore scores of 1743 and 7607. To put these numbers in perspective, the 2020 Intel‑based Core i7 MacBook Air had single and multicore scores of 1116 and 2887, and the 2020 Core i7 13‑inch MacBook Pro scored 1386 and 4800. That’s a pretty big difference, and it’s particularly interesting (or depressing, depending on how you look at it) to consider that the high‑end 16‑inch MacBook Pro configured with a 2.4GHz Core i9 processor has single and multicore scores of 1092 and 6847. Yes, that’s right: Apple’s cheapest laptop outperforms their most expensive.
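For the curious, the averages quoted are simply the rounded means of the three machines’ results:

```python
# Per-machine Geekbench 5 results quoted above (Mini, Air, 13-inch Pro).
single = [1742, 1744, 1743]
multi = [7579, 7678, 7563]

avg_single = round(sum(single) / len(single))
avg_multi = round(sum(multi) / len(multi))
print(avg_single, avg_multi)  # 1743 7607
```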
Turning for further comparison to Apple’s high‑end Intel machines, the 2020 27‑inch iMac had single and multicore results of 1290 and 10067, while the current 2.7GHz, 24‑core Xeon W‑3265M‑based Mac Pro scored 1136 and 19640 respectively. The now‑discontinued 3.0GHz, 10‑core Xeon W‑2150B iMac Pro produced results of 1164 and 10094 when reviewed in our June 2018 issue.
These numbers are interesting because, although the multicore scores are justifiably impressive for the Intel‑based Macs, the single‑core scores reveal what might be considered the real story behind the true potential of the M1’s architecture and Apple Silicon in general. If you look at the performance of a single core (presumably of the Firestorm variety), the M1 effortlessly outclasses the Intel processors used in current and previous Macs. And while multicore performance is obviously important, the performance of individual cores is also significant for audio software. Traditionally, in a homogeneous architecture, the optimal balance between the performance of each core versus the number of cores has yielded the best results. And even as we move to a heterogeneous future, I would imagine that this ratio will remain important when it comes to considerations regarding the performance cores.
One of the key features of the M1 is that, in addition to being an SoC, it can also be referred to as a system‑in‑package (SiP) like the W‑series silicon used in the Apple Watch. The M1 ‘package’ contains both the SoC and the system memory to implement what Apple call a Unified Memory Architecture, which can provide high‑bandwidth, low‑latency memory access to the entire SoC (the CPU, GPU, Neural Engine and so on).
The big advantage to unified memory is that there’s no need to copy blocks of memory to and from different system components. The data need only exist in one place and can be directly accessed by different parts of the SoC. Contrast this with a high‑end system that utilises the PCIe bus to host a graphics card, for example: the graphics card features its own fast memory, and so for the graphics processing to take place, the required data needs to be copied from the main memory to the graphics card’s memory and possibly back again.
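The difference is essentially that between sharing a buffer and duplicating it. As a loose analogy in plain Python — standard library only, no Apple APIs involved — a memoryview shares the producer’s memory the way unified memory does, while bytes() takes a separate copy that immediately goes stale, as discrete graphics memory would until it’s copied again:

```python
# A shared buffer, standing in for system memory.
buf = bytearray(b"\x00\x00\x00\x00")

view = memoryview(buf)  # shares the same underlying memory ("unified")
dup = bytes(buf)        # an independent copy, as with discrete GPU memory

buf[0] = 42             # the "CPU" writes new data

print(view[0])  # 42 -- the change is visible with no copying at all
print(dup[0])   # 0  -- the copy is stale until another transfer happens
```

It’s only an analogy — the real win on the M1 is that the GPU, Neural Engine and friends all play the role of the memoryview — but it captures why eliminating the copy step matters for both latency and power.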
One way to think of the unified memory is as a cache for the whole SoC, building on top of the other caches in the hierarchy. The Level 1 (L1) and L2 caches are larger than those in Apple’s A‑series chips, and differ in size between the efficiency and performance cores and clusters. Each efficiency core offers 128k and 64k L1 caches for instructions and data respectively, while the efficiency cluster features a 4MB L2 cache. By contrast, the performance cores each have 192k and 128k L1 caches for instructions and data, and the cluster implements a 12MB L2 cache. An L3 cache, which would be shared by both efficiency and performance clusters, is omitted from the M1, since the unified memory performs a similar job — although it’s arguably more akin to an L4 cache, in that it doesn’t just provide data to the CPU cores.
Now, you might be forgiven for thinking that the idea of unified memory sounds familiar. After all, haven’t we seen such shared memory between the CPU and GPU on integrated graphics hardware like the Intel UHD Graphics 630 and Iris Plus Graphics 640 found on lower‑end, Intel‑based iMacs, Mac Minis and the 16‑inch MacBook Pro? While integrated graphics do share the main memory with the CPU, main memory (DDR) is usually slower than dedicated graphics memory (GDDR), and then you might need to consider any format conversion required between graphics frameworks like Mac OS’s Metal and the actual metal of the hardware. All of this can affect performance, and that’s before you take into account that such shared memory is only shared by the GPU and CPU and not, in the case of a chip like the M1, the complete system.
The M1 is available in two memory configurations — 8GB or 16GB — which need to be chosen at the time of ordering, since you obviously can’t ‘upgrade’ the actual M1 package on the system board, although this won’t be new for anyone who’s purchased a new Mac in the last decade. The only recent Macs that allowed for memory expansion after purchase were the Mac Pro, the 27‑inch iMac and the Intel‑based Mac Mini. So, to answer a more uncomfortable question: yes, 16GB is the highest memory capacity for any Mac based on the new M1 chip.
This will inevitably be disappointing news for musicians, especially for those requiring obscene amounts of memory to accommodate large, sample‑based instruments. However, this won’t be a problem for everyone; the 2020 Intel‑based MacBook Air, for example, had the same memory ceiling. In day‑to‑day use — noting the exception of sample‑based instruments — I found that having 8GB of unified memory in an M1 machine offered a similar experience to an Intel‑based Mac with 12‑16GB, and I’d imagine that an M1 system equipped with 16GB would also perform beyond what the number might suggest.
Purchase recommendation: You’ll probably regret not configuring your M1‑based Mac with 16GB.
Show Me The Macs
While the technology behind the M1 is as fascinating as it is fantastic, the most important question is what this means for the Macintosh as a computing platform. As we’ve seen in the past, having incredible technology doesn’t in and of itself guarantee a great system from a user’s perspective. But without employing too many superlatives, Apple’s M1‑based Macs are simply superb, with great performance, and the potential to usher in the most painless transition Mac‑based musicians and audio engineers will have ever experienced.
Outwardly, the M1‑based Mac Mini, MacBook Air and 13‑inch MacBook Pro are basically identical to the previously available 2020 models based on Intel processors. The one exception, aesthetically speaking, is that the M1 Mac Mini now comes in Silver rather than the forebodingly professional Space Gray finish used for Intel‑based models. And while the M1 MacBook Air replaces its Intel‑based predecessor — which was only refreshed eight months earlier in March 2020 (and reviewed in last August’s issue: www.soundonsound.com/reviews/apple-macbook-air-2020) — Intel versions of the Mac Mini and 13‑inch MacBook Pro (reviewed in last November’s issue: www.soundonsound.com/reviews/apple-macbook-pro-13-2020) remain available. This is particularly welcome since the 13‑inch MacBook Pro can be ordered with 32GB memory, and the Mini can be upgraded to a hearty 64GB.
By contrast, the new 24‑inch iMac features a long‑awaited, highly desirable redesign that will make you want one, regardless of whether you need one. The photography on Apple’s website doesn’t do the new iMac justice: you really have to see this re‑engineered model in person. The front’s lighter hue blends harmoniously with the darker back, and the M1 has made it possible for Apple to make the new iMac stunningly thin, at only 11.5mm. Part of this svelteness can be attributed to the fact that the new iMac doesn’t feature a built‑in power supply, relying instead on an external, MacBook‑esque Power Adapter.
The M1‑based Air and iMac are available in two configurations: one with eight GPU cores, as with the majority of other M1‑based Macs, and a cheaper variant offering seven GPU cores. Unless you need peak graphics performance for working with 4k video streams, games, or other visually intensive applications, the seven‑core models will supply plenty of graphics horsepower for music and audio work. However, the seven GPU‑core models have other limitations, particularly when it comes to the iMac, such as the provision of fewer connectors for external devices.
A particularly welcome difference concerning the new MacBook Air is that there’s no fan. This means the M1 MacBook Air is basically silent, making it an ideal companion for recording when you don’t have — or indeed want — access to a separate control room. Previously, this would have been a good purpose for an iPad, but now you can do the same thing with desktop‑class applications for less than the price of some of the higher‑end iPad models.
The 13‑inch MacBook Pro, Mac Mini and 24‑inch iMacs do have a fan, but it rarely seems to be deployed unless you’re running some seriously large projects. Even then, it’s not what you’d describe as audibly intrusive compared to the equivalent predecessors. A useful application for monitoring the state of the fan is Tunabelly Software’s TG Pro, which can give you a full thermal report of your system should you be curious. Mostly, though, it’s just handy to see the computer’s internal temperature as a menu bar item.
Just as I was finishing this article, reports were emerging that some initial shipments of the new 24‑inch iMac had been delivered with ‘crooked’ displays. This word was used to describe iMacs mounted to their stands such that the vertical sides of the machine were not appropriately perpendicular to the desk. I’m sure a statement from Apple will be forthcoming, but it’s worth remembering you can return a Mac purchased from Apple within a two‑week window, although you’ll be required to contact AppleCare support thereafter.
Playing Well With Others
Every M1‑based Mac model offers two Thunderbolt/USB 4 ports, which support Thunderbolt 3 and USB 4 (both up to 40Gb/s), USB 3.1 Gen 2 (up to 10Gb/s), DisplayPort and charging. There’s also a mini headphone jack. The Mac Mini and more expensive iMac models give you two additional USB ports: those on the Mac Mini are USB‑A connectors supporting USB 3 up to 5Gb/s, while the iMac has USB 3.1 Gen 2 USB‑C connectors.
The Mac Mini also retains an HDMI 2.0 output, as well as a Gigabit Ethernet connector, with 10‑Gigabit Ethernet available as a build‑to‑order option. This would be useful for anyone working in a post‑production facility that uses high‑speed networking. And while the iMac might not initially seem to offer a Gigabit Ethernet port, the Power Adapter supplied with the more expensive model incorporates such a connector. This Power Adapter can be purchased as an option with the cheapest iMac model for $30, and I would imagine Apple will also sell these separately going forward.
Incidentally, the power cable supplied with the new 24‑inch iMac is more than your average specimen. It’s a 2m woven cable, colour‑matched to your chosen hue, capable of supplying both power and, of course, Ethernet to your iMac. But it also sees the return of a magnetically attached power connector, reminiscent of the MagSafe connectors found on MacBooks of yore. This is where combining power and Ethernet into a single cable makes sense for a desktop system, since these are the two cables that dangle onto the floor in most cases, while other, shorter cables tend to reside on the desk. Given that wired networking is still widely used by desktop Macs, it wouldn’t have made sense to make only the power connector magnetically detachable when stepping on the Ethernet cable would also topple your system.
Compared to the preceding Intel‑based equivalents, it’s a shame that some of the external connectivity has been sacrificed to only have two Thunderbolt ports (instead of four) on both the 13‑inch MacBook Pro and Mac Mini, especially since one of those ports might be needed for charging on the former.
For wireless networking, all M1 Macs support the newer 802.11ax Wi‑Fi 6 that’s backwardly compatible with 802.11a/b/g/n/ac networks; the last generation of Intel‑based Macs used 802.11ac with compatibility for 802.11a/b/g/n. As with the preceding Intel Macs, the M1‑based offerings still use Bluetooth 5.0, as opposed to the newer 5.1 or 5.2 standards.
The new iMac is the only M1‑based Mac released thus far with a brand‑new display: a dazzling 23.5‑inch, 4.5k Retina implementation featuring over 11 million pixels at a resolution of 4480x2520, offering 500 nits of brightness and the always‑fantastic True Tone technology. The Mac Mini’s HDMI 2.0 port supports a single 4k display at 60Hz. The Thunderbolt 3 digital video output available on all M1 Macs (which supports native DisplayPort over USB‑C, as well as VGA, DVI and Thunderbolt 2 via adaptors that are available separately) supports external displays with a maximum 6k resolution at 60Hz. Some users have reported being able to attach up to five extra displays at varying resolutions over Thunderbolt by employing some workarounds, although I didn’t try this myself.
The Hard Question
The first issue to address when considering an M1‑based Mac for music or audio is whether your MIDI and audio interfaces of choice are compatible with these newer systems. If your hardware requires no additional drivers, thanks to it being USB Class Compliant, it should ‘just work’ as usual with the built‑in support offered by Mac OS’s Core MIDI and Audio frameworks. However, if your hardware requires a dedicated driver, that driver will need to offer support for the ARM 64‑bit architecture used by the M1.
Thankfully, there’s good news here, as quite a few manufacturers are already offering M1‑compatible drivers for their products. And you can easily verify this after completing an installation by opening Mac OS’s System Information application and navigating to Software/Extensions in the sidebar. A hardware driver is referred to as a Kernel Extension in Mac OS, and you can get detailed information on a given extension by selecting it from the list in the upper view of the window. A kernel extension is compatible with an Apple Silicon‑based Mac if you see ‘arm64e’ as one of the architectures in the window’s lower view.
RME were among the first — if not the first — out of the gate with M1‑compatible drivers, which shouldn’t come as a surprise to anyone who’s experienced RME’s driver support in the past. Apple also deserve credit for making this transition easier than previous ones, having laid the foundations for the change in recent Mac OS releases, such as last year’s Big Sur and 2019’s Catalina.
Should you want to use PCIe cards with the current M1‑based Macs, which clearly don’t accommodate internal expansion, Sonnet’s Thunderbolt 3 expansion systems — such as the latest Echo III Desktop and Rackmount models — are compatible. However, Pro Tools HDX systems aren’t currently supported, and given the lack of M1 drivers for PCIe cards right now, this is likely an option for the future.
On the subject of PCIe and Apple Silicon‑based Macs, it seems likely Apple are already planning an Apple Silicon‑based Mac offering internal PCIe expansion. For one thing, the return of such expansion to the latest Mac Pro was met with both relief and enthusiasm, making it seem doubtful Apple would backtrack on this for their ‘Pro’ desktop Macs. And such speculation is also supported by the fact that Apple have already made the driver for their Afterburner PCIe video‑acceleration card compatible with both Intel and ARM architectures in Mac OS Big Sur. This could be included so the card can be used in an expansion chassis like the Sonnet Echo III, but I can’t imagine Apple would worry about this support so soon if that was the only intention.
Hopefully, this article has given you an overview on what Apple Silicon — and the M1 chip in particular — means for the future of Mac hardware. In a future article, we’ll consider its ramifications for software.
In addition to a processor’s main instruction set, whether that’s Intel’s x86 or ARM, it’s been common over the last few decades for processor microarchitectures to use additional instruction sets for optimising certain workloads. A common example would be the inclusion of so‑called SIMD instruction sets, where a Single Instruction can operate on Multiple Data at the same time by using a larger register to store shorter numbers. To give a slightly simplistic example, a 128‑bit add instruction can work with four 32‑bit numbers (or words) simultaneously, rather than requiring four separate 32‑bit instructions.
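To make that concrete, here’s a toy Python model of a four‑lane, 32‑bit SIMD add: four unsigned values are packed into one 128‑bit integer and added lane by lane, with each lane masked so that carries never spill into its neighbour. Real hardware does all of this in a single instruction; this sketch is purely illustrative.

```python
LANES, WIDTH = 4, 32
LANE_MASK = (1 << WIDTH) - 1  # 0xFFFFFFFF

def pack(values):
    """Pack four 32-bit unsigned values into one 128-bit integer."""
    word = 0
    for i, v in enumerate(values):
        word |= (v & LANE_MASK) << (i * WIDTH)
    return word

def unpack(word):
    """Split a 128-bit integer back into its four 32-bit lanes."""
    return [(word >> (i * WIDTH)) & LANE_MASK for i in range(LANES)]

def simd_add(a, b):
    """Lane-wise add: each lane wraps at 32 bits, never touching its neighbour."""
    return pack([(x + y) & LANE_MASK for x, y in zip(unpack(a), unpack(b))])

a = pack([1, 2, 3, 0xFFFFFFFF])
b = pack([10, 20, 30, 1])
print(unpack(simd_add(a, b)))  # [11, 22, 33, 0] -- the last lane wraps to zero
```

The last lane shows why the masking matters: 0xFFFFFFFF + 1 wraps to zero within its own lane rather than carrying into the lane above, exactly as a hardware SIMD add behaves.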
One of the earliest examples of such an instruction set was Intel’s MMX (variously said to stand for multimedia extensions or matrix maths extensions) in the Pentium’s P5 microarchitecture, and more recent Intel exemplars would be SSE (Streaming SIMD Extensions) and AVX (Advanced Vector Extensions), culminating in AVX‑512 in the most recent microarchitectures. Incidentally, the number 512 refers to the word length (in bits) that can be handled by these instructions, so with audio engines using either 32‑ or 64‑bit numbers, a single AVX‑512 instruction might address 16 or eight such items of data respectively.
As mentioned, these types of extensions are found in processors beyond those developed by Intel. For example, Mac users might remember the AltiVec instructions used by the so‑called Velocity Engine found in the G4 and G5 processors. And, in the ARM world, there’s the snappy‑sounding Neon instruction set mentioned in the main text. This used to be marketed alongside a technology called MPE, which had nothing to do with polyphonic MIDI expression and was instead an abbreviation of Media Processing Engine.
Now, you might be wondering why these extensions, which are essentially designed to expedite matrix maths operations, might be interesting to those running music and audio applications. And the simple reason, as mentioned in the main text, is that DSP algorithms rely heavily on matrix maths for everything from mixing audio streams to more complex FFT‑based operations. One of the key instructions of a dedicated DSP chip is known as multiply‑accumulate, allowing the product of two numbers to be added to an accumulator in a single clock cycle (more or less), and this is used in everything from designing mixers and convolution filters to MP3 codecs. In fact, in the latter case, a fun example is that even a decade ago it was possible to implement an MP3 playback algorithm using an ARM CPU with Neon requiring less than 10MHz of the processor’s clock frequency! Therefore, it’s not all that surprising that a multiply‑accumulate instruction is usually included in a processor’s SIMD extensions.
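As an illustration of just how central multiply‑accumulate is, consider what a simple FIR filter — the workhorse behind EQs, convolution reverbs and much else — reduces to: one multiply‑and‑add per coefficient per output sample. The plain Python below spells out the loop that a DSP or SIMD unit would chew through at roughly one MAC per clock cycle; it’s a sketch for clarity, not production code.

```python
def fir(signal, coeffs):
    """Naive FIR filter: each output sample is a chain of multiply-accumulates."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]  # the multiply-accumulate step
        out.append(acc)
    return out

# A two-point moving average -- the simplest possible FIR filter.
print(fir([1.0, 2.0, 3.0, 4.0], [0.5, 0.5]))  # [0.5, 1.5, 2.5, 3.5]
```

Scale that inner loop up to thousands of coefficients across dozens of channels at 48kHz and it becomes obvious why a processor that can perform many such operations per cycle — via Neon, AVX or Apple’s matrix accelerators — makes such a difference to audio workloads.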
More Than Just A CPU
Given that the M1 is an entire System‑on‑a‑Chip, discussing the CPU cores just scratches the surface, as Apple’s diagram shows. Moving briefly along to the Graphics Processing Unit, surprisingly little is known about its internal workings beyond the fact that it offers either seven or eight cores. Apple describe the eight‑core GPU as having 128 execution units capable of handling up to 24,576 concurrent threads, with the ability to process 2.6 teraflops (trillions of floating‑point operations per second). This equates to processing 41 gigapixels per second and, unsurprisingly, Apple claim the M1’s GPU offers the “World’s fastest integrated graphics”.
The M1 also integrates many co‑processors, such as the T2 chip, which previously handled security and video encoding and decoding amongst other tasks, and the second‑generation Neural Engine, making its debut on the Mac platform. This latter processor is arguably one of the most interesting aspects of Apple Silicon, and it’s a requirement for many of the features demonstrated in the forthcoming Mac OS 12 (such as Live Text, which allows you to interact with text in photos). Also integrated are the Thunderbolt/USB 4 and PCIe controllers, an image signal processor, and various other accelerators for cryptography, video and HDR support.
The Neural Engine essentially provides the brain and brawn for machine learning, which is a fast‑growing discipline in computer science and is gradually finding its way into commercialised music and audio‑related domains. Early examples of this include smarter drum‑replacement algorithms, automatic mixing with products like iZotope’s Mix Assistant, or the gesture control functionality of Algoriddim’s djay Pro on the latest iPad Air. It’s going to be fascinating to see how developers figure out ways of using machine learning to enhance the creative workflow in future.
Each of the new M1‑based Macs starts with a 256GB storage capacity, which is probably the ideal minimum given the pricing (although this is potentially arguable with the new iMacs). Some of the more expensive models are supplied with 512GB, and each of the new M1 Macs can be configured with a maximum of 2TB storage.
The speed of the built‑in storage is arguably one reason why the M1’s memory limitations don’t tell the full story. All modern operating systems feature virtual memory, where unused memory is paged to disk and brought back again when needed. As you might expect, how much your system gets bogged down paging memory in and out depends on the bandwidth of the disk — in this case, an SSD. However, while the M1 pages memory in and out of the system with great aplomb, it’s not going to be a magic bullet when it comes to large sample libraries.
Purchase recommendation: buy as much storage as you can afford. 256GB is workable (especially if you have external Thunderbolt drives for libraries and project‑based files), but 512GB should be considered the minimum if possible.
From £699 (Mini), £999 (MacBook Air), £1249 (iMac) and £1299 (MacBook Pro). Prices include VAT.
From $699 (Mini), $999 (MacBook Air), $1299 (iMac and MacBook Pro).