The most dramatic overhaul of the Mac OS ever was ignored by most musicians until recently — and rightly, as established music applications and standards no longer run under it. But lately, there have been more and more reasons for musicians to leave OS 9 behind. If you're considering making the switch, read on...
Apple's CEO Steve Jobs may have 'buried' Mac OS 9 at a developers' conference last year, but for most Mac-oriented musicians, it's very much alive. If you have little interest in computers per se beyond their use as music-making tools, the chances are you're still using the old operating system; indeed, you may not have followed the development of the new OS, Mac OS X, at all. So, it's a computer system update. They happen once in a while. What's all the fuss about?
Mac OS X (pronounced, despite its appearance, 'ten', not 'ex') is the biggest upheaval to occur to Mac users in the platform's history, and its arrival will directly affect all Mac-based musicians. In the long run, the effects will undoubtedly be for the better, but taking a shorter-term view, some disruption is inevitable. After all, aside from the way a computer looks, the operating system is the most fundamental aspect of what separates, for example, a Mac from an Intel-based machine — to all intents and purposes, the operating system is the computer, the software that controls the way you and your peripherals interact with the machine. So when the company responsible for the operating system decides to replace it, and with something that looks different, feels different, and works completely differently from what you're used to, you can be fairly sure that some major changes to your computer-based activities — including music-making — lie ahead.
If this is making you think you should have paid more attention, and switched to using Mac OS X years ago, don't panic. One of the other reasons musicians have been slow to develop awareness of the new OS is that until fairly recently, it was something best avoided by musicians altogether — professional music recording software wouldn't run on it, and plug-ins weren't available for it. If your Mac formed the hub of a commercial music-making setup, it would have been business suicide to switch to OS X at, say, the start of last year. It's only really in the last six or seven months, since the release of Mac OS 10.2 (also known as Jaguar), that those using the Mac for MIDI- and audio-related tasks have been able to start using the new operating system at all seriously. In the meantime, even the most computer-aware have predominantly stuck to using the old Mac OS.
However, the writing is on the wall for OS 9, and since Steve Jobs's public funeral for the trusty old Mac OS, the pressure on Mac owners to make the transition to OS X has been stepped up. At the start of this year, Apple announced that henceforth, all new Macs would only boot into OS X. While owners of existing Macs (which can still use either OS) can still get some mileage out of OS 9, there will come a point where, if you want to remain a Mac user, you'll have to make the move to OS X. So in this article, we're going to take a detailed look at OS X from the perspective of the musician and audio engineer, looking at how the advanced new features will benefit your music software, how the technology actually works and the reasoning behind its development, and also consider how Apple's new audio and MIDI technologies, which are now an integral part of Mac OS X, will affect recording hardware.
If you're interested in learning something of the history behind Mac OS X, take a look at the 'Rhapsody In A Blue Box' sidebar on page 168 of this article. For more on the technical foundations of OS X, check out the box on page 170 too. But if you want to know about OS X and music, simply read on.
The Mac has always been regarded as an ideal computer for music and audio, especially in the early '90s, when MIDI was handled by your sequencer, and professional audio and DSP were taken care of by separate hardware. However, with the advent of applications like Steinberg's Cubase VST, the evolution of native signal processing with effects and instrument plug-ins, the proliferation of project-level soundcards, and the transition to USB-based MIDI interfaces, various third-party solutions emerged to overcome the lack of built-in advanced MIDI and audio functionality in the Mac OS.
In Mac OS X, the situation couldn't be more different — or better — and one of the fundamental aims seems to have been the consolidation of concepts from competing MIDI, audio and plug-in solutions into the new built-in Core MIDI, Core Audio and Audio Units technologies. These are operating system-level parts of the Mac OS itself that do away with the need for incompatible third-party solutions, and demonstrate a commitment on the part of Apple to deliver the functionality professional musicians and audio engineers require.
However, having changed the way audio and MIDI software interacts with the Mac OS, Apple have made it harder to update existing Mac music software to OS X — certainly much harder than it is to update office software, say. This is one reason why it's taken a while for popular OS 9 music and audio applications to be fully compatible with OS X (for more on the current state of play, see the 'OS X Application Support' box, above). A further problem for Mac music software developers has been the way Core MIDI and Audio have continued to evolve since the initial release of OS 10.0, at which stage they were much less finished. While music-related software like Propellerhead Reason, TC Works Spark and Emagic Logic were compatible with the later releases of OS 10.1, it's only with the release of 10.2 that these technologies seem to have matured to the stage where they're ready for anything professional users can throw at them.
Prior to Mac OS X, the Mac's internal audio capabilities and the system sound in general were handled by the Apple Sound Manager, which, although adequate for the early days of multimedia, wasn't suitable for the demands of audio professionals, or even the increasing demands of a consumer's digital lifestyle. Significantly, Sound Manager was limited by the features of the audio hardware it was controlling, couldn't handle resolutions and sampling rates higher than 16-bit and 48kHz, could only handle a single stereo input and output at a time, and had a somewhat unpredictable latency depending on what system you were using. Although the major MIDI + Audio sequencer manufacturers supported Sound Manager, they also eventually developed their own solutions for dealing with low-latency, multi-channel audio at high sampling rates and resolutions; notably, Steinberg's ASIO (Audio Stream Input Output), Emagic's EASI (Enhanced Audio Streaming Interface) and MOTU's MAS (MOTU Audio System).
The audio system in Mac OS X is known as Core Audio, and although the Sound Manager remains to some extent (now known as the Carbon Sound Manager) for compatibility with older applications, the system has been completely redesigned from the ground up to overcome all the previous limitations.
At the very heart of Core Audio is the Audio HAL (Hardware Abstraction Layer), and this is the layer that sits between your audio application and other high-level aspects of Core Audio, and the actual audio driver that enables OS X to communicate with your audio hardware, be it the built-in headphone and mic sockets, or an external USB, Firewire or PCI interface. You can see this architecture rendered diagrammatically below. Like the major native-based audio applications, such as Cubase, the Audio HAL handles all audio internally as 32-bit floating-point data for maximum precision and efficient processing, allowing Core Audio to fully support 24-bit audio, for example. In addition to this, the Audio HAL also supports variable sampling rates, which enables Core Audio to handle whatever sample rate you require, whether it's 44.1, 88.2, 96 or even 192kHz.
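To see why a 32-bit floating-point internal format is so useful, here's a simplified sketch in Python (purely illustrative, with hypothetical function names; Core Audio's real conversion code is far more involved) of moving integer PCM samples into and out of the -1.0 to 1.0 floating-point range an audio engine works in:

```python
def int_to_float(samples, bit_depth):
    """Convert integer PCM samples to floats in the range -1.0 to 1.0."""
    full_scale = float(2 ** (bit_depth - 1))  # e.g. 32768 for 16-bit
    return [s / full_scale for s in samples]

def float_to_int(samples, bit_depth):
    """Convert float samples back to integer PCM, clipping at full scale."""
    full_scale = 2 ** (bit_depth - 1)
    clipped = [max(-1.0, min(s, 1.0 - 1.0 / full_scale)) for s in samples]
    return [round(s * full_scale) for s in clipped]

# A 16-bit sample at half of full scale becomes 0.5 internally...
print(int_to_float([16384], 16))  # [0.5]
# ...and can later be rendered at 24-bit for the output hardware.
print(float_to_int([0.5], 24))    # [4194304]
```

Because all the intermediate mixing and processing happens in floating point, signals can exceed full scale inside the engine without clipping until the final conversion back to the hardware's integer format.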
As both professionals and consumers move towards surround sound, both for production and entertainment (such as watching DVDs), it's becoming crucial for modern operating systems to support multi-channel audio. Indeed, the Audio HAL supports an unlimited number of audio channels, providing the ability for Core Audio to scale and support any conceivable surround or multi-channel configuration. This open-ended support for any number of audio channels is also useful for another of Core Audio's clever tricks: namely, support for accessing multiple audio devices simultaneously, no matter how they're connected to your system, whether by USB, Firewire, or PCI card.
Core Audio devices are configured using Apple's Audio MIDI Setup utility, included in Mac OS 10.2 and above, which allows you to alter the master levels and format configuration (sampling rate, resolution and so on) for both inputs and outputs (see above). If you own at least one audio device other than your Mac's internal hardware, one of the neatest features of Core Audio can be set via Audio MIDI Setup — the ability to route the system sound (alert beeps and other audible annoyances) to a different device from the default sound, which contains the output from music and audio applications, including iTunes. This means you could leave the system sound to output through your Mac, and route all other sound to output via the audio hardware that connects to the rest of your studio. So no longer will you have a violent nervous reaction when the Mac alert beep sounds across your main studio monitors because you forgot to turn the volume down after playing a CD, for example!
One of the most significant features of Core Audio is the ability for the latency to be determined by the application itself — even when you're only using the built-in Mac audio hardware — and users of applications such as Logic and Cubase will note the inclusion of an audio buffer size control for adjusting latency in the appropriate audio hardware setup page (shown above right). Due to the efficient design of the OS X's MIDI system (see the next section), this now means that even an iBook can be tighter in terms of timing than any hardware sound module or synth when triggered in real-time.
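As a rough guide to what that buffer size control means, the latency per buffer is simple arithmetic (a hypothetical helper for illustration; real-world figures also include driver and converter overheads):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Delay contributed by one audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

# Smaller buffers mean lower latency, at the cost of more frequent
# I/O cycles and therefore higher processing overhead:
for size in (64, 128, 256, 512, 1024):
    print(f"{size:4d} samples at 44.1kHz -> {buffer_latency_ms(size, 44100):.2f}ms")
```

So a 128-sample buffer at 44.1kHz contributes under 3ms of delay, which is why a well-behaved native system can feel tighter than many hardware sound modules.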
So how is Core Audio able to provide such low-latency operation for any audio device? The first reason is to do with the way audio drivers are created in OS X. Device drivers in OS X can exist right in the kernel (the lowest level between the operating system and hardware), and those developing audio device drivers are encouraged to write them in this way — the source code of the Mac OS relating to audio I/O is part of Darwin (see the 'Understanding OS X's Foundations' box later in this article for more on this) and is therefore available to developers under Apple's open source licence agreement, so there are no hidden secrets.
Several Apple-developed audio device drivers are included with OS X, and these not only offer full support for the internal audio hardware used in the Mac hardware, but also support for any USB hardware that conforms to the so-called audio class standard of the USB specification. It's these drivers that allow such high performance from the Mac's built-in hardware (which is great for laptop users, of course), and enable generic USB audio devices to be plugged in and recognised without any additional drivers, while still achieving the same high level of performance.
In addition to support for the internal audio hardware and audio class USB devices, from version 10.2.4, Mac OS X includes enhanced support for handling MIDI and multi-channel audio via Firewire devices compatible with the IEC 61883-6 standard, as mentioned in this month's Apple Notes. While this might not immediately sound familiar, IEC 61883-6 is the standard utilised by Yamaha's mLAN and Apple's own Firewire Audio Networking (FAN) technology, which means mLAN-compatible equipment can now be connected to your Mac and incorporated into Core Audio and used in any music or audio application without requiring any additional drivers. Plug and play really is a reality now in the professional audio world thanks to Core Audio — and with a Firewire audio interface if you want. For a selection of manufacturers who are bringing their MIDI and audio I/O hardware drivers up to spec for use with OS X, see the 'Core MIDI & Audio Driver Support' box below.
The second and most important reason for the low-latency performance of Core Audio concerns the model used for handling audio I/O in OS X, and the fact that OS X itself includes incredibly accurate timing systems to ensure that all parts of the OS get the data needed at the required time. Buffers — areas of the computer's memory that hold data — are used when transferring audio between an audio device and the operating system, and when audio was output in OS 9, the driver had to inform the application that the sample buffer was full and ready for collection in every I/O cycle.
By contrast, the Audio HAL in OS X is incredibly clever; it makes use of predictive timing routines that enable it to predict when the I/O cycle will complete, saving the driver from having to send a message to the application. It can accurately manage the time-stamped data being sent and received by several different applications and audio devices in a system simultaneously, and the result is an incredibly efficient and powerful audio system.
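The principle can be caricatured in a few lines of Python (entirely hypothetical names; a sketch of the idea, not Apple's implementation): because the sample rate and buffer size fix the duration of each I/O cycle, the completion time of any cycle can be predicted in advance rather than signalled after the event.

```python
class PredictiveClock:
    """Toy model of predictive buffer timing: given a nominal sample
    rate and buffer size, each I/O cycle's completion time is fully
    determined, so buffers can be time-stamped in advance instead of
    the driver messaging the application after every cycle."""

    def __init__(self, sample_rate_hz, buffer_samples, start_time=0.0):
        self.cycle_seconds = buffer_samples / sample_rate_hz
        self.start_time = start_time

    def cycle_deadline(self, cycle_index):
        """Predicted completion time (in seconds) of the given I/O cycle."""
        return self.start_time + (cycle_index + 1) * self.cycle_seconds

# 441-sample buffers at 44.1kHz give 10ms cycles, so the tenth
# cycle is predicted to complete 100ms after the stream starts.
clock = PredictiveClock(44100, 441)
print(clock.cycle_deadline(9))
```

With deadlines known in advance, applications can be scheduled to deliver their buffers just in time, which is where much of the system's efficiency comes from.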
Core MIDI isn't the first time Apple have implemented MIDI into the Mac OS, of course, and Mac diehards will remember MIDI Manager, a set of MIDI management tools that appeared with the later versions of System 6 in the early 1990s. MIDI Manager provided a way for MIDI applications to provide virtual MIDI ports that could be linked to physical ports on your MIDI interface, and the user was able to make custom connections between all the available ports using an application appropriately named Patchbay.
However, MIDI Manager's potential was never fully realised after Apple Computer were sued by Apple Corps, The Beatles' company, over the use of the name Apple for musically-related products. Although the case was eventually settled, there were some limitations in MIDI Manager that weren't addressed during this time, such as the number of ports that could be supported. This was an issue that became increasingly important when multi-port devices like Opcode's Studio 5 and MOTU's MIDI Time Piece appeared on the market.
To address the limitations of MIDI Manager, both the leading Mac sequencer manufacturers at that time developed their own solutions, resulting in OMS (Opcode MIDI System) and FreeMIDI from MOTU. Opcode later rebadged OMS as the Open Music System and it became adopted as the most common way for music applications to work with multi-port MIDI interfaces on the Mac, except for Digital Performer users who continued to use FreeMIDI, or applications that implemented their own specific MIDI drivers for the developer's own MIDI interfaces, such as Emagic's Logic and Unitor interfaces.
One of the key concepts of OMS was the Studio Setup document, where you could inform the system about how your actual MIDI devices were connected to your MIDI interfaces. While many applications only used OMS's ability to present MIDI input and output ports, OMS also made it possible for applications to see your MIDI configuration as a series of MIDI devices — sound modules, keyboards, and so on, for example.
Since the same programmer has been involved in both Core MIDI and OMS (Doug Wyatt, who now works for Apple and was previously employed at Opcode), it stands to reason that Core MIDI should be based on the studio concept of OMS. However, one of the big advantages of Core MIDI over OMS is that no complicated set-up procedure is required to get your applications talking to your MIDI devices — it really is plug and play. Core MIDI devices, like Core Audio devices, are configured via Apple's Audio MIDI Setup application, and if you want to set up the OS X equivalent of an OMS Studio Setup document, this is what you use. In many ways, it can be thought of as an evolution of Opcode's original OMS Setup application.
In addition to OMS, Core MIDI's implementation was also influenced by the USB MIDI class device specification, and, as such, many USB MIDI devices whose functionality can be catered for with the generic USB MIDI class driver don't require any additional driver software to be recognised by and be fully useable with Core MIDI. Where dedicated Core MIDI drivers are occasionally required for a device, these need to be installed before connecting the device to your Mac for Core MIDI to be able to recognise it, just as in older versions of the Mac OS.
Core MIDI is a huge improvement over previous solutions for MIDI on the Mac, especially in its abilities to allow multiple applications to share the available MIDI ports, and the fact that USB MIDI devices now work without the problems of old. In terms of Core MIDI's timing accuracy, it's worth mentioning that one of Apple's original design goals was, to quote a preliminary document, "to be able to get a MIDI event into and out of the computer within one millisecond, and also to keep jitter to under 200 microseconds". While a sequencer-driven MIDI interface technology like Emagic's AMT (Active MIDI Transmission) may still provide the best timing, Core MIDI should offer dramatic improvements to ordinary MIDI interface and sequencer setups, especially in larger studio configurations.
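To make those figures concrete: jitter is the variation in the spacing of events that should arrive at regular intervals. The sketch below (a hypothetical helper, not Apple's measurement code) computes the worst-case jitter from a list of arrival times:

```python
def timing_stats(arrival_times_ms):
    """Given arrival times (in ms) of events that should be evenly
    spaced, return (mean interval, worst-case jitter), where jitter
    is the largest deviation of any interval from the mean."""
    intervals = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    mean = sum(intervals) / len(intervals)
    jitter = max(abs(i - mean) for i in intervals)
    return mean, jitter

# Events intended to arrive every 10ms, with small timing errors;
# the worst-case jitter works out at 0.15ms.
mean, jitter = timing_stats([0.0, 10.1, 19.95, 30.05, 40.0])
```

In this toy example the jitter falls just inside Apple's stated 200-microsecond goal; a sequencer performing this kind of measurement against a stable system clock is one way such claims can be verified.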
The use of plug-in instruments and effects has perhaps caused more confusion among those using, or thinking of using, OS X for music and audio than any other aspect of the new system. When referring to plug-ins, it's worth remembering that the base definition for a plug-in is a smaller computer program that adds functionality to a larger application. In the world of music software, of course, plug-ins are most commonly used to integrate additional software effects and instruments into a music-making application. When investigating the underlying technology in operating systems and applications, it's important to bear in mind that 'plug-in' isn't a term that can be used interchangeably with effect, or instrument.
When Steinberg wanted to implement a plug-in architecture for native-based processing in Cubase VST back in 1996, to enable third-party companies like Waves to add their own effects into the system, there was no OS-based standard to facilitate this need, so they came up with their own VST plug-in programming interface and encouraged other developers to jump on board. Later, in 1999, Steinberg extended the VST programming interface to allow software instruments to be implemented as VST plug-ins as well, and VST was and remains a successful platform, both in the Mac and Windows worlds.
However, Steinberg weren't the only ones to devise a native plug-in architecture, and while many developers implemented support for VST in their own applications, including Emagic in Logic from version 3.5, companies such as MOTU and Digidesign developed the MAS (MOTU Audio System) and RTAS (Real Time Audio Suite) plug-in architectures respectively. Whether the need for different plug-in architectures to achieve essentially the same goal was based on technical or political reasoning isn't relevant, since the end result is a nightmare for the user. Some manufacturers support only specific plug-in platforms, such as VST, and if your music-making application of choice only supports MAS plug-ins — like Digital Performer — your choice of plug-ins is limited. Another type of third-party software, known as a wrapper, can usually bridge the divides between the different plug-in formats, but this adds further complexity to an already muddled situation.
To solve the problem of the many incompatible plug-in formats, Apple introduced Audio Units in OS X, their own format for native-based audio plug-ins. The idea is simple: as with Core MIDI and Audio, if the operating system vendor itself presents a solution for audio plug-ins that's application-neutral, host developers are saved from having to create their own format, and every host application running on OS X could essentially support the same plug-in format. Windows users have enjoyed the benefits of a system-adopted audio plug-in format for some time in what has evolved to be the DirectX plug-in format, and in theory it presents a win-win solution for everyone. Developers only need to support one plug-in format, and all your plug-ins are available in your music and audio applications.
Perhaps one of the most significant aspects to Audio Units is that it's not just intended as a format for developing plug-in effects and instruments — there are many different types of Audio Units, and the format can basically be thought of as a way of creating building blocks for working with audio streams. However, the most common types of Audio Units musicians will come across will undoubtedly be Effects Units and Music Device Components, which are used to develop effects and instruments respectively.
Like most current audio plug-in formats, Audio Units supports multi-channel operation, enabling surround-compatible effects and instruments to be developed, and also offers a flexible method for handling audio busses within a plug-in. Usually a single input and output buss will be used (in the case of an Effects Unit), although additional input busses can also be implemented for effects requiring side channels, where a second input modulates the main audio input.
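That side-chain routing can be sketched as a toy 'ducker' in Python (hypothetical and heavily simplified; real dynamics processors use envelope followers and gain smoothing), where the second buss reduces the gain of the first:

```python
def duck(main, side, threshold=0.5, reduction=0.25):
    """Toy side-chain effect: wherever the side input exceeds the
    threshold, the main signal's gain is reduced. This just shows
    the two-input-buss, one-output-buss routing an effect plug-in
    can expose; it isn't production-quality DSP."""
    return [m * reduction if abs(s) > threshold else m
            for m, s in zip(main, side)]

main_bus = [0.8, 0.8, 0.8, 0.8]
side_bus = [0.0, 0.9, 0.9, 0.1]
print(duck(main_bus, side_bus))  # [0.8, 0.2, 0.2, 0.8]
```

The point is simply that the plug-in format has to let the host deliver more than one input buss to the same effect, which is exactly what the flexible buss handling described above provides.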
In addition to these two types, the Audio Units specification also encompasses Mixer, Output and Format Converter Units. Mixer Units can be used to implement simple mixers that mix a couple of inputs into a single output, or more complicated multi-channel and send-buss configurations, while Output Units can be attached, unsurprisingly, to audio outputs and could be used to implement an Audio Unit that wrote the incoming signal to an audio file. Finally, Format Converter Units are used, as the name suggests, to implement audio-format conversion processes, such as sample-rate conversion, and so on.
Apple supply a collection of 'factory-set' Audio Units plug-ins with OS X, including a reverb, a delay, a parametric EQ, a limiter and a selection of filters. There's also DLSMusicDevice, which allows you to effectively use the sounds from QuickTime musical instruments (or a Soundfont or DLS file) as you would any other plug-in synth, with the audio output being routed through the mixer of the host application.
Despite the ideology of Audio Units, the introduction of another plug-in format has actually created something of a problem for those currently using (or thinking of using) Mac OS X for music and audio. When developers first needed to come up with a solution for audio plug-ins in the first releases of OS X, such as when TC Works released OS X versions of Spark, Audio Units weren't quite ready. Instead, people turned to an established OS 9 plug-in format that had already proved popular and contained all the necessary functionality — VST. And it's interesting to note that Apple themselves provided empty plug-in folders labelled Digidesign, MAS and VST in OS X, alongside their own Components folder for Audio Units. There was obviously an awareness that developers would need to bring their own formats to OS X in addition to embracing Audio Units.
Unfortunately, OS 9 applications can't run natively under OS X, and VST plug-ins developed for OS 9 can't run natively under OS X either, although with a little tweaking it is possible to port an OS 9 VST plug-in to OS X with relatively minimal effort, especially for simpler effects and instruments. One of the benefits of the VST plug-in API is that it's fairly portable to other platforms (VST plug-ins have existed for both the BeOS and Linux operating systems, for example). So bringing VST plug-ins to OS X wasn't a big deal.
As OS X evolved, however, so did the Audio Units format, and when Emagic released Logic for Mac OS 10.1.4, just a week after the official release of version 10.2, it was announced that support for VST plug-ins would be dropped and Audio Units would be the de facto plug-in format. In many ways this was fair enough: current VST plug-ins can't run under OS X, so if developers have to do a little tweaking to their code anyway, they might as well do slightly more and port the plug-in to the Audio Units format instead. And to make life slightly easier, Emagic have released a library to developers that, in many ways, acts as the source-code equivalent of a wrapper in order to make the process slightly easier.
When Steinberg released Cubase for OS X, they implemented support for VST plug-ins under OS X (support for Audio Units is also promised in a future update), and they've been steadily porting their existing range of VST plug-ins to OS X. However, Steinberg aren't the only developer offering support for VST plug-ins under OS X (see the 'Effects & Instrument Plug-ins For OS X' box for more information), and since VST has been a feasible solution longer than Audio Units, there are actually more OS X VST than Audio Units plug-ins currently available.
It's perhaps unfortunate that because of the path Audio Units and VST plug-ins have taken in OS X, users once again find themselves facing a multitude of plug-in formats. Politics aside, if Apple and Steinberg had got together to create a new version of the multi-platform VST format in place of Effects Units and Music Device Components, it might have been easier for both users and plug-in developers making the switch to OS X.
As we've already seen, there's plenty that's new in OS X, but not all of the improvements relate directly to audio and MIDI. Nevertheless, the Mac-based musician will find many of the technological developments useful. Let's now consider some of these.
One of the most striking elements of OS X is the Aqua user interface (seen on all the OS X screen grabs throughout this article). When Steve Jobs unveiled this during a keynote speech in 2000, he commented that Apple's goal was to create a graphical user interface that looked so good you'd want to lick it! I think most users would agree that Apple have once again succeeded in creating one of the most elegant and attractive user interfaces around. However, Aqua doesn't just look pretty, and Mac users can at last fully control the interface with the keyboard thanks to a special Full Keyboard Access mode, which can be set up in the Keyboard System Preferences panel. Another significant concept in Aqua is sheets — window-dependent panels that slide down from the title bar to present the user with dialogue boxes and alerts that would traditionally have been displayed in another window, such as the file selector. Unlike file selectors in OS 9, for example, the use of a sheet doesn't block out access to the rest of your desktop, so you could now, if you wanted, have multiple file selectors open at the same time.
When it comes to writing applications for OS X and harnessing the core, display and sound technologies, developers can choose one of three Mac OS-based frameworks, known as Cocoa, Carbon and Classic, in addition to Java, Sun's architecture-independent programming platform.
Classic mode runs existing OS 9 software in OS X by essentially booting up a version of OS 9 in the background (it's almost like running an OS 9 emulation program), and applications running in Classic mode look exactly the same as they did under OS 9. In order to make it easy for developers to make their existing Classic applications compatible with OS X, Apple created the Carbon framework (see box above), which is close to the OS 9 application framework. They refer to the process of porting a Classic application to the Carbon framework as Carbonisation, and a Carbon-based application can take almost full advantage of everything OS X has to offer (all the major music software to make it from OS 9 to OS X so far has been Carbonised). It's worth pointing out that Carbon can be (and is) also used to develop new applications for OS X, although it's perhaps best suited to making life easier for existing Mac developers.
Cocoa is an advanced rapid application development framework based around the Objective-C programming language for developing native OS X software (Carbon, by contrast, is based around the C language). One particularly neat thing about Cocoa is the use of Services, small functions that can add functionality to any other Cocoa application, which take advantage of Cocoa's highly object-orientated nature. While there aren't really any musical examples of this yet, a simple illustration is the ability to automatically create a new sticky note in Stickies (which has been written in Cocoa for OS X) based on the currently selected text in any Cocoa program where you can select text, such as Text Edit.
In addition to the frameworks for developing applications and the Aqua user interface used to control them, Mac OS X also provides another way of controlling applications via AppleScript, which has been part of the Mac way of life since System 7.1. AppleScript allows you to write simple programs (or scripts) in a language somewhere in between English and BASIC that send instructions to an application, allowing you to easily automate repetitive tasks. With OS X, Apple have introduced AppleScript Studio, which, amongst other things, allows AppleScripts to present a graphical, Aqua front-end to the user, much like that of an ordinary Carbon or Cocoa application.
Unfortunately, applications need to implement support directly for them to be automated via AppleScript, and most music software developers traditionally haven't done this. Instead, cross-platform applications have adopted platform-independent solutions, such as Cubase SX/SL's Macros facility and Sibelius' Manuscript language, and Mac users in general have often used third-party utilities like CE Software's QuicKeys to automate an application via its user interface elements where it couldn't be remote-controlled from AppleScript. However, the latest version of AppleScript also has support for automating an application via its user interface elements, regardless of whether it supports AppleScript or not, which could lead to users finding more creative ways in which to control their applications in OS X.
One of the most important features of a modern computer operating system is its ability to multitask effectively, creating the illusion that many processes are running simultaneously on a single processor. A process is actual computer code, combined with any private data needed by the process and information about the state of the processor. The most common example of a process would be an application.
The Mac operating system first gained the ability to multitask with MultiFinder when System 6 was introduced in 1988, and the type of multitasking used from this point until Mac OS 9 is known as 'cooperative multitasking'. As the name suggests, cooperative multitasking relies on all the processes cooperating with each other to make sure they all get a fair share of the processor's time — in other words, it's up to each process to provide points in its source code where the processor can switch to another process to create the illusion of multitasking.
While cooperative multitasking has some advantages, such as the relatively low processing overheads required to manage the process swapping, it has one significant disadvantage: if a process doesn't (or isn't able to in the event of a crash) pass the ball, as it were, every process on the system will appear to hang up. And if you've ever tried to 'force quit' an application on Mac OS 9 to alleviate a blockage and bring the system back to life, you'll know that this isn't always successful, and a full restart is often necessary.
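Cooperative multitasking is easy to caricature in Python using generators (an illustrative sketch, not how Mac OS 9 was actually implemented): each 'process' must explicitly yield control, and the scheduler can only switch at those points.

```python
def well_behaved(name, steps, log):
    """A cooperative 'process': does a bit of work, then yields so
    the scheduler can switch to somebody else."""
    for i in range(steps):
        log.append(f"{name} step {i}")
        yield  # the explicit point where control is handed back

def scheduler(processes):
    """Round-robin cooperative scheduler. It can only switch when a
    process yields, so a process that never yields (or has crashed)
    starves every other process on the system."""
    while processes:
        proc = processes.pop(0)
        try:
            next(proc)         # run until the process yields...
            processes.append(proc)  # ...then put it at the back of the queue
        except StopIteration:
            pass               # process finished; drop it

log = []
scheduler([well_behaved("A", 2, log), well_behaved("B", 2, log)])
print(log)  # ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Delete the `yield` from `well_behaved` and process A hogs the scheduler until it finishes, which is precisely the failure mode OS 9 users saw when one application locked up the whole machine.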
Mac OS X uses pre-emptive multitasking instead, where responsibility for switching tasks is given to the operating system itself, enabling rogue applications to be put out of their misery without affecting the rest of the processes currently running.
In addition to multitasking, Mac OS X also has more advanced facilities for multithreading, where each running process can consist of multiple threads. Put in plain English, this means that one application can appear to do more than one thing simultaneously, such as processing audio files off-line in the background while allowing you to continue editing in the foreground. Classic and Carbon applications have to be specifically written to take advantage of multithreading, but all Cocoa applications benefit from OS X's native abilities immediately.
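A minimal Python sketch of the same background/foreground pattern, with a hypothetical stand-in for a real audio application's off-line work:

```python
import threading
import time

results = []

def background_render(name):
    # Stand-in for an off-line audio bounce running behind the UI.
    time.sleep(0.2)
    results.append(f"{name} done")

# Start the "bounce" on its own thread; the main thread stays responsive.
worker = threading.Thread(target=background_render, args=("bounce",))
worker.start()
results.append("editing continues")  # foreground work proceeds immediately
worker.join()                        # wait for the background job to finish
print(results)
```

Because the foreground line runs while the worker is still sleeping, the log shows the editing step completing before the bounce does.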
If you're thinking that this all sounds very interesting, but are wondering what OS X's advanced multitasking and multithreading abilities mean for running music and audio software, consider what happens when you factor multiple processors into your working methods. Unlike Mac OS 9, where programs had to be specifically written to take advantage of multiple processors, Mac OS X supports what's known as symmetric multiprocessing, where the operating system itself makes intelligent decisions about how best to make use of the available processors, and the application doesn't even need to know they exist.
Different processes can obviously be assigned to different processors, but, more significantly, different threads can also be assigned to different processors, even if the application itself hasn't been written to be aware of its host's multiple processors. In this way, any application can benefit from multiple processors in OS X, and if there's one area where the use of multiple processors is really useful, it's when running real-time DSP effects and instruments in host-based music and audio software. OS X can effectively share the processing demands of the system, an application like Logic, Cubase or Digital Performer, and all the plug-ins you might be running between different processors in a way that simply wasn't possible under OS 9.
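The same principle can be sketched in Python. The `render` function below is a hypothetical stand-in for a DSP task; the point is that the code asks for parallel work without ever naming a processor, and the operating system's scheduler decides where each worker actually runs.

```python
import multiprocessing as mp

def render(block):
    # A CPU-bound stand-in for a DSP task; the OS scheduler decides
    # which processor actually runs each worker process.
    return sum(i * i for i in range(block))

if __name__ == "__main__":
    # One worker process per available processor by default; the code
    # itself never needs to know how many processors there are.
    with mp.Pool() as pool:
        totals = pool.map(render, [1000, 2000, 3000])
    print(totals)
```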
While having two processors doesn't necessarily double the amount of processing power your computer has, since managing two processors requires a certain amount of horsepower in itself, many Logic users in particular have found they can run twice the number of plug-in effects and instruments on the same dual-processor machine under OS X, compared with the same machine running Logic under OS 9.
Memory management is another crucial aspect of the operating system that's been dramatically improved in OS X. Long-time Mac users won't need reminding that virtual memory always had to be deactivated when running music and audio software, or that you had to manually adjust the amount of memory allocated to each application when fine-tuning performance in previous OS versions. The good news is that these annoyances are now but a distant memory — pun intended.
In Mac OS X, memory management is handled by the deepest, darkest parts of the system, at kernel level, and the amount of memory allocated to each application you launch is automatically handled for you. The same applies to disk-based virtual memory, which is also completely transparent from the user's perspective — gone is the Memory control panel from OS 9, because virtual memory is now dynamically allocated to each application, depending on the needs of that application and the amount of total memory available.
If you want to take a closer look at how Mac OS X is allocating memory, you can run the Process Viewer application (below right), which is located in the Applications/Utilities folder. When the Process Viewer opens, you'll notice that it lists all the processes currently active, which includes many system-related tasks. To reduce this list to something that makes a little more sense, set the Show pop-up menu to User Processes in the top-right of the window. By clicking the More Info toggle at the bottom-left of the Process Viewer window, and choosing the Statistics tab, you can see how much physical (or resident) and virtual memory is being allocated to the selected process at that time.
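You can peek at the same kind of figure programmatically. This Python sketch uses the standard `resource` module (Unix-only, so it works on OS X) to report the current process's own peak resident memory, a rough analogue of the resident figure Process Viewer displays per process.

```python
import os
import resource

# Peak resident set size of this process: a rough analogue of the
# 'resident' memory figure Process Viewer reports for each process.
# Note: ru_maxrss is reported in bytes on Mac OS X but kilobytes on
# Linux, so treat the raw number as indicative only.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(f"pid {os.getpid()}: peak resident set {usage.ru_maxrss}")
```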
An example of the benefits OS X's advanced memory management can offer became clear when I tried running Logic on an iBook that initially had only the factory-fitted 128MB of RAM. Under OS 9, Logic would report that it didn't have enough memory to launch the audio engine; but under OS X, Logic loaded without any errors and was able to play back a demo song using the ES virtual synths. So while OS X still benefits from having as much physical RAM installed as possible, you can be sure that it's being handled efficiently for the best possible performance.
Significantly, Mac OS X also supports what's known as protected memory, which means that if an application crashes, only the memory allocated to that application is affected, leaving all other running applications and the operating system itself intact. By contrast, if an application crashed in OS 9 or earlier, it was very likely that other areas of memory would become corrupted, leaving the system less stable than a one-legged man on stilts — a situation that would inevitably end in a restart. Protected memory is available to all Cocoa and Carbon applications, but not applications running in Classic mode. So if a Classic application crashes, it will probably be necessary to restart Classic mode, although at least you won't have to reboot the whole computer or worry about any applications running outside of Classic mode.
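The principle is easy to demonstrate on any modern OS that gives each process its own protected address space. In this Python sketch, a deliberately crashed child interpreter dies with a non-zero exit code while the parent carries on, much as a crashing Cocoa or Carbon application leaves the rest of OS X running.

```python
import subprocess
import sys

# Deliberately crash a child interpreter: because each process has its
# own protected address space, the parent is entirely unaffected.
crash = subprocess.run(
    [sys.executable, "-c", "raise RuntimeError('plug-in blew up')"],
    capture_output=True,
)
print("child exit code:", crash.returncode)  # non-zero: the child died
print("parent still running")
```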
Apple is hoping that OS X will provide the backbone of their operating system strategy for the next 10 to 20 years, and although it will take a few more revisions to polish to perfection, from my current standpoint there's no doubt that they've produced one of the most technically impressive, elegant and easy-to-use operating systems around, and one robust enough to survive for that length of time. But OS X is also perhaps the first mainstream operating system (I nod respectfully to BeOS here) to be developed from the ground up with the requirements of musicians and audio professionals in mind. In the same way that Apple's core operating system engineers have built in support for many of the latest standards in computing, so the music and audio engineers have included support for standards like USB and FireWire-based audio and MIDI class devices, which really will make life easier for the Mac-based musician.
While musicians and audio engineers haven't enjoyed the smoothest transition from OS 9 to OS X, with such a radical shift in technology it's perhaps inevitable that some problems have to be resolved before a better system can be fully adopted. Ultimately, though, as mentioned earlier, Mac users will have to either embrace OS X or choose another platform. And with the sheer technical brilliance of OS X, the gorgeous look of Aqua and Quartz, the built-in improvements to audio and MIDI handling, and the improved stability and reliability, I can't imagine that any Mac user who makes the switch to OS X will ever want to go back. OS 9? What's that?