
Using MIDI Controllers With Reason
Reason has two completely different systems for configuring MIDI sources. We unpick some of the confusion around this to find the best ways to sequence and play Reason from external devices.
The Haas Effect is a psychoacoustic effect named after Helmut Haas, who first described it in 1949 and clarified his findings in 1951, although it was actually discovered by Lothar Cremer the previous year and called 'the law of the first wavefront'. It is also known as the Precedence Effect, which is a far more descriptive term.
If one sound wave arrives at the ear shortly after another, the two are heard as a single sound, with the first arrival being used to determine the perceived sound location for the merged sound, even if the later sound is louder. For simple transient sounds, the time window for the two sounds to merge is below 5ms, but for more complex sounds like speech it can be as much as 40ms. A longer gap between the two sounds is usually perceived as an echo.
It is this precedence effect that allows accurate sound localisation in reverberant locations, since only the direct sound determines the perceived source location, and the later reverberant reflections are merged into the first sound.
However, if the second sound is significantly louder than the first, it can become dominant in the perception of source location. Haas found that for time differences of up to 30ms, the first arrival determined the perceived source location even when the second arrival was as much as 10dB louder. Only when the second arrival was around 15dB louder did it become dominant in determining the source location.
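One practical studio application of the precedence effect is 'Haas panning': duplicating a mono signal and delaying one channel by a few milliseconds, so the pair fuses into a single sound localised towards the earlier channel. The sketch below illustrates the arithmetic, assuming a 44.1kHz sample rate; the function name haas_widen and the 15ms delay are illustrative choices, not a fixed recipe.

```python
import numpy as np

SAMPLE_RATE = 44100  # assumed sample rate for the example


def haas_widen(mono, delay_ms=15.0, sample_rate=SAMPLE_RATE):
    """Build a stereo pair from a mono signal by delaying the right
    channel within the Haas window (well under ~30ms), so the two
    arrivals fuse and the image pulls towards the undelayed left side."""
    delay_samples = int(round(delay_ms * sample_rate / 1000.0))
    # Pad so both channels end up the same length.
    left = np.concatenate([mono, np.zeros(delay_samples)])
    right = np.concatenate([np.zeros(delay_samples), mono])
    return np.stack([left, right], axis=1)


# One second of a 440Hz sine as test material.
tone = np.sin(2 * np.pi * 440 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
stereo = haas_widen(tone, delay_ms=15.0)
# 15ms at 44.1kHz rounds to 662 samples of inter-channel delay.
```

Keeping the delay below roughly 30ms (and the level difference under about 10dB) keeps the two channels inside the fusion window; push the delay much longer and the copy starts to read as a discrete echo instead.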