I was listening to Conor Maynard’s YouTube cover of Shawn Mendes’ ‘Stitches’ and wondered if you could provide any insight into what he used to get that huge, rich vocal tone.
I’ve got to the point at mixdown where I’d usually be thinking about sending everything through a couple of global reverbs/delays with a view to simulating a soundstage. Would this have actually been possible for a mix engineer in that decade?
Why do people put delays and reverbs on a separate track instead of putting them on the same track as the EQs and compressors? Do you have to do this?
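The send/return routing being asked about here can be sketched in a few lines of numpy. Everything below is made up for illustration: the noise 'tracks', the send levels, and the one-echo stand-in for a real reverb plug-in.

```python
import numpy as np

sr = 44100  # assumed sample rate (Hz)

def toy_reverb(x, delay_ms=120.0, gain=0.5):
    """Stand-in for a reverb plug-in: a single feedback echo."""
    d = int(sr * delay_ms / 1000)
    y = x.copy()
    for n in range(d, len(y)):
        y[n] += gain * y[n - d]
    return y

# Placeholder 'tracks' (one second of noise each).
vocal = np.random.default_rng(1).standard_normal(sr)
guitar = np.random.default_rng(2).standard_normal(sr)

# Send/return routing: each track taps an individually set send level into
# ONE shared reverb instance, which runs 100 percent wet...
send_bus = 0.2 * vocal + 0.3 * guitar   # per-track send levels
reverb_return = toy_reverb(send_bus)    # a single effect instance

# ...and the wet signal comes back on its own return channel, where it can
# be EQ'd, compressed and faded independently of the dry tracks.
mix = vocal + guitar + 0.8 * reverb_return
```

The point of the separate return channel is visible in the last line: one plug-in serves every track, each track's wetness is set by its send level, and the effect itself can be processed without touching the dry signals.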
In the May 2015 Mix Rescue article, Mike Senior talks about how he bounced out his mix a number of times to compare it with his references. I understand the purpose of referencing, but what I don’t understand is why he bounced out the mix.
I’ve put a lot of effort into creating and editing a recording of a solo mandolin. Although I like the final result a lot, on reflection the tone seems too trebly and cold.
I’m an aspiring electronic musician, and am hoping to create some music with real, speaker-shaking low-end impact, but the interaction between kick and bass still puzzles me.
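One technique that often comes up in this context is sidechain ducking, where the bass level dips briefly on each kick hit so the two don't pile up in the low end. The sketch below uses made-up values throughout (a 55Hz bass, kicks every half-second, a 50ms release) purely to show the idea, not any particular compressor design.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
bass = 0.5 * np.sin(2 * np.pi * 55 * t)          # sustained 55 Hz bass line

kick_times = [0.0, 0.5]                          # kick hits (seconds) -- assumed
kick_env = np.zeros(sr)
for kt in kick_times:
    n0 = int(kt * sr)
    seg = np.arange(sr - n0)
    kick_env[n0:] += np.exp(-seg / (0.05 * sr))  # ~50 ms decay per hit

# Duck the bass by up to ~10 dB (gain of 0.3) while each kick rings.
duck = 1.0 - 0.7 * np.clip(kick_env, 0.0, 1.0)
bass_ducked = bass * duck                        # bass makes room for the kick
```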
I’ve been using the Haas delay effect to add some nice width to my guitar parts. It sounds great, but I’ve noticed that when I listen in mono my guitars pretty much disappear from the mix. Any advice?
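For anyone wondering why this happens, the mono disappearance falls straight out of the maths of summing a signal with a delayed copy of itself: the result is a comb filter with deep nulls across the spectrum. The numpy sketch below uses an assumed 15ms delay and noise as a stand-in for the guitar.

```python
import numpy as np

sr = 44100                      # assumed sample rate (Hz)
delay_ms = 15.0                 # a typical Haas-range delay -- assumed value
d = int(sr * delay_ms / 1000)   # delay in samples

x = np.random.default_rng(0).standard_normal(sr)  # 1 s of noise as the 'guitar'

left = x
right = np.concatenate([np.zeros(d), x[:-d]])     # delayed copy for width

mono = 0.5 * (left + right)     # what a mono radio or phone speaker hears

# Theory: |H(f)| = |1 + exp(-2j*pi*f*delay)| has nulls at f = (2k+1)/(2*delay).
delay_s = delay_ms / 1000
nulls = [(2 * k + 1) / (2 * delay_s) for k in range(4)]
print("predicted mono nulls (Hz):", [round(f, 1) for f in nulls])
# -> roughly 33, 100, 167, 233 Hz, and every 66.7 Hz above that: big chunks
#    of the guitar's spectrum cancel, so it 'disappears' in mono.
```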
I’m really interested in the vocal sound on Haim’s debut album, Days Are Gone. Could you tell me what sort of processing might have been used to achieve it?
I’ve listened to music recorded on tape and the ‘warmth’ seems to be largely due to the soft-saturation characteristics of the tape and other non-linear components in the signal chain. This being the case, won’t a simpler valve emulation or mild overdrive plug-in achieve pretty much the same result?
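As a toy illustration of the soft saturation the question mentions, the sketch below pushes a sine through a static tanh waveshaper and measures the harmonics it creates. The tanh curve and drive value are arbitrary stand-ins, not a model of any specific tape machine or valve stage.

```python
import numpy as np

sr = 48000
f0 = 100.0                                   # test-tone frequency (Hz)
t = np.arange(sr) / sr                       # 1 s, so FFT bins land on 1 Hz
tone = np.sin(2 * np.pi * f0 * t)

drive = 3.0                                  # assumed drive amount
saturated = np.tanh(drive * tone) / np.tanh(drive)  # peak-normalised soft clip

spectrum = np.abs(np.fft.rfft(saturated)) / len(t)
for harmonic in (1, 2, 3, 5):
    idx = int(round(harmonic * f0))          # bin spacing is exactly 1 Hz here
    level_db = 20 * np.log10(spectrum[idx] + 1e-12)
    print(f"{harmonic * f0:5.0f} Hz: {level_db:6.1f} dB")
# tanh is symmetrical, so only odd harmonics appear. Real tape also has
# level- and frequency-dependent behaviour (bias, hysteresis, head bump)
# that a static waveshaper like this can't reproduce.
```

The last comment is the crux of the question: a static overdrive curve captures only part of what tape does, which is why the two don't sound identical even when the harmonic content looks similar.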
Tracks that I mix in my home studio tend to sound really bad in my car, but when I correct the mixes so they sound good in the car, they sound awful over my monitors — so what’s going wrong?