Is there a preferred method of using normalisation? I'm not a big fan of normalising every single waveform I have in my session (specifically voices), as my understanding is that if you record at a decent level you don't need to normalise. I believe it's an unnecessary process when you have the option of using dynamics processing to achieve the same, if not better, results. I do understand that not setting the dynamics properly could squash the life out of the audio.
Editor In Chief Paul White replies: Normalising simply scales up the level so that the loudest peak reaches the top of the digital scale (0dBFS). It doesn't change the sound or the dynamics, but because it is a mathematical process it can lose a little resolution, though in practice this is irrelevant if you're recording at 24-bit. (Incidentally, when recording at 24-bit, having your peak levels between -6dBFS and -12dBFS is fine.)
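To make the process concrete, here's a minimal sketch of peak normalisation in Python. It assumes floating-point samples in the -1.0 to 1.0 range; the function name and target parameter are illustrative, not taken from any particular DAW or library:

```python
# Peak normalisation sketch: scale every sample by one constant gain so the
# loudest peak reaches a target level (0dBFS by default). Because the same
# gain is applied throughout, the dynamics are unchanged.

def normalise(samples, target_dbfs=0.0):
    """Scale samples so the highest absolute peak reaches target_dbfs."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20)  # convert dBFS to linear gain
    gain = target_linear / peak
    return [s * gain for s in samples]

audio = [0.1, -0.25, 0.5, -0.05]
print(normalise(audio))  # peak of 0.5 doubled to 1.0: [0.2, -0.5, 1.0, -0.1]
```

Note that the quieter samples are scaled by exactly the same factor as the peak, which is why normalising never changes the balance between loud and soft passages.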
If I'm given something to work on where the tracks are very under-recorded, I often normalise them first. Otherwise I'll sort out any level changes after applying whatever other processing I need, as EQs and compressors often have output level controls that make adjustment easy without normalising.