We explain how to create MIDI string arrangements that don’t sound like MIDI string arrangements!
A lot of musicians like using string ensembles in their productions, whether it’s for sweeping orchestral soundtrack cues, anthemic rock/pop choruses, or slippery disco‑funk countermelodies. Of course, most of us don’t have a real string section on speed‑dial; neither do we have a recording setup comprehensive enough to do it justice even if we did! So we ruefully fire up some sample library or virtual instrument and hope for the best — a tactic that rarely ends very well, to be honest. Indeed, the majority of ‘canned’ string‑ensemble parts I hear coming out of project studios only sound at all plausible if you bury them in the mix. The moment you try fading the tracks up as a feature, their manifest fakery just makes your whole record feel cheap.
But it doesn’t have to be this way, because there are ways to get remarkably realistic string‑ensemble sounds when working with MIDI strings, and in this article I’d like to suggest a few tips that can really help in practice.
Divide & Conquer
The first thing to realise is that you can’t just play your strings patch the same way you would a synth patch if you want the best results. String ensembles don’t play like that, because each instrument section operates independently, tailoring its phrasing and tone to suit the nature of the line, whereas a keyboard player’s fingers are all driven by the same musician!
So one of the things I always recommend is splitting out each internal MIDI line as a separate track. For a start, this gives you much more flexibility to adjust the timbre of the ensemble sound to suit the musical context. You see, the ranges of the different string instruments overlap a great deal, but if you feed your MIDI data into an all‑in‑one ‘ensemble strings’ patch it’ll give you no choice about which instruments play which notes. There’s a world of difference between how Middle C sounds on double basses and how it sounds on violins, and part of the reason that canned strings sound a bit rubbish is that they don’t reflect this reality. In particular, melodic parts will sound a lot more convincing if they’re not hopping around between different instruments.
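To make the range-overlap point concrete, here’s a minimal sketch (not any DAW’s actual behaviour, and the ranges and note format are illustrative only) of how you might split a single combined MIDI part into per-section parts by pitch range, so each line can then drive its own instrument patch. The preference order is a crude stand-in for real arranging judgment:

```python
# Hypothetical sketch: split a combined 'ensemble' MIDI part into
# per-section parts by pitch range. Each note: (start_beat, pitch, velocity).

SECTION_RANGES = {
    "double_bass": (28, 55),   # approximate sounding ranges, for illustration
    "cello":       (36, 76),
    "viola":       (48, 81),
    "violin":      (55, 103),
}

def assign_section(pitch, preferred_order=("violin", "viola", "cello", "double_bass")):
    """Return the first section (in preference order) whose range covers the pitch."""
    for name in preferred_order:
        lo, hi = SECTION_RANGES[name]
        if lo <= pitch <= hi:
            return name
    return None  # out of range for every section

def split_by_section(notes):
    """Group notes into one list per section, keeping input order."""
    parts = {name: [] for name in SECTION_RANGES}
    for note in notes:
        section = assign_section(note[1])
        if section is not None:
            parts[section].append(note)
    return parts
```

Note that a real arranger would also keep a melody on one section rather than letting it hop between instruments, which is exactly the flexibility separate tracks give you.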
And speaking of melodic parts, the other big reason to split each MIDI line onto a separate track is that this makes it possible to take better advantage of the more advanced articulation and phrasing options provided by your choice of virtual instrument. Orchestral libraries now often provide special legato patches, for instance, which render overlapping MIDI notes as slurred phrases, something that can immediately improve the sense of musical realism. Things like accents, staccatos, and swells are also usually much better handled using dedicated articulation‑specific patches than by simple MIDI programming tweaks, and it’s useful to have the flexibility to switch between those articulations on a per‑part basis, rather than just switching the whole ensemble at once.
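The legato behaviour described above hinges on note overlap: the patch triggers a slurred transition when a new note starts before the previous one ends. As a rough sketch of that logic (a simplified model, not any particular library’s engine), you could identify which note-to-note transitions in a part would be rendered as slurs like this:

```python
def slurred_transitions(notes):
    """Given (start, end, pitch) tuples sorted by start time, return the
    index pairs (i, i+1) where note i+1 begins before note i ends --
    the overlaps a legato patch would typically render as slurs."""
    pairs = []
    for i in range(len(notes) - 1):
        next_start = notes[i + 1][0]
        curr_end = notes[i][1]
        if next_start < curr_end:   # overlap -> slurred transition
            pairs.append((i, i + 1))
    return pairs
```

In practice this means nudging MIDI note lengths so that phrases you want slurred overlap slightly, and detached phrases leave small gaps.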
More advanced string‑ensemble sample libraries offer keyswitches that can select different articulations. Making use of these can help contribute to a more convincing‑sounding string arrangement.
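Keyswitching generally works by reserving very low MIDI notes, outside the playable range, to select the articulation for the notes that follow. Here’s a toy model of that mechanism (the keyswitch pitches and threshold here are invented for illustration — check your library’s manual for the real assignments):

```python
# Hypothetical keyswitch map: C0, C#0, D0 select articulations (made-up values).
KEYSWITCHES = {24: "sustain", 25: "staccato", 26: "tremolo"}
PLAYABLE_FROM = 36  # assume anything below this is a keyswitch, not a sounding note

def resolve_articulations(events, default="sustain"):
    """Walk (pitch, velocity) events in time order; tag each playable
    note with the articulation set by the most recent keyswitch."""
    current = default
    tagged = []
    for pitch, velocity in events:
        if pitch < PLAYABLE_FROM and pitch in KEYSWITCHES:
            current = KEYSWITCHES[pitch]   # silent articulation change
        else:
            tagged.append((pitch, velocity, current))
    return tagged
```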
Velocity Versus Volume
That said, there is plenty you can usually do to improve a string ensemble sound with your MIDI programming too. The most important consideration, in my view, is how you shape volume/expression and velocity data in conjunction with each other. It’s vital to realise that volume and expression data typically affect only the level of a string section (much like your DAW’s fader automation), but that velocity data will also change the string section’s timbre — typically by making higher‑velocity notes more strident, as you’d expect if the player were playing with greater intensity.
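One way to picture that division of labour is as a toy playback model (an assumption-laden sketch, not any real instrument’s engine): velocity selects which of a handful of recorded dynamic layers fires, changing the timbre, while expression (CC11) merely scales the level of whatever layer plays.

```python
# Toy model: three velocity-switched sample layers (the layer count and
# mapping are illustrative only).
LAYERS = ["pp", "mf", "ff"]

def layer_for_velocity(velocity):
    """Map MIDI velocity 1-127 onto one of the dynamic layers."""
    index = min((velocity - 1) * len(LAYERS) // 127, len(LAYERS) - 1)
    return LAYERS[index]

def playback(velocity, expression_cc11):
    """Return (sample_layer, gain): velocity changes which sample fires
    (timbre), expression only changes how loud it comes back (level)."""
    gain = expression_cc11 / 127.0
    return layer_for_velocity(velocity), round(gain, 3)
```

So halving CC11 leaves a strident ‘ff’ note sounding strident, just quieter, whereas halving velocity can swap it for a gentler sample altogether.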
With this in mind, I usually prefer to adjust velocity data first to achieve a suitable overall timbre and believable musical phrasing, and only then balance the parts against each other (and indeed the rest of the mix) using volume/expression data or DAW fader automation. Mind you, if you’re using budget‑friendly virtual instruments, you may find that they only have two or three different samples available per note to cater for all 127 velocity values, so I do sometimes feel the need to boost/cut the level of some notes with volume data for phrasing purposes in order to avoid...