Overview: A common habit among amateur producers and engineers is reaching for effects (i.e., reverb, delay, compression, chorus, flanger, etc.) when they aren't necessary, or in many cases overusing them. In this article, I'd like to address situations where effects hinder a mix, and others where they benefit it.
The first thing that comes to mind when thinking about overused effects is the classic scenario of "too much reverb". Drenching your tracks with reverb can take away from the intimacy of a performance, and often adds unwanted clutter to the final mix. People frequently add reverb on instinct because it sounds cool, or as a way to cover up an otherwise poor recording (myself included when I first started). Just because reverb makes a sound larger and more epic on its own doesn't mean it will in the context of the mix.
When you consider what reverb actually is in the physical world, it helps you understand how to use it better in the digital world. In basic terms, reverberation is created when a sound source reflects off the surfaces in a space; the reflections combine to build an "echo" of the sound before it decays and is absorbed by its surroundings. Depending on the conditions of the room, the reverb from a source can sound drastically different. Consider the following situation: you have a portable speaker connected to a CD player with a recording of a dry, unaffected vocalist on it, and a microphone to record the sound of the vocalist coming out of the speaker. Depending on what kind of space you record the speaker in, you are going to capture different characteristics from that room. In the digital world, this is what convolution reverb is: capturing the impulse response of a room and mathematically applying it to a signal so that it sounds like that specific space. So let's say you record this in a medium-to-large cathedral. What are you going to get? A fairly washed-out, warm-sounding recording with a long decay time. Because the space is large and its surfaces absorb high frequencies more readily, the reflections that come back are dominated by longer wavelengths (i.e., low frequencies), and they take a long time to decay because the reflections travel farther between surfaces. In the opposite scenario, recording the speaker in a small bathroom, you get a quick decay with more high-frequency content (shorter wavelengths). So just by changing the space, you get different tones, decay times, and spatial cues. What does this mean in terms of using reverb in a mix?
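To make the convolution idea concrete, here is a minimal sketch of how a convolution reverb works. This is my own illustration, not something from the article: the function names and parameters are made up, and the "impulse response" is faked with exponentially decaying noise standing in for a measured room.

```python
import numpy as np

def synthetic_impulse_response(sr=44100, decay_s=2.0, seed=0):
    """Exponentially decaying noise as a stand-in for a measured room IR.
    A longer decay_s mimics a larger space (cathedral vs. bathroom)."""
    n = int(sr * decay_s)
    noise = np.random.default_rng(seed).standard_normal(n)
    # time constant chosen so the tail is down roughly -60 dB by decay_s
    envelope = np.exp(-np.arange(n) / (sr * decay_s / 6.9))
    return noise * envelope

def convolution_reverb(dry, ir, wet=0.3):
    """Convolve the dry signal with the impulse response, then blend wet/dry."""
    wet_sig = np.convolve(dry, ir)[: len(dry)]
    peak = np.max(np.abs(wet_sig))
    if peak > 0:
        wet_sig = wet_sig / peak  # normalize so the tail can't clip
    return (1 - wet) * dry + wet * wet_sig
```

A real convolution plug-in loads a recorded IR of an actual space instead of synthetic noise, but the math is the same: every sample of the dry signal excites a copy of the room's response.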
Before I answer that, first consider whether the sound needs reverb at all. A lot of the time, a dry sound works with the song. Or if the space you recorded in already has sufficient reverb, leave it alone; no need to overdo it. Just ask: is this what the mix needs? And if the answer is undoubtedly yes, then it's time to choose which verb to use.
There is a lot of talk among engineers about whether you should use more than one reverb in a song. The idea is that when you use more than one reverb, you are putting sounds in different spaces, which makes it harder for the listener's ears to localize them in the stereo field. But finding one reverb that works on every instrument is often tough (and in your humble narrator's opinion, this matters more for live music than electronic music), so using more than one isn't taboo if done correctly. Say you've sent all the tracks you want reverb on to a single medium-hall reverb, but you notice the vocal needs more. When you turn the send up on the vocal, it sounds too far back and lost; turn it back down, and it sounds too close. There seems to be no middle ground where the vocal sits right. What's happening is that everything else sounds good with that specific reverb, but its decay and darkness are too much for the vocal. The fix? Pull up a brighter reverb with a shorter decay time on the vocal track. This gives it space with the right amount of brightness and intimacy; then bus a small amount of the vocal to the original reverb. This makes it stand out, gives it a little more air, and also glues it to the room everything else is sent to.
More tips on reverb settings:
I mention decay time a lot in this section because it's very important to set correctly! Always ask whether the decay of a reverb is too long or too short. If it is too long, the tail will step on the next transient and create a muddled mess. A shorter decay keeps the tails from stepping on each other, and leaves some space in between to preserve dynamics (especially after everything gets squashed during mastering). But if the decay is too short, you won't hear the effect of the reverb as much. So it's about finding a balance that works both aesthetically and technically.
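One way to keep a decay from stepping on the next transient is to budget it against the tempo. This quick helper is my own rule-of-thumb sketch, not something from the article; the 0.8 "headroom" factor is an arbitrary starting point to leave a gap before the next hit.

```python
def beat_interval_ms(bpm, division=4):
    """Milliseconds between successive notes; division=4 is a quarter note."""
    return 60000.0 / bpm * (4.0 / division)

def suggested_decay_ms(bpm, division=4, headroom=0.8):
    """Rule of thumb: let the tail die out a bit before the next hit lands."""
    return beat_interval_ms(bpm, division) * headroom
```

At 120 BPM a quarter note lasts 500 ms, so a decay around 400 ms leaves a little breathing room before the next hit; your ears still get the final vote.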
Many reverbs, such as the RVerb by Waves, come with a built-in EQ. When you add reverb to low frequencies, the long wavelengths reverberate and tend to interfere with and mask each other. What you get is a mud puddle, so it is wise to roll off the low end on the reverb so that it mainly affects the mids and some of the highs. Many reverbs roll off high frequencies automatically, or have a setting to do so. This is because, as I explained earlier, medium and large rooms sound naturally dark due to the longer (and lower) frequencies that reflect back. Rolling off unwanted frequencies on your reverb can also add headroom in your final mix ; )
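Rolling low end off a reverb return can be as simple as a high-pass filter in front of (or after) the reverb. Below is my own minimal illustration using a first-order one-pole high-pass; a real plug-in EQ would offer steeper slopes, but the idea is the same.

```python
import numpy as np

def one_pole_highpass(x, sr, cutoff_hz):
    """First-order high-pass: attenuates content below cutoff_hz.
    Useful for cleaning mud (and DC) out of a reverb return."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / sr
    alpha = rc / (rc + dt)
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y
```

Sweeping the cutoff between roughly 100 and 400 Hz on a reverb return is a common starting range for clearing out low-end buildup.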
This may be obvious, but be subtle! When the effect is the dominant part of the track, it can easily sound overprocessed, or amateur. Less is more!
So now that we understand reverb, next we'll dive into the do's and don'ts of delays.
Delay is similar to reverb in a lot of ways, and depending on the delay time and its character, it can occasionally substitute for reverb. In fact, delays are often preferable to reverb because 1) they add glue and cohesion without a reverb tail to interfere, 2) they don't clutter the stereo field while still sounding effected, and 3) they can sound more upfront and intimate while still sounding wet. Number 1 is fairly obvious: as with reverb, sending multiple tracks to the same delay adds cohesion among them. Secondly, because a delay is mono (unless you use a stereo delay), it doesn't spread across the stereo channels, yet still sounds effected and wet. This is good because you can now use the wide stereo field for something else, such as a reverb or other wide-panned elements. And lastly, because the sound isn't being pushed back into a reverberant space, the delay manages to keep it mostly upfront in the mix.
For the most part you can use your ears to hear which delay times work and which don't. Aside from the basic 1/4, 1/8th, and 1/16th note intervals, many delays also offer dotted and triplet timings. When you come across these oddballs, use your ears and make sure the result doesn't sound too cluttered, or simply out of groove with the song. From personal experience, dotted 1/8th notes often work well in a typical 4/4 track, because they don't really interfere with anything else; normally nothing else sits on a dotted timing. From a mix perspective, that works out great. But there have been times where it doesn't quite swing with the drums, or it competes with something else in its place, so again, use your ears.
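The note values above translate directly into milliseconds once you know the tempo. Here is a small calculator (my own sketch; the function name and `feel` labels are made up) using the standard conversions: a quarter note is 60000/BPM ms, dotted values are 1.5x, and triplets are 2/3x.

```python
def delay_time_ms(bpm, division=8, feel="straight"):
    """Delay time in ms for a note value at a given tempo.
    division: 4 = quarter note, 8 = eighth, 16 = sixteenth.
    feel: 'straight', 'dotted' (x1.5), or 'triplet' (x2/3)."""
    base = 60000.0 / bpm * (4.0 / division)
    factors = {"straight": 1.0, "dotted": 1.5, "triplet": 2.0 / 3.0}
    return base * factors[feel]
```

At 120 BPM, a straight 1/8th is 250 ms and the dotted 1/8th mentioned above is 375 ms, which is why tempo-synced delays expose exactly these options.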
No matter what plug-in or hardware compressor you use, all compressors perform essentially the same function: they limit the peaks of the audio signal and bring up the quiet parts. There are a variety of applications for compression. It can smooth out volume differences, bring up parts that are inaudible, reduce hard transients that kill headroom, glue tracks together, push tracks forward, or even be used as an effect. Compression can add a lot to a vocal line, for example. The human voice has wonderful subtleties and nuances that sometimes get lost in the mix; with the right amount of compression, you can bring these characteristics out to create a more intimate sound. Compression can also level things out, such as a bass line or a really dynamic guitarist. Used in the right amount, compression can work wonders for a track. However, used incorrectly or overused, it can destroy one.
When considering compression on a track, it's vital to ask whether it needs it. Producers and engineers often compress way too much, and as a result kill all the dynamics of a given track, and therefore the entire mix. In fact, when I first started out in my teens, I habitually compressed everything because, as with reverb, it made things sound "full" and "epic". But I quickly realized that when you make everything sound "epic", nothing sounds epic. This is also why they tell you not to EQ or adjust effects while a track is soloed: the listener will never hear that channel soloed, so focus on how it sounds in the mix. This is how I approach compression. Sometimes I think something sounds overcompressed when I hear it on its own, but in the context of the mix, it sounds just right. Conversely, sometimes I want to compress a sound I hear alone to make it fuller, but in the mix, other sounds fill in that fullness. As a general rule, it's better to undercompress than overcompress, seeing as more compression will be added during mastering.
So let's say you think something needs compression. You pull up a compressor and go straight to the pull-down menu of presets. That's a start, but you're not quite done yet.
There is no one setting that works on everything. Any time you compress, you have to consider a variety of characteristics of the sound. Are there a lot of hard transients? Are there quiet parts that need to be brought up? Is the sound thin; does it need to be fatter? Let's say you want to compress a snare drum. Normally, you'd open a compressor in your DAW, click on the "snare" preset, and go from there. While this is a good place to start, it's important to keep tweaking the settings for the specific snare sound and what it's doing in the track. For example, the "snare" preset doesn't know how fast your snare hits come in succession, which determines how long your release should be. With a fast succession of hits and a long release, you'll suck the dynamics out of the snare hits that follow, because the compressor is stepping on the transients, and therefore the attack, of the following shots. Say you dial in the snare and then decide to compress the kick, so you go to the "kick" preset. Another thing presets don't account for is the envelope of the sound, which (aside from the release) matters for the compressor's attack setting. If the kick has a quick sustain and decay, setting the compressor's attack to come in right after the transient will compress and bring up the kick's sustain before it decays. The result sounds fatter, because the part that was quiet (the decay) has been brought up. But if the compressor's attack is too short, it will destroy the transient click and attack of the kick (say goodbye to intelligibility and punch!). And while the ratio in a "kick" preset may be close, you may still want to adjust it to taste: does it need more compression, or limiting, or something in between?
Adjusting the ratio from a softer 2:1-4:1 to a harder 6:1-10:1 makes a big difference to both the peaks and the quiet parts of the signal. Generally speaking, softer ratios such as 3:1 bring up the quieter parts while compressing the peaks, whereas harder ratios focus more on compressing, or limiting, the peaks. So first identify whether a track needs compression at all, and if it does, tweak the settings to fit the specific sound instead of settling for the preset!
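To show how threshold, ratio, attack, and release interact, here is a bare-bones feed-forward compressor sketch. This is my own illustration, not any particular plug-in's algorithm: an envelope follower with separate attack/release smoothing drives a static gain curve set by the threshold and ratio.

```python
import numpy as np

def compress(x, sr, threshold_db=-18.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0):
    """Feed-forward compressor: signal above threshold_db is reduced
    by (1 - 1/ratio) dB per dB of overshoot, with attack/release smoothing."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.empty(len(x))
    for n, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel  # track rises fast, falls slow
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out[n] = s * 10.0 ** (gain_db / 20.0)
    return out
```

With a -18 dB threshold and 4:1 ratio, a 0 dBFS signal settles at 18 x 0.75 = 13.5 dB of gain reduction; pushing the ratio toward 10:1 moves the behavior toward limiting, exactly the soft-vs-hard distinction above.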
Other effects: Chorus, Flanger, Phaser
These are the more esoteric effects that can be used not only for experimental purposes but also to enhance a mix. For example, if a bass sounds too narrow or dull, a subtle chorus can work wonders by adding harmonics and spreading it a little wider in the stereo field. A chorus can also work on vocals, and done right, it won't even sound like you're using a chorus (not in the traditional sense, at least). Similarly, a flanger or phaser can add a welcome change to a hi-hat or guitar that sounds stale and robotic. Adding movement with these effects makes the track more interesting to listen to, and can help tracks stand out from each other. I like to use auto-pan plug-ins on certain percussion and vocals, spreading them just a little left and right at a 1/4 note interval. Instead of sounding flat and deadpanned in the middle, they dance around the center a little and make for a more interesting listen.
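The auto-pan trick above can be sketched as a beat-synced LFO driving an equal-power pan. This is my own minimal illustration (function name and parameters are made up), with a small depth to keep the movement subtle:

```python
import numpy as np

def autopan(mono, sr, bpm, depth=0.2, division=4):
    """Pan a mono signal around center with a sine LFO synced to the beat.
    depth=0.2 keeps movement subtle; division=4 = one cycle per quarter note."""
    rate_hz = bpm / 60.0 * (division / 4.0)
    t = np.arange(len(mono)) / sr
    pan = depth * np.sin(2.0 * np.pi * rate_hz * t)  # -depth..+depth, 0 = center
    angle = (pan + 1.0) * np.pi / 4.0                # equal-power pan law
    return mono * np.cos(angle), mono * np.sin(angle)
```

The equal-power law keeps perceived loudness steady as the sound drifts off center, so the listener hears motion rather than a level change.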
Hopefully through these examples, I’ve shown you a thing or two about the importance of using effects to your advantage. The point of this article is to show that effects can be used as tools to create interest, and enhance a mix, rather than just adding them because they sound “cool”. Whether it’s reverb, delay, compression or other effects, using anything in audio inappropriately can take the audience out of the experience of the song. So remember to always use effects in moderation, and most importantly, when they are needed.
Dan Zorn, Engineer
Studio 11 Chicago
209 West Lake Street