Studio 11’s Glossary of Recording Studio Terms (That Every Recording Artist Should Know!)


Take:

Perhaps a more obvious one to some, but a “take” is a single run-through of a recorded phrase. For example, if someone says, “I don’t like the first take as much as the second,” it means you tried the same thing twice, and they liked the second time around better. And if it’s still not perfect, that’s when you…


Punch (In/Out):

To “punch” a take means to redo a certain part of a phrase. For example, if you sang the line “Blackbird singing in the dead of night” (which you shouldn’t, because that’s blatant copyright infringement) and you didn’t like the way you sang “Blackbird singing,” you can punch, or redo, only that part without having to redo the entire phrase. This means the engineer either hits the record button for those words only (overwriting the previous lines), or takes the bad parts out and puts you on another track to record during that section.



Comp:

To consolidate multiple takes in an attempt to use the best parts of each one, creating one good take.



Double (Stack):

To layer another identical (or closely similar) take on top of a previously recorded one. This is typically used to make parts sound “bigger” or more full. Occasionally the engineer or artist may even want to “triple” for an even larger sound. This process is also known as overdubbing (adding additional parts to existing parts).


Ins and Outs:

To layer a second vocal take on top of another, but only doubling selected parts. This is used when one wants to emphasize or embellish certain words. Ins and outs can occur anytime throughout the verses or hooks, but generally speaking they emphasize the key words in a sentence. Think back to grade school, when your English teacher asked you to go up to the board and underline the nouns and verbs (or perhaps the “subjects” and “action” words) but leave out the articles and filler words. It’s kind of like that.


Ad Libs:

Adding a track of extra vocals on top of a previously recorded phrase. Unlike ins and outs, ad libs don’t have to consist of the same words, or even the same lyrical rhythm; they can have different lyrics and flow altogether. Ad libs create a “call and response” with the lyrics. For example, if the line is “Baby, you’re the one I want to be with,” an ad lib that would fit might be something along the lines of “Only you” or “It’s true.” Or if the line is “Gotta get that money,” an example of an ad lib would be “Yessir” or “Gotta get it.”


Reverb and Delay:

Many artists who are unfamiliar with the studio world get these terms mixed up. They are related, but create different end results. Reverb, or reverberation, imitates a space. It’s a familiar sound that we all know from churches, halls, bathrooms and concert venues. Reverb is used to make a sound seem further away, or perhaps hidden behind other layers. Delay, on the other hand, or as some artists call it, “echo,” is an effect that takes a sound and repeats it, usually in time with the beat. Think of yelling “Who is the king of Siam?” into the Grand Canyon; the repeating sound that you get back is what’s known as delay.
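To make the distinction concrete, here is a minimal sketch (purely illustrative, not how any particular plug-in is coded) of delay as repeats of the input signal, each one pushed later in time and made quieter by a feedback amount:

```python
# Minimal feedback-delay sketch: each echo is the input signal delayed by
# `delay_samples` more samples and scaled by `feedback` on every pass.
def apply_delay(signal, delay_samples, feedback, repeats):
    out = list(signal) + [0.0] * (delay_samples * repeats)
    for n in range(1, repeats + 1):
        gain = feedback ** n  # each repeat is quieter than the last
        for i, s in enumerate(signal):
            out[i + n * delay_samples] += s * gain
    return out

# A single impulse yields a chain of fading repeats, like the canyon echo.
echoes = apply_delay([1.0], delay_samples=4, feedback=0.5, repeats=3)
```

The `feedback` value controls how quickly the repeats die away, much like the feedback knob on a delay unit.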


Scratch Track:

A scratch track is a guide track. A scratch track, or scratch take, is recorded while performing to get an idea of how everything will sound together, but nine times out of ten it will get re-recorded with more attention to detail and performance. Vocalists in bands may want to do this when playing along with each other, either to get an idea of how the vocals will sit in the track, or to help the players identify the different parts of a song. Which segues to…



Hook, Verse and Bridge:

On a fundamental level, it’s important to know the difference between these three sections of a song. The hook, also known as the chorus, is the part of the song that repeats a few times throughout and is usually shorter than the verse (think “Billie Jean is not my lover…”). The verses are the longer parts in between the hooks that change throughout the song. There are usually 2-3 verses in a song (sometimes with different artists on each verse), and they are also where most of the lyrical content lies. A bridge doesn’t occur in all songs, but in most genres, especially pop music, it is the part (typically towards the end, after a verse) that changes before the song goes back into the chorus. A bridge serves to break up the flow of the song and build suspense before the hook comes back in.



Harmonize:

To harmonize means to sing or play another line on top of a previously recorded phrase, using different notes that complement the original (or the same notes in a different octave).


From the Top, Front or Back:

This is pretty self-explanatory, but just in case: “from the top” or “from the front” means from the start of the song, and “from the back” means from (or towards) the end of the song.



Hot:

In the studio, if the engineer says the sound is too hot, he isn’t complimenting your track; he is usually referring to something that is about to overload, or already is. This happens when artists are louder than the microphone can handle (or an instrument is turned up too loud for the recording input), and it is usually followed by “clipping,” or unwanted distortion.



Fly:

Sometimes you’ll hear the engineer say, “Let me fly the hook.” Fly simply means “duplicate.” Since the hook is usually recorded only once, it is necessary to “fly” it to the next part of the song where the hook is supposed to come back in.



Flat and Sharp:

If the engineer says you’re flat, it means you are slightly below the correct pitch of the note. If the engineer says you’re sharp, it means you are slightly above the pitch. Either way, it generally means to be conscious of your pitch.
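For the curious, small pitch deviations like this are usually measured in cents, where 100 cents equals one semitone. A quick sketch (the frequencies are made up for the example):

```python
import math

# Cents measure pitch deviation: 100 cents = one semitone.
# Negative values mean flat (below pitch), positive mean sharp (above).
def cents_off(freq_sung, freq_target):
    return 1200.0 * math.log2(freq_sung / freq_target)

# Singing 435 Hz against a target A of 440 Hz comes out slightly flat:
deviation = cents_off(435.0, 440.0)  # roughly -20 cents, i.e. flat
```

A trained engineer’s ear can often hear deviations of just a few cents, which is why “you’re a little flat” comes up so often in vocal sessions.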


If you take your art seriously, it’s beneficial to understand the language of the people who are an integral part of the process. You can be a brilliant filmmaker, for example, but if you don’t know the terminology your actors understand, you will never get your point across effectively (or it will take you much longer to do so). Similarly, if you understand your studio lingo, it will help your engineer understand you and vice versa (not to mention save you a bit of time and money in a session). And in a world where time is money, every minute counts!


Dan Zorn , Engineer


Studio 11 Chicago

209 West Lake Street




10 Useful Tips for the Modern Audio Engineer

While good-sounding audio has existed for a while, the process that goes into making it good has changed greatly. So here is a list of some helpful tips for the new 2015 audio engineer.

1) Know Your Plug-Ins

Many studios and engineers have a lot of options for how to process sound. Because of recent increases in processing power, flexibility, and affordability, people in the modern era now predominantly mix their tracks with plug-ins. With so much at hand (especially with massive bundles like the Waves suites), it may be tempting to throw on a plug-in you haven’t used before just to try new things. But if you haven’t used it before, it’s ill-advised to use it in a song (especially on a client’s time) until you know the ins and outs of how it operates and sounds. If you want to use something new, take time off the clock to play around with it and learn how it sounds. Because each plug-in has a unique character and its own settings, you will need to spend time getting it to sound good before plopping it on a channel and turning knobs. For example, if you’ve never used the Waves C4 before but heard from a friend that it’s awesome on the mix bus, you will first need to learn how that particular plug-in works. You may know what multi-band compression is and have used other products, but chances are it’s not going to sound the same, and it will require some time to get to know. So on the clock, go with what you’ve used before, and in your spare time expand your arsenal of tools by getting to know them well.

2) Don’t Be Deceived by the Look of Plug-Ins (Use Your Ears!)

I read a post a while ago from a guy who was interested in turning off the GUIs of his plug-ins because they were “deceiving” him. And he’s right. It’s very easy to be persuaded by the look of a plug-in rather than the sound. A perfect example is the Waves Puigtec EQ or the Waves Kramer tape simulation.

Waves Puigtec (top), Waves Kramer MPX Tape Simulator (bottom)


There’s no doubt that they both look cool and emulate very reputable studio gear, but are you choosing to use them for the specific sound, or because you subconsciously like the “vintage” look? I can recall a particular scenario a little while back where I pulled up the impressive animated Kramer Tape plug-in to get a tape-saturated sound on my bass, then went back to it later to find that the basic-looking built-in Saturator plug-in in Ableton had a sound that was more what the bass needed. Don’t assume a plug-in will automatically work for the track just because it looks cool.

3) Learn Some Music Theory/Rhythm Basics

If you are an engineer who is recording music at a studio, it’s probably a good idea to learn about the music you’re recording, right? Seems obvious, but there are countless engineers who don’t know when something sounds off-key or out of time, because they are only focused on the way it sounds and not the way it should be played. You should be listening for both at all times. Many engineers have the stance that it’s up to the band or the producer to know what is supposed to be played musically, but when it’s a technical problem, like an off-key note or a weirdly shuffled drum hit, it’s up to the engineer to ask the artist to redo a take or fix the problem. After all, at the end of the day, the goal is to create a great-sounding song, and it will only go so far if it sounds good but isn’t played right. Your clients will be happy you care. Trust me.

4) Monitor at Low Volumes for More Accurate Mixing 

The biggest problem with listening to mixes at loud volumes is that, for the most part, everything sounds balanced and, as a result, automatically “good.” At loud decibels, sound becomes compressed: quieter parts are heard more easily, and louder parts seem to sit on the same plane as the quiet ones. Also, at loud volumes, especially in non-absorbent spaces, there can be a lot of reflections that bounce back and impair the accuracy of your perception. Bass builds up, causing strange room modes and phase issues that can quickly skew the mix. But if you mix at low levels, you eliminate the reflections and the deceiving flattened EQ curve.

5) Spend Some Time Getting to Know the Music you Record

Spending some time researching the type of music you are recording in advance can help greatly in the session. For example, if you are recording Trap or Drill rap, listen to the big Trap and Drill singles that are out, and take notice of what they are doing. If the artist is still under the radar, chances are they are in some way trying to emulate the sound of the ones who made it big, so you should do your part from an engineering standpoint and know what sound they are going for. When dealing with rap music, for example, things like the placement of drops (cutting the beat out), pitch effects, when to use Auto-Tune, stutter vocal effects, and telephone filtering are all things the artist will want, but won’t necessarily know how to explain (or won’t be thinking of). If you beat them to the punch, or surprise them with a cool-sounding effect, they will show you a lot of respect for really trying to make their song a hit. They will no longer view you as someone who is working for an hourly rate, but as someone who is dedicated to their song.

6) Roll Off Unwanted Highs

You may know that rolling off low frequencies on most tracks that aren’t bass or kick can improve intelligibility in the mix, but what some people don’t think about is that the same thing applies to high frequencies. Some sounds don’t need high-end content, especially when it’s fighting with other sounds in the same frequency ranges (vocals, guitars). Putting low-pass filters on certain instruments can make room for other things to come through. In an interview, legendary engineer Chris Lord-Alge said that for a recent song he put a low-pass filter on his drum bus so his vocals would come through more. Granted, this is a bit out of the ordinary, but the point is he’s making space in the high end, instead of just the low end.
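The idea can be sketched with a one-pole low-pass filter, which smooths out fast (high-frequency) changes while letting slower ones through. This is a toy illustration of the principle, not any specific plug-in:

```python
# One-pole low-pass filter sketch: `alpha` near 0 cuts highs aggressively,
# `alpha` near 1 leaves the signal mostly untouched.
def lowpass(signal, alpha):
    out = []
    prev = 0.0
    for s in signal:
        prev = prev + alpha * (s - prev)  # smooth toward the new sample
        out.append(prev)
    return out

# Fast alternation (a high frequency) gets smoothed way down in level,
# carving out high-end space for whatever needs to sit on top.
smoothed = lowpass([1.0, -1.0, 1.0, -1.0], alpha=0.2)
```

Real EQ plug-ins use steeper, better-behaved filters, but the effect is the same: attenuate content above the cutoff so other sounds can occupy that range.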

7) Be Careful About Overcompressing

By the time a song is done, it’s going to have been compressed several times. For example, the vocal may have a compressor on the individual channel and on the vocal bus, possibly another compressor on the master, and then be compressed and limited again during mastering. Since there are a lot of stages where the dynamics are getting squashed, make sure you don’t overdo any one of them, because the end result is cumulative, and it gets very obvious.
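To see why the stages pile up: above both thresholds, two serial compressors behave like one compressor whose ratio is the product of the two. A toy gain computer (the threshold and ratio values are illustrative) makes this concrete:

```python
# Simple compressor gain computer: levels above the threshold are
# divided by the ratio, levels below pass through unchanged.
def compress_db(level_db, threshold_db, ratio):
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A gentle 2:1 into a gentle 3:1, both with a 0 dB threshold:
stage1 = compress_db(12.0, 0.0, 2.0)   # 12 dB over -> 6 dB over
stage2 = compress_db(stage1, 0.0, 3.0) # 6 dB over -> 2 dB over
# Net: 12 dB in, 2 dB out, i.e. an effective 6:1 squash from two
# "moderate" stages. Add a mastering limiter and it only gets heavier.
```

Two settings that each sound fine in isolation can combine into an obviously squashed result, which is exactly why each stage should stay conservative.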

8) Check Translation on Multiple Systems

This may be something you’ve heard before (or something you at least should have figured out by now), but checking your mixes on different playback systems is a good way to judge how your song will translate, and whether you made the right decisions. If I have the opportunity, I usually check my mixes on both my laptop and my headphones. The laptop is a good reference because that’s where a lot of your audience is going to be hearing your songs. It’s hard to gauge sub bass on a laptop, but you should at least be able to hear the upper harmonics of the bass and kick. And if you can’t hear any bass, there’s a good chance it’s sitting too low (spectrum-wise) in the mix, or it’s too quiet. Personally, I also listen on my Sony MDR-7506 headphones, because I listen to a lot of great-sounding music on them and know how they are supposed to sound. With my headphones I know right away if there is too much low end on the song, so they’re a good tool to utilize.

9) Watch Interviews to Learn from Your Peers and Elders

It’s now 2015, and information is right at your fingertips. The internet is chock-full of useful information and should be utilized as often as possible! Watch tutorials on how to use a plug-in, watch “in the studio” videos, and perhaps most importantly, watch the web series Pensado’s Place on YouTube! Dave Pensado, an award-winning engineer, sits down one-on-one with some of the greatest and most famed audio engineers of our time and picks apart their approach and engineering process for mixing and tracking. Another reason it’s good to watch how other people work is that it shows you other ways of doing things. You may have always been stuck on a certain way of doing something until you see that there is another way that also works well (if not better). This is good to know, because when something goes wrong, like gear or a plug-in suddenly not working, you know another way to get it done.

10) Hold a High Standard for Yourself and Your Mixes

My mixing technique has definitely changed over the years and, like yours, will continue to change. Although I have to say, the one thing that has changed the most about my engineering is my standard of what a “great” mix is. For example, instead of setting things and leaving them, I utilize automation much more to make every section work perfectly (instead of lazily finding a middle ground). I take the time to go through all the components of the track and make sure they sound good alone, grouped, and as a whole in the context of the mix. So if you have a high standard for what a great mix is, and you have the technical knowledge, there’s no reason why your tracks can’t sound amazing. Unfortunately, recognizing great-sounding audio versus just good-sounding audio is not something that comes easily; it takes a lot of patience and perseverance. Listening to well-mixed songs on high-fidelity formats like vinyl or CD (versus a poorly encoded MP3) will give you a good example of what great recordings sound like. After all, you can’t make things sound great unless you know what “great” is.


Dan Zorn, Engineer

Studio 11 Chicago






Where is Chicago Rap at Now?

It’s no surprise that we at Studio 11 are veterans of the Chicago Hip Hop and Rap scene. Having recorded a few of the famed originators ourselves, and many of the newer up-and-coming rap acts, we consider ourselves lucky to have witnessed the evolution of Chicago Hip Hop and Rap music first hand. The genre has undergone so many changes in the Windy City over the years, but where is it at now?

Those who follow the lineage of the Hip Hop and Rap scene in Chicago are familiar with its origins. Going back to the early 90’s, it wasn’t unusual to hear an East Coast jazz or soul sample alongside West Coast synth lines and fast double-time rapping (Crucial Conflict, Do or Die, Twista). These tracks usually had conscious lyrics that often talked about discrimination, struggle, and corruption in the political and public systems. Taking the sounds from the coasts, these artists resurfaced them into a new style and were amongst the first to be identified with a “Chicago sound.” Then, during the late 90’s, a new style began to emerge as artists like Kanye West, No I.D., and Common began to have an even further effect on the rap scene in Chicago. Kanye West, who rapped about alternative subjects that had nothing to do with being “hard” on the streets, was a huge hit amongst audiences used to hearing conscious political rap from the Midwest, or gangster rap from the coasts (NWA, Snoop, Ice-T).

Kanye also paved the road for other famed rappers such as Kid Cudi, Drake, and Chicago’s own Lupe Fiasco, Twista and The Cool Kids. While this style rode out for a fair portion of the 2000’s, a darker side of rap was forming amongst the youth on the South Side of Chicago. Influenced by its southern Trap roots, a new form of rap emerged called “Drill music.” Comprised of heavy 808 percussion and fast hi hats alongside grim, violent lyrics, its roots can be narrowed down to one young Chicagoan: Chief Keef. Chief Keef is largely responsible for introducing Drill into the Chicago rap scene, and after his claim to fame paid off, other South Side artists began to follow. Lil Durk, Lil Herb, King Louie, Montana of 300, Lil Reese and Lil Bibby all exploded in 2013-2014, and continue to gain fame.

So now that Chicago is most recently associated with Drill music, let’s take a further look into what exactly makes this subgenre so popular in Chicago and around the nation. Drill, being a slang term for automatic weapons, is very much tied into gang violence on the South Side of Chicago. When listening to Drill music, it starts to become apparent that it is used less for political expression (compared to the conscious rap styles of the past) and more as a tool to state one’s day-to-day living. Whereas some rap predecessors may have discussed topics of discrimination and corrupt politics using clever euphemisms and metaphors, Drillers tend to focus more on direct delivery towards an opposing rival, or on their role in the streets. To quote Chief Keef: “I know what I’m doing. I mastered it. And I don’t even really use metaphors or punchlines. ‘Cause I don’t have to. But I could. … I think that’s doing too much. I’d rather just say what’s going on right now. … I don’t really like metaphors or punchlines like that.” Metaphors and clever rhyming are irrelevant to the goal of Drill music, which, when you think about it, makes a great deal of sense. If the goal is to ward off or taunt your enemy, wouldn’t you want to make that point as clear as possible? Any trickery or allegory gets in the way and skews the message.

What is also interesting about the Drill scene is the age at which these artists are getting recognition: Chief Keef at 16, Lil Durk at 19, and in an extreme case Lil Mouse (picked up by Lil Wayne), who was a mere 13 years old when he started. One may argue that this plays a key role in the success of Drill music, as many Americans idolize the teen pop star (Justin Bieber, Miley Cyrus, Selena Gomez). Young age, controversial lyrics, hard-hitting beats: it doesn’t take a rocket scientist to figure out why this formula continues to bring success.

It is also safe to say that unlike earlier forms of rap in Chicago, where we gathered sounds from the East and West Coasts and made them our own, Drill music is the first style we can uniquely call our own. A product of its environment, hailing from the South Side, Drill music continues to dominate the rap scene locally and worldwide.

Dan Zorn, Engineer



209 West Lake Street


Tuning Electronic Drums to Fit Your Track

For this tutorial, I’m going to demonstrate the importance of tuning your drum samples to fit the track you’re working on. I’m going to be using Ableton Live, but the theory applies to any DAW.

So you’ve found some good-sounding drums, and a good melody that fits the song. Everything seems to be groovin’ and working together, but perhaps there is something you may have missed that can make it sound even better…

Tune your drums!

“But I don’t have a drum key to tune my digital samples.” Don’t worry, that’s not what I mean.  Here is a scenario to demonstrate how tuning your drums can enhance your track:

Let’s first start with the kick drum. Say you have a melody that sounds great, and a kick drum that you love. They sound okay together, but they don’t have that “wow” factor that you hear on the dance floor. Instead of spending a bunch of time going through different kick samples (which can be a very long procedure), try experimenting by tuning your drums. In Ableton, it’s a very simple procedure, and I imagine it’s similar in other DAWs: all you have to do is click on the sample and utilize the “transpose” option. By transposing a kick drum’s fundamental pitch up or down, you will find that more than half the time the kick will begin to groove with the key of the song better. Sometimes you will find that the original pitch of the sample is in the key of the song, but many times, by transposing it up 1-3 notes (or more likely down 1-3 notes), you will find a pitch that works better with the key of the song.

Here is a secret I’ve found if you are having trouble identifying the main note of the kick drum. Because humans hear mid- and high-range frequencies with more accuracy, take your kick drum and transpose it up a whole octave (12 semitones). Now, go up or down a few notes until you find the one that fits the key of the song better, then bring it back down 12 from that. I don’t know if this is something everyone who tunes drums does, but it’s a little trick that has helped me tremendously over the years.
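The arithmetic behind the trick is simple: transposing by n semitones scales the pitch by a factor of 2^(n/12), so going up 12 exactly doubles it, and subtracting 12 from whatever transposition you settle on gives the same note one octave lower. A quick sketch:

```python
# Transposing by n semitones scales playback rate (and pitch) by 2**(n/12).
def transpose_ratio(semitones):
    return 2.0 ** (semitones / 12.0)

# The octave trick: audition the kick an octave up (+12), where the ear
# judges pitch more accurately, then bring the chosen transposition back
# down by 12 -- same note, original octave.
up_an_octave = transpose_ratio(12)  # exactly 2.0: double the pitch
heard_up = 12 + 3                   # say +3 semitones sounds right up there
final = heard_up - 12               # so apply +3 at the original octave
```

This is the same math a DAW’s transpose control performs under the hood, which is why moving by 12 always lands on the same pitch class.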

Now that you realize you can tune your kick drum, why stop there?  Try it with the snare/clap. You may find a spot where the snare drum “resolves” better. Maybe the hi hats sound better tuned lower than higher (where they seem harsh.)

Another reason transposing is good is that you can effectively change the EQ curve of the drums without extreme EQing. Remember: if the sound isn’t good to begin with, no amount of EQing can make it work. Tuning a kick drum down (if it works with the key of the song) can make room for the bass on top, or vice versa. Maybe tuning your hats up leaves more space for the synth pads to sit behind them. Be creative with it, but know that you can use it as a mixing tool too. At this point you may ask, why not just keep cycling through samples until you find one that works with the song? Well, many times I’ll find that I like the character of a sample but it doesn’t work with the song, or I don’t like the character of the sound but it works great from a mix perspective in the context of everything else. The point is to give you extra flexibility with samples, so you don’t have to spend hours cycling through them to find the one that’s perfectly in tune with the track. (Having said that, I do recommend you spend at least some time cycling through sounds to find the highest-quality sample that sounds closest to the end result you want.) Having to pitch and shape the sound is added work, and if you get lucky and don’t have to do any of this, of course that’s preferable. But chances are you will have to pitch and shape your drums at some point, so it’s good to get into the habit of experimenting with it.

Something I’d also like to point out is that it’s okay to leave some dissonance in your drum tuning. Everything doesn’t have to be in tune. Sometimes it creates a good tension, and a percussion element that isn’t entirely in tune with the rest can stand out in the mix more (think detuning to make sounds stand out). So experiment and see what sounds better.

Another aspect of tuning your drums is editing the envelope. Similar to how a drummer would add padding and mute the drums so they don’t sustain as long, you can do this digitally in your DAW. Once again using Ableton, you can take away some of the natural sustain of a sample by adjusting the sustain in the ADSR envelope editor. Very often with snares (especially real snares), the initial pitch (the attack) is different from the pitch of the sustain. Maybe you find that the pitch of the sustain bends up or down and doesn’t sound in key with the track. In that case, try shortening the decay/sustain of the sample and see if it yields better results.
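Under the hood, shortening a sample’s decay amounts to multiplying its tail by a falling envelope. Here is a rough sketch of that idea (the exponential shape and the numbers are illustrative, not Ableton’s actual parameters):

```python
import math

# Shortening a sample's decay: multiply the tail by an exponential
# envelope. A smaller `decay_time` mutes the sustain faster, like
# putting padding on a real snare.
def shorten_decay(samples, sample_rate, decay_time):
    out = []
    for i, s in enumerate(samples):
        t = i / sample_rate           # time of this sample in seconds
        out.append(s * math.exp(-t / decay_time))
    return out

# A constant-level "sustain" dies away once the envelope is applied,
# so any off-key pitch bend in the tail gets muted along with it.
tail = shorten_decay([1.0] * 8, sample_rate=4, decay_time=0.5)
```

Because the out-of-key pitch drift usually lives in the sustain, killing the tail this way often fixes the tuning problem without touching the attack.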

The bottom line is that you shouldn’t just find good samples and leave it at that, without making them fit the rest of the song. Chances are they can sound much better with the track. By simply adjusting the pitch and envelope of your samples, you can really make or break the drums on a song. So play around with it, experiment, and have fun!


Dan Zorn, Engineer


Studio 11

209 West Lake Street, Chicago IL

How to Make Your Electronic Drums Have More Feeling

Whether you’re making house, techno, hip hop, or any other form of electronic music, chances are you are going to be using electronic drum samples that you’ve sampled yourself or downloaded from the internet. The way you sequence these samples in your DAW may vary depending on your preference and what software and tools you have. You can use built-in step sequencers, play out your samples on a MIDI keyboard, pencil in notes one by one, or move around audio clips. However, no matter what your method is, a lot of the time electronic drums can sound very stale or robotic if the necessary steps aren’t taken. This is because computers and sequencers repeat the exact same digital sample, and because we humans are very capable of picking up minute differences (or a lack thereof, in this case), the drums quickly become plain and boring. So I’m going to give you a few tips on how to create those differences and make your drums sound more human, less robotic, and full of feeling.


I mentioned that there are a lot of ways to enter your samples into your DAW. I myself have tried all methods of sequencing drums, and have recently found a good formula that works for me, and hopefully for you too. Instead of sticking to just one form of entering samples, using a combination of MIDI drums, penciled-in notes, and placed audio samples is the best way to get what you want. Every method has its pros and cons, so it’s good to learn the ins and outs of all of them, so you can use each when necessary. So without further ado, I’m going to walk you through a hypothetical situation of building up some drums and point out tips on how to make them sound less robotic.


1)   Kick Drum. Start by finding a good-sounding kick drum. When trying to humanize a kick drum, there’s actually not much you can or should do. A kick drum is mostly sub, low and low-mid frequencies, and because we “feel” more than we “hear” those frequencies, any subtle variation of pitch, velocity or duration will be very tough to hear and will almost always go unnoticed. So this is when I just pull up a kick drum audio sample and drag it into the edit window. Once you find a good kick sample, drop it in for a few measures, then loop it.

2)   Snare Drum/Clap. This is an element that can go both ways. You can choose to have it robotic (sounding more like a drum machine), or change it up and add some feeling to it. It all depends on the groove of the song. Unlike with the kick drum, we can detect small volume, duration and pitch differences in this frequency range (mids, high mids), so whatever changes you make will be heard more easily. If you do decide to humanize it, here are some things you can do. Instead of penciling it in or dragging the audio clip, try playing it on a MIDI keyboard (or even your laptop keys) without quantizing it. The small timing inconsistencies will help it sound more like a real person is playing, because let’s face it: even if you are a professional drummer, achieving perfect timing is very difficult. As a result, some notes will be close to dead on, some will be ahead, and some will lag behind. This is called adding “shuffle” to the track. On top of that, if your keyboard is velocity sensitive (and the velocity on your drum rack is engaged), you will create subtle changes in volume as the song goes on. (You can also change the velocity after the performance to really get the perfect groove.) Also try experimenting with changing the pitch a few cents or semitones every so often to change it up even further. You can also experiment with changing the decay of each hit. Maybe on every other note you shorten the decay a little bit, and make the last one of the measure extra long? The possibilities are endless!

3)   Hi Hats. Similar to the snare drum, playing out hats on the keyboard will result in a more live sound. Although if you’re playing faster notes, like 1/16th notes, you may want to quantize, but not 100%. It’s good to leave some notes on the beat and some off. Also, with the quantize function, you can go through a variety of quantizing patterns. Certain DAWs have a pull-down menu of different grooves: swing, dotted note, shuffle and MPC style can all add much human love to those hi hats! And aside from timing changes, playing out the notes on a keyboard is going to add subtle velocity changes. You can continue to shape the sound of the hi hats by occasionally altering the decay too. Then, if the track calls for it, try adding an effect that helps it move even more, like a subtle slow phaser. If mixed in correctly, over time the phaser will add a very pleasing change to the otherwise robotic sound.

4)   Percussion/Cymbals. Now that you have a basic groove, you can add the percussion and cymbals as you’d like. Again, you can experiment with velocity, pitch and timing changes, but as you’re adding more elements, be wary of how they clash with the sounds that are already there. If you drag in a conga, for example, and have it playing in the same place as the hi hats or snares, make sure they are shuffled the same. In other words, putting a conga on every four beats when a hi hat also comes in every four beats, but is shuffled slightly off time, will result in a strange phased sound and will clash greatly. The fix is to move them so they are aligned on the same plane.
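The moves described in the steps above, small timing offsets (shuffle) plus velocity changes, can be sketched in a few lines. The jitter amounts and the random seed here are made up for the example:

```python
import random

# Humanizing sketch: nudge each hit's timing (in seconds) and MIDI-style
# velocity (1-127) by a small random amount, the way an unquantized
# keyboard performance naturally would.
def humanize(hits, timing_jitter=0.01, velocity_jitter=8, seed=11):
    rng = random.Random(seed)  # seeded only so the example is repeatable
    out = []
    for time, velocity in hits:
        t = time + rng.uniform(-timing_jitter, timing_jitter)
        v = velocity + rng.randint(-velocity_jitter, velocity_jitter)
        out.append((t, max(1, min(127, v))))  # keep velocity in range
    return out

# Four rigidly quantized hats at a fixed velocity, loosened up:
loose = humanize([(0.0, 100), (0.5, 100), (1.0, 100), (1.5, 100)])
```

The same sketch applies to point 4: if a conga should lock with a shuffled hat, run both through the same offsets rather than jittering them independently.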

Last step: Automation

Now that you have your drum groove and it’s sounding live and interesting, you want to continue by processing it in a similar fashion. Think of how an actual drummer would play a drum kit during a song, and try to bring that knowledge into your drums. The goal isn’t to make your drum samples sound like a real drum kit, but to make it sound like a human is playing the samples. So begin automating parts. Maybe during the chorus the hi hats’ velocity gets louder and more epic, then quieter again in the verse. Try to mold the drums to the dynamics of the song. Maybe automate the wet/dry of the reverb so every 8 bars or so the clap has a big splash sound. Creating movement and variation are the two primary goals of making drums sound less robotic, so be creative and have fun with it!


Dan Zorn, Engineer

Studio 11 Chicago

209 West Lake Street


Preparation for the Studio …Tips to Get the Most Out of Your Time

Let’s face it, you’re not rich. You work hard for your money, and when it comes to studio time you want to get the most out of the few hours you booked. In the studio, I’ve seen a lot of artists who work super efficiently, but I’ve also seen a lot of artists who use their studio time poorly. I’m going to give you a list of tips on what not to do, and some things that can help you get the most out of your time.


1) Rehearse, rehearse, rehearse! This applies to all artists, bands and rappers alike. The tighter you have your grooves down, the less time we have to take doing overdubs. For rappers, unless part of your aesthetic is freestyling all your verses (Lil Wayne, Common), then I highly suggest writing your lyrics down, and practicing them so you don’t have to do too many do-overs. Sometimes artists will just write their verses down in their iPhone, then not practice reading them. You must actually practice them! Otherwise (and I’ve seen it) when you get in the booth, you have to do a lot of rewriting because you didn’t read it out loud to count the syllables correctly.

2) Come early if you can. This way you have time to feel the vibe of the studio, and do any vocal exercises you may want to do before getting in the booth. It also gives the engineer time to start up the session, which takes a few minutes anyway. And if you can’t make the session, always give the engineer a warning ahead of time. It reflects poorly on you if you don’t show up without telling anyone, and it aggravates the engineer for wasting his time coming down to the studio.

3) Get to know the studio lingo. Knowing terms like “overdub, from the top, fly, in/out, ad lib, punch, stack” can make communication with your engineer much easier. It also helps to know a little about music theory. Just basic terms like bars, phrases and measures can go a long way. So instead of saying “can you uh do that thing with that part”, you can say “can you punch me towards the end of the measure”. Any time spent trying to communicate with your engineer is potential time you could be recording!

4) Put your phone on silent. If you do this you won’t have to waste time redoing a take that your phone went off during, and you won’t be tempted to waste studio time by talking to whoever calls you.

5) Bring your own beat on a flash drive or CD. Finding a song on YouTube, then putting it through a YouTube-to-mp3 converter doesn’t take long for the engineer, but there have been times where the beat wasn’t on YouTube, or it took a lot longer to find the exact version you were thinking of. Sometimes the internet may be down (some studios don’t have internet on their studio computer). It’s best to come prepared so you don’t have to rely on these factors.

6) Get high beforehand. If you are going to get high before your session, do it in the car! Seriously, it takes 10 minutes to roll the blunt, and another 10 minutes to smoke it. If you have 2 hours booked, that’s already over 16% of your time wasted.

7) Don’t come with your whole posse, unless they are all in the group. If you are coming to the studio, come with 1-2 people. Any more and chances are they will distract you and waste your time. I’ve seen it happen too many times to count.


Follow these steps and you will surely get the most out of your studio time!


Dan Zorn, Engineer 

Studio 11 Chicago, IL
209 West Lake Street


Getting Bass to Translate


Getting bass to translate is one of the toughest things to accomplish as an engineer or producer. After many years of working with various genres (and making countless mistakes), I have finally compiled a list of tips that will help you get your bass to sit right in the mix and be heard on any playback system (including those wretched MacBook speakers). But before we delve deeper, we first must understand a bit about the playback systems themselves, and how our human hearing affects the way we perceive “bass”.

On a fundamental level, humans are able to hear sound because our ears (through a complex series of processes) pick up air molecule displacements (vibrations) and, acting as a transducer, convert them into electrical impulses that our brain translates into “sound”. On paper, humans are capable of hearing frequencies from 20 Hz to 20 kHz. However, that’s “perfect” hearing. Most of us do not have perfect hearing, and on top of that we begin to lose sensitivity to certain frequency ranges as we get older. We pick up vibrations through hair cells in the inner ear, and as we age, some of these cells begin to deteriorate. The first hair cells to go are typically the ones responsible for detecting high frequency content. So depending on your age when you’re reading this, you can have a very different hearing response than someone much younger or older than you. So the reason your old man can’t hear you isn’t necessarily that he is losing all of his hearing, but most likely because he’s losing or has lost some of those higher frequency hair cells (typically where the articulation of the human voice sits).

Because of the way we humans have evolved, we are most sensitive to mid frequencies around the human voice (2-5 kHz), and will hear these over other frequencies at the same SPL. This concept is described by the Fletcher-Munson curves, and understanding it can help you greatly when mixing, and specifically when dealing with bass.

Fletcher-Munson Curve



The Fletcher-Munson curves refer to how our frequency response changes with volume. As shown in the graph above, when 1 kHz is at 60 dB, it takes about 80 dB to hear 50 Hz at the same perceived “volume”. A 20 dB difference. As the overall level increases, that relationship flattens out: at 110 dB, 1 kHz will sound the same as 50 Hz played only 10 dB louder. So what does this mean for you? If you listen to your mix at a louder volume, things are going to sound equal and balanced in volume. Bass, mids and highs will seem in their place, but it’s a trick! Once you turn the level down, all of a sudden the bass (and some highs) get lost in the mix. You may not have chosen to turn the bass up when monitoring loudly because it sounded present, but played at a quiet or reasonable level, it gets lost. On top of the frequencies flattening out, if the overall volume is loud when mixing, the song has a greater impact. This “greater” impact fools your ears into being satisfied. But they aren’t satisfied because things are clear in the mix; they are satisfied because the music is cranked and your body can “feel” the bass. You aren’t going to think anything is really wrong with the mix if it’s loud… so the solution? Monitor at low levels, and you won’t trick your mind into thinking things are balanced when they aren’t, especially when dealing with bass.
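To put rough numbers on the idea, here’s a tiny Python sketch that linearly interpolates between the two data points quoted above. This is only an illustration of the trend, not real equal-loudness contour data:

```python
def bass_offset_needed(level_1khz_db):
    """Approximate extra dB needed for 50 Hz to sound as loud as 1 kHz,
    linearly interpolated between the two quoted points:
    +20 dB at 60 dB SPL, +10 dB at 110 dB SPL. Illustrative only."""
    return 20 + (level_1khz_db - 60) * (10 - 20) / (110 - 60)

print(bass_offset_needed(60))    # → 20.0
print(bass_offset_needed(110))   # → 10.0
print(bass_offset_needed(85))    # → 15.0  (the gap shrinks as you turn up)
```

The shrinking offset is exactly why a loud monitoring level flatters the low end: the louder you listen, the less extra level the bass needs to feel balanced.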

Another reason to monitor at low levels is that it won’t interfere with the acoustics of your room as much. If something is cranked, and you’re in your not-so-perfect-sounding bedroom, then that will reflect in your mix. Low frequencies will be boosted, standing waves will cause strange phase issues, and your mix will be all wrong. Listening at a low level will eliminate many problems relating to acoustics, and will give you a more direct, unaffected sound.

If that isn’t enough, yet another reason to monitor at low volumes is so your ears don’t fatigue. If you spend enough time working on a mix at high volumes, you will certainly begin to lose sensitivity to certain frequency ranges. Mids will become washed out and hard to distinguish, highs will become less harsh, and your decision making will be less accurate. We’ve all had at least one song where we thought we nailed the mix, then after checking on it the next day, thought “Man, what the hell was I doing?”. Well, that’s ear fatigue, and it can greatly destroy the quality of your final mix (and especially your hearing). And after all, without your hearing, you’d be out of a job! If you want more information on what loud sounds can do to your hearing from the perspective of a former engineer turned hearing specialist, check out this website.

Monitoring at low volumes isn’t at all a new concept, and has been a “secret” of mix engineers for decades. This “secret” is based on the idea that if it sounds good quiet, it’ll sound good loud, but if it sounds good loud, it won’t necessarily sound good quiet. After all, a good mix sounds good at all volumes, on all playback systems. So next time you listen to a professionally mixed and mastered song on a laptop, or a cheap playback system, listen to how the bass is audible and clear. You will find that even on your cheap 15-dollar portable whatever, you can still hear the bass, crystal clear. Why is that, you ask? Keep reading and you’ll see why.

It’s 2014 and we have entered an era where people are no longer listening to your mixes on vinyl through a good home stereo system. Now your audience is listening to your music as mp3s through their MacBook speakers, cheap iPod earbuds, and iHomes. There is even more demand on getting your bass to translate because, with the exception of the earbuds, these are playback systems that generally struggle to reproduce fundamental bass frequencies. Here are two frequency response graphs that illustrate this “lack of low end”. The top graph is the frequency response for a MacBook laptop, and the bottom is for a Sony laptop.

MacBook Pro Speaker Response


Sony Laptop Freq Response

Looking at these graphs, one can see right away that there is a serious roll-off of low end around 200-300 Hz and below. As you may know, this is where most “bass” or low end sits in a mix. So why can you still hear the bass on professionally made albums through your laptop speakers? It’s because the artist and engineer learned to compensate for this issue. Similar to what I said before about having your mix translate when it is monitored quietly versus loudly: if your bass sounds good through crappy speakers, it will sound good through great speakers. This is why you will see many professional studios, and even some home studios, using “unflattering” speakers to run their mixes through. Speakers like the Yamaha NS-10s are a staple in the recording industry not because they sound good, but actually because they sound “bad”.

There are two major steps in getting a good bass sound that will translate. And it all starts with the artist. (And if you’re the engineer, don’t worry, you still have options.)


For the Artist:

Proper Bass Arrangement

Many good artists are aware of this problem, and consciously try to avoid it during the writing process. Something amateurs do often when making music, whether it’s hip hop, house, rock or what have you, is to choose bass that is bone-rattlingly low because it sounds “epic”. While this may sound “epic” in your Beats by Dre headphones, it’s not going to sound good anywhere else. Trust me. A good way to avoid having a lost bass is to play either an octave higher than you normally would, or if that’s too high, to play a different arrangement somewhere in between. As we discussed earlier, our ears are less sensitive to frequencies that are lower, and tend to gravitate towards ones that are higher in the spectrum. So bass with more upper mid/high end content is going to cut through easier. But that’s not bass anymore, you say? Well, actually, even a bass line played in an upper register will still contain a lot of low end content and will also point to the fundamental bass frequency (a nice little trick you can thank your ears for).

Sub Bass

As a general rule, for most genres I’d say stay away from writing in a sub bass line if you’re worried about bass translation. This is not to say sub bass can’t be used, but it all depends on where you are using it. If you have a bass with a lot of low end content already, adding a sub is only going to make things worse. If you have a bass that hardly contains any low mid “meat”, you can put a sub on there. And if you do, filter out the highs and low mids, so that when the sub is combined with the actual bass, it won’t sound muddy. Sub can easily destroy a mix if it isn’t sitting in the right place, so take the time to make sure it sits right.

For the Engineer:

If you’re an engineer, and you get a bass line that is too low to be heard on any small playback system and can’t be changed, don’t panic; there are a few things you can do.

1) Apply MaxxBass.

The engineers over at Waves realized the issue of bass translation, and actually made a plug-in specifically designed to help bass translate called MaxxBass. Without getting too technical, MaxxBass basically duplicates the signal, adds new upper harmonics to it, then mixes it back in with your original bass. It’s essentially adding more audible frequencies to your bass to make it more audible for the listener.
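Here’s a toy illustration of the general idea (this is not Waves’ actual algorithm): full-wave rectifying a low tone generates energy at even multiples of the fundamental (2f, 4f, …), which you then blend back under the original signal.

```python
import math

def add_upper_harmonics(samples, mix=0.3):
    """Blend a full-wave-rectified copy under the signal; the rectified
    copy contains even harmonics above the fundamental. (A real design
    would also filter out the DC offset this introduces.)"""
    return [s + mix * abs(s) for s in samples]

# one cycle of a 50 Hz sine sampled at 1 kHz (20 samples)
tone = [math.sin(2 * math.pi * 50 * n / 1000) for n in range(20)]
brighter = add_upper_harmonics(tone)
```

The new harmonics land in a range small speakers can actually reproduce, which is the whole point of the trick.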

2) Add Harmonic Distortion

Similar to what MaxxBass is doing, a great way to get your bass heard is to add frequencies to it that are more easily heard. By adding a bit crusher or sample rate reducer, you can add upper harmonics that will give your bass a lift in the mids and highs to make it stand out in the mix more, all without actually boosting its volume. Or try running your bass through a saturator, or a subtle distortion effect. Used in parallel, these can work wonders for a bass.
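As a rough sketch of parallel saturation (generic tanh soft clipping with hypothetical drive and mix settings, not any particular plug-in): drive a copy of the bass into the saturator, which adds odd harmonics, and blend it under the clean signal instead of replacing it.

```python
import math

def parallel_saturate(samples, drive=4.0, mix=0.25):
    """Mix a tanh-soft-clipped copy (which adds odd harmonics) under
    the clean bass, so the original tone stays intact."""
    return [(1 - mix) * s + mix * math.tanh(drive * s) for s in samples]

processed = parallel_saturate([0.0, 0.5, -0.5])
```

Keeping the distorted copy on its own fader (or a `mix` knob like this) is what makes it “parallel”: you can push the drive hard and still only sneak in as much grit as the mix needs.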

3) Run Your Bass through a Transformer.

When lower frequencies pass through a transformer, especially an old one, the audio signal gets more “DC”, or slower moving. Transformers don’t pass DC current, so as the signal passes through, various things begin to happen. Saturation, new harmonics, and interesting phase changes occur and are added to the signal. The end result will be a little more edge in the midrange, and the frequencies that were too low to be audible will have been shaped in a way that makes them sound colored and, surprisingly, louder! The added saturation and color of the transformer will shift your bass a little higher, and allow your ears to fill in the fundamental frequency. (Studio 11 offers individual track processing. So if you want to run your bass through one of our many units with transformers, it’ll only cost you about 10 bucks. Hint, hint ;)

4) Compression

Compressing bass can increase its overall subjective volume, and will help keep a more constant level throughout your track. Compression also naturally brings out the subtleties, such as the sound of the pick and guitar slaps, that are more audible to our ears. Increasing these will increase the overall presence of the bass. Compress away!
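In sketch form (a toy peak compressor with hypothetical settings, ignoring attack and release), compression is just reducing anything over the threshold by the ratio:

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Toy peak compressor: any sample magnitude above the threshold is
    pulled back toward it by the ratio, evening out the bass level."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

print(compress([0.2, 0.9, -1.0]))   # → [0.2, 0.6, -0.625]
```

Quiet samples pass untouched while the peaks come down, so you can then raise the whole track’s level, which is where the extra subjective volume comes from.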

5) Move Things Out of the Way 

Sometimes the kick drum may have too much low end content and will be masking the bass. By sidechaining the bass to the kick, you can greatly increase the intelligibility between the two instruments, and as a result your bass will sound more present. Also, if the kick is in the way, roll off some of its low end content. When combined with the bass, the low end of the bass will fill in what the kick is missing and your ears will assume it’s from the kick.
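The sidechain idea can be sketched crudely in Python (hypothetical sample values and duck amount; a real compressor would use smooth attack and release envelopes rather than a hard switch):

```python
def sidechain_duck(bass, kick, amount=0.6, threshold=0.5):
    """Wherever the kick signal is hot, duck the bass level so the
    two low-end sources don't fight for the same space."""
    return [b * (1 - amount) if abs(k) > threshold else b
            for b, k in zip(bass, kick)]

bass = [0.5, 0.5, 0.5, 0.5]
kick = [0.9, 0.1, 0.0, 0.8]   # kick hits on samples 0 and 3
print(sidechain_duck(bass, kick))   # → [0.2, 0.5, 0.5, 0.2]
```

The bass dips only while the kick speaks, then returns to full level, which is why the two stop masking each other.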

6) EQ

Try simply boosting upper frequencies, and taking out some mud to improve intelligibility. It doesn’t hurt to roll off some sub on the bass either. Rolling off frequencies we can’t hear well anyway (like 30 Hz) will only make the bass cleaner, and boosting upper frequencies, around 1 kHz for example, will bring out the presence of the bass more.
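That low roll-off, at its simplest, is a one-pole RC high-pass filter. Here’s a textbook sketch in Python (illustrative only; a real EQ plug-in will offer steeper slopes than this filter’s roughly 6 dB/octave):

```python
import math

def highpass(samples, cutoff_hz, sr=44100):
    """One-pole high-pass: attenuates content below the cutoff
    (e.g. 30 Hz rumble) while leaving the upper bass intact."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sr
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for s in samples:
        y = alpha * (prev_out + s - prev_in)
        out.append(y)
        prev_in, prev_out = s, y
    return out
```

Feed it a constant (DC) signal and the output decays toward zero, which is exactly the “roll off the stuff you can’t hear anyway” behavior described above.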

Try listening to your finished product on various playback systems to get an average of how the bass sounds. Check it on the laptop. If you can hear your bass clearly on a MacBook, for instance, in my opinion you’ve nailed it. Also, one of my favorite tests is to hear how it sounds on my iPod earbuds. Because I listen to a lot of music through these while I’m out and about, I have a good reference of how my bass stands next to other recordings. Or if you have a car, that’s always a good test too. The point is to try it all over, so that you can make adjustments as necessary. Follow these tips and you’ll be well on your way to crafting bass that translates!


Dan Zorn

For recording in Chicago hit up Dan at Studio 11!