
Sing for me!

What elements do we focus on while listening to music? Well... it really depends who you ask. An audio engineer would probably say "all of them", but the average listener would say something a bit different. And we all make music for people to enjoy, all people, not only audio nerds (sorry guys, just kidding...). So what does the average listener, the general audience, focus on the most while listening to music? The answer is simple: vocals and drums.

I'm not sure if you're aware of the "focus points" theory in audio engineering. Basically, it says that throughout the entire song there should be focus points that grab the listener's attention, elements that stand out at a particular moment of the song. Of course they change as the song develops, but mainly they are built by the vocals and drums (lead instruments take over when there are no vocals at that moment). That's why it's so important to focus on those two elements while mixing. If you analyse the majority of songs, you'll notice that the supporting elements like guitars, synths and pianos vary from song to song. I'll focus on guitars as I'm a guitarist, but this applies to other instruments as well. Take 10 songs and you'll have 10 differently sounding guitars (even in the same genre); if they work for the song and, more importantly, work nicely with the bass guitar, then the job is done. When listening to the song we hear that the guitars sound good, but our main attention stays on the vocals and drums.

In this blog post I would like to focus a bit on vocals (it's part 1 of, well... thousands of future posts about vocals). Us humans, we're so used to the natural sound of the human voice, as we hear it on a daily basis. When EQing vocals we should pay attention to the range between 1 kHz and 2.5 kHz, as this region is where the frequencies that shape our natural perception of the voice live. Messing them up usually leads to unnatural sounding results, and our ears will pick it up very quickly for the reason stated above.

Well... there are a few things you can do, but let's not talk about the magic plugin chain or the next cool blinky light to make the vocals sound better. Let's talk about a great EQ process to get your vocals to clear up and sound amazing while still sitting nicely in the mix. The first thing (after comping, tuning if necessary, gain staging the entire performance and de-essing) is removing any ringing frequencies. Because of the proximity effect and comb filtering, there are usually some ringing frequencies in the low mids (I'm not mentioning the HPF, as that depends on the vocals, the song and the way they were recorded). The next region where ringing frequencies appear is around 2.5 kHz and a little above. Vocals also often sound bad around 700 Hz, so that's the next place to look.

I always cut the very top on vocals, and I do that at the very start. There's nothing nice above 10 kHz (12 kHz tops); anything up there is just noise. Just be aware of the slope of the filter: an LPF at 10 kHz could be affecting frequencies down to 5 kHz or lower, depending on the slope (the "corner frequency" of a filter, i.e. the 10 kHz here, is the point at which it attenuates the signal by 3 dB). The point about those "air" EQs is that they have very broad shapes. The 40 kHz band on the Maag EQ, for example, affects frequencies down to 3-4 kHz; it just has a very gentle slope, so it's barely noticeable and there's a subtle lift in the audible top end. It's all about the shape of the EQ curve.
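To make the slope point concrete, here's a minimal sketch in Python with NumPy/SciPy (my choice of tools, the post doesn't mention any) comparing how much a gentle 1st-order and a steeper 4th-order low-pass filter with a 10 kHz corner actually attenuate at 5 kHz. The sample rate and the filter orders are illustrative assumptions, not recommendations.

```python
import numpy as np
from scipy import signal

fs = 48_000          # assumed sample rate
corner = 10_000      # LPF corner frequency from the text
probe = np.array([5_000.0, 10_000.0])  # frequencies to inspect, in Hz

for order in (1, 4):  # gentle (6 dB/oct) vs. steeper (24 dB/oct) slope
    sos = signal.butter(order, corner, btype="lowpass", fs=fs, output="sos")
    _, h = signal.sosfreqz(sos, worN=probe, fs=fs)
    gain_db = 20 * np.log10(np.abs(h))
    print(f"order {order}: {gain_db[0]:+.2f} dB at 5 kHz, "
          f"{gain_db[1]:+.2f} dB at 10 kHz (corner)")

# Typical output:
# order 1: -0.97 dB at 5 kHz, -3.01 dB at 10 kHz (corner)
# order 4: -0.02 dB at 5 kHz, -3.01 dB at 10 kHz (corner)
```

Both filters land at the textbook -3 dB at the corner, but the gentle slope is already pulling 5 kHz down by about 1 dB. That's the same mechanism that lets a very broad "air" band reach far below its nominal frequency.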
I will especially cut the high end on vocals that have been recorded with cheaper microphones, because the sound of those high frequencies is usually really nasty! So, LPF vs. air boost... I might use both to sculpt the tonal balance of the vocal to where it needs to be. Of course the numbers mean nothing, as every single voice is different, but those regions are problematic quite often and vary slightly from recording to recording. Also, please remember that vocals like dynamic EQ, which processes the track only when necessary.

Let's go back to tuning... Use it with caution, as every pitch correction brings artefacts, and remember not to tune sibilants (separate them from the core part of a particular phrase); tuned sibilants are the first sign of amateur tuning. Remember that Auto-Tune was invented in 1997. Before that there were countless epic songs where singers could actually sing without depending on technology, still creating timeless anthems on nothing but their talent and voice. With all parts being in tune, remember this: in the studio we do have a button for pitch correction, and we also have a timing button... but we do not have an emotions button. And if the vocal recording is lacking that, there is nothing we can do. Sorry.

All good, but how do I get my vocals to stand out? Often we lose the vocals because we are masking the primary frequency. When I say the primary frequency of the voice, I mean the frequency that shows as the highest spike when you look at an EQ analyser. So... let's say you bring up your vocal track and look at the frequencies with SPAN or your EQ or whatever, and you see that the most pronounced spike is at 1.7 kHz. You want this vocal to stand out... so do this: boost 1 to 2 dB at 1.7 kHz, but boost 3 dB at 850 Hz as well. The second frequency is half of the primary, which puts it an octave below. Here's what will happen. When we hear two notes together, in a chord on a guitar for example, we naturally hear the higher note more than the lower note. What we don't realise, though, is that the lower note is actually giving our ear a frame of reference to interpret the higher note with. The presence of the lower note helps us hear the higher note better. By taking half of the primary frequency of the voice and boosting it a touch more than you boosted the primary, you are using the exact same principle (there's a small code sketch of this idea just below). In another post I'll focus on how to make the vocals stand out in mastering and in the mix itself, using sidechaining and other techniques.
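Here's a minimal sketch of that octave-below trick in Python (NumPy/SciPy), under a few assumptions the post doesn't make: a mono float vocal, a 200 Hz to 4 kHz search band for the primary spike, a Q of 1.4, and the standard RBJ cookbook peaking biquad standing in for "your EQ".

```python
import numpy as np
from scipy import signal

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ cookbook peaking-EQ biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def octave_below_boost(vocal, fs, q=1.4):
    # Crude "primary frequency" estimate: the tallest spectral spike in a
    # vocal-ish band (a real analyser like SPAN averages over time).
    spectrum = np.abs(np.fft.rfft(vocal))
    freqs = np.fft.rfftfreq(len(vocal), 1 / fs)
    band = (freqs > 200) & (freqs < 4000)      # assumed search band
    primary = freqs[band][np.argmax(spectrum[band])]

    # Boost 1-2 dB at the primary, 3 dB an octave below (half the frequency).
    out = vocal
    for f0, gain_db in ((primary, 1.5), (primary / 2, 3.0)):
        b, a = peaking_biquad(f0, gain_db, q, fs)
        out = signal.lfilter(b, a, out)
    return primary, out

# Usage (hypothetical file name):
# from scipy.io import wavfile
# fs, vocal = wavfile.read("lead_vocal.wav")
# primary, boosted = octave_below_boost(vocal.astype(float), fs)
```

In practice you'd do this by ear with two EQ bands; the code just makes the arithmetic explicit: half the frequency is exactly one octave down.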
Saturate the magic... Saturation is very often the secret weapon when it comes to vocals (whether used subtly or for sound design). I'll generally always put some saturation straight on vocals and have that as the "raw" sound, so the saturated version goes out to the parallel channels if they exist. That just comes from my purely analog days, when a recorded vocal always came through at least one outboard compressor (sometimes two), a preamp and the analog console, then on to tape; that's a lot of different types and levels of saturation from the gear (not to mention then coming back off multitrack tape, through the console, and out onto stereo tape at the mix!). It's the vocal sound we're used to, along with most of the general public. Parallel channels then provide support if necessary, and the difference between styles and songs is really just a difference in the amount and type of saturation/distortion/compression applied.

Generally though, denser mixes call for a more focused vocal sound, which means keeping it really controlled - so more compression/saturation directly on the vocal. More open mixes favour a more parallel approach: you keep all the dynamics of the vocal but use the parallel tracks to add depth and clarity in a more consistent way. Having different processing between sections is definitely a good option. I tend to split the audio onto separate channels rather than automate anything when the changes are for whole sections (so I have a "verse vocal" channel and a "chorus vocal" channel, or whatever).
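As a rough illustration of that direct-vs-parallel distinction, here's a sketch of parallel saturation in Python. The tanh curve, the drive amount and the mix level are all stand-ins I've chosen, not settings from the post.

```python
import numpy as np

def parallel_saturation(vocal, drive=4.0, mix=0.3):
    """Blend a tanh-saturated copy under the dry vocal.

    drive: input gain into the saturator (more drive = more harmonics).
    mix:   level of the saturated parallel channel (0 = dry only).
    Both defaults are illustrative, not recommendations.
    """
    # Saturate a copy, normalised so the wet path stays near unity level.
    wet = np.tanh(drive * vocal) / np.tanh(drive)
    # The dry signal stays untouched; the parallel channel adds density.
    return vocal + mix * wet

# For a denser mix you might instead drive the vocal channel itself:
# vocal = np.tanh(2.0 * vocal) / np.tanh(2.0)
```

Raising `mix` keeps the dry dynamics intact while the parallel channel fills things in, which matches the "open mix" approach; driving the vocal channel itself harder is the "dense mix" version.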

We could write entire books about vocal processing; this was part 1, with some general thoughts. There's still so much to cover: compression, reverbs, slapback delays, panning, automation and much more... but that will be covered in other posts.

Lucas/LUMIC Studio




2 comments


Stephen W. Foster (Astral Rock)
April 25, 2022

Fascinating Lucas and very helpful

lumicstudiomix
April 25, 2022

Thank you, it's part 1 of many, as the topic is very deep.
