
Understanding Frequencies in Audio: Getting Perfect Sonic Character

Learn how to use the full frequency spectrum to your advantage with our guide to the science of sound and pitch

You’ll know the words Treble and Bass from music – the first means ‘high’ and the second means ‘low’. You might have twisted Treble and Bass controls on a basic stereo setup, too… or gone even further into shaping your audio signal with an EQ. But there’s far more to explore in the complex world of sound, and understanding how it all works will help you on your journey to making things sound great.

Put simply, audio frequency is a measure of how high or low the pitch of an audio signal is. A low C on a grand piano vibrates 32.7 times per second, while the highest C vibrates 4186 times per second. We call those frequencies 32.7 Hertz and 4.186 kilohertz (or 32.7Hz and 4.186kHz for short).

Two sound waves of different frequencies (5Hz and 15Hz) over one second

The two audio waves represented above are inaudible – they’re too low-frequency to actually hear – but it’s easy to count the number of repeated cycles of each wave that happen in one second.
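If you’d like to see that counting in action, here’s a minimal sketch in Python (using NumPy) that generates a 5Hz wave over one second and counts its cycles – the 1000-samples-per-second rate is an arbitrary choice for illustration:

import numpy as np

# One second of a 5Hz sine wave, sampled 1000 times per second
# (both values are arbitrary choices for illustration).
sample_rate = 1000
t = np.arange(sample_rate) / sample_rate   # time axis: 0 to 1 second
wave = np.sin(2 * np.pi * 5 * t)           # 5 cycles per second = 5Hz

# Each complete cycle crosses zero exactly once on the way down,
# so counting downward zero-crossings counts the cycles.
cycles = np.sum((wave[:-1] >= 0) & (wave[1:] < 0))
print(cycles)                              # 5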

Audio Fact: The frequency of a sound wave is its speed divided by its wavelength. Because the speed of sound changes with temperature, instruments whose pitch depends on a resonating column of air – organs, flutes and brass, for example – can drift slightly sharper or flatter in hotter or colder environments
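As a rough worked example of that relationship (using commonly quoted speeds of sound in air, and a 1-metre wavelength chosen purely for illustration):

# Frequency = speed of sound / wavelength.
speed_cold = 331.0   # metres per second in air at about 0°C
speed_warm = 343.0   # metres per second in air at about 20°C
wavelength = 1.0     # a 1-metre wave, purely as an example

print(speed_cold / wavelength)   # 331.0 -> about 331Hz in cold air
print(speed_warm / wavelength)   # 343.0 -> about 343Hz in warm air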

Low frequencies, from 20Hz to about 150Hz, make up the bass region of audio, with the very lowest sounds often felt more than they are heard. High frequencies are piercing, shrill and sharp. The lowest, the highest, and everything in between make up the entire frequency spectrum. Human hearing covers roughly 20 Hertz to 20 kilohertz (that’s 20,000 Hertz), and the older you are, the lower that maximum audible frequency becomes.

How Audio Frequencies Work

Let’s consider two extreme examples of frequencies. In the video below, the left example is an audio clip of a few single-frequency tones. These ‘sine wave’ tones are the simplest type of sound that can exist – a single tone at a particular frequency. Depending on the frequency played, the sine wave can sound dull, pure, bassy or piercing.

On the right, we have the opposite extreme – a white noise signal. Technically, this signal contains energy at every single frequency across the audible spectrum at once.
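As a rough sketch of the two extremes, here’s how you might generate a one-second sine tone and a one-second burst of white noise in Python (NumPy and SciPy assumed; the 440Hz frequency and 44.1kHz sample rate are just convenient choices):

import numpy as np
from scipy.io import wavfile

sample_rate = 44100                         # one second of CD-quality audio
t = np.arange(sample_rate) / sample_rate

# A pure sine tone: all of its energy sits at a single frequency (440Hz here).
sine_tone = np.sin(2 * np.pi * 440 * t)

# White noise: random samples whose energy is spread across every frequency.
rng = np.random.default_rng(0)
white_noise = rng.uniform(-1.0, 1.0, size=sample_rate)

wavfile.write("sine.wav", sample_rate, (0.5 * sine_tone * 32767).astype(np.int16))
wavfile.write("noise.wav", sample_rate, (0.5 * white_noise * 32767).astype(np.int16))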

Viewed on a spectrum analyzer, we get to see the frequency in Hz or kHz of any sound energy along the bottom axis.

Somewhere between the simple nature of the sine wave and the white noise signal, we find the sounds that make up the world around us.

Harmonics

At the top of this article, we mentioned a piano note vibrating 32.7 times per second. In the real world, though, a musical note won’t be made from a single frequency – it’ll be made up of many.

In the case of the 32.7Hz note, we’d also expect to find ‘harmonics’ at 65.4Hz (x2), 98.1Hz (x3), 130.8Hz (x4), 163.5Hz (x5) and higher. That 32.7Hz tone is the ‘fundamental frequency’, giving the musical note its pitch; but the harmonics give it its unique timbre, helping us differentiate the quality of one sound source from the quality of another.
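Here’s a quick sketch of that arithmetic, plus a crude way to build a richer tone by stacking harmonics on a 32.7Hz fundamental (the 1/n amplitude roll-off is an arbitrary choice, just to illustrate timbre):

import numpy as np

fundamental = 32.7   # the low C from earlier, in Hz

for n in range(1, 6):
    print(f"harmonic {n}: {fundamental * n:.1f}Hz")
# harmonic 1: 32.7Hz (the fundamental), then 65.4, 98.1, 130.8 and 163.5Hz

# A crude 'instrument': the fundamental plus progressively quieter harmonics.
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
note = sum(np.sin(2 * np.pi * fundamental * n * t) / n for n in range(1, 6))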

How to See Audio Frequencies

How to View the Frequency Spectrum in your NLE

Your NLE’s EQ will often contain a visualization of the frequencies running through it. Premiere users can turn to the Parametric EQ, for example. In the world of VST Plugins, though, there’s a lot more choice. 

Using a free visualizer plugin such as Gramotech GT Analyser or Voxengo SPAN, you can interrogate your audio like never before.

A frequency analyzer’s horizontal axis represents frequency, from 20Hz to 20kHz, while the vertical axis charts the level, in decibels, at each particular frequency.
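Under the hood, analyzers like these are built on the Fourier transform. Here’s a minimal sketch of the idea in Python (NumPy assumed), turning a mono signal into a list of frequencies and their levels in dB:

import numpy as np

def spectrum_db(signal, sample_rate):
    # Roughly what a frequency analyzer plots: frequencies vs levels in dB.
    window = np.hanning(len(signal))              # reduce spectral leakage
    spectrum = np.fft.rfft(signal * window)
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    levels = 20 * np.log10(np.abs(spectrum) / len(signal) + 1e-12)
    return freqs, levels

# Example: a 1kHz test tone shows a clear peak at 1000Hz.
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
tone = np.sin(2 * np.pi * 1000 * t)
freqs, levels = spectrum_db(tone, sample_rate)
print(freqs[np.argmax(levels)])                   # ~1000.0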

Check out our guides to loading audio plugins in Final Cut and Premiere.

Sculpting Audio Frequencies with EQ

An equalizer (EQ) is a tool for working with different frequencies in audio, letting you select particular bands of frequencies to boost or cut.

Why not check out our take on the Best 5 EQ Plugins for Video Editors?

EQ is often used reductively, to remove frequencies from a sound so that those same frequencies can be better heard in another sound that’s happening at the same time. For example, human speech frequencies could be masked by an instrument in a backing track playing around the same frequencies, so removing just those frequencies from the music track using an EQ can help the voice track be heard on top.
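If you’d like to see the idea in code rather than in a plugin, here’s a sketch of a single parametric ‘peaking’ band in Python, using the coefficients from the widely shared Audio EQ Cookbook and applying them with SciPy; the 1kHz centre, -6dB gain and Q of 1.4 in the commented example are hypothetical starting points, not a recipe:

import numpy as np
from scipy.signal import lfilter

def peaking_eq(signal, sample_rate, centre_hz, gain_db, q=1.0):
    # Boost (positive gain_db) or cut (negative gain_db) a band around centre_hz.
    a_gain = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * centre_hz / sample_rate
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], signal)

# Hypothetical use: dip the music bed by 6dB around 1kHz, where speech lives,
# so the dialogue sitting on top is easier to hear.
# music_ducked = peaking_eq(music, 48000, centre_hz=1000, gain_db=-6.0, q=1.4)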

You can use the graphic below to help you identify the most powerful frequencies in some common video subjects. Once you know where your most important audio sits on the frequency spectrum, you can de-emphasize those frequencies in other elements of your audio to give every part of the project its own character.

Common sound sources and their general frequency ranges

The Human Voice and the Frequency Spectrum

The voice can be one of the most complicated parts of audio to get right. Our hearing system has evolved to keep us alive and thriving, and while listening out for predators and prey has been essential, hearing and understanding our fellow human beings is absolutely crucial. Because of this, our hearing is so attuned to the human voice that it’s quite obvious to us when something’s amiss.

The Human Voice’s Fundamental Frequency

The lowest frequency of any voice signal is called the Fundamental Frequency. As well as being the lowest, it also conveys a lot of the information about the pitch of the voice at any given point, and therefore about the overall intonation of speech. 

The average fundamental frequency for a male voice is 125Hz; for a female voice it’s 200Hz; and for a child’s voice, 300Hz. Remember, as the speaker changes the pitch of their words, this frequency will change too, so think of fundamental frequency as a range rather than an absolute value. These differences come down to the speed of movement of a speaker’s vocal folds (aka vocal cords), as well as their size and how they’re used.
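For the curious, here’s a rough sketch of how a fundamental frequency can be estimated from a short voiced clip using simple autocorrelation (Python with NumPy; the synthetic 125Hz ‘voice’, the 16kHz sample rate and the 75–400Hz search range are all assumptions for illustration, and real pitch trackers are considerably smarter):

import numpy as np

def estimate_f0(frame, sample_rate, fmin=75, fmax=400):
    # Find the lag at which the frame best matches a shifted copy of itself;
    # that lag is one period of the fundamental frequency.
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)          # shortest period considered
    hi = int(sample_rate / fmin)          # longest period considered
    best_lag = lo + np.argmax(corr[lo:hi])
    return sample_rate / best_lag

# A synthetic 'male voice': a 125Hz fundamental plus one quieter harmonic.
sample_rate = 16000
t = np.arange(0, 0.1, 1 / sample_rate)
voiced = np.sin(2 * np.pi * 125 * t) + 0.3 * np.sin(2 * np.pi * 250 * t)
print(estimate_f0(voiced, sample_rate))   # ~125.0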

Vowel and Consonant Sounds in Human Speech

Vowel Sounds are found further up the voice’s frequency spectrum. They’re most prominent between 500Hz and 2kHz, but they tend to affect the whole voice. 

Each vowel is characterized by its own resonance pattern, created by the speaker positioning their mouth and tongue. Because vowels are defined by sound resonance rather than sound creation, they will remain the same while the fundamental frequency changes. Vowel resonances do vary from person to person, though. 

Consonants take up space between 2kHz and 5kHz. These sounds pass quickly and can help make speech more intelligible – think of a bad public address system which makes it hard to tell the difference between S, T and F sounds, for example.

Consonants are made in a variety of ways, but usually involve sudden air movements from the mouth and through the teeth. If consonants are too loud, they can become sharp and grating; if they’re too quiet, the speaker can sound like they’re lisping. To get consonants right, you could try a helping hand from the ERA 4 De-Esser plugin.

Music and the Frequency Spectrum

If you view a whole piece of music on a frequency analyzer, the results are often quite similar from track to track. Although it can vary with the audio engineering decisions made in the creation of the track, the general characteristic of music is to roll off slowly as the frequency gets higher, at a rate of about 3dB per octave.

So music tends to encompass every audible pitch – it’s a feast for every frequency.

Audio Fact: If we double the frequency of a musical note, we end up with the same note, one octave higher. This means that we hear logarithmically – the doubling in frequency (pitch) between 120Hz and 240Hz feels like the same jump in pitch as the doubling between 3kHz and 6kHz.
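In code, the logarithmic relationship looks like this (Python; A4 = 440Hz is the usual tuning reference assumed here):

import numpy as np

# An octave is a doubling of frequency, so the distance we hear between two
# pitches is set by their ratio, not the difference in Hz.
print(np.log2(240 / 120))     # 1.0 -> 120Hz to 240Hz is one octave
print(np.log2(6000 / 3000))   # 1.0 -> 3kHz to 6kHz is also exactly one octave

# Equal-tempered notes follow the same rule: each semitone multiplies the
# frequency by 2**(1/12).
def note_frequency(semitones_from_a4):
    return 440.0 * 2 ** (semitones_from_a4 / 12)

print(note_frequency(12))     # 880.0 -> the A one octave above A4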

Each musical note has a fundamental frequency – the lowest frequency at which it vibrates, and the one that sounds most strongly. The graphic below maps each musical note to its fundamental frequency.

Each musical note has a fundamental frequency, and musical instruments cover many different note ranges – and therefore frequencies – across the spectrum

Remember, just because a note’s strongest, lowest-frequency energy sits at the frequency given doesn’t mean that’s the only place it contains energy. Because of harmonics (see above), each note tends to contain energy at double, triple, quadruple, quintuple (and so on…) its fundamental frequency.

Mixing Audio: A Question of Frequencies

When working with video, we often use music as a background element to add emotional depth to what’s happening on screen or coming through the speakers. How can musical frequencies interact with other crucial elements of our projects? And what knowledge can we arm ourselves with to make everything work together?

Mixing Music with Spoken Voice

When layering a background piece of music with audible dialogue, your actions will depend on the content of the music. If the background music contains a vocal part, it will easily compete with on-screen voices, and you may need a heavier solution, such as ducking the music or using a dynamic EQ.
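As a very simplified illustration of ducking (a plain gain-based sketch rather than the dynamic EQ a real mix might use), the music can be turned down whenever a dialogue track is active; the threshold, amount of reduction and smoothing time below are arbitrary starting points, and both tracks are assumed to be mono NumPy arrays at the same sample rate:

import numpy as np

def duck_music(music, dialogue, sample_rate, threshold=0.05,
               reduction=0.4, smoothing_ms=50):
    # Envelope of the dialogue: rectified, then smoothed with a moving average.
    kernel = np.ones(int(sample_rate * smoothing_ms / 1000))
    kernel /= len(kernel)
    envelope = np.convolve(np.abs(dialogue), kernel, mode="same")

    # Gain curve: full level when the dialogue is silent, reduced when active,
    # smoothed again so the music doesn't jump up and down audibly.
    gain = np.where(envelope > threshold, reduction, 1.0)
    gain = np.convolve(gain, kernel, mode="same")

    return music * gain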

If the music in question is purely instrumental, your dialogue will be more audible if its key frequency ranges are reduced in the music track. If there’s a guitar playing around the key frequencies of human vowel sounds, making the voice hard to make out, reduce the music track at those frequencies if possible.

On the other hand, if there are percussion sounds or cymbals that collide with the dialogue’s consonant frequencies, reduce the music at these frequencies to give the dialogue more prominence.

March 20, 2020