- How to Add Sound Effects to a Video for YouTube – Beginner’s Guide
- Edit for Free with Stock Video Editors (NLEs)
- Edit Using a Professional Video Editor (NLE)
- How to Line up Sound Effects with Video Like a Pro
- SFX Pitch and Timing Guide – Speed up, Slow Down and Tune your Sound Effects
- How to Layer SFX to Create a Unique Audio Experience
How to Add Sound Effects to a Video for YouTube
Sound effects (SFX) are an effective way of quickly and easily making your videos more professional. By adding some atmospheric rain sounds to a moody scene in your short film, or some impact sounds to your YouTube title sequence, you can instantly grab your audience's attention and keep them engaged in your content.
In this comprehensive practical guide to adding SFX to your videos, we’ll take you through the basics as well as more advanced techniques, such as perfecting the timing of YouTube SFX and music, altering sounds with timestretching and pitchshifting, and layering SFX to create unique audio experiences.
We're also showing off some free sound effects for video editing throughout this article, thanks to the SFX Cellar platform, a free, highly curated, royalty-free library of high-quality sound effects for your projects.
Since captions, links and annotations can be added after uploading your video to YouTube, many beginner video makers make the mistake of thinking that the same is true of YouTube SFX, but this isn’t the case. Instead, you’ll have to add the SFX beforehand using video editing software (commonly referred to as a video editor or an NLE); but don’t worry, this is really easy and it doesn’t have to cost you anything. Read on to learn more!
You can’t add SFX after Uploading
There are loads of awesome free SFX available to video creators online. We have our own selection available via the Accusonus SFX Cellar, and our Free SFX Collection, and video platforms like YouTube have their own audio libraries for you to use.
You’d be forgiven for thinking that, since YouTube provides its own SFX library full of popular YouTube sound effects, you can add these sounds to your videos after uploading – as with annotations and links. However, sound has to be added in the editing process before the video is uploaded to the internet.
If you haven’t got video editing software yet, fear not. You’ll find guides to both free video editor software and professional video editor software later in this article, and it’s likely you already have a video editor on your computer without even knowing it!
How to Add Sound Effects to your video
Adding sound effects for YouTube videos is really simple, and the process is almost identical no matter the software you’re using. In our example we’re going to be using Adobe Premiere Pro, but the same method is applicable in almost every video editor.
For this guide, we’ll be using one of the 100 free sounds in our UI Button SFX Pack, a YouTube sound effects pack which you can download for free below. It contains loads of great user interface sounds, button sound effects, beep sounds and more free sound effects for video editing.
Step 1: Import your Media
We’re going to begin by importing our video into our video editor. Find your video clips and sound effects on your hard drive. We’ve got a clip of a person pushing a button on an old radio, and we want to add a sound effect from our free SFX Cellar - UI Button SFX Pack to emphasize the pushing of the button – SFX Cellar - UI Button 05 sounds perfect.
Of course, you’ll want to choose the sound that fits your video best. Whether you want more of a buzz, ring or alert sound effect, you’ll find something that works for you in our UI Button Pack. Locate your video and audio files and import them into your project. This can usually be done with a simple drag and drop, but your video editor might need the audio to be imported via its File menu.
Your media will appear in your media library. This is very useful for larger projects where you’re using lots of different video and audio clips, as it allows you to quickly browse your media so you can place it in your timeline.
Step 2: Place your video in your timeline
Once you’ve got your media into your media library, you can start editing your video together. The first step is to get your video on the timeline; to do this, simply drag your video from your media library onto the video timeline. (Depending on your video editor, you may be able to drag the video file from a folder on your computer straight onto the timeline.)
Your video will appear as a clip in your timeline, and you’ll also be able to preview it via your video editor’s monitor.
Top tip: If your video has unappealing background sounds, it might be best to build your audio from scratch using SFX. To do this, you’ll need to mute the original audio associated with your video clip. This can easily be done in the timeline by pressing the mute button. In Premiere Pro this is a letter ‘M’ but it may be a microphone or speaker icon in your video editor.
Step 3: Place your audio in your timeline
You can add your SFX to the timeline in exactly the same way you added your video, just drag and drop it onto your timeline from your media library or from a folder on your computer.
Step 4: Position your audio
Now you’ve got your audio and video loaded into your timeline, it’s simply a matter of placing your YouTube SFX in the correct position.
This really is as simple as dragging your audio clip to the appropriate point in the video. A useful technique is to position the playhead at the point in the video where the sound effect needs to go, then you can easily drag the audio to the right position.
For a more in-depth guide to timing YouTube SFX, check out the section lower down this article on How to Line up Sound Effects with Video Like a Pro.
If you’ve got a computer or laptop, you can add sound effects for YouTube videos for free!
Macs come with the fantastic iMovie. Although this isn’t as fully featured as a professional video editor or NLE, it will allow you to do most basic video editing tasks, making it perfect for a Mac user who’s learning the ropes of video creation.
Windows users used to get Windows Movie Maker with their machines, however sadly it has now been discontinued. No problem – there are loads of freeware video editors available to Windows users online, and we’ve rounded up some of the best below:
Blender is an awesome open source video editor that was initially developed as a 3D animation suite. It can perform all your basic video editing tasks such as cutting and splicing, as well as a few more complex ones. It comes with a dedicated audio mixer which is a godsend for professional quality video, and allows you to add keyframes, transitions and more. It’s also available for Mac and Linux users.
Another great free video editor for Windows users (plus Mac and Linux) is Shotcut. Covering all your video editing needs, Shotcut is a fantastic beginner's video editor. It also has a whole bunch of awesome audio tools built right in including a compressor, expander, limiter, various filters, reverb and much more!
The free version of the fabulous DaVinci Resolve Studio, DaVinci Resolve is one of the most popular video editors on the market right now. The only NLE in the world that combines 8K video editing, colour correction, visual effects and audio post production in one tool, DaVinci Resolve is a killer free option for both Mac and Windows users. If you later decide to upgrade to the paid-for Resolve Studio, knowing the free version will come in very handy too.
If you’re serious about making videos, then you should consider upgrading to a professional-grade NLE. It’s a good idea to have a bit of practice with some freeware before you take the financial plunge, but in the long run, investing in some professional software can really pay off.
There are a lot of options out there so it can be daunting choosing the one that’s right for you. Because of that, we’ve put together a brief overview of some of the most popular video editors on the market right now. We also have a guide to working with audio in each of them on the Accusonus Blog.
A stalwart of the video editor world, Adobe Premiere Pro has been around since 2003. It’s probably the most popular NLE out there and is well regarded among professional video editors. It comes as part of the Adobe Creative Cloud suite, which is good news for anyone already invested in the Adobe ecosystem!
However, just like any Adobe product nowadays, you cannot buy a copy of Premiere Pro – instead you have to subscribe to one of their many Creative Cloud plans – potentially a downside for some.
Since the launch of Resolve Studio 16, many longtime fans of Premiere Pro have been making the move over to Blackmagic’s DaVinci Resolve Studio 16. With functionality for 8K editing, color correction, visual effects and audio post production, Resolve Studio is a real powerhouse. It’s also fantastic for collaboration, allowing contributors to work on the same project at the same time.
If you’ve got aspirations to one day edit for TV and film, then learning Media Composer will stand you in good stead. Despite launching way back in 1989, it’s still the most used video editor at the top levels of the industry.
That being said, if you have dreams of editing at the highest levels, it’s definitely worth mastering as many video editors as you can.
Finally we have Apple’s offering. Final Cut is a streamlined tool well suited to YouTubers looking for a video editor to take their work to the next level. Though it’s not quite at the professional grade of the other NLEs in this list, it’s certainly got everything your average online video creator will ever need to make high-quality content.
Now we’ve got to grips with the basics of adding SFX to your videos, let’s take it a step further. If you want your audio to have maximum impact then timing is crucial: an impact sound coming in a few frames late can really detract from the power of a scene. It’s a small thing, but it makes a big difference.
Fortunately, there’s an easy trick to make sure your SFX line up perfectly every time: use the visual audio waveform. In this section, we’ll be lining up one of the 100 samples from the free SFX Cellar - Impact SFX Pack, a free YouTube sound effects pack that you can download as a zip below.
Perhaps the easiest and most effective category of SFX is short impact sounds such as gunshots, punches, door slams, and bass drops. Action movies, promotional trailers, documentaries, title sequences, and pretty much every other style of popular video make use of impact sound effects.
When starting out, adding sound effects for YouTube videos might seem like a process of trial and error, which can be time-consuming and downright discouraging. But what if there was a way to know exactly where to place an impact sound effect the first time? The secret lies in the audio waveform.
Waveforms are visual representations of the amplitude of an audio signal. When the amplitude is high, the waveform is tall; when the amplitude decreases, the waveform gets smaller. By understanding the audio waveform, editors can make inferences about a piece of audio before even listening to it.
Luckily, impacts are easy to sync with video because they share a common waveform shape, thus providing a recognizable visual pattern to know exactly where the sound should be placed. Below, you can see and hear Impact 97.mp3 from the SFX Cellar - Impact SFX Pack.
Impacts, by nature, are very loud at the moment of, well, impact. Imagine a car door slamming shut: there will be a monstrous initial amplitude at the beginning, followed by a decaying signal as the sound waves reverberate around the garage. In our SFX Cellar - Impact SFX Pack you’ll find a strong selection of royalty-free cinematic booms, braams, hits and explosions.
In order to line up an impact sound effect, find the loudest part of the waveform, or the peak, and place it at the moment of visual impact.
Using the car door example, we would first find the frame of visual impact, which would be the moment that the door slams shut. Next, place the moment of auditory impact to occur at the same time. Remember that the auditory impact moment will be visible as the tallest peak in the waveform. Finally, watch your handiwork and make sure everything is as intended and adjust audio levels for your desired effect.
Pro Tip: When aligning impact sound effects, enlarge the track height and zoom in on the audio waveform to more easily identify the peak.
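The same idea can be expressed in code. Below is a minimal pure-Python sketch that finds the peak of a waveform (the point you’d snap to the frame of visual impact); the clip here is a synthetic decaying impact, not a real SFX file, and the function name is our own.

```python
import math

def impact_peak_time(samples, sample_rate):
    """Return the time (in seconds) of the loudest sample in a clip."""
    peak_index = max(range(len(samples)), key=lambda i: abs(samples[i]))
    return peak_index / sample_rate

# Toy 1-second clip at 48 kHz: silence, then a decaying 100 Hz "impact" at 0.25 s.
sr = 48000
clip = [0.0] * sr
for i in range(sr // 4, sr):
    t = (i - sr // 4) / sr                      # time since the impact started
    clip[i] = math.exp(-8 * t) * math.sin(2 * math.pi * 100 * t)

print(impact_peak_time(clip, sr))               # peak lands at, or just after, 0.25 s
```

In a real project the editor’s waveform display does this for you visually, but the principle is identical: the tallest point of the waveform is the moment of auditory impact.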
Occasionally, the timing will be a tiny bit off despite your best efforts to line up the audio peak with the visual impact. If this happens, try nudging the audio to the left or right by one frame. If this doesn’t work, you might have to temporarily switch the timebase of the timeline to get more precise results.
Video editing applications are set up to show frames per second (fps) by default. For example, assume we are editing footage shot at 30 fps. This means we have 30 different frames every second where we can place audio. However, sometimes we need to get more exact to ensure perfectly timed audio.
Fortunately, most applications have a way to switch from a frame-based timeline to a sample-based one, allowing you to make extremely subtle adjustments to an audio clip’s position. Once you’re done adjusting audio, switch back to the frame-based timeline to continue editing.
In Adobe Premiere, access the sample-based timeline by pressing the hamburger menu icon next to the sequence name, and selecting Show Audio Time Units.
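The frame-to-sample arithmetic is simple, and it shows why sample units give so much finer control. A quick illustrative sketch (the function names are our own; 48 kHz and 30 fps are just example values):

```python
def samples_per_frame(sample_rate, fps):
    """How many audio samples fit inside one video frame."""
    return sample_rate / fps

def frame_to_sample(frame, sample_rate, fps):
    """Timeline position of a given frame, expressed in audio samples."""
    return round(frame * sample_rate / fps)

# At 30 fps and 48 kHz, a one-frame nudge moves the audio by 1600 samples;
# in sample units you can nudge by as little as a single sample instead.
print(samples_per_frame(48000, 30))    # 1600.0
print(frame_to_sample(45, 48000, 30))  # frame 45 sits at sample 72000 (1.5 s)
```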
Once you get the hang of aligning sound effects with their visual counterpart, you can start to also consider music as an element to synchronize. Integrating music, sound effects, and visual movement in rhythm can elevate your overall edit.
This technique is commonly used in cinematic title sequences and with logos in advertisements. Think back to the last movie trailer you watched. When the name of the movie appeared at the end, was there a final music hit? Chances are there was also a layer of sound design to make that moment more dynamic.
Editors already commonly use music hits and rhythms as a method to punctuate their edits. Using these musical cues together with sound effects can add a new level of impact. But how do you line everything up?
Once again, use the waveform as a visual guide. In this case, align the peak in the music (usually a kick drum or cymbal crash) with the impact moment of the sound effect. Synchronize all of that audio with visuals such as a logo or text graphic to maximize its impact.
In the image below, see how the logo, music (track 1) and impact sound effect (track 2) are all lined up to work in unison.
Lastly, remember to have some fun experimenting with sound effects for YouTube videos! Get creative by layering multiple sounds, especially for scenes with explosions or massive impacts.
So you’ve got a sound effect from our SFX Cellar, you’ve imported it into your video project and you’ve timed it perfectly - but what if after all that the sound isn’t quite right for your scene? Don’t worry, this is where the wonders of sound design come in.
Understanding what is possible with time and pitch processing can be very useful for anyone working with audio, both functionally and creatively.
To demonstrate the sound design power of these tools, the example below uses a single sample and puts it through timestretching and pitchshifting effects. No other samples or effects.
In the past, the only way to alter the pitch or speed of a sound was to increase or decrease the playback speed of an analog medium, such as vinyl or tape. However, modern algorithms (found in most editing software) will allow you to change pitch or time independent of each other.
The analog time/pitch modification method is known as varispeed. The relationship between time and pitch means that if you slow a sound down, you lower the pitch; if you speed it up, the pitch will be higher. Here it is, demonstrated with the free Whoosh SFX Pack’s sound sample Whoosh 14.wav
When you slow down audio recorded to an analog medium, the number of vibration cycles per second (Hz) becomes lower, creating a lower pitch. Understand that all the core principles revolve around mathematics, even though you may not think of mathematics when you experience the sound.
Imagine a sample of a guitar playing the note A4 for 1 second. The note A4 has a frequency of 440Hz, meaning that in 1 second, the guitar string goes through 440 complete vibration cycles. If we stretch this 1-second guitar sample to 2 seconds long, we now have 220 cycles in each second. In mathematical terms, 440Hz divided by 2 equals 220Hz. This note would now sound as A3, one octave below the original.
This math demonstrates the relationship between pitch and time, and it holds for all sound, not just musical instruments. All sounds are made up of combinations of frequencies (measured in Hz), so varispeed timestretching will always change the pitch of a sound.
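A quick sketch of that arithmetic (illustrative only; the function names are our own, not from any editor):

```python
import math

def varispeed_frequency(freq_hz, stretch_factor):
    """Varispeed: stretching a sound to N times its length divides
    every frequency in it by N."""
    return freq_hz / stretch_factor

def ratio_to_semitones(freq_ratio):
    """Express a frequency ratio as a pitch change in semitones (12 per octave)."""
    return 12 * math.log2(freq_ratio)

original = 440.0                                 # A4, as in the guitar example
stretched = varispeed_frequency(original, 2)     # sample made twice as long
print(stretched)                                 # 220.0 -> the note is now A3
print(ratio_to_semitones(stretched / original))  # -12.0 -> one octave down
```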
Today, changing a sound’s length and changing its pitch can be separate processes. Thanks to technologies called timestretching and pitchshifting, you can tune a sound higher or lower without changing its timing (pitchshifting), or you can slow down or speed up a sound without having any effect on its pitch (timestretching).
Here’s our Whoosh 14 sound from the free Whoosh SFX Pack, made slower and faster, this time using modern timestretching algorithms instead of classic varispeed effects.
These were created in Audacity using its Change Speed (varispeed), Change Tempo (timestretching) and Change Pitch (pitchshifting) processes.
Different video editors have their own methods of repitching and timestretching, and some allow you to choose between algorithms. However, there are common principles throughout all platforms.
Timestretching is the process of modifying the length of a piece of audio, for instance stretching a 15-second sample to 3 seconds in length. This can be done without altering the pitch of the sample; however, it will introduce some artefacts to the stretched audio.
There are different methods of timestretching, each suited to a different type of sound. Some sounds are quick, percussive and transient in nature, some are slower and more textural, and others still defy explanation, being too complex. Each of these types has led to timestretch and pitchshift algorithms designed specifically to treat it.
Here’s the same sound – Whoosh 1 from our free Whoosh SFX Pack – processed using three different timestretching algorithms.
Here are some examples of timestretching a human voice to be slower or faster, while retaining its original pitch…
Pitchshifting is the process of changing the pitch of a sound without changing its length. For example, changing a 5-second piano note from a C to an F, while keeping the sample 5 seconds long. Again, this may create artefacts or glitchy sounds.
Let’s say you wanted to make someone's voice sound deeper, but you need their words to remain the same length so they match up to the lips moving on-screen – a pitchshifter would be the ideal tool. It can also be used to keep instruments and singers in tune.
Here’s a pitchshift effect applied to the Whoosh 21 sample from our free Whoosh SFX Pack. Observe that whilst the pitch has been changed, the timing is no different.
Notice that when you lower the pitch, it changes the timbre of the sound, making it darker and muddier. This is because all those airy high frequencies have been pushed down to the mids, and there's very little sound left in the high frequency area. Alternatively, when you pitch sounds higher, they can become thin-sounding, bass-less and tinny.
This is a generic walkthrough of how to pitchshift a sound. The specifics will be different depending on your software, but the general process will be similar.
1 > Select the sound you want to pitchshift.
2 > Copy and paste the sound to a new empty audio track. (Not always necessary but recommended for safety and convenience.)
3 > Navigate to and activate your video editor’s pitchshifting tool.
This may be:
- in a pop up menu,
- an effect that you insert on the track,
- a designated control built into the software interface.
Check your video editor’s user manual if you’re struggling to find it.
4 > On the Pitchshifter interface, you will usually see controls for:
[Semitones: +/- st] AKA Coarse, Transpose:
Change the pitch in semitone steps (+12st = +1 octave, twice the pitch; -12st = -1 octave, half the pitch).
[Cents: +/- c] AKA Fine, Detune:
This gives you microtonal control, for making smaller changes (+100ct = +1st).
In some software the pitch control will be given as a percentage (50% = -1oct. 200% = +1oct).
[Algorithm: ] Not always present, but this allows you to change the processing algorithm. Each algorithm works differently, and leaves its own sonic fingerprint on the resulting audio in the form of byproduct artefacts.
Algorithms are usually designed to address specific source material. Common types include: Voice, Monophonic, Polyphonic, Complex. Try out different types on your sound and choose the one you think works best or is least noticeable.
5 > Modify the controls to the desired amount of pitch change.
6 > Depending on the software, you may need to “commit” the sound here, rendering it as a new file.
Note: the further you pitchshift from the original sound, the more artefacts and distortion will be present in the result, eventually making it sound unrealistic, robotic, or even just strange. Make sure the monitoring headphones/speakers you are using are high enough quality to reveal the artefacts. If you can’t get the pitch you want without it being too mangled, try a different algorithm, or consider using varispeed.
These are Ableton Live’s pitchshifting controls; most software has a similar interface.
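All of the semitone, cent and percentage controls reduce to powers of two, so it’s easy to sanity-check a setting yourself. A small illustrative sketch (function names are our own, not from any particular editor):

```python
def semitones_to_ratio(semitones):
    """Frequency ratio for a pitch change in semitones (+12st = one octave up)."""
    return 2 ** (semitones / 12)

def cents_to_ratio(cents):
    """100 cents = 1 semitone, so 1200 cents = 1 octave."""
    return 2 ** (cents / 1200)

def ratio_to_percent(ratio):
    """Express a ratio the way some software does: 100% = unchanged."""
    return ratio * 100

print(ratio_to_percent(semitones_to_ratio(12)))   # 200.0 -> +1 octave
print(ratio_to_percent(semitones_to_ratio(-12)))  # 50.0  -> -1 octave
print(round(semitones_to_ratio(1), 4))            # one semitone is a ratio of ~1.0595
```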
This is a generic walkthrough of how to Timestretch your sounds. Again the specifics differ for each video editor, but the process will be similar. This is referring to Timestretching without altering the pitch.
1 > Select the sound you want to stretch.
2 > (It might be worth copying the sound to a new track for safety in case you want to quickly switch back to your original sound.)
3 > Navigate to your video editor’s Timestretching tool.
- Some video editors allow you to grab the edges of the audio clip and visibly stretch it along the timeline (Ableton, Audition, Logic, PT, Reaper…) Some allow you to place multiple stretching points within a single clip.
- Other video editors give you a % speed control. 100% would be the original length (1sec). 50% would be twice as long (2sec). 200% would be half the length (0.5sec).
- Sometimes there is the option to manually enter the resulting time, you just type into a box how many seconds and milliseconds.
4 > Stretch sound to desired length.
5 > If allowed by your NLE, switch between different timestretching algorithms to determine which is the most transparent, or sounds best for the source material or situation.
Common timestretch algorithms:
Transient: detects transients (sharp attacks) and stretches the material between them, keeping percussive hits intact.
Monophonic: use with single-voice instruments/sounds or dialogue.
Polyphonic: use with multiple-voice (harmonic) instruments or complex sounds.
Varispeed: modifies pitch and time together, like slowing down a tape machine.
6 > Depending on the software, you may need to “commit” the sound here, rendering it as a new file.
If you need the clip to be a specific length (e.g. 12.5 sec), and your video editor only gives a percentage control, try using a time calculator instead of making repeated estimations.
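In place of a time calculator, the arithmetic is simple enough to sketch in code. This follows the 50%-means-twice-as-long speed convention described above (the function names are our own):

```python
def speed_percent(original_seconds, target_seconds):
    """Speed setting that stretches a clip to a target length.
    100 = unchanged, 50 = twice as long, 200 = half the length."""
    return original_seconds / target_seconds * 100

def stretched_length(original_seconds, percent):
    """Resulting clip length for a given speed percentage."""
    return original_seconds / (percent / 100)

print(speed_percent(10, 12.5))   # 80.0 -> set the speed control to 80%
print(stretched_length(1, 50))   # 2.0  -> 50% makes a 1 s clip 2 s long
```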
For even more creative possibilities, you can combine both processes. Using both timestretching and pitchshifting can help you to overcome problems faced by using one on its own, or varispeed/analog style.
For example, say you recorded a video interview, and you recorded the sound with a separate handheld recorder. Sometimes there may be inconsistencies with sample rates and frame rates, meaning that when you replace the camera's audio with the handheld’s, the audio might be out of sync with the video.
If the audio was too short/fast, you could timestretch it to make it longer, fitting the video length. The issue now is that the voices would be deeper than normal. To correct this you could pitchshift the stretched audio back to the original pitch. This dual process will cause artefacts, becoming more extreme the further you go.
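To put numbers on that sync fix, suppose (hypothetically) the handheld audio runs 59.4 seconds against 60 seconds of video. A sketch of the two corrections:

```python
import math

def stretch_ratio(audio_seconds, video_seconds):
    """How much to stretch the audio so it spans the video's duration."""
    return video_seconds / audio_seconds

def pitch_correction_semitones(ratio):
    """Varispeed stretching by `ratio` lowers pitch by 12*log2(ratio) semitones,
    so shift back up by the same amount to restore the original pitch."""
    return 12 * math.log2(ratio)

ratio = stretch_ratio(59.4, 60.0)                   # hypothetical interview clip
print(round(ratio, 4))                              # 1.0101: stretch ~1% longer
print(round(pitch_correction_semitones(ratio), 3))  # ~0.174 st back up
```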
This sample was varispeed stretched to be longer, which made it sound lower pitch. The stretched sample has then been pitchshifted to match the original recording. The result is a much slower spoken voice, but matching the original pitch. You will hear some strange artefacts and glitchiness.
This audio file demonstrates some of the ways you can combine pitchshifting and timestretching. The image details what has been done in each section. It shows how you can change the timing but keep the pitch the same. You should be able to hear the artefacts as the processes are stacked together.
If you don’t mind sacrificing some quality, combining the two can be a powerful creative sound design tool that can drastically change and recontextualise sound.
This example is made using only the Whoosh 71 sample from our free Whoosh SFX Pack, processed by timestretching and pitchshifting. No other samples or effects. You can use these techniques creatively to start experimenting with sound design and push sound into new places.
You might have access to hundreds of sound effects from the Accusonus Free SFX Collection, but not everything always hits the spot perfectly for your video. An effect might have the right high end, but it’s missing the depth in the low end that you’re looking for. Or, maybe the sound is too simple, and it needs a few more elements in order to sound natural.
In situations like these, layering sounds in order to create the perfect effect is a skill that could save your video - time to sharpen up our sound design skills even further.
Some Sounds Work Together, Some Sounds Don’t
When choosing sound effects to layer, it’s important to choose effects that will sound good together. This has a lot to do with science, and how sound waves interact with each other, but when it comes down to it, your ear will be your best tool to use when choosing effects.
For this project, we’ll be using two sound effects from our Free SFX Cellar - Impact SFX Pack.
Here’s what our two chosen SFX sound like…
Let’s combine these two impacts to create a horror-style “creepy bell” effect that we could use to make our audience jump a little bit. All of the edits to these effects were done in DaVinci Resolve’s Fairlight page, but most professional NLEs and DAWs have these same capabilities.
Strategy 1: Trim and Fade
One crucial thing about making two sounds work together in perfect harmony is to make sure that they start and end together. If the timing is off on one of the sound effects, the whole effect will sound off. The easiest way to accomplish this is to use frame by frame scrubbing in your video editor or DAW. Find the frame that contains the first sound in each effect, move back one frame, and trim the beginning of your effect to that point.
Making sure your effects end at the same time is generally just as important as making them start together, but there are times when you can make a short, quick-hitting effect work with an effect that runs a little bit longer. The trick there is to fade out the shorter effect so it blends into the longer effect. A hard cut on the shorter effect will make it sound unnatural.
For this project, our sound effects already start at the same time so we don’t need to do anything at the beginning of the clip. They are also both longer effects that “ring out” after the initial impact, so all we have to do is trim them so they end at the same time, and add a fade at the end of both clips so that it sounds more natural.
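As a rough illustration of what a fade-out does under the hood, here’s a simple linear gain ramp in pure Python (real editors also offer curved fades; the function name is our own):

```python
def linear_fade_out(samples, fade_samples):
    """Apply a linear fade over the last `fade_samples` samples of a clip."""
    out = list(samples)
    n = len(out)
    for i in range(max(0, n - fade_samples), n):
        # gain falls from 1.0 at the start of the fade to 0.0 at the last sample
        gain = (n - 1 - i) / max(1, fade_samples - 1)
        out[i] = out[i] * gain
    return out

clip = [1.0] * 8
print(linear_fade_out(clip, 4))  # the last 4 samples taper: 1.0, 0.66…, 0.33…, 0.0
```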
Here’s what we have so far:
Strategy 2: Tweaking the Sound with EQ
Now that we have our effects trimmed and placed where we want them, it’s time to start making them sound the way that we want them to sound, so that they work together in harmony. For this, we will use EQ.
Using EQ on our clips allows us to remove the parts of each sound that we don’t want to hear in the finished effect. Let’s say that one effect has a great high end, but its low end will clash with the other effect. We can use our EQ to remove that low end. Or maybe one of your effects has a particular frequency that sticks out like a sore thumb when you put everything together. The EQ is great for that too.
For our first effect, which will act as the high end of the bell, we wanted to get rid of any low rumble that might clash with our second effect. There were also some undesirable noises in the higher frequencies. Let’s use low-pass and high-pass filters to get rid of those unwanted sounds.
Our second effect will be used for our low end. Since this effect is almost all low end, there’s very little to do here. We’ll use a slight high-pass and low-pass filter to get rid of the frequencies that seem to be interfering with our bell sound, and we’ll add a slight curve to the mid-lows to accentuate the initial impact.
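For intuition, here’s what a high-pass filter does to low rumble, sketched with a first-order one-pole filter in pure Python (real NLE EQs use steeper, more sophisticated filters; this is only a toy model):

```python
import math

def low_pass(samples, cutoff_hz, sample_rate):
    """First-order (one-pole) low-pass: keeps content below the cutoff."""
    alpha = 1 - math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    out, state = [], 0.0
    for x in samples:
        state += alpha * (x - state)  # smooth toward the input
        out.append(state)
    return out

def high_pass(samples, cutoff_hz, sample_rate):
    """First-order high-pass: the input minus its low-passed copy."""
    return [x - y for x, y in zip(samples, low_pass(samples, cutoff_hz, sample_rate))]

# One second of 30 Hz "rumble": a high-pass at 300 Hz attenuates it heavily.
sr = 48000
rumble = [math.sin(2 * math.pi * 30 * n / sr) for n in range(sr)]
cleaned = high_pass(rumble, 300, sr)
print(max(abs(s) for s in cleaned[sr // 2:]))  # roughly a tenth of the original level
```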
Let’s take a listen to the combined sound as it stands now:
There are no hard rules when it comes to using EQ on your effects. Everything creative is subjective, so play around with it until you have the sound you want. One word of advice though: it’s almost always better to subtract the frequencies that you don’t want, instead of adding to the frequencies that you do want. We broke that “rule” in our second effect, but that’s because it created the sound that was desired.
Strategy 3: Panning and Stereo Width
Obviously, we need to make sure that our sounds are all coming from the same direction. Otherwise we’re going to confuse our audience, and potentially give them a whopping headache. If one of your sound effects is panned to the left, and the other to the right, it’s not going to sound natural.
Stereo width is equally important. Is the effect you’re creating meant to sound big and bold? Increasing the stereo width is a great way to do that. If your effect is supposed to be short, targeted, and quick-hitting, narrowing the stereo width might be what you need to do.
In our project, neither effect is panned to the left or right, so we don’t have to do any panning. In fact, when creating sound effects, it’s almost always better to have your effect panned to the center. That way you can export it and use your effect in future projects, and pan the whole effect as needed.
Now let’s look at stereo width. We want our bell sound to be front and center, so we will make the stereo width on our first effect as narrow as it can be. In Davinci Resolve, the Mono preset in the stereo width plugin will do quite nicely.
Our low end is what is going to give our effect depth, weight, and that general creepiness that we’re looking for. To accomplish this, let’s increase the stereo width of our second effect. The Max Separation preset in Davinci Resolve’s stereo width plugin gives us the sound we’re looking for here.
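Under the hood, width controls like these are usually mid/side processing: the mid (mono) part of the signal stays put while the side (difference) part is scaled up or down. A hypothetical sketch of the idea:

```python
def set_stereo_width(left, right, width):
    """Mid/side width control: 0.0 = mono, 1.0 = unchanged, >1.0 = wider."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2           # what both speakers share
        side = (l - r) / 2 * width  # the stereo difference, scaled
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

left, right = [0.8, -0.2], [0.2, 0.4]
mono_l, mono_r = set_stereo_width(left, right, 0.0)
print(mono_l == mono_r)  # True: width 0 collapses both channels to the mid
wide_l, wide_r = set_stereo_width(left, right, 2.0)
print(wide_l, wide_r)    # the side signal is doubled, pushing the channels apart
```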
Notice how our effect now sounds bigger, deeper, and just a little bit creepier? We are well on our way to having a great creepy bell effect that we can use in our project.
Strategy 4: Tuning Sounds to Fuse them Together
Every sound in the known universe, no matter what it is, has some sort of underlying tone. If two tones don’t match up, it can be grating to the ear drums. For example, have you ever heard someone play an out-of-tune piano? Not very pleasant, is it?
Sound effects are the same. If the underlying tones of your effects don’t line up, the sound effect will distract from the story you’re trying to tell. Most NLEs and DAWs have a way to change the pitch and tone of your audio clips. It’s important to use these tools to make your effects sound like they belong together. Using your ear is the best way to accomplish this. You’ll know when it sounds good.
Since we are creating an effect that will have a horror-like sound, we don’t want the tones to be exactly the same. That would be too pretty. Instead, we want to find two different tones that work together like a chord. Having a musical background helped in this case. Let’s listen to our effect so far:
Notice how the underlying tones of our effects created a sort of minor chord? That’s perfect for a horror movie.
Strategy 5: Levelling
Our effect is coming along quite nicely, but it still sounds like our two sounds are fighting each other for dominance. This is where levelling plays a big role. Using faders, you can determine which of your effects is going to be the primary sound, and which will be the supporting sound.
Remember our “rule” when it comes to audio editing: it’s almost always better to take away rather than to add. Rather than raising the level of our primary sound, we want to lower the level of our supporting sound.
For our effect, we want the high end of the bell to be the primary sound. Our low end will act as the supporting sound. Let’s lower the level of our second effect and take a listen:
Our creepy bell effect is now properly balanced, and sounding great. Now it’s time for the finishing touches.
Strategy 6: Final Tweaks
The last step in creating a new effect is to make sure all of our ingredients, so to speak, work together. For example, if one of our effects has a heavy dose of reverb and the other doesn’t, we might want to add reverb to one effect, or use the DeReverb plugin from the ERA Bundle to remove reverb from another. You can also add filters and other processes to the master track to change the sound of the entire effect.
In this project, both of our effects had a good amount of reverb, and they sound as if they match up, so all of our final tweaks will be to the master track. Let’s use reverb to really dial in the creepy feel of our bell. Most reverb plugins have presets that simulate different-sized spaces, each with its own reverb character. The Bathroom reverb preset in DaVinci Resolve will be perfect here.
The last thing to do is to tweak the master level so you don’t have any clipping, and just like magic, you have a brand new creepy bell effect that you can use to scare the pants off of your audience.
Learning how to layer and combine sound effects to create your own custom sounds will open up a world of unlimited possibilities in your audio projects. So go grab some effects from the SFX Cellar, and start creating! The only limit is your imagination.