Ear Training and Audio Programming Course: Compression I
Last updated: July 18th 2023
Introduction #
Welcome to my Ear Training and Audio Programming course.
(I'm publishing it in parts, in no particular order. The full course will eventually get a whole page and a suggested learning order.)
Audio compression #
Compression is probably the most used and thus most important tool for the audio engineer. (Close contender being the EQ, which I'll probably teach you next.)
Drum bus compression #
Within that, drum bus compression (as in the whole drum mix passed through a compressor) is one of the most used and thus most important tools for the metal and rock audio engineer.
Your Mission #
...should you choose to accept it, is to:
- Click any course file to load it.
- Hit play.
- Click the "+" to add a compressor.
- Click on the parameter names for guided tinkering.
- (Optionally: Upload any file of yours.)
- (But the hints and mini tutorials are obviously based on the course files. So for the guided tinkering part, you wanna use those.)
Main course files:
- Zander Noriega drum loops
- Alternatively, load your own audio
A quick rant on compression #
Compressors don't kill songs, shitty musicians do #
Avoid midwit talk about compression "killing" things.
Of course a compressor works by manipulating the loudness of the signal and "reducing dynamics."
But if loudness and "control" and "killing" are the only things you think of when you hear "compressor," you're missing the point and absorbing too many poorly-thought-out audio engineering opinions.
Compressor to increase the life in a mix #
In fact, here's what I mostly use a compressor for: Discovering hidden rhythms. Particularly when compressing drum buses.
When you "raise the floor" in a drum mix, you might find yourself with a new drum pattern from the same recording. Which means compression can actually find the life in a signal. This, along with the fact that you can "automate" the settings, is obviously very useful for increasing the life in a mix.
Your love for dynamics is likely a myth #
I've never heard a song and thought "I don't like this Meshuggah song because it's compressed." No, Meshuggah songs are great, compressed or not.
I've also never thought "I like this Ska song because it's not compressed." No, Ska songs suck, compressed or not.
Programming notes #
If you're not a programmer, feel free to stop here.
This course is meant for the niche of people who want both things: Better ears, and audio programming (or even just computer programming) chops. But if you only want better ears, you're free to abort mission now.
Of course, this being a real thing, it's full of real code, ugly code, glue code, etc., totally unrelated to audio programming. E.g. the "HTML" (actually DOM) code just for making buttons and other UI things work, the code to track "application state," etc.
So for each course chapter, I will just be talking about a few cherry-picked topics, with only the relevant snippet of code. (Do feel free to inspect the whole thing if you're curious and have a strong stomach.)
Buffers #
As you know, reading from disks is slow(er), whereas reading from memory is fast.
For audio applications, where we want to go from our "code" to our speakers moving as quickly as possible, we want to keep our audio in memory. We call this a "buffer."
The code for loading an audio file from an HTML input goes something like:
const reader = new FileReader(); // 1
reader.onload = async (event) => { // 2
  const arrayBuffer = event.target.result; // 3
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer); // 4
  const source = audioContext.createBufferSource(); // 5
  source.buffer = audioBuffer;
  /* do something with `source`, e.g. connect it to another node. */
};
// `file` comes from the HTML input (e.g. input.files[0]), and
// `audioContext` is an AudioContext created elsewhere.
reader.readAsArrayBuffer(file);
1. Use the general FileReader API to load the (hopefully audio) file.
2. The FileReader API will call whatever function we assign to reader.onload.
   - Google "Event handlers in the DOM." (DOM, not "JavaScript," to make sure you're learning about browser events.)
   - I use async so I can use await inside the function. It's syntax sugar for "asynchronous" things without the cumbersome explicit "callbacks" or "promises" code.
3. The general "array buffer" that contains our (hopefully audio) file data.
4. We use the Web Audio API to try to decode the data as audio (and I use await for aforementioned syntax sugar reasons.)
5. Like the cowboy programmer that I am, I assume the decoding was successful, i.e. that I have some sweet music data in audioBuffer, and go ahead and finally create a playable "node" (the Web Audio API is a graph-type of abstraction) without checking for errors.
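To actually hear anything, the sketch continues, still inside that onload handler where source is in scope, with something like:

source.connect(audioContext.destination); // wire the source into the speakers
source.start(); // start playback immediately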
The code gets naturally messy because there are different types of buffers. One buffer from when we loaded the file (the general file-reading API), then some other shit, and eventually we get to an AudioBuffer, which as its name suggests is specialized for holding in-memory audio data.
How to implement a "bypass" feature? #
What's the best way (if any) to implement the "bypass" feature for a node in an audio processing graph?
Original idea: Replace node with a dummy one #
Based on the way things work in the analog world, where we also have a bunch of hardware components connected to each other, my original idea to implement a "bypass" button was:
- Find the node I want to bypass.
- Disconnect its inputs and outputs.
- Create a node to act as a "bypass" (say, a plain GainNode).
- Connect the original node's inputs into my "bypass" node.
- Connect my "bypass" node into the original node's outputs.
Inputs? Outputs? What are you talking about! #
Already at the second step my dream came crashing down: Turns out Web Audio API nodes don't keep track of their inputs or outputs. I.e. there's no handy .inputs (or .outputs) property on an AudioNode.
Which meant I needed to keep track of all that myself.
I.e. stop using AudioNode#connect() directly. Abstract over it, with some homemade logic to maintain inputs and outputs state for every node (or at least for every node that I want to be able to bypass later.)
Quick and dirty cowboy state #
I just threw all that shit into some weak maps (fuck strong maps!), which looks something like this:
const nodeInputsMap = new WeakMap();
const nodeOutputsMap = new WeakMap();

const connect = (src, dst) => {
  // Record src as an input of dst.
  const inputs = nodeInputsMap.get(dst) || [];
  inputs.push(src);
  nodeInputsMap.set(dst, inputs);
  // Record dst as an output of src.
  const outputs = nodeOutputsMap.get(src) || [];
  outputs.push(dst);
  nodeOutputsMap.set(src, outputs);
  // Then do the actual Web Audio connection.
  src.connect(dst);
};
That did the trick: As long as I stick to the discipline of using my custom connect() instead of the nodes' .connect() method, my program will keep track of every node's inputs and outputs at any time (and hopefully the weak maps help me avoid cancerous growth of orphan shit when the user starts adding and removing and adding shit all over the place.)
With that, I could now replace any node in the graph with my "bypass node" (just a GainNode.)
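For illustration, a sketch of how that replacement can look. (replaceNode is my name here, not necessarily the course's actual code; it assumes the nodeInputsMap, nodeOutputsMap, and connect() from above.)

// Swap out `node` for `replacement`, using the input/output
// bookkeeping maintained by the custom connect() above.
const replaceNode = (node, replacement) => {
  const inputs = nodeInputsMap.get(node) || [];
  const outputs = nodeOutputsMap.get(node) || [];
  // Detach the original node from its neighbors.
  inputs.forEach((src) => src.disconnect(node));
  outputs.forEach((dst) => node.disconnect(dst));
  // Wire the replacement into the same spots, via the custom
  // connect() so the maps stay up to date.
  inputs.forEach((src) => connect(src, replacement));
  outputs.forEach((dst) => connect(replacement, dst));
};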
Did it work? Yes #
It did work.
I implemented the bypass logic as I envisioned it, essentially as I would do in real life, if I were to replace one of the pedals in a guitar pedal chain with a special "bypass pedal" that doesn't touch the signal.
But also, it didn't work #
Alas, even though the code was doing what I intended, I would sometimes hear artifacts in the playback.
Sometimes they were subtle but they were there. (I guess this is where being an audio engineer too helps.)
So, the code seemed right, the switch from compressed sound to bypassed sound and back seemed to occur, but shit sounded weird.
Is it Web Audio? Chrome? My code? My computer? I don't know, and I really have no patience for getting stuck right now (particularly since I'm in the middle of a good ol' burnout.)
Back to the drawing board #
So I just rethought my approach for bypassing.
And I went for something which works, has no artifacts (at least none that I can hear), but could be potentially resource-hoggy for very large graphs.
What's this other approach?
Fuck it: Everyone gets a parallel gain node #
"You get bypass node! And you get a bypass node! /Oprah.gif."
Say we have INPUT (e.g. the audio file) and OUTPUT (the computer speakers):
INPUT ---> OUTPUT
And we want to stick a COMPRESSOR in between, ie.:
INPUT ---> COMPRESSOR ---> OUTPUT
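In Web Audio terms that's roughly (assuming a source node and the custom connect() from earlier):

const compressor = audioContext.createDynamicsCompressor();
connect(source, compressor);                   // INPUT ---> COMPRESSOR
connect(compressor, audioContext.destination); // COMPRESSOR ---> OUTPUT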
Well, what I'm doing now is adding a parallel gain node (for "bypass" purposes) whenever I add a node to the graph.
So if you ask me to add a COMPRESSOR, I am doing this:
INPUT ---> COMPRESSOR ---> OUTPUT
\ ^
\ |
-> GAIN NODE -------
This way, the "bypass" logic could be just a matter of setting the COMPRESSOR to 0 gain and the GAIN NODE to 1, and vice versa when the "bypass" button is toggled.
So I went and implemented that.
And did it work?
...Almost.
DynamicsCompressorNode does not have a .gain property #
So AudioNodes don't keep track of their ins and outs, and a DynamicsCompressorNode can't even control its volume.
Alright, message received: This API gives us the minimum possible for each type of node to do only that at which it specializes.
It's on us to architect everything else: How to manage the node's connections, etc.
Therefore, to set COMPRESSOR's gain to 0, I would need, well, a GainNode.
The final bypass implementation #
And so this is what happens now when you ask me to add a COMPRESSOR (that we want to be able to "bypass"):
INPUT ---> GAIN NODE ---> COMPRESSOR ---> OUTPUT
\ ^
\ |
-> GAIN NODE ---------------------
The "bypass" toggling is a matter of setting one GAIN NODE's .gain.value
to 0 and the other one's to 1, and viceversa.
The code for adding a compressor now is something like:
const addCompressor = (compressor, lastConnected, audioContext) => {
  // The parallel "bypass" path starts muted.
  const bypassNode = audioContext.createGain();
  bypassNode.gain.value = 0;
  // The gain node feeding the compressor (the "wet" path).
  const gainNode = audioContext.createGain();
  const lastConnectedBeforeLocalConnections = lastConnected;
  // Rewire: lastConnected now goes through the compressor chain...
  lastConnected.disconnect(audioContext.destination);
  connect(lastConnected, gainNode);
  connect(gainNode, compressor);
  connect(compressor, audioContext.destination);
  // ...and also through the parallel bypass path.
  connect(lastConnectedBeforeLocalConnections, bypassNode);
  connect(bypassNode, audioContext.destination);
  // bypassNodes and gainNodes are Maps kept elsewhere in the app state.
  bypassNodes.set(compressor, bypassNode);
  gainNodes.set(compressor, gainNode);
};
(The lastConnected shit is some other inner state management thing I'm too tired to talk about right now. But it's to do with letting the user create a chain of various compressors.)
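The toggle handler itself then boils down to flipping those two gains. (A sketch; toggleBypass is my name for it, using the two maps populated in addCompressor() above.)

// Route through the parallel path when bypassed,
// through the compressor's input gain otherwise.
const toggleBypass = (compressor, bypassed) => {
  bypassNodes.get(compressor).gain.value = bypassed ? 1 : 0;
  gainNodes.get(compressor).gain.value = bypassed ? 0 : 1;
};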
And that's it. Now we have a (hopefully) working bypass button.
Questions and Concerns #
- Does the Web Audio API keep sending signal through the compressor even though the GAIN NODE before it is at 0?
- I know that in Logic Audio (my DAW since forever) a plugin bypass is not done this way. Anyone who's used Logic will tell you: Muting a channel does not free memory/CPU. Logic keeps running the processing in the background. Bypassing the plugin, on the other hand, does free the resources.
- So I still think my original idea was the proper one. I'll come back to this again. (Hopefully not anytime soon.)
- One good thing about this setup, though, is that it is (I think) exactly what is needed for a "Dry vs. Wet" control. So I'll probably be implementing that for the next lesson (see the sketch after this list). "When life gives you lemons..."!
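A minimal sketch of that "Dry vs. Wet" idea, reusing the same two gain nodes (setDryWet and its 0-to-1 mix parameter are my inventions here, a simple linear crossfade):

// mix = 0 is fully dry (bypass path), mix = 1 is fully wet (compressor path).
const setDryWet = (compressor, mix) => {
  bypassNodes.get(compressor).gain.value = 1 - mix; // dry path
  gainNodes.get(compressor).gain.value = mix;       // wet path
};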
Anyway, those were just some of the adventures involved in this section of this chapter of the course!
Pain Not Necessary to Know #
(Extreme prog metal deep cut. Get it?)
I had other misadventures while programming this, but they were on the UI side of things. Those I won't bother you with. Boring frontend browser-scripting crap.
Yes, I'm going full "vanilla JS" (you really mean "vanilla DOM" when you say this).
No React. No nothing. I might use some tiny Math library here and there as I need to. But I'm not in the mood for dealing with the idiotic idiosyncrasies of other people's big dumb DOM abstractions.
Fuck React, fuck Angular, Vue, all of that shit.
By the way, I hate to break it to you, "Everything is a {THING}" people, but: No, not everything is a pure function. Not everything is a "component." Not everything is an "Object." Not everything is a "Signal." Not everything is an "actor."
Cut the "Look, I found an abstraction I like, so now EVERYTHING has to be that!" bullshit. Just us whatever abstraction works for today's tasks and move on.
References #
- (Text/HTML) 1.19. The DynamicsCompressorNode Interface @ Web Audio API, 29 March 2023
- (Text/HTML) 1.20. The GainNode Interface @ Web Audio API, 29 March 2023
- (Text/HTML) 1.5. The AudioNode Interface @ Web Audio API, 29 March 2023
- (Text/HTML) AudioNode#connect(destinationParam, output) @ Web Audio API, 29 March 2023
- (Text/HTML) Rules of Hooks @ React documentation