As with many topics in the audio industry, the science behind acoustics and digital signal processing remains a major source of confusion for many aspiring producers, engineers, and artists, even though it is precisely what makes it possible to record, edit, synthesize, mix, and master music entirely within a powerful, well-equipped computer.
Before we can discuss which bit depth you should use and why, it’s best to understand what the term “bit depth” actually means first.
Step By Step, Bit By Bit
To get a good idea of what the bit depth actually represents in the digital domain, it helps to think of it first in terms of what it was before the Digital Era came along.
In the analog recording process, there are two main measurable aspects of sound that are directly correlative to their digitally reproduced counterparts: amplitude and frequency.
Amplitude and Meters
Amplitude (expressed in decibels, or dB for short) is, in essence, a fancier word for loudness, and it can be interpreted and measured in several ways depending on which meter you use. The two meter types that come standard issue on the mixing boards of most DAWs are peak and RMS.
Peak indicates the transients that exist within the source recording and is a good guide for setting a proper signal-to-noise ratio before you hit the record button. For mixing purposes, however, RMS (root mean square) is a more reliable gauge of a recording’s loudness, because it better matches how the average human ear perceives loudness.
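To make the peak/RMS distinction concrete, here is a minimal Python sketch (the function names are my own, not from any DAW or library) that measures both for a full-scale sine wave:

```python
import math

def peak_db(samples):
    """Peak level in dBFS: the single largest absolute sample value."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """RMS level in dBFS: the root mean square of all samples.
    This tracks perceived loudness far better than the peak does."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# One second of a full-scale 440 Hz sine wave at 44.1 kHz:
# its peak sits at roughly 0 dBFS while its RMS sits about 3 dB lower,
# which is why the two meters rarely agree on "how loud" a sound is.
sine = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(44100)]
```

The roughly 3 dB gap between the two readings is exactly why a sound that “pegs the peak meter” can still feel quiet to the ear.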
Dialing In Those Frequencies
Frequency (expressed in Hertz, or Hz for short) has to do with where sounds lie within the frequency spectrum, or rather, how bassy, bright, round, boomy, dull, or sharp they sound.
Derived originally from the Latin word frequentia, which means “assembly, multitude, or crowd”, it becomes fairly easy to understand what we refer to when we use the word “frequency” to describe a specific aspect of a sound, especially when we look at how sound works at the most basic level.
Fundamentally, sound is nothing more than a series of vibrations (which are really air molecules being compressed and expanded at varying rates) that our ears detect and our brain registers in concert with the other four senses we use to navigate and make sense of the world that surrounds us.
Frequency, in that sense, is a measurement of how quickly air molecules are compressed and expanded (audio engineers may also use the term “cycles” to describe frequencies).
When we factor in amplitude, or loudness, the actual event taking place is air molecules being vibrated within a space of a certain size. Incidentally, this is also referred to as SPL, or sound pressure level (which actually predates the dB as an acoustical measurement).
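To put a number on “how quickly,” here is a small back-of-the-envelope sketch (plain Python, nothing DAW-specific; the function names are my own) relating a frequency to the length of one of its cycles:

```python
sample_rate = 44100  # CD-quality: 44,100 snapshots of the waveform per second

def period_ms(freq_hz):
    """How long one compression/expansion cycle lasts, in milliseconds."""
    return 1000.0 / freq_hz

def period_samples(freq_hz):
    """How many digital samples it takes to capture one full cycle."""
    return sample_rate / freq_hz

# A low 55 Hz bass note: each cycle lasts ~18 ms and spans ~802 samples.
# A bright 5,512.5 Hz overtone: each cycle is ~0.18 ms and spans 8 samples.
```

The higher the frequency, the faster the air is compressed and expanded, and the fewer samples each cycle occupies.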
The Frequency Spectrum
On the frequency spectrum, a bass guitar will likely sound most prominent within 100-200 Hz while an acoustic guitar will have more pleasant-sounding overtones at 1-2 kHz or 5 kHz and higher.
At the same time, the acoustic guitar may have some significantly loud peaks in the lower frequency register due to the proximity effect of a close-up mic, peaks that are easy to correct with compression, EQ, or both.
Unless synthesized, most instruments have a frequency register that spans most of the spectrum, but they will be more dominant in specific frequency registers and hence less prominent in others.
This acoustic phenomenon is a clear-cut indication of how amplitude and frequency are really two sides of the same coin in that, when you change one, you will inevitably impact the other to the improvement or detriment of the end result.
Frequency vs Amplitude
In other words, much like conjoined twins, frequency and amplitude are simply inseparable. Increase the lows in one sound, and the overall perceived amplitude of that sound will increase.
Cut out the frequencies of another sound, and the amplitude drops. How this occurs largely depends on the sound source itself, but the correlation becomes self-explanatory once you actually take a recording and start mangling it with an EQ or compressor.
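You can watch this coupling happen in a few lines of Python. The one-pole filter below is a hypothetical stand-in for a “cut the highs” EQ move, not any particular plugin’s code; carving away frequency content measurably drags the RMS level down with it:

```python
import math
import random

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# White noise stands in for a bright, full-spectrum sound source.
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(44100)]

def cut_the_highs(samples, alpha=0.1):
    """A crude one-pole low-pass filter: the output follows the input
    slowly, so the fast wiggles (high frequencies) get smoothed away."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

darker = cut_the_highs(noise)
# rms(darker) comes out well below rms(noise): cutting frequencies
# lowered the amplitude too, without touching any fader.
```

Swap the low-pass for a boost and the RMS climbs instead; either way, one knob moves both quantities at once.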
Okay, So What About Bit Depth Then?
Now that we’ve covered the analog equivalents of amplitude and frequency, we can make our way over to the digital realm and talk about how bit depth can improve the quality of your recordings.
In the simplest of terms, bit depth is amplitude, or loudness, converted into thousands upon thousands of neat little levels made out of digital ones and zeros.
Rich The Tweakmeister provides an excellent explanation of how this works in an awesomely written article on his website, which, once broken down, goes something like this:
Within an audio file that has a bit depth of 16 bits and a sampling rate of 44.1 kHz (we’ll get to that in a moment), there is a total of 65,536 levels in each slice of audio, and 44,100 of those slices in every second, which means that, all in all, there is a grand total of 2,890,137,600 levels once we multiply the two figures together!
If we set the bit depth to 24, the number of levels per slice rises to 16,777,216, which in turn increases the total (assuming the sampling rate is still 44.1 kHz) to 739,875,225,600 levels!
That said, don’t get caught up in the math or the technical details. The numbers above are simply there to demonstrate the vast difference between how many levels of information a 16-bit recording can capture and how many more a 24-bit recording can.
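If you do want to check the arithmetic, a couple of lines of Python reproduce the totals above exactly:

```python
sample_rate = 44100            # slices of audio per second

levels_16 = 2 ** 16            # 65,536 possible levels per slice at 16-bit
levels_24 = 2 ** 24            # 16,777,216 possible levels per slice at 24-bit

total_16 = levels_16 * sample_rate   # 2,890,137,600 levels per second
total_24 = levels_24 * sample_rate   # 739,875,225,600 levels per second
```

Note that the jump from 16 to 24 bits multiplies the level count by 256 (2 to the power of the 8 extra bits), which is why the totals diverge so dramatically.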
Does 24-Bit Make Recordings Sound Better?
Well, if you have the right equipment, you can get a higher-quality recording. But if, for example, you take a 16-bit recording and convert it to 24 bits, you won’t make a dent whatsoever in the quality of the recording, because there’s nothing at the lower levels for the 24-bit format to pick up on. Why’s that? Well, let’s consider the following:
The maximum dB range of a 16-bit recording is approximately 96 dB, whereas
The maximum dB range of a 24-bit recording is approximately 144 dB.
The difference between those dB ranges is a whopping 48 dB! This means a far greater dynamic range and signal-to-noise ratio for recording any musical genre where subtlety is a must, which definitely rings true for jazz, classical, country, and pretty much any genre that makes generous use of acoustic instruments.
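Those dB figures follow directly from the bit counts: each extra bit doubles the number of levels, and every doubling adds about 6.02 dB of dynamic range. A quick sketch:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of a fixed-point audio format:
    each bit doubles the level count, adding 20*log10(2) ~= 6.02 dB."""
    return 20 * math.log10(2 ** bits)

# dynamic_range_db(16) lands at roughly 96 dB,
# dynamic_range_db(24) at roughly 144 dB,
# and the gap between them is the 48 dB mentioned above.
```

This is also where the rule of thumb “about 6 dB per bit” comes from.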
Conversely, once you record at a bit depth that restricts you to a dynamic range of 96 dB, converting the file afterwards won’t reveal anything below that limit, because there’s nothing to be found there!
What About 32-Bit?
Internally, many modern DAWs use 32-bit floating point in their signal processing algorithms, which is highly useful for preventing intermittent clipping and distortion at the digital level. Recording in 32-bit, however, is next to useless: 24-bit is more than sufficient to capture most, if not all, low-level signals, and 32-bit files will simply clog up more space on your hard drive.
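A tiny sketch shows why floating point inside the mix engine matters. The quantizer below is a simplified model of 16-bit fixed-point storage (not any real converter’s code): push a sample past full scale and fixed point clips it permanently, while a float simply remembers the overshoot:

```python
def to_int16(sample):
    """Quantize a float sample to 16-bit fixed point.
    Anything beyond full scale (+/-1.0) is clipped -- information is lost."""
    clipped = max(-1.0, min(1.0, sample))
    return int(round(clipped * 32767))

hot = 1.5                        # an overshoot from a loud plugin chain
stored_fixed = to_int16(hot)     # clips at 32767; the extra 0.5 is gone forever
stored_float = hot               # a 32-bit float keeps 1.5 as-is
rescued = stored_float * 0.5     # pull the fader down later: waveform intact
```

In the fixed-point path no amount of later gain reduction brings the clipped peak back, which is exactly the intermittent distortion the float engine sidesteps.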
However, setting your bit depth to 32 bits prior to rendering your mix for mastering will help preserve the transients once you normalize the audio file containing your final mix, as mastering engineer extraordinaire Friedemann Tischmeyer explains in this insightful video.
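For reference, peak normalization itself boils down to one constant gain change. This is a generic sketch under my own naming, not Tischmeyer’s exact workflow:

```python
def normalize(samples, target_peak=1.0):
    """Scale every sample by one constant gain so the loudest peak lands
    exactly at target_peak -- relative transient levels are untouched."""
    peak = max(abs(s) for s in samples)
    gain = target_peak / peak
    return [s * gain for s in samples]

# A 32-bit float bounce whose hottest transient overshoots full scale:
mix = [0.1, -0.4, 0.25, 1.3, -0.6]
ready_for_master = normalize(mix)   # hottest peak now sits at 1.0
```

Because every sample is scaled by the same factor, the ratios between the transients and the rest of the mix survive intact; that is the “preservation” being described above.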
16 vs 24 Bit Example Video
All in all, set your recording bit depth to 24 bits, then bounce at 32 bits and normalize the file (if you’re mastering the song yourself, that is) so that the transient peaks are preserved during mastering. Happy mixing!