Digital signals have been around for more than a century, and they've been ubiquitous in audio for more than 30 years, yet many people still don't understand them very well. In the years when digital signals were tied to a particular format -- *e.g.*, CD or SACD -- consumers really didn't need to know anything about how they worked. It was enough to say, "This one sounds better than that one." At the recording end of things, the digital technology used can have an impact on the sound, but so can a number of other factors that aren't so easily expressed in numbers.

But as more and more consumers get into computer audio, they are confronted with choices about *bit depth* and *sample rate* -- when buying and setting up systems, and, increasingly, when choosing which digital format to download music in. The purpose of this article is to explain what these terms mean and, more important, how they might affect your audio experience. The concepts aren't difficult to understand. There will be a few numbers, but no equations.

An electrical signal, in audio or anything else, is analog. That signal can be left in the analog domain and stored in many different ways, the two most common being magnetic tape or a plastic disc with a long spiral groove cut into each side. Either way entails compromises, principally in limited frequency response and the addition of distortion and noise. Alternatively, we can store that signal digitally, which comes with its own set of compromises. There are a number of ways to convert an analog signal to a digital one, the most common of which is Pulse Code Modulation (PCM), in which the amplitude of an analog signal is recorded at discrete points in time. The *sample rate*, usually expressed in hertz (Hz), indicates how frequently the signal is sampled. The amplitude of the signal at any instant in time is represented by a discrete number. The number of discrete choices available is given by the *bit depth*. These two numbers tell you how much information is contained within a particular signal, as well as what kind of sonic information is retained and what is discarded.

So that I don't have to clutter the rest of the text with caveats, I'll get one thing out of the way right here. There are factors other than bit depth and sample rate that affect the sound of a digital signal -- digital filters are a good example -- that I don't discuss here for a number of reasons: There are too many variants, any nontechnical discussion of them is extremely superficial and therefore uninformative, they are rarely specified on audio components or recordings, and, most important, they have significantly less effect on the sound than do the sample rate and bit depth. Once you understand the basics, however, these topics do become interesting, and are incredibly important in wringing that last little bit of performance from a system or recording. We may decide to discuss them in a future series of articles.

**Bit depth**

In a 1-bit system, the signal can either be on or off. In a 4-bit system, the signal can be represented by 16 discrete values -- 0, 1, 2, 3, 4, 5 . . . 15. There is nothing in between. If the actual value of the signal is 7.6, and we have a system that rounds off to whole values, then 7.6 will be recorded as 8. The difference between the actual value and the recorded value -- in this case, 0.4 -- is called *quantization error*. That information has been thrown away and can never be recovered.
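If you'd like to see this rounding in action, here is a minimal Python sketch of the hypothetical 4-bit quantizer described above (an illustration only, not code from any audio product):

```python
def quantize(value, bits=4):
    """Round a value to the nearest whole level in a `bits`-bit system."""
    levels = 2 ** bits                     # 16 levels for 4 bits: 0..15
    return min(round(value), levels - 1)   # nothing in between; clamp at the top

recorded = quantize(7.6)     # 7.6 rounds to 8
error = recorded - 7.6       # the quantization error: about 0.4
print(recorded)              # 8
```

The fractional part that rounding discards is exactly the information that can never be recovered.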

In an audio circuit, the value lies within a fixed range -- say, between -2 and +2 volts -- in which each discrete level represents an equal portion of that range. For our hypothetical 4-bit system, we are dividing it into 16 pieces. As we increase the number of pieces into which we can divide the maximum amplitude, we reduce the magnitude of the error that exists between the actual value of the signal and its recorded value. In a 16-bit system -- the CD standard -- there are 65,536 possible discrete values. That means that the difference between any two adjacent recordable voltages for our +/-2V example is about 0.00006V, or 60µV. Consequently, the maximum error in the signal is only 30µV, which seems pretty small -- except when you consider that the maximum resolving power of a good analog circuit is at least an order of magnitude better than that. Fortunately, analog-to-digital converters with bit depths of 20 and 24 are quite common.
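The arithmetic behind those figures is simple enough to check yourself; this short sketch works out the step size and worst-case rounding error for the ±2V, 16-bit example:

```python
# Step size and maximum rounding error for a 16-bit system spanning -2 V to +2 V.
full_scale = 4.0              # volts, from -2 V to +2 V
levels = 2 ** 16              # 65,536 discrete values
step = full_scale / levels    # about 61 microvolts between adjacent levels
max_error = step / 2          # about 30 microvolts when rounding to nearest
print(f"{step * 1e6:.1f} uV step, {max_error * 1e6:.1f} uV max error")
```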

The ratio between the largest and smallest values recordable by a system is that system's *dynamic range*. Typically, dynamic range is expressed in decibels (dB), which represent this multiplicative difference logarithmically. There is a precise equation to convert from bit depth to dynamic range in dB, but the more useful rule of thumb is 6dB per bit. Applying that rule, a 16-bit system has a dynamic range of 96dB, and a 24-bit system has a dynamic range of 144dB. In the real world, electrical noise prevents A/D converters from achieving full 24-bit resolution. The best specification for dynamic range that I've seen on an ADC is 127dB, which is slightly better than 21-bit resolution. Other factors can drive that number even lower.
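For the curious, the precise equation mentioned above is easy to evaluate; this sketch compares it against the 6dB-per-bit rule of thumb:

```python
import math

# Each added bit doubles the ratio of largest to smallest recordable value,
# and a doubling is 20 * log10(2), or about 6.02 dB.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3 dB (rule of thumb: 96 dB)
print(round(dynamic_range_db(24), 1))   # 144.5 dB (rule of thumb: 144 dB)
```

The rule of thumb understates the true figure by only a fraction of a decibel per bit, which is why it is the more useful number to carry around.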

Analog dynamic range is not exactly the same as digital dynamic range, but they must be considered together. If you have a microphone preamp with a dynamic range of 110dB in front of a 24-bit converter, the best possible resolution for the system is between 18 and 19 bits. The same limitations are present when decoding the digital signal, although the loss of resolution after decoding may be subjectively less critical than the limits before encoding. One advantage of using a converter with greater bit depth than the system requires is that recording levels can more easily be set without worrying about digital clipping of the waveform or loss of low-level information.
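Running that same relationship in reverse shows how an analog stage caps the resolution of the whole chain; a brief sketch, using the figures from the paragraph above:

```python
import math

def effective_bits(dynamic_range_db):
    """Bits of resolution implied by a measured dynamic range in dB."""
    return dynamic_range_db / (20 * math.log10(2))  # ~6.02 dB per bit

print(round(effective_bits(110), 1))   # a 110 dB preamp limits the chain to ~18.3 bits
print(round(effective_bits(127), 1))   # the best ADC spec mentioned: ~21.1 bits
```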

An audio signal is the summation of all audio sources into a single waveform. The samples that we take of that waveform are not likely to occur at the peak of each frequency, but rather as the wave is rising or falling. As the amplitude of the signal changes, so must the value that is recorded. Otherwise, the shape of the reconstructed waveform, and therefore the sound, will be distorted.

Let's take the simple example of a single instrument playing a single note. We will assume a fundamental frequency plus the first five harmonics in diminishing relative amplitude, such that the fifth harmonic is one-sixth the amplitude of the fundamental. We will sample the signal at 16 times the highest-frequency component, so that there can be no doubt that all errors are due to quantization. The signal is quantized into 32 discrete levels -- representing a 5-bit system. We can see that, even at this low bit depth, the sampled signal gives a fair representation of the wave. Errors occur when the input signal changes but the recorded value does not.
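You can reproduce this experiment yourself; here is a rough sketch of the waveform as described -- a fundamental plus five harmonics at amplitudes 1, 1/2, . . . 1/6, quantized to 32 levels (the exact scaling and sample count here are my own assumptions, not the author's):

```python
import math

BITS = 5
LEVELS = 2 ** BITS    # 32 discrete levels
SAMPLES = 96          # 16 samples per cycle of the 6th (highest) component

def signal(t):
    """One cycle of the fundamental corresponds to t in [0, 1)."""
    return sum(math.sin(2 * math.pi * n * t) / n for n in range(1, 7))

# The peak cannot exceed the sum of the component amplitudes (~2.45),
# so use that as the quantizer's full-scale range.
full_scale = sum(1 / n for n in range(1, 7))

def quantize(x):
    """Snap x to the nearest of 32 levels spanning -full_scale..+full_scale."""
    step = 2 * full_scale / LEVELS
    return round(x / step) * step

errors = [abs(quantize(signal(i / SAMPLES)) - signal(i / SAMPLES))
          for i in range(SAMPLES)]
print(f"max quantization error: {max(errors):.3f} of a peak of {full_scale:.2f}")
```

Plotting `signal` and `quantize(signal)` together reproduces the staircase effect described below: the recorded value holds still while the input keeps moving.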

*The graph represents one cycle of the complex waveform as described. The original signal is shown in blue, the quantized signal in red. The arrows indicate some places where errors occur.*

Increasing the bit depth of a system will allow the digital data to more closely represent the analog input signal. Aside from increasing storage and processing requirements, there is no disadvantage to using a high bit depth, but for any given signal there is a point beyond which increasing the number of bits yields no increase in fidelity.

In my next article, I will cover the other component of a digital signal: its sample rate.

*. . . S. Andrea Sundaram*
andreas@soundstagenetwork.com