So BJ’s thread on LPs got seriously derailed by a half-baked argument over the (poorly defined) qualities of analogue vs digital.

In order to have a meaningful debate over the two, what’s being debated should be clearly defined. Let’s rule some issues out:

Convenience

Market relevance

Future obsolescence

Let’s also only debate the best of the two. Nobody’s arguing a freshly pressed LP played on a R1m system in a hermetically sealed environment is going to sound better than an MP3 played on a Bluetooth speaker.

Bump gloves......fight!

First off, we don’t listen to digital music on any system. What we hear is analogue. Digital simply refers to the method of storing music or sound. Some clever people figured out a way to store sound information in a digital language represented by 1s and 0s. At its simplest, 1 would go beep, and 0 would be quiet. But that’s not music, is it? Fortunately, digital language has become fantastically complex. We can use a sequence of 1s and 0s almost like geographic coordinates, to define complex information. And like geographic coordinates, the more precise you want to be, the more information you need to define each one.

This leads us to bit depth.

If I only define 1 as beep, I’m ignoring how loud that beep is. I’m going to have to come up with longer “coordinates” for my digital language, much like those long series of numbers that define geographic coordinates. So, I define my digital language as having 16 “letters”, or bits, for each coordinate. Using 1s and 0s as the “letters”, each coordinate captures the loudness (the amplitude) of the sound at one instant in time; 16 bits gives me 65,536 possible loudness levels to choose from. The tone emerges from how those levels change from one coordinate to the next. The resolution of my coordinates is thus 16 bits.
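To make the “coordinates” idea concrete, here’s a minimal sketch of 16-bit quantisation. The function names are mine, for illustration only; this isn’t how any real codec is written, but it shows what one 16-bit “coordinate” actually is: a loudness value squeezed into sixteen 1s and 0s.

```python
def quantise_16bit(amplitude: float) -> int:
    """Map an amplitude in [-1.0, 1.0] to one of 65,536 integer levels."""
    amplitude = max(-1.0, min(1.0, amplitude))  # clip out-of-range input
    return round(amplitude * 32767)             # 16 bits -> -32768..32767

def to_bits(sample: int) -> str:
    """Show the sample as the 16 'letters' (1s and 0s) actually stored."""
    return format(sample & 0xFFFF, "016b")      # two's-complement view

half_loud = quantise_16bit(0.5)                 # a beep at half volume
print(half_loud, to_bits(half_loud))            # 16384 0100000000000000
```

One “coordinate” like `0100000000000000` on its own is just a loudness; it takes tens of thousands of them per second, back to back, before a tone appears.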

This brings us to sample rate.

So, I have millions of coordinates, each defining a loudness at an instant in time. In order for the output to sound good, I need to be feeding that information to the output stage very quickly. Harry Nyquist (with Claude Shannon later formalising the theorem) figured out that since the highest frequency a human can hear is about 20 kHz, you have to take those digital “coordinates”, and produce the sound translation, at more than twice that frequency to accurately track the sound information they represent. Fortunately, 44.1 kHz is child’s play for modern semiconductors.

So, we store complex information about a sound in a mathematical language, which is then “read” at great speed by a mathematical device to produce what looks, on paper, like a jagged representation of sound (waveforms).

But that’s just a simplified description of what is actually happening inside that mathematical device, and it’s where the squared, step-like illustration of a sine wave comes from. What is actually happening inside a DAC is that the chip reads the “coordinates” ahead of the output and mathematically calculates, by interpolation, the smooth waveform that passes through that sequence of coordinates. The result is then low-pass filtered to eliminate artifacts above the audible range. The device then recreates the sound in analogue. The resulting waveform is an accurate representation of the digitally stored sound, not a staircase. This analogue output is then fed to an output stage for amplification.
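The interpolation step above can be sketched in miniature. Real DACs approximate this with oversampling and filter hardware, not literal code, but the maths they approximate is the Whittaker–Shannon reconstruction: there is exactly one band-limited waveform that passes through every stored sample, and it is smooth, not jagged. (Function names here are mine, for illustration.)

```python
import math

FS = 44_100  # samples per second

def sinc_reconstruct(samples: list[float], t: float) -> float:
    """Whittaker-Shannon reconstruction: the unique band-limited waveform
    passing through every sample -- the interpolation a DAC approximates."""
    total = 0.0
    for n, s in enumerate(samples):
        x = math.pi * (t * FS - n)
        total += s * (1.0 if x == 0 else math.sin(x) / x)  # sinc kernel
    return total

# Sample a 1 kHz tone, then rebuild the waveform BETWEEN the samples:
# the output lands back on the smooth sine, not on a staircase step.
samples = [math.sin(2 * math.pi * 1_000 * n / FS) for n in range(200)]
midpoint = 100.5 / FS                        # an instant no sample ever stored
print(sinc_reconstruct(samples, midpoint))   # close to sin(2*pi*1000*100.5/FS)
```

With only 200 samples the sum is truncated, so the result is approximate; a real reconstruction filter works over a much longer window.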

That’s my layman’s understanding of the process, so please feel free to correct me, or expound, if I’m leaving something important out.

The point is: we are dealing with high-precision mathematical devices, and maths doesn’t lie. In my opinion, the difference between an expensive DAC and a cheap DAC has nothing to do with the mathematical part of the conversion. Rather, the analogue output stage is where quality is defined.

OK, so now we’re at the heart of the matter. We are dealing with highly accurate mathematical devices, which hand over to analogue output devices. The quality of those analogue output devices is the defining factor in the quality of sound we hear.

We might as well be arguing over whether Krell is better than Mark Levinson.

Ding!

Fighters to the corners

Ding!

Round two