Measurements, Listening, and What Matters in Audio

The View from the Edge

Robert E. Greene

…Most fundamentally, I think that almost everything in audio can be explained by measurements, provided one does the correct measurements sufficiently carefully. In particular, I think a very great deal can be explained simply by frequency response and the closely related matter of phase response. (These are indeed closely related: in minimum-phase devices, one determines the other.)
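That minimum-phase relationship can be checked numerically. The sketch below (my own illustration, not from the article) uses the standard cepstral reconstruction: starting from the magnitude response alone of a simple filter that is known to be minimum phase, it recovers the phase response exactly.

```python
import numpy as np

# A two-tap FIR h = [1, 0.5] has its zero at z = -0.5, inside the unit
# circle, so it is minimum phase. For such a system the magnitude
# response alone determines the phase response.
h = np.array([1.0, 0.5])
N = 1024
H = np.fft.fft(h, N)

# Cepstral reconstruction: build the phase from log|H| only by folding
# the real cepstrum into a causal (complex) cepstrum.
cep = np.fft.ifft(np.log(np.abs(H))).real
fold = np.zeros(N)
fold[0] = cep[0]
fold[1:N // 2] = 2 * cep[1:N // 2]
fold[N // 2] = cep[N // 2]
H_min = np.exp(np.fft.fft(fold))

# The phase reconstructed from magnitude alone matches the actual phase.
print(np.allclose(np.angle(H_min), np.angle(H), atol=1e-6))  # True
```

For a non-minimum-phase device (one with excess delay or all-pass behavior) the check fails, which is exactly Greene's caveat: the response pair only lock together in the minimum-phase case.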

People sometimes fail to realize how much can be explained on this basis because they do not always recall—though TAS has told them often—how tiny the threshold is for audibility of response differences: 0.1dB changes can be audibly detected. Arithmetic shows that there are vast numbers of audibly distinguishable possibilities that would seem superficially quite close to “flat.”
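The arithmetic is easy to make concrete. As a back-of-envelope sketch (the band count and tolerance window are illustrative assumptions, not figures from the article): with a 0.1 dB audible step, even responses confined to ±1 dB across roughly thirty third-octave bands admit an astronomical number of mutually distinguishable curves.

```python
# Back-of-envelope count of audibly distinct "flat" responses.
bands = 30            # roughly 1/3-octave bands across 20 Hz - 20 kHz
step_db = 0.1         # assumed just-audible step in level
window_db = 2.0       # each band allowed to wander within +/-1 dB
levels = int(window_db / step_db) + 1   # 21 distinguishable levels per band
print(f"{levels ** bands:.1e}")  # 4.6e+39 distinct curves, all "within +/-1 dB"
```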

Speakers may not need to be flat within ±0.1dB to sound “musical,” but they surely need to match with that precision to sound alike. There is a lot of room for variation in this, given that speakers are typically lucky to be ±1dB (no decimal point).

Attached to this rather abstract business is a practical matter of my own experience: I have found that almost every speaker can be improved audibly by some judicious EQ. People are reluctant, it seems, to take this up, but I have found it to be true.
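One band of parametric EQ is enough to show what "judicious" means in practice. The sketch below uses the well-known Robert Bristow-Johnson audio-EQ-cookbook peaking biquad; the center frequency, gain, and Q are arbitrary example values, not a prescription for any particular speaker.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, Q, fs):
    """RBJ audio-EQ-cookbook peaking-filter coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * Q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
# Example: fill in a measured 5 dB dip centered at 150 Hz.
b, a = peaking_eq(f0=150.0, gain_db=5.0, Q=1.4, fs=fs)
w, h = freqz(b, a, worN=[150.0], fs=fs)
print(round(20 * np.log10(abs(h[0])), 2))  # 5.0 -> exact boost at center
```

A narrow Q corrects only the measured aberration and leaves the rest of the response untouched, which is the discipline Greene's "judicious" implies.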

A second point is that the room/speaker interaction is really critical. No matter how good a speaker is anechoically, if it has a 5dB dip between 100 and 200Hz from floor interaction it is going to sound wrong. (Long experience with DSP room correction devices has shown me how often such a hole develops.) Moreover, control of room reflections in general is a crucial matter. After a visit some years ago to an RFZ studio room [a room in which the listening seat is positioned in a reflection-free zone—RH] designed by Ole Lund Christensen and Poul Ladegaard in Denmark I formulated in my own mind the slogan “acoustics is everything.” And to a surprising extent, this has been the case in my continuing experience.
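The floor-bounce dip is predictable from geometry alone: mirror the woofer below the floor, and the first cancellation falls where the reflected path runs a half wavelength longer than the direct one. The heights and distance below are assumed example values, chosen only to show how easily the null lands in the region Greene names.

```python
import math

# Assumed example geometry: woofer and ear both 1.2 m above the floor,
# listener 2.0 m from the speaker.
c = 343.0                       # speed of sound, m/s
h_spk = h_ear = 1.2
dist = 2.0

direct = math.hypot(dist, h_ear - h_spk)
# Mirror the source below the floor to get the reflected path length.
reflected = math.hypot(dist, h_ear + h_spk)
delta = reflected - direct      # extra path of the floor bounce

f_null = c / (2 * delta)        # first cancellation: half-wave path lag
print(round(f_null))            # 153 -> squarely in the 100-200 Hz zone
```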

If you get these things under control—really flat speakers in a room with which they interact correctly—it is quite startling how “good” things will sound. In particular, it is possible to get the timbre of the reproduction remarkably close to what is actually on the recording. For this, it helps to listen sitting close to the speakers and with early reflections minimized. And, of course, the speakers need to be well behaved otherwise, e.g., good suppression of audible cabinet resonances and so on.

What about space? For decades it has been fashionable to worry about "soundstage," but this has reached the point where a recording is expected to have a soundstage almost independently of what the recording actually contains; the soundstage is expected to be a property of the playback system rather than a reproduction of what is on the recording.


This idea of evaluating everything in terms of soundstage is potentially a major source of confusion. Since no one has any idea of what kind of soundstage ought to arise from most recordings, soundstage is not really a sensible criterion for evaluation of anything. Ironically, Harry Pearson, who popularized the soundstage idea initially, was firmly of the opinion that one should not use the sound off walls, and that the spatial impression that was really on the recording would be ideally correct listening out of doors. But this fundamental principle seems to have been lost.

Attached to the unstable soundstage matter is a general obsession with micro-effects, some of which may not even be detectable under blind conditions. Some of these tiny effects may be audible, but the important point is that there is seldom any mechanism for deciding whether the changes are to the good or not. If there is no way to know why some change, of a power cord, say, affected the sound, there is no way to decide whether the effect, if any, was positive or not. How could you tell? Believe the manufacturer? Believe reviewers, who have as little basis as you yourself? This is a major issue. Inferring from listening to recordings what is correct among possibilities that differ by very small amounts is a process fraught with peril.

My overwhelming experience personally is that if you get fundamentals right, all the tiny things will fade into insignificance. Tiny changes may remain audible, but they will not affect musical experience all that much. Back in Copenhagen, in Christensen’s and Ladegaard’s reflection-free-zone room, all the electronics sounded good. Various electronic devices did not become identical, but they all sounded good in any verifiable sense of the word. Electronics work; speakers in rooms usually do not work so well. But when the room and speaker thing works well, the electronic things fade in significance. When the big things are right, the small things count for little. It is as if, in practice, we worry about small things because we have not been able to get all the big things right.


Concentrate on fundamentals; that would be my suggestion. And finally, never ever forget that the recording dominates. Remember forever what Peter McGrath said in the pages of TAS a few years ago, about how a cassette recording with a good microphone setup sounded far better than an ultra-high-resolution recording of the same event with a less fortunate mike setup. Understanding that the big things count most seems to me the beginning of audio wisdom. What makes audio bad is first of all acoustically bad recordings, not the medium but the microphone pickup—there are few really good ones—and second acoustically bad playback, speakers, and rooms. The rest has mostly turned out to be not worth worrying about by comparison in my experience. Acoustics—in the sense of microphones, speakers, and their room interaction—really is almost everything!


Here is some further analysis of the above:


Listening vs. Measuring

At Benchmark, listening is the final exam that determines if a design passes from engineering to production. When all of the measurements show that a product is working flawlessly, we spend time listening for issues that may not have shown up on the test station. If we hear something, we go back and figure out how to measure what we heard. We then add this test to our arsenal of measurements.


Measurement Techniques Must be Driven by Listening Tests

Listening tests are never perfect and for this reason it is essential that we develop measurements for each artifact that we identify in a listening test. An APx555 test set has far more resolution than human hearing, but it has no intelligence. We have to tell it exactly what to measure and how to measure it. When we hear something that we cannot measure, we are just not doing the right measurements.

Listening Tests Reveal Problems but not the Root Cause

Any design process that relies solely on listening tests is doomed to fail. If we just listen, redesign, and then repeat, we fail to identify the root cause of the defect and we never approach perfection. We may arrive at a solution that just masks the artifact with another, less-objectionable artifact. On the other hand, if we focus on eliminating every artifact that we can measure, we can quickly converge on a solution that approaches sonic transparency. At Benchmark, if we can measure an artifact, we don't try to determine whether it is low enough to be inaudible; we simply try to eliminate it. This process eliminates all but the most elusive artifacts.

Audible Artifacts that Elude Traditional Measurements

To date, one of the most elusive artifacts that we have encountered is the issue of intersample overs. These are intersample peaks that exceed 0 dBFS while the sample values themselves never exceed 0 dBFS. These peaks can reach +3 dBFS and can cause DSP overloads in fixed-point PCM sigma-delta converters and sample rate converters. It is important to note that the DSP overloads are caused by the finite boundaries of the fixed-point math and not by some inherent defect in PCM or in the upsampling process.


The lesson from these examples is that we need to do a comprehensive set of tests on audio circuits before concluding that they are defect free. The circuits will cheat if they can!

If we were to run a very comprehensive set of standard audio tests on D/A converters, the best multi-bit sigma-delta DACs would measure much better than the best ladder DACs or 1-bit DSD DACs. Given a choice between DSD (a 1-bit sigma-delta DAC), non-oversampled PCM (a ladder DAC), and oversampled PCM (a multi-bit sigma-delta DAC), our measurements would clearly show that the third choice should be the most transparent. Nevertheless, an early prototype of the DAC2 failed the final exam: it failed the listening test. Our listening tests revealed the intersample-over problem described earlier. Once the root cause was identified with lab measurements, we were able to fix the prototype and add a new test to our arsenal.



What is this? You open a topic and then just paste two copy-pastes in English? Come on, say a couple of words of your own about them!

Here is mine too! It just came to me, so that we can do what Greene says above:


Look at what it says inside the box!!

