biline.ca Behind the Wall - Speakers and stuff




Hi-End Flummery

I have been a fan of 'The Audio Critic' for years. Its writers are crusaders who aim simply to demystify audio technology, reducing it to measurable facts and numbers that can show real differences between the various pieces of the audio chain. The truth is sometimes harder to accept than the version generally presented by the media. Another individual who works toward the same goal, with regard to paranormal and pseudoscientific claims, is James Randi. Randi has an international reputation as a magician and escape artist, but today he is best known as the world's most tireless investigator and demystifier of paranormal and pseudoscientific claims. He has pursued "psychic" spoonbenders, exposed the dirty tricks of faith healers, investigated homeopathic water "with a memory," and generally been a thorn in the side of those who try to pull the wool over the public's eyes in the name of the supernatural. So I was surprised to find the following blurb in James Randi's weekly Commentary of July 23, 2004. Randi's website, randi.org, is a must-read.


Update: Round 2

On October 21st, round two started!
Click here to jump to that section.

This is the section of his Commentary that caught my attention:

HI-END FLUMMERY

Reader 'Andrew' writes, re what he calls, "Bad science in Stereophile Magazine":

While this is not, strictly speaking, a claim of paranormal powers, you will recognize many familiar elements. Stereophile Magazine, and similar publications, promotes various audio equipment, fancy power cords, gold speaker wire, that sort of thing. They run tests which "prove" that these are better.

The article I link to at stereophile.com is, in summary, the editors of Stereophile stating that they do not use double-blind testing because it gives them different — "wrong" — answers. ABX is the terminology used for a particular kind of double-blind audio testing, a very easy kind to do. In effect, Stereophile Magazine is "dowsing" for whatever equipment they are promoting each month. Using techniques beloved of dowsers everywhere, they always find the right (usually more expensive) equipment!

James Randi responds with the following:
Well, yes, this is a paranormal claim, Andrew, if there actually is an advantage to having speaker leads that conduct a signal because they're treated magically — the only description one can make of the "special processes" they go through. I've had run-ins with Stereophile before. Click Here (the excerpt is below). We discussed doing proper tests of their ridiculous claims for such devices as the 'Tice Clock', a simple and definitive procedure that would certainly show the truth behind the nonsense — but they opted out half-way into the discussion. I also pursued George Tice himself, and found that he kept running away from proper tests, even though I had top audio people and the very best equipment available to do the work. It was ever thus. Bold claims, then retreat. And they're never embarrassed, because they know that the suckers will continue to buy the products.

This excerpt is from randi.org
I received an interesting note from reader Bob Holmes who went through his own early epiphany and learned from it:

You may be interested in my own personal experience with a phenomenon similar to the ideomotor effect. As a youngster, I was a Hi-Fi nut — building all my own equipment. On one occasion I had built what I considered to be the ultimate preamp and decided to give it an A/B test. [Alternating between the two modes being examined.] It was absolutely amazing — as I switched back and forth between my old and new preamps I was astounded at the beauty and clarity of the new unit's sound. Imagine my chagrin and embarrassment when I discovered that I had incorrectly wired the A/B switch. It was doing absolutely nothing!

Similar to the ideomotor effect, I was hearing what I wanted to hear. I can laugh about it today, but it taught me more about human psychology than I ever learned in college. This, I believe, is the root cause of the utter tripe and nonsense one can read in Stereophile magazine today (come to think of it, wine rating probably falls into this category as well).

James Randi responds with the following:
That magazine, Stereophile, has published articles that make most pseudoscience look pale. The 'Tice Clock', a regular Radio Shack digital clock treated with liquid nitrogen and a 'secret process' to align electrons in the power supply (?), is only one of the products it tested and approved, as well as $1800 speaker cables marked with arrows to indicate in which direction the electricity should travel. But, as with all obsessions, these are items that aficionados simply must have, because they're expensive and 'in'.

You will need to set some time aside to read these links:
The Highs & Lows of Double-Blind Testing
The Truth Should Out
They contain close to 20 pages of text in which every facet of the question of A-B and A-B-X comparison tests is presented. They delve into much detail, but I think the response by The Audio Critic's Thomas A. Nousaine sums up the arguments and puts everything into perspective.

The Double-Blind Debate
Editor: Les Leventhal's 'Type 1 and Type 2 Errors in the Statistical Analysis of Listening Tests' (JAES, Vol.34 No.6) caused me to revisit experimental design and statistical analysis. Dr. Leventhal offers good basic statistical advice, but falls into a trap I've often found myself in. It's easy to forget that an experiment is valid based on its design, and that statistics only report on the reliability of the results. I also believe that the binomial parameter p, which the author uses as an analog for listener sensitivity, is quite high relative to the position established by the audiophile camp and in actual practice. This renders the author's conclusions about fairness moot.

Experiments are made valid (ie, measure what they claim to measure) by good design, not by statistical analysis. The perfect experiment would be completely free of bias, perfectly sensitive to the variable under test, and would require only one trial. However, the experimenter, after conducting such an experiment, might be uncertain that his method was perfect so he repeats it just to be sure. Then, through statistical analysis, the probability of chance results (Type 1 error) or insensitivity (Type 2 error) can be determined. Note that even with one trial the results are valid. 1000 or 1,000,000 trials more do not increase the validity of the work. However, the reliability increases with more trials as does confidence that the results are true.

The significance of statistics can be seen with the experiment that is just one hair short of perfect. Suppose there is a one-in-a-million chance that the experiment is not perfect: in a million trials, a 'false' will turn up as a 'true' one time. The experiment is conducted and a 'false' occurs. The experimenter is then killed in a freak accident before he can conduct any more trials. Here we can have a valid experiment (1/1,000,000 probability of Type 1 error) with untrue results. Statistical verification through repetition is thus really necessary, a prerequisite for valid results, but it is not the cause of those results.

Statistics can also verify biased results. A million trials of a biased test are just as invalid as one trial, but more reliable. The moral is that validity can only be determined by examining the test and its inherent characteristics. Leventhal is right by concluding that aggregation of "unfair" results is unfair, but he fails to examine the test itself for fairness. Statistics are just numbers. They are neither fair nor unfair. Numbers just don't care.

Fairness and high sensitivity are just what makes the ABX method so appealing: it contains the validity elements that constitute a fair test. Listener and administrator bias are controlled by concealing the identity of the device under test. The listener gets direct, level-controlled access to the device under test, the control device and X, with multidirectional switching and user-controlled duration. Contrast this to the open evaluation with usually no more than one or two switch trials, no controls over listener or administrator bias or level, often with references that aren't even present during the test, and no recorded numerical results or statistical analysis.
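The protocol described above can be sketched in a few lines of Python. This is my own hypothetical illustration of the procedure, not code from any actual ABX comparator:

```python
import random

def run_abx(n_trials, listener):
    """Minimal double-blind ABX loop: on each trial X is secretly A or B,
    and the listener must say which one it sounds like. The assignment is
    concealed from listener and administrator alike."""
    correct = 0
    for _ in range(n_trials):
        x_is_a = random.random() < 0.5   # hidden coin flip: is X actually A?
        says_a = listener(x_is_a)        # listener's verdict: "X sounds like A"
        correct += (says_a == x_is_a)
    return correct

# A listener who truly hears no difference can only guess...
guesser = lambda _: random.random() < 0.5
# ...while one who always hears the difference identifies X every time.
perfect = lambda x_is_a: x_is_a

print(run_abx(16, perfect))   # 16
```

Over many runs the guesser hovers around 8 of 16 correct, which is why a passing score well above half is required before a difference is accepted as real.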

Which is the most fair?
How about sensitivity? Les Leventhal makes his entire fairness case around the idea that subtle differences may only be present 60-80% of the time during the tests. When p approaches 0.9 (differences present 90% of the time), the fairness coefficient evens up and even a 16-trial test meets all criteria for both Type 1 and 2 error. Notice that probability of error is not the same as actual error. Even a perfect one-trial experiment would have an unacceptably high risk of Type 1 and 2 error. So what makes for a sensitive listening test? What actual values can we expect for p?
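Nousaine's numbers are easy to check. Following Leventhal's usage, take p as the probability of a correct response on each trial; the sketch below (my own illustration, not code from the letter) finds the passing score for a 16-trial test and the two error risks:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def abx_errors(n, p_listener, alpha=0.05):
    """Smallest passing score whose chance probability is <= alpha,
    plus the resulting Type 1 and Type 2 error probabilities."""
    k = next(k for k in range(n + 1) if binom_tail(n, k, 0.5) <= alpha)
    type1 = binom_tail(n, k, 0.5)              # passing by pure guessing
    type2 = 1 - binom_tail(n, k, p_listener)   # real difference, but test fails
    return k, type1, type2

k, t1, t2 = abx_errors(16, 0.9)
# With p = 0.9, 12 of 16 correct passes and both error risks stay under 5%.
k, t1, t2_low = abx_errors(16, 0.6)
# With p = 0.6, the same test misses a real difference over 80% of the time --
# which is exactly Leventhal's Type 2 worry about small-N tests.
```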

A casual survey of any of the underground magazines shows that audiophiles typically find it fairly easy to perceive differences. Leventhal implies that p may be a low value when there is nothing in the audiophile position to support such a notion. Read any decent "audiophile" review and draw your own conclusion as to the value of p inherent in their position.

An examination of the 16-N tests referenced by Dr. Leventhal reveals conditions indicative of high sensitivity. Clark and Greenhill auditioned the devices under test prior to the test to identify sonic characteristics. The ABX blind tests were performed using their personal reference systems, with familiar program material and at their leisure. I find it difficult to believe that this procedure might have a sensitivity of under 0.9.

A low sensitivity value of, say, 0.6 for p suggests that for every 10 trials only 6 real trials occur. Thus one must increase the sample size to add enough real trials to avoid Type 2 error. A low-sensitivity test of 16 trials is only a 10-trial test under these conditions. If the differences are only present on 60% of all the program material available, and if your material is chosen from a random sample, then the sensitivity issue might apply. However, the identification of material where differences are present is imperative for sensitive testing. It also enables us to test for differences that may only be present 10%, or even 1%, of the time. We can make these tests by selecting programs in which differences are present 100% of the time during the test. It seems to me that this is what audiophiles do, and precisely what Clark and Greenhill, Shanefield, Lipshitz and Vanderkooy, et al, do also.

For tests using listener groups it may be difficult to give all listeners completely sensitive programs. However, because the sample is now much larger, only 100 total trials are needed to reduce the risk of Type 2 error to less than 1% with a listener sensitivity of 0.7. Using 10 listeners in a 16-trial test would mean 160 total trials.
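The 100-trial figure can be verified the same way. This quick check (again treating p as the per-trial probability of a correct response, and again my own sketch) bears out the claim:

```python
from math import comb

def tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 100
# Smallest passing score that pure guessing reaches at most 5% of the time.
criterion = next(k for k in range(n + 1) if tail(n, k, 0.5) <= 0.05)
# Risk of missing a real difference when each response is 70% likely correct --
# under 1%, as the letter states.
type2 = 1 - tail(n, criterion, 0.7)
```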

I find it interesting that no one has difficulty discovering differences during subjective evaluations. However, during the open sessions I've participated in the general sensitivity level of the listeners often seems to be greater than one (p equal to or greater than 1.0). Differences abound. However, sometimes these differences mystically disappear under blind conditions. Why? It seems to me that many of them are a part of the relationship or interface between the listener and that gear. The things the listener hears are as much a part of the listener as they are a part of the equipment. Withholding the identity of the equipment breaks the bond with the listener and the differences disappear.

As an audiophile, it is important to me to know which differences are attributable to the equipment alone. Those which are part of the listener interface may not apply to me. The ABX method is the only test I am aware of that makes this important distinction. It is the only one that has both scientific validity and statistical reliability. I don't doubt that listeners and golden ears hear what they hear, but there is scant evidence that others would hear it. While the debate rages on, I will devote my energy to areas where there is no argument about the existence of major differences. Loudspeakers, anyone?—Thomas A. Nousaine, Chicago, IL

 


Round 2
It would seem this fight is far from over, as James Randi's weekly Commentary of October 21st contained these comments about our beloved Stereophile magazine:

THEY’RE STILL ON THE RUN

Do you remember the silly claims of Stereophile Magazine that prompted me to offer them a million dollars if they could prove any of the trash they were offering their readers? Well, they’re still hiding under the bed – or under that huge rock with Sylvia Browne – to avoid meeting the challenge. Just do a search on the main Swift page for “Stereophile,” to refresh your memory on that brouhaha. Well, now reader John McKillop sends us to www.stereophile.com/asweseeit/110/index.html to find an article written back in 1987 by J. Gordon Holt, the man who founded Stereophile Magazine in 1962. Holt apparently had the present management beat for brains. The article is titled, “L'Affaire Belt,” and refers to the ridiculous claims made back then by one Peter Belt, “inventor” of magical devices that improve everything from harmonics to hysterics.

I have news: Mr. Belt is still making those silly claims, and is still getting rich by selling garbage to naïve audiophiles. We must wonder, as reader McKillop does, whether Art Dudley – a willingly flummoxed reviewer for Stereophile – and/or John Atkinson, present editor of the magazine – ever read this discussion by their founder, of the hilarious Peter Belt pretensions. Go there and see a thoughtful, well-reasoned, article that handles honestly what the present Stereophile management has chosen to ignore: blatant fakery, fraud, and swindling in the audio business. I’ll quote a pertinent section from the 18-year-old article here that should – but won’t – seriously embarrass Atkinson and Dudley. Holt recognized reality, and wasn’t reluctant to share it with his readers. Unfortunately, he sold the magazine in 1982, and the woo-woos immediately took over. Here’s the 1987 excerpt:

For self-styled golden ears to be claiming, and trying, to be "objective" is to deny reality, because perception is not like instrumentation. Everything we perceive is filtered through a judgmental process which embodies all of our previous related experiences, and the resulting judgment is as much beyond conscious control as a preference for chocolate over vanilla. We cannot will ourselves to feel what we do not feel. Thus, when perceptions are so indistinct as to be wide open to interpretation, we will tend to perceive what we want to perceive or expect to perceive or have been told that we should perceive. This, I believe, explains the reports that Peter Belt's devices work as claimed.

Perhaps what bothers me so much about the Belt affair is the alacrity with which supposedly rational, technically savvy individuals have accepted, on the basis of subjective observation alone, something which all their scientific and journalistic background should tell them warrants a great deal of skepticism. But then, perhaps I shouldn't be that surprised.

Despite heroic efforts to educate our population, the US (and, apparently, the UK) has been graduating scientific illiterates for more than 40 years. And where knowledge ends, superstition begins. Without any concepts of how scientific knowledge is gleaned from intuition, hypothesis, and meticulous investigation, or what it accepts today as truth, anything is possible. Without the anchor of science, we are free to drift from one idea to another, accepting or "keeping an open mind about" as many outrageous tenets as did the "superstitious natives" we used to scorn 50 years ago. (We still do, but it's unfashionable to admit it.) Many of our beliefs are based on nothing more than a very questionable personal conviction that, because something should be true, then it must be. (Traditional religion is the best example of this.) The notion that a belief should have at least some objective support is scorned as being "closed-minded," which has become a new epithet. In order to avoid that dread appellation, we are expected to pretend to be open to the possibility that today's flight of technofantasy may prove to be tomorrow's truth, no matter how unlikely. Well, I don't buy that.

Nor do we, Mr. Holt, but the suckers still buy the garbage…. I am seldom presented with such a succinct, powerful, and to-the-point summary of what we at the JREF battle, every day. Our very own Kramer, who handles the claims for the JREF prize, has sterling expertise and experience in the audio field, as well; regarding the Stereophile matter, he offers this comment:

As a recording engineer and producer of some notoriety, I am always shocked to see the level of gullibility among those allegedly trained in the Recording Arts and Sciences, where people who call themselves "professionals" willingly jettison all reason (along with everything they have learned about the physics of sound) in blind submission to these preposterous audio pseudo-products, the belief in which I can only compare to the belief in the miraculous, more akin to crying statues, bleeding icons, and flying carpets than to anything in the world of reason. It is the stuff of pure fiction, and worse (the word “fraud” comes to mind), and the support of these products renders any publication that champions their efficacy a permanent laughing stock, which is precisely what Stereophile Magazine has become.

Well, that's not a pretty picture, is it?

 




All of the pictures and information contained within the www.biline.ca website are the property of Jeff Mathurin; please do NOT use any of the contents of this website without consent. If you would like to contact me for any reason, feel free to use the contact form by clicking here.