Web edition: Wednesday, October 19th, 2011
It turns out that the old adage about statistics and damned lies wasn't a joke. Sticks and stones may be bonebreakers, and words inflict no (physical) pain, but numbers can kill.
In 2004, for instance, a statistical analysis suggested that antidepressant drugs raised the risk of suicide in children and adolescents, leading the U.S. Food and Drug Administration to require a "black box" warning label. And guess what happened? Suicide rates among kids went up. Later studies suggested that the dramatic warning had discouraged some kids from taking the drugs they needed. Not only that, but a subsequent statistical analysis showed that the original evidence was not as conclusive as the FDA had portrayed it.
You might wonder, of course, why the statistics were sound in the subsequent study but villainous in the first one. What turns damned lies into valuable truths? In this case, confidence in the later analysis stems from its use of a different statistical philosophy, specifically the approach named for the 18th-century clergyman Thomas Bayes.
Bayes proposed a method for calculating probabilities (published in 1764, after his death) that became widely used by mathematicians for well over a century. But Bayesian statistical methods were declared numerica non grata in the early decades of the 20th century, when the now standard methods of statistical analysis were devised and then imposed on the scientific enterprise via brainwashing in graduate school. In recent years, the Bayesian approach has made a comeback, thanks largely to the availability of powerful computers capable of carrying out the often complex Bayesian calculations. But the Bayes rebirth also owes a lot to a handful of statisticians who have long trumpeted its superiority, despite scorn from the standard-statistics community, whose members are known as "frequentists."
In fact, Bayesian stats are now used without fear in many scientific arenas. "Today Bayesian methods are challenging the supremacy of the frequentist approaches in a wide array of areas of application," writes statistician Stephen Fienberg of Carnegie Mellon University in a recent paper in Statistical Science.
In guiding judgments about public policy, though, Bayesianism's influence has remained limited. Yet Bayesian methods have repeatedly proved their mettle, Fienberg says, so it's time for them to take their place at the forefront of public policy analyses.
"Bayesian approaches … are well accepted and should become the norm in public settings," he declares.
He cites several examples where policy would be properly illuminated by Bayesian math. Correcting U.S. Census undercounts would be one appropriate use, he contends. And for assessing environmental issues, such as climate change, Bayesian statistics would confer more credibility on probabilistic forecasts than standard methods do. (Some Bayesian-based methods have been applied to this question and have confirmed predictions that temperatures will keep rising. But further study with Bayesian stats should produce more precise and reliable estimates of just how hot it will get than older approaches, Fienberg argues.) And Bayesian methods could be lifesavers if applied to the FDA's policies for testing and approving new drugs.
"The Bayesian approach can provide faster and more useful clinical trial information in a wide variety of circumstances in comparison with frequentist methodology," Fienberg declares.
Bayes differs from standard stats by starting with an assumption, or estimate, or guess, of what the outcome of some study is likely to be: a "prior probability." Frequentists have long shunned such a "subjective" approach to doing science. But the prior probability need not always be completely subjective: results from a preceding study may allow a reasonably accurate estimate of what the prior probability should be for the next study. And in any case, a Bayesian analysis could always be performed multiple times with different prior probabilities factored into the calculations. That way, you could see whether disagreements about the proper prior would really matter much in the end.
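To see how much (or how little) a disputed prior can matter, here is a minimal sketch in Python. It uses a made-up Beta-Binomial example, not anything from Fienberg's paper: the same data are analyzed under a skeptical prior, a flat prior and an optimistic one, so the resulting posteriors can be compared side by side.

    # A made-up Beta-Binomial example: estimate a response rate from
    # 400 successes in 1,000 trials, repeating the analysis under three
    # different prior beliefs about that rate.
    from scipy import stats

    successes, trials = 400, 1000

    priors = {
        "skeptical  Beta(2, 8)": (2, 8),   # expects a low rate
        "flat       Beta(1, 1)": (1, 1),   # no strong opinion either way
        "optimistic Beta(8, 2)": (8, 2),   # expects a high rate
    }

    for label, (a, b) in priors.items():
        # Conjugate update: the posterior is Beta(a + successes, b + failures).
        posterior = stats.beta(a + successes, b + trials - successes)
        lo, hi = posterior.interval(0.95)
        print(f"{label}: posterior mean = {posterior.mean():.3f}, "
              f"95% interval = ({lo:.3f}, {hi:.3f})")

With 400 successes in 1,000 trials, all three posterior means land within about half a percentage point of 40 percent, which is the point of the exercise: given enough data, arguments over the prior tend to wash out.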
Sure, some people still deny that the Bayesian approach is superior. And in fact, it's correct to say that sometimes frequentist methods work fine, depending on what it is you want to know.
In that respect, the main difference between the two statistical philosophies is not really about how they are done, but what they deliver. Frequentist approaches typically involve drawing a sample, performing a test and checking the observations against a hypothesis, taking into account how likely it is that the sample you studied was representative of whatever was sampled. That method tells you how likely it was to get the result you got, given your hypothesis. Bayesian stats tell you how likely your hypothesis is, given the result you got. Most of the time, that's what you want to know.
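A toy calculation makes that distinction concrete. In the hypothetical Python snippet below (the numbers are invented, not taken from any real trial), a drug either works or it doesn't, and a trial comes back positive. The frequentist-flavored number is the chance of such a result if the drug were useless; Bayes' rule gives the chance that the drug works given the result.

    # Toy numbers, invented for illustration: a drug either works (H1)
    # or it doesn't (H0), and a clinical trial comes back "positive."
    p_h1 = 0.10                    # prior: 10% of candidate drugs actually work
    p_h0 = 1 - p_h1

    p_pos_given_h1 = 0.80          # chance of a positive trial if the drug works
    p_pos_given_h0 = 0.05          # chance of a positive trial if it doesn't

    # Frequentist-flavored quantity: how likely is this result if the drug is useless?
    print(f"P(positive result | drug doesn't work) = {p_pos_given_h0:.2f}")

    # Bayes' rule: P(H1 | positive) = P(positive | H1) * P(H1) / P(positive)
    p_pos = p_pos_given_h1 * p_h1 + p_pos_given_h0 * p_h0
    p_h1_given_pos = p_pos_given_h1 * p_h1 / p_pos
    print(f"P(drug works | positive result)        = {p_h1_given_pos:.2f}")

With these invented inputs the false-positive rate is 5 percent, yet the probability that the drug actually works, given the positive result, comes out to 64 percent: two very different numbers answering two different questions.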