Salsburg, D. 2001. ``The Lady Tasting Tea -- How Statistics Revolutionized Science in the Twentieth Century''. Owl Book.
Chapter 12. The Confidence Trick (continuation)
Probability versus confidence level
Neyman's procedure does not break down, regardless of how
complicated the problem, which is one reason it is so
widely used in statistical analyses. Neyman's real
problem with confidence intervals was not the problem
that Fisher anticipated. It was the problem that Bowley
raised at the beginning of the discussion. What does
probability mean in this context? In his answer, Neyman
fell back on the frequentist definition of real-life
probability. As he said here, and made clearer in a later
paper on confidence intervals, the confidence interval
has to be viewed not in terms of each conclusion but
as a process. In the long run, the statistician who always
computes 95 percent confidence intervals will find that
the true value of the parameter lies within the
computed interval 95 percent of the time. Note that, to
Neyman, the probability associated with the confidence
interval was not the probability that we are correct.
It was the frequency of correct statements that a
statistician who uses his method will make in the long
run. It says nothing about how ``accurate'' the current
estimate is.
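Neyman's long-run reading of the 95 percent figure can be illustrated with a short simulation (a sketch added here for illustration, not from Salsburg's text): repeat an experiment many times, compute a 95 percent interval each time, and count how often the fixed true parameter falls inside.

```python
import random
import statistics

random.seed(1)

TRUE_MEAN = 10.0   # the fixed, unknown-in-practice parameter
N = 30             # sample size for each repeated experiment
TRIALS = 10_000    # how many times the statistician repeats the procedure
Z = 1.96           # normal quantile giving a 95 percent interval

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    half_width = Z * statistics.stdev(sample) / N ** 0.5
    # Did this particular interval happen to capture the true mean?
    if mean - half_width <= TRUE_MEAN <= mean + half_width:
        covered += 1

print(f"long-run coverage: {covered / TRIALS:.3f}")
```

The printed coverage comes out close to 0.95, which is exactly Neyman's claim: the 95 percent attaches to the procedure's long-run frequency of correct statements, while any single computed interval either contains the true value or it does not.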
In spite of the questions about the meaning of probability
in this context, Neyman's confidence bounds have become
the standard method of computing an interval estimate.
Most scientists compute 90 percent or 95 percent
confidence bounds and act as if they are sure that the
interval contains the true value of the parameter.
No one talks or writes about ``fiducial distributions''
today. The idea died with Fisher. As he tried to make
the idea work, Fisher produced a great deal of clever
and important research. Some of that research has become
mainstream; other parts remain in the incomplete state
in which he left them.