Fiducial inference
From Wikipedia, the free encyclopedia
Fiducial inference is one of a number of different types of statistical inference. These are rules, intended for general application, by which conclusions can be drawn from samples of data. In modern statistical practice, attempts to work with fiducial inference have fallen out of fashion in favour of frequentist inference, Bayesian inference and decision theory. However, fiducial inference is important in the history of statistics since its development led to the parallel development of concepts and tools in theoretical statistics that are widely used. Some current research in statistical methodology is either explicitly linked to fiducial inference or is closely connected to it.
The general approach of fiducial inference was proposed by Ronald Fisher.[1][2] Here "fiducial" comes from the Latin for faith. Fiducial inference can be interpreted as an attempt to perform inverse probability without calling on prior probability distributions.[3] Fiducial inference quickly attracted controversy and was never widely accepted.[4] Indeed, counter-examples to the claims of Fisher for fiducial inference were soon published.[citation needed] These counter-examples cast doubt on the coherence of "fiducial inference" as a system of statistical inference or inductive logic. Other studies showed that, where the steps of fiducial inference are said to lead to "fiducial probabilities" (or "fiducial distributions"), these probabilities lack the property of additivity, and so cannot constitute a probability measure.[citation needed]
The concept of fiducial inference can be outlined by comparing its treatment of the problem of interval estimation in relation to other modes of statistical inference.
Fisher designed the fiducial method to meet perceived problems with the Bayesian approach, at a time when the frequentist approach had yet to be fully developed. Such problems related to the need to assign a prior distribution to the unknown values. The aim was to have a procedure, like the Bayesian method, whose results could still be given an inverse probability interpretation based on the actual data observed. The method proceeds by attempting to derive a "fiducial distribution", which is a measure of the degree of faith that can be put on any given value of the unknown parameter and is faithful to the data in the sense that the method uses all available information.
Unfortunately Fisher did not give a general definition of the fiducial method and he denied that the method could always be applied.[citation needed] His only examples were for a single parameter; different generalisations have been given when there are several parameters. A relatively complete presentation of the fiducial approach to inference is given by Quenouille (1958), while Williams (1959) describes the application of fiducial analysis to the calibration problem (also known as "inverse regression") in regression analysis.[5] Further discussion of fiducial inference is given by Kendall & Stuart (1973).[6]
Fisher required the existence of a sufficient statistic for the fiducial method to apply. Suppose there is a single sufficient statistic for a single parameter. That is, suppose that the conditional distribution of the data given the statistic does not depend on the value of the parameter. For example, suppose that n independent observations are uniformly distributed on the interval [0, ω]. The maximum, X, of the n observations is a sufficient statistic for ω. If only X is recorded and the values of the remaining observations are forgotten, these remaining observations are equally likely to have had any values in the interval [0, X]. This statement does not depend on the value of ω. Then X contains all the available information about ω, and the other observations could have given no further information.
The cumulative distribution function of X is

F(x) = P(X ≤ x) = (x/ω)^n,  for 0 ≤ x ≤ ω.

Probability statements about X/ω may be made. For example, given α, a value of a can be chosen with 0 < a < 1 such that

P(X/ω > a) = 1 − a^n = α.

Thus

a = (1 − α)^(1/n).

Then Fisher might say that this statement may be inverted into the form

P(ω < X/a) = α.
In this latter statement, ω is now regarded as variable and X is fixed, whereas previously it was the other way round. This distribution of ω is the fiducial distribution which may be used to form fiducial intervals that represent degrees of belief.
The calculation is identical to the pivotal method for finding a confidence interval, but the interpretation is different. In fact older books use the terms confidence interval and fiducial interval interchangeably.[citation needed] Notice that the fiducial distribution is uniquely defined when a single sufficient statistic exists.
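The calculation above can be illustrated numerically. The following is a minimal sketch, assuming the Uniform(0, ω) example: the fiducial distribution of ω given X has CDF G(t) = 1 − (X/t)^n for t ≥ X, so its p-quantile is X/(1 − p)^(1/n). The function name and simulation settings are illustrative, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def fiducial_interval(x_max, n, alpha=0.05):
    # Equal-tailed interval from the fiducial CDF G(t) = 1 - (x_max/t)**n,
    # t >= x_max, whose p-quantile is x_max / (1 - p)**(1/n).
    lower = x_max / (1 - alpha / 2) ** (1 / n)
    upper = x_max / (alpha / 2) ** (1 / n)
    return lower, upper

# Because this interval coincides with the pivotal confidence interval,
# its long-run frequentist coverage should match the fiducial probability
# 1 - alpha (here 0.95).
omega, n, trials = 3.0, 10, 20000
hits = 0
for _ in range(trials):
    x_max = rng.uniform(0, omega, size=n).max()
    lo, hi = fiducial_interval(x_max, n)
    hits += lo <= omega <= hi
print(hits / trials)  # close to 0.95
```

The simulated coverage illustrates the point made above: the numbers agree with the pivotal confidence interval; only the interpretation (degree of belief about ω versus long-run frequency) differs.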
The pivotal method is based on a random variable that is a function of both the observations and the parameters but whose distribution does not depend on the parameter. Such random variables are called pivotal quantities. By using these, probability statements about the observations and parameters may be made in which the probabilities do not depend on the parameters and these may be inverted by solving for the parameters in much the same way as in the example above. However, this is only equivalent to the fiducial method if the pivotal quantity is uniquely defined based on a sufficient statistic.
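In the running example, X/ω is such a pivotal quantity: P(X/ω ≤ a) = a^n for 0 < a < 1, whatever the value of ω. A short illustrative check (sample sizes and seeds are arbitrary, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical CDF of the pivot X/omega at a = 0.8, for two different omega.
# Both estimates should be near the exact value 0.8**n, since the pivot's
# distribution does not depend on omega.
n, a = 10, 0.8
ecdfs = {}
for omega in (1.0, 7.5):
    x_max = rng.uniform(0, omega, size=(50000, n)).max(axis=1)
    ecdfs[omega] = np.mean(x_max / omega <= a)
print(ecdfs)  # both close to 0.8**10 ≈ 0.107
```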
A fiducial interval could be regarded as just a different name for a confidence interval, given the fiducial interpretation. But the definition might then not be unique.[citation needed] Fisher would have denied that this interpretation is correct: for him, the fiducial distribution had to be defined uniquely and it had to use all the information in the sample.[citation needed]
Fisher admitted that "fiducial inference" had problems. Fisher wrote to George A. Barnard that he was "not clear in the head" about one problem on fiducial inference,[7] and, also writing to Barnard, Fisher complained that his theory seemed to have only "an asymptotic approach to intelligibility".[7] Later Fisher confessed that "I don't understand yet what fiducial probability does. We shall have to live with it a long time before we know what it's doing for us. But it should not be ignored just because we don't yet have a clear interpretation".[7]
Dennis Lindley[8] showed that fiducial probability lacked additivity, and so was not a probability measure. Cox points out[9] that the same argument applies to the so-called "confidence distribution" associated with confidence intervals, so the conclusion to be drawn from this is moot. Fisher sketched "proofs" of results using fiducial probability. When the conclusions of Fisher's fiducial arguments are not false, many have been shown to also follow from Bayesian inference.[citation needed][6]
In 1978, J. G. Pederson wrote that "the fiducial argument has had very limited success and is now essentially dead".[10] Davison wrote "A few subsequent attempts have been made to resurrect fiducialism, but it now seems largely of historical importance, particularly in view of its restricted range of applicability when set alongside models of current interest."[11]
Fiducial inference is still being studied and its principles may be valuable for some scientific applications.[12][13]