Frequentist probability
Interpretation of probability
From Wikipedia, the free encyclopedia
Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in infinitely many trials (the long-run probability).[2] Probabilities can be found (in principle) by a repeatable objective process (and are thus ideally devoid of opinion). The continued use of frequentist methods in scientific inference, however, has been called into question.[3][4][5]
The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. In the classical interpretation, probability was defined in terms of the principle of indifference, based on the natural symmetry of a problem, so, for example, the probabilities of dice games arise from the natural symmetric 6-sidedness of the cube. This classical interpretation stumbled at any statistical problem that has no natural symmetry for reasoning.
In the frequentist interpretation, probabilities are discussed only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space to be considered. For any given event, only one of two possibilities may hold: It occurs or it does not. The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event. This is the core conception of probability in the frequentist interpretation.
A claim of the frequentist approach is that, as the number of trials increases, the change in the relative frequency will diminish. Hence, one can view a probability as the limiting value of the corresponding relative frequencies.
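This stabilization of relative frequencies can be illustrated with a short simulation. The sketch below (an illustrative example, not part of the original article; the function name and seed are arbitrary choices) estimates the probability of heads for a simulated fair coin by computing the relative frequency over increasingly many trials:

```python
import random

def relative_frequency(trials: int, seed: int = 42) -> float:
    """Relative frequency of heads in `trials` simulated fair-coin flips."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

# As the number of trials grows, the relative frequency fluctuates less
# and settles near the underlying probability of 0.5.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

For a fair coin the typical deviation of the relative frequency from 0.5 shrinks on the order of 1/√n, which is the sense in which "the change in the relative frequency will diminish."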
The frequentist interpretation is a philosophical approach to the definition and use of probabilities; it is one of several such approaches. It does not claim to capture all connotations of the concept 'probable' in colloquial speech of natural languages.
As an interpretation, it is not in conflict with the mathematical axiomatization of probability theory; rather, it provides guidance for how to apply mathematical probability theory to real-world situations. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. Whether this guidance is useful, or apt to misinterpretation, has been a source of controversy, particularly when the frequency interpretation of probability is mistakenly assumed to be the only possible basis for frequentist inference. So, for example, a list of misinterpretations of the meaning of p-values accompanies the article on p-values; controversies are detailed in the article on statistical hypothesis testing. The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions about the 'statistical significance' of a result.[citation needed]
As Feller notes:[lower-alpha 1]
There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines "out of infinitely many worlds one is selected at random ..." Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.[6]
The frequentist view may have been foreshadowed by Aristotle, in Rhetoric,[7] when he wrote:
the probable is that which for the most part happens — Aristotle, Rhetoric[8]
Poisson (1837) clearly distinguished between objective and subjective probabilities.[9] Soon thereafter a flurry of nearly simultaneous publications by Mill, Ellis (1843)[10] and Ellis (1854),[11] Cournot (1843),[12] and Fries introduced the frequentist view. Venn (1866, 1876, 1888)[1] provided a thorough exposition two decades later. These were further supported by the publications of Boole and Bertrand. By the end of the 19th century the frequentist interpretation was well established and perhaps dominant in the sciences.[9] The following generation established the tools of classical inferential statistics (significance testing, hypothesis testing and confidence intervals) all based on frequentist probability.
Alternatively,[13] Bernoulli[lower-alpha 2] understood the concept of frequentist probability and published a critical proof (the weak law of large numbers) posthumously (Bernoulli, 1713).[14] He is also credited with some appreciation for subjective probability (prior to and without Bayes' theorem).[15][lower-alpha 3][16] Gauss and Laplace used frequentist (and other) probability in derivations of the least squares method a century later, a generation before Poisson.[13] Laplace considered the probabilities of testimonies, tables of mortality, judgments of tribunals, etc., which are unlikely candidates for classical probability. In this view, Poisson's contribution was his sharp criticism of the alternative "inverse" (subjective, Bayesian) probability interpretation. Any criticism by Gauss or Laplace was muted and implicit. (However, note that their later derivations of least squares did not use inverse probability.)
Major contributors to "classical" statistics in the early 20th century included Fisher, Neyman, and Pearson. Fisher contributed to most of statistics and made significance testing the core of experimental science, although he was critical of the frequentist concept of "repeated sampling from the same population";[17] Neyman formulated confidence intervals and contributed heavily to sampling theory; Neyman and Pearson paired in the creation of hypothesis testing. All valued objectivity, so the best interpretation of probability available to them was frequentist.
All were suspicious of "inverse probability" (the available alternative), with prior probabilities chosen by using the principle of indifference. Fisher said, "... the theory of inverse probability is founded upon an error, [referring to Bayes' theorem] and must be wholly rejected."[18] While Neyman was a pure frequentist,[19][lower-alpha 4] Fisher's views of probability were unique; both had nuanced views of probability. von Mises offered a combination of mathematical and philosophical support for frequentism in the era.[20][21]
According to the Oxford English Dictionary, the term frequentist was first used by M.G. Kendall in 1949, to contrast with Bayesians, whom he called non-frequentists.[22][23]
"The Frequency Theory of Probability" was used a generation earlier as a chapter title in Keynes (1921).[7]
The primary historical sources in probability and statistics did not use the current terminology of classical, subjective (Bayesian), and frequentist probability.
Probability theory is a branch of mathematics. While its roots reach centuries into the past, it reached maturity with the axioms of Andrey Kolmogorov in 1933. The theory focuses on the valid operations on probability values rather than on the initial assignment of values; the mathematics is largely independent of any interpretation of probability.
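Kolmogorov's 1933 axioms can be stated compactly. In the standard measure-theoretic presentation (the notation below follows the usual convention and is not taken from this article): for a sample space \(\Omega\), a collection of events \(\mathcal{F}\), and a probability measure \(P\),

```latex
P(E) \ge 0 \quad \text{for every event } E \in \mathcal{F}, \qquad
P(\Omega) = 1, \qquad
P\!\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i)
\quad \text{for pairwise disjoint } E_1, E_2, \ldots
```

These axioms constrain how probability values combine; they are silent on how the values are assigned, which is exactly the question the interpretations address.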
Applications and interpretations of probability are considered by philosophy, the sciences, and statistics. All are interested in the extraction of knowledge from observations: inductive reasoning. There are a variety of competing interpretations,[24] all of which have problems. The frequentist interpretation does resolve difficulties with the classical interpretation, such as any problem where the natural symmetry of outcomes is not known. It does not address other issues, such as the Dutch book argument.