Falsifiability (or refutability) is a deductive standard of evaluation of scientific theories and hypotheses, introduced by the philosopher of science Karl Popper in his book The Logic of Scientific Discovery (1934).[B] A theory or hypothesis is falsifiable if it can be logically contradicted by an empirical test.

Here are two black swans, but even with no black swans to possibly falsify it, "All swans are white" would still be shown falsifiable by "Here is a black swan"—a black swan would still be a state of affairs, only an imaginary one.[A]

Popper emphasized the asymmetry created by the relation of a universal law with basic observation statements[C] and contrasted falsifiability to the intuitively similar concept of verifiability that was then current in logical positivism. He argued that the only way to verify a claim such as "All swans are white" would be if one could theoretically observe all swans,[D] which is not possible. On the other hand, the falsifiability requirement for an anomalous instance, such as the observation of a single black swan, is theoretically reasonable and sufficient to logically falsify the claim.

Popper proposed falsifiability as the cornerstone solution to both the problem of induction and the problem of demarcation. He insisted that, as a logical criterion, his falsifiability is distinct from the related concept "capacity to be proven wrong" discussed in Lakatos's falsificationism.[E][F][G] Even though it is a logical criterion, its purpose is to make the theory predictive and testable, and thus useful in practice.

By contrast, the Duhem–Quine thesis says that definitive experimental falsifications are impossible[1] and that no scientific hypothesis is by itself capable of making predictions, because an empirical test of the hypothesis requires one or more background assumptions.[2]

Popper's response is that falsifiability does not have the Duhem problem[H] because it is a logical criterion. Experimental research has the Duhem problem and other problems, such as the problem of induction,[I] but, according to Popper, statistical tests, which are only possible when a theory is falsifiable, can still be useful within a critical discussion.

As a key notion in the separation of science from non-science and pseudoscience, falsifiability has featured prominently in many scientific controversies and applications, even being used as legal precedent.

The problem of induction and demarcation

One of the questions in the scientific method is: how does one move from observations to scientific laws? This is the problem of induction. Suppose we want to put the hypothesis that all swans are white to the test. We come across a white swan. We cannot validly argue (or induce) from "here is a white swan" to "all swans are white"; doing so would require a logical fallacy such as, for example, affirming the consequent.[3]

Popper's idea to solve this problem is that while it is impossible to verify that every swan is white, finding a single black swan shows that not every swan is white. Such falsification uses the valid inference modus tollens: if from a law L we logically deduce Q, but what is observed is ¬Q, we infer that the law is false. For example, given the statement "all swans are white", we can deduce "the specific swan here is white", but if what is observed is "the specific swan here is not white" (say black), then "all swans are white" is false. More accurately, the statement that can be deduced is broken into an initial condition and a prediction as in C → P, in which C is "the thing here is a swan" and P is "the thing here is a white swan". If what is observed is C being true while P is false (formally, C ∧ ¬P), we can infer that the law is false.
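Stated schematically, this is a minimal sketch of the swan example, with L the law, C the initial condition and P the prediction as above (the predicate names are chosen only for illustration):

```latex
% Modus tollens applied to "All swans are white" (schematic sketch).
% L: the universal law, C: the initial condition, P: the prediction.
\begin{align*}
  L &:\ \forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)\\
  C &:\ \mathrm{Swan}(a) \quad\text{(the thing here is a swan)}\\
  P &:\ \mathrm{Swan}(a) \wedge \mathrm{White}(a) \quad\text{(the thing here is a white swan)}\\
  &\text{From } L \text{ we deduce } C \rightarrow P;\ \text{observing } C \wedge \neg P
    \text{ contradicts } C \rightarrow P,\ \text{hence } \neg L.
\end{align*}
```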

For Popper, induction is actually never needed in science.[J][K] Instead, in Popper's view, laws are conjectured in a non-logical manner on the basis of expectations and predispositions.[4] This has led David Miller, a student and collaborator of Popper, to write "the mission is to classify truths, not to certify them".[5] In contrast, the logical empiricism movement, which included such philosophers as Moritz Schlick, Rudolf Carnap, Otto Neurath, and A. J. Ayer wanted to formalize the idea that, for a law to be scientific, it must be possible to argue on the basis of observations either in favor of its truth or its falsity. There was no consensus among these philosophers about how to achieve that, but the thought expressed by Mach's dictum that "where neither confirmation nor refutation is possible, science is not concerned" was accepted as a basic precept of critical reflection about science.[6][7][8]

Popper said that a demarcation criterion was possible, but we have to use the logical possibility of falsifications, which is falsifiability. He cited his encounter with psychoanalysis in the 1910s. No matter what observation was presented, psychoanalysis could explain it. Unfortunately, the reason it could explain everything was that it excluded nothing.[L] For Popper, this was a failure, because it meant that it could not make any prediction. From a logical standpoint, if one finds an observation that does not contradict a law, it does not mean that the law is true. A verification has no value in itself. But, if the law makes risky predictions and these are corroborated, Popper says, there is a reason to prefer this law over another law that makes less risky predictions or no predictions at all.[M][N] In the definition of falsifiability, contradictions with observations are not used to support eventual falsifications, but for logical "falsifications" that show that the law makes risky predictions, which is completely different.

On the basic philosophical side of this issue, Popper said that some philosophers of the Vienna Circle had mixed two different problems, that of meaning and that of demarcation, and had proposed in verificationism a single solution to both: a statement that could not be verified was considered meaningless. In opposition to this view, Popper said that there are meaningful theories that are not scientific, and that, accordingly, a criterion of meaningfulness does not coincide with a criterion of demarcation.[O]

From Hume's problem to non-problematic induction

The problem of induction is often called Hume's problem. David Hume studied how human beings obtain new knowledge that goes beyond known laws and observations, including how we can discover new laws. He understood that deductive logic could not explain this learning process and argued in favour of a mental or psychological process of learning that would not require deductive logic. He even argued that this learning process cannot be justified by any general rules, deductive or not.[9] Popper accepted Hume's argument and therefore viewed progress in science as the result of quasi-induction, which does the same as induction, but has no inference rules to justify it.[10][11] Philip N. Johnson-Laird, professor of psychology, also accepted Hume's conclusion that induction has no justification. For him induction does not require justification and therefore can exist in the same manner as Popper's quasi-induction does.[12]

When Johnson-Laird says that no justification is needed, he does not refer to a general inductive method of justification that, to avoid circular reasoning, would not itself require any justification. On the contrary, in agreement with Hume, he means that there is no general method of justification for induction and that this is acceptable, because the induction steps do not require justification.[12] Instead, these steps use patterns of induction, which are not expected to have a general justification: they may or may not be applicable depending on the background knowledge. Johnson-Laird wrote: "[P]hilosophers have worried about which properties of objects warrant inductive inferences. The answer rests on knowledge: we don't infer that all the passengers on a plane are male because the first ten off the plane are men. We know that this observation doesn't rule out the possibility of a woman passenger."[12] The reasoning pattern that was not applied here is enumerative induction.

Popper was interested in the overall learning process in science, that is, in quasi-induction, which he also called the "path of science".[10] However, Popper did not show much interest in these reasoning patterns, which he globally referred to as psychologism.[13] He did not deny the possibility of some kind of psychological explanation for the learning process, especially when psychology is seen as an extension of biology, but he felt that these biological explanations were not within the scope of epistemology.[P][Q] Popper proposed an evolutionary mechanism to explain the success of science,[14] which is much in line with Johnson-Laird's view that "induction is just something that animals, including human beings, do to make life possible",[12] but Popper did not consider it a part of his epistemology.[15] He wrote that his interest was mainly in the logic of science and that epistemology should be concerned with logical aspects only.[R] Instead of asking why science succeeds, he considered the pragmatic problem of induction.[16] This problem is not how to justify a theory or what the global mechanism for the success of science is, but only which methodology we use to pick one theory among those that are already conjectured. His methodological answer to the latter question is that we pick the theory that is the most tested with the available technology: "the one, which in the light of our critical discussion, appears to be the best so far".[16] By his own account, because only a negative approach was supported by logic, Popper adopted a negative methodology.[S] The purpose of his methodology is to prevent "the policy of immunizing our theories against refutation". It also supports some "dogmatic attitude" in defending theories against criticism, because this allows the process to be more complete.[17] This negative view of science was much criticized, and not only by Johnson-Laird.

In practice, some steps based on observations can be justified under assumptions, which can be very natural. For example, Bayesian inductive logic[18] is justified by theorems that make explicit assumptions. These theorems are obtained with deductive logic, not inductive logic. They are sometimes presented as steps of induction, because they refer to laws of probability, even though they do not go beyond deductive logic. This is yet a third notion of induction, which overlaps with deductive logic in the sense that it is supported by it. These deductive steps are not really inductive, but the overall process that includes the creation of assumptions is inductive in the usual sense. In a fallibilist perspective, a perspective that is widely accepted by philosophers, including Popper,[19] every logical step of learning only creates an assumption or reinstates one that was doubted—that is all that science logically does.
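As a rough illustration of the point that such a step is a deductive application of Bayes' theorem once the assumptions are fixed, here is a minimal sketch; the hypothesis, prior and likelihoods are invented for the example and are not from Popper's or Carnap's texts:

```python
# A Bayesian update is a deductive application of Bayes' theorem once the
# assumptions are fixed. The prior and likelihoods below are illustrative only.

def bayes_update(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """Return P(H | observation) from P(H), P(obs | H) and P(obs | not H)."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1.0 - prior)
    return p_obs_given_h * prior / evidence

# Hypothesis H: "all swans in this region are white" (assumed prior 0.5).
# Observation: one more white swan; assumed likelihoods under H and not-H.
posterior = bayes_update(prior=0.5, p_obs_given_h=1.0, p_obs_given_not_h=0.9)
print(f"posterior after one white swan: {posterior:.3f}")  # about 0.526
```

The inductive character lies in choosing the prior and the likelihoods, not in the calculation itself, which is purely deductive.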

The elusive distinction between the logic of science and its applied methodology

Popper distinguished between the logic of science and its applied methodology.[E] For example, the falsifiability of Newton's law of gravitation, as defined by Popper, depends purely on the logical relation it has with a statement such as "The brick fell upwards when released".[20][T] A brick that falls upwards would not alone falsify Newton's law of gravitation. The capacity to verify the absence of conditions such as a hidden string[U] attached to the brick is also needed for this state of affairs[A] to eventually falsify Newton's law of gravitation. However, these applied methodological considerations are irrelevant in falsifiability, because it is a logical criterion. The empirical requirement on the potential falsifier, also called the material requirement,[V] is only that it is observable inter-subjectively with existing technologies. There is no requirement that the potential falsifier can actually show the law to be false. The purely logical contradiction, together with the material requirement, is sufficient. The logical part consists of theories, statements, and their purely logical relationship together with this material requirement, which is needed for a connection with the methodological part.

The methodological part consists, in Popper's view, of informal rules, which are used to guess theories, accept observation statements as factual, etc. These include statistical tests: Popper is aware that observation statements are accepted with the help of statistical methods and that these involve methodological decisions.[21] When this distinction is applied to the term "falsifiability", it corresponds to a distinction between two completely different meanings of the term. The same is true for the term "falsifiable". Popper said that he only uses "falsifiability" or "falsifiable" in reference to the logical side and that, when he refers to the methodological side, he speaks instead of "falsification" and its problems.[F]

Popper said that methodological problems require proposing methodological rules. For example, one such rule is that, if one refuses to go along with falsifications, then one has retired oneself from the game of science.[22] The logical side does not have such methodological problems, in particular with regard to the falsifiability of a theory, because basic statements are not required to be possible. Methodological rules are only needed in the context of actual falsifications.

So observations have two purposes in Popper's view. On the methodological side, observations can be used to show that a law is false, which Popper calls falsification. On the logical side, observations, which are purely logical constructions, do not show a law to be false, but contradict a law to show its falsifiability. Unlike falsifications and free from the problems of falsification, these contradictions establish the value of the law, which may eventually be corroborated.

Popper wrote that an entire literature exists because this distinction between the logical aspect and the methodological aspect was not observed.[G] This is still seen in more recent literature. For example, in their 2019 article Evidence based medicine as science, Vere and Gibson wrote "[falsifiability has] been considered problematic because theories are not simply tested through falsification but in conjunction with auxiliary assumptions and background knowledge."[23]

Basic statements and the definition of falsifiability

Basic statements

In Popper's view of science, statements of observation can be analyzed within a logical structure independently of any factual observations.[W][X] The set of all purely logical observations that are considered constitutes the empirical basis. Popper calls them the basic statements or test statements. They are the statements that can be used to show the falsifiability of a theory. Popper says that basic statements do not have to be possible in practice. It is sufficient that they are accepted by convention as belonging to the empirical language, a language that allows intersubjective verifiability: "they must be testable by intersubjective observation (the material requirement)".[24][Y] See the examples in section § Examples of demarcation and applications.

In more than twelve pages of The Logic of Scientific Discovery,[25] Popper discusses informally which statements among those that are considered in the logical structure are basic statements. A logical structure uses universal classes to define laws. For example, in the law "all swans are white" the concept of swans is a universal class. It corresponds to a set of properties that every swan must have. It is not restricted to the swans that exist, existed or will exist. Informally, a basic statement is simply a statement that concerns only a finite number of specific instances in universal classes. In particular, an existential statement such as "there exists a black swan" is not a basic statement, because it is not specific about the instance. On the other hand, "this swan here is black" is a basic statement. Popper says that it is a singular existential statement or simply a singular statement. So, basic statements are singular (existential) statements.

The definition of falsifiability

Thornton says that basic statements are statements that correspond to particular "observation-reports". He then gives Popper's definition of falsifiability:

"A theory is scientific if and only if it divides the class of basic statements into the following two non-empty sub-classes: (a) the class of all those basic statements with which it is inconsistent, or which it prohibits—this is the class of its potential falsifiers (i.e., those statements which, if true, falsify the whole theory), and (b) the class of those basic statements with which it is consistent, or which it permits (i.e., those statements which, if true, corroborate it, or bear it out)."

Thornton, Stephen, Thornton 2016, at the end of section 3
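Read extensionally, the definition only requires that both sub-classes be non-empty. A toy sketch of that reading, with a hand-picked finite set of basic statements standing in for the empirical basis (an illustrative simplification, not Popper's formal apparatus), might look like this:

```python
# Toy sketch of the definition: in this simplified model, a theory is falsifiable
# iff it divides the basic statements into two non-empty classes.
# The basic statements and the theory below are illustrative assumptions.

basic_statements = ["this swan here is white", "this swan here is black"]

def permits(statement: str) -> bool:
    """True if the theory 'all swans are white' is consistent with the statement."""
    return "black" not in statement

potential_falsifiers = [s for s in basic_statements if not permits(s)]
permitted_statements = [s for s in basic_statements if permits(s)]

# Falsifiable iff both classes are non-empty, as in the definition quoted above.
print(bool(potential_falsifiers) and bool(permitted_statements))  # True
```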

As in the case of actual falsifiers, decisions must be taken by scientists to accept a logical structure and its associated empirical basis, but these are usually part of a background knowledge that scientists have in common and, often, no discussion is even necessary.[Z] The first decision described by Lakatos[26] is implicit in this agreement, but the other decisions are not needed. This agreement, if one can speak of agreement when there is not even a discussion, exists only in principle. This is where the distinction between the logical and methodological sides of science becomes important. When an actual falsifier is proposed, the technology used is considered in detail and, as described in section § Dogmatic falsificationism, an actual agreement is needed. This may require using a deeper empirical basis,[AA] hidden within the current empirical basis, to make sure that the properties or values used in the falsifier were obtained correctly (Andersson 2016 gives some examples).

Popper says that despite the fact that the empirical basis can be shaky, more comparable to a swamp than to solid ground,[AA] the definition that is given above is simply the formalization of a natural requirement on scientific theories, without which the whole logical process of science[W] would not be possible.

Initial condition and prediction in falsifiers of laws

In his analysis of the scientific nature of universal laws, Popper arrived at the conclusion that laws must "allow us to deduce, roughly speaking, more empirical singular statements than we can deduce from the initial conditions alone."[27] A singular statement that has one part only cannot contradict a universal law. A falsifier of a law always has two parts: the initial condition and the singular statement that contradicts the prediction.

However, there is no need to require that falsifiers have two parts in the definition itself. This removes the requirement that a falsifiable statement must make predictions. In this way, the definition is more general and allows the basic statements themselves to be falsifiable.[27] Criteria that require that a law must be predictive, just as is required by falsifiability (when applied to laws), Popper wrote, "have been put forward as criteria of the meaningfulness of sentences (rather than as criteria of demarcation applicable to theoretical systems) again and again after the publication of my book, even by critics who pooh-poohed my criterion of falsifiability."[28]

Falsifiability in model theory

Scientists such as the Nobel laureate Herbert A. Simon have studied the semantic aspects of the logical side of falsifiability.[29][30] These studies were done in the perspective that a logic is a relation between formal sentences in languages and a collection of mathematical structures. The relation, usually denoted A ⊨ φ, says that the formal sentence φ is true when interpreted in the structure A—it provides the semantics of the languages.[AB] According to Rynasiewicz, in this semantic perspective, falsifiability as defined by Popper means that in some observation structure (in the collection) there exists a set of observations which refutes the theory.[31] An even stronger notion of falsifiability was considered, which requires, not only that there exists one structure with a contradicting set of observations, but also that all structures in the collection that cannot be expanded to a structure that satisfies the theory contain such a contradicting set of observations.[31]
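Under these conventions (T for the theory, A for an observation structure in the collection, O for a finite set of observation sentences true in A), the two notions can be paraphrased roughly as follows; this is a schematic rendering, not Rynasiewicz's exact formulation:

```latex
% Schematic paraphrase of the two model-theoretic notions (not the exact formulation).
% T: the theory, A: an observation structure in the collection,
% O: a finite set of observation sentences true in A.
\begin{align*}
  \text{(falsifiability)}\quad
    &\exists A\, \exists O\ \bigl(A \models O \ \text{and}\ O \cup T \text{ is inconsistent}\bigr)\\[2pt]
  \text{(stronger notion)}\quad
    &\forall A\ \bigl(A \text{ has no expansion satisfying } T
      \;\Rightarrow\; \exists O\ (A \models O \ \text{and}\ O \cup T \text{ is inconsistent})\bigr)
\end{align*}
```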

Examples of demarcation and applications

Newton's theory

In response to Lakatos who suggested that Newton's theory was as hard to show falsifiable as Freud's psychoanalytic theory, Popper gave the example of an apple that moves from the ground up to a branch and then starts to dance from one branch to another.[T] Popper thought that it was a basic statement that was a potential falsifier for Newton's theory, because the position of the apple at different times can be measured. Popper's claims on this point are controversial, since Newtonian physics does not deny that there could be forces acting on the apple that are stronger than Earth's gravity.

Einstein's equivalence principle

Another example of a basic statement is "The inertial mass of this object is ten times larger than its gravitational mass." This is a basic statement because the inertial mass and the gravitational mass can both be measured separately, even though it never happens that they are different. It is, as described by Popper, a valid falsifier for Einstein's equivalence principle.[AC]

Evolution

Industrial melanism

A black-bodied and white-bodied peppered moth

In a discussion of the theory of evolution, Popper mentioned industrial melanism[32] as an example of a falsifiable law. A corresponding basic statement that acts as a potential falsifier is "In this industrial area, the relative fitness of the white-bodied peppered moth is high." Here "fitness" means "reproductive success over the next generation".[AD][AE] It is a basic statement, because it is possible to separately determine the kind of environment, industrial vs natural, and the relative fitness of the white-bodied form (relative to the black-bodied form) in an area, even though it never happens that the white-bodied form has a high relative fitness in an industrial area.

Precambrian rabbit

A famous example of a basic statement from J. B. S. Haldane is "[These are] fossil rabbits in the Precambrian era." This is a basic statement because it is possible to find a fossil rabbit and to determine that the date of a fossil is in the Precambrian era, even though it never happens that the date of a rabbit fossil is in the Precambrian era. Despite opinions to the contrary,[33] sometimes wrongly attributed to Popper,[AF] this shows the scientific character of paleontology or the history of the evolution of life on Earth, because it contradicts the hypothesis in paleontology that all mammals existed in a much more recent era. Richard Dawkins adds that any other modern animal, such as a hippo, would suffice.[34][35][36]

Simple examples of unfalsifiable statements

Even if it is accepted that angels exist, "All angels have large wings" is not falsifiable, because no technology exists to identify and observe angels.

A simple example of a non-basic statement is "This angel does not have large wings." It is not a basic statement, because though the absence of large wings can be observed, no technology (independent of the presence of wings[AG]) exists to identify angels. Even if it is accepted that angels exist, the sentence "All angels have large wings" is not falsifiable.

Another example from Popper of a non-basic statement is "This human action is altruistic." It is not a basic statement, because no accepted technology allows us to determine whether or not an action is motivated by self-interest. Because no basic statement falsifies it, the statement that "All human actions are egotistic, motivated by self-interest" is thus not falsifiable.[AH]

Omphalos hypothesis

Some adherents of young-Earth creationism make an argument (called the Omphalos hypothesis after the Greek word for navel) that the world was created with the appearance of age; e.g., the sudden appearance of a mature chicken capable of laying eggs. This ad hoc hypothesis introduced into young-Earth creationism is unfalsifiable because it says that the time of creation (of a species) measured by the accepted technology is illusory and no accepted technology is proposed to measure the claimed "actual" time of creation. Moreover, if the ad hoc hypothesis says that the world was created as we observe it today without stating further laws, by definition it cannot be contradicted by observations and thus is not falsifiable. This is discussed by Dienes in the case of a variation on the Omphalos hypothesis, which, in addition, specifies that God made the creation in this way to test our faith.[37]

Useful metaphysical statements

Grover Maxwell [es] discussed statements such as "All men are mortal."[38] This is not falsifiable, because no matter how old a man is, he may still die next year.[39] Maxwell said that this statement is nevertheless useful, because it is often corroborated. He coined the term "corroboration without demarcation". Popper's view is that it is indeed useful, because Popper considers that metaphysical statements can be useful, but also because it is indirectly corroborated by the corroboration of the falsifiable law "All men die before the age of 150." For Popper, if no such falsifiable law exists, then the metaphysical law is less useful, because it is not indirectly corroborated.[AI] This kind of non-falsifiable statement in science was noticed by Carnap as early as 1937.[40]

Clyde Cowan conducting the neutrino experiment (c.1956)

Maxwell also used the example "All solids have a melting point." This is not falsifiable, because maybe the melting point will be reached at a higher temperature.[38][39] The law is falsifiable and more useful if we specify an upper bound on melting points or a way to calculate this upper bound.[AJ]

Another example from Maxwell is "All beta decays are accompanied with a neutrino emission from the same nucleus."[41] This is also not falsifiable, because maybe the neutrino can be detected in a different manner. The law is falsifiable and much more useful from a scientific point of view, if the method to detect the neutrino is specified.[42] Maxwell said that most scientific laws are metaphysical statements of this kind,[43] which, Popper said, need to be made more precise before they can be indirectly corroborated.[AI] In other words, specific technologies must be provided to make the statements inter-subjectively-verifiable, i.e., so that scientists know what the falsification or its failure actually means.

In his critique of the falsifiability criterion, Maxwell considered the requirement for decisions in the falsification of both the emission of neutrinos (see § Dogmatic falsificationism) and the existence of the melting point.[41] For example, he pointed out that had no neutrino been detected, it could have been because some conservation law is false. Popper did not argue against the problems of falsification per se. He always acknowledged these problems. Popper's response was at the logical level. For example, he pointed out that, if a specific way is given to trap the neutrino, then, at the level of the language, the statement is falsifiable, because "no neutrino was detected after using this specific way" formally contradicts it (and it is inter-subjectively-verifiable—people can repeat the experiment).

Natural selection

In the 5th and 6th editions of On the Origin of Species, following a suggestion of Alfred Russel Wallace, Darwin used "Survival of the fittest", an expression first coined by Herbert Spencer, as a synonym for "Natural Selection".[AK] Popper and others said that, if one uses the most widely accepted definition of "fitness" in modern biology (see subsection § Evolution), namely reproductive success itself, the expression "survival of the fittest" is a tautology.[AL][AM][AN]

Darwinist Ronald Fisher worked out mathematical theorems to help answer questions regarding natural selection. But, for Popper and others, there is no (falsifiable) law of Natural Selection in this, because these tools only apply to some rare traits.[AO][AP] Instead, for Popper, the work of Fisher and others on Natural Selection is part of an important and successful metaphysical research program.[44]

Mathematics

Popper said that not all unfalsifiable statements are useless in science. Mathematical statements are good examples. Like all formal sciences, mathematics is not concerned with the validity of theories based on observations in the empirical world, but rather, mathematics is occupied with the theoretical, abstract study of such topics as quantity, structure, space and change. Methods of the mathematical sciences are, however, applied in constructing and testing scientific models dealing with observable reality. Albert Einstein wrote, "One reason why mathematics enjoys special esteem, above all other sciences, is that its laws are absolutely certain and indisputable, while those of other sciences are to some extent debatable and in constant danger of being overthrown by newly discovered facts."[45]

Historicism

Popper made a clear distinction between the original theory of Marx and what came to be known as Marxism later on.[46] For Popper, the original theory of Marx contained genuine scientific laws. Though they could not make preordained predictions, these laws constrained how changes can occur in society. One of them was that changes in society cannot "be achieved by the use of legal or political means".[AQ] In Popper's view, this was both testable and subsequently falsified. "Yet instead of accepting the refutations", Popper wrote, "the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. ... They thus gave a 'conventionalist twist' to the theory; and by this stratagem, they destroyed its much advertised claim to scientific status."[AR][AS] Popper's attacks were not directed toward Marxism, or Marx's theories, which were falsifiable, but toward Marxists who he considered to have ignored the falsifications which had happened.[47] Popper more fundamentally criticized 'historicism' in the sense of any preordained prediction of history, given what he saw as our right, ability and responsibility to control our own destiny.[47]

Use in courts of law

Falsifiability has been used in the McLean v. Arkansas case (in 1982),[48] the Daubert case (in 1993)[49] and other cases. A survey of 303 federal judges conducted in 1998[AT] found that "[P]roblems with the nonfalsifiable nature of an expert's underlying theory and difficulties with an unknown or too-large error rate were cited in less than 2% of cases."[50]

McLean v. Arkansas case

In the ruling of the McLean v. Arkansas case, Judge William Overton used falsifiability as one of the criteria to determine that "creation science" was not scientific and should not be taught in Arkansas public schools as such (it can be taught as religion). In his testimony, philosopher Michael Ruse defined the characteristics which constitute science as (see Pennock 2000, p. 5, and Ruse 2010):

  • It is guided by natural law;
  • It has to be explanatory by reference to natural law;
  • It is testable against the empirical world;
  • Its conclusions are tentative, i.e., are not necessarily the final word; and
  • It is falsifiable.

In his conclusion related to this criterion Judge Overton stated that:

While anybody is free to approach a scientific inquiry in any fashion they choose, they cannot properly describe the methodology as scientific, if they start with the conclusion and refuse to change it regardless of the evidence developed during the course of the investigation.

William Overton, McLean v. Arkansas 1982, at the end of section IV. (C)

Daubert standard

In several cases of the United States Supreme Court, the court described scientific methodology using the five Daubert factors, which include falsifiability.[AU] The Daubert result cited Popper and other philosophers of science:

Ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge that will assist the trier of fact will be whether it can be (and has been) tested. Scientific methodology today is based on generating hypotheses and testing them to see if they can be falsified; indeed, this methodology is what distinguishes science from other fields of human inquiry. Green 645. See also C. Hempel, Philosophy of Natural Science 49 (1966) ([T]he statements constituting a scientific explanation must be capable of empirical test); K. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge 37 (5th ed. 1989) ([T]he criterion of the scientific status of a theory is its falsifiability, or refutability, or testability) (emphasis deleted).

Harry Blackmun, Daubert 1993, p. 593

David H. Kaye[AV] said that references to the Daubert majority opinion confused falsifiability and falsification and that "inquiring into the existence of meaningful attempts at falsification is an appropriate and crucial consideration in admissibility determinations."[AW]

Connections between statistical theories and falsifiability

Considering the specific detection procedure that was used in the neutrino experiment, without mentioning its probabilistic aspect, Popper wrote "it provided a test of the much more significant falsifiable theory that such emitted neutrinos could be trapped in a certain way". In this manner, in his discussion of the neutrino experiment, Popper did not raise the probabilistic aspect of the experiment at all.[42] Together with Maxwell, who raised the problems of falsification in the experiment,[41] he was aware that some convention must be adopted to fix what it means to detect or not a neutrino in this probabilistic context. This is the third kind of decision mentioned by Lakatos.[51] For Popper and most philosophers, observations are theory-impregnated. In this example, the theory that impregnates observations (and justifies that we conventionally accept the potential falsifier "no neutrino was detected") is statistical. In statistical language, the potential falsifier that can be statistically accepted (more correctly, not rejected) is typically the null hypothesis, as understood even in popular accounts on falsifiability.[52][53][54]
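As a rough sketch of how such a convention can work in practice (the counts, background rate and significance threshold below are invented for illustration and do not describe the Cowan–Reines experiment), the decision to accept or reject the potential falsifier can be pictured as an ordinary significance test:

```python
# Rough sketch: accepting or rejecting the potential falsifier "no neutrino was
# detected" by a conventional statistical rule. The counts, background rate and
# significance threshold are invented for illustration.
from math import sqrt

background_rate = 100.0   # assumed expected background counts
observed_counts = 160     # assumed measured counts
threshold_sigma = 5.0     # conventional significance threshold

# Simple Poisson-based excess significance (a deliberate simplification).
excess_sigma = (observed_counts - background_rate) / sqrt(background_rate)

if excess_sigma >= threshold_sigma:
    print("Reject the null hypothesis: a detection is accepted by convention.")
else:
    print("Do not reject the null hypothesis: the potential falsifier stands.")
```

The choice of the threshold is exactly the kind of methodological decision, adopted by convention, that the text describes.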

Different ways are used by statisticians to draw conclusions about hypotheses on the basis of available evidence. Fisher, Neyman and Pearson proposed approaches that require no prior probabilities on the hypotheses that are being studied. In contrast, Bayesian inference emphasizes the importance of prior probabilities.[55] But, as far as falsification as a yes/no procedure in Popper's methodology is concerned, any approach that provides a way to accept or not a potential falsifier can be used, including approaches that use Bayes' theorem and estimations of prior probabilities that are made using critical discussions and reasonable assumptions taken from the background knowledge.[AX] There is no general rule that considers as falsified a hypothesis with small Bayesian revised probability, because, as pointed out by Mayo and argued before by Popper, the individual outcomes described in detail will easily have very small probabilities under available evidence without being genuine anomalies.[56] Nevertheless, Mayo adds, "they can indirectly falsify hypotheses by adding a methodological falsification rule".[56] In general, Bayesian statistics can play a role in critical rationalism in the context of inductive logic,[57] which is said to be inductive because implications are generalized to conditional probabilities.[58] According to Popper and other philosophers such as Colin Howson, Hume's argument precludes inductive logic, but only when the logic makes no use "of additional assumptions: in particular, about what is to be assigned positive prior probability".[59] Inductive logic itself is not precluded, especially not when it is a deductively valid application of Bayes' theorem that is used to evaluate the probabilities of the hypotheses using the observed data and what is assumed about the priors. Gelman and Shalizi mentioned that Bayesian statisticians do not have to disagree with the non-inductivists.[60]

Because statisticians often associate statistical inference with induction, Popper's philosophy is often said to have a hidden form of induction. For example, Mayo wrote "The falsifying hypotheses ... necessitate an evidence-transcending (inductive) statistical inference. This is hugely problematic for Popper".[61] Yet, also according to Mayo, Popper [as a non-inductivist] acknowledged the useful role of statistical inference in the falsification problems: she mentioned that Popper wrote her (in the context of falsification based on evidence) "I regret not studying statistics" and that her thought was then "not as much as I do".[62]

Lakatos's falsificationism

Imre Lakatos divided the problems of falsification into two categories. The first category corresponds to decisions that must be agreed upon by scientists before they can falsify a theory. The other category emerges when one tries to use falsifications and corroborations to explain progress in science. Lakatos described four kinds of falsificationism in view of how they address these problems. Dogmatic falsificationism ignores both types of problems. Methodological falsificationism addresses the first type of problems by accepting that decisions must be taken by scientists. Naive methodological falsificationism or naive falsificationism does not do anything to address the second type of problems.[63][64] Lakatos used dogmatic and naive falsificationism to explain how Popper's philosophy changed over time and viewed sophisticated falsificationism as his own improvement on Popper's philosophy, but also said that Popper sometimes appears as a sophisticated falsificationist.[65] Popper responded that Lakatos misrepresented his intellectual history with these terminological distinctions.[66]

Dogmatic falsificationism

A dogmatic falsificationist ignores that every observation is theory-impregnated. Being theory-impregnated means that it goes beyond direct experience. For example, the statement "Here is a glass of water" goes beyond experience, because the concepts of glass and water "denote physical bodies which exhibit a certain law-like behaviour" (Popper).[67] This leads to the critique that it is unclear which theory is falsified. Is it the one that is being studied or the one behind the observation?[AY] This is sometimes called the 'Duhem–Quine problem'. An example is Galileo's refutation of the theory that celestial bodies are faultless crystal balls. Many considered that it was the optical theory of the telescope that was false, not the theory of celestial bodies. Another example is the theory that neutrinos are emitted in beta decays. Had they not been observed in the Cowan–Reines neutrino experiment, many would have considered that the strength of the beta-inverse reaction used to detect the neutrinos was not sufficiently high. At the time, Grover Maxwell [es] wrote, the possibility that this strength was sufficiently high was a "pious hope".[41]

A dogmatic falsificationist ignores the role of auxiliary hypotheses. The assumptions or auxiliary hypotheses of a particular test are all the hypotheses that are assumed to be accurate in order for the test to work as planned.[68] The predicted observation that is contradicted depends on the theory and these auxiliary hypotheses. Again, this leads to the critique that one cannot tell whether it is the theory or one of the required auxiliary hypotheses that is false. Lakatos gives the example of the path of a planet. If the path contradicts Newton's law, we will not know if it is Newton's law that is false or the assumption that no other body influenced the path.

Lakatos says that Popper's solution to these criticisms requires that one relaxes the assumption that an observation can show a theory to be false:[F]

If a theory is falsified [in the usual sense], it is proven false; if it is 'falsified' [in the technical sense], it may still be true.

Imre Lakatos, Lakatos 1978, p. 24

Methodological falsificationism replaces the contradicting observation in a falsification with a "contradicting observation" accepted by convention among scientists, a convention that implies four kinds of decisions that have these respective goals: the selection of all basic statements (statements that correspond to logically possible observations), selection of the accepted basic statements among the basic statements, making statistical laws falsifiable and applying the refutation to the specific theory (instead of an auxiliary hypothesis).[AZ] The experimental falsifiers and falsifications thus depend on decisions made by scientists in view of the currently accepted technology and its associated theory.

Naive falsificationism

According to Lakatos, naive falsificationism is the claim that methodological falsifications can by themselves explain how scientific knowledge progresses. Very often a theory is still useful and used even after it is found in contradiction with some observations. Also, when scientists deal with two or more competing theories which are both corroborated, considering only falsifications, it is not clear why one theory is chosen above the other, even when one is corroborated more often than the other. In fact, a stronger version of the Duhem–Quine thesis says that it is not always possible to rationally pick one theory over the other using falsifications.[69] Considering only falsifications, it is not clear why a corroborating experiment is often seen as a sign of progress. Popper's critical rationalism uses both falsifications and corroborations to explain progress in science.[BA] How corroborations and falsifications can explain progress in science was a subject of disagreement between many philosophers, especially between Lakatos and Popper.[BB]

Popper distinguished between the creative and informal process from which theories and accepted basic statements emerge and the logical and formal process where theories are falsified or corroborated.[E][BC][BD] The main issue is whether the decision to select a theory among competing theories in the light of falsifications and corroborations could be justified using some kind of formal logic.[BE] It is a delicate question, because this logic would be inductive: it justifies a universal law in view of instances. Also, falsifications, because they are based on methodological decisions, are useless in a strict justification perspective. The answer of Lakatos and many others to that question is that it should be.[BF][BG] In contradistinction, for Popper, the creative and informal part is guided by methodological rules, which naturally say to favour theories that are corroborated over those that are falsified,[BH] but this methodology can hardly be made rigorous.[BI]

Popper's way to analyze progress in science was through the concept of verisimilitude, a way to define how close a theory is to the truth, which he did not consider very significant, except (as an attempt) to describe a concept already clear in practice. Later, it was shown that the specific definition proposed by Popper cannot distinguish between two theories that are false, which is the case for all theories in the history of science.[BJ] Today, there is still ongoing research on the general concept of verisimilitude.[70]

From the problem of induction to falsificationism

Hume explained induction with a theory of the mind[71] that was in part inspired by Newton's theory of gravitation.[BK] Popper rejected Hume's explanation of induction and proposed his own mechanism: science progresses by trial and error within an evolutionary epistemology. Hume believed that his psychological induction process follows laws of nature, but, for him, this does not imply the existence of a method of justification based on logical rules. In fact, he argued that any induction mechanism, including the mechanism described by his theory, could not be justified logically.[72] Similarly, Popper adopted an evolutionary epistemology, which implies that some laws explain progress in science, but yet insists that the process of trial and error is hardly rigorous and that there is always an element of irrationality in the creative process of science. The absence of a method of justification is a built-in aspect of Popper's trial and error explanation.

As rational as they can be, these explanations that refer to laws, but cannot be turned into methods of justification (and thus do not contradict Hume's argument or its premises), were not sufficient for some philosophers. In particular, Russell once expressed the view that if Hume's problem cannot be solved, “there is no intellectual difference between sanity and insanity”[72] and actually proposed a method of justification.[73][74] He rejected Hume's premise that there is a need to justify any principle that is itself used to justify induction.[BL] It might seem that this premise is hard to reject, but to avoid circular reasoning we do reject it in the case of deductive logic. It makes sense to also reject this premise in the case of principles to justify induction. Lakatos's proposal of sophisticated falsificationism was very natural in that context.

Therefore, Lakatos urged Popper to find an inductive principle behind the trial and error learning process[BM] and sophisticated falsificationism was his own approach to address this challenge.[BN][BO] Kuhn, Feyerabend, Musgrave and others mentioned and Lakatos himself acknowledged that, as a method of justification, this attempt failed, because there was no normative methodology to justify—Lakatos's methodology was anarchy in disguise.[BP][BQ][BR][BS][BT]

Falsificationism in Popper's philosophy

Popper's philosophy is sometimes said to fail to recognize the Duhem–Quine thesis, which would make it a form of dogmatic falsificationism. For example, Watkins wrote "apparently forgetting that he had once said 'Duhem is right [...]', Popper set out to devise potential falsifiers just for Newton's fundamental assumptions".[75] But Popper's philosophy is not always described as falsificationism in the pejorative manner associated with dogmatic or naive falsificationism.[76] The problems of falsification are acknowledged by the falsificationists. For example, Chalmers points out that falsificationists freely admit that observation is theory-impregnated.[77] Thornton, referring to Popper's methodology, says that the predictions inferred from conjectures are not directly compared with the facts simply because all observation-statements are theory-laden.[78] For the critical rationalists, the problems of falsification are not an issue, because they do not try to make experimental falsifications logical or to logically justify them, nor to use them to logically explain progress in science. Instead, their faith rests on critical discussions around these experimental falsifications.[4] Lakatos made a distinction between a "falsification" (with quotation marks) in Popper's philosophy and a falsification (without quotation marks) that can be used in a systematic methodology where rejections are justified.[79] He knew that Popper's philosophy is not and has never been about this kind of justification, but he felt that it should have been.[BM] Sometimes, Popper and other falsificationists say that when a theory is falsified it is rejected,[80][81] which appears as dogmatic falsificationism, but the general context is always critical rationalism in which all decisions are open to critical discussions and can be revised.[82]

Controversies

Methodless creativity versus inductive methodology

As described in section § Naive falsificationism, Lakatos and Popper agreed that universal laws cannot be logically deduced (except from laws that say even more). But unlike Popper, Lakatos felt that if the explanation for new laws cannot be deductive, it must be inductive. He urged Popper explicitly to adopt some inductive principle[BM] and set himself the task of finding an inductive methodology.[BU] However, the methodology that he found did not offer any exact inductive rules. In a response to Kuhn, Feyerabend and Musgrave, Lakatos acknowledged that the methodology depends on the good judgment of the scientists.[BP] Feyerabend wrote in "Against Method" that Lakatos's methodology of scientific research programmes is epistemological anarchism in disguise[BQ] and Musgrave made a similar comment.[BR] In more recent work, Feyerabend says that Lakatos uses rules, but whether or not to follow any of these rules is left to the judgment of the scientists.[BS] This is also discussed elsewhere.[BT]

Popper also offered a methodology with rules, but these rules are non-inductive rules, because they are not by themselves used to accept laws or establish their validity. They do that through the creativity or "good judgment" of the scientists only. For Popper, the required non-deductive component of science never had to be an inductive methodology. He always viewed this component as a creative process beyond the explanatory reach of any rational methodology, but yet used to decide which theories should be studied and applied, find good problems and guess useful conjectures.[BV] Quoting Einstein to support his view, Popper said that this renders obsolete the need for an inductive methodology or logical path to the laws.[BW][BX][BY] For Popper, no inductive methodology was ever proposed to satisfactorily explain science.

Ahistorical versus historiographical

Section § Methodless creativity versus inductive methodology says that neither Lakatos's nor Popper's methodology is inductive. Yet Lakatos's methodology extended Popper's methodology in an important way: it added a historiographical component to it. This allowed Lakatos to find corroborations for his methodology in the history of science. The basic units in his methodology, which can be abandoned or pursued, are research programmes. Research programmes can be degenerative or progressive and only degenerative research programmes must be abandoned at some point. For Lakatos, this is mostly corroborated by facts in history.

In contradistinction, Popper did not propose his methodology as a tool to reconstruct the history of science. Yet he did sometimes refer to history to corroborate his methodology. For example, he remarked that theories that were considered great successes were also the most likely to be falsified. Zahar's view was that, with regard to corroborations found in the history of science, there was only a difference of emphasis between Popper and Lakatos.

As an anecdotal example, in one of his articles Lakatos challenged Popper to show that his theory was falsifiable: he asked "Under what conditions would you give up your demarcation criterion?".[83] Popper replied "I shall give up my theory if Professor Lakatos succeeds in showing that Newton's theory is no more falsifiable by 'observable states of affairs' than is Freud's."[84] According to David Stove, Lakatos succeeded, since Lakatos showed there is no such thing as a "non-Newtonian" behaviour of an observable object. Stove argued that Popper's counterexamples to Lakatos were either instances of begging the question, such as Popper's example of missiles moving in a "non-Newtonian track", or consistent with Newtonian physics, such as objects not falling to the ground without "obvious" countervailing forces against Earth's gravity.[85]

Normal science versus revolutionary science

Thomas Kuhn analyzed what he calls periods of normal science as well as revolutions from one period of normal science to another,[86] whereas Popper's view is that only revolutions are relevant.[BZ][CA] For Popper, the role of science, mathematics and metaphysics, actually the role of any knowledge, is to solve puzzles.[CB] In the same line of thought, Kuhn observes that in periods of normal science the scientific theories, which represent some paradigm, are used to routinely solve puzzles and the validity of the paradigm is hardly in question. It is only when important new puzzles emerge that cannot be solved by accepted theories that a revolution might occur. This can be seen as a viewpoint on the distinction made by Popper between the informal and formal process in science (see section § Naive falsificationism). In the big picture presented by Kuhn, the routinely solved puzzles are corroborations. Falsifications or otherwise unexplained observations are unsolved puzzles. All of these are used in the informal process that generates a new kind of theory. Kuhn says that Popper emphasizes formal or logical falsifications and fails to explain how the social and informal process works.

Unfalsifiability versus falsity of astrology

Popper often uses astrology as an example of a pseudoscience. He says that it is not falsifiable because both the theory itself and its predictions are too imprecise.[CC] Kuhn, as an historian of science, remarked that many predictions made by astrologers in the past were quite precise and they were very often falsified. He also said that astrologers themselves acknowledged these falsifications.[CD]

Epistemological anarchism vs the scientific method

Paul Feyerabend rejected any prescriptive methodology at all. He rejected Lakatos's argument for ad hoc hypotheses, arguing that science would not have progressed without making use of any and all available methods to support new theories. He rejected any reliance on a scientific method, along with any special authority for science that might derive from such a method.[87] He said that if one is keen to have a universally valid methodological rule, epistemological anarchism or anything goes would be the only candidate.[88] For Feyerabend, any special status that science might have derives from the social and physical value of the results of science rather than its method.[89]

Sokal and Bricmont

In their book Fashionable Nonsense (from 1997, published in the UK as Intellectual Impostures) the physicists Alan Sokal and Jean Bricmont criticised falsifiability.[90] They include this critique in the "Intermezzo" chapter, where they expose their own views on truth in contrast to the extreme epistemological relativism of postmodernism. Even though Popper is clearly not a relativist, Sokal and Bricmont discuss falsifiability because they see postmodernist epistemological relativism as a reaction to Popper's description of falsifiability, and more generally, to his theory of science.[91]

Notes

  1. Popper discusses the notion of imaginary state of affairs in the context of scientific realism in Popper 1972, Chap.2, Sec.5: (emphasis added) "[H]uman language is essentially descriptive (and argumentative), and an unambiguous description is always realistic: it is of something—of some state of affairs which may be real or imaginary. Thus if the state of affairs is imaginary, then the description is simply false and its negation is a true description of reality, in Tarski's sense." He continues (emphasis added) "Tarski's theory more particularly makes clear just what fact a statement P will correspond to if it corresponds to any fact: namely the fact that p. ... a false statement P is false not because it corresponds to some odd entity like a non-fact, but simply because it does not correspond to any fact: it does not stand in the peculiar relation of correspondence to a fact to anything real, though it stands in a relation like 'describes' to the spurious state of affairs that p."
  2. Popper wanted the main text of the 1959 English version, The Logic of Scientific Discovery, to conform to the original, thus refused to make substantial corrections and only added notes and appendices and marked them with an asterisk (see Popper 1959, Translators' note).
  3. The falsifiability criterion is formulated in terms of basic statements or observation statements without requiring that we know which ones of these observation statements correspond to actual facts. These basic statements break the symmetry, while being purely logical concepts.
  4. "All swans are white" is often chosen as an example of a falsifiable statement, because for some 1500 years, the black swan existed in the European imagination as a metaphor for that which could not exist. Had the presumption concerning black swans in this metaphor be right, the statement would still have been falsifiable.
  5. Thornton 2016, sec. 3: "Popper has always drawn a clear distinction between the logic of falsifiability and its applied methodology. The logic of his theory is utterly simple: if a single ferrous metal is unaffected by a magnetic field it cannot be the case that all ferrous metals are affected by magnetic fields. Logically speaking, a scientific law is conclusively falsifiable although it is not conclusively verifiable. Methodologically, however, the situation is much more complex: no observation is free from the possibility of error—consequently we may question whether our experimental result was what it appeared to be." (A schematic formalization of this logical point is sketched after these notes.)
  6. Popper 1983, Introduction 1982: "We must distinguish two meanings of the expressions falsifiable and falsifiability:
    "1) Falsifiable as a logical-technical term, in the sense of the demarcation criterion of falsifiability. This purely logical concept—falsifiable in principle, one might say—rests on a logical relation between the theory in question and the class of basic statements (or the potential falsifiers described by them).
    "2) Falsifiable in the sense that the theory in question can definitively or conclusively or demonstrably be falsified ("demonstrably falsifiable").
    "I have always stressed that even a theory which is obviously falsifiable in the first sense is never falsifiable in this second sense. (For this reason I have used the expression falsifiable as a rule only in the first, technical sense. In the second sense, I have as a rule spoken not of falsifiability but rather of falsification and of its problems)."
  7. Popper 1983, Introduction 1982: "Although the first sense refers to the logical possibility of a falsification in principle, the second sense refers to a conclusive practical experimental proof of falsity. But anything like conclusive proof to settle an empirical question does not exist. An entire literature rests on the failure to observe this distinction." For a discussion related to this lack of distinction, see Rosende 2009, p. 142.
  8. Falsifiability does not require falsification. A past, present and even a future falsification would be a problematic requirement: it cannot be achieved, because definitive rigorous falsifications are impossible and, if a theory nevertheless met this requirement, it would not be much better than a falsified theory.
  9. Popper's argument is that inductive inference is a fallacy: "I hold with Hume that there simply is no such logical entity as an inductive inference; or, that all so-called inductive inferences are logically invalid".[CE][CF]
  10. Popper 1983, chap. 1, sec. 3: "It seems that almost everybody believes in induction; believes, that is, that we learn by the repetition of observations. Even Hume, in spite of his great discovery that a natural law can neither be established nor made 'probable' by induction, continued to believe firmly that animals and men do learn through repetition: through repeated observations as well as through the formation of habits, or the strengthening of habits, by repetition. And he upheld the theory that induction, though rationally indefensible and resulting in nothing better than unreasoned belief, was nevertheless reliable in the main—more reliable and useful at any rate than reason and the processes of reasoning; and that 'experience' was thus the unreasoned result of a (more or less passive) accumulation of observations. As against all this, I happen to believe that in fact we never draw inductive inferences, or make use of what are now called 'inductive procedures'. Rather, we always discover regularities by the essentially different method of trial and error."
  11. Popper 1959, part I, chap. 2, sec. 11: "[I] dispense with the principle of induction: not because such a principle is as a matter of fact never used in science, but because I think that it is not needed; that it does not help us; and that it even gives rise to inconsistencies."
  12. Popper 1962, p. 35: "As for Adler, I was much impressed by a personal experience. Once, in 1919, I reported to him a case which to me did not seem particularly Adlerian, but which he found no difficulty in analysing in terms of his theory of inferiority feelings, although he had not even seen the child. Slightly shocked, I asked him how he could be so sure. 'Because of my thousandfold experience,' he replied; whereupon I could not help saying: 'And with this new case, I suppose, your experience has become thousand-and-one-fold.'"
  13. Thornton 2007, p. 3: "However, a theory that has successfully withstood critical testing is thereby 'corroborated', and may be regarded as being preferable to falsified rivals. In the case of rival unfalsified theories, for Popper, the higher the informative content of a theory the better it is scientifically, because every gain in content brings with it a commensurate gain in predictive scope and testability."
  14. Popper 1959, p. 19: "Various objections might be raised against the criterion of demarcation here proposed. In the first place, it may well seem somewhat wrong-headed to suggest that science, which is supposed to give us positive information, should be characterized as satisfying a negative requirement such as refutability. However, I shall show, in sections 31 to 46, that this objection has little weight, since the amount of positive information about the world which is conveyed by a scientific statement is the greater the more likely it is to clash, because of its logical character, with possible singular statements. (Not for nothing do we call the laws of nature 'laws': the more they prohibit the more they say.)"
  15. Feigl 1978: "Karl Popper, an Austrian-born British philosopher of science, in his Logik der Forschung (1935; The Logic of Scientific Discovery), insisted that the meaning criterion should be abandoned and replaced by a criterion of demarcation between empirical (scientific) and transempirical (nonscientific, metaphysical) questions and answers—a criterion that, according to Popper, is to be testability."
  16. Popper 1972, Sec. 1.9: "Quite apart from [Hume's psychological theory of induction], I felt that psychology should be regarded as a biological discipline, and especially that any psychological theory of the acquisition of knowledge should be so regarded. Now if we transfer to human and animal psychology [the method that consists in choosing the best tested theory among conjectured theories], we arrive, clearly, at the well-known method of trial and error-elimination."
  17. Popper 1959, Sec. 85: "What I have here in mind is not a picture of science as a biological phenomenon ...: I have in mind its epistemological aspects."
  18. Popper 1959, pp. 7–8: "This latter is concerned not with questions of fact (Kant's quid facti?), but only with questions of justification or validity (Kant's quid juris?). Its questions are of the following kind. Can a statement be justified? And if so, how? Is it testable? Is it logically dependent on certain other statements? Or does it perhaps contradict them? In order that a statement may be logically examined in this way, it must already have been presented to us. Someone must have formulated it, and submitted it to logical examination."
  19. Popper 1972, Sec. 1.8: "The fundamental difference between my approach and the approach for which I long ago introduced the label 'inductivist' is that I lay stress on negative arguments, such as negative instances or counter-examples, refutations, and attempted refutations—in short, criticism".
  20. Popper 1974, p. 1005: "Newton's theory ... would equally be contradicted if the apples from one of my, or Newton's, apple trees were to rise from the ground (without there being a whirling about), and begin to dance round the branches of the apple tree from which they had fallen."
  21. In a spirit of criticism, Watkins (Watkins 1984, Sec. 8.52) liked to refer to invisible strings, instead of some abstract law, to explain this kind of evidence against Newton's theory of gravity.
  22. The requirement that the language must be empirical is known in the literature as the material requirement. For example, see Nola & Sankey 2014, pp. 256, 268 and Shea 2020, Sec 2.c. This requirement says that the statements that describe observations, the basic statements, must be intersubjectively verifiable.
  23. In Popper's description of the scientific procedure of testing, as explained by Thornton (see Thornton 2016, Sec. 4), there is no discussion of factual observations except in those tests that compare the theory with factual observations, but in these tests too the procedure is mostly logical and involves observations that are only logical constructions (Popper 1959, pp. 9–10): "We may if we like distinguish four different lines along which the testing of a theory could be carried out. First there is the logical comparison of the conclusions among themselves, by which the internal consistency of the system is tested. Secondly, there is the investigation of the logical form of the theory, with the object of determining whether it has the character of an empirical or scientific theory, or whether it is, for example, tautological. Thirdly, there is the comparison with other theories, chiefly with the aim of determining whether the theory would constitute a scientific advance should it survive our various tests. And finally, there is the testing of the theory by way of empirical applications of the conclusions which can be derived from it. ... Here too the procedure of testing turns out to be deductive. With the help of other statements, previously accepted, certain singular statements—which we may call 'predictions'—are deduced from the theory; especially predictions that are easily testable or applicable. From among these statements, those are selected which are not derivable from the current theory, and more especially those which the current theory contradicts."
  24. Popper 1959, p. 9: "According to the view that will be put forward here, the method of critically testing theories, and selecting them according to the results of tests, always proceeds on the following lines. From a new idea, put up tentatively, and not yet justified in any way—an anticipation, a hypothesis, a theoretical system, or what you will—conclusions are drawn by means of logical deduction. These conclusions are then compared with one another and with other relevant statements, so as to find what logical relations (such as equivalence, derivability, compatibility, or incompatibility) exist between them."
  25. In practice, technologies change. When the interpretation of a theory is modified by an improved technological interpretation of some properties, the new theory can be seen as the same theory with an enlarged scope. For example, Herbert Keuth (Keuth 2005, p. 43) wrote: "But Popper's falsifiability or testability criterion does not presuppose that a definite distinction between testable and non testable statement is possible ... technology changes. Thus a hypothesis that was first untestable may become testable later on."
  26. Popper 1959, section 7, page 21: "If falsifiability is to be at all applicable as a criterion of demarcation, then singular statements must be available which can serve as premisses in falsifying inferences. Our criterion therefore appears only to shift the problem—to lead us back from the question of the empirical character of theories to the question of the empirical character of singular statements.
    "Yet even so, something has been gained. For in the practice of scientific research, demarcation is sometimes of immediate urgency in connection with theoretical systems, whereas in connection with singular statements, doubt as to their empirical character rarely arises. It is true that errors of observation occur and that they give rise to false singular statements, but the scientist scarcely ever has occasion to describe a singular statement as non-empirical or metaphysical."
  27. Popper 1962, p. 387: "Before using the terms 'basic' and 'basic statement', I made use of the term 'empirical basis', meaning by it the class of all those statements which may function as tests of empirical theories (that is, as potential falsifiers). In introducing the term 'empirical basis' my intention was, partly, to give an ironical emphasis to my thesis that the empirical basis of our theories is far from firm; that it should be compared to a swamp rather than to solid ground."
  28. This perspective can be found in any text on model theory. For example, see Ebbinghaus 2017.
  29. Popper gave Einstein's equivalence principle as an example of a falsifiable statement whose attempted falsifications have failed. See Popper 1983, Introduction, sec. I: "Einstein's principle of proportionality of inert and (passively) heavy mass. This equivalence principle conflicts with many potential falsifiers: events whose observation is logically possible. Yet despite all attempts (the experiments by Eötvös, more recently refined by Rickle) to realize such a falsification experimentally, the experiments have so far corroborated the principle of equivalence."
  30. Fisher 1930, p. 34: "Since m measures fitness to survive by the objective fact of representation in future generations,"
  31. Popper 1980, p. 611: "It does appear that some people think that I denied scientific character to the historical sciences, such as palaeontology, or the history of the evolution of life on Earth. This is a mistake, and I here wish to affirm that these and other historical sciences have in my opinion scientific character; their hypotheses can in many cases be tested."
  32. If the criterion for identifying an angel were simply the observation of large wings, then "this angel does not have large wings" would be a logical contradiction and thus not a basic statement anyway.
  33. Popper 1983, Introduction, xx: "This theory ['All human actions are egotistic, motivated by self-interest'] is widely held: it has variants in behaviourism, psychoanalysis, individual psychology, utilitarianism, vulgar-marxism, religion, and sociology of knowledge. Clearly this theory, with all its variants, is not falsifiable: no example of an altruistic action can refute the view that there was an egotistic motive hidden behind it."
  34. Popper 1974, p. 1038: "[A]s indeed is the case in Maxwell's example, when existential statements are verified this is done by means of stronger falsifiable statements. ... What this means is this. Whenever a pure existential statement, by being empirically "confirmed", appears to belong to empirical science, it will in fact do so not on its own account, but by virtue of being a consequence of a corroborated falsifiable theory."
  35. Keuth 2005, p. 46: "[T]he existential quantifier in the symbolized version of "Every solid has a melting point" is not inevitable; rather this statement is actually a negligent phrasing of what we really mean."
  36. Darwin 1869, pp. 72: "I have called this principle, by which each slight variation, if useful, is preserved, by the term natural selection, in order to mark its relation to man's power of selection. But the expression often used by Mr. Herbert Spencer, of the Survival of the Fittest, is more accurate, and is sometimes equally convenient."
  37. Thompson 1981, pp. 52–53, Introduction: "For several years, evolutionary theory has been under attack from critics who argue that the theory is basically a tautology. The tautology is said to arise from the fact that evolutionary biologists have no widely accepted way to independently define 'survival' and 'fitness.' That the statement, 'the fit survive,' is tautological is important, because if the critics are correct in their analysis, the tautology renders meaningless much of contemporary evolutionary theorizing. ... The definition of key evolutionary concepts in terms of natural selection runs the risk of making evolutionary theory a self-contained, logical system which is isolated from the empirical world. No meaningful empirical prediction can be made from one side to the other side of these definitions. One cannot usefully predict that nature selects the fittest organism since the fittest organism is by definition that which nature selects."
  38. Waddington 1959, pp. 383–384: "Darwin's major contribution was, of course, the suggestion that evolution can be explained by the natural selection of random variations. Natural selection, which was at first considered as though it were a hypothesis that was in need of experimental or observational confirmation, turns out on closer inspection to be a tautology, a statement of an inevitable, although previously unrecognized, relation. It states that the fittest individuals in a population (defined as those which leave most offspring) will leave most offspring. Once the statement is made, its truth is apparent. This fact in no way reduces the magnitude of Darwin's achievement; only after it was clearly formulated, could biologists realize the enormous power of the principle as a weapon of explanation."
  39. Popper 1994, p. 90: "If, more especially, we accept that statistical definition of fitness which defines fitness by actual survival, then the theory of the survival of the fittest becomes tautological, and irrefutable."
  40. Thompson 1981, p. 53, Introduction: "Even if it did not make a tautology of evolution theory, the use of natural selection as a descriptive concept would have serious drawbacks. While it is mathematically tractable and easy to model in the laboratory, the concept is difficult to operationalize in the field. For field biologists, it is really a hypothetical entity. Clear, unambiguous instances of the operation of natural selection are difficult to come by and always greeted with great enthusiasm by biologists (Kettlewell, 1959 [the case of the peppered moths]; Shepherd, 1960). Thus, although the concept has much to recommend it as an explanatory one, it seems an overly abstract formulation on which to base a descriptive science."
  41. Popper 1978, p. 342: "However, Darwin's own most important contribution to the theory of evolution, his theory of natural selection, is difficult to test. There are some tests, even some experimental tests; and in some cases, such as the famous phenomenon known as "industrial melanism", we can observe natural selection happening under our very eyes, as it were. Nevertheless, really severe tests of the theory of natural selection are hard to come by, much more so than tests of otherwise comparable theories in physics or chemistry."
  42. Popper 1995, Chap.15 sec. III (page 101 here): "In Marx's view, it is vain to expect that any important change can be achieved by the use of legal or political means; a political revolution can only lead to one set of rulers giving way to another set—a mere exchange of the persons who act as rulers. Only the evolution of the underlying essence, the economic reality can produce any essential or real change—a social revolution."
  43. Popper 1962, p. 37: "In some of its earlier formulations (for example in Marx's analysis of the character of the 'coming social revolution') their predictions were testable, and in fact falsified. Yet instead of accepting the refutations the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. In this way they rescued the theory from refutation; but they did so at the price of adopting a device which made it irrefutable. They thus gave a 'conventionalist twist' to the theory; and by this stratagem they destroyed its much advertised claim to scientific status."
  44. Thornton 2016, Sec. 2: "The Marxist account of history too, Popper held, is not scientific, although it differs in certain crucial respects from psychoanalysis. For Marxism, Popper believed, had been initially scientific, in that Marx had postulated a theory which was genuinely predictive. However, when these predictions were not in fact borne out, the theory was saved from falsification by the addition of ad hoc hypotheses which made it compatible with the facts. By this means, Popper asserted, a theory which was initially genuinely scientific degenerated into pseudo-scientific dogma."
  45. Surveys were mailed to all active U.S. district court judges in November 1998 (N = 619). 303 usable surveys were obtained for a response rate of 51%. See Krafka 2002, p. 9 in archived pdf.
  46. The Daubert case and subsequent cases that used it as a reference, including General Electric Co. v. Joiner and Kumho Tire Co. v. Carmichael, resulted in an amendment of the Federal Rules of Evidence (see Rules of Evidence 2017, p. 15, Rule 702 and Rule 702 Notes 2011). The Kumho Tire Co. v. Carmichael case and other cases considered the original Daubert factors, but the amended rule, Rule 702, although often referred to as the Daubert standard, does not include the original Daubert factors or mention falsifiability or testability; neither does the majority opinion delivered by William Rehnquist in General Electric Co. v. Joiner.
  47. Not to be confused with David Kaye (law professor), United Nations special rapporteur. David H. Kaye is distinguished professor of law at Penn State Law.
  48. Kaye 2005, p. 2: "several courts have treated the abstract possibility of falsification as sufficient to satisfy this aspect of the screening of scientific evidence. This essay challenges these views. It first explains the distinct meanings of falsification and falsifiability. It then argues that while the Court did not embrace the views of any specific philosopher of science, inquiring into the existence of meaningful attempts at falsification is an appropriate and crucial consideration in admissibility determinations. Consequently, it concludes that recent opinions substituting mere falsifiability for actual empirical testing are misconstruing and misapplying Daubert."
  49. As Lakatos pointed out, scientists decide among themselves, through critical discussion, which potential falsifiers are accepted. There are no strict constraints on the method that can be used to reach this decision.
  50. Popper 1962, p. 111: "Against the view here developed one might be tempted to object (following Duhem 28) that in every test it is not only the theory under investigation which is involved, but also the whole system of our theories and assumptions—in fact, more or less the whole of our knowledge—so that we can never be certain which of all these assumptions is refuted. But this criticism overlooks the fact that if we take each of the two theories (between which the crucial experiment is to decide) together with all this background knowledge, as indeed we must, then we decide between two systems which differ only over the two theories which are at stake. It further overlooks the fact that we do not assert the refutation of the theory as such, but of the theory together with that background knowledge; parts of which, if other crucial experiments can be designed, may indeed one day be rejected as responsible for the failure. (Thus we may even characterize a theory under investigation as that part of a vast system for which we have, if vaguely, an alternative in mind, and for which we try to design crucial tests.)"
  51. These four decisions are mentioned in Lakatos 1978, pp. 22–25. A fifth decision is mentioned later by Lakatos to allow even more theories to be falsified.
  52. Popper 1959, p. 91: "It may now be possible for us to answer the question: How and why do we accept one theory in preference to others? The preference is certainly not due to anything like an experiential justification of the statements composing the theory; it is not due to a logical reduction of the theory to experience. We choose the theory which best holds its own in competition with other theories; the one which, by natural selection, proves itself the fittest to survive. This will be the one which not only has hitherto stood up to the severest tests, but the one which is also testable in the most rigorous way. A theory is a tool which we test by applying it, and which we judge as to its fitness by the results of its applications."
  53. Lakatos says that Popper is not the sophisticated falsificationist that he describes, but not the naive falsificationist either (see Lakatos 1978): "In an earlier paper, I distinguished three Poppers: Popper0, Popper1, and Popper2. Popper0 is the dogmatic falsificationist ... Popper1 is the naive falsificationist, Popper2 the sophisticated falsificationist. ... The real Popper has never explained in detail the appeal procedure by which some 'accepted basic statements' may be eliminated. Thus the real Popper consists of Popper1 together with some elements of Popper2."
  54. Popper clearly distinguishes between the methodological rules and the rules of pure logic (see Popper 1959, p. 32): "Methodological rules are here regarded as conventions. They might be described as the rules of the game of empirical science. They differ from the rules of pure logic"
  55. Popper 1959, p. 27: "The theory of method, in so far as it goes beyond the purely logical analysis of the relations between scientific statements, is concerned with the choice of methods—with decisions about the way in which scientific statements are to be dealt with."
  56. Zahar wrote a brief summary of Lakatos's position regarding Popper's philosophy. He says (see Zahar 1983, p. 149): "The important question of the possibility of a genuine logic of [scientific] discovery" is the main divergence between Lakatos and Popper. About Popper's view, Zahar wrote (see Zahar 1983, p. 169): "To repeat: Popper offers a Darwinian account of the progress of knowledge. Progress is supposed to result negatively from the elimination by natural selection of defective alternatives. ... There is no genuine logic of discovery, only a psychology of invention juxtaposed to a methodology which appraises fully fledged theories."
  57. In Lakatos's terminology, the term "falsified" has a different meaning for a naive falsificationist than for a sophisticated falsificationist. Putting aside this confusing terminological aspect, the key point is that Lakatos wanted a formal logical procedure to determine which theories we must keep (see Lakatos 1978, p. 32): "For the naive falsificationist a theory is falsified by a ('fortified') 'observational' statement which conflicts with it (or which he decides to interpret as conflicting with it). For the sophisticated falsificationist a scientific theory T is falsified if and only if another theory T' has been proposed with the following characteristics: (1) T' has excess empirical content over T: that is, it predicts novel facts, that is, facts improbable in the light of, or even forbidden by, T; (2) T' explains the previous success of T, that is, all the unrefuted content of T is included (within the limits of observational error) in the content of T'; and (3) some of the excess content of T' is corroborated."
  58. In his critique of Popper (see Kuhn 1970, p. 15), Kuhn says that the methodological rules are not sufficient to provide a logic of discovery: "rules or conventions like the following: 'Once a hypothesis has been proposed and tested, and has proved its mettle, it may not be allowed to drop out without 'good reason'. A 'good reason' may be, for instance: replacement of the hypothesis by another which is better testable; or the falsification of one of the consequences of the hypothesis.'
    Rules like these, and with them the entire logical enterprise described above, are no longer simply syntactic in their import. They require that both the epistemological investigator and the research scientist be able to relate sentences derived from a theory not to other sentences but to actual observations and experiments. This is the context in which Sir Karl's term 'falsification' must function, and Sir Karl is entirely silent about how it can do so."
  59. Popper gives an example of a methodological rule that uses corroborations (see Popper 1959, p. 32): "Once a hypothesis has been proposed and tested, and has proved its mettle, it may not be allowed to drop out without 'good reason'. A 'good reason' may be, for instance: replacement of the hypothesis by another which is better testable; or the falsification of one of the consequences of the hypothesis."
  60. Popper 1959, section 23, 1st paragraph: "The requirement of falsifiability which was a little vague to start with has now been split into two parts. The first, the methodological postulate (cf. section 20), can hardly be made quite precise. The second, the logical criterion, is quite definite as soon as it is clear which statements are to be called 'basic'."
  61. Popper 1983, Introduction, V: "The hope further to strengthen this theory of the aims of science by the definition of verisimilitude in terms of truth and of content was, unfortunately, vain. But the widely held view that scrapping this definition weakens my theory is completely baseless."
  62. Morris & Brown 2021, Sec. 3: Hume explicitly models his account of the fundamental principles of the mind's operations—the principles of association—on the idea of gravitational attraction.
  63. Russell 1948, Part VI, Sec. II: "We have therefore to seek for principles, other than induction, such that, given certain data not of the form 'this A is a B', the generalization 'all A is B' has a finite probability. Given such principles, and given a generalization to which they apply, induction can make the generalization increasingly probable, with a probability which approaches certainty as a limit when the number of favourable instances is indefinitely increased."
  64. Zahar 1983, p. 167: "Lakatos urged Popper explicitly to adopt some inductive principle which would synthetically link verisimilitude to corroboration."
  65. Lakatos 1978, Sec. 1.1: I shall try to explain—and further strengthen—this stronger Popperian position which, I think, may escape Kuhn's strictures and present scientific revolutions not as constituting religious conversions but rather as rational progress.
  66. Lakatos 1978, Sec. 1.2.b: The other alternative is to ... replace the naive versions of methodological falsificationism ... by a sophisticated version which would give a new rationale of falsification and thereby rescue methodology and the idea of scientific progress.
  67. Lakatos 1978, pp. 116–117: "The methodology of research programmes was criticized both by Feyerabend and by Kuhn. According to Kuhn: '[Lakatos] must specify criteria which can be used at the time to distinguish a degenerative from a progressive research programme; and so on. Otherwise, he has told us nothing at all.' Actually, I do specify such criteria. But Kuhn probably meant that '[my] standards have practical force only if they are combined with a time limit (what looks like a degenerating problem shift may be the beginning of a much longer period of advance)'. Since I specify no such time limit, Feyerabend concludes that my standards are no more than 'verbal ornament'. A related point was made by Musgrave in a letter containing some major constructive criticisms of an earlier draft, in which he demanded that I specify, for instance, at what point dogmatic adherence to a programme ought to be explained 'externally' rather than 'internally'. Let me try to explain why such objections are beside the point. One may rationally stick to a degenerating programme until it is overtaken by a rival and even after. What one must not do is to deny its poor public record. Both Feyerabend and Kuhn conflate methodological appraisal of a programme with firm heuristic advice about what to do. It is perfectly rational to play a risky game: what is irrational is to deceive oneself about the risk. This does not mean as much licence as might appear for those who stick to a degenerating programme. For they can do this mostly only in private."
  68. Watkins 1989, p. 6: "Although Paul Feyerabend and Alan Musgrave evaluated [Lakatos's view] in opposite ways, they agreed about its nature. Feyerabend hailed it as an 'anarchism in disguise' (Feyerabend, Against Method, 1975), while Musgrave rather deplored the fact that Lakatos had 'gone a long way towards epistemological anarchism' (Musgrave 1976, p. 458). Musgrave added: 'Lakatos deprived his standards of practical force, and adopted a position of "anything goes"' (Musgrave 1976, p. 478)."
  69. Musgrave 1976, p. 458: "My third criticism concerns the question of whether Lakatos's methodology is in fact a methodology in the old-fashioned sense: whether, that is, it issues in advice to scientists. I shall argue that Lakatos once had sound views on this matter, but was led, mistakenly in my opinion, to renounce them. In renouncing them, he has gone a long way towards epistemological anarchism."
  70. Feyerabend 1978, p. 15: "Lakatos realized and admitted that the existing standards of rationality, standards of logic included, are too restrictive and would have hindered science had they been applied with determination. He therefore permitted the scientist to violate them ... However, he demanded that research programmes show certain features in the long run — they must be progressive. In Chapter 16 of [Against Method] (and in my essay 'On the Critique of Scientific Reason': Feyerabend 1978b, p. 120) I have argued that this demand no longer restricts scientific practice. Any development agrees with it. The demand (standard) is rational, but it is also empty. Rationalism and the demands of reason have become purely verbal in the theory of Lakatos." See also Feyerabend 1981, p. 148.
  71. Couvalis 1997, pp. 74–75: "There is a sense in which Feyerabend is right. Lakatos fails to give precise mechanical rules for when a theory has been finally falsified. Yet an appropriate question might be whether such rules are possible or necessary to make science rational. ... There are, however, many rough and ready rules, the application of which has to be learned in practical contexts. ... This does not mean that precise rules cannot be used in certain contexts, but we need to use our judgement to decide when those rules are to be used."
  72. Lakatos 1978, p. 112: "It should be pointed out, however, that the methodology of scientific research programmes has more teeth than Duhem's conventionalism: instead of leaving it to Duhem's unarticulated common sense to judge when a 'framework' is to be abandoned, I inject some hard Popperian elements into the appraisal of whether a programme progresses or degenerates or of whether one is overtaking another. That is, I give criteria of progress and stagnation within a programme and also rules for the 'elimination' of whole research programmes."
  73. Zahar (Zahar 1983, p. 168) recognizes that formal rules in a methodology cannot be rational. Yet, at the level of the technology, that is, at the practical level, he says, scientists must nevertheless take decisions. Popper's methodology does not specify formal rules, but non-rational decisions will still have to be taken. He concludes that "Popper and Lakatos differ only over the levels at which they locate non-rationality in science: Lakatos at the level of an inductive principle which justifies technology, and Popper at the lower-level of technology itself."
  74. Popper 1959, Sec. Elimination of Psychologism
  75. Einstein wrote (see Yehuda 2018, p. 41): "The supreme task of the physicist is to arrive at those universal elementary laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience, can reach them."
  76. Einstein wrote (see Feldman & Williams 2007, p. 151): "I am convinced that we can discover by means of purely mathematical constructions the concepts and laws connecting them with each other, which furnish the key to the understanding of natural phenomena. ... Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed."
  77. Kuhn 1974, p. 802: "I suggest then that Sir Karl has characterized the entire scientific enterprise in terms that apply only to its occasional revolutionary parts. His emphasis is natural and common: the exploits of a Copernicus or Einstein make better reading than those of a Brahe or Lorentz; Sir Karl would not be the first if he mistook what I call normal science for an intrinsically uninteresting enterprise. Nevertheless, neither science nor the development of knowledge is likely to be understood if research is viewed exclusively through the revolutions it occasionally produces."
  78. Watkins 1970, p. 28: "Thus we have the following clash: the condition which Kuhn regards as the normal and proper condition of science is a condition which, if it actually obtained, Popper would regard as unscientific, a state of affairs in which critical science had contracted into defensive metaphysics. Popper has suggested that the motto of science should be: Revolution in permanence! For Kuhn, it seems, a more appropriate maxim would be: Not nostrums but normalcy!"
  79. Popper 1994, pp. 155–156: "It is my view that the methods of the natural as well as the social sciences can be best understood if we admit that science always begins and ends with problems. The progress of science lies, essentially, in the evolution of its problems. And it can be gauged by the increasing refinement, wealth, fertility, and depth of its problems. ... The growth of knowledge always consists in correcting earlier knowledge. Historically, science begins with pre-scientific knowledge, with pre-scientific myths and pre-scientific expectations. And these, in turn, have no 'beginnings'."
  80. Popper 1962, p. 37: "[B]y making their interpretations and prophecies sufficiently vague [astrologers] were able to explain away anything that might have been a refutation of the theory had the theory and the prophecies been more precise. In order to escape falsification they destroyed the testability of their theory. It is a typical soothsayer's trick to predict things so vaguely that the predictions can hardly fail: that they become irrefutable."
  81. Kuhn 1970, pp. 7–8: "Astrology is Sir Karl's most frequently cited example of a 'pseudo-science'. He [Popper] says: 'By making their interpretations and prophecies sufficiently vague they [astrologers] were able to explain away anything that might have been a refutation of the theory had the theory and the prophecies been more precise. In order to escape falsification they destroyed the testability of the theory.' Those generalizations catch something of the spirit of the astrological enterprise. But taken at all literally, as they must be if they are to provide a demarcation criterion, they are impossible to support. The history of astrology during the centuries when it was intellectually reputable records many predictions that categorically failed. Not even astrology's most convinced and vehement exponents doubted the recurrence of such failures. Astrology cannot be barred from the sciences because of the form in which its predictions were cast."
  82. Greenland 1998, p. 545.
  83. Grayling 2019, p. 397.
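
The purely logical half of falsifiability quoted in note 5 can be stated compactly as a modus tollens over a universal statement. The following is a minimal sketch of Thornton's ferrous-metal example in standard predicate notation; the predicate letters F ("is a ferrous metal") and M ("is affected by magnetic fields") are illustrative labels introduced here, not notation taken from the sources cited above:

\[
\text{Law: } \forall x\,\bigl(F(x) \rightarrow M(x)\bigr), \qquad
\text{Observation: } F(a) \land \lnot M(a), \qquad
\text{Conclusion: } \lnot\,\forall x\,\bigl(F(x) \rightarrow M(x)\bigr).
\]

On this reading, the inference from the observation to the falsity of the law is purely deductive; as the rest of note 5 stresses, whether the observation report itself is free of error remains a separate, methodological question.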

Abbreviated references

References

Further reading
