Verificationism, also known as the verification principle or the verifiability criterion of meaning, is a doctrine in philosophy which asserts that a statement is meaningful only if it is either empirically verifiable (can be confirmed through the senses) or a tautology (true by virtue of its own meaning or its own logical form). Verificationism rejects statements of metaphysics, theology, ethics and aesthetics as meaningless in conveying truth value or factual content, though they may be meaningful in influencing emotions or behavior.[1]

Verificationism was a central thesis of logical positivism, a movement in analytic philosophy that emerged in the 1920s among philosophers who sought to unify philosophy and science under a common naturalistic theory of knowledge.[2] The verifiability criterion underwent various revisions from the 1920s to the 1950s. By the 1960s, however, it was deemed irreparably untenable.[3] Its abandonment would eventually precipitate the collapse of the broader logical positivist movement.[4]

Origins

The roots of verificationism may be traced to at least the 19th century, in philosophical principles that aim to ground scientific theory in verifiable experience, such as C. S. Peirce's pragmatism and the work of the conventionalist Pierre Duhem,[3] who fostered instrumentalism.[5] Verificationism as an explicit principle was conceived in the 1920s by the logical positivists of the Vienna Circle, who sought an epistemology whereby philosophical discourse would be, in their perception, as authoritative and meaningful as empirical science.[6] The movement grounded itself in the empiricism of David Hume,[7] Auguste Comte and Ernst Mach, and in the positivism of the latter two, borrowing perspectives from Immanuel Kant and taking Einstein's general theory of relativity as its exemplar of science.[8]

Ludwig Wittgenstein's Tractatus, published in 1921, established the theoretical foundations for the verifiability criterion of meaning.[9] Building on Gottlob Frege's work, the logical positivists also reformulated the analytic–synthetic distinction, reducing logic and mathematics to semantic conventions. This rendered logical truths, though unverifiable by the senses, tenable under verificationism as tautologies.[10]

Revisions

Logical positivists within the Vienna Circle quickly recognized that the verifiability criterion was too stringent. In particular, universal generalizations were noted to be empirically unverifiable, rendering vital domains of science and reason, including scientific hypotheses, meaningless under verificationism unless its criterion of meaning were revised.[11]

Rudolf Carnap, Otto Neurath, Hans Hahn and Philipp Frank led a faction seeking to make the verifiability criterion more inclusive, beginning a movement they referred to as the "liberalization of empiricism". Moritz Schlick and Friedrich Waismann led a "conservative wing" that maintained a strict verificationism. Whereas Schlick sought to redefine universal generalizations as tautological rules, thereby reconciling them with the existing criterion, Hahn argued that the criterion itself should be weakened to accommodate non-conclusive verification.[12] Within the liberal wing, Neurath proposed the adoption of coherentism, though it was challenged by Schlick's foundationalism. Neurath's physicalism, however, would eventually be adopted over Mach's phenomenalism by most members of the Vienna Circle.[11][13]

In 1936, Carnap sought a switch from verification to confirmation.[11] Carnap's confirmability criterion (confirmationism) would not require conclusive verification (thus accommodating universal generalizations) but would allow partial testability to establish degrees of confirmation on a probabilistic basis. Carnap never succeeded in finalizing his thesis despite employing abundant logical and mathematical tools for this purpose. In all of Carnap's formulations, a universal law's degree of confirmation was zero.[14]
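The zero-confirmation problem can be illustrated with a deliberately simplified sketch; the independence assumption below is illustrative and is not Carnap's own confirmation function. Let $h$ be the universal law $\forall x\, P(x)$ over an infinite domain $a_1, a_2, \dots$, let the evidence $e$ record $n$ positive instances $P(a_1), \dots, P(a_n)$, and suppose each untested instance independently has probability $p < 1$ of satisfying $P$. Since $h$ entails that every remaining instance also satisfies $P$,

$$c(h, e) \;\le\; \Pr\!\Big(\bigwedge_{k > n} P(a_k) \,\Big|\, e\Big) \;=\; \lim_{m \to \infty} p^{m} \;=\; 0,$$

so under these assumptions no finite body of favorable evidence can raise the law's degree of confirmation above zero.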

In Language, Truth and Logic, published that year, A. J. Ayer distinguished between strong and weak verification: a statement is strongly verifiable if its truth can be conclusively established by observation, and weakly verifiable if observation can render it probable. This allowed statements whose verification is inconclusive to remain meaningful on probabilistic grounds. He also distinguished theoretical from practical verifiability, proposing that statements verifiable in principle should be meaningful, even if unverifiable in practice.[15][16]

Criticisms

Philosopher Karl Popper, a graduate of the University of Vienna, though never a member of the Vienna Circle, was among the foremost critics of verificationism. He identified three fundamental deficiencies in verifiability as a criterion of meaning:[17]

  • Verificationism rejects universal generalizations, such as "all swans are white," as meaningless. Popper argued that while universal statements cannot be verified, they can be proven false, an asymmetry on which he based his criterion of falsifiability (stated schematically after this list).
  • Verificationism allows existential statements, such as "unicorns exist", to be classified as scientifically meaningful, despite the absence of any definitive method of showing them to be false (a unicorn might always be found somewhere not yet examined).
  • Verificationism is meaningless by virtue of its own criterion because it cannot be empirically verified. Thus the concept is self-defeating.
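Popper's asymmetry between verification and falsification can be stated schematically (the predicate letters here are illustrative). A universal generalization such as "all swans are white" has the form $h: \forall x\,(S(x) \to W(x))$; no finite conjunction of positive instances $S(a_1) \wedge W(a_1), \dots, S(a_n) \wedge W(a_n)$ logically entails $h$, yet a single counterexample refutes it:

$$S(b) \wedge \neg W(b) \;\vdash\; \neg h.$$

Conversely, an existential statement such as "unicorns exist", $\exists x\, U(x)$, is verified by a single instance $U(c)$ but is not refuted by any finite set of observations.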

Popper held that scientific hypotheses can never be completely verified, nor confirmed under Carnap's thesis.[9][18] He also considered metaphysical, ethical and aesthetic statements to be often rich in meaning and important in the origination of scientific theories.[9]

Other philosophers also voiced their own criticisms of verificationism.

Falsifiability

In The Logic of Scientific Discovery (1959), Popper proposed falsifiability, or falsificationism. Though formulated in response to what he perceived to be intractable problems in both verifiability and confirmability, Popper intended falsifiability not as a criterion of meaning like verificationism (as it is commonly misunderstood to be),[24] but as a criterion for demarcating scientific statements from non-scientific ones.[9]

Notably, the falsifiability criterion allows scientific hypotheses (expressed as universal generalizations) to be held as provisionally true until proven false by observation, whereas under verificationism they would be disqualified outright as meaningless.[9]

In formulating his criterion, Popper was informed by the contrasting methodologies of Albert Einstein and Sigmund Freud. Pointing to the general theory of relativity and its predicted gravitational deflection of light (gravitational lensing), Popper observed that Einstein's theories carried significantly greater predictive risk than Freud's of being falsified by observation. Though Freud found ample confirmation of his theories in observation, Popper noted that this method of justification was vulnerable to confirmation bias, in some cases leading to contradictory conclusions. He therefore concluded that predictive risk, or falsifiability, should serve as the criterion to demarcate the boundaries of science.[25]

Though falsificationism has been criticized extensively by philosophers for methodological shortcomings in its intended demarcation of science,[17] it received widespread acclaim and adoption among scientists.[18] Logical positivists, too, adopted the criterion, even as their movement ran its course, and Popper, initially a contentious misfit, came to be regarded as having carried the richest philosophy out of interwar Vienna.[24]

Legacy

In 1967, John Passmore, a leading historian of 20th-century philosophy, wrote, "Logical positivism is dead, or as dead as a philosophical movement ever becomes".[4] Logical positivism's fall heralded postpositivism, in which Popper's view of human knowledge as hypothetical, continually growing and open to change ascended,[24] and verificationism became largely maligned in academic circles.[3]

In a 1976 TV interview, A. J. Ayer, who had introduced logical positivism to the English-speaking world in the 1930s,[26] was asked what he saw as its main defects, and answered that "nearly all of it was false".[4] However, he soon added that he still held "the same general approach", referring to empiricism and reductionism, whereby mental phenomena resolve to the material or physical, and philosophical questions largely resolve to questions of language and meaning.[4] In 1977, Ayer noted:[3]

"The verification principle is seldom mentioned and when it is mentioned it is usually scorned; it continues, however, to be put to work. The attitude of many philosophers reminds me of the relationship between Pip and Magwitch in Dickens's Great Expectations. They have lived on the money, but are ashamed to acknowledge its source."

In the late 20th and early 21st centuries, the general concept of verification criteria—in forms that differed from those of the logical positivists—was defended by Bas van Fraassen, Michael Dummett, Crispin Wright, Christopher Peacocke, David Wiggins, Richard Rorty, and others.[3]

