PHILOSOPHY OF SCIENCE AN INTRODUCTION
Philosophy of Science
Thomas J. Hickey
© Copyright 1995, 2005, 2016, 2020 by Thomas J. Hickey
Chapter 1. Overview
Both successful science and contemporary philosophy of science are pragmatic. In science, as in life, realistic pragmatism is what works successfully. This introductory book is a concise synthesis of the elementary principles of the contemporary realistic neopragmatist philosophy of science, the philosophy that the twentieth century has bequeathed to the twenty-first century. This chapter defines some basic concepts.
1.01 Aim of Philosophy of Science
Traditionally the purpose of philosophy of science was viewed in terms of justifying a superior epistemic status for empirical science. But on the contemporary realistic neopragmatist view the aim of philosophy of science is to characterize the practices that have made the empirical sciences so unquestionably successful. Therefore:
The aim of contemporary realistic neopragmatist philosophy of science is to discover principles that describe successful practices of basic-science research, in order to advance contemporary science by application of the principles.
The principles are set forth as a metatheory, which is sketched in this book. Basic science creates new language: new theories, new laws and new explanations. Applied science uses scientific explanations to change the real world, e.g., new technologies, new social policies and new medical therapies. Philosophy of science pertains to basic-science practices and language.
1.02 Computational Philosophy of Science
Computational philosophy of science is the design, development and application of computer systems that proceduralize and mechanize productive basic-research practices in science.
Philosophers of science can no longer be content with hackneyed recitations of the Popper-Kuhn debates of half a century ago, much less with further debate of ancient, futile and ethereal metaphysical issues such as realism vs. idealism.
In the “Introduction” to his Models of Discovery (1977), economist Herbert Simon (1916-2001), 1978 Nobel laureate and a founder of artificial intelligence, wrote that dense mists of romanticism and downright know-nothingism have always surrounded the subject of scientific discovery and creativity. The pragmatist philosophers Charles Sanders Peirce (1839-1914) and Norwood Russell Hanson (1924-1967) had described a nonprocedural analysis for developing theories. Peirce called this nonprocedural practice “abduction”; Hanson called it “retroduction”. In the 1970’s Hickey (b. 1940) in his Introduction to Metascience: An Information Science Approach to Methodology of Scientific Research (1976) called the mechanized approach “metascience”. In the 1980’s philosopher of science Paul Thagard (b. 1950) in his Computational Philosophy of Science (1988) named it “computational philosophy of science”. Today in computational philosophy of science procedural strategies for the rational reconstruction of new theories are coded into the design of what Simon called “discovery systems”.
Thus, contemporary philosophy of science has taken the computational turn. Mechanized information processing for successful basic-research practices (a.k.a. “artificial intelligence”) has permeated almost every science, and is now intruding into philosophy of science. Today computerized discovery systems facilitate investigations in both the sciences and in philosophy of science in a new specialty called “computational philosophy of science”.
Mechanized reconstruction of successful developmental episodes in the history of science is typically used to test the plausibility of discovery-system designs. But the proof of the pudding is in the eating: the systems are further tested by applying them at the frontier of a science, where prediction is also production, in order to propose new, empirically superior theories. Now philosophers of science may be expected to practice what they preach by participating in basic-science research to produce empirically adequate contributions. Contemporary application of the discovery systems gives the philosopher of science a participatory and consequential rôle in basic-science research.
1.03 Two Perspectives on Language
Philosophy of language supplies an organizing analytical framework that integrates contemporary philosophy of science. In philosophy of language philosophers have since Alfred Tarski (1902-1982) distinguished two perspectives called “object language” and “metalanguage”.
Object language is discourse about nonlinguistic reality including domains that the particular sciences investigate as well as about the realities and experiences of ordinary everyday life.
Metalanguage is language about language, either object language or metalanguage.
Much of the discourse in philosophy of science is in the metalinguistic perspective. Important metalinguistic terms include “theory”, “law”, “test design”, “observation report” and “explanation”, all of which are pragmatic classifications of the uses of language. For example in the contemporary realistic neopragmatist philosophy a “theory” is a universally quantified hypothesis proposed for empirical testing. A “test design” is a universally quantified discourse presumed for the empirical testing of a theory in order to identify the subject of the theory independently of the theory and to describe the procedures for performing the test; it is viewed as unproblematic for the empirical test. The computer instructions coded in discovery systems are also metalinguistic expressions, because these systems input, process and output object language for the sciences.
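The object-language/metalanguage distinction can be sketched in code: object-language sentences are modeled as strings, and metalinguistic classification as predicates over those strings. This is a minimal illustration under hypothetical names, not an implementation of any particular logical system.

```python
# Object language: discourse about nonlinguistic reality.
object_sentence = "Every crow is black"

# Metalanguage: discourse about language.  A metalinguistic predicate
# classifies object-language sentences by their pragmatic use.
def is_universally_quantified(sentence: str) -> bool:
    """Crude syntactic check for the categorical form 'Every X is Y'."""
    return sentence.startswith("Every ") and " is " in sentence

# A metalinguistic statement about the object-language sentence:
print(f'"{object_sentence}" is universally quantified:',
      is_universally_quantified(object_sentence))
```

Note that the program itself is metalinguistic in the sense of section 1.03: it inputs and processes object-language expressions as data.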
1.04 Dimensions of Language
Using the metalinguistic perspective, philosophers analyze language into what Rudolf Carnap (1891-1970) called “dimensions” of language. The dimensions of interest to realistic neopragmatist philosophers are syntax, semantics, ontology, and pragmatics.
Syntax refers to the structure of language. Syntax is arrangements of symbols such as linguistic ink marks on paper, which display structure. Examples of syntactical symbols include terms such as words and mathematical variables and the sentences and mathematical equations constructed with the terms.
Syntactical rules regulate construction of grammatical expressions such as sentences and equations out of terms, which are usually arranged by concatenation into strings or in some cases organized into matrices or arrays.
Semantics refers to the meanings associated with syntactical symbols. Syntax without semantics is literally meaningless. Associating meanings with the symbols makes the syntax “semantically interpreted”.
A stereotypic pedagogical sentence structure that philosophers employ to exemplify their discussions about language is the categorical form of statement, such as “Every X is Y”; that practice is followed throughout this book.
Semantical rules describe and analyze the meanings associated with elementary syntactical symbols, i.e. terms. For heuristic demonstration philosophers have traditionally found simple statements in categorical form to be useful. In the metalinguistic perspective, belief in semantically interpreted universally quantified sentences such as the categorical affirmation “Every crow is black” enables such sentences to function as semantical rules displaying the complex meanings of their component descriptive terms. Belief in the statement “Every crow is black” makes the phrase “black crow” redundant, thus displaying the meaning of “black” as a component part of the meaning of “crow”. The lexical entries in a unilingual dictionary are an inventory of semantical rules for a language. This is not “rocket science”, but there are academic philosophers who prefer obscurantism and refuse to acknowledge componential semantics.
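The componential working of a semantical rule can be sketched as a toy model (the names and data structures here are illustrative assumptions, not a linguistic theory): each descriptive term's meaning is a set of component attributes, and believing a universal affirmation merges the predicate's components into the subject term's meaning.

```python
# Toy componential semantics: a term's meaning is a set of components.
lexicon = {"crow": {"bird"}, "black": {"black"}}

def believe(subject: str, predicate: str) -> None:
    """Accepting 'Every <subject> is <predicate>' makes the sentence a
    semantical rule: the predicate's components become component parts
    of the subject term's meaning."""
    lexicon[subject] |= lexicon[predicate]

believe("crow", "black")

# The phrase "black crow" is now redundant: "black" contributes no
# component not already contained in the meaning of "crow".
assert lexicon["black"] <= lexicon["crow"]
print(sorted(lexicon["crow"]))  # → ['bird', 'black']
```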
Ontology refers to the aspects of reality described by the relativized perspectivist semantics of interpreted sentences believed to be true, especially belief due to experience or to systematic empirical testing. This is the thesis of ontological relativity. Ontology is typically of greater interest to philosophers than to linguists.
Semantics is knowledge of reality, while ontology is reality as known, i.e. semantics is the perspectivist signification of reality, and ontology is the aspects of reality signified by semantics. Ontology is the aspects of mind-independent reality that are cognitively captured with a perspective revealed by a term’s semantics.
Not all discourses are equally realistic; the semantics and ontologies of discourses are as realistic as the discourses are empirically adequate. Since all semantics is relativized and ultimately comes from sense stimuli, there is no semantically interpreted syntax of language that is utterly devoid of any associated ontology. If all past falsified explanations were completely unrealistic, then so too are all currently accepted explanations and all future ones, because they are destined to be falsified eventually. It is better to acknowledge the limited degree of realism and truth that all explanations have to offer. Scientists recognize that they investigate reality and are motivated to do so. Few would have taken up their basic-research careers had they thought they were merely constructing fictions and fantasies with their theories or fabricating semantically vacuous discourses.
Pragmatics in philosophy of science refers to how scientists use language, namely to create and to test theories, and thereby to develop scientific laws used in test designs and in scientific explanations. The pragmatic dimension includes both the semantic and syntactical dimensions.
1.05 Classification of Functional Topics
Basic-science research practices can be classified into four essential sequential functions. They are:
1. Aim of Basic Science
The successful outcome (and thus the aim) of basic-science research is explanations, made by developing theories that satisfy critical empirical tests; such theories are thereby made scientific laws that can function in scientific explanations and test designs.
The institutionalized aim of basic science is the culturally shared aim that regulates development of explanations, which are the final products of basic-scientific research. The institutionalized views and values of science have evolved considerably over the last several centuries, and will continue to evolve episodically in unforeseeable ways with future advances of science.
2. Discovery
Discovery is the construction of new and empirically more adequate theories. A scientific theory is a universally quantified statement proposed for testing. The semantics of newly constructed theories reveal new perspectives and ontologies.
A mechanized discovery system produces a transition from an input-language state description containing currently available information to an output-language state description containing generated and tested new theories.
Contemporary realistic neopragmatism is consistent with computerized discovery systems, which aim to proceduralize and then to mechanize new theory construction, in order to advance contemporary science. The computerized discovery system is not a psychological theory; it is a constructional linguistic metatheory. To borrow a phrase first used in philosophy by Carnap in his Aufbau (1928), though with a different meaning in computational philosophy of science, a discovery system is a dynamic diachronic linguistic constructional procedure called a “rational reconstruction”, rational because it is procedural.
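The state-description transition can be sketched mechanically. The following toy generator is a hedged illustration only (actual discovery systems are far more sophisticated): its input state description is a list of observation reports, and its output state description contains the candidate universal hypotheses that survive testing against those reports.

```python
from itertools import permutations

# Input state description: observation reports, each modeled as the
# set of descriptive terms satisfied by one observed individual.
observations = [
    {"crow", "black"}, {"crow", "black"}, {"swan", "white"},
]

def discover(observations):
    """Generate and test candidate theories of the form 'Every s is p';
    return the output state description of unfalsified hypotheses."""
    vocabulary = set().union(*observations)
    output_state = []
    for s, p in permutations(sorted(vocabulary), 2):
        if any(s in obs and p not in obs for obs in observations):
            continue  # falsified by some observation report
        if any(s in obs for obs in observations):  # nonvacuously tested
            output_state.append(f"Every {s} is {p}")
    return output_state

print(discover(observations))  # surviving hypotheses include "Every crow is black"
```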
Both romantics and positivists define “theory” semantically, while contemporary realistic neopragmatists define “theory” pragmatically, i.e., by its function in basic-research science. Therefore for realistic neopragmatists “theory” is universally quantified language that is proposed for testing, and “test-design” is universally quantified language that is presumed for testing. And scientific laws are former theories that have been tested with nonfalsifying test outcomes.
3. Criticism
Criticism pertains to the criteria for the acceptance or rejection of theories. The only criterion for scientific criticism that is acknowledged by the contemporary realistic neopragmatist is the empirical criterion, which is operative in an empirical test.
On the realistic neopragmatist thesis of relativized semantics and ontological relativity, semantics and ontologies can never trump the empirical criterion for criticism, because acceptance of ontologies in science is based upon empirical adequacy of a theory especially as demonstrated by repeated nonfalsifying empirical test outcomes. Thus like the romantics, realistic neopragmatists permit description of intersubjective mental states in social-science theories and explanations, but unlike many romantic sociologists and economists realistic neopragmatists never require or employ such mentalistic description as a criterion for critical acceptance.
Syntactical transformations of the surface structure of theories produce the nontruth-functional hypothetical-conditional logical form, which exhibits the deep structure of the theory language in a test, thereby explicitly displaying the essential empirical contingency and the logic of falsification while preserving the semantics of the surface structure. Given the variety and complexity of surface-structure forms, the deep-structure form serves, as it were, as the essential common denominator for testing. The logic operative in the deep structure of an empirical test is a modus tollens deduction with the surface structure of the tested theory transformed into a nontruth-functional hypothetical-conditional statement. In practice, however, the surface structure actually used by scientists may be more convenient for empirical tests.
Test-designs are universally quantified statements that are presumed for testing. Test designs characterize the subject of the test, and describe procedures for execution of the test. They also include universal statements that are semantical rules for the test-outcome statements, which are asserted with particular quantification, when the test design is executed and the test outcome is produced.
Observation language consists of the particularly quantified test-design and test-outcome statements, with their semantics defined by the universally quantified test-design language.
4. Explanation
An explanation is language that describes the occurrence of individual events and conditions that are caused by the occurrence of other described individual events and conditions according to universally quantified law statements.
The surface structure of a law for an explanation may be very complex mathematics. But syntactical transformations producing the nontruth-functional hypothetical-conditional logical argument form generate the deep structure underlying the surface structure. The logic operative in the deep structure of an explanation is a modus ponens deduction with the surface structure of the explaining law transformed into a nontruth-functional hypothetical-conditional statement displaying both the empirical conditionality in the constituent laws and the logic of explanation. Whenever possible the explanation is predictive of future events or for evidence of past events due to the universality claim of the explaining law. Scientific laws are not unconditional, nor are explanations historicist or prophesying.
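The modus ponens schema of explanation can likewise be sketched. The law here is a hypothetical toy example, and again only the deductive bookkeeping is shown, not the nontruth-functional conditional itself.

```python
# The explaining law in hypothetical-conditional form: if the described
# antecedent conditions occur (A), then the explained event occurs (C).
law = ("rod is metal and heated", "rod expands")  # A -> C (toy example)

def explain(law, reported_fact):
    """Given the law and a report that A occurred, deduce C."""
    antecedent, consequent = law
    if reported_fact == antecedent:
        return consequent  # modus ponens: A and (A -> C), therefore C
    return None            # the law does not apply to this report

print(explain(law, "rod is metal and heated"))  # → rod expands
```

The same schema is predictive when the antecedent describes future conditions, in accord with the universality claim of the explaining law.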
In some cases laws may be said to be “explained” in the sense that a set of laws may be arranged into a deductive system with some laws derived from other laws. However, in a deductive system the choice of axioms is formally arbitrary.
1.06 Classification of Modern Philosophies
Twentieth-century philosophies of science may be segmented into three generic classes. They are romanticism, positivism and pragmatism. Romanticism is a philosophy for social and cultural sciences. Positivism is a philosophy for all sciences and it originated in reflection on Newtonian physics. Contemporary realistic neopragmatism is a philosophy for all sciences, and it originated in reflection on quantum physics.
Each generic type has many representative authors advocating philosophies expressing similar concepts for such metalinguistic terms as “theory”, “law” and “explanation”. Philosophies within each generic classification have their differences, but they are much more similar to each other than to those in either of the two other types. The relation between the philosophies and the four functional topics are cross-referenced in the next chapter.
Chapter 2. Modern Philosophies
This second chapter briefly sketches three generic types of twentieth-century philosophy of science in terms of the four functional topics. Philosophy of language will be taken up in chapter 3. Then all these elements will be integrated in a more detailed discussion of the four functional topics in chapter 4.
Romanticism has no representation in the natural sciences today, but it is still widely represented in the social sciences including economics and sociology. It has its roots in the German idealist philosophers, notably Immanuel Kant (1724-1804), a transitional figure between the enlightenment and romantic eras, and especially Georg Hegel (1770-1831) with the latter’s historicism and his emphasis on evolving ideas in social culture. Idealism is of purely antiquarian interest to philosophers today, and is irrelevant both to science and to philosophy of science.
Romantics have historically defaulted to the positivist philosophy for the natural sciences, but they reject using the positivist philosophy for the social sciences. Romantics maintain that there is a fundamental difference between sciences of nature and sciences of culture, i.e. social sciences.
Aim of science
For romantics the aim of the social sciences is an investigation of culture that yields an “interpretative understanding” of “human action”, by which is meant explanation of social interactions in terms of intersubjective mental states, i.e., shared ideas and motives, views and values including the economists’ maximizing behaviors as set forth in the rationality postulates, which are culturally shared by members of social groups.
This concept of the aim of science and of explanation is a “foundational agenda”, because it requires reduction of the social sciences to a social-psychology foundation, i.e., description of observed social behavior by reference to culturally shared intersubjective social-psychological mental states.
Discovery
Romantics say “social theory” is language describing intersubjective mental states, notably culturally shared ideas and motivations, which are deemed the causes of “human action”.
For romantics the creation of “theory” in social science may originate either:
(1) in the social scientist’s introspective reflection on his own ideas and motivations originating in his actual or imaginary personal experiences, which ideas and motives are then imputed to the social members he is investigating, or
(2) in empirical survey research reporting social members’ verbally expressed intersubjective ideas and motivations.
Some romantics call the imputed motives based in introspective reflection “substantive reasoning” or “interpretative understanding”. But all romantic social scientists deny that social theory can be developed by data analysis exclusively or by observation of overt behavior alone. Romantics thus oppose their view of the aim of science to that of the positivists’ such as the sociologist George Lundberg (1895-1966) and the behavioristic psychologist B.F. Skinner (1904-1990). Romantics say that they explain consciously purposeful and motivated “human action”, while behaviorists say they explain publicly observable “human behavior” with no reference to mental states.
Criticism
For romantics the criterion for criticism is “convincing interpretative understanding” that “makes substantive sense” of conscious motivations, which are deemed to be the underlying “causal mechanisms” of observed “human action”.
Causality is an ontological concept, and nearly all romantics impose their mentalistic ontology as the criterion for criticism, while making empirical or statistical analyses at most optional or supplementary.
Furthermore, many romantic social scientists require as a criterion that a social theory must be recognizable in the particular investigator’s own introspectively known intersubjective personal experience. In Max Weber’s (1864-1920) terms this is called verstehen. It is the thesis that empathetic insight is a necessary and valuable tool in the study of human action, which is without counterpart in the natural sciences. It effectively makes all sociology what has been called “folk sociology”.
Explanation
Romantics maintain that only “theory”, i.e., language describing intersubjective ideas and motives, can “explain” conscious purposeful human action.
Motives are the “mechanisms” referenced as “causal” explanations, which are also called “theoretical” explanations. Observed regularities are deemed incapable of “explaining”, even if they enable correct predictions.
Some formerly romantic social scientists such as the institutionalist economist Wesley C. Mitchell (1874-1948) and the functionalist sociologist Robert K. Merton (1910-2003) have instead chosen to focus on objective outcomes rather than intersubjective motives. This focus would institutionalize testability and thus validate the scientific status of sociology. But the focus on objective outcomes still represents a minority view in academic social science. Although philosophically anachronistic, romanticism still prevails among social “scientists” in academia today, and its antiscientific orientation continues to have residual effects in the social “sciences”.
Positivism was a reaction against the speculative metaphysics of the nineteenth century, and it carries forth many views of the preceding enlightenment era. Its origins are in the eighteenth-century British empiricist philosophers including John Locke (1632-1704) and most notably David Hume (1711-1776). But not until later in the nineteenth century did positivism get its name from the French philosopher Auguste Comte (1798-1857), who also founded positivist sociology.
The interwar “neopositivists” were the last incarnation of positivism. In 1936 Alfred J. Ayer (1910-1989) wrote a positivist manifesto titled Language, Truth and Logic, which dismissed all metaphysical discourse as literally nonsense, because metaphysical propositions are deemed empirically unverifiable. Therein he set forth the positivist verification principle of meaning that statements are semantically vacuous unless they are verifiable observationally.
Neopositivists also attempted to apply the symbolic logic fabricated by Bertrand Russell (1872-1970) and Alfred Whitehead (1861-1947) in their Principia Mathematica (1910-1913) early in the twentieth century. Neopositivists such as Carnap had fantasized that the Russellian truth-functional symbolic logic could serve philosophy as mathematics has served physics. They are therefore also called “logical positivists”. In the later twentieth century positivism was relegated to the dustbin of history.
Contrary to romantics, positivists believe that all sciences including the social sciences share the same philosophy of science. They therefore reject the romantics’ dichotomy of sciences of nature and sciences of culture.
The positivists’ ideas about all four of the functional topics in philosophy of science were greatly influenced by their reflections upon Newtonian physics.
Aim of science
For positivists the aim of science is to produce explanations having objectivity grounded in “observation language”, which by its nature describes observed phenomena.
Their concept of the aim of science is thus also called a “foundational agenda”, although the required foundation is quite different from that of the romantics. But not all positivists were foundationalists. Otto Neurath’s (1882-1945) famous antifoundational boat metaphor compares scientists to sailors who must rebuild their ship on the open sea when they are unable to break it down in dry dock on terra firma. Neurath was a member of the Vienna Circle positivists.
Discovery
Positivists believed that empirical laws are inferentially discovered by inductive generalization based on repeated observations. They define empirical laws as universally quantified statements containing only “observation terms” describing observable entities or phenomena.
Early positivists such as Ernst Mach (1826-1916) recognized only empirical laws for valid scientific explanations. But after Einstein’s achievements neopositivists such as Carnap recognized hypothetical theories for valid scientific explanations, if the theories could be linguistically related to language used to report the relevant observations. Unlike empirical laws, theories are not produced by induction from repeated singular observations.
Neopositivists believed that theories are discovered by creative imagination, but they left unexplained the creative process of developing theories. They define theories as universally quantified statements containing any “theoretical terms”, i.e., terms describing unobservable or never observed entities or phenomena.
Criticism
The early positivists’ criterion for criticism is publicly accessible observation expressed in language containing only “observation terms”, which are terms that describe only observable entities or phenomena.
The later positivists or neopositivists maintain that theories are indirectly and tentatively warranted by observationally based empirical laws, when the valid laws can be logically derived from the theories.
Like Hume, positivists deny that either laws or theories can be permanently validated empirically, but they require that the general laws be founded in observation language as a condition for the objectivity needed for valid science. And they maintain that particularly quantified observation statements describing singular events are incorrigible and beyond revision.
All positivists reject the romantics’ verstehen thesis of criticism. They argue that empathy is not a reliable tool, and that the methods of obtaining knowledge in the social sciences are the same as those used in the physical sciences. They complain that subjective verstehen may easily involve erroneous imputation of the idiosyncrasies of the observer’s experiences or fantasies to the subjects of inquiry.
Explanation
Positivists and specifically Carl Hempel (1905-1997) and Paul Oppenheim (1885-1977) in their “Logic of Explanation” in the journal Philosophy of Science (1948) advocate the “covering-law” schema for explanation.
According to the “covering-law” schema for explanation, statements describing observable individual events are explained if they are derived deductively from other observation-language statements describing observable individual events together with “covering”, i.e., universally quantified empirical laws.
This concept of explanation has also been called the “deductive-nomological model”.
The neopositivists also maintained that theories explain laws, when the theories are premises from which the empirical laws are deductively derived as theorems. The deduction is enabled by the mediation of “bridge principles”. Bridge principles are sentences that relate the theoretical terms in an explaining theory to the observation terms in the explained empirical laws. The paradigmatic case is the deduction of Kepler’s laws from Newton’s theory.
We are now said to be in a “postpositivist” era in the history of Western philosophy, but this term merely says that positivism has been relegated to history; it says nothing of what has replaced it. What has emerged is a new coherent master narrative appropriately called “contemporary realistic neopragmatism”, which was occasioned by Heisenberg’s reflections on his quantum theory, and is currently the ascendant philosophy in American academia. Contemporary realistic neopragmatism is a general philosophy for all empirical sciences, both social and natural.
Neopragmatism has antecedent versions in the classical pragmatists, notably those of Charles Peirce, William James (1842-1910) and John Dewey (1859-1952). Some theses in classical pragmatism such as the importance of belief have been carried forward into the new. In contemporary realistic neopragmatism belief is strategic, because it controls relativized semantics, which signifies and thus reveals a correspondingly relativized ontology that is realistic to the degree that the belief is empirically adequate. Especially important is Dewey’s emphasis on participation and his pragmatic thesis that the logical distinctions and methods of scientific inquiry develop out of scientists’ successful problem-solving processes.
The provenance of the contemporary realistic neopragmatist philosophy of science is 1932 Nobel-laureate physicist Werner Heisenberg’s (1901-1976) reflections on the language in his revolutionary quantum theory in microphysics. There have been various alternative semantics and thus ontologies proposed for the quantum theory. Most physicists today have accepted one that has ambiguously been called the “Copenhagen interpretation”.
There are two versions of the Copenhagen interpretation. Contrary to the alternative “hidden variables” view of David Bohm (1917-1992), both of the Copenhagen versions assert a thesis called “duality”. The duality thesis is that the wave and particle manifestations of the electron are two aspects of the same entity, as Heisenberg says in his Physical Principles of the Quantum Theory (1930), rather than two separate entities, as Bohm says.
1922 Nobel-laureate Niels Bohr (1885-1962), founder of the Copenhagen Institute for Physics, proposed a version called “complementarity”. His version says that the mathematical equations of quantum theory must be viewed instrumentally instead of descriptively, because only ordinary discourse and its refinement in the language of classical physics are able to describe physical reality. Instrumentalism is the doctrine that scientific theories are not descriptions of reality, but are meaningless yet useful linguistic instruments that enable correct predictions.
The quantum theory says that the electron has both wave and particle properties, but in classical physics the semantics of the terms “wave” and “particle” are mutually exclusive – a wave is spread out in space while a particle is a concentrated point. Therefore, Bohr maintained that description of the electron’s duality as both “wave” and “particle” is an empirically indispensable semantic antilogy that he called “complementarity”.
Heisenberg, a colleague of Bohr at the Copenhagen Institute, proposed an alternative version of the Copenhagen interpretation. His version also contains the idea of the wave-particle duality, but he said that the mathematical expression of the quantum theory is realistic and descriptive rather than merely instrumental. And since the equations describing both the wave and particle properties of the electron are mathematically consistent, he disliked Bohr’s complementarity antilogy. Later Erwin Schrödinger (1887-1961) showed that Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics are mathematically transformable into each other. Yale University’s Norwood Russell Hanson, an advocate of the Copenhagen physics, said in his Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science (1958) that Bohr maintained a “naïve epistemology”.
Duality is a thesis in physics while complementarity is a thesis in philosophy of language. These two versions of the Copenhagen interpretation differ not in their physics, but in their philosophy of language. Bohr’s philosophy is called a “naturalistic” view of semantics, which requires what in his Atomic Physics and the Description of Nature (1934) he called “forms of perception”. Heisenberg’s philosophy is today called an “artifactual” view of semantics, in which the equations of the quantum theory supply the linguistic context, which defines the concepts that the physicist uses for observation.
1921 Nobel-laureate physicist Albert Einstein (1879-1955) had influenced Heisenberg’s philosophy of language, which was later incorporated into the contemporary realistic neopragmatist philosophy of language. And consistent with his relativized semantics Heisenberg effectively practiced ontological relativity and maintained that the quantum reality exists as “potentia” prior to its determination as a wave or a particle by execution of a measurement operation. For Heisenberg indeterminacy is physically real.
The term “complementarity” has since acquired some conventionality to signify duality, and is now ambiguous as to the issue between Bohr and Heisenberg, since physicists typically disregard the linguistic issue.
For more about Heisenberg and quantum theory the reader is referred to BOOK II and BOOK IV at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.
The distinctive linguistic philosophy of Einstein and especially Heisenberg as incorporated into the contemporary realistic neopragmatist philosophy can be summarized in three theses, which may be taken as basic principles in contemporary realistic neopragmatist philosophy of language:
Thesis I: Relativized semantics
Relativized semantics consists of the perspectivist meanings defined by a linguistic context of universally quantified statements believed to be true.
The seminal work is “Quantum Mechanics and a Talk with Einstein (1925-1926)” in Heisenberg’s Physics and Beyond (1971). There Heisenberg relates that in April 1925, when he presented his matrix-mechanics quantum physics to the prestigious Physics Colloquium at the University of Berlin, Einstein, who was in the assembly, afterward invited Heisenberg to chat in his home that evening. In their conversation Einstein said that he no longer accepts the positivist view of observation including such positivist ideas as operational definitions. Instead he issued the aphorism: “the theory decides what the physicist can observe”.
The event was historic. Einstein’s aphorism about observation contradicts the fundamental positivist thesis that there is a natural dichotomous separation between the semantics of observation language and that of theory language. Positivists believed that the objectivity of science requires that the vocabulary and semantics used for incorrigible observation must be uncontaminated by the vocabulary and semantics of speculative and provisional theory, if indeed theory is meaningful at all.
In the next chapter titled “Fresh Fields (1926-1927)” in the same book Heisenberg reports that Einstein’s 1925 discussion with him in Berlin had later occasioned his own reconsideration of observation. Heisenberg recognized that classical Newtonian physical theory had led him to conceptualize the observed track of the electron as continuous in the cloud chamber – an instrument for microphysical observation developed by 1927 Nobel-laureate C.T.R. Wilson (1869-1961) – and therefore to misconceive the electron as simultaneously having a definite position and momentum like all Newtonian bodies in motion.
Recalling Einstein’s aphorism that the theory decides what the physicist can observe, Heisenberg reconsidered what is observed in the cloud chamber. He rephrased his question about the electron tracks in the cloud chamber using the concepts of the new quantum theory instead of those of the classical Newtonian theory. He therefore reports that he asked himself: Can the quantum mechanics represent the fact that an electron finds itself approximately in a given place and that it moves approximately at a given momentum? In answer to this newly formulated question he found that these approximations can be represented mathematically. He reports that he then developed a mathematical representation, which he called the “uncertainty relations”, the historic contribution for which he received the Nobel Prize in 1932.
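The mathematical representation Heisenberg found can be stated, in the sharpened form later due to Kennard (Heisenberg's own 1927 statement gave only the order of magnitude), as:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

where Δx and Δp are the standard deviations of position and momentum measurements and ħ is the reduced Planck constant. The inequality expresses mathematically the “approximately” in Heisenberg’s reformulated question: position and momentum each admit of approximate determination, but not simultaneous exact determination.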
Later Hanson expressed Einstein’s aphorism that the theory decides what the physicist can observe by saying observation is “theory-laden” and likewise Karl R. Popper (1902-1994) by saying it is “theory-impregnated”. Thus for realistic neopragmatists the semantics of all descriptive terms is determined by the linguistic context consisting of universally quantified statements believed to be true.
In his Against Method (1975) Paul K. Feyerabend (1924-1994) also recognized employment of relativized semantics to create new observation language for discovery, and he called the practice “counterinduction”. To understand counterinduction, it is necessary to understand the realistic neopragmatist concept of “theory”: theories are universally quantified statements that are proposed for testing. Feyerabend found that Galileo (1564-1642) had practiced counterinduction in the Dialogue Concerning the Two Chief World Systems (1632), where Galileo reinterpreted apparently falsifying observations in common experience by using the concepts from the apparently falsified heliocentric theory instead of the concepts from the prevailing geocentric theory. Likewise, Heisenberg had also practiced counterinduction to reconceptualize the perceived sense stimuli observed as the electron track in the cloud chamber by using quantum concepts instead of classical Newtonian concepts that appeared to falsify quantum physics, and he then developed the indeterminacy relations.
Counterinduction is using the semantics of an apparently falsified theory to revise the test-design language that had supplied the semantics of the language describing the apparently falsifying observations, and thereby to produce new language for observation.
Like Einstein, contemporary realistic neopragmatists say that the theory decides what the scientist can observe. Reality has its determinate character independently of human cognition. But realistic semantics is relativized in the sense that the meanings of descriptive terms used for reporting observations are not merely names or labels for such things as Aristotelian forms, Kantian phenomena or positivist sensations, much less for nominalist entities in a cookie-cutter world, but rather are defined by the context in which they occur.
More specifically in “Five Milestones of Empiricism” in his Theories and Things (1981) Harvard’s realistic neopragmatist philosopher of language Willard van Quine says that the meanings of words are abstractions from the truth conditions of the sentences that contain them, and that it was this recognition of the semantic primacy of sentences that gives us contextual definition.
The defining context consists of universally quantified statements that proponents believe to be true. The significance is that the acceptance of a new theory superseding an earlier one and sharing some of the same descriptive terms produces a semantical change in the descriptive terms shared by the theories and their common test design. The change consists of replacement of some semantical component parts in the meanings of the terms in the old theory with some parts in the meanings of the terms in the new theory. Thus, Einstein for example changed the meanings of such terms as “space” and “time”, which occur in both the Newtonian and relativity theories. And Heisenberg changed the meanings of the terms “wave” and “particle”, such that they are no longer mutually exclusive.
For more about Quine the reader is referred to BOOK III at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.
Thesis II: Empirical underdetermination
Empirical underdetermination refers to the limited ability of the perspectivist semantics of language at any given time to signify reality.
Measurement errors or inaccuracies and conceptual vagueness, which can be reduced indefinitely but never completely eliminated, exemplify the ever-present empirical underdetermination of descriptive language that occasions observational ambiguity and theoretical pluralism. Even concepts of quantized phenomena have vagueness. But no semantically interpreted syntax is completely underdetermined empirically such that it is utterly devoid of any ontological significance.
Einstein recognized that a plurality of alternative but empirically adequate theories could be consistent with the same observational description, a situation that he called “an embarrassment of riches”. Additional context including law statements in improved test-design language contributes additional semantics to the observational description in the test designs, thus reducing while never completely eliminating empirical underdetermination.
In his Word and Object (1960) Quine introduced the phrase “empirical underdetermination”, and wrote that the positivists’ “theoretical” terms are merely more empirically underdetermined than terms they called “observation” terms. Thus contrary to positivists the types of terms are not qualitatively different. Quine also says that reference to ontology is “inscrutable”; reference to relativized ontology is as inscrutable as signification by semantics is empirically underdetermined.