INTRODUCTION TO PHILOSOPHY OF SCIENCE
Book I Page 3
3.19 Rejection of Meaning Invariance
The semantics of every descriptive term is determined by the term’s linguistic context, consisting of a set of universally logically quantified statements believed to be true, such that a change in any of those contextual beliefs changes some component parts of the term’s meaning.
In science the linguistic context consisting of universally quantified statements believed to be true may include both theories undergoing or awaiting empirical testing and law statements used in test designs, which jointly contribute to the semantics of their shared descriptive terms.
When the observation-theory dichotomy is rejected, the language that reports observations becomes subject to semantical change or what Feyerabend called “meaning variance”. For the convinced believer in a theory the statements of the theory contribute meaning parts to the semantics of descriptive language used to report observations, such that for the believer a revision of the theory changes part of the semantics of the relevant observational description.
3.20 Rejection of the Analytic-Synthetic Dichotomy
All universally quantified categorical affirmations believed to be true are both analytic and empirical.
On the positivist view the truth of analytic sentences can be known a priori, i.e., by reflection on the meanings of their descriptive terms, while synthetic sentences require empirical investigation to determine their truth status, such that their truth can only be known a posteriori. Thus to know the truth status of the analytic sentence “Every bachelor is unmarried”, it is unnecessary to take a survey of bachelors to determine whether or not any such men are currently married. However, determining the truth status of the sentence “Every crow is black” requires an empirical investigation of the crow-bird population and then a generalizing inference.
On the alternative realistic neopragmatist view the semantics of all descriptive terms are contextually determined, such that all universally quantified categorical affirmations believed to be true are analytic statements. But their truth status is not thereby known a priori, because they are also synthetic, i.e., empirical, firstly known a posteriori by experience.
This dualism implies that when any universally quantified affirmation is believed to be empirically true, the sentence can then be used analytically as a semantical rule, such that the meaning of its predicate offers a partial analysis of the meaning of its subject term. To express this analytic-empirical dualism Quine used the phrase “analytical hypotheses”.
Thus “Every crow is black” is as analytic as “Every bachelor is unmarried”, so long as both statements are believed to be true. The meaning of “bachelor” includes the idea of being unmarried and makes the phrase “unmarried bachelor” pleonastic. Similarly so long as one believes that all crows are black, then the meaning of “crow” includes the idea of being black and makes the phrase “black crow” pleonastic. The only difference between the beliefs is the degree of conventionality in usage, such that the phrase “married bachelor” seems more antilogous than the phrase “white crow”.
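The analytic-empirical dualism described above can be given a toy rendering. The following Python sketch is my own illustration, not from the text; its data structures and example beliefs are hypothetical. It models each believed universal affirmation “Every S is P” as contributing the predicate P to the meaning complex of the subject term S, so that a phrase is pleonastic exactly as long as the corresponding belief is held:

```python
# Illustrative sketch (not from the text): each believed universal
# affirmation "Every S is P" contributes its predicate P to the
# meaning complex of the subject term S.

beliefs = {("bachelor", "unmarried"), ("crow", "black"), ("crow", "bird")}

def meaning_complex(term, beliefs):
    """Meaning components contributed by believed affirmations about the term."""
    return {pred for (subj, pred) in beliefs if subj == term}

def pleonastic(adjective, noun, beliefs):
    """A phrase like 'black crow' is pleonastic while the belief is held."""
    return adjective in meaning_complex(noun, beliefs)

print(pleonastic("unmarried", "bachelor", beliefs))  # True
print(pleonastic("black", "crow", beliefs))          # True, while believed
beliefs.discard(("crow", "black"))  # belief revised, e.g. a white crow is found
print(pleonastic("black", "crow", beliefs))          # False: the meaning changed
```

Revising the belief set changes the meaning complex, which is the “meaning variance” of Section 3.19 in miniature.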
In science the most important reason for belief is empirical adequacy demonstrated by reproducible and repeated nonfalsifying empirical test outcomes. Thus it may be said that while the Kantians conjured the synthetic a priori to fabricate fictitious eternal verities for science, the realistic neopragmatists on the contrary recognize the analytic a posteriori to justify decisive empirical criticism for science.
3.21 Semantical Rules
A semantical rule is a universally logically quantified statement or equation believed to be true and viewed in logical supposition in the metalinguistic perspective, such that the meaning of the predicate term displays some of the component parts of the meaning of the subject term.
The above discussion of analyticity leads immediately to the idea of “semantical rules”, a phrase also found in the writings of such philosophers as Carnap and Alonzo Church (1903-1995) but with a different meaning in the realistic neopragmatist philosophy. In the contemporary realistic neopragmatist philosophy semantical rules are statements in the metalinguistic perspective, because they are about language. And their constituent terms are viewed in logical supposition, because as semantical rules the statements are about meanings as opposed to nonlinguistic reality (See below, Section 3.26).
Semantical rules are enabled by the complex nature of the semantics of descriptive terms. But due to psychological habit that enables prereflective linguistic fluency, meanings are experienced wholistically and unreflectively. Thus if a fluent speaker of English were asked about crows, his answer would likely be in ontological terms such as the real creature’s black color rather than as a reflection on the componential semantics of the term “crow” with its semantical component of black. Reflective semantical analysis is needed to appreciate the componential nature of the meanings of descriptive terms.
3.22 Componential vs. Wholistic Semantics
On the realistic neopragmatist view when there is a transition due to a falsifying test outcome from an old theory to a new theory having the same test design, for the advocates of the falsified old theory who consequently reconsider, there occurs a semantical change in the descriptive terms shared by the old and new theories, due to the replacement of some of the meaning parts of the old theory with meaning parts from the tested and nonfalsified new theory. But the meaning parts contributed by the common test-design language remain unaffected.
Semantical change had vexed the early post-modern pragmatists, when they initially accepted the artifactual thesis of the semantics of language. When they rejected a priori analytic truth, many of them mistakenly also rejected analyticity altogether. And when they accepted the contextual determination of meaning, they mistakenly took an indefinitely large context as the elemental unit of language for consideration. They typically construed this elemental context as consisting of either an explicitly stated whole theory with no criteria for individuating theories, or an even more inclusive “paradigm”, i.e., a whole theory together with many associated pre-articulate skills and tacit beliefs. This wholistic (or “holistic”) semantical thesis is due to using the psychological experience of meaning instead of making semantic analyses that enable recognition of the componential nature of lexical meaning.
On this wholistic view therefore a new theory that succeeds an alternative older one must, as Feyerabend maintains, completely replace the older theory including all its observational semantics and ontology, because its semantics is viewed as an indivisible unit. In his Patterns of Discovery Hanson attempted to explain such wholism in terms of Gestalt psychology. And following Hanson the historian of science Kuhn, who wrote a popular monograph titled The Structure of Scientific Revolutions, explained the complete replacement of an old theory by a newer one as a “Gestalt switch”.
The philosopher of science Feyerabend tenaciously maintained wholism, but attempted to explain it by his own interpretation of an ambiguity he found in Benjamin Lee Whorf’s (1897-1941) thesis of linguistic relativity, also known as the “Sapir-Whorf hypothesis”, formulated jointly by Whorf and Edward Sapir (1884-1939), a Yale University linguist. In his “Explanation, Reduction and Empiricism” in Minnesota Studies in the Philosophy of Science (1962) and later in his Against Method Feyerabend proposes semantic “incommensurability”, which he says is evident when an alternative theory is not recognized to be an alternative. He cites the transition from classical to quantum physics as an example of such semantic incommensurability.
The thesis of semantic incommensurability was also advocated by Kuhn, who later proposed “incommensurability with comparability”. But incommensurability with comparability is inconsistent as Hilary Putnam (1926-2016) observed in his Reason, Truth and History (1981), because comparison presupposes that there are some commensurabilities. Kuhn then revised the idea to admit “partial” incommensurability that he believed enables incommensurability with comparability but without explaining how incommensurability can be partial.
Semantic incommensurability can only occur in language describing phenomena that have never previously been observed, i.e., an observation for which the current state of the language has no semantic values (See below, Section 3.24). But it is very seldom that a new observation is indescribable with the stock of existing descriptive terms in the language. In such cases of novelty the scientist may resort to what Hanson called “phenomenal seeing”. Furthermore incommensurability does not occur in scientific revolution understood as theory revision, which is a reorganization of pre-existing articulate descriptive language.
A wholistic semantical thesis, including notably the semantic incommensurability thesis, creates a pseudo problem for the decidability of empirical testing in science, because the transition to a new theory implies complete replacement of the semantics of the descriptive terms used for test design and observation. Complete replacement deprives the two alternative theories of any semantical continuity, such that their language cannot even describe the same phenomena or address the same problem. In fact the new theory cannot even be said to be an alternative to the old one, much less an empirically more adequate one. Such empirical undecidability due to alleged semantical wholism would logically deny science both production of progress and recognition of its history of advancement. The untenable character of the situation recalls that of the French entomologist Antoine Magnan, whose book titled Insect Flight (1934) set forth a contemporary aerodynamic analysis purporting to prove that bees cannot fly. But bees do fly, and empirical tests do decide.
The thesis of componential semantics resolves the wholistic semantical muddle in the linguistic theses proffered by philosophers such as Hanson, Kuhn and Feyerabend. Philosophers of science have overlooked componential semantics, but linguists have long recognized componential analysis in semantics, as may be found for example in George L. Dillon’s (born 1944) Introduction to Contemporary Linguistic Semantics (1977). Some other linguists use the phrase “lexical decomposition”. With the componential semantical thesis it is unnecessary to accept any wholistic view of semantics, much less any incommensurable discontinuity in language in episodes of theory development.
The expression of the componential aspect of semantics most familiar to philosophers of language is the analytic statement. But the realistic neopragmatists’ rejection of the analytic-synthetic dichotomy with its a priori truth claim need not imply the rejection of analyticity as such. The contextual determination of meaning exploits the analytic-empirical dualism. When there is a semantical change in the descriptive terms in a system of beliefs due to a revision of some of the beliefs, some component parts of the terms’ complex meanings remain unaffected, while other parts are dropped and new ones added. Thus on the realistic neopragmatist view when there is a transition from an old theory to a new theory having the same test design, for the former advocates of the falsified old theory there occurs a semantical change in the descriptive terms shared by the old and new theories, due to the replacement of the meaning parts from the old theory with meaning parts from the tested and nonfalsified new theory, while the shared meaning parts contributed by the common test-design language remain unaffected.
For empirical testing in science the component meaning parts that remain unaffected by the change from one theory to a later alternative one consist of those parts contributed by the statements of test design shared by the alternative theories. Therein is found the semantical continuity that enables empirical testing of alternative theories to be decidable between them.
Thus a revolutionary change in scientific theory, such as for example the replacement of Newton’s theory of gravitation with Einstein’s, has the effect of changing only part of the semantics of the terms common to both the old and new theories. It leaves the semantics supplied by test-design language unaffected, so Arthur Eddington (1882-1944) could test both Newton’s and Einstein’s theories of gravitation simultaneously by describing the celestial photographic observations in his 1919-eclipse test. There is no semantic incommensurability between these theories.
For more about the philosophies of Kuhn, Feyerabend, and Eddington’s 1919-eclipse test readers are referred to BOOK VI at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.
3.23 Componential Artifactual Semantics Illustrated
The set of affirmations believed to be true and predicating characteristics universally and univocally of the term “crow” such as “Every crow is black” are semantical rules describing component parts of the complex meaning of “crow”. But if a field ornithologist captures a white bird specimen that exhibits all the characteristics of a crow except its black color, he must make a semantical decision. He must decide whether he will continue to believe “Every crow is black” and that he holds in his birdcage some kind of white noncrow bird, or whether he will no longer believe “Every crow is black” and that the white rara avis in his birdcage is a white crow, such as perhaps an albino crow. Thus a semantical decision must be made. Color could be made a criterion for species identification instead of the ability to breed, although many other beliefs would also then be affected, an inconvenience that is typically avoided as a disturbing violation of the linguistic preference that Quine calls the principle of “minimum mutilation” of the web of belief.
Use of statements like “Every crow is black” may seem simplistic for science (if not quite bird-brained). But as it happens, a noteworthy revision in the semantics and ontology of birds has occurred due to a five-year genetic study launched by the Field Museum of Natural History in Chicago, the results of which were reported in the journal Science in June 2008. An extensive computer analysis of 30,000 pieces of nineteen bird genes showed that contrary to previously held belief falcons are genetically more closely related to parrots than to hawks, and furthermore that falcons should no longer be classified in the biological order originally named for them. As a result of the new genetic basis for classification, the American Ornithologists’ Union has revised its official organization of bird species, and many bird watchers’ field guides have been revised accordingly. Now well-informed bird watchers will classify, conceptualize and observe falcons differently, because some parts of the meaning complex for the term “falcon” have been replaced with a genetically based conceptualization. Yet given the complexity of genetics some biologists argue that the concept of species is arbitrary.
Our semantical decisions alone neither create, nor annihilate, nor change mind-independent reality. But semantical decisions may change our mind-dependent linguistic characterizations of mind-independent reality and thus the ontologies, i.e., the signified aspects of reality that the changed semantics reveals. This is due to the perspectivist nature of relativized semantics and thus of relativized ontology.
3.24 Semantic Values
Semantic values are the elementary semantic component parts distributed among the meaning complexes associated with the descriptive terms of a language at a point in time.
For every descriptive term there are semantical rules with each rule’s predicate describing component parts of the common subject term’s meaning complex. A linguistic system therefore contains a great multitude of elementary components of meaning complexes that are shared by many descriptive terms, but are never uniquely associated with any single term, because all words have dictionary definitions analyzing the lexical entry’s several component parts. These elementary components may be called “semantic values”.
Semantic values are the smallest elements in any meaning complex at a given point in time, and thus they describe the most elementary ontological aspects of the real world that are distinguished by the semantics of a language at the given point in time. The indefinitely vast residual mind-independent reality not captured by any semantic values and that the language user’s semantics is therefore unable to signify at the given point in time is due to the vast empirical underdetermination of the whole language at the time.
Different languages have different semantics and therefore display different ontologies. Where the semantics of one language displays some semantic values not contained in the semantics of another language, the two languages may be said to be semantically incommensurable to one another. Translation is therefore made inexact, as has long been recognized by the old refrain, “traduttore, traditore”.
A science at different times in its history may also have semantically incommensurable language, when a later theory contains semantic values not contained in the earlier law or theory with even the same test design. But incommensurability does not occur in scientific revolutions understood as theory revisions, because the revision is a reorganization of pre-existing articulate information. When incommensurability occurs, it occurs at times of discovery that occasion articulation of new semantic values due to new observations, even though the new observations may occasion a later theory revision.
3.25 Univocal and Equivocal Terms
A descriptive term’s use is univocal, if no universally quantified negative categorical statement accepted as true can relate any of the predicates in the several universal categorical affirmations functioning as semantical rules for the same subject term. Otherwise the term is equivocal.
If two semantical rules have the form “Every X is A” and “Every X is B”, and if it is also believed that “No A is B”, then the terms “A” and “B” symbolize parts of different meanings for the term “X”, and “X” is equivocal. Otherwise “A” and “B” symbolize different parts of the same meaning complex associated with the univocal term “X”.
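This criterion is mechanical enough to sketch in code. The following Python fragment is my own illustration, not from the text; the example beliefs about “pen” and “crow” are hypothetical. It models universal affirmations and negations as simple sets and tests a term for equivocation:

```python
# Illustrative sketch (not from the text) of the equivocation criterion:
# a term X is equivocal if some believed negation "No A is B" relates
# two predicates A and B universally affirmed of X.

def is_equivocal(term, affirmations, negations):
    """True if two predicates affirmed of the term are related by a negation."""
    predicates = affirmations.get(term, set())
    return any(frozenset((a, b)) in negations
               for a in predicates for b in predicates if a != b)

# Hypothetical beliefs: "pen" as a writing instrument vs. an enclosure.
affirmations = {"pen": {"writing instrument", "animal enclosure"},
                "crow": {"black", "bird"}}
negations = {frozenset(("writing instrument", "animal enclosure"))}

print(is_equivocal("pen", affirmations, negations))   # True: equivocal
print(is_equivocal("crow", affirmations, negations))  # False: univocal
```

Without the believed negation relating its predicates, “pen” too would count as univocal, which is just the criterion stated above.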
The definitions of descriptive terms such as common nouns and verbs in a unilingual dictionary function as semantical rules. Implicitly they are universally quantified logically, and are always presumed to be true. Usually each lexical entry in a large dictionary such as the Oxford English Dictionary offers several different meanings for a descriptive term, because terms are routinely equivocal. Language economizes on words by giving them several different meanings, which the fluent listener or reader can distinguish in context. Equivocations are the raw materials for puns (and for deconstructionist escapades). There is always at least one semantical rule for the meaning complex for each univocal use of a descriptive term, because to be meaningful, the term must be part of the linguistic system of beliefs. If the use is conventional, it must be capable of a lexical entry in a unilingual dictionary, or otherwise recognized by members of some trade or clique as part of their argot.
A definition, i.e., a lexical entry in a unilingual dictionary, functions as a semantical rule. But the dictionary definition is only a minimal description of the meaning complex of a univocal descriptive term, and it is seldom the whole description. Univocal terms routinely have many semantical rules, when many characteristics can be predicated of a given subject in universally quantified beliefs. Thus there are multiple predicates that universally characterize crows, characteristics known to the ornithologist, and which may fill a paragraph or more in his ornithological reference book.
Descriptive terms can become partially equivocal through time, when some parts of the term’s meaning complex are unaffected by a change of defining beliefs, while other parts are simply dropped as archaic or are replaced by new parts contributed by new beliefs. In science this partial equivocation occurs when one theory is replaced by a newer one due to a test outcome, while the test design for both theories remains the same. A term common to old and new theory may on occasion remain univocal only with respect to the parts contributed by the test-design language.
3.26 Signification and Supposition
Supposition enables identifying ambiguities not due to differences in signification that make equivocations, but instead are ambiguities due to differences in relating the semantics to its ontology.
The signification of a descriptive term is its meaning, and terms with two or more alternative significations are equivocal in the sense described immediately above in Section 3.25. The signification of a univocal term has different suppositions, when it describes its ontology differently due to its having different functions in the sentences containing it.
Historically the subject term in the categorical proposition is said to be in “personal” supposition, because it references entities, while the predicate term is said to be in “simple” or “formal” supposition, because the predicate signifies attributes without referencing any individual entities manifesting the attributes. For this reason unlike the subject term the predicate term in the categorical proposition is not logically quantified with any syncategorematic quantifiers such as “every” or “some”. For example in “Every crow is black” the subject term “crow” is in personal supposition, while the predicate “black” is in simple supposition; so too for “No crow is black”.
The subject-term rôle in a sentence in object language has personal supposition, because it references entities.
The predicate-term rôle in a sentence in object language has simple or formal supposition, because it signifies attributes without referencing the entities manifesting the attributes.
Both personal and simple suppositions are types of “real” supposition, because they are different ways of talking about extramental nonlinguistic reality. They operate in expressions in object language and thus describe ontologies as either attributes or the referenced individuals characterized by the signified attributes.
In logical supposition the meaning of a term is considered specifically as a meaning.
Real supposition is contrasted with “logical” supposition, in which the meaning of the term is considered in the metalinguistic perspective exclusively as a meaning, i.e., only semantics is considered and not extramental ontology. For example in “Black is a component part of the meaning of crow”, the terms “crow” and “black” in this statement are in logical supposition. Similarly to say in explicit metalanguage “‘Every crow is black’ is a semantical rule” to express “Black is a component part of the meaning of crow”, is again to use both “crow” and “black” in logical supposition.
Furthermore just to use “Every crow is black” as a semantical rule in order to exhibit its meaning composition without actually saying that it is a semantical rule, is also to use the sentence in the metalinguistic perspective and in logical supposition. The difference between real and logical supposition in such use of a sentence is not exhibited syntactically, but is pragmatic and depends on a greater context revealing the intention of the writer or speaker. Whenever a universally quantified categorical affirmation is used in the metalinguistic perspective as a semantical rule for analysis in the semantical dimension, both the subject and predicate terms are in logical supposition. Lexical entries in dictionaries are in the metalinguistic perspective and in logical supposition, because they are about language and are intended to describe meanings.
In all the above types of supposition the same univocal term has the same signification. But another type of so-called supposition proposed incorrectly in ancient times is “material supposition”, in which the term is referenced in metalanguage as a linguistic symbol in the syntactical dimension with no reference to a term’s semantics or ontology. An example is “’Crow’ is a four-letter word”. In this example “crow” does not refer either to the individual real bird or to its characteristics as in real supposition or to the universal concept of the creature as in logical supposition. Thus material supposition is not supposition properly so called, because the signification is different. It is actually an alternative meaning and thus a type of semantical equivocation. Some modern philosophers have used other vocabularies for recognizing this equivocation, such as Stanisław Leśniewski’s (1886-1939) “use” (semantics) vs. “mention” (syntax) and Carnap’s “material mode” (semantics) vs. “formal mode” (syntax).
3.27 Aside on Metaphor
A metaphor is a predication to a subject term that is intended to include only selected parts of the meaning complex conventionally associated with the predicate term, so the metaphorical predication is a true statement due to the exclusion of the remaining parts in the predicate’s meaning complex that would conventionally make the metaphorical predication a false statement.
In the last-gasp days of decadent neopositivism some positivist philosophers invoked the idea of metaphor to explain the semantics of theoretical terms. And a few were closet Cartesians who used it in the charade of justifying realism for theoretical terms. The theoretical term was the positivists’ favorite hobbyhorse. But both realism and the semantics of theories are unproblematic for contemporary realistic neopragmatists. In his “Posits and Reality” (1954), reprinted in his Ways of Paradox (1966), Quine said that all language is empirically underdetermined, and that the only difference between positing microphysical entities [like electrons] and macrophysical entities [like elephants] is that the statements describing the former are more empirically underdetermined than those describing the latter. Thus contrary to the neopositivists the realistic neopragmatists admit no qualitative dichotomy between the positivists’ so-called observation terms and their so-called theoretical terms.
As science and technology advance, concepts of microphysical entities like electrons are made less empirically underdetermined, as occurred for example with the development of the cloud chamber. While contemporary realistic neopragmatist philosophers of science recognize no need to explain so-called theoretical terms by metaphor or otherwise, metaphor is nevertheless a linguistic phenomenon often involving semantical change and it can easily be analyzed and explained with componential semantics.
It has been said that metaphors are both (unconventionally) true and (conventionally) false. In a speaker or writer’s conventional or so-called “literal” linguistic usage the entire conventional meaning complex associated with a univocal predicate term of a universal categorical affirmation is operative. But in a speaker or writer’s metaphorical linguistic usage only some selected component part or parts of the entire meaning complex associated with the univocal predicate term are operative, and the remaining parts of the meaning complex are intended to be excluded, i.e., suspended from consideration and ignored. If the excluded parts were included, then the metaphorical statement would indeed be false. But the speaker or writer implicitly expects the hearer or reader to recognize and suspend from consideration the excluded parts of the predicate’s conventional semantics, while the speaker or writer uses the component part that he has tacitly selected for describing the subject truly.
Consider for example the metaphorical statement “Every man is a wolf.” The selected meaning component associated with “wolf” that is intended to be predicated truly of “man” might describe the wolf’s predatory behaviors, while the animal’s quadrupedal anatomy, which is conventionally associated with “wolf”, is among the excluded meaning components for “wolf” that are not intended to be predicated truly of “man”.
A listener or reader may or may not succeed in understanding the metaphorical predication depending on his ability to select the applicable parts of the predicate’s semantics tacitly intended by the issuer of the metaphor. But there is nothing arcane or mysterious about metaphors, because they can be explained in “literal” (i.e., conventional) terms to the uncomprehending listener or reader. To explain the metaphorical predication of a descriptive term to a subject term is to list explicitly those categorical affirmations intended to be true of that subject and that set forth just those parts of the predicate’s meaning that the issuer of the metaphor intends to be applicable.
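This account of metaphor can likewise be given a toy rendering. In the Python sketch below, which is my own illustration and not from the text, the trait names are hypothetical; a metaphorical predication is true just in case every selected, non-suspended component of the predicate’s conventional meaning complex applies to the subject:

```python
# Illustrative sketch (not from the text): a metaphor predicates only
# selected components of the predicate term's conventional meaning
# complex; the remaining components are suspended from consideration.

wolf_meaning = {"predatory", "quadrupedal", "furred", "canine"}
man_traits = {"predatory", "bipedal", "rational"}

def metaphorical_truth(subject_traits, predicate_meaning, selected):
    """True iff every selected (non-suspended) component applies to the subject."""
    assert selected <= predicate_meaning, "selection must come from the complex"
    return selected <= subject_traits

# "Every man is a wolf" with only the predatory component selected:
print(metaphorical_truth(man_traits, wolf_meaning, {"predatory"}))  # True
# Conventional ("literal") use selects the whole complex, false of man:
print(metaphorical_truth(man_traits, wolf_meaning, wolf_meaning))   # False
```

The same function exhibits why metaphors are both (unconventionally) true and (conventionally) false: the truth value turns entirely on which components are selected.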
The explanation may be further elaborated by listing separately the categorical affirmations that are not viewed as true of the subject, but which are associated with the predicated term when it is predicated conventionally. Or these may be expressed as universal negations stating what is intended to be excluded from the predicate’s meaning complex in the particular metaphorical predication, e.g., “No man is quadrupedal.” In fact such negative statements might be given as hints by a picaresque issuer of the metaphor for the uncomprehending listener.
A semantical change occurs when the metaphorical predication becomes conventional, and this change to conventionality produces an equivocation. The equivocation consists of two “literal” meanings: the original one and a derivative meaning that is now a dead metaphor. As a dead man is no longer a man, so a dead metaphor is no longer a metaphor. A dead metaphor is a meaning from which the suspended parts in the metaphor have become conventionally excluded to produce a new “literal” meaning. Trite metaphors, when not just forgotten, metamorphose into new literals, as they eventually become conventional.
There is an alternative “interactionist” concept of metaphor that was proposed by Max Black (1909-1988), a Cambridge-educated analytic philosopher, in his Models and Metaphors (1962). On Black’s interactionist view both the subject and predicate terms change their meanings in the metaphorical statement due to a semantical “interaction” between them. Black does not describe the process of interaction. Curiously he claims for example that the metaphorical statement “Man is a wolf” makes wolves seem more human and men seem more lupine. This is merely obscurantism; it is not logical, because the statement “Every man is a wolf” is not universally convertible; recall the ancient doctrine of conversion in logic: “Every man is a wolf” does not imply logically “Every wolf is a man”. The metaphorical use of “wolf” in “Every man is a wolf” therefore does not make the subject term “man” a metaphor. “Man” becomes a metaphor only if there is an independent acceptance of “Every wolf is a man”, where “man” occurs as a predicate.
3.28 Clear and Vague Meaning
Vagueness is empirical underdetermination, and can never be eliminated completely, since our concepts can never grasp reality exhaustively.
Meanings are more or less clear and vague, such that the greater the clarity, the less the vagueness. In “Verifiability” in Logic and Language (1952) Friedrich Waismann (1896-1959) called this inexhaustible residual vagueness the “open texture” of concepts.
Vagueness in the semantics of a univocal descriptive term is reduced and clarity is increased by the addition of universal categorical affirmations and/or negations accepted as true, to the list of the term’s semantic rules with each rule having the term as a common subject.
Additional semantical rules increase clarity. The clarification is supplied by the semantics of the predicates in the added universal categorical affirmations and/or negations. Thus if the list of universal statements believed to be true contains statements in the forms “Every X is A” and “Every X is B”, then clarification of “X” with respect to a descriptive predicate “C” consists in adding to the list either a statement in the form “Every X is C” or a statement in the form “No X is C”. Clarity is thereby added by amending the meaning of “X”.
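The mechanics just described can be sketched in a few lines of code. The following Python fragment is purely illustrative (the term “crow” and its predicates are invented stand-ins, not drawn from any actual system): a term’s semantics is displayed as a list of universal affirmations and negations sharing the term as subject, and clarification consists in lengthening that list.

```python
# Illustrative sketch: a univocal term's meaning displayed as a list of
# semantical rules, each a universal affirmation ("every") or universal
# negation ("no") having the term as its common subject.
rules = {"crow": [("every", "crow", "bird"), ("every", "crow", "black")]}

def add_rule(term, predicate, affirm=True):
    # Adding "Every X is C" or "No X is C" reduces the vagueness of "X";
    # the predicate's semantics supplies the added clarity.
    quantifier = "every" if affirm else "no"
    rules.setdefault(term, []).append((quantifier, term, predicate))

add_rule("crow", "omnivore")               # "Every crow is an omnivore"
add_rule("crow", "aquatic", affirm=False)  # "No crow is aquatic"
```

Each added rule amends the meaning of “crow”; the list as a whole is the clear-but-never-exhaustive display of the term’s semantics.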
Clarity is also increased by adding semantical rules that relate any of the univocal predicates in the list of semantical rules for the same subject, thus increasing coherence.
If the predicate terms “A” and “B” in the semantical rules having the forms “Every X is A” and “Every X is B” are related by a statement in the form “Every A is B” or “Every B is A”, then one of the statements in the expanded list can be logically derived from the others by a syllogism. Awareness of the deductive relationship, and the consequent display of structure in the meaning complex associated with the term “X”, clarifies the complex meaning of “X”, because the deductive relation makes the semantics more integrated. Clarity is thus added by exhibiting semantic structure in a deductive system. The resulting coherence also supplies psychological satisfaction, because people prefer to live in a coherent world. However, “Every A is B” and “Every B is A” are also empirical statements that may be falsified; if tested and not falsified, they offer more than psychological satisfaction, because they are what Ernest Nagel (1901-1985) calls “correspondence rules” when such new laws occur in what he calls “heterogeneous reductions”.
These additional semantical rules relating the predicates may be negative as well as affirmative. Additional universal negations offer clarification by exhibiting equivocation. Thus if two semantical rules are in the form “Every X is A” and “Every X is B”, and if it is also believed that “No A is B” or its equivalent “No B is A”, then the terms “A” and “B” symbolize parts of different meanings for the term “X”, and “X” is equivocal. Clarity is thus added by the negation.
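Both mechanisms, exhibiting deductive structure by the syllogism and exhibiting equivocation by a universal negation, can be sketched as simple checks over a list of rule triples (quantifier, subject, predicate). The Python fragment below is a toy illustration of these two checks, not a rendering of any published system.

```python
def exhibits_structure(rules):
    # Barbara syllogism: "Every X is A" and "Every A is B" logically yield
    # "Every X is B"; when that conclusion is also in the list, the list
    # displays deductive structure integrating the meaning complex of "X".
    affirm = {(s, p) for q, s, p in rules if q == "every"}
    return sorted({(x, b) for (x, a) in affirm for (a2, b) in affirm
                   if a2 == a and a != b and (x, b) in affirm})

def equivocal(rules, term):
    # "Every X is A" and "Every X is B" together with "No A is B" show that
    # "A" and "B" symbolize parts of different meanings, so "X" is equivocal.
    preds = [p for q, s, p in rules if q == "every" and s == term]
    denials = {(s, p) for q, s, p in rules if q == "no"}
    return any((a, b) in denials or (b, a) in denials
               for a in preds for b in preds if a != b)
```

In this representation a rule list is coherent when some rules are syllogistically derivable from others, and a subject term is flagged as equivocal as soon as two of its predicates are related by an accepted universal negation.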
3.29 Semantics of Mathematical Language
The semantics for a descriptive mathematical variable intended to take measurement values is determined by its context consisting of universally quantified statements believed to be true including mathematical expressions in the theory language proposed for testing and/or in the test-design language presumed for testing.
Both test designs and theories often involve mathematical expressions. Thus the semantics for the descriptive variables common to a test design and a theory may be supplied wholly or in part by mathematical expressions, such that the structure of their meaning complexes is partly mathematical. The semantics-determining statements in test designs for mathematically expressed theories may include mathematical equations, measurement language describing the subject measured, the measurement procedures, the metric units and any employed apparatus.
Some of these statements may suggest what 1946 Nobel-laureate physicist Percy Bridgman (1882-1961) in his Logic of Modern Physics (1927) calls “operational definitions”, because the statements describing the measurement procedures and apparatus contribute meaning to the descriptive term that occurs in a test design. Bridgman says that a concept is a set of operations. But contrary to Bridgman, and as even the positivist Carnap recognized in his Philosophical Foundations of Physics (1966), several operational definitions for the same term do not constitute separate definitions for the term’s concept of the measured subject, which would make the term equivocal. Likewise realistic neopragmatists say that descriptions of different measurement procedures contribute different parts to the meaning of the univocal descriptive term, unless the different procedures produce different measurement values, where the differences are greater than the estimated measurement errors in the overlapping ranges of measurement. Also contrary to Bridgman, operational definitions have no special status; they are just one of many possible types of statement often found in a test design. Furthermore the semantics is not the real measurement procedures, as a nominalist would maintain; rather the semantics is the concept of the measurement procedures. Realistic neopragmatists need not accept Bridgman’s nominalism; the operational definition contributes to the concept that is the semantics for the test design.
3.30 Semantical State Descriptions
A semantical state description for a scientific profession is a synchronic display of the semantical composition of the various meanings of the partially equivocal descriptive terms in the several alternative theories functioning as semantical rules and addressing a single problem defined by a common test design.
The above discussions in philosophy of language have focused on descriptive terms such as words and mathematical variables, and then on statements and equations that are constructed with the terms. For computational philosophy of science there is an even larger unit of language, which is the semantical state description.
In his Meaning and Necessity Carnap had introduced a concept of semantical state description in his philosophy of semantical systems. In computational philosophy of science a state description is likewise a semantical description, but it differs from Carnap’s, which Carnap illustrates with the Russellian symbolic logic. The statements and/or equations in the realistic neopragmatist semantical state description supplying the terms for a discovery system’s input and the statements and/or equations constituting the output semantical state description are all semantical rules expressed in the surface structure of the language of the science. Each alternative theory or law in a state description has its distinctive semantics for its constituent descriptive terms, because a term shared by several alternative theories or laws is partly equivocal. But the term is also partly univocal due at least to the common test-design statements and/or equations, which are also semantical rules operative in state descriptions.
In computational philosophy of science the state description is a synchronic and thus a static semantical display. The state description contains vocabulary actually used in the surface structure of a science both in an initial state description supplying object-language terms inputted to a discovery system, and in a terminal state description containing new object-language statements or equations output generated by a computerized discovery-system’s execution. No transformation into a deep structure is needed. The initial state description represents the current frontier of research for the specific problem. Both input and output state descriptions for a discovery-system execution address only one problem identified by the common test design, and thus for computational philosophers of science they represent only one scientific “profession” (See below, Section 3.47).
For semantical analysis a state description consists of universally quantified statements and/or equations, including the theories and the test design from which the inputted terms were extracted. These statements and equations are included in the state description but are not themselves supplied as discovery-system input, because they would prejudice the output; statements and equations function as semantical rules in the generated output only. Thus for discovery-system input, the language input is the set of descriptive terms found in the input state description, extracted from the statements and/or equations of the several currently untested theories addressing the same unsolved problem as defined by a common test design at a given point in time.
Descriptive terms extracted from the statements and/or equations constituting falsified theories might also be included to produce a cumulative state description for input, because the terms from previously falsified theories represent available information at the historical or current point in time. Descriptive terms salvaged from falsified theories have scrap value, because they may be recycled productively through the theory-developmental process. Furthermore terms and variables from tested and currently nonfalsified theories could also conceivably be included, just to see what new comes out. Empirical underdetermination permits scientific pluralism; reality is complex and full of surprises.
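The extraction of input terms can be sketched as follows. This Python fragment is a hypothetical illustration in which each statement is reduced to a (subject, predicate) pair; the example terms are invented, and actual systems operate on richer object-language forms.

```python
def input_terms(current_theories, falsified_theories=(), cumulative=False):
    # Only the descriptive terms are extracted for discovery-system input;
    # the statements themselves are withheld, because they would prejudice
    # the generated output.
    statements = list(current_theories)
    if cumulative:
        # Terms salvaged from falsified theories have scrap value and may
        # be recycled productively through the theory-developmental process.
        statements += list(falsified_theories)
    terms = set()
    for subject, predicate in statements:
        terms.update((subject, predicate))
    return sorted(terms)
```

The cumulative option implements the point made above: terms from previously falsified theories still represent available information at the current point in time.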
3.31 Diachronic Comparative-Static Analysis
A diachronic comparative-static display consists of two chronologically successive state descriptions containing theory statements for the same problem defined by the same test design and therefore addressed by the same scientific profession.
In computational philosophy of science comparative-static comparison is typically a comparison of a discovery system’s originating input and generated output state descriptions of theory statements for purposes of contrast.
State descriptions contain statements and equations that operate as semantical rules displaying the meanings of the constituent descriptive terms and variables. Comparison of the statements and equations in two chronologically separated state descriptions containing the same test design for the same profession exhibits semantical changes resulting from the transition.
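In a toy sketch, such a comparative-static comparison reduces to set differences between two lists of semantical rules sharing the same test design. The fragment below is purely illustrative; the example statements are invented.

```python
def semantical_change(earlier, later):
    # Compares two chronologically separated state descriptions for the same
    # profession and exhibits the semantical changes resulting from the
    # transition: rules present only in the earlier state were dropped,
    # and rules present only in the later state were added.
    earlier, later = set(earlier), set(later)
    return {"dropped": sorted(earlier - later),
            "added": sorted(later - earlier)}
```

Because the statements operate as semantical rules, the dropped and added rules jointly display how the meanings of the shared descriptive terms changed between the two language states.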
3.32 Diachronic Dynamic Analysis
The dynamic diachronic metalinguistic analysis not only consists of two state descriptions representing two chronologically successive language states sharing a common subset of descriptive terms in their common test design, but also describes a process of linguistic change between the two successive state descriptions.
Such transitions in science are the result of two pragmatic functions in basic research, namely theory development and theory testing. A change of state description into a new one is produced whenever a new theory is constructed or whenever a theory is eliminated by a falsifying test outcome.
3.33 Computational Philosophy of Science
Computational philosophy of science is the development of discovery systems that explicitly proceduralize and thus mechanize a transition applied to language in the current state description of a science, in order to develop a new state description containing new and empirically adequate theories.
The discovery systems created by the computational philosopher of science represent diachronic dynamic metalinguistic analyses. The systems proceduralize developmental transitions explicitly with a mechanized system design, in order to accelerate the advancement of a contemporary state of a science. Their various procedural system designs are metalinguistic logics for rational reconstructions of a scientific discovery process. The discovery systems produce surface-structure theories as may actually be found in the object language of the applicable science.
A discovery system in computational philosophy of science is a mechanized finite-state generative grammar that produces sentences or equations from inputted descriptive terms or variables. As a grammar it is creative in Noam Chomsky’s sense in his Syntactic Structures, because when encoded in a computer language and executed, the system produces new theories that have never previously been considered in the particular scientific profession. A mechanized discovery system is Feyerabend’s principle of theory proliferation applied with mindless abandon. Nevertheless it is a finite-state generative grammar, in which control of the size and quality of the generated output is accomplished by empirical testing of the generated theories. Empirical testing is enabled by associating measurement data with the inputted variables. Thus the system designs often employ one or another type of applied numerical method. In an execution run a system usually rejects most of the generated theories.
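A minimal generate-and-test sketch may make the idea concrete. The data, variable names and tolerance below are invented for illustration, and real systems such as Hendry and Doornik’s AUTOMETRICS use far more sophisticated search and selection criteria. Here the generator proposes one-variable linear equations for a target variable, and empirical testing against the associated measurement data rejects the candidates that fit poorly.

```python
def fit_error(xs, ys):
    # Ordinary least squares for y = a*x + b (assumes xs is not constant);
    # returns the mean absolute error of the fitted equation plus a and b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - a * mx
    err = sum(abs(y - (a * x + b)) for x, y in zip(xs, ys)) / n
    return err, a, b

def discover(data, target, tolerance=0.1):
    # Generate candidate law equations "target = a*v + b" for each inputted
    # variable; empirical testing culls the generated theories, so most
    # candidates are rejected in an execution run.
    survivors = []
    for name, values in data.items():
        if name == target:
            continue
        err, a, b = fit_error(values, data[target])
        if err <= tolerance:
            survivors.append((target, name, round(a, 3), round(b, 3)))
    return survivors
```

With measurement data in which the target varies linearly with one variable and randomly with another, only the former survives the test, illustrating how testing controls the size and quality of the generated output.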
Presently few philosophy professors have the needed competencies for this new and emerging area in philosophy of science, with the result that few curricula in academic philosophy departments will expose students to discovery systems, much less actually enable them to develop complex AI systems. Among today’s academic philosophers the mediocrities will simply ignore this new area, while latter-day Luddites will shrilly reject it. Lethargic and/or reactionary academics who dismiss it are fated to spend their careers denying its merits and evading it, as they are inevitably marginalized, destined to die in obscurity.
The exponentially growing capacities of computer hardware and the proliferation of computer-systems designs have already been enhancing the mechanized practices of basic-scientific research in many sciences. Thus in his Extending Ourselves (2004) University of Virginia philosopher of science and cognitive scientist Paul Humphreys reports that computational science for scientific analysis has already far outstripped natural human capabilities, and that it currently plays a central rôle in the development of many physical and life sciences. Neither philosophy of science nor the retarded social sciences can escape such developments much longer.
In the “Introduction” to their Empirical Model Discovery and Theory Evaluation: Automatic Selection Methods in Econometrics (2014) David F. Hendry and Jurgen A. Doornik of Oxford University’s Program for Economic Modeling at the Institute for New Economic Thinking write that automatic modeling has “come of age”. Hendry was head of Oxford’s Economics Department from 2001 to 2007, and at this writing is Director of the Program for Economic Modeling at the Oxford Martin School. These authors have developed a mechanized general-search algorithm that they call AUTOMETRICS for determining the equation specifications for econometric models.
Artificial intelligence today is producing an institutional change in both the sciences and the humanities. In “MIT Creates a College for Artificial Intelligence, Backed by $1 Billion” The New York Times (16 October 2018) reported that the Massachusetts Institute of Technology (MIT) will create a new college with fifty new faculty positions and many fellowships for graduate students, in order to integrate artificial intelligence systems into both its humanities and its science curricula. The article quoted L. Rafael Reif, president of MIT, as stating that he wanted artificial intelligence to make a university-wide impact and to be used by everyone in every discipline [presumably including philosophy of science]. And the article also quoted Melissa Nobles, dean of MIT’s School of Humanities, Arts, and Social Sciences, as stating that the new college will enable the humanities to survive, not by running from the future, but by embracing it.
Computational philosophy of science is the future that has arrived, even when, as practiced by scientists working in their special fields, it goes under names other than “metascience”, “computational philosophy of science” or “artificial intelligence”. Our twenty-first-century perspective shows that computational philosophy of science has indeed “come of age”, as Hendry and Doornik report. So, there is hope that the next generation of academic journal editors and their favorite referees – whose peer-reviewed publications now operate as havens for mediocrities, reactionaries, parasites, Luddites and group-thinking hacks – will stop running from the future and belatedly acknowledge the power and productivity of artificial intelligence.
3.34 An Interpretation Issue
In “A Split in Thinking among Keepers of Artificial Intelligence” The New York Times (18 July 1993) reported that scientists attending the annual meeting of the American Association of Artificial Intelligence expressed disagreement about the goals of artificial intelligence. Some maintained the traditional view that artificial-intelligence systems should be designed to simulate intuitive human intelligence, while others maintained that the phrase “artificial intelligence” is merely a metaphor that has become an impediment, and that AI systems should be designed to exceed the limitations of intuitive human intelligence – a pragmatic goal.
There is also ambiguity in the literature as to what a state description represents and how the discovery system’s processes are to be interpreted. The phrase “artificial intelligence” has been used in both interpretations but with slightly different meanings.
On the linguistic analysis interpretation, which is the view taken herein, the state description represents the language state for a language community constituting a single scientific profession identified by a test design. Like the diverse members of a profession, the system produces a diversity of new theories. But no psychological claims are made about intuitive thinking processes.
Computer discovery systems are generative grammars that generate and test theories.
On the linguistic analysis interpretation, the computer discovery systems are mechanized generative grammars that construct and test theories. The AI system inputs and outputs are both surface-structure object-language state descriptions. The instructional code of the computer system is in the metalinguistic perspective, and exhibits diachronic dynamic procedures for theory development. The various procedural discovery system designs are rational reconstructions of the discovery process. As such the linguistic analysis interpretation is neither a separate philosophy of science nor a psychologistic agenda. It is compatible with the contemporary realistic neopragmatism and its use of generative grammars makes it closely related to computational linguistics.
On the cognitive-psychology interpretation the state description represents a scientist’s cognitive state consisting of mental representations and the discovery system represents the scientist’s cognitive processes.
Computer discovery systems are psychological hypotheses about intuitive human problem-solving processes.
Contemporary views in cognitive psychology are illustrated in Cognitive Psychology: An Overview for Cognitive Scientists (1992) by Lawrence W. Barsalou of the University of Chicago, who writes that cognitive psychology has used internal psychological constructs (internal constructs that are rejected altogether by the behaviorist school). He says that these constructs almost always describe information-processing mechanisms, and that their plausibility rests primarily on their ability to explain behavioral data. He notes that internal psychological constructs are analogous to a computer’s information flows: neurological mechanisms and cognitive constructs in the brain are analogous to electronics and information processing in computers respectively (p. 10).
The originator of the cognitive-psychology interpretation is Simon. In his Scientific Discovery: Computational Explorations of the Creative Processes (1987) and earlier works Simon writes that he seeks to investigate the psychology of discovery processes, and to provide an empirically tested theory of the information-processing mechanisms that are implicated in those processes. There he states that an empirical test of the systems as psychological theories of intuitive human discovery processes would involve presenting the computer programs and some human subjects with identical problems, and then comparing their behaviors. But Simon admits that his book provides nothing by way of comparison with human performance. And in discussions of particular applications involving particular historic discoveries, he also admits that in some cases the historical scientists actually performed their discoveries differently than the way the systems performed the rediscoveries.
The academic philosopher Thagard, who follows Simon’s cognitive-psychology interpretation, originated the name “computational philosophy of science” in his Computational Philosophy of Science (1988). Hickey admits that it is more descriptive than the name “metascience” that he had proposed in his Introduction to Metascience over a decade earlier. Thagard defines computational philosophy of science as “normative cognitive psychology”. His cognitive-psychology systems have successfully replicated developmental episodes in the history of science, but the relation of their system designs to systematically observed human cognitive processes is still unexamined. And to date their outputted theories have not proposed any new contributions to the current state of any science.
In their “Processes and Constraints in Explanatory Scientific Discovery” in Proceedings of the Thirtieth Annual Meeting of the Cognitive Science Society (2008) Pat Langley and Will Bridewell, who advocate Simon’s cognitive-psychology interpretation, appear to depart from it or at least to redefine it. They state that they have not aimed to “mimic” the detailed behavior of human researchers, but that instead their systems address the same tasks as scientists and carry out search through similar problem spaces. This much might also be said of the linguistic-analysis approach.
The relation between the psychological and the linguistic perspectives can be illustrated by way of analogy with man’s experience with flying. Since primitive man first saw a bird spread its wings and escape the hunter by flight, mankind has been envious of birds’ ability to fly. This envy is illustrated in ancient Greek mythology by the character Icarus, who escaped from the labyrinth of Crete with wings of feathers and wax made by his father Daedalus. But Icarus flew too close to the hot sun, so that he fell from the sky as the wax melted, and he then drowned in the Aegean Sea. The fatally flawed choice of materials notwithstanding, the basic design concept was a plausible one in imitation of the evidently successful flight capability of birds. Call this design concept the “wing-flapping” technology. In fact in the 1930s there was a company called Gray Goose Airways, which claimed to have developed a wing-flapping aircraft it called an “ornithopter”. But pity the investor who holds equity shares in Gray Goose Airways today, because his common-stock certificates are good only for folded-paper toy-glider airplanes. A contemporary development of the wing-flapping technology might serve well for an ornithological investigation of how birds fly, but it is not the fixed-wing technology used for modern flight, which evolved quite pragmatically.
When proposed imitation of nature fails, pragmatic innovation prevails, in order to achieve the practical aim. Therefore when asking how a computational philosophy of science should be conceived, it is necessary firstly to ask about the aim of basic science, and then to ask whether or not computational philosophy of science is adequately characterized as “normative cognitive psychology”, as Thagard would have it. Contemporary realistic neopragmatist philosophy of science views the aim of basic science as the production of a linguistic artifact having the status of an “explanation”, which includes law language that had earlier been a proposed theory and has not been falsified when tested empirically. The aim of a computational philosophy of science in turn is derivative from the aim of science: to enhance scientists’ research practices by developing and employing mechanized procedures capable of achieving the aim of basic science. The computational philosopher of science should feel at liberty to employ any technology that achieves this aim with or without any reliance upon or relevance to psychology.
So, is artificial intelligence computerized psychology or computerized linguistics? There is as yet no unanimity, and to date the phrase “computational philosophy of science” need not commit one to either interpretation. Which interpretation prevails in academia will likely depend on which academic department productively takes up the movement. If the psychologists develop new and useful systems that produce contributions to an empirical science, the psychologistic interpretation will prevail. If the philosophers take it up successfully, their linguistic-analysis interpretation will prevail. It is an issue tainted with academic tribalism.
For more about Simon, Langley, and Thagard and about discovery systems and computational philosophy of science readers are referred to BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.