INTRODUCTION TO PHILOSOPHY OF SCIENCE
Book I Page 4
3.30 Semantical State Descriptions
A semantical state description for a scientific profession is a synchronic display of the semantical composition of the various meanings of the partially equivocal descriptive terms in the alternative theories functioning as semantical rules and addressing a single problem defined by a common test design.
The above discussions in philosophy of language have focused on descriptive terms such as words and mathematical variables, and then on statements and equations that are constructed with the terms. For computational philosophy of science there is an even larger unit of language, which is the semantical state description.
In his Meaning and Necessity (1947) Carnap had introduced a concept of linguistic state description in his philosophy of semantical systems. Similarly in computational philosophy of science a state description is a semantical description but different from Carnap’s. The statements and/or equations supplying a discovery system’s input state description and those constituting the output state description are semantical rules. Each alternative theory or law in the state description has its distinctive semantics for its constituent descriptive terms. A term shared by several alternative theories or laws is thus partly equivocal. But the term is also partly univocal due to the common test-design statements that are also semantical rules.
In computational philosophy of science the state description is a synchronic and thus a static semantical display. The state description contains language actually used in a science both in an initial state description containing object-language input to a discovery system, and in a terminal state description containing object-language output generated by a computerized discovery-system’s execution. The initial state description represents the frontier of research for the specific problem. Both input and output state descriptions for a discovery-system execution address only one problem identified by the common test design, and thus for computational philosophers of science they represent only one scientific “profession”.
A discovery system is a mechanized finite-state generative grammar that produces sentences or equations from descriptive terms or variables. As a grammar it is creative in Noam Chomsky’s (1928) sense, because when encoded in a computer language and executed, the system produces new statements, i.e. theories that have never previously been stated in the particular scientific profession. A discovery system is Feyerabend’s principle of theory proliferation applied with mindless abandon. To control the size and quality of the output, the system tests the empirical adequacy of the generated novel theories and inevitably rejects most of them. Associating measurement data with the inputted variables enables empirical testing, so that the system designs typically employ one or another type of applied numerical method.
For semantical analysis the state description consists of universally quantified statements and/or equations. The statements and/or equations from which the terms were extracted, including theories and test designs, are part of the state description, but they are excluded from discovery-system input, because they would prejudice the output; statements and/or equations function as semantical rules in the generated output only. Thus the input state description is a listing of descriptive terms extracted from the statements and/or equations of the several currently untested theories addressing the same unsolved problem as defined by a test design at a given point in time.
Descriptive terms extracted from the statements and/or equations of falsified theories can also be included to produce a cumulative state description for input, because the terms from previously falsified theories represent available information at the historical or current point in time. Descriptive terms salvaged from falsified theories have scrap value, because they may be recycled through the theory-developmental process. Furthermore terms and variables from tested and nonfalsified theories could also conceivably be included, just to see what new comes out. Empirical underdetermination permits scientific pluralism, and the world is full of surprises.
3.31 Diachronic Comparative-Static Analysis
A diachronic comparative-static display consists of two chronologically successive state descriptions of theory statements for the same problem defined by the same test design and therefore addressed by the same scientific profession.
State descriptions contain statements and equations that operate as semantical rules displaying the meanings of the constituent terms and variables. Comparison of the statements and equations in the two chronologically separated state descriptions containing the same test design exhibits changes in meanings through time.
In computational philosophy of science this is a comparative-static semantical analysis, i.e., a comparison of a discovery system’s input and output state descriptions of theory statements.
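The comparative-static comparison of two chronologically successive state descriptions can be pictured with set operations over their statements. The statements below are hypothetical stand-ins for actual semantical rules:

```python
# Two chronologically successive state descriptions for the same
# problem defined by the same test design (hypothetical statements).
initial_state = {
    "Ravens are corvids",    # test-design statement
    "Every raven is black",  # input theory
}
terminal_state = {
    "Ravens are corvids",    # test design unchanged through time
    "Some ravens are red",   # newly generated, tested theory
}

# The shared test-design statements keep the common terms partly
# univocal; the differing theory statements exhibit meaning change.
shared  = initial_state & terminal_state
dropped = initial_state - terminal_state   # eliminated by testing
added   = terminal_state - initial_state   # produced by discovery

print(shared, dropped, added)
```

Because the analysis compares only the two language states and not the process between them, it is comparative-static rather than dynamic.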
3.32 Diachronic Dynamic Analysis
The dynamic diachronic metalinguistic analysis not only consists of two state descriptions representing two chronologically successive language states sharing a common subset of descriptive terms in their common test design, but also exhibits a process of linguistic change between the two successive state descriptions.
Such transitions in science are the result of two functions in basic research, namely theory development and theory testing. A change of state description into a new one is produced whenever a new theory is proposed or whenever a theory is eliminated by a falsifying test outcome.
3.33 Computational Philosophy of Science
Computational philosophy of science is the development of computerized discovery systems that can proceduralize explicitly the past achievements of successful scientists, and then apply the successfully mechanized procedures to the current state description of a science to develop a new state description containing one or several new and empirically superior theories.
The discovery systems created by the computational philosopher of science represent diachronic dynamic metalinguistic analyses. The systems proceduralize a transitional process explicitly according to the computerized system design, in order ultimately to accelerate the contemporary advancement of a science by mechanizing a transition episode. Then by applying the system to the current state description for the science the systems generate new theories. The discovery systems typically include empirical criteria for selecting a subset of the generated theories for output as tested and nonfalsified theories either for further predictive testing or for use as laws in explanations and test designs.
But presently few philosophy professors have the needed competencies to contribute to computational philosophy of science, and thus few curricula in university philosophy departments encourage, much less actually prepare, students for contributing to this new and emerging area in philosophy of science. Among today’s academic philosophers the mediocrities will quietly ignore this new discipline, while the Luddites among them will loudly reject it. Lethargic and/or reactionary academics who dismiss it are fated to spend their careers evading it, as they are inevitably marginalized.
The exponentially growing capacities of computer hardware and proliferation of computer-systems designs have already been enhancing the practices of basic-scientific research in many sciences. Thus in his Extending Ourselves (2004) University of Virginia philosopher of science and cognitive scientist Paul Humphreys reports that computational science for scientific analysis has already far outstripped natural human capabilities and that it currently plays a central rôle in the development of many physical and life sciences. Philosophy of science cannot escape such developments. Computational philosophy of science is achieving ascendancy in twenty-first-century philosophy of science due to those who are opportunistic enough to master both the necessary computer skills and the requisite working competencies in an empirical science.
For example in the “Introduction” to their Empirical Model Discovery and Theory Evaluation: Automatic Selection Methods in Econometrics (2014) David F. Hendry and Jurgen A. Doornik of Oxford University’s Program of Economic Modeling at their Institute for New Economic Thinking write that automatic modeling has “come of age.” Hendry was head of Oxford’s Economics Department from 2001 to 2007, and is presently Director of the Economic Modeling Program at Oxford’s Martin School. And Doornik is a colleague at the Institute. These authors have developed a mechanized general-search algorithm that they call AUTOMETRICS for determining the equation specifications for econometric models.
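The general idea of mechanized model selection can be conveyed by a toy sketch. This is emphatically not AUTOMETRICS, whose multi-path search, diagnostic testing and encompassing comparisons are far more elaborate; the regressor names, data and the retention threshold are all invented for illustration:

```python
from math import sqrt

def corr(x, y):
    """Pearson correlation between two equal-length data series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

# Hypothetical candidate regressors for one dependent variable y.
# A general-to-specific search starts from the full candidate set
# and deletes empirically irrelevant variables.
candidates = {
    "income": [10, 12, 14, 16, 18],
    "noise":  [3, 1, 4, 1, 5],
}
y = [21, 25, 29, 33, 37]  # consumption, exactly 2*income + 1

# Crude selection criterion: retain only regressors whose absolute
# correlation with y exceeds a threshold (a real system would use
# t-tests, information criteria and misspecification diagnostics).
retained = {name for name, xs in candidates.items()
            if abs(corr(xs, y)) > 0.9}
print(retained)  # → {'income'}
```

The sketch shows only the bare schema that Hendry and Doornik mechanize: start from a general candidate specification and let empirical criteria, not the modeler’s prior theory, prune it to a specific model.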
The computer is here to stay, and today computational philosophy of science is becoming institutionalized in academia. For example in “MIT Creates a College for Artificial Intelligence, Backed by $1 Billion” the New York Times (16 October 2018) reported that the Massachusetts Institute of Technology will create a new college for artificial intelligence with fifty new faculty positions and many more fellowships for graduate students, in order to integrate artificial-intelligence systems into its humanities and science curricula. The article quoted L. Rafael Reif, president of MIT, as stating that he wanted artificial intelligence to make a university-wide impact and to be used by everyone in every discipline – not excluding philosophy of science. And the article also quoted Melissa Nobles, dean of MIT’s School of Humanities, Arts, and Social Sciences, as stating that the new college will enable the humanities to survive, not by running from the future, but by embracing it.
Computational philosophy of science is the future that has arrived, even when it is practiced by scientists working in their special fields under other names instead of “metascience”, “computational philosophy of science” or “artificial intelligence”. Our twenty-first-century perspective shows that computational philosophy of science has indeed “come of age”, as Hendry and Doornik said. And its ascendancy is inevitable.
3.34 An Interpretation Issue
There is ambiguity in the literature as to what a state description represents and how the discovery system’s processes are to be interpreted. The phrase “artificial intelligence” has been used in both interpretations but with slightly different meanings.
On the linguistic analysis interpretation, which is the view taken herein, the state description represents the language state for a language community constituting a single scientific profession defined by a test design. Like the diverse members of a profession, the system produces a diversity of new theories. But no psychological claims are made about intuitive thinking processes. Computational philosophy of science so interpreted is a technique for a specialized type of linguistic analysis employing mechanized generative grammars.
The computer discovery systems are generative grammars that generate and test theories. The system inputs and outputs are both object-language state descriptions. The instructional code of the computer system is in the metalinguistic perspective, and exhibits diachronic dynamic procedures for theory development. As such the linguistic analysis interpretation is neither a separate philosophy of science nor a psychologistic agenda. It is compatible with the contemporary pragmatism and its use of generative grammars makes it closely related to computational linguistics.
On the cognitive-psychology interpretation the state description represents a scientist’s cognitive state consisting of mental representations and the discovery system represents the scientist’s cognitive processes. The originator of the cognitive-psychology interpretation is Herbert Simon. In his Scientific Discovery: Computational Explorations of the Creative Processes (1987) and other works Simon writes that he seeks to investigate the psychology of discovery processes, and to provide an empirically tested theory of the information-processing mechanisms that are implicated in that process.
He states that an empirical test of the systems as psychological theories of human discovery processes would involve presenting the computer programs and some human subjects with identical problems, and then comparing their behaviors. But Simon admits that his book provides nothing by way of comparison with human performance. And in discussions of particular applications involving particular historic discoveries, he also admits that in some cases the historical scientists actually performed their discoveries differently than the way that the systems performed the rediscoveries.
In their “Processes and Constraints in Explanatory Scientific Discovery” in Proceedings of the Thirtieth Annual Meeting of the Cognitive Science Society (2008) Langley and Bridewell, advocates of the cognitive-psychology interpretation, appear to depart from it. They state that they have not aimed to “mimic” the detailed behavior of human researchers, but that instead their systems address the same tasks as scientists and carry out search through similar problem spaces. This much might also be said of the linguistic-analysis approach.
The academic philosopher Paul Thagard, who follows Simon’s interpretation, originated the name “computational philosophy of science” in 1988 in his book Computational Philosophy of Science. Hickey admits that it is more descriptive than the name “metascience” that he had proposed in his Introduction to Metascience: An Information Science Approach to Methodology of Scientific Research in 1976. Thagard defines computational philosophy of science as “normative cognitive psychology”. The cognitive-psychology systems have successfully replicated developmental episodes in the history of science, but the relation of their system designs to systematically observed human cognitive processes is still unexamined. And their outputted theories have not yet contributed to the current state of any science.
In “A Split in Thinking among Keepers of Artificial Intelligence” the New York Times (18 Jul. 1993) reported that scientists attending the annual meeting of the American Association for Artificial Intelligence expressed disagreement about the goals of artificial intelligence. Some maintained the traditional view that artificial-intelligence systems should be designed to simulate intuitive human intelligence, while others maintained that the phrase “artificial intelligence” is merely a metaphor that has become an impediment, and that AI systems should be designed to exceed the limitations of intuitive human intelligence.
So, is artificial intelligence computerized psychology or computer superiority? To date the phrase “computational philosophy of science” need not commit one to either interpretation. Which interpretation prevails in academia will likely depend on which academic department productively takes up the movement. If the psychologists develop new and useful systems, the psychologistic interpretation will prevail. If the philosophers take it up successfully, their linguistic-analysis interpretation will prevail.
For more about Simon, Langley, and Thagard and about discovery systems and computational philosophy of science readers are referred to BOOK VIII at www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.
3.35 Ontological Dimension
Ontology consists of the aspects of mind-independent reality revealed by semantics.
Ontology is the metalinguistic dimension after syntax and semantics, and it presumes both of them. Semantics is description of reality; ontology is reality as described by semantics. Ontology is the reality correlative to what is signified by semantics. Semantically interpreted syntax describes ontology most realistically when the statement is warranted empirically by repeated nonfalsifying test outcomes. In science ontology is more adequately realistic when described by the semantics of either a scientific law or an observation report having its semantics defined by a law. The semantics of falsified theories displays ontology less realistically due to the falsified theories’ demonstrated lesser empirical adequacy.
3.36 Metaphysical and Scientific Realism
Metaphysical realism is the thesis that there exists mind-independent reality, which is accessible to and accessed by human cognition.
Traditionally philosophers have spilt much ink arguing over realism and its alternatives, but the above statement is disarmingly simple. It is simply the affirmation of reality and of its accessibility to human knowledge. Most importantly “metaphysical realism” does not mean any particular characterization of reality. Nor is it reality described from some transcendental, all-encompassing or God’s-eye point of view; it is what makes falsifying test outcomes falsifying.
In the section titled “Is There Any Justification for External Realism” in his Mind, Language and Society: Philosophy in the Real World (1995) University of California realist philosopher John R. Searle (1932) refers to metaphysical realism as “external realism”, by which he means that the world exists independently of our representations of it. He says that realism does not say how things are, but only that there is a way that they are. The way that they are would include Heisenberg’s “potentia” as the quantum theory describes reality with its indeterminacy relations and duality thesis. The theory describes microphysical reality as being that certain way and not otherwise, such that the theory is testable and falsifiable.
Searle denies that external realism can be justified, because any attempt at justification presupposes what it attempts to justify. In other words all arguments for metaphysical realism are circular, because realism must firstly be accepted. Any attempt to find out about the real world presupposes that there is a way that things are. He goes on to affirm the picture of science as giving us objective knowledge of independently existing reality, and that this picture is taken for granted in the sciences.
Similarly in “Scope and Language of Science” in Ways of Paradox (1976) Harvard University realist philosopher Willard Van Orman Quine writes that we cannot significantly question the reality of the external world or deny that there is evidence of external objects in the testimony of our senses, because to do so is to dissociate the terms “reality” and “evidence” from the very application that originally did most to invest these terms with whatever intelligibility they may have for us. And to emphasize the primal origin of realism Quine writes that we imbibe this primordial awareness “with our mother’s milk”. He thus affirms what he calls his “unregenerate realism”. These statements by Searle, Quine and others of their ilk are not logical arguments or inferences; they are affirmations.
Hickey joins these contemporary realist philosophers. He maintains that metaphysical realism, the thesis that there exists mind-independent reality accessible to and accessed by cognition, is the “primal prejudice” that cannot be proved or disproved but can only be affirmed or denied. And he affirms that it is a correct and universal prejudice, even though there are delusional psychotics and sophistic academics who are in denial. Contrary to Descartes and latter-day rationalists, metaphysical realism is neither a conclusion nor an inference nor an extrapolation. It cannot be proved logically, established by philosophy or science, validated or justified in any discursive manner including figures of speech such as analogy or metaphor. Hickey regards misguided pedants who say otherwise as “closet Cartesians”, because they never admit they are neo-Cartesians. The imposing, intruding, recalcitrant, obdurate otherness of mind-independent reality is immediately self-evident at the dawn of a person’s consciousness; it is a most rudimentary experience. Dogs and cats are infra-articulate and nonreflective realists. To dispute realism is to step through the looking glass into Alice’s labyrinth of logomachy, of metaphysical jabberwocky where, as Schopenhauer believed, the world is a dream. It is to indulge in the philosophers’ hallucinatory narcotic.
Scientific realism is the thesis that a tested and currently nonfalsified theory offers the most empirically adequate and thus most realistic description of reality at the current time.
After stating that the notion of reality independent of language is in our earliest impressions, Quine adds that it is then carried over into science as a matter of course. He writes that realism is the robust state of mind of the scientist, who has never felt any qualms beyond the negotiable uncertainties internal to his science.
N.B. Contrary to Feyerabend the phrase “scientific realism” does not mean scientism, the thesis that only science describes reality.
3.37 Ontological Relativity Defined
When metaphysical realism is joined with relativized semantics, the result is ontological relativity.
Ontological relativity in science is the thesis that the semantics of a theory or law and its constituent descriptive terms describe aspects of reality.
A scientific law is a tested and nonfalsified universally quantified statement or mathematical expression that prior to its decisive testing had been a theory.
The ontology of a theory or law is as realistic as it is empirically adequate.
Understanding scientific realism requires consideration of ontological relativity. Ontological relativity is the subordination of ontology to empiricism. We cannot separate ontology from semantics, because we cannot step outside of our knowledge and compare our knowledge with reality, in order to validate a correspondence. But we can distinguish our semantics from the ontology it reveals, as we do when we distinguish logical and real suppositions respectively in statements. We describe mind-independent reality with our perspectivist semantics, and ontology is reality as it is revealed empirically more or less adequately by our semantics. Our semantics and thus ontologies cannot be exhaustive, but ontologies are more or less adequately realistic, as the semantics is more or less adequately empirical.
Prior to the evolution of contemporary pragmatism philosophers had identified realism as such with one or another particular ontology, which they erroneously viewed as the only ontology on the assumption that there can be only one ontology. Such is the error made by some physicists who believe that they are defending realism, when they defend Bohm’s “hidden variable” interpretation of quantum theory. Such too is the error in Popper’s proposal for his propensity interpretation of quantum theory. Nonetheless both Bohm and Heisenberg rejected the ontological thesis that the kind of existence familiar to us can be extrapolated into the atomic order of magnitude. And contrary to Einstein’s EPR thesis of a single uniform ontology for physics, Aspect, Dalibard, and Roger’s findings from their 1982 nonlocality experiments empirically vindicated the semantics and ontology of the Copenhagen interpretation.
Advancing science has produced revolutionary changes. And as the advancement of science has produced new theories with new semantics exhibiting new ontologies, some prepragmatist scientists and philosophers found themselves attacking a new theory and defending an old theory, because they had identified realism with the ontology associated with the older falsified theory. As Feyerabend notes in his Against Method, scientists have criticized a new theory using the semantics and ontology of an earlier theory. Such a perversion of scientific criticism is still common in the social sciences where romantic ontologies are invoked as criteria for criticism.
With ontological relativity realism is no longer uniquely associated with any one particular ontology. The ontological-relativity thesis does not deny metaphysical realism, but depends on it. It distinguishes the mind-independent plenitude from the ontologies revealed by the descriptive semantics of more or less empirically adequate beliefs. Ontological relativity enables admitting change of ontology without resorting to instrumentalism, idealism, phenomenalism, solipsism, any of the several varieties of antirealism, or any other such denial of metaphysical realism.
Thus ontological relativity solves the modern problem of reconciling conceptual revision in science with metaphysical realism. Ontological relativity enables acknowledging the creative variability of knowledge operative in the relativized semantics and consequently mind-dependent ontologies that are defined in constructed theories, while at the same time acknowledging the regulative discipline of mind-independent reality operative in the empirical constraint in tests with their possibly falsifying outcomes.
In contemporary pragmatist philosophy of science metaphysical realism is logically prior to and presumed by all ontologies as the primal prejudice, while the choice of an ontology is based upon the empirically demonstrated adequacy of the theory describing the ontology. Indulging in futile disputations about metaphysical realism will not enhance achievement of the aims of either science or philosophy of science, nor will dismissing such disputations encumber achieving those aims. Ontological relativity leaves ontological decisions to the scientist rather than the metaphysician. And the superior empirical adequacy of a new law yields the increased truth of a new law and the increased realism in the ontology that the new law reveals.
3.38 Ontological Relativity Illustrated
There is no semantically interpreted syntax that does not reveal some more or less realistic ontology; since all semantics is relativized and ultimately comes from sense stimuli, no semantically interpreted syntax – not even descriptions of hallucinations – is utterly devoid of ontological significance.
To illustrate ontological relativity consider the semantical decision about red ravens mentioned in the above discussion about componential artifactual semantics (Section 3.23). The decision is ontological as well as semantical. For the bird watcher who found a red but otherwise raven-looking bird and decides to reject the belief “Every raven is black”, the phrase “red raven” becomes a description for a type of existing birds. Once that semantical decision is made, red ravens suddenly populate many trees in the world, however long ago Darwinian Mother Nature had evolved the observed avian creatures. But if his decision is to persist in believing “Every raven is black”, then there are no red ravens in existence, because whatever kind of creature the bird watcher found and that Mother Nature had long ago evolved, the red bird is not a raven. The availability of the choice illustrates the artifactuality of the relativized semantics of language and of the consequently relativized ontology that the relativized semantics reveals about mind-independent reality.
Relativized semantics makes ontology no less relative whether the affirmed entity is an elephant, an electron, or an elf. Beliefs that enable us routinely to make successful predictions are deemed more empirically adequate and thus more realistic and truer than those less successfully predictive. And we recognize the reality of the entities, attributes or any other characteristics that enable those routinely successful predicting beliefs. Thus if positing evil elves conspiring mischievously enabled predicting the collapse of market-price bubbles on Wall Street more accurately and reliably than the postulate of euphoric humans speculating greedily, then we would decide that the ontology of evil elves is as adequately realistic as it was found to be adequately empirical, and we would busy ourselves investigating elves, as we would do with elephants and electrons for successful predictions about elephants and electrons. On the other hand were our price predictions to fail, then those failures would inform us that our belief in the elves of Wall Street is as empirically inadequate as the discredited belief in the legendary gnomes of Zürich, and we would decide that the ontology of elves is as inadequately realistic, as it was found to be inadequately empirical.
Consider another illustration. Today we reject an ontology of illnesses due to possessing demons as inadequately realistic, because we do not find ontological claims about possessing demons to be empirically adequate for effective medical practice. But it could have been like the semantics of “atom”. The semantics and ontology of “atom” have changed greatly since the days of the ancient philosophers Leucippus and his pupil Democritus. The semantics of “atom” has since been revised repeatedly under the regulation of empirical research in physics, as when 1906 Nobel laureate J.J. Thomson discovered that the atom is not simple, and thus today we still accept a semantics and ontology of atoms. Similarly the semantics of “demon” might too have been revised to become as beneficial as the modern meaning of “bacterium”, had empirical testing regulated an evolving semantics and ontology of “demon”.
Both ancient and modern physicians may observe and describe some of the same symptoms for a certain infectious disease in a sick patient and both demons and bacteria are viewed as living agents, thus giving some continuity to the semantics and ontology of “demon” through the ages. But today’s physicians’ medical understanding, diagnoses and remedies are quite different. If the semantics and ontology of “demon” had been revised under the regulation of increasing empirical adequacy, then today scientists might materialize (i.e., visualize) demons with microscopes, physicians might write incantations (i.e., prescriptions), and pharmacists might dispense antidemonics (i.e., antibiotics) to exorcise (i.e., to cure) possessed (i.e., infected) sick persons. But then terms such as “materialize”, “incantation”, “antidemonics”, “exorcise” and “possessed” would also have acquired new semantics in the more empirically adequate modern contexts than those of ancient medical beliefs. And the descriptive semantics and ontology of “demon” would have been revised to exclude what we now find empirically to be inadequately realistic, such as a demon’s immateriality.
This thesis can be found in Quine’s “Two Dogmas of Empiricism” (1951), reprinted in his From a Logical Point of View (1953), even before he came to call it “ontological relativity” some fifteen years later. There he says that physical objects are conceptually imported into the linguistic system as convenient intermediaries, as irreducible posits comparable epistemologically to the gods of Homer. But physical objects are epistemologically superior to other posits including the gods of Homer, because the former have proved to be more efficacious as a device for working a manageable structure into the flux of experience. As a realist, he might have added explicitly that experience is experience of something, and that physical objects are more efficacious than whimsical gods for making correct predictions.
Or consider the tooth-fairy ontology. In some cultures young children losing their first set of teeth are told that if they place a lost tooth under the pillow at bedtime, a tooth-fairy person having large butterfly wings will exchange the tooth for a coin as they sleep. The child who does so and routinely finds a coin the next morning has an empirically warranted belief in the semantics describing a winged person that leaves coins under pillows and is called a “tooth fairy”. This belief is no less empirical than belief in the semantics positing an invisible force (or field) that pulls apples from their trees to the ground and is called “gravity”. But should the child forget to advise his mother that he placed a recently lost tooth under his pillow, he will rise the next morning to find no coin, and may become suspicious.
Then like the bird watcher with a red raven-looking bird, the child has semantical and ontological choices. He may continue to define “tooth fairy” as a benefactor other than his mother, and reject the tooth-fairy semantics and ontology as inadequately realistic. Or like the ancient astronomers who concluded that the morning star and the evening star are one and the same luminary, namely the planet Venus, and not a star, he may revise his semantics of “tooth fairy” to conclude that his mother and the tooth fairy are the same benefactor and not winged. But later when he publicly calls his mother “tooth fairy”, he will be encouraged to revise this semantics of “tooth fairy” and to accept the more conventional ontology that excludes tooth fairies, just as modern physicians exclude ghostly demons. This sociology of knowledge and ontology has been insightfully examined by the sociologists of knowledge Peter Berger (1929-2017) and Thomas Luckmann (1927-2016) in The Social Construction of Reality (1966).
Or consider ontological relativity in fictional literature. “Fictional ontology” is an oxymoron. But fictional literature resembles metaphor, because its discourse is recognized as having both true and false aspects (Section 3.27). For fictional literature the reader views as true the parts of the text that reveal reality adequately, and views as untrue the parts that he regards critically and finds to be inadequately realistic. Sympathetic readers who accept Mark Twain’s portrayal of slavery recognize an ontology that is realistic about the racist antebellum South. And initially unsympathetic readers, upon reading Twain’s portrayal of Huckleberry Finn’s dawning awareness of the fugitive slave Jim’s humanity notwithstanding Huck’s racist upbringing, may thus be led to accept the more realistic ontology that is free of the dehumanizing fallacies of the South’s racism. Ontological relativity enables recognition that such reconceptualization can reveal a more realistic ontology not only in science but also in all discourse, including even fiction.
Getting back to science, consider the Eddington eclipse test of Einstein’s relativity theory mentioned above in the discussion of componential semantics (Section 3.22). That historic astronomical test is often said to have “falsified” Newton’s theory. Yet today the engineers of the U.S. National Aeronautics and Space Administration (NASA) routinely use Newton’s physics to navigate interplanetary rocket flights through our solar system. Thus it must be said that Newton’s “falsified” theory is not completely false or NASA could never use it. Newtonian ontology is realistic, but is now known to be less realistic than the Einsteinian ontology, because the former has been demonstrated to be less empirically adequate.
Cause and effect are ontological categories, which in science can be described by tested and nonfalsified nontruth-functional hypothetical-conditional statements, which thus have the status of laws. The nontruth-functional hypothetical-conditional law statement claiming a causal dependency is an empirical universal statement. It is therefore never proved and is always vulnerable to future falsification. But ontological relativity means that a statement’s empirical adequacy warrants belief in its ontological claim of causality, including when the relation is stochastic. Nonfalsification does not make the statement affirm merely a Humean constant psychological conjunction. When in the progress of science a causal claim survives empirical testing without falsification, the testing thereby shows the causal claim to be more adequately true and thus more realistic than previously hypothesized.
Correlation indicates causality, unless and until the correlation is empirically invalidated.
3.40 Ontology of Mathematical Language
In the categorical proposition the logically quantified subject term references individuals and describes the attributes that enable identifying the referenced individuals, while the predicate term describes only attributes without referencing the instantiated individuals manifesting the attributes. The referenced extramental real entities and their semantically signified extramental real attributes constitute the ontology described by the categorical proposition that is believed to be true due to its experimentally or otherwise experientially demonstrated empirical adequacy. These existential conditions are expressed explicitly by the copula term “is” as in “Every raven is black”.
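As an illustrative formalization (not part of the original text), the asymmetry described above can be sketched in first-order notation, where the subject term both references individuals through the quantifiers and describes identifying attributes, while the predicate term describes attributes only. Reading R(x) as “x is a raven” and B(x) as “x is black”, and making the copula’s existential claim explicit with a second conjunct:

```latex
% "Every raven is black," with existential import made explicit:
\forall x\,\bigl(R(x) \rightarrow B(x)\bigr)\;\wedge\;\exists x\,R(x)
```

The first conjunct expresses the universal attribution of blackness to ravens; the second makes explicit that the proposition, as believed true on empirical grounds, references actually instantiated individuals.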
However, the ontological claim made by the mathematical equation in science is not only about instantiated individuals or their attributes. The individual instances referenced by the descriptive variables in the applied mathematical equation are instances of individual measurement results, which are acquired by executing measurement procedures that yield numeric values for the descriptive variables. The individual measurement results are related to the measured reality by nonmathematical language, which includes description of the measured subject, the metric, the measurement procedures, and any apparatus all of which are described in test-design language.
Calculated and predicted values for descriptive variables describing effects, computed in equations from measurement values for other variables describing causal factors, also make ontological claims, which are tested empirically. Untested theories make relatively more hypothetical quantitative causal claims. Tested and nonfalsified equations are quantitative causal laws, unless and until they are eventually falsified.
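As a hypothetical illustration (Ohm’s law is chosen here only as a familiar example, not one discussed in the text), an equation functioning as a quantitative causal law computes a predicted value of the effect variable from measured values of the causal variables, and the prediction is then compared with an independently measured value of the effect:

```latex
% Predicted effect computed from measured causal factors:
V_{\mathrm{pred}} = I_{\mathrm{meas}} \cdot R_{\mathrm{meas}}
% Empirical test: the equation is not falsified so long as
\lvert V_{\mathrm{pred}} - V_{\mathrm{meas}} \rvert \le \varepsilon
% where the tolerance eps. is the measurement error estimated in the test design.
```

The nonmathematical test-design language describing the metric, the measurement procedures, and the apparatus is what relates the numeric values of these variables to the measured reality, as stated in the preceding paragraphs.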