INTRODUCTION TO PHILOSOPHY OF SCIENCE
Book I
4.08 Philosophy’s Cultural Lag
Adequate understanding of the successful departures from institutionalized basic research is elusive even for philosophers. There exists a time lag between the evolution of the institution of science and developments in philosophy of science, since the latter depend on the realization of the former. For example, more than a quarter of a century passed between Heisenberg’s philosophical reflections on the language of his indeterminacy relations in quantum theory and the consequent emergence and ascendancy of the contemporary pragmatist philosophy of science in academic philosophy.
4.09 Cultural Lags among Sciences
Not only are there cultural lags between the institutionalized practices of science and philosophy of science, there are also cultural lags among the several sciences. Philosophers of science have preferred to examine physics and astronomy, because historically these have been the most advanced sciences since the Scientific Revolution benchmarked with Copernicus and Newton.
Institutional changes occur with lengthy time lags due to such impediments as intellectual mediocrity, technical incompetence, risk aversion, or vested interests in the conventional ideas and the received wisdom. The newer social and behavioral sciences have remained institutionally retarded. Naïve sociologists and economists are blithely complacent in their amateurish philosophizing about basic social-science research, often adopting prescriptions and proscriptions that contemporary philosophers of science recognize as anachronistic and fallacious. The result has been the emergence and survival of retarding philosophical superstitions in these retarded social sciences, especially to the extent that they have looked to their own less successful histories to formulate their ersatz philosophies of science.
But sociologists and economists continue to enforce a romantic philosophy of science, because they believe that sociocultural sciences must have fundamentally different philosophies of science from those of the natural sciences. Similarly behaviorist psychologists continue to impose the anachronistic positivist philosophy of science. On the contemporary pragmatist philosophy these sciences are institutionally retarded, because they erroneously impose preconceived semantical and ontological commitments as criteria for scientific criticism. Pragmatists can agree with Popper, who said that science is “subjectless”, meaning that valid science is not defined by any particular semantics or ontology.
Pragmatists tolerate any semantics or ontology that romantics or positivists may include in scientific explanations, theories and laws, but pragmatists recognize only the empirical criterion for criticism.
4.10 Scientific Discovery
“Discovery” refers to the development of new theories.
Contemporary pragmatism is consistent with the use of computerized discovery systems.
Discovery is the first step toward realizing the aim of science. The problem of scientific discovery for contemporary pragmatist philosophers of science is to proceduralize and then to mechanize the development of universally quantified statements for empirical testing with nonfalsifying test outcomes, thereby making laws for use in explanations and test designs.
Much has already been said in the above discussions of philosophy of scientific language in chapter 3 about the pragmatic basis for the definition of theory language, about the semantic basis for the individuation of theories, and about state descriptions. Those discussions will be assumed in the following comments about the mechanized development of new theories.
4.11 Discovery Systems
A discovery system is a computer system that produces a transition from an input-language state description containing currently available information to an output-language state description containing generated and tested new theories.
In the “Introduction” to his Models of Discovery (1977) Simon, one of the founders of artificial intelligence, wrote that dense mists of romanticism and downright know-nothingness have always surrounded the subject of scientific discovery and creativity. Therefore the most significant development addressing the problem of scientific discovery has been the relatively recent mechanized discovery systems in a new specialty called “computational philosophy of science”.
The ultimate aim of the computational philosopher of science is to facilitate the advancement of contemporary sciences by participating in and contributing to the successful basic-research work of the scientist. The contemporary pragmatist philosophy of science thus carries forward the pragmatist John Dewey’s emphasis on participation. Unfortunately few academic philosophers have the requisite computer skills much less a working knowledge of any empirical science for participation in basic research. Hopefully that will change in the twenty-first century.
Every useful discovery system to date has contained procedures both for constructional theory creation and for critical theory evaluation for quality control of the generated output and for quantity control of the system’s otherwise unmanageably large output. Theory creation introduces new language into the current state description to produce a new state description, while falsification eliminates language from the current state description to produce a new state description. Thus both theory development and theory testing enable a discovery system to offer a specific and productive diachronic dynamic procedure for linguistic change to advance empirical science.
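The paired dynamic of theory creation and falsification described above can be suggested with a minimal sketch. All names here are illustrative and drawn from no actual system: candidate theories are enumerated from the current state description, and only those surviving the empirical test routine enter the output state description.

```python
import itertools

def generate_and_test(variables, survives_test, max_terms=2):
    """Toy diachronic transition between state descriptions: enumerate
    candidate theories from the current descriptive vocabulary (theory
    creation), then let the empirical test eliminate candidates (quality
    and quantity control of the otherwise unmanageably large output)."""
    new_state = []
    for k in range(1, max_terms + 1):
        for combo in itertools.combinations(variables, k):
            if survives_test(combo):       # nonfalsifying test outcome
                new_state.append(combo)    # enters the output state description
    return new_state
```

The point of the sketch is only that both routines are needed: without the test predicate the combinatorial generation step would swamp the output.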
The discovery systems do not merely implement an inductivist strategy of searching for repetitions of individual instances, notwithstanding that statistical inference is employed in some system designs. The system designs are mechanized procedural strategies that search for patterns in the input information. Thus they implement Hanson’s thesis in Patterns of Discovery that in a growing research discipline inquiry seeks the discovery of new patterns in data. They also implement Feyerabend’s “plea for hedonism” in Criticism and the Growth of Knowledge (1971) to produce a proliferation of theories. But while many are made, mercifully few are chosen thanks to the empirical testing routines in the systems to control for quality of the outputted equations.
4.12 Types of Theory Development
In his Introduction to Metascience (1976) Hickey distinguishes three types of theory development, which he calls theory extension, theory elaboration and theory revision. This classification is vague, and some types may overlap.
Theory extension is the use of a currently tested and nonfalsified explanation to address a new scientific problem.
The extension could be as simple as adding hypothetical statements to make a general explanation more specific for the type of problem at hand. A more complex strategy for theory extension is analogy. In his Computational Philosophy of Science (1988) Thagard describes his strategy for mechanized theory development, which consists in patterning a proposed solution to a new problem by analogy with an existing explanation originally developed for a different subject. His discovery system PI (an acronym for “Process of Induction”), based on this strategy, reconstructed the development of the theory of sound waves by analogy with the description of water waves. The system was his Ph.D. dissertation in philosophy of science at the University of Toronto, Canada.
In his Mental Leaps: Analogy in Creative Thought (1995) Thagard further explains that analogy is a kind of nondeductive logic he calls “analogic”. It firstly involves the “source analogue”, which is the known domain that the investigator already understands in terms of familiar patterns, and secondly involves the “target analogue”, which is the unfamiliar domain that the investigator is trying to understand. Analogic is the strategy whereby the investigator understands the targeted domain by seeing it in terms of the source domain. Analogic requires a “mental leap”, because the two analogues may initially seem unrelated. And the mental leap is called a “leap”, because analogic is not conclusive like deductive logic.
It may be noted that if the output state description generated by analogy such as the PI system is radically different from anything previously seen by the affected scientific profession containing the target analogue, then the members of that profession may experience the communication constraint to the high degree that is usually associated with a theory revision. The communication constraint is discussed below (Section 4.26).
Theory elaboration is the correction of a currently falsified theory to create a new theory by adding new factors or variables that correct the falsified universally quantified statements and erroneous predictions of the old theory.
The new theory has the same test design as the old theory. The correction is not merely ad hoc excluding individual exceptional cases, but rather is a change in the universally quantified statements. This process is often misrepresented as “saving” a falsified theory, but in fact it creates a new one.
For example the introduction of a variable for the volume quantity and the development of a constant coefficient for the particular gas could elaborate Gay-Lussac’s law for gases into the combined gas law, which joins Gay-Lussac’s, Boyle’s and Charles’ laws. Similarly Friedman’s macroeconomic quantity theory might be elaborated into a Keynesian liquidity-preference function by the introduction of an interest rate, to account for the cyclicality manifest in an annual time series describing the calculated velocity parameter and to display the liquidity-trap phenomenon.
Pat Langley’s BACON discovery system exemplifies theory elaboration. It is named after the English philosopher Francis Bacon (1561-1626), who thought that scientific discovery can be routinized. BACON is a set of successive and increasingly sophisticated discovery systems that make quantitative laws and theories from input measurements. Langley designed and implemented BACON in 1979 as his Ph.D. dissertation, written in the Carnegie Mellon department of psychology under the direction of Simon. A description of the system is in Simon’s Scientific Discovery: Computational Explorations of the Creative Processes (1987).
BACON uses Simon’s heuristic-search design strategy, which may be construed as a sequential application of theory elaboration. Given sets of observation measurements for two or more variables, BACON searches for functional relations among the variables. BACON has simulated the discovery of several historically significant empirical laws including Boyle’s law of gases, Kepler’s third planetary law, Galileo’s law of motion of objects on inclined planes, and Ohm’s law of electrical current.
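The flavor of such a search for functional relations can be conveyed by a toy sketch. This is not Langley’s code, and the function and variable names are hypothetical: given paired measurements, try small integer exponents until some product of powers is nearly invariant across the observations, as with Kepler’s third planetary law.

```python
import itertools

def find_invariant(x, y, max_exp=3, tol=0.01):
    """Search small integer exponents (i, j) for which x**i * y**j is
    nearly constant across all observations -- a candidate law."""
    for i, j in itertools.product(range(-max_exp, max_exp + 1), repeat=2):
        if i == 0 and j == 0:
            continue
        vals = [(a ** i) * (b ** j) for a, b in zip(x, y)]
        mean = sum(vals) / len(vals)
        if all(abs(v - mean) / abs(mean) < tol for v in vals):
            return i, j   # x**i * y**j is (nearly) invariant
    return None

# Inner planets: mean orbital distance (AU) and period (years).
distances = [0.387, 0.723, 1.000, 1.524]
periods   = [0.241, 0.615, 1.000, 1.881]
exponents = find_invariant(distances, periods)
# exponents == (-3, 2): periods**2 / distances**3 is nearly constant.
```

The exhaustive search over exponents stands in for BACON’s more sophisticated heuristics, which construct ratio and product terms incrementally rather than by brute force.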
Theory revision is the reorganization of currently existing information to create a new theory.
Its results may be radically different and may thus be said to occasion a “paradigm change”, so it might be undertaken after repeated attempts at both theory extension and theory elaborations have failed to correct a previously falsified theory. The source for the input state description for mechanized theory revision consists of the descriptive vocabulary from the currently untested theories addressing the problem at hand. The descriptive vocabulary from previously falsified theories may also be included as inputs to make an accumulative state description, because the vocabularies in rejected theories can be productively cannibalized for their scrap value. The new theory is most likely to be called revolutionary if the revision is great, because theory revision typically produces greater change to the current language state than does theory extension or theory elaboration thus producing psychologically disorienting semantical dissolution.
Hickey’s METAMODEL discovery system constructed the Keynesian macroeconomic theory from U.S. statistical data available prior to 1936, the publication year of Keynes’s revolutionary General Theory of Employment, Interest and Money. The applicability of the METAMODEL for this theory revision was already known in retrospect from the fact that, as 1980 Nobel-laureate econometrician Lawrence Klein wrote in his Keynesian Revolution (1949, pp. 13 & 124), all the important parts of Keynes’s theory can be found in the works of one or another of his predecessors. Hickey’s METAMODEL discovery system, described in his Introduction to Metascience (1976), is a mechanized generative grammar with combinatorial transition rules producing longitudinal econometric models. The grammar is a finite-state generative grammar both to satisfy the collinearity constraint for the regression-estimated equations and to satisfy the formal requirements for executable multi-equation predictive models. The system tests for collinearity, statistical significance, serial correlation, goodness-of-fit properties of the equations, and for accurate out-of-sample retrodictions. Simon calls this combinatorial type of system a “generate-and-test” design.
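The generate-and-test design can be illustrated in miniature. The following is a hypothetical sketch, not Hickey’s METAMODEL: each candidate regressor is combinatorially proposed as an equation for the target series, estimated by least squares, and retained only if it passes a goodness-of-fit test.

```python
from statistics import fmean

def ols(x, y):
    """One-regressor least squares: returns (slope, intercept)."""
    mx, my = fmean(x), fmean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def generate_and_test_models(candidates, target, r2_min=0.9):
    """Generate an equation per candidate regressor (the combinatorial
    'generate' step) and keep only equations passing the R-squared
    goodness-of-fit test (the 'test' step)."""
    my = fmean(target)
    ss_tot = sum((t - my) ** 2 for t in target)
    kept = {}
    for name, series in candidates.items():
        slope, intercept = ols(series, target)
        ss_res = sum((t - (slope * v + intercept)) ** 2
                     for t, v in zip(target, series))
        r2 = 1 - ss_res / ss_tot
        if r2 >= r2_min:
            kept[name] = (slope, intercept, r2)
    return kept
```

A real system of this type would also test for collinearity, serial correlation and out-of-sample accuracy; the single R-squared filter here is only a stand-in for that battery of tests.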
Hickey also used his METAMODEL system in 1976 to develop a post-classical macrosociometric functionalist model of the American national society with fifty years of historical time-series data. To the shock, chagrin and dismay of academic sociologists it is not a social-psychological theory, and four sociological journals therefore rejected Hickey’s paper, which describes the model and its findings about the American national society’s dynamics and stability characteristics.
The paper is reprinted as “Appendix I” to BOOK VIII at www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.
The academic sociologists’ a priori ontological commitments to romanticism and its social-psychological reductionism rendered the editors and their chosen referees invincibly obdurate. The referees also betrayed their Luddite mentality toward mechanized theory development. The referee criticisms are described in “Appendix II” to BOOK VIII at www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.
Later in the mid-1980s Hickey integrated his macrosociometric model into a Keynesian macroeconometric model to produce an institutionalist macroeconometric model while employed as Deputy Director and Senior Economist for the Indiana Department of Commerce, Division of Economic Analysis, during the Orr-Mutz Administration.
4.13 Examples of Successful Discovery Systems
There are several examples of successful discovery systems in use. John Sonquist developed his AID system for his Ph.D. dissertation in sociology at the University of Chicago. His dissertation was written in 1961, when William F. Ogburn was department chairman, which was before the romantics took over the University of Chicago sociology department. He described the system in his Multivariate Model Building: Validation of a Search Strategy (1970). The system has long been used at the University of Michigan Survey Research Center. Now modified as the CHAID system using chi-squared (χ²), Sonquist’s discovery system is available commercially in both the SAS and SPSS software packages. Its principal commercial application is list processing for market analysis and risk analysis, though it is also used for academic investigations in social science. It is not only the oldest mechanized discovery system but also the most widely used in practical applications to date.
Robert Litterman developed his BVAR system for his Ph.D. dissertation in economics at the University of Minnesota. He described the system in his Techniques for Forecasting Using Vector Autoregressions (1984). The economists at the Federal Reserve Bank of Minneapolis have used his system for macroeconomic and regional economic analysis. The State of Connecticut and the State of Indiana have also used it for regional economic analysis.
Having previously received an M.A. degree in economics, Hickey had intended to develop his METAMODEL computerized discovery system for a Ph.D. dissertation in philosophy of science while a graduate student in the philosophy department of the University of Notre Dame, South Bend, Indiana. But the Notre Dame philosophers were obstructionist to Hickey’s views, and Hickey dropped out. He then developed his computerized discovery system as a nondegree student at San Jose City College in San Jose, CA, a two-year associate-arts degree community college, which had a better computer and better teachers than Notre Dame’s graduate school.
For thirty years afterwards Hickey used his discovery system occupationally, working as a research econometrician in both business and government. For six of those years he used his system for Institutionalist macroeconometric modeling and regional econometric modeling for the State of Indiana Department of Commerce. He also used it for econometric market analysis and risk analysis for various business corporations including USX/United States Steel Corporation, BAT(UK)/Brown and Williamson Company, Pepsi/Quaker Oats Company, Altria/Kraft Foods Company, Allstate Insurance Company, and TransUnion LLC.
In 2004 TransUnion’s Analytical Services Group purchased a perpetual license to use his METAMODEL system for their consumer credit risk analyses using their proprietary TrenData aggregated quarterly time series extracted from their truly huge national database of consumer credit files. Hickey used the models generated by the discovery system to forecast payment delinquency rates, bankruptcy filings, average balances and other consumer borrower characteristics that affect risk exposure for lenders. He also used the system for Quaker Oats and Kraft Foods to discover the sociological and demographic factors responsible for the secular long-term market dynamics of food products and other nondurable consumer goods.
In 2007 Michael Schmidt, a Ph.D. student in computational biology at Cornell University, and his dissertation director, Hod Lipson, developed their system EUREQA at Cornell University’s Artificial Intelligence Lab. The system automatically develops predictive analytical models from data using a strategy they call an “evolutionary search” for invariant relationships, which converges on the simplest and most accurate equations fitting the inputted data. The system splits the data set into two parts, one to develop the model and the other to validate its accuracy. If models do not perform exceptionally well on both tests, they will not be outputted for display to users. The outputted models are presented as mathematical equations, interactive visualizations, and plain-language explanations. The system has been used by many business corporations, universities and government agencies including Alcoa, California Institute of Technology, Cargill, Corning, Dow Chemical, General Electric, Amazon, Shell and NASA.
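The data-splitting safeguard can be sketched as follows. The names and thresholds are hypothetical, and the fitting and error measures are supplied by the caller; this is not the EUREQA implementation, only the split-and-validate idea: a candidate model is fitted on one half of the data and released only if it also performs well on the held-out half.

```python
import random

def split_validate(data, fit, error, max_err=0.05, seed=0):
    """Fit a candidate model on half of the observations, then require
    it to perform well on the held-out half before outputting it."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    train, holdout = shuffled[:half], shuffled[half:]
    model = fit(train)
    if error(model, train) <= max_err and error(model, holdout) <= max_err:
        return model        # passed both tests; display to the user
    return None             # fails validation; withheld from output
```

The design rationale is quality control: a model that merely memorizes its development sample will usually betray itself on the held-out sample.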
For more about discovery systems and computational philosophy of science readers are referred to BOOK VIII at www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.
4.14 Scientific Criticism
Criticism pertains to the criteria for the acceptance or rejection of theories.
The only criterion for scientific criticism that is acknowledged by the contemporary pragmatist is the empirical criterion.
The philosophical literature on scientific criticism has little to say about the specifics of experimental design. Most often philosophical discussion of criticism pertains to the criteria for acceptance or rejection of theories and more recently to the decidability of empirical testing.
In earlier times when the natural sciences were called “natural philosophy” and social sciences were called “moral philosophy”, nonempirical considerations operated as criteria for the criticism and acceptance of descriptive narratives. Even today some philosophers and scientists have used their semantical and ontological preconceptions as criteria for the criticism of scientific theories including preconceptions about causality or specific causal factors. Such semantical and ontological preconceptions have misled them to reject new empirically superior theories. In his Against Method Feyerabend noted that the ontological preconceptions used to criticize new theories have often been the semantical and ontological claims expressed by previously accepted and since falsified theories.
What historically has separated the empirical sciences from their origins in natural and moral philosophy is the empirical criterion, and it is responsible for the advancement of science and for its enabling practicality in application. Whenever in the history of science there has been a conflict between the empirical criterion and any nonempirical criteria for the evaluation of new theories, it is eventually the empirical criterion that ultimately decides theory selection.
Contemporary pragmatists accept relativized semantics, scientific realism, and thus ontological relativity, and they therefore reject all prior semantical or ontological criteria for scientific criticism including the romantics’ mentalistic ontology requiring social-psychological or any other kind of reductionism.
4.15 Logic of Empirical Testing
An empirical test is:
(1) An effective decision procedure that can be schematized as a modus tollens logical deduction from a set of one or several universally quantified theory statements expressible in a nontruth-functional hypothetical-conditional heuristic schema proposed for testing,
(2) Together with an antecedent particularly quantified description of the initial test conditions,
(3) Which jointly conclude to a consequent particularly quantified description of a produced (predicted) test-outcome event that is compared with the observed test-outcome description.
In order to express explicitly the dependency of the produced effect upon the realized initial conditions in an empirical test, the universally quantified theory statements can be schematized as a nontruth-functional hypothetical-conditional heuristic schema, i.e., as a statement with the logical form “For every A if A, then C.” This hypothetical-conditional heuristic schema represents a system of one or several universally quantified related theory statements or equations that describe a dependency of the occurrence of events described by “C” upon the occurrence of events described by “A”. In some cases the dependency is expressed as a bounded stochastic density function for the values of predicted probabilities. For advocates who believe in the theory, the hypothetical-conditional heuristic schema is the theory-language context that contributes meaning parts to the complex semantics of the theory’s constituent descriptive terms including the terms common to the theory and test design. But the theory’s semantical contribution cannot be operative in a test for the test to be independent of the theory.
The antecedent “A” includes the set of universally quantified statements of test design that describe the initial conditions and test procedures that must be realized for execution of an empirical test of the theory together with the description of the procedures needed for their realization. These statements are always presumed to be true or the test design is rejected as invalid. They contribute meaning parts to the complex semantics of the terms common to theory and test design, and do so independently of the theory’s semantical contributions. The universal logical quantification indicates that any execution of the experiment is but one of an indefinitely large number of possible test executions, whether or not the test is repeatable at will.
When the test is executed, the logical quantification of “A” is changed to particular quantification to describe the realized initial conditions in the individual test execution. When the universally quantified test-design and test-outcome statements have their logical quantification changed to particular quantification, the belief status and thus definitional rôle of the universally quantified test-design confer upon their particularly quantified versions the status of “fact” for all who accept the test design. The theory statements in the hypothetical-conditional heuristic schema are also given particular quantification. In a mathematically expressed theory the test execution consists in measurement actions and assignment of the resulting measurement values to the variables in “A”. In a mathematically expressed single-equation theory, “A” includes the independent variables in the equation of the theory. In a multi-equation system whether recursively structured or simultaneous, the exogenous variables are assigned values by measurement, and are included in “A”. In longitudinal models with dated variables the lagged-values of endogenous variables that are the initial condition for a test and that initiate the recursion through successive iterations to generate predictions, must also be included in “A”.
The consequent “C” represents the set of universally quantified statements of the theory that describe the predicted outcome of every correct execution of a test design. Its logical quantification is changed to particular quantification to describe the predicted outcome for the individual test execution. In a mathematically expressed single-equation theory, “C” is the dependent variable in the equation of the theory. When no value is assigned to any variable, the equation is universally quantified. When the prediction value of a dependent variable is calculated from the measurement values of the independent variables, it becomes particularly quantified. In a multi-equation theory, whether recursively structured or a simultaneous-equation system, the solution values for the endogenous variables are included in “C”. In longitudinal models with dated variables the current-dated values of endogenous variables that are calculated by solving the model through successive iterations are included in “C”.
The conditional statement of theory does not say “For every A and for every C if A, then C”. It only says “For every A if A, then C”. In other words the conditional statement of theory only expresses a sufficient condition for the production of the phenomenon described by C upon realization of the test conditions given by “A”, and not a necessary condition. Each of several alternative test designs described in “A” may be sufficient to produce “C”. This occurs for example, if there are theories proposing alternative causal factors for the same outcome described in “C”. Or if there are equivalent measurement procedures or instruments described in “A” that produce alternative measurements each having expected values falling within the range of the other’s measurement error.
Let another particularly quantified statement denoted “O” describe the observed test outcome of an individual test execution. The report of the test outcome “O” shares vocabulary with the prediction statements “C”. But the semantics of the terms in “O” is determined exclusively by the universally quantified test-design statements rather than by the statements of the theory, and thus for the test its semantics is independent of the theory’s semantical contribution. In an individual predictive test execution “O” represents observations and/or measurements made and measurement values assigned after the prediction is made, and it too has particular logical quantification to describe the observed outcome resulting from the individual execution of the test. There are three outcome scenarios:
Scenario I: If “A” is false in an individual test execution, then regardless of the truth of “C” the test execution is simply invalid due to a scientist’s failure to comply with its test design, and the empirical adequacy of the theory remains unaffected and unknown. The empirical test is conclusive only if it is executed in accordance with its test design. Contrary to the logical positivists, the truth table for truth-functional logic is therefore not applicable to testing in empirical science, because in science a false antecedent “A” does not make the hypothetical-conditional statement true, as it would in truth-functional logic.
Scenario II: If “A” is true and the consequent “C” is false, as when the theory conclusively makes erroneous predictions, then the theory is falsified, because the hypothetical conditional “For every A if A, then C” is false. Falsification occurs when the statements “C” and “O” are not accepted as describing the same thing within the range of vagueness and/or measurement error, which are manifestations of empirical underdetermination. The falsifying logic of the test is the modus tollens argument form, according to which the conditional-hypothetical heuristic schema expressing the theory is falsified, when one affirms the antecedent clause and denies the consequent clause. This is the falsificationist philosophy of scientific criticism advanced by Charles S. Peirce, the founder of classical pragmatism, and later advocated by Popper.
For more on Popper readers are referred to BOOK V at www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.
The response to a falsification may or may not be attempts to develop a new theory. Responsible scientists will not deny a falsifying outcome of a test, so long as they accept its test design and test execution. Characterization of falsifying anomalous cases is informative, because it contributes to articulation of a new problem that a new and more empirically adequate theory must solve. Some scientists may, as Kuhn said, simply believe that the anomalous outcome is an unsolved problem for the tested theory without attempting to develop a new theory. But such a response is either an ipso facto rejection of the tested theory, a de facto rejection of the test design or simply a disengagement from attempts to solve the new problem. And contrary to Kuhn this procrastinating response to anomaly need not imply that the falsified theory has been given institutional status, unless the science itself is institutionally retarded.
For more on Kuhn readers are referred to BOOK VI at www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.
Scenario III: If “A” and “C” are both true, the hypothetical-conditional heuristic schema expressing the tested theory is validly accepted as asserting a causal dependency between the phenomena described by the antecedent and consequent clauses. The hypothetical-conditional statement does not merely assert a Humean psychological constant conjunction. Causality is an ontological category describing a real dependency, and the causal claim is asserted on the basis of ontological relativity due to the empirical adequacy demonstrated by the nonfalsifying test outcome. Because the nontruth-functional hypothetical-conditional statement is empirical, causality claims are always subject to future testing, falsification, and then revision. This is also true when the conditional expresses a mathematical function.
Furthermore if the test design is afterwards modified such that it changes the characterization of the subject of the theory, then even a nonfalsifying test outcome should be reconsidered and the theory should be retested for the new definition of the subject. If the retesting produces a falsifying outcome, then the new information in the modification of the test design has made the terms common to the two test designs equivocal and has contributed parts to alternative meanings. But if the test outcome is not falsification, then the new information is merely new parts added to the univocal meaning of the terms common to the old and new test-design language. Such would be the case if the new information resembles what the positivists called a new “operational definition”, as for example a new and additional way to measure temperature for extreme values that cannot be measured by the old measurement operation, but which yields the same temperature values within the range of measurement errors, where the alternative operations produce overlapping results.
On the contemporary pragmatist philosophy a theory that has been tested is no longer theory, once the test outcome is known and the test execution is accepted as correct. If the theory has been falsified, it is merely rejected language unless the falsified theory is still useful for the lesser truth it contains. But if it has been tested with a nonfalsifying test outcome, then it is empirically warranted and thus deemed a scientific law until it is tested again and falsified. The law is still hypothetical because it is empirical, but it is less hypothetical than it had previously been as a theory proposed for testing. The law may thereafter be used either in an explanation or in a test design for testing some other theory.
For example the elaborate engineering documentation for the Large Hadron Collider at CERN, the Conseil Européen pour la Recherche Nucléaire, is based on previously tested science. After installation of the collider is complete and validated, the science in that engineering is not what is tested when the particle accelerator is operated for the microphysical experiments, but rather the employed science is presumed true and contributes to the test design semantics for investigations and experiments performed with the accelerator.
4.16 Test Logic Illustrated
Consider the simple heuristic case of Gay-Lussac’s law for a fixed amount of gas in an enclosed container as a theory proposed for testing. The container’s volume is constant throughout the experimental test, and therefore is not represented by a variable. The theory is (T'/T)*P = P', where the variable P means gas pressure, the variable T means the gas temperature, and the variables T' and P' are incremented values for T and P in a controlled experimental test, where T' = T ± ΔT, and P' is the predicted outcome that is produced by execution of the test design.
The statement of the theory may be heuristically schematized in the hypothetical-conditional form “For every A if A, then C”, where “A” includes (T'/T)*P, and “C” states the calculated prediction value of P' when the temperature is incremented by ΔT from T to T'. The theory is universally quantified, and thus claims to be true for every execution of the experimental test. And for proponents of the theory, who are believers in it, the semantics of T, P, T' and P' mutually contribute to one another’s meanings, a fact exhibited explicitly in this case because the equation is monotonic, such that each variable can be expressed mathematically as a function of all the others by simple algebraic transformations.
“A” also includes the universally quantified test-design statements. These statements describe the experimental setup, the procedures for executing the test, and the initial conditions to be realized for a test execution. They include description of the equipment used, including the container, the heat source, the instrumentation used to measure the magnitudes of temperature and pressure, and the units of measurement for the magnitudes involved, namely the pressure units in atmospheres and the temperature units in degrees Kelvin (°K). And they describe the procedure for executing the repeatable experiment. This test-design language is also universally quantified and thus also contributes meaning components to the semantics of the variables P, T and T' in “A” for all interested scientists who accept the test design.
The procedure for performing the experiment must be executed as described in the test-design language in order for the test to be valid. The procedure includes first measuring and recording the initial values of T and P. For example let T = 200°K and P = 1.6 atmospheres. Let the incremented measurement value be recorded as ΔT = 200°K, so that the measurement value for T' is made to be 400°K. The description of the execution of the procedure and the recorded magnitudes are expressed in particularly quantified test-design language for this particular test execution. The value of P' is then calculated.
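The calculation of P' from these recorded magnitudes can be sketched in Python; the function name and the validity check are illustrative assumptions, not part of the text:

```python
def predicted_pressure(p, t, t_prime):
    """Gay-Lussac's law at constant volume: P' = (T'/T) * P.

    p       -- initial pressure in atmospheres
    t       -- initial temperature in degrees Kelvin
    t_prime -- incremented temperature T' = T + delta_T
    """
    if t <= 0:
        raise ValueError("absolute temperature must be positive")
    return (t_prime / t) * p

# The recorded magnitudes from the example: T = 200 degrees K,
# P = 1.6 atm, delta_T = 200 degrees K, hence T' = 400 degrees K.
p_prime = predicted_pressure(p=1.6, t=200.0, t_prime=400.0)  # 3.2 atmospheres
```

Doubling the absolute temperature at constant volume doubles the pressure, which is why the predicted value below comes to 3.2 atmospheres.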
The test outcome consists of measuring and recording the resulting observed incremented value for pressure. Let this outcome be represented by the particularly quantified statement O using the same vocabulary as in the test design. But only the universally quantified test-design statements define the semantics of O, so that the test is independent of the theory. In this simple experiment one can simply denote the measured value for pressure by the variable O. The test execution would also likely be repeated to enable estimation of the range of measurement error in T, T', P and O, and of the measurement error propagated into the calculation of P'. A mean average of the measurement values from repeated executions would be calculated for each of these variables. Deviations from the mean are estimates of the amounts of measurement error, and statistical standard deviations could summarize the dispersion of measurement errors about the mean averages.
The mean average of the test-outcome measurements for O is compared with the mean average of the predicted measurements for P' to determine the test outcome. If the values of P' and O are equivalent within their estimated ranges of measurement error, i.e., are sufficiently close to 3.2 atmospheres as to fall within the measurement errors, then the theory is deemed not to have been falsified. After repeated executions with more extreme incremented values and no falsifying outcome, the theory will likely be deemed sufficiently warranted empirically to be called a law, as it is today.
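The comparison of the observed outcomes O with the predicted values P' across repeated test executions can be sketched as follows; the two-standard-deviation tolerance rule, the sample data, and the function name are illustrative assumptions standing in for "equivalent within measurement error":

```python
from statistics import mean, stdev

def not_falsified(predicted, observed, k=2.0):
    """Deem the theory not falsified when the mean of the observed
    outcomes O agrees with the mean of the predicted values P'
    within k combined sample standard deviations (an illustrative
    tolerance rule for 'within measurement error')."""
    tolerance = k * (stdev(predicted) + stdev(observed))
    return abs(mean(predicted) - mean(observed)) <= tolerance

# Hypothetical measurements (atmospheres) from four repeated executions:
p_prime_runs = [3.19, 3.21, 3.20, 3.18]   # calculated predictions P'
observed_runs = [3.22, 3.20, 3.19, 3.21]  # observed outcomes O
```

Here the means differ by 0.01 atmospheres while the combined tolerance is about 0.05, so this run of the test would not falsify the theory.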
4.17 Semantics of Empirical Testing
Much has already been said about the artifactual character of semantics, about componential semantics, and about semantical rules. In what follows these concepts are brought to bear upon the semantics of empirical testing and of test outcomes.
The ordinary semantics of empirical testing is as follows:
If a test has a nonfalsifying outcome, then for the theory’s developer and advocates the semantics of the tested theory is unchanged. Since they had proposed the theory in the belief that it would not be falsified, their belief in the theory makes it function for them as a set of one or several semantical rules. Thus for them both the theory and the test design are accepted as true, and after the nonfalsifying test outcome both the theory and test-design statements continue to contribute parts to the complex meanings of the descriptive terms common to both theory and test design, as before the test.
But if the test outcome is a falsification, then there is a semantical change produced in the theory for the developer and advocates of the tested theory who accept the test outcome as a falsification. The unchallenged test-design statements continue to contribute semantics to the terms common to the theory and test design by contributing their parts to the meaning complexes of each of those common terms. But the component parts of those meanings contributed by the falsified theory statements are excluded from the semantics of those common terms for the proponents who no longer believe in the theory due to the falsifying test outcome.