INTRODUCTION TO PHILOSOPHY OF SCIENCE

Book I

      
4.08 Philosophy’s Cultural Lag

Adequate understanding of the successful departures from institutionalized basic research is elusive even for philosophers.  There exists a time lag between the evolution of the institution of science and developments in philosophy of science, since the latter depend on the realization of the former.  For example more than a quarter of a century passed between Heisenberg’s philosophical reflections on the language of his indeterminacy relations in quantum theory and the consequent emergence and ascendancy of the contemporary pragmatist philosophy of science in academic philosophy.

4.09 Cultural Lags among Sciences

Not only are there cultural lags between the institutionalized practices of science and philosophy of science, there are also cultural lags among the several sciences.  Philosophers of science have preferred to examine physics and astronomy, because these have historically been the most advanced sciences since the Scientific Revolution benchmarked with Copernicus and Newton.

Institutional changes occur with lengthy time lags due to such impediments as intellectual mediocrity, technical incompetence, risk aversion, or vested interests in the conventional ideas and the received wisdom.  The newer social and behavioral sciences have remained institutionally retarded.  Naïve sociologists and economists are blithely complacent in their amateurish philosophizing about basic social-science research, often adopting prescriptions and proscriptions that contemporary philosophers of science recognize as anachronistic and fallacious.  The result has been the emergence and survival of retarding philosophical superstitions in these retarded social sciences, especially to the extent that they have looked to their own less successful histories to formulate their ersatz philosophies of science.

Thus sociologists and economists continue to enforce a romantic philosophy of science, because they believe that sociocultural sciences must have fundamentally different philosophies of science from the natural sciences.  Similarly behaviorist psychologists continue to impose the anachronistic positivist philosophy of science.  On the contemporary pragmatist philosophy these sciences are institutionally retarded, because they erroneously impose preconceived semantical and ontological commitments as criteria for scientific criticism.  Pragmatists can agree with Popper, who said that science is “subjectless”, meaning that science is not defined by any particular semantics or ontology.

Pragmatists tolerate any semantics or ontology that romantics or positivists may include in scientific explanations, theories and laws, but pragmatists recognize only the empirical criterion for criticism.


4.10 Scientific Discovery

“Discovery” refers to the development of new theories.

Contemporary pragmatism is consistent with use of computerized discovery systems.

Discovery is the first step toward realizing the aim of science.  The problem of scientific discovery for contemporary pragmatist philosophers of science is to describe and to proceduralize the development of universally quantified statements for empirical testing with nonfalsifying test outcomes, thereby making laws for use in explanations and test designs.

Much has already been said in the above discussions of philosophy of scientific language in Chapter 3 about the pragmatic basis for the definition of theory language, about the semantic basis for the individuation of theories, and about state descriptions.  Those discussions will be assumed in the following comments about the mechanized development of new theories.


4.11 Discovery Systems

The discovery system produces a transition from an input-language state description containing currently available information to an output-language state description containing the generated and tested new theories.

In the “Introduction” to his Models of Discovery, 1978 Nobel-laureate Herbert Simon, one of the founders of artificial intelligence, wrote that dense mists of romanticism and downright know-nothingness have always surrounded the subject of scientific discovery and creativity.  Therefore the most significant development addressing the problem of scientific discovery has been the relatively recent emergence of mechanized discovery systems in a new specialty called “computational philosophy of science”.

The ultimate aim of the computational philosopher of science is to facilitate the advancement of contemporary sciences by participating in and contributing to the successful basic-research work of the scientist.  The contemporary pragmatist philosophy of science thus carries forward John Dewey’s emphasis on participation.  But few academic philosophers have the requisite computer skills, much less a working knowledge of any empirical science, for participation in basic research.  That may change.

Every useful discovery system to date has contained procedures both for constructional theory creation and for critical theory evaluation, the latter providing quality control of the generated output and quantity control of the system’s otherwise unmanageably large output.  Theory creation introduces new language into the current state description to produce a new state description, while falsification eliminates language from the current state description to produce a new state description.  Thus both theory development and theory testing enable a discovery system to offer a specific and productive diachronic procedure for linguistic change that advances empirical science.
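
The following minimal Python sketch illustrates this two-part architecture of constructional creation followed by critical evaluation.  All names in it (generate_candidates, passes_empirical_test, discovery_system) are hypothetical placeholders chosen for illustration; the sketch depicts only the generic pattern, not the design of any particular discovery system.

```python
# Generic sketch of a discovery system's two procedures: constructional
# theory creation (generation of candidate theories from the input state
# description) and critical evaluation (quality and quantity control of
# the generated output).  Names and the evaluation criterion are
# illustrative assumptions, not any actual system's design.

from itertools import combinations

def generate_candidates(vocabulary, max_terms=3):
    """Constructional creation: combine descriptive terms into candidate theories."""
    for r in range(1, max_terms + 1):
        for combo in combinations(vocabulary, r):
            yield combo

def passes_empirical_test(candidate, test_data, evaluate):
    """Critical evaluation: retain only candidates not eliminated by testing."""
    return evaluate(candidate, test_data)

def discovery_system(input_state_description, test_data, evaluate):
    """Transition from an input state description to an output state description."""
    return [c for c in generate_candidates(input_state_description)
            if passes_empirical_test(c, test_data, evaluate)]

# Trivial usage with a placeholder evaluation criterion.
vocabulary = ["income", "interest_rate", "money_supply"]
print(discovery_system(vocabulary, test_data=None,
                       evaluate=lambda cand, data: len(cand) == 2))
```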

The discovery systems do not merely implement an inductivist strategy of searching for repetitions of individual instances, notwithstanding that statistical inference is employed in some system designs.  The system designs are mechanized procedural strategies that search for patterns in the input information.  Thus they implement Hanson’s thesis in Patterns of Discovery that in a growing research discipline inquiry seeks the discovery of new patterns in data.  They also implement Feyerabend’s “plea for hedonism” in Criticism and the Growth of Knowledge to produce a proliferation of theories.  But while many are made, mercifully few are chosen due to the empirical testing routines in the systems.


4.12 Types of Theory Development

In his Introduction to Metascience (1976) Hickey distinguishes three types of theory development, which he calls theory extension, theory elaboration and theory revision.

Theory extension is the use of a currently tested and nonfalsified explanation to address a new scientific problem. 

The extension could be as simple as adding hypothetical statements to make a general explanation more specific for the problem at hand.  A more complex strategy for theory extension is analogy.  In his Computational Philosophy of Science (1988) Thagard describes his strategy for mechanized theory development, which consists in patterning a proposed solution to a new problem by analogy with an existing explanation originally developed for a different subject.  His discovery system based on this strategy, called PI (an acronym for “Process of Induction”), reconstructed the development of the theory of sound waves by analogy with the description of water waves.  The system was his Ph.D. dissertation.

In his Mental Leaps: Analogy in Creative Thought (1995) Thagard further explains that analogy is a kind of nondeductive logic, which he calls “analogic”.  It firstly involves the “source analogue”, which is the known domain that the investigator already understands in terms of familiar patterns, and secondly involves the “target analogue”, which is the unfamiliar domain that the investigator is trying to understand.  Analogic is the strategy whereby the investigator understands the targeted domain by seeing it in terms of the source domain.  Analogic requires a “mental leap”, because the two analogues may initially seem unrelated.  And the mental leap is called a “leap”, because analogic is not conclusive like deductive logic.
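
A toy Python sketch may make the source-to-target mapping of analogic concrete.  It is only an illustration of the idea of mapping familiar source-analogue patterns onto an unfamiliar target domain; it is not Thagard’s PI system, and the roles and correspondences listed are hypothetical simplifications of the sound-wave example.

```python
# Toy illustration of analogic: the familiar source analogue (water waves)
# is mapped onto the unfamiliar target domain (sound) by substituting
# corresponding concepts.  The roles and correspondences are simplified
# illustrations, not Thagard's PI representation.

source_analogue = {                      # familiar domain: water waves
    "medium": "water surface",
    "disturbance": "dropped stone",
    "propagation": "expanding circular ripples",
    "interaction": "reflection from barriers",
}

analogical_mapping = {                   # hypothesized correspondences
    "water surface": "air",
    "dropped stone": "vibrating object",
    "expanding circular ripples": "expanding spherical pressure waves",
    "reflection from barriers": "echoes from walls",
}

# The target analogue (sound) is understood by seeing it in source terms.
target_analogue = {role: analogical_mapping[value]
                   for role, value in source_analogue.items()}
print(target_analogue)
```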

It may be noted that if the output state description generated by an analogy-based system such as PI is radically different from anything previously seen by the affected scientific profession containing the target analogue, then the members of that profession may experience the communication constraint to the high degree usually associated with a theory revision.  The communication constraint is discussed below (Section 4.26).

Theory elaboration is the correction of a currently falsified theory to create a new theory by adding new factors or variables that correct the falsified universally quantified statements and erroneous predictions of the old theory. 

Langley’s system BACON is a sequential application of theory elaboration using Simon’s “heuristic search” algorithm.

The new theory has the same test design as the old theory.  The correction is not merely an ad hoc exclusion of individual exceptional cases, but rather a change in the universally quantified statements.  This process is often misrepresented as “saving” a falsified theory, but in fact it creates a new one.

For example the introduction of a variable for the volume quantity and development of a constant coefficient for the particular gas could elaborate Gay-Lussac’s law for gases into the combined Gay-Lussac’s law, Boyle’s law and Charles’ law.  Similarly Friedman’s macroeconomic quantity theory might be elaborated into a Keynesian liquidity-preference function by the introduction of an interest rate, to account for the cyclicality manifest in an annual time series describing the calculated velocity parameter and to display the liquidity trap phenomenon.

Pat Langley’s BACON discovery system implements theory elaboration.  It is named after the English philosopher Francis Bacon (1561-1626), who thought that scientific discovery can be routinized.  BACON is a set of successive and increasingly sophisticated discovery systems that make quantitative laws and theories from input measurements.  Langley designed and implemented BACON in 1979 as his Ph.D. dissertation written in the Carnegie-Mellon department of psychology under the direction of Simon.  A description of the system is in Simon’s Scientific Discovery: Computational Explorations of the Creative Processes (1987).

BACON uses Simon’s heuristic-search design strategy, which may be construed as a sequential application of theory elaboration.  Given sets of observation measurements for two or more variables, BACON searches for functional relations among the variables.  BACON has simulated the discovery of several historically significant empirical laws including Boyle’s law of gases, Kepler’s third planetary law, Galileo’s law of motion of objects on inclined planes, and Ohm’s law of electrical current.
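
The kind of search BACON performs can be illustrated with a much simplified Python sketch that looks for small integer exponents making a product of powers of two measured variables nearly constant, as in Kepler’s third law, where the cube of a planet’s distance divided by the square of its period is invariant.  The code is an illustration of the heuristic-search idea under that simplifying assumption, not a reconstruction of Langley’s program, and the tolerance value is arbitrary.

```python
# Simplified illustration of a BACON-style search for an invariant:
# find small integer exponents (a, b) such that x**a * y**b is nearly
# constant across the observations.  This is only an illustration of the
# heuristic-search idea, not Langley's actual BACON code; the tolerance
# is an arbitrary choice.

def find_invariant(x_values, y_values, max_exp=3, tolerance=0.01):
    best = None
    for a in range(-max_exp, max_exp + 1):
        for b in range(-max_exp, max_exp + 1):
            if a == 0 and b == 0:
                continue
            terms = [x**a * y**b for x, y in zip(x_values, y_values)]
            mean = sum(terms) / len(terms)
            spread = max(abs(t - mean) / abs(mean) for t in terms)
            if spread < tolerance and (best is None or spread < best[2]):
                best = (a, b, spread)
    return best   # (exponent of x, exponent of y, relative spread) or None

# Planetary distances D (astronomical units) and periods P (years).
D = [0.387, 0.723, 1.000, 1.524, 5.203]
P = [0.241, 0.615, 1.000, 1.881, 11.862]
print(find_invariant(D, P))   # expect (3, -2) or its negation (-3, 2)
```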

Theory revision is a reorganization of currently existing information to create a new theory.

Hickey’s METAMODEL system uses what Simon called a “generate-and-test” design.

Its results may be radically different, so it might be undertaken after repeated attempts at both theory extension and theory elaboration have failed to correct a previously falsified theory.  The source for the input state description for mechanized theory revision consists of the descriptive vocabulary from the currently untested theories addressing the problem at hand.  The descriptive vocabulary from previously falsified theories may also be included as inputs to make an accumulative state description, because the vocabularies in rejected theories can be productively cannibalized for their scrap value.  The new theory is most likely to be called revolutionary if the revision is great, because theory revision typically produces greater change to the current language state than does theory extension or theory elaboration, thus producing a psychologically disorienting semantical dissolution.

Hickey’s METAMODEL discovery system synthesizes theory revisions.  It constructed the Keynesian macroeconomic theory from U.S. statistical data available prior to 1936, the publication year of Keynes’ revolutionary General Theory of Employment, Interest and Money.  The applicability of the METAMODEL for this theory revision was known in retrospect from the fact that, as 1980 Nobel-laureate econometrician Lawrence Klein wrote in his Keynesian Revolution (1947), all the important parts of Keynes’ theory can be found in the works of one or another of his predecessors.  Hickey’s METAMODEL discovery system described in his Introduction to Metascience (1976) is a mechanized generative grammar with combinatorial transition rules producing econometric models.  The grammar is a finite-state generative grammar both to satisfy the collinearity constraint for the regression-estimated equations and to satisfy the formal requirements for executable multi-equation predictive models.  The system tests for collinearity, statistical significance, serial correlation, goodness-of-fit properties of the equations, and for accurate out-of-sample retrodictions.  Simon calls this combinatorial type of system a “generate-and-test” design.
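
The combinatorial generate-and-test idea can be sketched in Python as follows.  The sketch enumerates combinations of candidate explanatory variables, estimates each candidate equation by ordinary least squares, and retains only the equations meeting simple in-sample fit and out-of-sample accuracy thresholds.  It illustrates the generate-and-test design in general; it is not Hickey’s METAMODEL, and the variable names, synthetic data and thresholds are assumptions made only for the example.

```python
# Generate-and-test sketch for equation discovery: enumerate candidate
# regressor combinations, estimate each equation by least squares, and
# keep only equations that fit well in-sample and predict acceptably
# out-of-sample.  Data, names and thresholds are illustrative only.

import numpy as np
from itertools import combinations

def generate_and_test(data, target, candidates, r2_min=0.99, max_out_err=0.02, holdout=4):
    y = data[target]
    y_in, y_out = y[:-holdout], y[-holdout:]
    accepted = []
    for r in range(1, len(candidates) + 1):
        for combo in combinations(candidates, r):
            X = np.column_stack([data[v] for v in combo] + [np.ones(len(y))])
            coef, *_ = np.linalg.lstsq(X[:-holdout], y_in, rcond=None)
            fitted = X[:-holdout] @ coef
            r2 = 1 - np.sum((y_in - fitted) ** 2) / np.sum((y_in - y_in.mean()) ** 2)
            out_err = np.mean(np.abs(X[-holdout:] @ coef - y_out) / np.abs(y_out))
            if r2 >= r2_min and out_err <= max_out_err:   # quality and quantity control
                accepted.append((combo, np.round(coef, 3), round(r2, 3)))
    return accepted

# Synthetic illustration: consumption depends on income and an interest rate.
rng = np.random.default_rng(0)
n = 40
income = rng.normal(100, 10, n)
interest_rate = rng.normal(5, 1, n)
irrelevant = rng.normal(0, 1, n)
consumption = 0.8 * income - 2.0 * interest_rate + rng.normal(0, 0.5, n)
data = {"consumption": consumption, "income": income,
        "interest_rate": interest_rate, "irrelevant": irrelevant}
print(generate_and_test(data, "consumption", ["income", "interest_rate", "irrelevant"]))
```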

Hickey also used his METAMODEL system in 1976 to develop a post-classical macrosociometric functionalist model of the American national society with fifty years of historical time-series data.  To the shock, chagrin and dismay of academic sociologists it is not a social-psychological theory, and four sociological journals therefore rejected Hickey’s paper describing the model and its findings about the national society’s dynamics and stability characteristics.  The paper is reprinted as “Appendix I” to BOOK VIII at www.philsci.com.

The academic sociologists’ a priori ontological commitments to romanticism and social-psychological reductionism rendered the referees invincibly obdurate.  Their criticisms also betrayed their Luddite mentality toward mechanized theory development.  Later in the mid-1980’s Hickey integrated his macrosociometric model into a Keynesian macroeconometric model to produce an institutionalist macroeconometric model for the Indiana Department of Commerce, Division of Economic Analysis.


4.13 Examples of Successful Discovery Systems

There are several examples of successful discovery systems in actual use.  John Sonquist developed his AID system for his Ph.D. dissertation in sociology at the University of Chicago.  His dissertation was written in 1961, when William F. Ogburn was department chairman, which was before the romantics took over the University of Chicago sociology department.  He described the system in his Multivariate Model Building: Validation of a Search Strategy (1970).  The system has long been used at the University of Michigan Survey Research Center.  Now modified as the CHAID system using the χ2 statistic, Sonquist’s discovery system is available commercially in the SAS and SPSS statistical software packages.  Its principal commercial applications are list processing for market analysis and risk analysis, and it is also used for academic investigations in social science.  It is not only the oldest mechanized discovery system but also the most widely used in practical applications to date.
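
The split-selection idea behind AID and its chi-squared successor CHAID can be illustrated with a short Python sketch: for each candidate predictor, cross-tabulate it against the outcome and choose the predictor with the most significant chi-squared statistic.  This only illustrates the search principle, not Sonquist’s system; the records and variable names are hypothetical.

```python
# Illustration of the chi-squared split search behind CHAID-style
# segmentation: choose the predictor whose cross-tabulation with the
# outcome is most statistically significant.  Data and names are
# hypothetical; this is not Sonquist's AID/CHAID code.

from collections import Counter
from scipy.stats import chi2_contingency

def crosstab(records, predictor, outcome):
    counts = Counter((r[predictor], r[outcome]) for r in records)
    rows = sorted({r[predictor] for r in records})
    cols = sorted({r[outcome] for r in records})
    return [[counts[(row, col)] for col in cols] for row in rows]

def best_split(records, predictors, outcome):
    scored = []
    for p in predictors:
        chi2, pval, dof, _ = chi2_contingency(crosstab(records, p, outcome))
        scored.append((pval, p, chi2))
    return min(scored)   # smallest p-value = most significant split

records = [
    {"region": "north", "income": "high", "responded": "yes"},
    {"region": "north", "income": "low",  "responded": "no"},
    {"region": "south", "income": "high", "responded": "yes"},
    {"region": "south", "income": "low",  "responded": "yes"},
    {"region": "north", "income": "high", "responded": "yes"},
    {"region": "south", "income": "low",  "responded": "no"},
]
print(best_split(records, ["region", "income"], "responded"))
```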

Robert Litterman developed his BVAR system for his Ph.D. dissertation in economics at the University of Minnesota.  He described the system in his Techniques for Forecasting Using Vector Autoregressions (1984).  The economists at the Federal Reserve Bank of Minneapolis have long used his system for macroeconomic and regional economic analysis.  The State of Connecticut and the State of Indiana have also used it for regional economic analysis.
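
What a vector autoregression is may be clarified by a brief Python sketch.  A VAR relates each variable’s current value to the lagged values of all the variables in the system.  The sketch estimates an unrestricted first-order VAR by least squares on synthetic data and forecasts forward; Litterman’s BVAR additionally shrinks the coefficient estimates toward a Bayesian prior (his “Minnesota prior”), which is not reproduced here, and the data and names are illustrative assumptions.

```python
# A vector autoregression relates each variable's current value to the
# lagged values of all variables in the system.  This sketch estimates an
# unrestricted VAR(1) by least squares on synthetic data; Litterman's
# BVAR additionally applies Bayesian shrinkage, which is not shown here.

import numpy as np

def fit_var1(Y):
    """Y: (T, k) array of k time series.  Returns intercept c and matrix A
    in the model  y_t = c + A @ y_{t-1} + error."""
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])   # constant and lagged values
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T       # intercept vector, coefficient matrix A

def forecast(c, A, y_last, steps=4):
    path, y = [], y_last
    for _ in range(steps):
        y = c + A @ y
        path.append(y)
    return np.array(path)

# Synthetic two-variable example (variable meanings are hypothetical).
rng = np.random.default_rng(1)
T, k = 80, 2
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = [0.1, 0.2] + np.array([[0.7, 0.1], [0.2, 0.6]]) @ Y[t - 1] + rng.normal(0, 0.1, k)

c, A = fit_var1(Y)
print(forecast(c, A, Y[-1]))
```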

Having previously received an M.A. degree in economics Hickey had intended to develop his METAMODEL computerized discovery system for a Ph.D. dissertation in philosophy of science while a graduate student in the philosophy department of the University of Notre Dame, South Bend, Indiana.  But the Notre Dame philosophers under the chairmanship of a Reverend Ernan McMullin were obstructionist to Hickey’s views, and Hickey dropped out.  He then developed his computerized discovery system as a nondegree student at San Jose City College in San Jose, California.

For thirty years afterwards Hickey used his discovery system occupationally, working as a research econometrician in both business and government.  For six of those years he used his system for Institutionalist macroeconometric modeling and regional econometric modeling for the State of Indiana Department of Commerce.  He also used it for econometric market analysis and risk analysis for various business corporations including USX/United States Steel Corporation, BAT(UK)/Brown and Williamson Company, Pepsi/Quaker Oats Company, Altria/Kraft Foods Company, Allstate Insurance Company, and TransUnion LLC.

In 2004 TransUnion’s Analytical Services Group purchased a perpetual license to use his METAMODEL system for their consumer credit risk analyses using their proprietary TrenData aggregated quarterly time series extracted from their large national database of consumer credit files.  Hickey used the models generated by the discovery system to forecast payment delinquency rates, bankruptcy filings, average balances and other consumer borrower characteristics that constitute risk exposure for lenders, especially during the contractionary phase of the business cycle.  He also used the system at Quaker Oats and Kraft Foods to discover the sociological and demographic factors responsible for the secular long-term market dynamics of food products and other nondurable consumer goods.

Readers wishing to know more about discovery systems and computational philosophy of science are referred to BOOK VIII at www.philsci.com.

 
4.14 Scientific Criticism

Criticism pertains to the criteria for the acceptance or rejection of theories.

The only criterion for scientific criticism that is acknowledged by the contemporary pragmatist is the empirical criterion.

The philosophical literature on scientific criticism has little to say about the specifics of experimental design.  Most often philosophical discussion of criticism pertains to the criteria for acceptance or rejection of theories and more recently to the decidability of empirical testing.  

In earlier times when the natural sciences were called “natural philosophy” and social sciences were called “moral philosophy”, nonempirical criteria operated as criteria for the criticism and acceptance of descriptive narratives.  Even today some philosophers and scientists have used their semantical and ontological preconceptions as criteria for the criticism of scientific theories including preconceptions about causality or specific causal factors.  Such semantical and ontological preconceptions have misled them to reject new empirically superior theories.  In his Against Method Feyerabend noted that the ontological preconceptions used to criticize new theories have often been the semantical and ontological claims expressed by previously accepted theories. 

But what historically has separated the empirical sciences from their origins in natural and moral philosophy is the empirical criterion, and it is responsible for the advancement of science and for its enabling practicality in application.  Whenever in the history of science there has been a conflict between the empirical criterion and any nonempirical criteria for the evaluation of new theories, it is the empirical criterion that ultimately decides theory selection.

Contemporary pragmatists accept relativized semantics, scientific realism, and thus ontological relativity, and they therefore reject all prior semantical or ontological criteria for scientific criticism including the romantics’ mentalistic ontology requiring social-psychological reductionism. 


4.15 Logic of Empirical Testing

An empirical test is a decision procedure consisting of a modus tollens deduction from a set of one or several universally quantified theory statements, expressible in a nontruth-functional hypothetical-conditional schema proposed for testing, together with an antecedent particularly quantified description of the initial test conditions, which jointly conclude to a consequent particularly quantified description of a produced (predicted) test-outcome event that is compared with the observed test-outcome description.

In order to express explicitly the dependency of the produced effect upon the realized initial conditions in an empirical test, the universally quantified theory statements can be schematized as a nontruth-functional hypothetical-conditional statement, i.e., as a statement with the logical form “For every A if A, then C.”  The hypothetical-conditional statement represents a system of one or several universally quantified related theory statements or equations that describe a dependency of the occurrence of an event described by “C” upon the occurrence of an event described by “A”.  The dependency may be expressed as the range of stochastic boundary limits for the values of predicted probabilities.  For advocates who believe in the theory, the hypothetical-conditional statement is the theory-language context that contributes meaning parts to the complex semantics of the theory’s constituent descriptive terms including the terms common to the theory and test design.  But if the test is to be independent of the theory, the theory’s semantical contribution must not be operative in the test.

The antecedent “A” also includes the set of universally quantified statements of test design that describe the initial conditions that must be realized for execution of an empirical test of the theory together with the description of the procedures needed for their realization.  These statements are always presumed to be true or the test design is rejected as invalid.  They contribute meaning parts to the complex semantics of the terms common to theory and test design, and do so independently of the theory’s semantical contributions.  The universal logical quantification indicates that any execution of the experiment is but one of an indefinitely large number of possible test executions, whether or not the test is repeatable at will.

When the test is executed, the logical quantification of “A” is changed to particular quantification to describe the realized initial conditions in the individual test execution.  When the universally quantified test-design and test-outcome statements have their logical quantification changed to particular quantification, the belief status and thus definitional rôle of the universally quantified test-design statements confer upon their particularly quantified versions the status of “facts” for all who accept the test design.  In a mathematically expressed theory the test execution consists in measurement actions and assignment of the resulting measurement values to the variables in “A”.  In a mathematically expressed single-equation theory, “A” includes the independent variables in the equation of the theory.  In a multi-equation system whether recursively structured or simultaneous, the exogenous variables are assigned values by measurement, and are included in “A”.  In longitudinal models with dated variables the lagged-values of endogenous variables that are the initial condition for a test and that initiate the recursion through successive iterations to generate predictions, must also be included in “A”.

The consequent “C” represents the set of universally quantified statements of the theory that describe the predicted outcome of every correct execution of a test design.  Its logical quantification is changed to particular quantification to describe the predicted outcome in an individual test execution.  In a mathematically expressed single-equation theory, “C” is the dependent variable in the equation of the theory.  When no value is assigned to any variable, the equation is universally quantified. When the prediction value of a dependent variable is calculated from the measurement values of the independent variables, it becomes particularly quantified. In a multi-equation theory, whether recursively structured or a simultaneous-equation system, the solution values for the endogenous variables are included in “C”.  In longitudinal models with dated variables the current-dated values of endogenous variables that are calculated by solving the model through successive iterations are included in “C”.

The conditional statement of theory does not say “For every A and for every C if A, then C”.  It only says “For every A if A, then C”.  In other words the conditional statement of theory expresses only a sufficient condition for the production of the phenomenon described by “C” upon realization of the test conditions described by “A”, and not a necessary condition.  Alternative test designs other than the one described in “A” may also be sufficient to produce “C”.  This may occur, for example, if there are theories proposing alternative causal factors for the same outcome described in “C”, or if there are equivalent measurement procedures or instruments described in “A” that produce alternative measurements falling within the range of their measurement errors, such that the errors are small relative to the predicted values described by “C”.

Let another particularly quantified statement denoted “O” describe the observed test outcome of an individual test execution.  The report of the test outcome “O” shares vocabulary with the prediction statements “C”.  But the semantics of the terms in “O” is determined exclusively by the universally quantified test-design statements rather than by the statements of the theory, and thus for the test its semantics is independent of the theory’s semantical contribution.  In an individual predictive test execution “O” represents observations and/or measurements made and measurement values assigned after the prediction is made, and it too has particular logical quantification to describe the observed outcome resulting from the individual execution of the test.  There are three outcome scenarios:

Scenario I: If “A” is false in an individual test execution, then regardless of the truth of “C” the test execution is simply invalid due to a scientist’s failure to comply with its test design, and the empirical adequacy of the theory remains unaffected and unknown.  The empirical test is conclusive only if it is executed in accordance with its test design.  Contrary to the logical positivists, the truth table for the truth-functional Russellian logic is therefore not applicable to testing in empirical science, because in science a false antecedent “A” does not make the hypothetical-conditional statement true by the logic of the test.

Scenario II: If “A” is true and the consequent “C” is false, as when the theory conclusively makes erroneous predictions, then the theory is falsified, because the hypothetical conditional “For every A if A, then C” is false.  Falsification occurs when the statements “C” and “O” are not accepted as describing the same thing within the range of vagueness and/or measurement error, which are manifestations of empirical underdetermination.  The falsifying logic of the test is the modus tollens argument form, according to which the conditional-hypothetical statement expressing the theory is falsified, when one affirms the antecedent clause and denies the consequent clause.  This is the falsificationist philosophy of scientific criticism advanced by Charles S. Peirce, the founder of pragmatism, and later advocated by Karl Popper.  Readers seeking more on Popper are referred to BOOK V at www.philsci.com.

The response to a falsification may or may not be attempts to develop a new theory.  Responsible scientists will seldom deny a falsifying outcome of a test, if they have accepted its test design and test execution.  Characterization of falsifying anomalous cases is informative, because it contributes to articulation of a new problem that a new and more empirically adequate theory must solve.  Some scientists may, as Kuhn said, simply believe that the anomalous outcome is an unsolved problem for the tested theory without attempting to develop a new theory.  But such a response is either an ipso facto rejection of the tested theory, a de facto rejection of the test design or simply a disengagement from attempts to solve the problem.  And contrary to Kuhn this procrastinating response to anomaly need not imply that the falsified theory has been given institutional status, unless the science itself is institutionally retarded.  Readers seeking more on Kuhn are referred to BOOK VI at www.philsci.com.

Scenario III:  If “A” and “C” are both true, the hypothetical-conditional statement expressing the tested theory is validly accepted as asserting a causal dependency between the phenomena described by the antecedent and consequent clauses.  The hypothetical-conditional statement does not merely assert a Humean constant conjunction.  Causality is an ontological category describing a real dependency, and the causal claim is asserted on the basis of ontological relativity due to the empirical adequacy demonstrated by the nonfalsifying test outcome.  Because the nontruth-functional hypothetical-conditional statement is empirical, causality claims are always subject to future testing, falsification, and then revision. This is also true when the conditional expresses a mathematical function.
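
The three scenarios can be condensed into a small decision procedure.  The Python sketch below merely restates the modus tollens test logic described above; the two Boolean arguments are hypothetical stand-ins for the judgments that the test design was complied with and that the prediction and observation agree within vagueness and measurement error.

```python
# Decision procedure summarizing the three test-outcome scenarios.
# A_realized_per_design: were the initial conditions "A" realized in
# compliance with the test design?  C_matches_O_within_error: do the
# predicted outcome "C" and the observed outcome "O" agree within the
# range of vagueness and measurement error?

def test_outcome(A_realized_per_design, C_matches_O_within_error):
    if not A_realized_per_design:
        # Scenario I: invalid execution; empirical adequacy remains unknown.
        return "invalid test execution: theory neither falsified nor warranted"
    if not C_matches_O_within_error:
        # Scenario II: modus tollens falsification of the theory.
        return "theory falsified"
    # Scenario III: nonfalsifying outcome; theory empirically warranted as a law.
    return "theory not falsified: empirically warranted"

print(test_outcome(True, False))   # a valid execution with an erroneous prediction
```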

Furthermore if the test design is modified such that it changes the characterization of the subject of the theory, then even a nonfalsifying test outcome should be reconsidered and the theory should be retested for the new definition of the subject.  If the retesting produces a falsifying outcome, then the new information in the modification of the test design has made the terms common to the two test designs equivocal and has contributed parts to alternative meanings.  But if the test outcome is not falsification, the new information is merely new parts added to the univocal meaning of the terms common to the old and new test-design language.  Such would be the case if the new information were what the positivists called a new “operational definition”, as for example a new and additional way to measure temperature for extreme values that cannot be measured by the old operation, but which yields the same temperature values within the range of measurement errors, where the alternative operations produce overlapping results.

On the contemporary pragmatist philosophy a theory that has been tested is no longer theory, once the test outcome is known and the test execution is accepted as correct.  If the theory has been falsified, it is merely rejected language.  But if it has been tested with a nonfalsifying test outcome, then it is empirically warranted and thus deemed a scientific law until it is tested again and falsified.  The law is still hypothetical because it is empirical, but it is less hypothetical than it had previously been as a theory proposed for testing.  The law may thereafter be used either in an explanation or in a test design for testing some other theory.

For example the elaborate engineering documentation for the Large Hadron Collider at CERN, the Conseil Européen pour la Recherche Nucléaire, is based on previously tested science.  After installation of the collider is complete, the science in that engineering is not what is tested when the particle accelerator is operated for the microphysical experiments, but rather it is presumed true and contributes to the test design language for experiments performed with the accelerator.


4.16 Test Logic Illustrated

Consider the simple case of Gay-Lussac’s law for a fixed amount of gas in an enclosed container as a theory proposed for testing.  The container’s volume is constant throughout the experimental test, and therefore is not represented by a variable.  The theory is (T'/T)*P = P', where the variable P means gas pressure, the variable T means the gas temperature, and the variables T' and P' are incremented values for T and P in a controlled experimental test, where T' = T ± ΔT, and P' is the predicted outcome that is produced by execution of the test design.

The statement of the theory may be schematized in the hypothetical-conditional form “For every A if A, then C”, where “A” includes (T'/T)*P, and “C” states the calculated prediction value of P', when temperature is incremented by ΔT from T to T'.   The theory is universally quantified, and thus claims to be true for every execution of the experimental test.  And for proponents of the theory, who are believers in the theory, the semantics of T, P, T' and P' are mutually contributing to the semantics of each other, a fact that could be made explicit in this case, because the equation is monotonic such that each variable can be expressed mathematically as a function of all the others by simple algebraic transformations.

“A” also includes the test-design statements.  These statements describe the experimental setup, the procedures for executing the test and the initial conditions to be realized for execution of a test.  They include description of the equipment used including the container, the heat source, the instrumentation used to measure the magnitudes of heat and pressure, and the units of measurement for the magnitudes involved, namely the pressure units in atmospheres and the temperature units in degrees Kelvin (°K).  And they describe the procedure for executing the repeatable experiment.  This test-design language is also universally quantified and thus also contributes meaning components to the semantics of the variables P, T and T' in “A” for all interested scientists who accept the test design.

 The procedure for performing the experiment must be executed as described in the test-design language, in order for the test to be valid. The procedure will include firstly measuring and recording the initial values of T and P.  For example let T be 200°K and P be 1.6 atmospheres. Let the incremented measurement value be recorded as ΔT = 200°K, so that the measurement value for T' is made to be 400°K.  The description of the execution of the procedure and the recorded magnitudes are expressed in particularly quantified test-design language for this particular test execution.  The value of P' is then calculated.

The test outcome consists of measuring and recording the resulting observed incremented value for pressure.  Let this outcome be represented by particularly quantified statement O using the same vocabulary as in the test design.  Only the universally quantified test-design statements define the semantics of O, so that the test is independent of the theory.  In this simple experiment one can simply denote the measured value for pressure by the variable O.  The test execution would also likely be repeated to enable estimation of the range of measurement error in T, T', P and O, and the error propagated into P'.  A mean average of the measurement values from repeated executions would be calculated for each of these variables.  Deviations from the mean are estimates of the amounts of measurement error, and statistical standard deviations could summarize the dispersion of measurement errors about the mean averages.

The mean average of the test-outcome measurements for O is compared to the mean average of the predicted measurements for P' to determine the test outcome.  If the values of P' and O are within the estimated ranges of measurement error, i.e., are sufficiently close to 3.2 atmospheres as to be within the measurement errors, then the theory is deemed not to have been falsified.  After repetitions with more extreme incremented values with no falsifying outcome, the theory will likely be deemed sufficiently warranted empirically to be called a law, as it is today.
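
The arithmetic of this example can be shown in a few lines of Python.  The predicted value is P' = (T'/T)*P = (400°K/200°K)*1.6 atm = 3.2 atm, and the theory is deemed not falsified when the mean of the observed pressures lies within the estimated measurement error of that prediction.  The simulated observation values and the error tolerance below are illustrative assumptions.

```python
# Gay-Lussac test logic: compute the predicted pressure P' from the
# realized initial conditions, then compare it with the mean of the
# observed pressures O from repeated test executions.  The observed
# values and the error tolerance are illustrative assumptions.

T, P = 200.0, 1.6          # initial temperature (°K) and pressure (atm)
delta_T = 200.0            # increment specified by the test design
T_prime = T + delta_T      # realized incremented temperature: 400 °K

P_prime = (T_prime / T) * P          # predicted pressure: 3.2 atm

observed_O = [3.19, 3.22, 3.20, 3.21]      # repeated test-outcome measurements
mean_O = sum(observed_O) / len(observed_O)
tolerance = 0.05                           # assumed measurement-error range (atm)

falsified = abs(mean_O - P_prime) > tolerance
print(f"prediction P' = {P_prime} atm, mean observation O = {mean_O:.3f} atm,"
      f" falsified: {falsified}")
```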


4.17 Semantics of Empirical Testing

Much has already been said about the artifactual character of semantics, about componential semantics, and about semantical rules.  In the semantical discussion that follows these concepts are brought to bear upon the discussion of empirical testing and test outcomes.

If a test has a nonfalsifying outcome, then for the theory’s developer and advocates the semantics of the tested theory is unchanged.  Since they had proposed the theory in the belief that it would not be falsified, their belief in the theory makes it function for them as a set of semantical rules.  Thus for them both the theory and the test design are accepted as true, and after the nonfalsifying test outcome both sets of statements continue to contribute parts to the complex meanings of the descriptive terms common to both theory and test design, as before the test.

But when the test outcome is a falsification, there is a semantical change produced in the theory for the developer and advocates of the tested theory who accept the test outcome as a falsification.  The unchallenged test-design statements continue to contribute semantics to the terms common to the theory and test design by contributing their parts to the meaning complexes of each of the terms common to the theory and test design.  But the component parts of those meanings contributed by the falsified theory statements are excluded from the semantics of those common terms for the proponents who no longer believe in the theory due to the falsifying test outcome.

