INTRODUCTION TO PHILOSOPHY OF SCIENCE

Book I Page 7

4.15 Logic of Empirical Testing

Different sciences often have different surface structures.  But a syntactical transformation of the surface structure of the theory into nontruth-functional conditional logical form is a rational reconstruction that exhibits the deep structure of the theory, explicitly displaying the essential contingency of the test.  The deep structure of the language of an empirical test is:

(1)     an effective decision procedure that can be schematized as a modus tollens logical deduction from a set of one or several universally quantified theory statements expressed in a nontruth-functional hypothetical-conditional schema

(2)     together with a particularly quantified antecedent description of the initial test conditions as defined in the test design

(3)     that jointly conclude to a consequent particularly quantified description of a produced (predicted) test-outcome event

(4)      that is compared with the observed test-outcome description.

In order to express explicitly the dependency of the produced effect upon the realized initial conditions in an empirical test, the universally quantified theory statements can be schematized as a nontruth-functional hypothetical-conditional schema, i.e., as a statement with the logical form “For every A if A, then C.”

This hypothetical-conditional schema “For every A if A, then C” represents a system of one or several universally quantified related theory statements or equations that describe a dependency of the occurrence of events described by “C” upon the occurrence of events described by “A”.  In some cases the dependency is expressed as a bounded stochastic density function for the values of predicted probabilities.  For advocates who believe in the theory, the hypothetical-conditional schema is the theory-language context that contributes meaning parts to the complex semantics of the theory’s constituent descriptive terms including the terms common to the theory and test design.  But the theory’s semantical contribution cannot be operative in a test, if the test is to be independent of the theory; the test outcome is not true by definition but is empirically contingent.

The antecedent “A” also includes the set of universally quantified statements of test design that describe the initial conditions that must be realized for execution of an empirical test of the theory, including the statements describing the procedures needed for their realization.  These statements constituting “A” are always presumed to be true; otherwise the test design is rejected as invalid, as is any test made with it.  The test-design statements are semantical rules that contribute meaning parts to the complex semantics of the terms common to theory and test design, and do so independently of the theory’s semantical contributions.  The universal logical quantification indicates that any execution of the test is but one of an indefinitely large number of possible test executions, whether or not the test is repeatable at will.

When the test is executed, the logical quantification of “A” is changed from universal to particular quantification to describe the realized initial conditions in the individual test execution.  When the universally quantified test-design and test-outcome statements have their logical quantification changed to particular quantification, the belief status and thus the definitional rôle of the universally quantified test-design statements confer upon their particularly quantified versions the status of “fact” for all who decided to accept the test design.  Nietzsche (1844-1900) said that there are no facts; there are only interpretations.  Hickey says that due to relativized semantics with its empirical underdetermination and to ontological relativity with its referential inscrutability, all facts are interpretations of reality.  Failure to recognize the interpreted character of facts is to indulge in what Wilfrid Sellars (1912-1989) in his Science, Perception and Reality (1963) called “the myth of the given”.

The theory statements in the hypothetical-conditional schema are also given particular quantification for the test execution.  In a mathematically expressed theory the test execution consists in measurement actions and assignment of the resulting measurement values to the variables in “A”.  In a mathematically expressed single-equation theory, “A” includes the independent variables in the equation of the theory and the test procedure.  In a multi-equation system, whether recursively structured or simultaneous, all the exogenous variables are assigned values by measurement and are included in “A”.  In longitudinal models with dated variables the lagged values of endogenous variables, which are the initial conditions for a test and which initiate the recursion through successive iterations to generate predictions, must also be in “A”.

The consequent “C” represents the set of universally quantified statements of the theory that predict the outcome of every correct execution of a test design.  The conditional’s logical quantification is changed from universal to particular quantification to describe the predicted outcome for the individual test execution.  In a mathematically expressed single-equation theory the dependent variable of the theory’s equation is in “C”.  When no value is assigned to any variable, the equation is universally quantified. When the predicted value of a dependent variable is calculated from the measurement values of the independent variables, the equation has been particularly quantified. In a multi-equation theory, whether recursively structured or a simultaneous-equation system, the solution values for all the endogenous variables are included in “C”.  In longitudinal models with dated variables the current-dated values of endogenous variables for each iteration of the model, which are calculated by solving the model through successive iterations, are included in “C”.
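
The rôle of “A” and “C” in a multi-equation longitudinal model may be illustrated with a minimal Python sketch.  The two-equation recursive model, its coefficients and its numerical values below are hypothetical assumptions for illustration only; the point is that the measured exogenous values and the lagged endogenous initial condition belong to “A”, while the solved current-dated endogenous values generated through successive iterations belong to “C”.

    # Minimal sketch (hypothetical model and coefficients), not an actual econometric model.
    def iterate_model(x_exog, y_lagged, periods):
        """Recursively solve a toy two-equation longitudinal model.
        x_exog   : exogenous measurements, one per period (part of "A")
        y_lagged : lagged endogenous initial condition (part of "A")
        Returns the current-dated endogenous solution values (the content of "C")."""
        predictions = []
        y_prev = y_lagged
        for t in range(periods):
            y1 = 0.5 * y_prev + 0.3 * x_exog[t]    # first equation of the model
            y2 = 1.2 * y1 - 0.1 * x_exog[t]        # second equation, recursively structured
            predictions.append((y1, y2))
            y_prev = y1                            # the current value becomes the next period's lag
        return predictions

    # The exogenous measurements and the lagged initial condition are assigned by
    # measurement (particular quantification of "A"); the solution values are "C".
    print(iterate_model(x_exog=[2.0, 2.1, 1.9], y_lagged=1.0, periods=3))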

The theory statement need only say “For every A if A, then C”.  The conditional statement expressing a theory need not say “For every A and for every C, if A, then C” (or “For every A, if and only if A, then C”, i.e., “For every A, iff A, then C”), unless the conditional statement is convertible, i.e., a biconditional statement also saying “For every C if C, then A”.  The uniconditional “For every A if A, then C” is definitive of functional relations in mathematically expressed theories.  In other words the conditional statement of theory need only express a sufficient condition for the correct prediction made in “C” upon realization of the test conditions described in “A”, and not a necessary condition.  This occurs if scientific pluralism (See below, Section 4.20) occasions multiple theories proposing alternative causal factors for the same outcome predicted correctly in “C”, or if there are equivalent measurement procedures or instruments described in “A” that produce alternative measurements, each having values falling within the range of the other’s measurement error.

Let another particularly quantified statement denoted “O” describe the observed test outcome of an individual test execution.  The report of the test outcome “O” shares vocabulary with the prediction statements in “C”.  But the semantics of the terms in “O” is determined exclusively by the universally quantified test-design statements rather than by the statements of the theory, and thus for the test its semantics is independent of the theory’s semantical contribution.  In an individual test execution “O” represents observations and/or measurements made and measurement values assigned apart from the prediction in “C”, and it too has particular logical quantification to describe the observed outcome resulting from the individual execution of the test.  There are three possible outcome scenarios:

Scenario I: If “A” is false in an individual test execution, then regardless of the truth of “C” the test execution is simply invalid due to a scientist’s failure to comply with the agreed test design, and the empirical adequacy of the theory remains unaffected and unknown.  The empirical test is conclusive only if it is executed in accordance with its test design.  Contrary to the logical positivists, the truth table for the truth-functional conditional is therefore not applicable to testing in empirical science, because in science a false antecedent “A” does not make the hypothetical-conditional statement true by the logic of the test.

Scenario II: If “A” is true and the consequent “C” is false, as when the theory conclusively makes erroneous predictions, then the theory is falsified, because the hypothetical conditional “For every A if A, then C” is false.  Falsification occurs when the prediction statements in “C” and the observation reports in “O” are not accepted as describing the same thing within the range of vagueness and/or measurement error that are manifestations of empirical underdetermination.  The falsifying logic of the test is the modus tollens argument form, according to which the hypothetical-conditional schema expressing the theory is falsified, when one affirms the antecedent clause and denies the consequent clause.  This is the falsificationist philosophy of scientific criticism initially advanced by Peirce, the founder of classical pragmatism, and later advocated by Popper.  For more on Popper readers are referred to BOOK V at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available in the web site through hyperlinks to Internet booksellers.

The response to a conclusive falsification may or may not be attempts to develop a new theory.  Responsible scientists will not deny a falsifying outcome of a test, so long as they accept its test design and test execution.  Characterization of falsifying anomalous cases is informative, because it contributes to articulation of a new problem that a new and more empirically adequate theory must solve.  Some scientists may, as Kuhn said, simply believe that the anomalous outcome is an unsolved problem for the tested theory without attempting to develop a new theory.  But such a response is either an ipso facto rejection of the tested theory, a de facto rejection of the test design or simply a disengagement from attempts to solve the new problem.  And contrary to Kuhn this procrastinating response to anomaly need not imply that the falsified theory has been given institutional status, unless the science itself is institutionally retarded. 

For more on Kuhn readers are referred to BOOK VI at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available in the web site through hyperlinks to Internet booksellers.

Scenario III:  If “A” and “C” are both true, then the hypothetical-conditional schema expressing the tested theory is validly accepted as asserting a causal dependency between the phenomena described by the antecedent and consequent clauses, even if the conditional statement was merely an assumption.  But the acceptance is not a logically necessary conclusion, because to claim that it is logically necessary is to commit the fallacy of affirming the consequent.  The acceptance is of an empirical and thus falsifiable statement.  Yet the nontruth-functional hypothetical-conditional statement does not merely assert a Humean psychological constant conjunction.  Causality is an ontological category describing a real dependency, and the causal claim is asserted on the basis of ontological relativity due to the empirical adequacy demonstrated by the nonfalsifying test outcome.  Because the nontruth-functional hypothetical-conditional statement is empirical, causality claims are always subject to future testing, falsification, and then revision.  This is also true when the conditional represents a mathematical function.
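
The three scenarios may be summarized as a minimal decision-procedure sketch in Python.  The function, its numerical values and the use of a simple tolerance to stand for the estimated range of measurement error are illustrative assumptions and not part of the analysis above.

    def test_outcome(a_realized, predicted, observed, tolerance):
        """Return the test verdict for one individual test execution.
        a_realized : whether the initial conditions "A" were realized per the test design
        predicted  : the value described by the consequent "C"
        observed   : the value described by the observation report "O"
        tolerance  : stand-in for the estimated range of measurement error"""
        if not a_realized:
            # Scenario I: invalid execution; the theory's empirical adequacy remains unknown.
            return "invalid test execution"
        if abs(predicted - observed) > tolerance:
            # Scenario II: "A" true and "C" false; by modus tollens the theory is falsified.
            return "falsified"
        # Scenario III: "A" and "C" both true; the theory is not falsified, though its
        # acceptance remains empirical and thus subject to future falsification.
        return "not falsified"

    print(test_outcome(True, 10.0, 10.1, 0.3))    # -> not falsified
    print(test_outcome(True, 10.0, 11.2, 0.3))    # -> falsified
    print(test_outcome(False, 10.0, 11.2, 0.3))   # -> invalid test execution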

But if the test design is afterwards modified such that it changes the characterization of the subject of the theory, then a previous nonfalsifying test outcome should be reconsidered and the theory should be retested for the new definition of the subject.  If the retesting produces a falsifying outcome, then the new information in the modification of the test design has made the terms common to the two test designs equivocal and has contributed parts to alternative meanings.  But if the test outcome is not a falsification, then the new information is merely new parts added to the meaning of the univocal terms common to the old and new test-design descriptions.  Such would be the case, for example, for a new and additional way to measure temperature for extreme values that cannot be measured by the old measurement procedure, but which yields the same temperature values, within the range of measurement error, wherever the alternative procedures produce overlapping measurement results.

On the contemporary pragmatist philosophy a theory that has been tested is no longer theory, once the test outcome is known and the test execution is accepted as correct.  If the theory has been falsified, it is merely rejected language unless the falsified theory is still useful for the lesser truth it contains.  But if it has been tested with a nonfalsifying test outcome, then it is empirically warranted and thus deemed a scientific law until it is later tested again and falsified.  The law is still hypothetical because it is empirical, but it is less hypothetical than it had previously been as a theory proposed for testing.  The law may thereafter be used either in an explanation or in a test design for testing some other theory.

For example the elaborate engineering documentation for the Large Hadron Collider at CERN, the Conseil Européen pour la Recherche Nucléaire, is based on previously tested science.  After installation of the collider is complete and it is known to function successfully, the science in that engineering is not what is tested when the particle accelerator is operated for the microphysical experiments, but rather the employed science is presumed true and contributes to the test design semantics for experiments performed with the accelerator.


4.16 Test Logic Illustrated

For theories using a mathematical grammar, the mathematical grammar in the object language is typically the most efficient and convenient way to express the theory and to test it.  But philosophers of science may transform the mathematical forms of expression representing the surface structures into the deep structure consisting of a nontruth-functional conditional form that exhibits explicitly the essential empirical contingency expressed by the theory. 

Consider the simple heuristic case of Gay-Lussac’s law for a fixed amount of gas in an enclosed container as a theory proposed for testing, a case in which the surface structure is the mathematical equation, which can be transformed into the deep structure expressed as a nontruth-functional conditional sentence.  The container’s volume is constant throughout the experimental test, and therefore is not represented by a variable.  The mathematical equation that is the surface structure of the theory is (T'/T)*P = P', where the variable P means gas pressure, the variable T means the gas temperature, and the variables T' and P' are incremented values for T and P in a controlled experimental test, where T' = T ± ΔT, and P' is the predicted outcome that is produced by execution of the test design.  The statement of the theory may be schematized in the nontruth-functional hypothetical-conditional form “For every A if A, then C”, where “A” includes (T'/T)*P, and “C” states the calculated prediction value of P', when temperature is incremented by ΔT from T to T'.   The theory is universally quantified, and thus claims to be true for every execution of the experimental test.  And for proponents of the theory, who are believers in the theory, the semantics of T, P, T' and P' are mutually contributing to the semantics of each other, a fact exhibited explicitly in this case, because the equation is monotonic, such that each variable can be expressed as a mathematical function of all the others by simple algebraic transformations.

“A” also includes the universally quantified test-design statements.  These statements describe the experimental set-up, the procedures for executing the test and the initial conditions to be realized for execution of a test.  They include description of the equipment used including the container, the heat source, the instrumentation used to measure the magnitudes of temperature and pressure, and the units of measurement for the magnitudes involved, namely the pressure units in atmospheres and the temperature units in kelvins (K).  And they describe the procedure for executing the repeatable experiment.  This test-design language is also universally quantified and thus also contributes meaning components to the semantics of the variables P, T and T' in “A” for all interested scientists who accept the test design.

The procedure for performing the experiment must be executed as described in the test-design language, in order for the test to be valid.  The procedure will include firstly measuring and recording the initial values of T and P.  For example let T = 275 K and P = 1.0 atmospheres.  Let the incremented measurement value be recorded as ΔT = 15 K, so that the measurement value for T' is made to be 290 K.  The description of the execution of the procedure and the recorded magnitudes are expressed in particularly quantified test-design language for this particular test execution.  The value of P' is then calculated.

The test outcome consists of measuring and recording the resulting observed incremented value for pressure.  Let this outcome be represented by particularly quantified statement O using the same vocabulary as in the test design.  But only the universally quantified test-design statements define the semantics of O, so that the test is independent of the theory.  In this simple experiment one can simply denote the measured value for the resulting observed pressure by the variable O.  The test execution would also likely be repeated to enable estimation of the range of measurement error in T, T', P and O, and the measurement error propagated into P' by calculation.  A mean average of the measurement values from repeated executions would be calculated for each of these variables.  Deviations from the mean are estimates of the amounts of measurement error, and statistical standard deviations could summarize the dispersion of measurement errors about the mean averages.

The mean average of the test-outcome measurements for O is compared to the mean average of the predicted measurements for P' to determine the test outcome.  If the values of P' and O are equivalent within their estimated ranges of measurement error, i.e., if the observed mean is sufficiently close to the predicted 1.0545 atmospheres as to be within the measurement errors, then the theory is deemed not to have been falsified.  After repetitions with more extreme incremented values with no falsifying outcome, the theory will likely be deemed sufficiently warranted empirically to be called a law, as it is today.
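
The comparison just described may be illustrated with a minimal Python sketch.  The repeated observation values and the two-standard-deviation criterion are hypothetical illustrative assumptions; the sketch merely calculates the predicted P' from the measured T, T' and P and compares it with the mean observed O within the estimated measurement error.

    from statistics import mean, stdev

    T, P = 275.0, 1.0             # initial measured temperature (kelvins) and pressure (atm), part of "A"
    T_prime = T + 15.0            # incremented temperature T' realized per the test design

    P_prime = (T_prime / T) * P   # predicted pressure, the content of "C": about 1.0545 atm

    observed = [1.056, 1.053, 1.055, 1.054]          # hypothetical repeated observations "O"
    O_mean, O_err = mean(observed), stdev(observed)  # mean and dispersion of the observations

    print(f"predicted P' = {P_prime:.4f} atm, observed mean O = {O_mean:.4f} +/- {O_err:.4f} atm")
    # Deemed not falsified if prediction and observation agree within the estimated error.
    print("not falsified" if abs(P_prime - O_mean) <= 2 * O_err else "falsified")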


4.17 Semantics of Empirical Testing

Much has already been said about the artifactual character of semantics, about componential semantics, and about semantical rules.  In the semantical discussion that follows, these concepts are brought to bear upon the discussion of the semantics of empirical testing and of test outcomes.

The ordinary semantics of empirical testing is as follows:

If a test has a nonfalsifying outcome, then for the theory’s developer and its advocates the semantics of the tested theory is unchanged by the test.

Since they had proposed the theory in the belief that it would not be falsified, their belief in the theory makes it function for them as a set of one or several semantical rules.  Thus for them both the theory and the test design continue to be accepted as true, and after the nonfalsifying test outcome both the theory and test-design statements continue to contribute parts to the complex meanings of the descriptive terms common to both theory and test design, as before the test.

But if the test outcome is a falsification, then there is a semantical change produced in the theory for the developer and the advocates of the tested theory who accept the test outcome as a falsification. 

The unchallenged test-design statements continue to contribute semantics to the terms common to the theory and test design by contributing their parts – their semantic values – to the meaning complexes of each of those common terms.  But the component parts of those meanings contributed by the falsified theory statements are excluded from the semantics of those common terms for the proponents who no longer believe in the theory due to the falsifying test, because the falsified theory statements no longer function as semantical rules.  


4.18 Test Design Revision

Empirical tests are conclusive decision procedures only for scientists who agree on which language is proposed theory and which language is presumed test design, and who furthermore accept both the test design and the test-execution outcomes produced with the accepted test design.

The decidability of empirical testing is not absolute.  Popper had recognized that the statements reporting the observed test outcome, which he called “basic statements”, require agreement by the cognizant scientists, and that those basic statements are subject to future reconsideration.

All universally quantified statements are hypothetical, but theory statements are relatively more hypothetical than test-design statements, because the interested scientists agree that in the event of a falsifying test outcome, revision of the theory will likely be more productive than revision of the test design. 

But a dissenting scientist who does not accept a falsifying test outcome of a theory has either rejected the report of the observed test outcome or reconsidered the test design.  If he has rejected the outcome of the individual test execution, he has merely questioned whether or not the test was executed in compliance with its agreed test design.  Independent repetition of the test with more conscientious fidelity to the design may answer such a challenge to the test’s validity one way or the other. 

But if in response to a falsifying test outcome the dissenting scientist has reconsidered the test design itself, he has thereby changed the semantics involved in the test in a fundamental way.  Such reconsideration amounts to rejecting the design as if it were falsified, and letting the theory define the subject of the test and the problem under investigation, a rôle reversal in the pragmatics of test-design language and theory language that makes the original test design and the falsifying test execution irrelevant.

In his “Truth, Rationality, and the Growth of Knowledge” (1961) reprinted in Conjectures and Refutations (1963) Popper rejects such a dissenting response to a test, calling it a “content-decreasing stratagem”.  He admonishes that the fundamental maxim of every critical discussion is that one should “stick to the problem”.  But as James B. Conant (1893-1978) recognized to his dismay in his On Understanding Science: An Historical Approach (1947), the history of science is replete with such prejudicial responses to scientific evidence that have nevertheless been productive and strategic to the advancement of basic science in historically important episodes.  The prejudicially dissenting scientist may decide that the design for the falsifying test supplied an inadequate description of the problem that the tested theory is intended to solve, especially if he developed the theory himself and did not develop the test design.  The semantical change produced for such a recalcitrant believer in the theory affects the meanings of the terms common to the theory and test-design statements.  The parts of the meaning complex that had been contributed by the rejected test-design statements are the parts that are excluded from the semantics of one or several of the descriptive terms common to the theory and test-design statements.  Such a semantical outcome can indeed be said to be “content decreasing”, as Popper said.

But a scientist’s prejudiced or “tenacious” (per Feyerabend) rejection of an apparently falsifying test outcome may have a contributing function in the development of science.  It may function as what Feyerabend called a “detecting device”, a practice he called “counterinduction”, which is a strategy that he illustrated in his examination of Galileo’s arguments for the Copernican cosmology.  Galileo used the apparently falsified heliocentric theory as a “detecting device” by letting his prejudicial belief in the heliocentric theory control the semantics of the apparently falsifying observational description.  This enabled Galileo to reinterpret observations previously described with the equally prejudiced alternative semantics built into the Aristotelian geocentric cosmology. 

Counterinduction was also the strategy used by Heisenberg, when he reinterpreted the observational description of the electron track in the Wilson cloud chamber using Einstein’s aphorism that the theory decides what the physicist can observe; Heisenberg reports that he then developed his indeterminacy relations using his matrix-mechanics quantum concepts.

An historic example of using an apparently falsified theory as a detecting device involves the discovery of the planet Neptune.  In 1821, when Uranus happened to pass Neptune in its orbit – an alignment that had not occurred since 1649 and was not to occur again until 1993 – Alexis Bouvard (1767-1843) developed calculations predicting future positions of the planet Uranus using Newton’s celestial mechanics.  But observations of Uranus showed significant deviations from the predicted positions.

A first possible response would have been to dismiss the deviations as measurement errors and preserve belief in Newton’s celestial mechanics. But the astronomical measurements were repeatable, and the deviations were large enough that they were not dismissed as observational errors.  The deviations were recognized to have presented a new problem.

A second possible response would have been to give Newton’s celestial mechanics the hypothetical status of a theory, to view Newton’s law of gravitation as falsified by the anomalous observations of Uranus, and then to attempt to revise Newtonian celestial mechanics.  But by then confidence in Newtonian celestial mechanics was very high, and no alternative to Newton’s physics had yet been proposed.  Therefore there was great reluctance to reject Newtonian physics.

A third possible response, which was historically taken, was to preserve belief in the Newtonian celestial mechanics, to modify the test-design language by proposing a new auxiliary hypothesis of a gravitationally disturbing planet, and then to reinterpret the observations by supplementing the description of the deviations using the auxiliary hypothesis.  Disturbing phenomena can “contaminate” even supposedly controlled laboratory experiments.  The auxiliary hypothesis changed the semantics of the test-design description with respect to what was observed.

In 1845 both John Couch Adams (1819-1892) in England and Urbain Le Verrier (1811-1877) in France independently using apparently falsified Newtonian physics as a detecting device made calculations of the positions of a disturbing postulated planet to guide future observations in order to detect the postulated disturbing body by telescope.  On 23 September 1846 using Le Verrier’s calculations Johann Galle (1812-1910) observed the postulated planet with the telescope of the Royal Observatory in Berlin. 

Theory is language proposed for testing, and test design is language presumed for testing.  But here the pragmatics of the discourses was reversed.  In this third response the Newtonian gravitation law was not deemed a tested and falsified theory, but rather was presumed to be true and used for a new test design.  The modified test-design language was given the relatively more hypothetical status of theory by the auxiliary hypothesis of the postulated planet thus newly characterizing the observed deviations in the positions of Uranus.  The nonfalsifying test outcome of this new hypothesis was Galle’s observational detection of the postulated planet, which was named Neptune.  This discovery is an example of the theory-elaboration practice with the modified version of the original test design functioning as a new theory.

But counterinduction is after all just a strategy, and it is more an exceptional practice than the routine one.  Le Verrier’s counterinduction strategy failed to explain a deviant motion of the planet Mercury when its orbit comes closest to the sun, a deviation known as its perihelion precession.  In 1859 Le Verrier presumed to postulate a gravitationally disturbing planet that he named Vulcan and predicted its orbital positions.  However unlike Le Verrier, Einstein had given Newton’s celestial mechanics the more hypothetical status of theory language, and he viewed Newton’s law of gravitation as having been falsified by the anomalous perihelion precession.  He had initially attempted a revision of Newtonian celestial mechanics by generalizing on his special theory of relativity.  This first such attempt is known as his Entwurf version, which he developed in 1913 in collaboration with his mathematician friend Marcel Grossmann.  But working in collaboration with his friend Michele Besso he found that the Entwurf version had clearly failed to account accurately for Mercury’s orbital deviations; it yielded only 18 seconds of arc per century instead of the observed 43 seconds.

In 1915 he finally abandoned the Entwurf version, and under prodding from the mathematician David Hilbert (1862-1943) he turned to mathematics exclusively to produce his general theory of relativity.  He then developed his general theory, and announced his correct prediction of the deviations in Mercury’s orbit to the Prussian Academy of Sciences on 18 November 1915.  He received a congratulating letter from Hilbert on “conquering” the perihelion motion of Mercury.  After years of delay due to World War I his general theory was further vindicated by Arthur Eddington’s (1888-1944) historic eclipse test of 1919.  As for Le Verrier’s Vulcan, some astronomers reported that they had observed a transit of a planet across the sun’s disk, but these claims were found to be spurious when larger telescopes were used, and the postulated planet has never been observed.  MIT professor Thomas Levenson (1958) relates the history of the futile search for Vulcan in his The Hunt for Vulcan (2015).

Le Verrier’s response to Uranus’ deviant orbital observations was the opposite to Einstein’s response to the deviant orbital observations of Mercury.  Le Verrier reversed the rôles of theory and test-design language by preserving his belief in Newton’s physics and using it to revise the test-design language with his postulate of a disturbing planet. Einstein viewed Newton’s celestial mechanics to be hypothetical, because he believed that the Newtonian theory statements were more likely to be productively revised than test-design statements, and he took the anomalous orbital observations of Mercury to falsify Newton’s physics, thus indicating that theory revision was needed.  Empirical tests are conclusive decision procedures only for scientists who agree on which language is proposed theory and which is presumed test design, and who furthermore accept both the test design and the test-execution outcomes produced with the accepted test design.

For more about Feyerabend on counterinduction readers are referred to BOOK VI at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available in the web site through hyperlinks to Internet booksellers.

There are also more routine cases of test-design revision that do not occasion counterinduction.  In such cases there is no rôle reversal in the pragmatics of theory and test design, but there may be an equivocating revision in the test-design semantics depending on the test outcome, due to a new observational technique or instrumentality, which may have originated in what Feyerabend called “auxiliary sciences”, e.g., development of a superior microscope or telescope.  If retesting a previously nonfalsified theory with the new test design employing the new observational technique or instrumentality does not produce a falsifying outcome, then the result is merely a refinement that has reduced the empirical underdetermination in the semantics of the test-design language (See below, Section 4.19).  But if the newly accepted test design occasions a falsification, then it has produced a semantical equivocation between the statements of the old and new test designs, and has thereby redefined the subject of the tested theory.


4.19 Empirical Underdetermination

Conceptual vagueness and measurement error are manifestations of empirical underdetermination, which may occasion scientific pluralism.

The empirical underdetermination of language may make an empirical test design incapable of producing a decisive theory-testing outcome.  Two manifestations of empirical underdetermination are conceptual vagueness and measurement error.  All concepts have vagueness that can be reduced indefinitely but can never be eliminated completely.  This is also true of concepts of quantized objects.  Mathematically expressed theories use measurement data that always contain measurement inaccuracy that can be reduced indefinitely but never eliminated completely.

Scientists prefer measurements and mathematically expressed theories, because they can measure the amount of prediction error in the theory, when the theory is tested.  But separating measurement error from a theory’s prediction error can be problematic.  Repeated careful execution of the measurement procedure, if the test is repeatable, enables statistical estimation of the range of measurement error.  But in research using historical time-series data such as in economics, repetition is not typically possible.


4.20 Scientific Pluralism

 Scientific pluralism is recognition of the co-existence of multiple empirically adequate alternative explanations due to undecidability resulting from the empirical underdetermination in a test-design.

All language is always empirically underdetermined by reality.  Empirical underdetermination explains how two or more semantically alternative empirically adequate explanations can have the same test-design.  This means that there are several theories having alternative explanatory factors and yielding accurate predictions that are alternatives to one another, while predicting differences that are small enough to be within the range of the estimated measurement error in the test design.  In such cases empirical underdetermination due to the current test design imposes undecidability on the choice among the alternative explanations.

Econometricians are accustomed to alternative empirically adequate econometric models.  This occurs because measurement errors in aggregate social statistics are often large in comparison to those available in laboratory sciences.  In such cases each social-science model has different equation specifications, i.e., different causal variables in the equations of the model, and makes different predictions for some of the same prediction variables that are accurate within the relatively large range of estimated measurement error.  And discovery systems with empirical test procedures routinely proliferate empirically adequate alternative explanations as output.  They produce what Einstein called “an embarrassment of riches”.  Logically this multiplicity of alternative explanations means that there may be alternative empirically warranted nontruth-functional hypothetical-conditional schemas of the form “For every A if A, then C” having alternative causal antecedents “A” and making different but empirically adequate predictions that are the empirically indistinguishable consequents “C”.
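
A minimal Python sketch can illustrate such undecidability.  The two equation specifications, their coefficients and the measurement-error range are hypothetical assumptions; the point is only that both alternative causal antecedents yield predictions falling within the estimated measurement error of the same observed outcome, so that the test design cannot decide between them.

    measurement_error = 0.5      # estimated range of measurement error in the outcome variable
    observed_outcome = 10.2      # measured value of the commonly predicted variable

    def model_a(x1):             # first specification: causal variable x1 in its antecedent "A"
        return 2.0 * x1 + 1.0

    def model_b(x2):             # alternative specification: causal variable x2 in its antecedent "A"
        return 0.5 * x2 + 3.0

    predictions = {"model A": model_a(4.6), "model B": model_b(14.5)}
    for name, pred in predictions.items():
        adequate = abs(pred - observed_outcome) <= measurement_error
        print(f"{name}: prediction {pred:.2f}, empirically adequate: {adequate}")
    # Both alternatives are empirically adequate: the choice is empirically undecidable.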

Empirical underdetermination is also manifested as conceptual vagueness.  For example to develop his three laws of planetary motion Johannes Kepler (1571-1630), a heliocentrist, used the measurement observations of Mars that had been collected by Tycho Brahe (1546-1601), a type of geocentrist.  Brahe had an awkward geocentric-heliocentric cosmology, in which the fixed earth is the center of the universe, the stars and the sun revolve around the earth, and the other planets revolve around the sun.  Kepler used Brahe’s astronomical measurement data.  There was empirical underdetermination in these measurement data, as in all measurement data.

But measurement error was not the operative empirical underdetermination permitting the alternative cosmologies, because both astronomers used the same data.  In this case increased measurement accuracy might not have eliminated this pluralism.  Kepler was a convinced Copernican placing the sun at the center of the universe.  His belief in the Copernican heliocentric cosmology made the semantic parts contributed by that heliocentric cosmology become for him component parts of the semantics of the language used for celestial observation, thus displacing Brahe’s more complicated combined geocentric-heliocentric cosmology’s semantical contribution.  The manner in which Brahe and Kepler could have different observations is discussed by Hanson in his chapter “Observation” in his Patterns of Discovery.  Hanson states that even if both the geocentric and heliocentric astronomers saw the same dawn, they nonetheless saw differently, because observation depends on the conceptual organization in one’s language.   Hanson uses the “see that….” locution.  Thus Brahe sees that the sun is beginning its journey from horizon to horizon, while Kepler sees that the earth’s horizon is dipping away from our fixed local star.   Einstein said that the theory decides what the physicist can observe; Hanson similarly said that observation is “theory laden”.

Alternative empirically adequate explanations due to empirical underdetermination are all more or less true.  An answer as to which explanation is truer must await further development of additional observational information or measurements that reduce the empirical underdetermination in the test-design concepts.  But there is never any ideal test design with “complete” information, i.e., without vagueness or measurement error.  Recognition of possible undecidability among alternative empirically adequate scientific explanations due to empirical underdetermination occasions what pragmatists call “scientific pluralism”.


4.21 Scientific Truth

Truth and falsehood are spectrum properties of statements, such that the greater the truth, the lesser the error. 

Tested and nonfalsified statements are more empirically adequate, have more realistic ontologies, and are truer than falsified statements.

Falsified statements have recognized error, and may simply be rejected, unless they are found to be still useful for their lesser realism and truth.

The degree of truth in untested statements is unknown until tested.

What is truth?  Truth is a spectrum property of descriptive language with its relativized semantics and ontology.  It is not merely a subjective expression of approval.

Belief and truth are not identical.  Belief is acceptance of a statement as predominantly true.  As Jarrett Leplin (1944) maintains in his Defense of Scientific Realism (1997), truth and falsehood are properties that admit of more or less.  They are not simply dichotomous, as they are represented in two-valued formal logic.  Therefore one may wrongly believe that a predominantly false statement is predominantly true, or wrongly believe that a predominantly true statement is predominantly false.  Belief controls the semantics of the descriptive terms in a universally quantified statement, while truth is the relation of a statement’s semantics together with the ontology it describes to mind-independent nonlinguistic reality.

Test-design language is presumed true with definitional force for the semantics of the test-design language, in order to characterize the subject and procedures of a test.  Theory language in an empirical test may be believed true by the developer and advocates of the theory, but the theory is not true simply by virtue of their belief.  Belief in an untested theory is speculation about a future test outcome.  A nonfalsifying test outcome will warrant belief that the tested theory is as true as the theory’s demonstrated empirical adequacy.  Empirically falsified theories have recognized error, are predominantly false, and may be rejected unless they are found to be still useful for their lesser realism and lesser truth. Tested and nonfalsified statements are more empirically adequate, have ontologies that are more realistic, and thus are truer than empirically falsified statements.

Popper said that Eddington’s historic eclipse test of Einstein’s theory of gravitation in 1919 “falsified” Newton’s theory and thus “corroborated” Einstein’s theory.  Yet the U.S. National Aeronautics and Space Administration (NASA) today still uses Newton’s laws to navigate interplanetary rocket flights such as the Voyager and New Horizons missions.  Thus Newton’s “falsified” theory is not completely false or totally unrealistic, or it could never have been used before or after Einstein.  Popper said that science does not attain truth.  But contemporary pragmatists believe that such an absolutist idea of truth is misconceived.  Advancement in empirical adequacy is advancement in realism and in truth.  Feyerabend said, “Anything goes”.  Regarding ontology Hickey says, “Everything goes”, because while not all discourses are equally valid, there is no semantically interpreted syntax utterly devoid of ontological significance and thus no discourse utterly devoid of truth.  Therefore Hickey adds that the more empirically adequate explanation goes farther – is truer and more realistic – than its less empirically adequate falsified alternatives.

In the latter half of the twentieth century there was a melodramatic melee among academic philosophers of science called the “Science Wars”.  The phrase “Science Wars” appeared in the journal Social Text published by Duke University in 1996.  The issue contained a bogus article by a New York University physicist Alan Sokal.  In a New York Times article (18 May 1996) Sokal disclosed that his purpose was to flatter the editors’ ideological preconceptions, which were social constructionist.  Sokal’s paper was intended to be a debunking exposé of postmodernism.  But since the article was written as a parody instead of a serious scholarly article, it was basically an embarrassment for the editors.

The “Science Wars” conflict involved sociology of science due to the influence of Thomas Kuhn’s Structure of Scientific Revolutions.  On the one side of the conflict were the postmodernists who advocated semantical relativism and social constructivism.  On the other side were philosophers who defended traditional scientific realism and objectivism.  The postmodernists questioned the decidability of scientific criticism, while the traditionalists defended it in the name of reason in the practice of science. 

The “Science Wars” conflict is resolved by the introduction of the concept of ontological relativity, which is both realist and constructivist, and also quite decidable by empirical criticism.  Relativized semantics is perspectivist and it relativizes ontology, but semantics nonetheless reveals reality.  Empirical underdetermination limits the decidability of criticism and occasionally admits scientific pluralism within empirically set limits.  But relativized semantics and constructivism do not abrogate decidability in scientific criticism or preclude scientific progress; they do not deliver science to social capriciousness or to any inherent irrationality.

Empirical science progresses in empirical adequacy, and thereby in realism and in truth.


4.22 Nonempirical Criteria

Confronted with irresolvable scientific pluralism – having several alternative explanations that are tested and not falsified due to empirical underdetermination in the test-design language – philosophers and scientists have proposed various nonempirical criteria that they believe have been operative historically in explanation choice. 

Furthermore a plurality of untested and therefore unfalsified theories may also exist before any testing, so that different scientists may have their preferences for testing one theory over others based on nonempirical criteria. 

Philosophers have proposed a variety of such nonempirical criteria.  Popper advances a criterion that he says enables the scientist to know in advance of any empirical test, whether or not a new theory would be an improvement over existing theories, were the new theory able to pass crucial tests in which its performance is compared with that of the older existing alternatives.  He calls this criterion the “potential satisfactoriness” of the theory, and it is measured by the amount of “information content” in the theory.  This criterion follows from his concept of the aim of science, the thesis that the theory that tells us more is preferable to one that tells us less, because the more informative theory has more “potential falsifiers”.

But the amount of information in a theory is not static; it will likely evolve as the tested and nonfalsified theory is developed by the cognizant scientific profession over time.  And a theory with greater potential satisfactoriness may be empirically inferior, when tested with an improved test design.  Test designs are improved by developing more accurate measurement procedures and/or by adding clarifying descriptive information that reduces the vagueness in the characterization of the subject for testing.  Such test-design improvements refine the characterization of the problem addressed by the theories, and thus reduce empirical underdetermination and improve the decidability of testing.

When empirical underdetermination makes testing undecidable among alternative theories, different scientists may have personal reasons for preferring one or another alternative as an explanation.  In such circumstances selection may be an investment decision for the career scientist rather than an investigative decision.  The choice may be influenced by such circumstances as the cynical realpolitik of peer-reviewed journals.  Knowing what editors and their favorite referees currently want in submissions helps an author get his paper published.  Publication is an academic status symbol with the more prestigious journals yielding more brownie points for accumulating academic tenure, salary and status.

In the January 1978 issue of the Journal of the American Society of Information Science (JASIS) the editor wrote that referees often use the peer review process as a means to attack a point of view and to suppress the content of a submitted paper, i.e., they attempt censorship.  Furthermore editors are not typically entrepreneurial; as “gate guards” they are academia’s risk-aversive rearguard rather than the risk-taking avant-garde.  They select the established “authorities” with reputation-based vested interests in the prevailing traditional views.  These so-called “authorities” cynically suborn the peer-review process by using their conventional views as criteria for criticism and for rejection for publication instead of using empirical criteria.  Such cynical reviewers and editors are effectively hacks that represent the status quo demanding trite papers rather than new, original and empirically superior ideas.  When this conventionality that produces hacks becomes sufficiently pervasive, it becomes normative, which is to say the science has become institutionally corrupted.

In contemporary academic sociology conventionality is accentuated by the conformism that is highly valued by sociological theory, and among sociologists it is further reinforced by their enthusiastic embrace of Kuhn’s conformist sociological thesis of “normal science”.  For example shortly after Kuhn’s Structure of Scientific Revolutions there appeared a new sociological journal, Sociological Methods and Research.  In a statement of policy reprinted in every issue for many years the editor states that the journal is devoted to sociology as a “cumulative” empirical science, and he describes the journal as one that is highly focused on the assessment of the scientific status of sociology.  One of the distinctive characteristics of normal science in Kuhn’s theory is that it is cumulative, such that it can demonstrate progress.  In other words research that does not conform to Kuhnian “normal science” is not progress.  Corruption has thus become more institutionalized and more retarding to development in academic sociology than in the other sciences.

External sociocultural factors have also influenced theory choice.  In his Copernican Revolution: Planetary Astronomy in the Development of Western Thought (1957) Kuhn wrote that the astronomer in the time of Copernicus could not upset the two-sphere universe without overturning physics and religion as well.  He reports that fundamental concepts in the pre-Copernican astronomy had become strands for a much larger fabric of thought, and that the nonastronomical strands in turn bound the thinking of the astronomers.  The Copernican revolution occurred because Copernicus was a dedicated specialist, who valued mathematical and celestial detail more than the values reinforced by the nonastronomical views that were dependent on the prevailing two-sphere theory.  This purely technical focus of Copernicus enabled him to ignore the nonastronomical consequences of his innovation, consequences that would lead his contemporaries of less restricted vision to reject his innovation as absurd.

Citing Kuhn some sociologists of knowledge including notably those advocating the “strong program” maintain that the social and political forces that influence society at large also inevitably influence the content of scientific beliefs.  This is truer in the social sciences, but sociologists who believe that this means empiricism does not control acceptance of scientific beliefs in the long term are mistaken, because it is pragmatic empiricism that enables wartime victories, peacetime prosperity – and in all times business profits – as reactionary politics, delusional ideologies and utopian fantasies cannot.

Persons with different economic views defend and attack certain social/political philosophies, ideologies, special interests and provincial policies, which are nonempirical criteria.  For example in the United States more than eighty years after Keynes, Republican politicians still attack Keynesian economics while Democrat politicians defend it.  Many Republicans are motivated by the right-wing political ideology such as may be found in 1974 Nobel-laureate Friedrich von Hayek’s (1899-1992) Road to Serfdom or in the heroic novels by Ayn Rand (1905-1982).  The prevailing political philosophy among Republicans opposes government intervention in the private economy.  But as Federal Reserve Board of Governors Chairman Ben Bernanke (1953), New York Federal Reserve Bank President Timothy Geithner and U.S. Treasury Secretary Henry Paulson (1946) maintain in their Firefighting: The Financial Crisis and Its Lessons (2019), Adam Smith’s invisible hand of capitalism cannot stop a full-blown financial collapse; only the visible hand of government can do that (P. 5).

The post-World War II era offered no opportunity to witness a liquidity trap, but that changed in the 2007-2009 Great Recession, which thus offered added resolution of the previous empirical underdetermination to improve decidability.  Then pragmatism prevailed over ideology, when expediency dictated.  In his After the Music Stopped (2013) Alan S. Blinder (1948), Princeton University economist and former Vice Chairman of the Federal Reserve Board of Governors, reports that “ultraconservative” Republican President George W. Bush (1946) “let pragmatism trump ideology” (P. 213), when he signed the Economic Stimulus Act of 2008, a distinctively Keynesian fiscal policy of tax cuts, which added $150 billion to the U.S. Federal debt notwithstanding Republicans’ visceral abhorrence of the Federal debt.

In contrast Democrat President Barack Obama (1961) without reluctance and with a Democrat-controlled Congress signed the American Recovery and Reinvestment Act in 2009, a stimulus package that added $787 billion to the Federal debt.  Blinder in “How the Great Recession was Stopped” in Moody’s Analytics (2010) reports that simulations with the Moody’s Analytics large macroeconometric model showed that the effect of President Obama’s stimulus, in contrast to a no-stimulus simulation scenario, was a GDP that was 6 per cent higher with the stimulus than without it, an unemployment rate 3 percentage points lower, and 4.8 million additional Americans employed (P. 209).

Nonetheless as former Federal Reserve Board Chairman Ben Bernanke wrote in his memoir The Courage to Act (2015), President Obama’s 2009 stimulus was small in comparison with its objective of helping to arrest the deepest recession in seventy years in a $15 trillion national economy (P. 388).  Thus Bernanke, a conservative Republican, did not reject Keynesianism, but instead concluded that the recovery was needlessly slow, because the Obama Federal fiscal stimulus program was disproportionately small for the U.S. national macroeconomy.  Empirical underdetermination could have been resolved further had the Federal fiscal stimulus been larger.  Still, pragmatic Republican officials were not willing to permit conservative nonintervention to produce another Great Depression with 25% unemployment rates, although doing so would have made the effectiveness of Federal fiscal stimulus policy more empirically decidable.

There are many other examples of nonempirical criteria that have operated in scientific criticism.  Another example is 1933 Nobel-laureate physicist Paul Dirac (1902-1984) who relied on the aesthetics he found in mathematics for his development of his operator calculus for quantum physics.  But all nonempirical criteria are presumptuous.  No nonempirical criterion enables a scientist to predict reliably which among alternative untested theories or nonfalsified explanations will survive empirical testing, when in due course the degree of empirical underdetermination is reduced by a new and improved test design that enables decidable testing.  To make such anticipatory choices is like betting on a horse before it runs the race.


4.23 The “Best Explanation” Criteria

As previously noted (See above, Section 4.05) Thagard’s cognitive-psychology system ECHO developed specifically for theory selection has identified three nonempirical criteria to maximize achievement of the coherence aim.  His simulations of past episodes in the history of science indicate that the most important criterion is breadth of explanation, followed by simplicity of explanation, and finally analogy with previously accepted theories.  Thagard considers these nonempirical selection criteria as productive of a “best explanation”.

The breadth-of-explanation criterion also suggests Popper’s aim of maximizing information content.  In any case there have been successful theories in the history of science, such as Heisenberg’s matrix mechanics and uncertainty relations, for which none of these three characteristics were operative in the acceptance as explanations.  And as Feyerabend noted in Against Method in criticizing Popper’s view, Aristotelian dynamics is a general theory of change comprising locomotion, qualitative change, generation and corruption, while Galileo and his successors’ dynamics pertains exclusively to locomotion.  Aristotle’s explanations therefore may be said to have greater breadth, but his physics is now deemed to be less empirically adequate.

 Contemporary pragmatists acknowledge only the empirical criterion, the criterion of superior empirical adequacy.  They exclude all nonempirical criteria from the aim of science, because while relevant to persuasion to make theories appear “convincing”, they are irrelevant as evidence of progress.  Nonempirical criteria are like the psychological criteria that trial lawyers use to select and persuade juries in order to win lawsuits in a court of law, but which are irrelevant to courtroom evidence rules for determining the facts of a case.  Such prosecutorial lawyers are like the editors and referees of the peer-reviewed academic literature (sometimes called the “court of science”) who ignore the empirical evidence described in a paper submitted for publication and who reject the paper due to its unconventionality.  Such editors make marketing-based instead of evidence-based publication decisions, and they corrupt the institution of science.

But nonempirical criteria are often operative in the selection of problems to be addressed and explained.  For example the American Economic Association’s Index of Economic Journals indicates that in the years of the lengthy Great Depression the number of journal articles concerning the trade cycle fluctuated in close correlation with the national average unemployment rate with a lag of two years.


4.24 Nonempirical Linguistic Constraints

The constraint imposed upon theorizing by empirical test outcomes is the empirical constraint, the criterion of superior empirical adequacy.  It is the regulating institutionalized cultural value that is definitive of modern empirical science, and it is viewed not as an obstacle to be overcome but as a condition to be respected for the advancement of science.

But there are other kinds of constraints that are nonempirical and are retarding impediments that must be overcome for the advancement of science.  They are internal to science in the sense that they are inherent in the nature of language.  They are the cognition constraint and the communication constraint.


4.25 Cognition Constraint

The semantics of every descriptive term is determined by its linguistic context consisting of universally quantified statements believed to be true. 

Conversely given the conventional meaning for a descriptive term, certain beliefs determining the meaning of the term are reinforced by habitual linguistic fluency with the result that the meaning’s conventionality constrains change in those defining beliefs. 

The conventionalized meanings for descriptive terms therefore produce the cognition constraint.  The cognition constraint is the linguistic impediment that inhibits construction of new theories, and is manifested as lack of imagination, creativity or ingenuity.

In his Concept of the Positron Hanson identified this impediment to discovery and called it the “conceptual constraint”.  He reports that physicists’ identification of the concept of the subatomic particle with the concept of its charge was an impediment to recognizing the positron.  The electron was identified with a negative charge and the much more massive proton was identified with a positive charge, so that the positron as a particle with the mass of an electron and a positive charge was not recognized without difficulty and delay. 

In his Introduction to Metascience Hickey referred to this conceptual constraint as the “cognition constraint”.  Semantical rules are not just explicit rules; they are also strong linguistic habits with subconscious roots that enable prereflective competence and fluency in both thought and speech.  Six-year-old children need not reference explicit grammatical and semantical rules in order to speak competently and fluently.  And these subconscious habits make meaning a synthetic psychological experience.

Given a conventionalized belief or firm conviction expressible as a universally quantified affirmative statement, the predicate in that affirmation contributes meaning parts to the meaning complex of the statement’s subject term.  The conventionalized status of meanings makes development of new theories difficult, because new theory construction requires greater or lesser semantical dissolution and restructuring of the semantics of conventional terms.  Accordingly the more extensive the revision of beliefs, the more constraining are both the semantical restructuring and the psychological conditioning on the creativity of the scientist who would develop a new theory.  Revolutionary theory development requires relatively more extensive semantical dissolution and restructuring, and thus greater psychological adjustment in linguistic habits.

However, use of computerized discovery systems circumvents the cognition constraint, because the machines have no linguistic-psychological habits.  Their mindless electronic execution of mechanized procedures is one of their virtues.

The cognition-constraint thesis is opposed to the neutral-language thesis that language is merely a passive instrument for expressing thought.  Language is not merely passive but rather has a formative influence on thought.  The formative influence of language as the “shaper of meaning” has been recognized in the Sapir-Whorf hypothesis and specifically in Benjamin Lee Whorf’s principle of linguistic relativity set forth in his “Science and Linguistics” (1940), reprinted in Language, Thought and Reality (1956).  But contrary to Whorf it is not the grammatical system that determines semantics, but rather what Quine called the “web of belief”, i.e., the shared belief system as found in a unilingual dictionary.

For more about the linguistic theory of Whorf readers are referred to BOOK VI at the free web site www.philsci.com or to the e-book Twentieth-Century Philosophy of Science: A History, which is available at the web site through hyperlinks to Internet booksellers.


4.26 Communication Constraint

The communication constraint is the linguistic impediment to understanding a theory that is new relative to those currently conventional. 

The communication constraint has the same origins as the cognition constraint.  This impediment is also both cognitive and psychological.  The scientist must cognitively learn the new theory well enough to restructure the composite meaning complexes associated with the descriptive terms common both to the old theory that he is familiar with and to the theory that is new to him.  And this learning involves overcoming psychological habit that enables linguistic fluency that reinforces existing beliefs.

This learning process suggests the conversion experience described by Kuhn in revolutionary transitional episodes, because the new theory must firstly be accepted as true, however provisionally, for its semantics to be understood, since only statements believed to be true can operate as semantical rules that convey understanding.  That is why dictionaries are presumed not to contain falsehoods.  If testing demonstrates the new theory’s superior empirical adequacy, then the new theory’s pragmatic acceptance should eventually make it the established conventional wisdom.

But if the differences between the old and new theories are so great as perhaps to be called revolutionary, then some members of the affected scientific profession may not accomplish the required learning adjustment.  People usually prefer to live in an orderly world, but innovation creates semantic dissolution and consequent psychological disorientation.  In reaction the slow learners and nonlearners become a rearguard that clings to the received conventional wisdom, which is being challenged by the new theory at the frontier of research, where there is much conflict that produces confusion due to semantic dissolution and consequent restructuring of the relevant concepts in the web of belief.

The communication constraint and its effects on scientists have been insightfully described by Heisenberg, who personally witnessed the phenomenon when his quantum theory was firstly advanced.  In his Physics and Philosophy: The Revolution in Modern Science Heisenberg defines a “revolution” in science as a change in thought pattern, which is to say a semantical change, and he states that a change in thought pattern becomes apparent, when words acquire meanings that are different from those they had formerly.  The central question that Heisenberg brings to the phenomenon of revolution in science understood as a change in thought pattern is how the revolution is able to come about.  The occurrence of a scientific revolution is problematic due to resistance to the change in thought pattern presented to the cognizant profession.

Heisenberg notes that as a rule the progress of science proceeds without much resistance or dispute, because the scientist has by training been put in readiness to fill his mind with new ideas.  But he says the case is altered when new phenomena compel changes in the pattern of thought.  Here even the most eminent of physicists find immense difficulties, because a demand for change in thought pattern may create the perception that the ground has been pulled from under one’s feet.  He says that a researcher having achieved great success in his science with a pattern of thinking he has accepted from his young days, cannot be ready to change this pattern simply on the basis of a few novel experiments.  Heisenberg states that once one has observed the desperation with which clever and conciliatory men of science react to the demand for a change in the pattern of thought, one can only be amazed that such revolutions in science have actually been possible at all.

It might be added that since the prevailing conventional view has usually had time to be developed into a more extensive system of ideas, those unable to cope with the semantic dissolution produced by the newly emergent ideas often take refuge in the psychological comforts of coherence and familiarity provided by the more extensive conventional wisdom, which assumes the nature of a dogma and for some scientists an occupational ideology. 

In the meanwhile the developers of the new ideas together with the more opportunistic and typically younger advocates of the new theory, who have been motivated to master the new theory’s language in order to exploit its perceived career promise, assume the avant-garde rôle and become a vanguard.   1970 Nobel-laureate economist Paul Samuelson offers a documented example: He wrote in “Lord Keynes and the General Theory” in Econometrica (1946) that he considers it a priceless advantage to have been an economist before 1936, the publication year of Keynes’ General Theory, and to have received a thorough grounding in classical economics, because his rebellion against Keynes’ General Theory’s pretensions would have been complete save for his uneasy realization that he did not at all understand what it is about.  And he adds that no one else in Cambridge, Massachusetts really knew what it is about for some twelve to eighteen months after its publication.  Years later he wrote in his Keynes’ General Theory: Reports of Three Decades (1964) that Keynes’ theory had caught most economists under the age of thirty-five with the unexpected virulence of a disease first attacking and then decimating an isolated tribe of South Sea islanders, while older economists were the rearguard that was immune.  Samuelson was a member of the Keynesian vanguard.

Note also that, contrary to Kuhn and especially to Feyerabend, the transition, however great, does not involve a complete semantic discontinuity, much less any semantic incommensurability.  And it is unnecessary to learn the new theory as though it were a completely foreign language.  The semantic-incommensurability muddle is resolved by recognition of componential semantics.  For the terms common to the new and old theories, the component parts contributed by the new theory replace those from the old theory, while the parts contributed by the test-design statements remain unaffected.  Thus the test-design language component parts shared by both theories enable characterization of the subject of both theories independently of the distinctive claims of either, and thereby enable decisive testing.  The shared semantics in the test-design language also facilitates learning and understanding the new theory, however radical the new theory may be.
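
The componential point can be suggested with a minimal toy sketch in Python.  The component phrases below are invented placeholders rather than an analysis of any actual term; the sketch merely models a term’s meaning complex as a set of parts contributed by test-design and theory statements.

# Toy model of componential semantics for a descriptive term shared by two theories.
# The component strings are illustrative placeholders only.

test_design_parts = {"measured by procedure P", "observed under initial conditions I"}
old_theory_parts = {"dependent on mechanism M1"}
new_theory_parts = {"dependent on mechanism M2"}

old_meaning = test_design_parts | old_theory_parts   # meaning complex under the old theory
new_meaning = test_design_parts | new_theory_parts   # meaning complex under the new theory

# The theory change replaces only the theory-contributed parts; the test-design
# parts are shared, so both theories characterize a common subject for testing.
shared_parts = old_meaning & new_meaning
assert shared_parts == test_design_parts
print("Components unaffected by the theory change:", shared_parts)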

It may furthermore be noted that the scientist viewing the computerized discovery system output experiences the same communication impediment with the machine output that he would, were the outputted theories developed by a fellow human scientist.  The communication constraint makes new theories developed mechanically grist for Luddites’ mindless rejection.

Fortunately today the Internet and e-book media enable new ideas to circumvent obstructionism by the conventionally-minded peer-reviewed literature.  These new media function as a latter-day Salon des Refusés for both scientists and philosophers of science, who can now easily afford self-publishing with worldwide distribution through the Internet.  Hickey’s communications with sociology journal editors exemplify the retarding effects of the communication constraint in current academic sociology.  See Appendix II in BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at the web site through hyperlinks to Internet booksellers.

The communication constraint is a general linguistic phenomenon that is not limited to the language of science.  It applies to philosophy as well.  Many philosophers of science who received much if not all of their philosophy education before the turbulent 1960’s or whose philosophy education was for whatever reason retarded, are unsympathetic to the reconceptualization of familiar terms such as “theory” and “law” that are central to contemporary pragmatism.  They are dismayed by the semantic dissolution resulting from the rejection of the positivist or romantic beliefs.

In summary both the cognition constraint and the communication constraint are based on the reciprocal relation between semantics and belief, such that given the conventionalized meaning for a descriptive term, certain beliefs determine the meaning of the term, which beliefs are furthermore reinforced by psychological habit that enables linguistic fluency.  The result is that the meaning’s conventionality impedes change in those defining beliefs.

Relativized artifactual semantics cannot supply the perfect congruence in knowledge of reality that the realism of Aristotle and the phenomenalism of the modern positivists had proffered.  But neither does it impose solipsistic isolation, because cognitively apprehended shared reality imposes far more congruence than divergence between the cognitive worlds of different persons, thus enabling routinely effective communication.


4.27 Scientific Explanation

Different sciences have different surface structures in their language of explanation.  But a syntactical transformation of the surface structure of the laws into nontruth-functional conditional logical form is a rational reconstruction that exhibits the deep structure of the explanation explicitly displaying the essential contingency of the universally quantified law language.  Scientific laws are neither historicist nor prophesying nor otherwise unconditional.  The deep structure of an explanation is:

(1)     a discourse that can be schematized as a modus ponens logical deduction from a set of one or several universally quantified law statements expressible in a nontruth-functional hypothetical-conditional schema

(2)     together with a particularly quantified antecedent description of realized initial conditions

(3)     that jointly conclude to a consequent particularly quantified description of the explained event.

Explanation is the ultimate aim of basic science.  There are nonscientific types such as the historical explanation, but history is not a science, although it may use science as in economic history.  But only explanation in basic science is of interest in philosophy of science.  When some course of action is taken in response to an explanation such as a social policy, a medical therapy or an engineered product or structure, the explanation is used as applied science.  Applied science does not occasion a change in an explanation as in basic science, unless there is an unexpected failure in spite of conscientious and competent implementation of the relevant applied laws.

The logical form of the explanation in basic science is the same as that of the empirical test.  The universally quantified statements constituting a system of one or several related scientific laws in an explanation can be schematized as a nontruth-functional conditional statement in the logical form “For every A if A, then C”.  But while the logical form is the same for both testing and explanation, the deductive argument is not the same.

The deductive argument of the explanation is the modus ponens argument instead of the modus tollens logic used for testing.  In the modus tollens argument the conditional statement expressing the proposed theory is falsified, when the antecedent clause is true and the consequent clause is false.  On the other hand in the modus ponens argument for explanation both the antecedent clause describing initial and exogenous conditions and the conditional statements having law status are accepted as true, such that affirmation of the antecedent clause validly concludes to affirmation of the consequent clause describing the explained phenomenon.

Thus the schematic form of an explanation is: “For every A if A, then C” is true; “A” is true; therefore “C” is true (and explained).  The conditional statement “For every A if A, then C” represents a set of one or several related universally quantified law statements applying to all instances of “A”.  When the individual explanation is given, “A” is the set of one or several particularly quantified statements describing the realized initial and exogenous conditions that cause the occurrence of the explained phenomenon as in a test.  And “C” is the set of one or several particularly quantified statements describing the explained individual consequent effect, which whenever possible is a prediction.
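
The two arguments may be displayed in a simplified first-order rendering, which abstracts from the nontruth-functional character of the conditional emphasized above; here “a” names the individual test execution or explained occurrence:

\[
\text{Test (modus tollens):}\quad \forall x\,(A(x) \rightarrow C(x)),\ A(a) \;\vdash\; C(a); \qquad \text{if } \neg C(a) \text{ is observed, then } \neg\,\forall x\,(A(x) \rightarrow C(x)).
\]
\[
\text{Explanation (modus ponens):}\quad \forall x\,(A(x) \rightarrow C(x)),\ A(a) \;\vdash\; C(a), \qquad \text{the explained and, where possible, predicted outcome.}
\]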

In the scientific explanation the statements in the conditional schema express scientific laws accepted as true due to their empirical adequacy as demonstrated by nonfalsifying test outcomes.  These together with the antecedent statements describing the initial conditions in the explanation constitute the explaining language that Popper calls the “explicans”.  And he calls the logically consequent language, which describes the explained phenomenon, the “explicandum”.  Hempel used the terms “explanans” and “explanandum” respectively.  Furthermore it has been said that theories “explain” laws.  Falsified theories do not occur in a scientific explanation.  Scientific explanations consist of laws, which are former theories that have been tested with nonfalsifying test outcomes.  Explanations that employ untested general statements are not scientific explanations, but may be called “folk” explanations.

Since all the universally quantified statements in the nontruth-functional conditional schema of an explanation are laws, the “explaining” of laws is said to mean that a system of logically related laws forms a deductive system partitioned into dichotomous subsets of explaining antecedent axioms and explained consequent theorems.  Logically integrating laws into axiomatic systems confers psychological satisfaction by contributing semantical coherence.  Influenced by Newton’s physics many positivists had believed that producing reductionist axiomatic systems is part of the aim of science.  Logical reductionism was integral to the positivist Vienna Circle’s unity-of-science agenda.  Hanson calls this “catalogue science” as opposed to “research science”.  The logical axiomatizing reductionist fascination is not validated by the history of science.  Great developmental episodes in the history of science such as the development of quantum physics have had the opposite effect of fragmenting science, i.e., classical physics cannot be made a logical extension of quantum mechanics.  But while fragmentation has occasioned the communication constraint and thus provoked opposition to a discovery, it has delayed but not halted the empirical advancement of science in its history.  The only criterion for scientific criticism that is acknowledged by the contemporary pragmatist is the empirical criterion.  Eventually realistic empirical pragmatism prevails.   

However, physical reductionism as opposed to mere axiomatic logical reductionism represents discoveries in science and does more than just add semantical coherence.  Simon and his associates developed discovery systems that produced physical reductions in chemistry.  Three such systems, named STAHL, DALTON and GLAUBER, are described in Simon’s Scientific Discovery.  System STAHL, named after the German chemist Georg Ernst Stahl (1659-1734), was developed by Jan Zytkow.  It creates a type of qualitative law that Simon calls “componential”, because it describes the hidden components of substances.  STAHL replicated the development of both the phlogiston and the oxygen theories of combustion.  System DALTON, named after the chemist John Dalton (1766-1844), creates structural laws in contrast to STAHL, which creates componential laws.  Like the historical Dalton the DALTON system does not invent the atomic theory of matter.  It employs a representation that embodies the hypothesis and incorporates the distinction between atoms and molecules invented earlier by Amedeo Avogadro (1776-1856).

System GLAUBER was developed by Pat Langley in 1983.  It is named after the seventeenth-century chemist Johann Rudolph Glauber (1604-1668), who contributed to the development of the acid-base theory.  Note that the componential description does not invalidate the higher-order description.  Thus the housewife who combines baking soda and vinegar and then observes a reaction yielding a salt residue may validly and realistically describe the vinegar and soda (acid and base) and their observed reaction in the colloquial terms she uses in her kitchen.  The colloquial description is not invalidated by her inability to describe the reaction in terms of the chemical theory of acids and bases.  Both descriptions are semantically significant, and both together realistically describe an ontology.
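
The kind of qualitative class formation attributed to GLAUBER can be suggested with a minimal toy sketch in Python.  The substances, tastes and reactions listed are invented stand-ins, and the code is an illustrative toy, not the GLAUBER program itself.

# Toy sketch of qualitative class formation from reaction facts.
# All data below are invented for illustration.

reactions = [
    ("HCl", "NaOH", "salt"),
    ("HCl", "KOH", "salt"),
    ("HNO3", "NaOH", "salt"),
    ("HNO3", "KOH", "salt"),
]
tastes = {"HCl": "sour", "HNO3": "sour", "NaOH": "bitter", "KOH": "bitter"}

# Form candidate classes from a shared qualitative property.
acids = {s for s, taste in tastes.items() if taste == "sour"}
alkalis = {s for s, taste in tastes.items() if taste == "bitter"}

# Check whether the class-level generalization holds for every observed reaction.
law_holds = all(
    a in acids and b in alkalis and product == "salt"
    for a, b, product in reactions
)
if law_holds:
    print("Qualitative law: acids react with alkalis to yield salts.")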

The difference between logical and physical reductions is illustrated by the neopositivist Ernest Nagel in his distinction between “homogeneous” and “heterogeneous” reductions in his Structure of Science (1961).  The homogeneous reduction illustrates what Hanson called “catalogue science”: it is merely a logical reduction that contributes semantical coherence.  The heterogeneous reduction illustrates what Hanson called “research science”: it involves discovery and new empirical laws, which Nagel calls “correspondence rules” that relate theoretical terms to observation terms.  In the case of the homogeneous reduction, which is merely a logical reduction with some of the laws operating as a set of axioms and the others as a set of conclusions, the semantical effect is merely an exhibition of semantical structure and a decrease in vagueness to increase coherence.  This can be illustrated by the reduction of Kepler’s laws describing the orbital motions of the planet Mars to Newton’s law of gravitation.

However in the case of the heterogeneous reduction there is not only a reduction of vagueness, but also the addition of correspondence rules that are universally quantified falsifiable empirical statements relating descriptive terms in the two laws to one another.  Nagel maintains that the correspondence rules are initially hypotheses that assign additional meaning, but which later become tested and nonfalsified empirical statements.  Nagel illustrates this heterogeneous type by the reduction of thermodynamics to statistical mechanics, in which a temperature measurement value is equated to a measured value of the mean of molecular kinetic energy by a correspondence rule.  Then further development of the law makes it possible to calculate the temperature of the gas in some indirect fashion from experimental data other than the temperature value obtained by actually measuring the temperature of the gas.  Thus the molecular kinetic energy laws empirically explain the thermodynamic laws.  But contrary to Nagel’s positivism the correspondence rules do not relate theoretical terms to observation terms and do not give statistical mechanics any needed observational foundation, because statistical mechanics is already observational.  As Einstein said, “the theory decides what the physicist can observe”.
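
For instance, for an ideal gas the correspondence rule can be written as the identification of the thermodynamic temperature T with the statistical-mechanical mean translational kinetic energy per molecule, where m is the molecular mass, v the molecular speed, k_B Boltzmann’s constant and the angle brackets denote the statistical mean:

\[
\left\langle \tfrac{1}{2}\, m v^{2} \right\rangle \;=\; \tfrac{3}{2}\, k_{B}\, T .
\]

The left-hand side is a statistical-mechanical quantity and the right-hand side a thermodynamic one, so in the spirit of Nagel’s example the equation functions as a universally quantified and falsifiable statement relating the two theories.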

In his “Explanation, Reduction and Empiricism” in Minnesota Studies in the Philosophy of Science (1962) Feyerabend with his wholistic view of the semantics of language dismissed Nagel’s analysis of reductionism.  Feyerabend maintained that the reduction is actually a complete replacement of one theory together with its observational consequences with another theory with its distinctive observational consequences.  But the contemporary pragmatist can analyze the language of reductions by means of the componential semantics thesis applied to both theories and to their shared and consistent test designs.

 

 

 
