INTRODUCTION TO PHILOSOPHY OF SCIENCE

4.18 Test Design Revision

Empirical tests are conclusive decision procedures only for scientists who agree on which language is proposed theory and which language is presumed test design, and who furthermore accept both the test design and the test-execution outcomes produced with the accepted test design.

The decidability of empirical testing is not absolute.  Popper had recognized that the statements reporting the observed test outcome, which he called “basic statements”, require prior agreement by the cognizant scientists, and that those basic statements are subject to future reconsideration.

All universally quantified statements are hypothetical, but theory statements are relatively more hypothetical than test-design statements, because the interested scientists agree that in the event of a falsifying test outcome, revision of the theory will likely be more productive than revision of the test design.

For the scientist who does not accept a falsifying test outcome of a theory, a different semantical change is produced than if he had accepted the test outcome as a falsification.  Such a dissenting scientist has either rejected the report of the observed test outcome or reconsidered the test design.  If he has rejected the outcome of the individual test execution, he has merely questioned whether or not the test was executed in compliance with its agreed test design.  Repetition of the test with careful fidelity to the design may answer such a challenge to the test’s validity one way or the other.

But if in response to a falsifying test outcome the dissenting scientist has reconsidered the test design itself, then he has thereby changed the semantics involved in the test in a fundamental way.  Reconsideration of the test design amounts to rejecting the test design as if it were falsified, and letting the theory define the subject of the test and the problem under investigation – a rôle reversal in the pragmatics of test-design language and theory language.  Then the theory’s semantics characterizes the problem for the dissenter, and the test design is effectively falsified, because it is deemed inadequate, a judgment that makes both the test design and the test execution irrelevant.

If a scientist rejects a test design in response to a falsifying test outcome, he has reversed the pragmatics of the test, having made the theory’s semantics define the subject of the test and the problem under investigation.

Popper rejects such a dissenting response to a test, calling it a “content-decreasing stratagem”.  He admonishes that the fundamental maxim of every critical discussion is that one should “stick to the problem”.  But as James Conant recognized to his dismay in his On Understanding Science: An Historical Approach (1947), the history of science is replete with such prejudicial responses to scientific evidence that have nevertheless been productive and strategic to the advancement of basic science in historically important episodes.  The prejudicially dissenting scientist may decide that the design for the falsifying test supplied an inadequate description of the problem that the tested theory is intended to solve, especially if he developed the theory himself and did not develop the test design.  The semantical change produced for such a recalcitrant believer in the theory affects the meanings of the terms common to the theory and test-design statements.  The parts of the meaning complex that had been contributed by the rejected test-design statements are then the parts excluded from the semantics of one or several of the descriptive terms common to the theory and test-design statements.  Such a semantical outcome for a falsified theory can indeed be said to be “content decreasing”, as Popper said.

But a scientist’s prejudiced or tenacious rejection of an apparently falsifying test outcome may have a contributing function in the development of science.  It may function as what Feyerabend called a “detecting device”, a practice he called “counterinduction”, which is a discovery strategy that he illustrated in his examination of Galileo’s arguments for the Copernican cosmology. Galileo used the apparently falsified heliocentric theory as a “detecting device” by letting his prejudicial belief in the heliocentric theory control the semantics of observational description.  This enabled Galileo to reinterpret observations previously described with the equally prejudiced alternative semantics built into the Aristotelian geocentric cosmology.  Counterinduction was also the strategy used by Heisenberg, when he reinterpreted the observational description of the electron track in the Wilson cloud chamber using Einstein’s thesis that the theory decides what the physicist can observe, and he reports that he then developed his indeterminacy relations using quantum concepts.

Another historic example of using an apparently falsified theory as a detecting device is the discovery of the planet Neptune.  In 1821, when Uranus happened to pass Neptune in its orbit – an alignment that had not occurred since 1649 and was not to occur again until 1993 – Alexis Bouvard (1767-1843) developed calculations predicting future positions of the planet Uranus using Newton’s celestial mechanics.  But the observations of Uranus showed significant deviations from the predicted positions.   

A first possible response would have been to dismiss the deviations as measurement errors and preserve belief in Newton’s celestial mechanics. But astronomical measurements are repeatable, and the deviations were large enough that they were not dismissed as observational errors.  They were recognized to have presented a new problem.

A second possible response would have been to give Newton’s celestial mechanics the hypothetical status of a theory, to view Newton’s law of gravitation as falsified by the anomalous observations of Uranus, and then to attempt to revise Newtonian celestial mechanics.  But by then confidence in Newtonian celestial mechanics was very high, and no alternative to Newton’s physics had yet been proposed.  Therefore there was great reluctance to reject Newtonian physics.

A third possible response, which was historically taken, was to preserve belief in the Newtonian celestial mechanics, to modify the test-design language by proposing a new auxiliary hypothesis of a gravitationally disturbing phenomenon, and then to reinterpret the observations by supplementing the description of the deviations with the auxiliary hypothesis of the disturbing phenomenon.  Disturbing phenomena can “contaminate” even supposedly controlled laboratory experiments.  The auxiliary hypothesis changed the semantics of the test-design description with respect to what was observed.  In 1845 John Couch Adams (1819-1892) in England and Urbain Le Verrier (1811-1877) in France, each independently using the apparently falsified Newtonian physics as a detecting device, calculated the positions of a postulated disturbing planet in order to guide future observations toward detecting the postulated body.  On 23 September 1846, using Le Verrier’s calculations, Johann Galle (1812-1910) observed the postulated planet with the telescope of the Royal Observatory in Berlin.

Theory is language proposed for testing, and test design is language presumed for testing.  But here the pragmatics of the discourses was reversed.  In this third response the Newtonian gravitation law was not deemed a tested and falsified theory, but rather was presumed to be true and used for a new test design.  The test-design language was actually given the relatively more hypothetical status of theory, because the auxiliary hypothesis of the postulated planet newly characterized the observed deviations in the positions of Uranus.  The nonfalsifying test outcome of this new hypothesis was Galle’s observational detection of the postulated planet, which Le Verrier had named Neptune.

But counterinduction is after all just a discovery strategy, and it is more often an exceptional practice than a routine one.  Le Verrier’s counterinduction effort failed to explain a deviant motion of the planet Mercury at the point where its orbit comes closest to the sun, a deviation known as its perihelion precession.  In 1859 Le Verrier presumed to postulate a gravitationally disturbing planet that he named Vulcan and predicted its orbital positions.  However unlike Le Verrier and most physicists at the time, Einstein had given Newton’s celestial mechanics the more hypothetical status of theory language, and he viewed Newton’s law of gravitation as having been falsified by the anomalous perihelion precession.  He had initially attempted a revision of Newtonian celestial mechanics by generalizing on his special theory of relativity.  This first such attempt is known as his Entwurf version, which he developed in 1913 in collaboration with his mathematician friend Marcel Grossmann.  But working in collaboration with his friend Michele Besso he found that the Entwurf version had clearly failed to account accurately for Mercury’s orbital deviations; it yielded only 18 seconds of arc per century instead of the observed 43 seconds.

In 1915 he finally abandoned the Entwurf and, under prodding from the mathematician David Hilbert (1862-1943), turned to mathematics exclusively to produce his general theory of relativity.  He then developed his general theory, and announced his correct prediction of the deviations in Mercury’s orbit to the Prussian Academy of Sciences on 18 November 1915.  He received a congratulatory letter from Hilbert on “conquering” the perihelion motion of Mercury.  After years of delay due to World War I his general theory was further vindicated by Arthur Eddington’s (1888-1944) historic eclipse test of 1919.  Some astronomers had reported observing a transit of a planet across the sun’s disk, but these claims were found to be spurious when larger telescopes were used, and Le Verrier’s postulated planet Vulcan has never been observed.  MIT professor Thomas Levenson relates the history of the futile search for Vulcan in his The Hunt for Vulcan (2015).

Le Verrier’s response to Uranus’ deviant orbital observations was the opposite to Einstein’s response to the deviant orbital observations of Mercury.  Le Verrier reversed the rôles of theory and test-design language by preserving his belief in Newton’s physics and using it to revise the test-design language with his postulate of a disturbing planet. Einstein viewed Newton’s celestial mechanics to be hypothetical, because he believed that the theory statements were more likely to be productively revised than test-design statements, and he took the deviant orbital observations of Mercury to falsify Newton’s physics, thus indicating that theory revision was needed.  Empirical tests are conclusive decision procedures only for scientists who agree on which language is proposed theory and which is presumed test design, and who furthermore accept both the test design and the test-execution outcomes produced with the accepted test design.

Finally there can be cases of test-design revision other than those that occasion counterinduction.  A new observational technique or instrumentality, due in some cases to developments in what Feyerabend called “auxiliary sciences”, may occasion a falsifying test outcome of a theory by reducing the empirical underdetermination in the new test design (see below, Section 4.19), e.g., development of a superior microscope or telescope.  In such a case the newly falsified theory had previously been a law due to earlier empirical testing, and with the new test design it had a falsifying test outcome.  There is no rôle reversal between theory and test design in such a case.  Rather a law simply reverts to its earlier more hypothetical status as a theory due to the new and superior test design.

For more about Feyerabend readers are referred to BOOK VI at www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.


4.19 Empirical Underdetermination

Conceptual vagueness and measurement error are manifestations of empirical underdetermination, which occasion scientific pluralism.

Empirical underdetermination can be reduced indefinitely but never completely eliminated.

The empirical underdetermination of language may make empirical criteria incapable of producing a decisive theory-testing outcome.  Two manifestations of empirical underdetermination are conceptual vagueness and measurement error.  All concepts have vagueness that can be reduced indefinitely but can never be eliminated completely.  Mathematically expressed theories use measurement data that always contain measurement inaccuracy.  Measurement error can be reduced indefinitely but never eliminated completely.

Scientists prefer measurements and mathematically expressed theories, because they can measure the amount of prediction error in the theory, when the theory is tested.  But separating measurement error from a theory’s prediction error can be problematic.  Repeated careful execution of the measurement procedure, if the test is repeatable, enables statistical estimation of the degree or range of measurement error.  But as in economics, repeated measurement is not always possible.
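Where a test is repeatable, the range of measurement error can be estimated statistically from the scatter of repeated measurements.  The following is a minimal sketch in Python (the numerical values are hypothetical and the numpy library is assumed to be available), offered only as an illustration:

import numpy as np

# Hypothetical repeated measurements of one quantity under the same test design.
measurements = np.array([9.81, 9.79, 9.83, 9.80, 9.82, 9.78, 9.81, 9.80])

mean = measurements.mean()                    # best estimate of the measured value
sample_std = measurements.std(ddof=1)         # estimated scatter of a single measurement
standard_error = sample_std / np.sqrt(len(measurements))   # estimated error of the mean

# More repetitions shrink the standard error, but it never reaches zero, which is
# the sense in which measurement error can be reduced indefinitely but never
# eliminated completely.
print(f"estimate = {mean:.3f} +/- {standard_error:.3f}")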

 
4.20 Scientific Pluralism

 Scientific pluralism is recognition of the coexistence of multiple empirically adequate alternative explanations due to undecidability resulting from the empirical underdetermination in test-design language.

All language is always empirically underdetermined by reality.  Empirical underdetermination explains how two or more semantically alternative but empirically adequate theories can have the same test-design language.  This means that there can be several theories with alternative explanatory factors, each yielding accurate predictions, whose differences are small enough to fall within the range of the estimated measurement error.  In such cases empirical underdetermination due to the current test design imposes undecidability on the choice among the alternative explanations.

Econometricians are accustomed to alternative empirically adequate econometric models.  This occurs because measurement errors in aggregate social statistics are typically large in comparison with those in most natural sciences.  Each such model has a different equation specification, i.e., different causal variables in the equations of the model, and makes different forecasts for some of the same prediction variables, forecasts that are accurate within the relatively large range of estimated measurement error.  And discovery systems with empirical test procedures routinely proliferate empirically adequate alternative explanations as output.  They produce what Einstein called “an embarrassment of riches”.  Logically this multiplicity of alternative theories means that there may be alternative empirically warranted nontruth-functional hypothetical-conditional heuristic schemas in the form “For all A if A, then C”, having alternative antecedents “A” and making different but empirically adequate predictions, i.e., empirically indistinguishable consequents “C”.
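As an illustration only (the data, the variable names and the use of Python with numpy are assumptions of the sketch, not anything taken from the text), the following fits two models with different equation specifications to the same series and checks whether both predict within an assumed range of measurement error, in which case the test design cannot decide between them:

import numpy as np

rng = np.random.default_rng(0)
n = 40
income = rng.normal(100.0, 10.0, n)              # hypothetical aggregate series
interest_rate = rng.normal(5.0, 1.0, n)
lagged_consumption = rng.normal(80.0, 8.0, n)

# The "true" process is unknown to the modeler; the noise term stands for
# measurement error in the aggregate statistic being predicted.
consumption = 0.6 * income + 0.15 * lagged_consumption + rng.normal(0.0, 2.0, n)

def fit_and_predict(regressors, y):
    # Ordinary least squares via numpy; returns in-sample predictions.
    X = np.column_stack([np.ones(len(y))] + regressors)   # add an intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

# Model A specifies income and the interest rate as causal variables;
# Model B specifies income and lagged consumption.
pred_a = fit_and_predict([income, interest_rate], consumption)
pred_b = fit_and_predict([income, lagged_consumption], consumption)

error_range = 3.0   # assumed estimated range of measurement error
rmse_a = np.sqrt(np.mean((consumption - pred_a) ** 2))
rmse_b = np.sqrt(np.mean((consumption - pred_b) ** 2))

# If both prediction errors fall within the assumed measurement-error range,
# the current test design leaves the choice between the specifications undecidable.
print(rmse_a, rmse_b, rmse_a < error_range and rmse_b < error_range)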

Empirical underdetermination is also manifested as conceptual vagueness.  For example to develop his three laws of planetary motion Johannes Kepler (1571-1630), a heliocentrist, used the measurement observations of Mars that had been collected by Tycho Brahe (1546-1601), a type of geocentrist.  Brahe had an awkward geocentric-heliocentric cosmology, in which the fixed earth is the center of the universe, the stars and the sun revolve around the earth, and the other planets revolve around the sun.  Kepler used Brahe’s astronomical measurement data, so measurement error clearly was not the operative underdetermination permitting the alternative cosmologies.  Kepler was a convinced Copernican placing the sun at the center of the universe.

Kepler’s belief in the Copernican heliocentric cosmology made the semantic parts contributed by that heliocentric cosmology become for him component parts of the semantics of the language used for celestial observation, thus displacing the semantical contribution of Brahe’s more complicated combined geocentric-heliocentric cosmology.  Then, hypothesizing with the simpler Copernican heliocentrism’s contributions to the observational celestial semantics, he developed his three laws after deciding that the orbit of Mars is elliptical rather than circular.

Alternative empirically adequate theories due to empirical underdetermination are all more or less true.  An answer as to which theory is truer must await further development of additional observational information or measurements that clarify the empirically vague test-design concepts.  There is never any ideal test design with “complete” information, i.e., with no vagueness and no measurement error.  Pragmatist recognition of possible undecidability among alternative empirically adequate scientific explanations due to empirical underdetermination occasions what pragmatists call the thesis of “scientific pluralism”.


4.21 Scientific Truth

Truth and falsehood are spectrum properties of statements, such that the greater the truth, the lesser the error.

Tested and nonfalsified statements are more empirically adequate, have more realistic ontologies, and have more truth than falsified ones. 

Falsified statements have recognized error, and may simply be rejected, unless they are still useful for their lesser realism and lesser truth.

What is truth?  Truth is a spectrum property of descriptive language with its relativized semantics and ontology.  It is not merely a subjective expression of approval.

Belief and truth are not identical.  Belief is acceptance of a statement as true.  But one may wrongly believe that a false statement is true, or wrongly believe that a true statement is false.  Belief controls the semantics of the descriptive terms in universally quantified statements.  Truth is the relation of a statement’s semantics and ontology to mind-independent nonlinguistic reality.  Furthermore as Jarrett Leplin maintains in his A Novel Defense of Scientific Realism (1997), truth and falsehood are spectrum properties of statements, properties that admit of more or less; they are not simply dichotomous, as they are represented in two-valued formal logic.

Test-design language is presumed true with definitional force for the semantics of the test-design language, in order to characterize the subject and procedures of the test.  Theory language in an empirical test may be believed true by the developer and advocates of the theory, but the theory is not true simply by virtue of their belief.  Belief in an untested theory is speculation about a future test outcome.  A nonfalsifying test outcome will warrant belief that the tested theory is as true as the theory’s demonstrated empirical adequacy.  Empirically falsified theories have recognized error, and may be rejected unless they are still useful for their lesser realism and lesser truth. Tested and nonfalsified statements are more empirically adequate, have ontologies that are more realistic, and thus are truer than empirically falsified statements.

Popper said that Eddington’s historic eclipse test of Einstein’s theory of gravitation in 1919 “falsified” Newton’s theory and thus “corroborated” Einstein’s theory.  Yet the U.S. National Aeronautics and Space Administration (NASA) today still uses Newton’s laws to navigate interplanetary rocket flights such as the Voyager missions.  Thus Newton’s “falsified” theory is not completely false or it could never have been used before or after Einstein.  Popper said that science does not attain truth.  But contemporary pragmatists believe that such an absolutist idea of truth is misconceived.  Advancement in empirical adequacy is advancement in realism and in truth.  Feyerabend said, “Anything goes”.  Regarding ontology Hickey says, “Everything goes”, because while not all discourses are equally valid, there is no semantics utterly devoid of ontological significance.  Therefore Hickey adds that the more empirically adequate tested theory goes farther – is truer and more realistic – than its less empirically adequate falsified alternatives.  Empirical science progresses in empirical adequacy, in realism and in truth.


4.22 Nonempirical Criteria

Given the fact of scientific pluralism – of having several alternative explanations that are tested and not falsified due to empirical underdetermination in the test-design language – philosophers and scientists have proposed various nonempirical criteria they believe have been operative historically in explanation choice.  And a plurality of untested and therefore unfalsified theories may also exist before any testing, so that scientists may have preferences for testing one theory over another based on nonempirical criteria.  Philosophers have proposed a variety of such nonempirical criteria.

Popper advances a criterion that he says enables the scientist to know in advance of any empirical test, whether or not a new theory would be an improvement over existing theories, were the new theory able to pass crucial tests, in which its performance is compared to older existing alternatives.  He calls this criterion the “potential satisfactoriness” of the theory, and it is measured by the amount of information content in the theory.  This criterion follows from his concept of the aim of science, the thesis that the theory that tells us more is preferable to one that tells us less, because the theory has more “potential falsifiers”.

But a theory with greater potential satisfactoriness may be empirically inferior, when tested with an improved test design.  Test designs are improved by developing more accurate measurement procedures and/or by adding new descriptive information that reduces the vagueness in the characterization of the subject for testing.  Such test-design improvements refine the characterization of the problem addressed by the theories, and thus reduce empirical underdetermination to improve the decidability of testing.

When empirical underdetermination makes testing undecidable among alternative theories, different scientists may have personal reasons for preferring one or another alternative as an explanation.  In such circumstances selection may be an investment decision for the career scientist rather than an investigative decision.  The choice may be influenced by such circumstances as the cynical realpolitik of peer-reviewed journals.  Knowing what editors and their favorite referees currently want in submissions greatly helps an author get his paper published.  Publication is an academic status symbol with the more prestigious journals yielding more brownie points for accumulating academic tenure, salary and status.

In the January 1978 issue of the Journal of the American Society for Information Science (JASIS) the editor wrote that referees often use the peer review process as a means to attack a point of view and to suppress the content of a submitted paper, i.e., they attempt censorship.  Furthermore editors are not typically entrepreneurial; as gate guards they are the risk-averse rearguard rather than the risk-taking avant-garde.  They select the established “authorities” with reputation-based vested interests in the prevailing traditional views, and these “authorities” suborn the peer-review process by using their preferred views as criteria for criticism and thus for acceptance for publication.  They and their reviewers represent the status quo, demanding more of the same rather than original ideas, with the result that editors select foxes to guard the henhouse, gatekeepers who obstruct innovation rather than serve as its agents.

External sociocultural factors have also influenced theory choice.  In his Copernican Revolution: Planetary Astronomy in the Development of Western Thought (1957) Kuhn wrote that the astronomer in the time of Copernicus could not upset the two-sphere universe without overturning physics and religion as well.  Fundamental concepts in the pre-Copernican astronomy had become strands for a much larger fabric of thought, and the nonastronomical strands in turn bound the thinking of the astronomers.  The Copernican revolution occurred because Copernicus was a dedicated specialist, who valued mathematical and celestial detail more than the values reinforced by the nonastronomical views that were dependent on the prevailing two-sphere theory.  This purely technical focus of Copernicus enabled him to ignore the nonastronomical consequences of his innovation, consequences that would lead his contemporaries of less restricted vision to reject his innovation as absurd.

Later in discussing modern science in his popular Structure of Scientific Revolutions Kuhn does not make the consequences to the nonspecialist an aspect of his general theory of scientific revolutions.  Instead he maintains, as part of his thesis of “normal” science, that a scientist may willfully choose to ignore a falsifying outcome of a decisive test execution.  This choice is not due to the scientist’s specific criticism of either the test design or the test execution, but rather is due to the expectation that the falsified theory will later be improved and corrected.  However any such “correcting” alteration made to a falsified theory amounts to theory elaboration, a discovery strategy that produces a new and different theory.

Similarly sociology and politics operate as criteria today in the social sciences, where defenders and attackers of different economic views are in fact defending and attacking certain social/political philosophies, ideologies, special interests and provincial policies. For example in the United States Republican politicians attack Keynesian economics, while Democrat politicians defend it.  But pragmatism has prevailed over ideology, when expediency dictates, as during the 2007-2009 Great Recession crisis.  Thus in his After the Music Stopped Alan S. Blinder, Princeton University economist and former Vice Chairman of the Federal Reserve Board of Governors, reports that ultraconservative Republican President Bush “let pragmatism trump ideology” (P. 213), when he signed the Economic Stimulus Act of 2008, a distinctively Keynesian fiscal policy, which added $150 billion to the U.S. Federal debt.

In contrast Democrat President Obama without reluctance and with a Democrat-controlled Congress signed the American Recovery and Reinvestment Act in 2009, a stimulus package that added $787 billion to the Federal debt.  Blinder reports that simulations with the Moody’s Analytics large macroeconometric model showed that the effect of the stimulus in contrast to a no-stimulus simulation scenario was a GDP that was 6 per cent higher with the stimulus than without it, an unemployment rate 3 percentage points lower, and 4.8 million additional Americans employed (P. 209).

Nonetheless as former Federal Reserve Board Chairman Ben Bernanke wrote in his memoir The Courage to Act, the stimulus was small in comparison with its objective of helping to arrest the deepest recession in seventy years in a $15 trillion national economy (P. 388).  Thus Bernanke, a conservative Republican, did not reject Keynesianism, but concluded that the recovery was needlessly slow and protracted, because the stimulus program was too small.

Citing Kuhn some sociologists of knowledge including those advocating the “strong program” maintain that the social and political forces that influence society at large also influence scientific beliefs.  This is truer in the social sciences, but sociologists who believe that this means empiricism does not control acceptance of scientific beliefs in the long term are mistaken, because it is pragmatic empiricism that enables wartime victories, peacetime prosperity – and in all times business profits, as reactionary politics, delusional ideologies and utopian fantasies cannot.

All such criteria are presumptuous.  No nonempirical criterion enables a scientist to predict reliably which alternative nonfalsified explanation will survive empirical testing, when in due course the degree of empirical underdetermination is reduced by a new or improved test design that enables decidable testing.  To make such an anticipatory choice is like betting on a horse before it runs the race.

 
4.23 The “Best Explanation” Criteria

As noted above, Thagard’s cognitive-psychology system ECHO, developed specifically for theory selection, has identified three nonempirical criteria that maximize the coherence aim.  His simulations of past episodes in the history of science indicate that the most important criterion is breadth of explanation, followed by simplicity of explanation, and finally analogy with previously accepted theories.  Thagard considers these nonempirical selection criteria as productive of a “best explanation”.

The breadth-of-explanation criterion also suggests Popper’s aim of maximizing information content.  In any case there have been successful theories in the history of science, such as Heisenberg’s matrix mechanics and uncertainty relations, for which none of these three characteristics was operative in their acceptance as explanations.  And as Feyerabend noted in Against Method in criticizing Popper’s view, Aristotelian dynamics is a general theory of change comprising locomotion, qualitative change, generation and corruption, while the dynamics of Galileo and his successors pertains exclusively to locomotion.  Aristotle’s explanations therefore may be said to have greater breadth, but his physics is now known to be less empirically adequate.

 Contemporary pragmatists acknowledge only the empirical criterion, the criterion of superior empirical adequacy.  They exclude all nonempirical criteria from the aim of science, because while relevant to persuasion to make theories appear “convincing”, they are irrelevant as evidence of progress. Nonempirical criteria are like the psychological criteria that trial lawyers use to select and persuade juries in order to win lawsuits in a court of law, but which are irrelevant to courtroom evidence rules for determining the facts of a case.  Such prosecutorial lawyers are like the editors and referees of the peer-reviewed academic literature (sometimes called the “court of science”) who ignore the empirical evidence described in a paper submitted for publication and reject the paper.

But nonempirical criteria are routinely operative in the selection of problems to be addressed and explained.  For example the American Economic Association’s Index of Economic Journals indicates that in the years of the Great Depression the number of journal articles concerning the trade cycle fluctuated in close correlation with the national average unemployment rate with a lag of two years.

 
4.24 Nonempirical Linguistic Constraints

The empirical constraint is the institutionalized value that regulates theory acceptance or rejection.                  

The constraint imposed upon theorizing by empirical test outcomes is the empirical constraint, the criterion of superior empirical adequacy.  It is a regulating institutionalized cultural value definitive of modern empirical science that is not viewed as an obstacle to be overcome, but rather as a condition to be respected for the advancement of science toward its aim.

There are other kinds of constraints that are nonempirical and are retarding impediments that must be overcome for the advancement of science, and that are internal to science in the sense that they are inherent in the nature of language.  They are the cognition constraint and communication constraint.

 
4.25 Cognition Constraint

The semantics of every descriptive term is determined by its linguistic context consisting of universally quantified statements believed to be true. 

Thus the principle of linguistic constraints:

Given the conventionalized meaning for a descriptive term, certain beliefs determining the meaning of the term are reinforced by habitual linguistic fluency with the result that the meaning’s conventionality constrains change in those defining beliefs.

The conventionalized meanings for descriptive terms thus produce the cognition constraint, which inhibits construction of new theories, and is manifested as lack of imagination, creativity or ingenuity.

In his Course in General Linguistics (1916) Ferdinand de Saussure, the founder of semiology, maintained that language is an institution, and that of all social institutions it is the least amenable to initiative.  He called one of the several sources of resistance to linguistic change the “collective inertia toward innovation”.

In his Concept of the Positron (1963) Hanson similarly identified this impediment to discovery and called it the “conceptual constraint”.  He reports that physicists’ identification of the concept of the subatomic particle with the concept of its charge was an impediment to recognizing the positron.  The electron was identified with a negative charge and the much more massive proton was identified with a positive charge, so that the positron as a particle with the mass of an electron and a positive charge was not recognized without difficulty and delay. 

In his Introduction to Metascience (1976) Hickey referred to this conceptual constraint as the “cognition constraint”. The cognition constraint inhibits construction of new theories, and is manifested as lack of imagination, creativity or ingenuity.  Semantical rules are not just rules.  They are also strong linguistic habits with subconscious roots that enable prereflective competence and fluency in both thought and speech.  Six-year-old children need not reference explicit grammatical rules in order to speak grammatically.  And these habits make meaning a synthetic psychological experience.  Given a conventionalized belief or firm conviction expressible as a universally quantified affirmative statement, the predicate in that affirmation contributes meaning part(s) to the meaning complex of the statement’s subject term.  Not only does the conventionalized status of meanings make development of new theories difficult, but also any new theory construction requires greater or lesser semantical dissolution and restructuring.

Accordingly the more revolutionary the revision of beliefs, the more constraining are both the semantical structure and psychological conditioning on the creativity of the scientist who would develop a new theory, because revolutionary theory development requires relatively more extensive semantical dissolution and restructuring.

However, use of computerized discovery systems circumvents the cognition constraint, because the machines have no linguistic-psychological habits. Their mindless electronic execution of mechanized procedures is one of their virtues.

The cognition-constraint thesis is opposed to the neutral-language thesis that language is merely a passive instrument for expressing thought.  Language is not merely passive but rather has a formative influence on thought.  The formative influence of language as the “shaper of meaning” has been recognized as the Sapir-Whorf hypothesis and specifically by Benjamin Lee Whorf’s principle of linguistic relativity set forth in his “Science and Linguistics” (1940) reprinted in Language, Thought and Reality (1956).  But contrary to Whorf it is not just the grammatical system that determines semantics, but rather what Quine called the “web of belief”, the shared belief system as found in a dictionary.

For more about the linguistic theory of Whorf readers are referred to BOOK VI at www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.


4.26 Communication Constraint

The communication constraint is the impediment to understanding a theory that is new relative to those currently accepted.

The communication constraint has the same origins as the cognition constraint.  It is the semantical impediment to understanding a new theory relative to those currently accepted and thus currently conventional.  This impediment is both cognitive and psychological.  The scientist must cognitively learn the new theory well enough to restructure the composite meaning complexes associated with the descriptive terms common both to the old theory that he knows and to the new theory to which he has just been exposed.  And this involves overcoming existing psychological habit that enables linguistic fluency, which reinforces existing beliefs.

This learning process suggests the conversion experience described by Kuhn in revolutionary transitional episodes, because the new theory must firstly be accepted as true however provisionally for its semantics to be understood, since only statements believed to be true can operate as semantical rules that convey understanding.  If testing demonstrates the new theory’s superior empirical adequacy, then the new theory’s pragmatic acceptance should eventually make it the established conventional wisdom.

But if the differences between the old and new theories are very great, some members of the affected scientific profession may not accomplish the required learning adjustment.  People usually prefer to live in an orderly world, but innovation creates semantic disorder and consequent anomie.  In reaction the slow learners and nonlearners become a rearguard that clings to the received conventional wisdom, which is being challenged by the new theory at the frontier of research, where there is much conflict that produces confusion due to semantic dissolution and consequent restructuring of the web of belief.

Since the conventional view has had time to be developed into a more elaborate system of ideas, those unable to cope with the semantic dissolution produced by the newly emergent ideas take refuge in the psychological comfort of coherence provided by the more elaborate conventional wisdom, which assumes the nature of a dogma if not also an ideology.  In the meanwhile the developers of the new ideas together with the more opportunistic and typically younger advocates of the new theory, who have been motivated to master the new theory’s language in order to exploit its perceived career promise, assume the avant-garde rôle and become a vanguard. 

1970 Nobel-laureate economist Paul Samuelson wrote in his Keynes General Theory: Reports of Three Decades (1964) that Keynes’ theory had caught most economists under the age of thirty-five with the unexpected virulence of a disease first attacking and then decimating an isolated tribe of South Sea islanders, while older economists were immune.

Note that contrary to Kuhn and especially to Feyerabend the transition does not involve a complete semantic discontinuity much less any semantic incommensurability.  And it is unnecessary to learn the new theory as though it were a completely foreign language.  For the terms common to the new and old theories, the component parts contributed by the new theory replace those from the old theory, while the parts contributed by the test-design statements remain unaffected.  Thus the test-design language component parts shared by both theories enable characterization of the subject of both theories independently of the distinctive claims of either, and thereby enable decisive testing.  The shared semantics in the test-design language also facilitates learning and understanding the new theory, however radical the new theory may be.

It may also be noted that the scientist viewing the computerized discovery system output experiences the same communication impediment with the machine output that he would, were the outputted theories developed by a fellow human scientist.  New theories developed mechanically are grist for Luddites’ mindless rejection.

In summary both the cognition constraint and the communication constraint are based on the reciprocal relation between semantics and belief, such that given the conventionalized meaning for a descriptive term, certain beliefs determining the meaning of the term are reinforced by psychological habit that enables linguistic fluency.  The result is that the meaning’s conventionality impedes change in those defining beliefs.

The communication constraint is a general linguistic phenomenon and not limited to the language of science.  It applies to philosophy as well.  Thus many philosophers of science who received their education before 1970 or whose education was otherwise retarded are unsympathetic to the reconceptualization of familiar terms such as “theory” and “law” that are central to contemporary pragmatism.  They are dismayed by the semantic dissolution resulting from the rejection of the old positivist beliefs.  For example Hickey remembers hearing a dismayed Notre Dame University professor, uncomprehending in his reaction to the new pragmatism, tell his philosophy-of-science class when contemporary pragmatism was emerging in the 1960s, “Now everything is messy.”

For an example of the communication constraint in sociology see Appendix II in BOOK VIII at www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.  This appendix exemplifies the retarding effects of the communication constraint on current sociology.

 
4.27 Scientific Explanation

A scientific explanation is a discourse consisting of:

(1) a set of one or several related universally quantified law statements

(2) expressible jointly in a nontruth-functional hypothetical-conditional heuristic schema

(3) together with a particularly quantified antecedent description of realized initial conditions,

(4)  which together conclude by modus ponens deduction

(5) to a particularly quantified description of the consequent occurrence of the explained event.         

Explanation is the ultimate aim of basic science.  There are nonscientific types such as the historical explanation, but history is not a science, although it may use science as in economic history.  But only explanation in basic science is of interest in philosophy of science.  When some course of action is taken in response to an explanation such as a social policy, a medical therapy or an engineered product or structure, the explanation is used as applied science.  Applied science does not occasion a change in an explanation as it does in basic science, unless there is a failure in spite of conscientious and competent implementation of the relevant tested laws.

Since a theory in an empirical test is proposed as an explanation, the logical form of the explanation in basic science is the same as that of the empirical test.  The universally quantified statements constituting a system of one or several related scientific laws in an explanation can be schematized as a nontruth-functional hypothetical-conditional statement in the logical form “For every A if A, then C”.  But while the logical form is the same for both testing and explanation, the deductive argument is not the same.

The deductive argument of the explanation is the modus ponens argument instead of the modus tollens logic used for testing.  In the modus tollens argument the hypothetical-conditional statement expressing the proposed theory is falsified, when the antecedent clause is true and the consequent clause is false.  On the other hand in the modus ponens argument for explanation both the antecedent clause describing initial and exogenous conditions and the hypothetical-conditional statements having law status are accepted as true, such that affirmation of the antecedent clause validly concludes to affirmation of the consequent clause describing the explained phenomenon.

Thus the heuristic schematic form of an explanation is: “For every A if A, then C” is true; “A” is true; therefore “C” is true (and explained).  The conditional statement “For every A if A, then C” represents a set of one or several related universally quantified law statements applying to all instances of “A” and to all consequent instances of “C”.  “A” is the set of one or several particularly quantified statements describing the realized initial and exogenous conditions that cause the occurrence of the explained phenomenon, as in a test.  “C” is the set of one or several particularly quantified statements describing the explained individual consequent effect, which whenever possible is a prediction.
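The two argument forms may be set side by side in a minimal formal rendering.  The ordinary predicate-logic notation used here is an assumption of the sketch, and its material conditional only approximates the nontruth-functional conditional intended by the schema:

% Test by modus tollens: the universally quantified conditional is the proposed
% theory under test; realized antecedent conditions together with a contrary
% observed outcome conclude to its falsification.
\[
\frac{A(a) \qquad \lnot C(a)}
     {\lnot\,\forall x\,\big(A(x) \rightarrow C(x)\big)}
\]

% Explanation by modus ponens: the law statements accepted as true together
% with the realized antecedent conditions entail the explained consequent.
\[
\frac{\forall x\,\big(A(x) \rightarrow C(x)\big) \qquad A(a)}
     {C(a)}
\]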

In the explanation the statements in the hypothetical-conditional heuristic schema express scientific laws accepted as true due to their empirical adequacy as demonstrated by nonfalsifying test outcomes. These together with the antecedent statements describing the initial conditions in the explanation constitute the explaining language some call the explanans.  And they call the logically consequent language, which describes the explained consequent phenomenon, the explanandum.

It has also been said that theories “explain” laws.  Neither untested nor falsified theories occur in a scientific explanation.  Scientific explanations consist of laws, which are former theories that have been tested with nonfalsifying test outcomes.  Proposed explanations are merely untested theories.

Since all the universally quantified statements in the nontruth-functional hypothetical-conditional heuristic schema of an explanation are laws, the “explaining” of laws means that a system of logically related laws forms a deductive system partitioned into dichotomous subsets of explaining antecedent axioms and explained consequent theorems. 
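A minimal schematic illustration, not drawn from the text: two laws taken as antecedent axioms can jointly entail a further law as a consequent theorem, here by hypothetical syllogism:

\[
\forall x\,\big(A(x) \rightarrow B(x)\big),\ \ \forall x\,\big(B(x) \rightarrow C(x)\big)
\ \vdash\
\forall x\,\big(A(x) \rightarrow C(x)\big)
\]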

Integrating laws into axiomatic systems confers psychological satisfaction by contributing semantical coherence.  Influenced by Newton’s physics many positivists had believed that producing reductionist axiomatic systems is part of the aim of science.  The belief is integral to the Vienna Circle’s unity-of-science agenda.  And today physicists are strongly motivated to integrate general relativity theory with quantum theory.

But the reductionist fascination is not validated by the history of science.  Great developmental episodes in the history of science have had the opposite effect of fragmenting science.  And while the fragmentation has occasioned the communication constraint and thus opposition to discoveries, it has delayed but not halted the empirical advancement of science in its history.  Eventually empirical pragmatism prevails.

 

