INTRODUCTION TO PHILOSOPHY OF SCIENCE


4.19 Empirical Underdetermination

Conceptual vagueness and measurement error are manifestations of empirical underdetermination, which may occasion scientific pluralism.

The empirical underdetermination of language may make empirical criteria incapable of producing a decisive theory-testing outcome. Two manifestations of empirical underdetermination are conceptual vagueness and measurement error. All concepts have vagueness that can be reduced indefinitely but can never be eliminated completely. Mathematically expressed theories use measurement data that always contain measurement inaccuracy that can be reduced indefinitely but never eliminated completely.

Scientists prefer measurements and mathematically expressed theories, because they can then measure the amount of prediction error in the theory when it is tested. But separating measurement error from a theory's prediction error can be problematic. Repeated careful execution of the measurement procedure, if the test is repeatable, enables statistical estimation of the range of measurement error. But in research using historical time-series data, as in economics, repetition is impossible.
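
The separation of measurement error from prediction error can be shown schematically. The following is a minimal Python sketch with hypothetical numbers (the measurement values, the predicted value and the two-standard-error band are illustrative assumptions, not figures from any actual test):

    import statistics

    # Hypothetical repeated measurements of the same quantity by the same procedure
    measurements = [9.78, 9.82, 9.80, 9.79, 9.83, 9.81]

    mean_value = statistics.mean(measurements)
    std_error = statistics.stdev(measurements) / len(measurements) ** 0.5

    # A conventional +/- 2 standard-error band serves as the estimated measurement-error range
    error_band = 2 * std_error

    predicted_value = 9.75  # hypothetical prediction from the tested theory
    prediction_error = abs(predicted_value - mean_value)

    if prediction_error <= error_band:
        print("Prediction error lies within the estimated measurement error; the test is not decisive.")
    else:
        print("Prediction error exceeds the estimated measurement error; the discrepancy is attributable to the theory.")

When the test is not repeatable, as with historical time-series data, no such error band can be estimated from repeated execution, and the separation must rest on other assumptions.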


4.20 Scientific Pluralism

Scientific pluralism is recognition of the coexistence of multiple empirically adequate alternative explanations due to undecidability resulting from the empirical underdetermination in a test-design.

All language is always empirically underdetermined by reality. Empirical underdetermination explains how two or more semantically alternative but empirically adequate explanations can have the same test-design. This means that there may be several theories with alternative explanatory factors, each yielding accurate predictions, whose differences are small enough to fall within the range of the estimated measurement error in the test design. In such cases empirical underdetermination due to the current test design imposes undecidability on the choice among the alternative explanations.
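
This undecidability can be shown schematically. The following is a minimal Python sketch with hypothetical numbers (the observed value, the error band and the two model predictions are illustrative assumptions):

    observed_value = 102.0        # measured outcome of the test execution
    measurement_error_band = 3.0  # estimated range of measurement error in the test design

    prediction_model_a = 101.2    # prediction from one proposed explanation
    prediction_model_b = 103.5    # prediction from a semantically alternative explanation

    def empirically_adequate(prediction):
        # A prediction is adequate if its error is within the estimated measurement error
        return abs(prediction - observed_value) <= measurement_error_band

    adequate = [p for p in (prediction_model_a, prediction_model_b) if empirically_adequate(p)]
    if len(adequate) > 1:
        print("Both explanations are empirically adequate; the test design cannot decide between them.")

Only an improved test design, with a smaller measurement-error band or added descriptive information, would make the choice decidable.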

Econometricians are accustomed to alternative empirically adequate econometric models. This occurs because measurement errors in aggregate social statistics are typically large in comparison with those in most natural sciences. Each such model has different equation specifications, i.e., different causal variables in the equations of the model, and makes different predictions for some of the same prediction variables that are accurate within the relatively large range of estimated measurement error. And discovery systems with empirical test procedures routinely proliferate empirically adequate alternative explanations as output. They produce what Einstein called "an embarrassment of riches". Logically this multiplicity of alternative explanations means that there may be alternative empirically warranted nontruth-functional hypothetical conditional schemas in the form "For every A if A, then C" having alternative causal antecedents "A" and making different but empirically adequate predictions that are the empirically indistinguishable consequents "C".

Empirical underdetermination is also manifested as conceptual vagueness. For example, to develop his three laws of planetary motion Johannes Kepler (1571-1630), a heliocentrist, used the measurement observations of Mars that had been collected by Tycho Brahe (1546-1601), a type of geocentrist. Brahe had an awkward geocentric-heliocentric cosmology, in which the fixed earth is the center of the universe, the stars and the sun revolve around the earth, and the other planets revolve around the sun. Kepler used Brahe's astronomical measurement data. There was empirical underdetermination in these measurement data, as in all measurement data.

Measurement error was not the operative empirical underdetermination permitting the alternative cosmologies, because both astronomers used the same data. But Kepler was a convinced Copernican placing the sun at the center of the universe. His belief in the Copernican heliocentric cosmology made the semantic parts contributed by that cosmology become for him component parts of the semantics of the language used for celestial observation, thereby displacing the semantical contribution of Brahe's more complicated combined geocentric-heliocentric cosmology. The manner in which Brahe and Kepler could have different observations is discussed by Hanson in the chapter "Observation" in his Patterns of Discovery. Hanson states that even if both astronomers saw the same dawn, they nonetheless saw differently, because observation depends on the conceptual organization in one's prior knowledge and language. Hanson uses the "see that…" locution. Thus Brahe sees that the sun is beginning its journey from horizon to horizon, while Kepler sees that the earth's horizon is dipping away from our fixed local star. Einstein said that the theory decides what the physicist can observe; Hanson similarly said that observation is "theory laden".

Alternative empirically adequate explanations due to empirical underdetermination are all more or less true. An answer as to which explanation is truer must await further development of additional observational information or measurements that reduce the empirical underdetermination in the test-design concepts. But there is never any ideal test design with "complete" information, i.e., without vagueness or measurement error. Recognition of possible undecidability among alternative empirically adequate scientific explanations due to empirical underdetermination occasions what pragmatists call "scientific pluralism".


4.21 Scientific Truth

Truth and falsehood are spectrum properties of statements, such that the greater the truth, the lesser the error. 

Tested and nonfalsified statements are more empirically adequate, have more realistic ontologies, and are truer than falsified statements.

Falsified statements have recognized error, and may simply be rejected, unless they are still useful for their lesser realism and truth.

What is truth? Truth is a spectrum property of descriptive language with its relativized semantics and ontology. It is not merely a subjective expression of approval.

Belief and truth are not identical. Belief is acceptance of a statement as true. But one may wrongly believe that a false statement is true, or wrongly believe that a true statement is false. Belief controls the semantics of the descriptive terms in a universally quantified statement. Truth is the relation of a statement's semantics and ontology to mind-independent nonlinguistic reality. Furthermore, as Jarrett Leplin maintains in his Defense of Scientific Realism (1997), truth and falsehood are properties that admit of more or less; they are not simply dichotomous, as they are represented in two-valued formal logic.

Test-design language is presumed true with definitional force for the semantics of the test-design language, in order to characterize the subject and procedures of a test. Theory language in an empirical test may be believed true by the developer and advocates of the theory, but the theory is not true simply by virtue of belief. Belief in an untested theory is speculation about a future test outcome. A nonfalsifying test outcome will warrant belief that the tested theory is as true as the theory’s demonstrated empirical adequacy. Empirically falsified theories have recognized error, and may be rejected unless they are still useful for their lesser realism and lesser truth. Tested and nonfalsified statements are more empirically adequate, have ontologies that are more realistic, and thus are truer than empirically falsified statements.

Popper said that Eddington’s historic eclipse test of Einstein’s theory of gravitation in 1919 “falsified” Newton’s theory and thus “corroborated” Einstein’s theory. Yet the U.S. National Aeronautics and Space Administration (NASA) today still uses Newton’s laws to navigate interplanetary rocket flights such as the Voyager and New Horizons missions. Thus Newton’s “falsified” theory is not completely false or totally unrealistic, or it could never have been used before or after Einstein. Popper said that science does not attain truth. But contemporary pragmatists believe that such an absolutist idea of truth is misconceived. Advancement in empirical adequacy is advancement in realism and in truth. Feyerabend said, “Anything goes”. Regarding ontology Hickey says, “Everything goes”, because while not all discourses are equally valid, there is no semantics utterly devoid of truth and ontological significance. Therefore Hickey adds that the more empirically adequate explanation goes farther – is truer and more realistic – than its less empirically adequate falsified alternatives. Empirical science progresses in empirical adequacy, in realism and in truth.


4.22 Nonempirical Criteria

Confronted with unresolvable scientific pluralism – having several alternative explanations that are tested and not falsified due to empirical underdetermination in the test-design language – philosophers and scientists have proposed various nonempirical criteria that they believe have been operative historically in explanation choice. 

And a plurality of untested and therefore unfalsified theories may also exist before any testing, so that different scientists may have their preferences for testing one theory over another based on nonempirical criteria. 

Philosophers have proposed a variety of such nonempirical criteria. Popper advances a criterion that he says enables the scientist to know in advance of any empirical test whether or not a new theory would be an improvement over existing theories, were the new theory able to pass crucial tests, in which its performance is comparable to older existing alternatives. He calls this criterion the “potential satisfactoriness” of the theory, and it is measured by the amount of “information content” in the theory. This criterion follows from his concept of the aim of science, the thesis that the theory that tells us more is preferable to one that tells us less, because the more informative theory has more “potential falsifiers”.

But a theory with greater potential satisfactoriness may be empirically inferior, when tested with an improved test design. Test designs are improved by developing more accurate measurement procedures and/or by adding new descriptive information that reduces the vagueness in the characterization of the subject for testing. Such test-design improvements refine the characterization of the problem addressed by the theories, and thus reduce empirical underdetermination to improve the decidability of testing.

When empirical underdetermination makes testing undecidable among alternative theories, different scientists may have personal reasons for preferring one or another alternative as an explanation. In such circumstances selection may be an investment decision for the career scientist rather than an investigative decision. The choice may be influenced by such circumstances as the cynical realpolitik of peer-reviewed journals. Knowing what editors and their favorite referees currently want in submissions helps an author get his paper published. Publication is an academic status symbol, with the more prestigious journals yielding more brownie points for accumulating academic tenure, salary and status.

In the January 1978 issue of the Journal of the American Society for Information Science (JASIS) the editor wrote that referees often use the peer-review process as a means to attack a point of view and to suppress the content of a submitted paper, i.e., they attempt censorship. Furthermore editors are not typically entrepreneurial; as “gate guards” they are academia’s risk-averse rearguard rather than its risk-taking avant-garde. They select the established “authorities” with reputation-based vested interests in the prevailing traditional views. These so-called authorities suborn the peer-review process by using their conventional views as criteria for criticism and for acceptance for publication instead of empirical criteria. Such reviewers and editors are effectively hacks representing the status quo, demanding trite papers rather than original and empirically superior ideas. When this hack-producing conventionality becomes sufficiently pervasive, it becomes normative. In contemporary academic sociology conventionality is accentuated by the conformism that is highly valued by sociological theory, and it is further reinforced by sociologists’ enthusiastic embrace of Kuhn’s conformist sociological thesis of “normal science”.

External sociocultural factors have also influenced theory choice. In his Copernican Revolution: Planetary Astronomy in the Development of Western Thought (1957) Kuhn wrote that the astronomer in the time of Copernicus could not upset the two-sphere universe without overturning physics and religion as well. He reports that fundamental concepts in the pre-Copernican astronomy had become strands for a much larger fabric of thought, and the nonastronomical strands in turn bound the thinking of the astronomers. The Copernican revolution occurred because Copernicus was a dedicated specialist, who valued mathematical and celestial detail more than the values reinforced by the nonastronomical views that were dependent on the prevailing two-sphere theory.  This purely technical focus of Copernicus enabled him to ignore the nonastronomical consequences of his innovation, consequences that would lead his contemporaries of less restricted vision to reject his innovation as absurd.

Later in discussing modern science in his popular Structure of Scientific Revolutions Kuhn does not make the consequences for the nonspecialist an aspect of his general theory of scientific revolutions. Instead he maintains, as part of his thesis of “normal” science, that a scientist may willfully choose to ignore a falsifying outcome of a decisive test execution. This choice is not due to the scientist’s effective criticism of either the test design or the test execution, but rather is due to the expectation that the falsified theory will later be improved and corrected. However any such “correcting” alteration made to a falsified theory amounts to a discovery strategy that Hickey calls “theory elaboration”, which produces a new and different theory.

Citing Kuhn some sociologists of knowledge including those advocating the “strong program” maintain that the social and political forces that influence society at large also inevitably influence scientific beliefs.  This is truer in the social sciences, but sociologists who believe that this means empiricism does not control acceptance of scientific beliefs in the long term are mistaken, because it is pragmatic empiricism that enables wartime victories, peacetime prosperity – and in all times business profits, as reactionary politics, delusional ideologies and utopian fantasies cannot.

Persons with different economic views defend and attack certain social/political philosophies, ideologies, special interests and provincial policies.  For example in the United States more than eighty years after Keynes, Republican politicians still attack Keynesian economics while Democrat politicians defend it.  Yet pragmatism has prevailed over ideology, when expediency dictates, as happened during the 2007-2009 Great Recession crisis. Thus in his After the Music Stopped Alan S. Blinder, Princeton University economist and former Vice Chairman of the Federal Reserve Board of Governors, reports that “ultraconservative” Republican President George W. Bush “let pragmatism trump ideology” (P. 213), when he signed the Economic Stimulus Act of 2008, a distinctively Keynesian fiscal policy of tax cuts, which added $150 billion to the U.S. Federal debt.

In contrast Democrat President Barack Obama without reluctance and with a Democrat-controlled Congress signed the American Recovery and Reinvestment Act in 2009, a stimulus package that added $787 billion to the Federal debt. In their book Firefighting: The Financial Crisis and its Lessons (2019) Bernanke, Geithner and Paulson report that Obama’s fiscal stimulus was the largest in the nation’s history (P. 97). Blinder reports that simulations with the Moody’s Analytics large macroeconometric model showed that the effect of Obama’s stimulus in contrast to a no-stimulus simulation scenario was a GDP that was 6 percent higher with the stimulus than without it, an unemployment rate 3 percentage points lower, and 4.8 million additional Americans employed (P. 209).

Nonetheless as former Federal Reserve Board Chairman Ben Bernanke wrote in his memoir The Courage to Act, the 2009 stimulus was small in comparison with its objective of helping to arrest the deepest recession in seventy years in a $15 trillion national economy (P. 388).  Thus Bernanke, a conservative Republican, did not reject Keynesianism, but concluded that the recovery was needlessly slow, because the Federal fiscal stimulus program was disproportionately small for the U.S. national macroeconomy.

All nonempirical criteria are presumptuous.  No nonempirical criterion enables a scientist to predict reliably which alternative nonfalsified explanation will survive empirical testing, when in due course the degree of empirical underdetermination is reduced by a new and improved test design that enables decidable testing. 

To make such an anticipatory choice is like betting on a horse before it runs the race.


4.23 The “Best Explanation” Criteria

As previously noted (See above, Section 4.05) Thagard’s cognitive-psychology system ECHO developed specifically for theory selection has identified three nonempirical criteria to maximize achievement of the coherence aim.  His simulations of past episodes in the history of science indicate that the most important criterion is breadth of explanation, followed by simplicity of explanation, and finally analogy with previously accepted theories.  Thagard considers these nonempirical selection criteria as productive of a “best explanation”.

The breadth-of-explanation criterion also suggests Popper’s aim of maximizing information content. In any case there have been successful theories in the history of science, such as Heisenberg’s matrix mechanics and uncertainty relations, for which none of these three characteristics was operative in their acceptance as explanations. And as Feyerabend noted in Against Method in criticizing Popper’s view, Aristotelian dynamics is a general theory of change comprising locomotion, qualitative change, generation and corruption, while the dynamics of Galileo and his successors pertains exclusively to locomotion. Aristotle’s explanations therefore may be said to have greater breadth, but his physics is now deemed to be less empirically adequate.

Contemporary pragmatists acknowledge only the empirical criterion, the criterion of superior empirical adequacy. They exclude all nonempirical criteria from the aim of science, because while relevant to persuasion to make theories appear “convincing”, they are irrelevant as evidence of progress. Nonempirical criteria are like the psychological criteria that trial lawyers use to select and persuade juries in order to win lawsuits in a court of law, but which are irrelevant to courtroom evidence rules for determining the facts of a case. Such prosecutorial lawyers are like the editors and referees of the peer-reviewed academic literature (sometimes called the “court of science”) who ignore the empirical evidence described in a paper submitted for publication and who reject the paper due to its unconventionality. Such editors make marketing-based promotional decisions instead of evidence-based publication decisions.

But nonempirical criteria are often operative in the selection of problems to be addressed and explained. For example the American Economic Association’s Index of Economic Journals indicates that in the years of the Great Depression the number of journal articles concerning the trade cycle fluctuated in close correlation with the national average unemployment rate with a lag of two years.


4.24 Nonempirical Linguistic Constraints

The constraint imposed upon theorizing by empirical test outcomes is the empirical constraint, the criterion of superior empirical adequacy.  It is the regulating institutionalized cultural value definitive of modern empirical science that is not viewed as an obstacle to be overcome, but rather as a condition to be respected for the advancement of science.

But there are other kinds of constraints that are nonempirical and are retarding impediments that must be overcome for the advancement of science, and they are internal to science in the sense that they are inherent in the nature of language.  They are the cognition constraint and communication constraint.


4.25 Cognition Constraint

The semantics of every descriptive term is determined by its linguistic context consisting of universally quantified statements believed to be true. 

Conversely given the conventional meaning for a descriptive term, certain beliefs determining the meaning of the term are reinforced by habitual linguistic fluency with the result that the meaning’s conventionality constrains change in those defining beliefs.   

The conventionalized meanings for descriptive terms produce the cognition constraint.  The cognition constraint is the linguistic impediment that inhibits construction of new theories, and is manifested as lack of imagination, creativity or ingenuity.

In his Concept of the Positron Hanson identified this impediment to discovery and called it the “conceptual constraint”. He reports that physicists’ identification of the concept of the subatomic particle with the concept of its charge was an impediment to recognizing the positron. The electron was identified with a negative charge and the much more massive proton was identified with a positive charge, so that the positron as a particle with the mass of an electron and a positive charge was not recognized without difficulty and delay. 

In his Introduction to Metascience Hickey referred to this conceptual constraint as the “cognition constraint”. The cognition constraint inhibits construction of new theories, and is manifested as lack of imagination, creativity or ingenuity. Semantical rules are not just explicit rules; they are also strong linguistic habits with subconscious roots that enable prereflective competence and fluency in both thought and speech. Six-year-old children need not reference explicit grammatical and semantical rules in order to speak competently and fluently.  And these subconscious habits make meaning a synthetic psychological experience.

Given a conventionalized belief or firm conviction expressible as a universally quantified affirmative statement, the predicate in that affirmation contributes meaning parts to the meaning complex of the statement’s subject term. Not only does the conventionalized status of meanings make development of new theories difficult, but also any new theory construction requires greater or lesser semantical dissolution and restructuring. Accordingly the more extensive the revision of beliefs, the more constraining are both the semantical restructuring and the psychological conditioning on the creativity of the scientist who would develop a new theory.  Revolutionary theory development requires both relatively more extensive semantical dissolution and restructuring and thus greater psychological adjustment in linguistic habits. 

However, use of computerized discovery systems circumvents the cognition constraint, because the machines have no linguistic-psychological habits and make no semantical interpretations. Their mindless electronic execution of mechanized procedures is one of their virtues.

The cognition-constraint thesis is opposed to the neutral-language thesis that language is merely a passive instrument for expressing thought. Language is not merely passive but rather has a formative influence on thought. The formative influence of language as the “shaper of meaning” has been recognized as the Sapir-Whorf hypothesis and specifically by Benjamin Lee Whorf’s principle of linguistic relativity set forth in his “Science and Linguistics” (1940) reprinted in Language, Thought and Reality (1956).  But contrary to Whorf it is not the grammatical system that determines semantics, but rather it is what Quine called the “web of belief”, i.e., the shared belief system as found in a unilingual dictionary.

For more about the linguistic theory of Whorf readers are referred to BOOK VI at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available from most Internet booksellers.


4.26 Communication Constraint

The communication constraint is the linguistic impediment to understanding a new theory relative to those currently conventional. 

The communication constraint has the same origins as the cognition constraint. This impediment is also both cognitive and psychological. The scientist must cognitively learn the new theory well enough to restructure the composite meaning complexes associated with the descriptive terms common both to the old theory that he is familiar with and to the theory that is new to him. And this learning involves overcoming psychological habit that enables linguistic fluency that reinforces existing beliefs.

This learning process suggests the conversion experience described by Kuhn in revolutionary transitional episodes, because the new theory must firstly be accepted as true, however provisionally, for its semantics to be understood, since only statements believed to be true can operate as semantical rules that convey understanding. That is why dictionaries are presumed not to contain falsehoods. If testing demonstrates the new theory’s superior empirical adequacy, then the new theory’s pragmatic acceptance should eventually make it the established conventional wisdom.

But if the differences between the old and new theories are so great as perhaps to be called revolutionary, then some members of the affected scientific profession may not accomplish the required learning adjustment. People usually prefer to live in an orderly world, but innovation creates semantic disorientation and consequent psychological anomie. In reaction the slow learners and nonlearners become a rearguard that clings to the received conventional wisdom, which is being challenged by the new theory at the frontier of research, where there is much conflict that produces confusion due to semantic dissolution and consequent restructuring of the relevant concepts in the web of belief.

The communication constraint and its effects on scientists have been insightfully described by Heisenberg, who personally witnessed the phenomenon when his quantum theory was firstly advanced.  In his Physics and Philosophy: The Revolution in Modern Science Heisenberg defines a “revolution” in science as a change in thought pattern, which is to say a semantical change, and he states that a change in thought pattern becomes apparent, when words acquire meanings that are different from those they had formerly. The central question that Heisenberg brings to the phenomenon of revolution in science understood as a change in thought pattern is how the revolution is able to come about. The occurrence of a scientific revolution is problematic due to resistance to the change in thought pattern presented to the cognizant profession.

Heisenberg notes that as a rule the progress of science proceeds without much resistance or dispute, because the scientist has by training been put in readiness to fill his mind with new ideas. But he says the case is altered when new phenomena compel changes in the pattern of thought.  Here even the most eminent of physicists find immense difficulties, because a demand for change in thought pattern may create the perception that the ground has been pulled from under one’s feet.  He says that a researcher having achieved great success in his science with a pattern of thinking he has accepted from his young days, cannot be ready to change this pattern simply on the basis of a few novel experiments. Heisenberg states that once one has observed the desperation with which clever and conciliatory men of science react to the demand for a change in the pattern of thought, one can only be amazed that such revolutions in science have actually been possible at all. It might be added that since the prevailing conventional view has usually had time to be developed into a more extensive system of ideas, those unable to cope with the semantic dissolution produced by the newly emergent ideas often take refuge in the psychological comforts of coherence and familiarity provided by the more extensive conventional wisdom, which assumes the nature of a dogma and for some scientists an ideology. 

In the meanwhile the developers of the new ideas together with the more opportunistic and typically younger advocates of the new theory, who have been motivated to master the new theory’s language in order to exploit its perceived career promise, assume the avant-garde rôle and become a vanguard.   1970 Nobel-laureate economist Paul Samuelson offers a documented example: He wrote in “Lord Keynes and the General Theory” in Econometrica (1946) that he considers it a priceless advantage to have been an economist before 1936, the publication year of Keynes’ General Theory, and to have received a thorough grounding in classical economics, because his rebellion against Keynes’ General Theory’s pretensions would have been complete save for his uneasy realization that he did not at all understand what it is about. And he adds that no one else in Cambridge, Massachusetts really knew what it is about for some twelve to eighteen months after its publication. Years later he wrote in his Keynes’ General Theory: Reports of Three Decades (1964) that Keynes’ theory had caught most economists under the age of thirty-five with the unexpected virulence of a disease first attacking and then decimating an isolated tribe of South Sea islanders, while older economists were the rearguard that was immune. Samuelson was a member of the Keynesian vanguard.

Note also that contrary to Kuhn and especially to Feyerabend the transition however great does not involve a complete semantic discontinuity much less any semantic incommensurability. And it is unnecessary to learn the new theory as though it were a completely foreign language. The semantic incommensurability muddle is resolved by recognition of componential semantics.  For the terms common to the new and old theories, the component parts contributed by the new theory replace those from the old theory, while the parts contributed by the test-design statements remain unaffected. Thus the test-design language component parts shared by both theories enable characterization of the subject of both theories independently of the distinctive claims of either, and thereby enable decisive testing.  The shared semantics in the test-design language also facilitates learning and understanding the new theory, however radical the new theory may be.

It may furthermore be noted that the scientist viewing the computerized discovery system output experiences the same communication impediment with the machine output that he would, were the outputted theories developed by a fellow human scientist. The communication constraint makes new theories developed mechanically grist for Luddites’ mindless rejection.

Fortunately today the Internet and e-book media enable new ideas to circumvent obstructionism by the peer-reviewed literature, functioning as a latter-day Salon des Refusés for both scientists and philosophers of science. Hickey’s communications with sociology journal editors exemplify the retarding effects of the communication constraint in current academic sociology. See Appendix II in BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available from most Internet booksellers.

The communication constraint is a general linguistic phenomenon that is not limited to the language of science. It applies to philosophy as well. Many philosophers of science who received much if not all of their philosophy education before the turbulent 1960’s or whose philosophy education was for whatever reason retarded, are unsympathetic to the reconceptualization of familiar terms such as “theory” and “law” that are central to contemporary pragmatism. They are dismayed by the semantic dissolution resulting from the rejection of the positivist or romantic beliefs.

In summary both the cognition constraint and the communication constraint are based on the reciprocal relation between semantics and belief, such that given the conventionalized meaning for a descriptive term, certain beliefs determine the meaning of the term, which beliefs are furthermore reinforced by psychological habit that enables linguistic fluency.  The result is that the meaning’s conventionality impedes change in those defining beliefs.


4.27 Scientific Explanation

A scientific explanation is:

(1)   a discourse that can be schematized as a modus ponens logical deduction from a set of one or several universally quantified law statements expressible in a nontruth-functional hypothetical-conditional schema

(2)    together with a particularly quantified antecedent description of realized initial conditions

(3)  that jointly conclude to a consequent particularly quantified description of the explained event.

Explanation is the ultimate aim of basic science. There are nonscientific types such as the historical explanation, but history is not a science, although it may use science as in economic history. But only explanation in basic science is of interest in philosophy of science. When some course of action is taken in response to an explanation such as a social policy, a medical therapy or an engineered product or structure, the explanation is used as applied science. Applied science does not occasion a change in an explanation as in basic science, unless there is an unexpected failure in spite of conscientious and competent implementation of the relevant applied laws.

Since a theory in an empirical test is proposed as an explanation, the logical form of the explanation in basic science is the same as that of the empirical test. The universally quantified statements constituting a system of one or several related scientific laws in an explanation can be schematized as a nontruth-functional conditional statement in the logical form “For every A if A, then C”. But while the logical form is the same for both testing and explanation, the deductive argument is not the same.

The deductive argument of the explanation is the modus ponens argument instead of the modus tollens logic used for testing. In the modus tollens argument the conditional statement expressing the proposed theory is falsified, when the antecedent clause is true and the consequent clause is false. On the other hand in the modus ponens argument for explanation both the antecedent clause describing initial and exogenous conditions and the conditional statements having law status are accepted as true, such that affirmation of the antecedent clause validly concludes to affirmation of the consequent clause describing the explained phenomenon.

Thus the schematic form of an explanation is: “For every A if A, then C” is true; “A” is true; therefore “C” is true (and explained). The conditional statement “For every A if A, then C” represents a set of one or several related universally quantified law statements applying to all instances of “A”. “A” is the set of one or several particularly quantified statements describing the realized initial and exogenous conditions that cause the occurrence of the explained phenomenon as in a test. “C” is the set of one or several particularly quantified statements describing the explained individual consequent effect, which whenever possible is a prediction.
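
The contrast between the two argument forms can be made concrete in a toy sketch. The following Python fragment is only an illustrative schema (the function names and truth values are hypothetical, not part of any actual test):

    # The law "For every A, if A then C" is treated here as the claim that whenever
    # the antecedent conditions A are realized, the consequent C will be observed.

    def modus_tollens_test(antecedent_realized, consequent_observed):
        # Testing: if A is realized and C is not observed, the conditional law is falsified.
        if antecedent_realized and not consequent_observed:
            return "law falsified"
        return "law not falsified"

    def modus_ponens_explanation(law_accepted_as_true, antecedent_realized):
        # Explanation: the law and the antecedent description are both accepted as true,
        # so affirming A concludes to the consequent C, which is thereby explained.
        if law_accepted_as_true and antecedent_realized:
            return "C is true (and explained)"
        return "no explanation follows"

    print(modus_tollens_test(True, False))        # -> law falsified
    print(modus_ponens_explanation(True, True))   # -> C is true (and explained)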

In the scientific explanation the statements in the conditional schema express scientific laws accepted as true due to their empirical adequacy as demonstrated by nonfalsifying test outcomes. These together with the antecedent statements describing the initial conditions in the explanation constitute the explaining language that Popper calls the “explicans”. And he calls the logically consequent language, which describes the explained phenomenon, the “explicandum”. Hempel used the terms “explanans” and “explanandum” respectively. Furthermore it has been said that theories “explain” laws. Neither untested nor falsified theories occur in a scientific explanation. Scientific explanations consist of laws, which are former theories that have been tested with nonfalsifying test outcomes. Proposed explanations are merely untested theories.

Since all the universally quantified statements in the nontruth-functional conditional schema of an explanation are laws, the “explaining” of laws is said to mean that a system of logically related laws forms a deductive system partitioned into dichotomous subsets of explaining antecedent axioms and explained consequent theorems. Logically integrating laws into axiomatic systems confers psychological satisfaction by contributing semantical coherence. Influenced by Newton’s physics many positivists had believed that producing reductionist axiomatic systems is part of the aim of science. Logical reductionism was integral to the positivist Vienna Circle’s unity-of-science agenda. Hanson calls this “catalogue science” as opposed to “research science”. The logical axiomatizing reductionist fascination is not validated by the history of science. Great developmental episodes in the history of science such as the development of quantum physics have had the opposite effect, i.e., that of fragmenting science, because quantum mechanics cannot be made a logical extension of classical physics. But while fragmentation has occasioned the communication constraint and thus provoked opposition to a discovery, it has delayed but not halted the empirical advancement of science in its history. The only criterion for scientific criticism that is acknowledged by the contemporary pragmatist is the empirical criterion. Eventually empirical pragmatism prevails.

However, physical reductionism as opposed to mere axiomatic logical reductionism represents discoveries in science and does more than just add semantical coherence. Simon and his associates developed discovery systems that produced physical reductions in chemistry. Three such systems, named STAHL, DALTON and GLAUBER, are described in Simon’s Scientific Discovery. System STAHL, named after the German chemist Georg Ernst Stahl, was developed by Jan Zytkow. It creates a type of qualitative law that Simon calls “componential”, because it describes the hidden components of substances. STAHL replicated the development of both the phlogiston and the oxygen theories of combustion. System DALTON, named after the chemist John Dalton, creates structural laws in contrast to STAHL, which creates componential laws. Like the historical Dalton the DALTON system does not invent the atomic theory of matter. It employs a representation that embodies the hypothesis and incorporates the distinction between atoms and molecules invented earlier by Amedeo Avogadro.

System GLAUBER was developed by Pat Langley in 1983. It is named after the seventeenth-century chemist Johann Rudolph Glauber, who contributed to the development of the acid-base theory. Note that the componential description does not invalidate the higher-order description. Thus the housewife who combines baking soda and vinegar and then observes a reaction yielding a salt residue may validly and realistically describe the vinegar and soda (acid and base) and their observed reaction in the colloquial terms she uses in her kitchen. The colloquial description is not invalidated by her inability to describe the reaction in terms of (reduce it to) the chemical theory of acids and bases. Both descriptions are semantically significant and realistically describe ontology.

The difference between logical and physical reductions is illustrated by the neopositivist Ernest Nagel in his distinction between “homogeneous” and “heterogeneous” reductions in his Structure of Science (1961). The homogeneous reduction illustrates what Hanson called “catalogue science”, which is merely a logical reduction that contributes semantical coherence, while the heterogeneous reduction illustrates what Hanson called “research science”, which involves discovery and new empirical statements that Nagel calls “correspondence rules”. In the case of the homogeneous reduction, which is merely a logical reduction with one of the theories operating as a set of axioms and the other as a set of conclusions, the semantical effect is merely an exhibition of semantical structure and a decrease in vagueness to increase coherence. This can be illustrated by the reduction of Kepler’s laws describing the orbital motions of the planet Mars to Newton’s laws of gravitation.

 However in the case of the heterogeneous reduction there is not only a reduction of vagueness, but also the addition of correspondence rules, which are universally quantified falsifiable empirical statements relating descriptive terms in the two theories to one another. Nagel maintains that the correspondence rules are initially conventions that merely assign additional meaning, but which later become testable and falsifiable empirical statements.  Nagel illustrates this heterogeneous type by the reduction of thermodynamics to statistical mechanics, in which a temperature measurement value is equated to a measured value of the mean of molecular kinetic energy by a correspondence rule.  Then further development of the theory makes it possible to calculate the temperature of the gas in some indirect fashion from experimental data other than the temperature value obtained by actually measuring the temperature of the gas. Thus the molecular kinetic energy laws empirically explain the thermodynamic laws. 
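
Nagel’s thermodynamics example can be written out in a small sketch. The following Python fragment uses the standard kinetic-theory relation for an ideal monatomic gas, in which the mean translational kinetic energy per molecule equals (3/2)·k·T; the energy value used below is a hypothetical illustration, not a figure from Nagel:

    BOLTZMANN_K = 1.380649e-23  # Boltzmann constant in joules per kelvin

    def temperature_from_mean_kinetic_energy(mean_kinetic_energy_joules):
        # Correspondence rule read as an empirical statement: T = 2<E> / (3k)
        return 2.0 * mean_kinetic_energy_joules / (3.0 * BOLTZMANN_K)

    example_energy = 6.2e-21  # hypothetical mean kinetic energy per molecule, in joules
    print(round(temperature_from_mean_kinetic_energy(example_energy), 1), "K")  # about 299.4 K

Read as Nagel describes, the equation is at first a convention assigning meaning and later a falsifiable empirical statement, since the temperature it yields can be compared with a directly measured temperature.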

In his “Explanation, Reduction and Empiricism” in Minnesota Studies in the Philosophy of Science (1962) Feyerabend with his holistic view of the semantics of language dismissed Nagel’s positivist analysis of reductionism. Feyerabend maintained that the reduction is actually a complete replacement of one theory together with its observational consequences by another theory with its distinctive observational consequences. But the contemporary pragmatist can analyze the language of reductions by means of the componential semantics thesis applied to both theories and their shared test-design language.

 

 
