INTRODUCTION TO PHILOSOPHY OF SCIENCE

Book I


4.22 Nonempirical Criteria

Confronted with irresolvable scientific pluralism, that is, with several alternative explanations that are tested and not falsified due to empirical underdetermination in the test-design language, philosophers and scientists have proposed various nonempirical criteria that they believe have been operative historically in explanation choice.  Furthermore a plurality of untested and therefore unfalsified theories may also exist before any testing, so that different scientists may prefer to test one theory rather than others on the basis of nonempirical criteria.

Philosophers have proposed a great variety of such nonempirical criteria.  Popper advanced a criterion that he says enables the scientist to know in advance of any empirical test whether or not a new theory would be an improvement over existing theories, were the new theory able to pass crucial tests in which its performance is compared with that of the older existing alternatives.  He calls this criterion the “potential satisfactoriness” of the theory, and it is measured by the amount of “information content” in the theory.  This criterion follows from his concept of the aim of science, the thesis that the theory that tells us more is preferable to one that tells us less, with the more informative theory having more “potential falsifiers”.
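Popper’s claim that the more informative theory has more potential falsifiers is commonly given a simple formal gloss on which content varies inversely with logical probability.  The following is only a minimal sketch of that gloss, not a quotation from Popper, and the symbols are illustrative:

$$
Ct(a) = 1 - p(a), \qquad \text{and since } p(a \wedge b) \le p(a) \le p(a \vee b), \quad Ct(a \wedge b) \ge Ct(a) \ge Ct(a \vee b).
$$

On this reading the logically stronger theory, which forbids more possible states of affairs and therefore has more potential falsifiers, carries the greater information content.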

But the amount of information in a theory is not static; it will likely evolve as the tested and nonfalsified theory is extended by the cognizant scientific profession over time.  And a theory with greater potential satisfactoriness may be empirically inferior, when tested with an improved test design.  Test designs may be improved by developing more accurate measurement procedures and/or by adding clarifying descriptive information that reduces the vagueness in the characterization of the subject for testing.  Such test-design improvements refine the characterization of the problematic phenomenon addressed by the theories, and thus reduce empirical underdetermination and improve the decidability of testing.

When empirical underdetermination makes testing undecidable among alternative theories, different scientists may have personal reasons for preferring one or another alternative as an explanation.  In such circumstances selection may be due to an ego involvement for the scientist rather than an investigative decision.  Or the choice may be influenced by such circumstances as the cynical realpolitik of peer-reviewed journals.  Knowing what editors and their favorite referees currently want in submissions helps an author get his paper published.  In the January 1978 issue of the Journal of the American Society for Information Science (JASIS) the editor wrote that referees often use the peer review process as a means to attack a point of view and to suppress the content of a submitted paper, i.e., competitive careerists attempt censorship of ideas contrary to their own.  Furthermore editors are not typically entrepreneurial; as “gate guards” they are academia’s risk-aversive rearguard rather than the risk-taking avant-garde.  They select the established “authorities” with reputation-based vested interests in their personal preferences or in the prevailing traditional views.  These so-called “authorities” cynically suborn the peer-review process by using their personal preferences or conventional views as criteria for criticism and rejection for publication instead of using empirical criteria.  Such cynical reviewers and editors are hacks who represent the status quo, demanding trite papers rather than new, original and empirically superior ideas.  Thus acceptance in the peer-reviewed literature is a sign of banal conventionality instead of empirical adequacy.  When this cynicism becomes sufficiently pervasive as to be normative, the science has become institutionally corrupted and is destined to exhibit the degenerate sterility of an intellectually incestuous monoculture.

In contemporary academic sociology the existing degenerate sterility of an intellectually incestuous monoculture is accentuated by the social-control mechanisms described in sociological theory, which are operative in science as described by sociologist Warren O. Hagstrom in his The Scientific Community (1965).  Furthermore among sociologists the corrupting conformism is reinforced by their enthusiastic embrace of Kuhn’s conformist sociological thesis of “normal science”.  For example shortly after Kuhn’s Structure of Scientific Revolutions there appeared a new sociological journal, Sociological Methods and Research, edited by George W. Bohrnstedt of Indiana University.  In a statement of policy reprinted in issues for many years the editor states that the journal is devoted to sociology as a “cumulative” empirical science, and he describes the journal as one that is highly focused on the assessment of the scientific status of sociology.  One of the distinctive characteristics of conformist “normal” science in Kuhn’s theory is that it is cumulative, such that it can demonstrate progress.  In other words research that does not conform to Kuhnian “normal science” is not progress.  Such corrupting intellectual incest has thus become more institutionalized and more retarding in academic sociology than in the more mature sciences.

External sociocultural factors have also influenced theory choice.  In his Copernican Revolution: Planetary Astronomy in the Development of Western Thought (1957) Kuhn wrote that the astronomer in the time of Copernicus could not upset the two-sphere universe without overturning physics and religion as well.  He reports that fundamental concepts in the pre-Copernican astronomy had become strands for a much larger fabric of thought, and that the nonastronomical strands in turn bound the thinking of the astronomers.  The Copernican revolution occurred because Copernicus was a dedicated specialist, who valued mathematical and celestial detail more than the values reinforced by the nonastronomical views that were dependent on the prevailing two-sphere theory.  This purely technical focus of Copernicus enabled him to ignore the nonastronomical consequences of his innovation, consequences that would lead his contemporaries of less restricted vision to reject his innovation as absurd.

Citing Kuhn some sociologists of knowledge, including notably those advocating the “strong program”, maintain that cultural, social and political forces that influence society at large also inevitably influence the content of scientific beliefs.  For example the Nazis and Stalinists made their political ideologies dictate biology.  Sociologists who believe that this means empiricism does not control acceptance of scientific beliefs in the long term are mistaken, because it is pragmatic empiricism that in the competitive world enables wartime victories, peacetime prosperity and in all times business profits, as reactionary politics, delusional ideologies and utopian fantasies cannot.

Even today persons with different political philosophies, partisan ideologies, and personal interests defend and attack economic ideas and policies by using nonempirical criteria.  Political views are like any other in that people believe what they want to believe.  For example in the United States more than eighty years after Keynes, Republican politicians still attack Keynesian economics while Democrat politicians defend it.  Many Republicans are motivated by right-wing political ideology such as may be found in 1974 Nobel-laureate Friedrich von Hayek’s (1899-1992) Road to Serfdom or in the heroic romantic novels by the author Ayn Rand (1905-1982).  The prevailing political philosophy among Republicans opposes government intervention in the private sector of the national economy.  But as Federal Reserve Board of Governors Chairman Ben Bernanke (1953), New York Federal Reserve Bank President Timothy Geithner (1961) and U.S. Treasury Secretary Henry Paulson (1946) maintain in their Firefighting: The Financial Crisis and Its Lessons (2019), Adam Smith’s invisible hand of capitalism cannot stop a full-blown financial collapse; only the visible hand of government can do that (P. 5).

The post-World War II era offered no opportunity to witness a liquidity trap, but that changed in the 2007-2009 Great Recession, which thus reduced the previous empirical underdetermination and improved decidability.  In his After the Music Stopped (2013) Alan S. Blinder (1948), Princeton University economist and former Vice Chairman of the Federal Reserve Board of Governors, reports that “ultraconservative” Republican President George W. Bush (1946) “let pragmatism trump ideology” (P. 213) when he signed the Economic Stimulus Act of 2008, a distinctively Keynesian fiscal policy of tax cuts, which added $150 billion to the U.S. Federal debt notwithstanding Republicans’ visceral abhorrence of the Federal debt.

In contrast Democrat President Barack Obama (1961) without reluctance and with Democrat-party control of both houses of Congress signed the American Recovery and Reinvestment Act in 2009, a stimulus package that added $787 billion to the Federal debt.  In “How the Great Recession was Stopped” in Moody’s Analytics (2010) Blinder reports that simulations with the Moody’s Analytics large macroeconometric model showed that the effect of President Obama’s stimulus in contrast to a no-stimulus simulation scenario was a GDP that was 6 percent higher with the stimulus than without it, an unemployment rate 3 percentage points lower, and 4.8 million additional Americans employed (P. 209).  Pragmatic Republicans in Congress were not willing to permit doctrinaire conservative noninterventionism to produce another Great Depression with its 25% unemployment rates as in 1933, although doing so would have made the effectiveness of Federal fiscal-stimulus policy more empirically decidable.  Yet Chairman Bernanke wrote in his memoir The Courage to Act (2015) that President Obama’s 2009 stimulus was small in comparison with its objective of helping to arrest the deepest recession in seventy years in a $15 trillion national economy (P. 388).  Thus Bernanke, a conservative Republican, did not reject Keynesianism, but instead concluded that the recovery was needlessly slow, because the Obama Federal fiscal stimulus program was disproportionately small for the size of the U.S. national macroeconomy.

As it happens Chairman Bernanke discovered at the time of the Great Recession that expansionary Keynesian fiscal policy could be supplemented with a new monetary policy.  Keynesian deficit spending was needed to force money into the economy, because traditional monetary policy merely increases bank reserves.  But increasing bank reserves in the economy’s liquidity trap condition produces negligible increased lending by banks for private-sector investment spending, since short-term interest rates are at or near zero and cannot be lowered.  But Bernanke’s Federal Reserve Board supplemented the Obama fiscal stimulus by introducing a new monetary stimulus policy called “quantitative easing”, which consisted of purchasing mortgages and long-term U.S. Treasury bonds.  These actions occurred in three stages called “QE1” in 2009, “QE2” in 2011 and “QE3” in 2012, which altogether injected $4.5 trillion into the economy and reduced long-term interest rates.  The conservative Cassandras in the U.S. Congress warned of a resulting hyperinflation, but inflation in the decade that followed has been negligible. 

Now at this writing during the COVID-19 pandemic, long-term interest rates are also near zero (the 10-year Treasury notes are now 0.5% and falling), so quantitative easing is no longer an effective option to stimulate the economy.  Thus political pragmatism dictates much more deficit spending.  The recent Federal deficit due to the pandemic depression has already dwarfed the deficit incurred during the 2007-9 Great Recession.  The Wall Street Journal (14 July 2020) reported that the Federal deficit for the calendar year ending 30 June 2020 amounted to $3 trillion, reaching a record-level 14% of the GDP, and that the Congressional Budget Office estimates that the deficit for the Federal fiscal year ending 30 September 2020 will be $3.7 trillion.  And the pandemic is still ongoing with no end in sight until some day a vaccine is developed.  The only option left for the Federal Reserve is to continue to purchase U.S. Treasury securities, so the U.S. Treasury Department has more money to spend.  Thus with both short-term and long-term interest rates in a liquidity trap more Federal super deficits are likely over the next several Federal fiscal years, because Republicans like all political animals understand the pragmatic realpolitik of political economy, especially in election years.  Their options now are Keynesianism, socialism, or an historic depression that exceeds the 1930s Great Depression by orders of magnitude.

There are many other examples of nonempirical criteria that have operated in scientific criticism.  One is 1933 Nobel-laureate physicist Paul Dirac (1902-1984), who relied on the aesthetics he found in mathematics in developing his operator calculus for quantum physics and in predicting the positron.  Nonetheless all nonempirical criteria are presumptuous.  To make such anticipatory choices is like betting on a horse before it runs the race.

No nonempirical criterion enables a scientist to predict reliably which among alternative theories, whether untested or tested and not falsified, will survive empirical testing, when in due course the degree of empirical underdetermination is reduced by a new and improved test design that enables decidable testing.


4.23 The “Best Explanation” Criteria

As previously noted (see Section 4.05 above) Thagard’s cognitive-psychology system ECHO, developed specifically for theory selection, identifies three nonempirical criteria for maximizing achievement of the coherence aim.  His simulations of past episodes in the history of science indicate that the most important criterion is breadth of explanation, followed by simplicity of explanation, and finally analogy with previously accepted theories.  Thagard considers these nonempirical selection criteria as productive of a “best explanation”.
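Thagard’s ECHO is a connectionist constraint-satisfaction program, so the fragment below is not his algorithm; it is only a hedged toy sketch, in Python, of how the three criteria might be weighed against one another, with hypothetical hypothesis names, weights and scoring rule.

```python
# Hedged toy illustration only: NOT Thagard's ECHO, which is a connectionist
# constraint-satisfaction network.  The names, weights and scoring rule are
# hypothetical, chosen merely to show how breadth, simplicity and analogy
# might be traded off in selecting a "best explanation".

from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    explains: set                        # evidence the hypothesis explains (breadth)
    cohypotheses: int = 0                # auxiliary assumptions needed (penalizes simplicity)
    analogous_to_accepted: bool = False  # analogy with previously accepted theories

def score(h: Hypothesis, w_breadth: float = 1.0,
          w_simplicity: float = 0.5, w_analogy: float = 0.25) -> float:
    """Weighted sum mirroring the reported ordering of the criteria:
    breadth counts most, then simplicity, then analogy."""
    breadth = len(h.explains)
    simplicity_penalty = w_simplicity * h.cohypotheses
    analogy_bonus = w_analogy if h.analogous_to_accepted else 0.0
    return w_breadth * breadth - simplicity_penalty + analogy_bonus

if __name__ == "__main__":
    candidates = [
        Hypothesis("T1", explains={"e1", "e2", "e3"}, cohypotheses=1),
        Hypothesis("T2", explains={"e1", "e2"}, analogous_to_accepted=True),
    ]
    best = max(candidates, key=score)    # the "best explanation" by these criteria
    print("Selected:", best.name)
```

Note that nothing in such a scorer appeals to test outcomes; it formalizes only the nonempirical criteria, which is why any verdict it delivers remains provisional pending empirical testing.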

The breadth-of-explanation criterion also suggests Popper’s aim of maximizing information content.  In any case there have been successful theories in the history of science, such as Heisenberg’s matrix mechanics and uncertainty relations, for which none of these three characteristics was operative in their acceptance as explanations.  And as Feyerabend noted in Against Method in criticizing Popper’s view, Aristotelian dynamics is a general theory of change comprising locomotion, qualitative change, generation and corruption, while the dynamics of Galileo and his successors pertains exclusively to locomotion.  Aristotle’s explanations therefore may be said to have greater breadth, but his physics is now deemed to be less empirically adequate.

Contemporary realistic neopragmatists acknowledge only the empirical criterion, the criterion of superior empirical adequacy.

They exclude all nonempirical criteria from the aim of science, because while such criteria are relevant to persuasion, to making theories appear “convincing”, they are irrelevant as evidence for progress.  Nonempirical criteria are like the psychological criteria that trial lawyers use to select and persuade juries in order to win lawsuits in a court of law, but which are irrelevant to courtroom evidence rules for determining the facts of a case.  Such prosecutorial lawyers are like the editors and referees of the peer-reviewed academic literature (sometimes called the “court of science”) who ignore the empirical evidence described in a paper submitted for publication and who reject the paper due to its unconventionality.  Such editors make marketing-based instead of evidence-based publication decisions, and they corrupt the institution of science.

But nonempirical criteria are often operative in the selection of problems to be addressed and explained.  For example the American Economic Association’s Index of Economic Journals indicates that in the years of the lengthy Great Depression the number of journal articles concerning the trade cycle fluctuated in close correlation with the national average unemployment rate with a lag of two years.


4.24 Nonempirical Linguistic Constraints

The constraint imposed upon theorizing by empirical test outcomes is the empirical constraint, the criterion of superior empirical adequacy.  It is the regulating institutionalized cultural value definitive of modern empirical science, and it is viewed not as an obstacle to be overcome but rather as a condition to be respected for the advancement of science.

But there are other kinds of constraints that are nonempirical and are retarding impediments that must be overcome for the advancement of science, and these are internal to science in the sense that they are inherent in the nature of language.  They are the cognition constraint and communication constraint.


4.25 Cognition Constraint

The semantics of every descriptive term is determined by its linguistic context consisting of universally quantified statements believed to be true.   But the artifactual thesis of language implies that semantics and belief are mutually determining.

Therefore given the conventional meaning for a descriptive term, certain beliefs that determine the meaning of the term are implied.  And these beliefs are furthermore reinforced by habitual linguistic fluency with the result that the meaning’s conventionality constrains change in those defining beliefs.  The conventionalized meanings for descriptive terms produce the cognition constraint, the linguistic impediment that inhibits construction of new theories, which is manifested as lack of imagination, creativity or ingenuity.

In his Concept of the Positron Hanson identified this impediment to discovery and called it the “conceptual constraint”.  He reports that physicists’ identification of the concept of the subatomic particle with the concept of its charge was an impediment to recognizing the positron.  The electron was identified with a negative charge and the much more massive proton was identified with a positive charge, so that the positron as a particle with the mass of an electron and a positive charge was not recognized without difficulty and delay.

In his Introduction to Metascience Hickey referred to this conceptual constraint as the “cognition constraint”.  Semantical rules are not just explicit rules; they are also strong linguistic habits with subconscious roots that enable prereflective competence and fluency in both thought and speech.  Six-year-old children need not reference explicit grammatical and semantical rules in order to speak competently and fluently.  And these subconscious habits make meaning a synthetic psychological experience.

Given a conventionalized belief or firm conviction expressible as a universally quantified affirmative categorical statement, the predicate in that categorical affirmation contributes meaning parts to the meaning complex of the statement’s subject term.  The conventionalized status of meanings makes development of new theories difficult, because new theory construction requires greater or lesser semantical dissolution and restructuring of the complex semantics of conventional terms.  Accordingly the more extensive the revision of beliefs, the more constraining are both the semantical restructuring and the psychological conditioning on the creativity of the scientist who would develop a new theory.  Revolutionary theory development requires both relatively more extensive semantical dissolution and restructuring and thus greater psychological adjustment in linguistic habits. 

However, use of computerized discovery systems circumvents the cognition constraint, because the machines have no linguistic-psychological habits.  Their mindless electronic execution of mechanized procedures is one of their virtues.

The cognition-constraint thesis is opposed to the neutral-language thesis that language is merely a passive instrument for expressing thought.  Language is not merely passive but rather has a formative influence on thought.  The formative influence of language as the “shaper of meaning” has been recognized in the Sapir-Whorf hypothesis and specifically in Benjamin Lee Whorf’s (1897-1941) principle of linguistic relativity set forth in his “Science and Linguistics” (1940) reprinted in Language, Thought and Reality (1956).  But contrary to Whorf it is not the grammatical system that determines semantics, but rather it is what Quine called the “web of belief”, i.e., the shared belief system as found in a unilingual dictionary.

For more about the linguistic theory of Whorf readers are referred to BOOK VI at the free web site www.philsci.com or to the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.


4.26 Communication Constraint

The communication constraint is the linguistic impediment to understanding a theory that is new relative to those currently conventional. 

The communication constraint has the same origins as the cognition constraint.  This impediment is also both cognitive and psychological.  The scientist must cognitively learn the new theory well enough to restructure the composite meaning complexes associated with the descriptive terms common both to the old theory with which he is familiar and to the theory that is new to him.  And this learning involves overcoming the psychological habits that enable linguistic fluency and reinforce existing beliefs.

This learning process suggests the conversion experience described by Kuhn in revolutionary transitional episodes, because the new theory must firstly be accepted as true, however provisionally, for its semantics to be understood, since only statements believed to be true can operate as semantical rules that convey understanding.  That is why dictionaries are presumed not to contain falsehoods.  If testing demonstrates the new theory’s superior empirical adequacy, then the new theory’s pragmatic acceptance should eventually make it the established conventional wisdom.

But if the differences between the old and new theories are so great as perhaps to be called revolutionary, then some members of the affected scientific profession may not accomplish the required learning adjustment.  People usually prefer to live in an orderly world, but innovation creates semantic dissolution and consequent psychological disorientation.  In reaction the slow learners and nonlearners become a rearguard that clings to the received conventional wisdom, which is being challenged by the new theory at the frontier of research, where there is much conflict that produces confusion due to semantic dissolution and consequent restructuring of the relevant concepts in the web of belief.

The communication constraint and its effects on scientists have been insightfully described by Heisenberg, who personally witnessed the phenomenon when his quantum theory was first advanced.  In his Physics and Philosophy: The Revolution in Modern Science Heisenberg defines a “revolution” in science as a change in thought pattern, which is to say a semantical change, and he states that a change in thought pattern becomes apparent when words acquire meanings that are different from those they had formerly.  The central question that Heisenberg brings to the phenomenon of revolution in science, understood as a change in thought pattern with semantical change, is how the revolution is able to come about.  The occurrence of a scientific revolution is problematic due to resistance to the change in thought pattern presented to the cognizant profession.

Heisenberg notes that as a rule the progress of science proceeds without much resistance or dispute, because the scientist has by training been put in readiness to fill his mind with new ideas.  But he says the case is altered when new phenomena compel changes in the pattern of thought.  Here even the most eminent of physicists find immense difficulties, because a demand for change in thought pattern may create the perception that the ground has been pulled from under one’s feet.  He says that a researcher having achieved great success in his science with a pattern of thinking he has accepted from his young days, cannot be ready to change this pattern simply on the basis of a few novel experiments.  Heisenberg states that once one has observed the desperation with which clever and conciliatory men of science react to the demand for a change in the pattern of thought, one can only be amazed that such revolutions in science have actually been possible at all.

It might be added that since the prevailing conventional view has usually had time to be developed into a more extensive system of ideas, those unable to cope with the semantic dissolution produced by the newly emergent ideas often take refuge in the psychological comforts of coherence and familiarity provided by the more extensive conventional wisdom, which assumes the nature of a dogma and for some scientists an occupational ideology. 

In the meanwhile the developers of the new ideas together with the more opportunistic and typically younger advocates of the new theory, who have been motivated to master the new theory’s language in order to exploit its perceived career promise, assume the avant-garde rôle and become a vanguard.   1970 Nobel-laureate economist Samuelson offers a documented example: He wrote in “Lord Keynes and the General Theory” in Econometrica (1946) that he considers it a priceless advantage to have been an economist before 1936, the publication year of Keynes’ General Theory, and to have received a thorough grounding in classical economics, because his rebellion against Keynes’ General Theory’s pretensions would have been complete save for his uneasy realization that he did not at all understand what it is about.  And he adds that no one else in Cambridge, Massachusetts really knew what it is about for some twelve to eighteen months after its publication.  Years later he wrote in his Keynes’ General Theory: Reports of Three Decades (1964) that Keynes’ theory had caught most economists under the age of thirty-five with the unexpected virulence of a disease first attacking and then decimating an isolated tribe of South Sea islanders, while older economists were the rearguard that was immune.  Samuelson was a member of the Keynesian vanguard.

Note also that contrary to Kuhn and especially to Feyerabend the transition, however great, does not involve a complete semantic discontinuity, much less any semantic incommensurability.  And it is unnecessary to learn the new theory as though it were a completely foreign language.  For American economists in the 1930s the semantics for the test-design language was defined for both the Keynesian and pre-Keynesian macroeconomic theories by the relevant Federal agencies.  For example the unemployment rate was collected and defined by the Bureau of Labor Statistics, U.S. Department of Labor, and the interest rates and money stocks were collected and defined by the nation’s central bank, the Federal Reserve Board of Governors.  And in his General Theory Keynes explicitly footnoted the National Income and Product Accounts, which include the gross national product, developed by 1971 Nobel-laureate Simon Kuznets (1901-1985) and published by the National Bureau of Economic Research (NBER) in 1935.

The semantic incommensurability muddle is resolved by recognition of componential semantics.  For the terms common to the new and old theories, the component parts contributed by the new theory replace those from the old theory, while the parts contributed by the test-design statements remain unaffected.  Thus the test-design language component parts shared by both theories enable characterization of the subject of both theories independently of the distinctive claims of either, and thereby enable decisive testing.  The shared semantics in the test-design language also facilitates learning and understanding the new theory, however radical the new theory.  It may furthermore be noted that the scientist viewing the computerized discovery system output experiences the same communication impediment with the machine output that he would, were the outputted theories developed by a fellow human scientist.

Fortunately today the Internet and e-book media enable the dissemination of new ideas in ways that circumvent obstructionism by the conventionally-minded peer-reviewed literature.  These new media function as a latter-day Salon des Refusés for both scientists and philosophers of science, who can now easily afford inexpensive self-publishing with worldwide distribution through Internet booksellers.  For Hickey’s communications with sociology journal editors and referees exemplifying the retarding effects of the communication constraint in the current academic sociology literature, see Appendix II to BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the site.

The communication constraint is a general linguistic phenomenon that is not limited to the language of science.  It applies to philosophy as well.  Many philosophers of science who received much if not all of their philosophy education before the intellectually eventful 1960s, or whose philosophy education was for whatever reason retarded, are unsympathetic to the reconceptualization of familiar terms such as “theory” and “law” that are central to contemporary realistic neopragmatism.  They are dismayed by the semantic dissolution resulting from the rejection of positivist or romantic beliefs.

In summary both the cognition constraint and the communication constraint are based on the reciprocal relation between semantics and belief, such that given the conventionalized meaning for a descriptive term, certain beliefs determine the meaning of the term, which beliefs are furthermore reinforced by subconscious psychological habit that enables linguistic fluency.  The result is that the meaning’s conventionality impedes change in those defining beliefs.


4.27 Scientific Explanation

Different sciences have different surface structures, which may involve complex mathematics.  But the syntactical transformation of the surface structure of the laws into nontruth-functional hypothetical-conditional logical form is the philosopher’s heuristic enabling a rational reconstruction that produces the deep structure of the explanation, which clearly and explicitly displays the essential contingency of the universally quantified law language and the logic of explanation.  Scientific laws are neither historicist nor prophesying nor otherwise unconditional. 

The deep structure of a scientific explanation exhibits:

(1)     a discourse that can be expressed as a modus ponens logical deduction from a set of one or several universally quantified law statements expressible in a nontruth-functional hypothetical-conditional form

(2)     together with a particularly quantified antecedent description of the production protocols and the realized initial conditions

(3)     that jointly conclude to a consequent particularly quantified description of the explained event.

Explanation is the ultimate aim of basic science.  There are nonscientific types such as historical explanation, but history is not a science, although it may use science as in economic history.  But only explanation in basic science is of interest in philosophy of science.  When some course of action is taken in response to an explanation such as a social policy, a medical therapy or an engineered product or structure, the explanation is used as applied science.  Applied science does not occasion a change in an explanation as in basic science, unless there is an unexpected failure in spite of correct, conscientious and competent implementation of the relevant applied laws.

The logical form of the explanation in basic science is the same as that of the empirical test.  The universally quantified statements constituting a system of one or several related scientific laws in an explanation can be transformed into a deep structure containing a nontruth-functional hypothetical-conditional statement in the logical form “For every A if A, then C”.  But while the logical form is the same for both testing and explanation, the deductive argument is not the same.

The deductive argument of the explanation is the modus ponens argument instead of the modus tollens logic used for testing.  In the modus tollens argument the nontruth-functional hypothetical-conditional statement expressing the proposed theory is falsified, when the antecedent clause is true and the consequent clause is false.  On the other hand in the modus ponens argument for explanation both the antecedent clause describing initial and/or exogenous conditions and the nontruth-functional hypothetical-conditional statements or equations having law status are accepted as true, such that affirmation of the antecedent clause validly concludes to affirmation of the consequent clause describing the explained phenomenon.

Thus the logical form of an explanation is “For every A if A, then C” is true. “A” is true.  Therefore “C” is true (and explained).  The nontruth-functional hypothetical-conditional statement “For every A if A, then C” represents a set of one or several related universally quantified law statements applying to all instances of “A”.  When the individual explanation is given, “A” is the set of one or several particularly quantified statements describing the realized initial and exogenous conditions that cause the occurrence of the explained phenomenon as in a test.  The particular quantification of “A” makes the nontruth-functional hypothetical-conditional statement also particularly quantified, to make the explanation of the specific event.  And “C” is the set of one or several particularly quantified statements describing the explained individual consequent effect, which whenever possible is a prediction.
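The two argument schemas just described may be set side by side.  The following tabular restatement is only a paraphrase of the forms given in the text above; it adds nothing to them and retains the nontruth-functional reading of the conditional:

$$
\begin{array}{l|l}
\textbf{Empirical test (modus tollens)} & \textbf{Explanation (modus ponens)} \\
\hline
\text{``For every } A \text{, if } A \text{ then } C\text{'' (proposed theory)} & \text{``For every } A \text{, if } A \text{ then } C\text{'' (law)} \\
\text{``}A\text{'' is true (realized test-design conditions)} & \text{``}A\text{'' is true (realized initial and exogenous conditions)} \\
\text{``}C\text{'' is false (observed test outcome)} & \\
\hline
\text{Therefore the theory is falsified.} & \text{Therefore ``}C\text{'' is true: the event is explained and, where possible, predicted.}
\end{array}
$$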

In the scientific explanation the statements in the nontruth-functional hypothetical-conditional schema express scientific laws accepted as true due to their empirical adequacy as demonstrated by nonfalsifying test outcomes.  These together with the antecedent statements describing the initial and exogenous conditions in the explanation constitute the explaining language that Popper calls the “explicans”.  And he calls the logically consequent language, which describes the explained phenomenon, the “explicandum”.  Hempel used the terms “explanans” and “explanandum” respectively.  Furthermore it has been said that theories “explain” laws.  Falsified theories do not occur in a scientific explanation.  Scientific explanations consist of laws, which are formerly theories that have been tested with nonfalsifying test outcomes.  Explanations that employ untested assumed general statements are not scientific explanations.

The “explaining” of laws means that a system of logically related laws in the surface-structure language forms a deductive system dichotomously partitioned into subsets of explaining antecedent axioms and explained consequent theorems.  Logically integrating laws into axiomatic systems confers psychological satisfaction by contributing semantical coherence.  Influenced by Newton’s physics many positivists had believed that producing reductionist axiomatic systems is part of the aim of science.  Logical reductionism was integral to the positivist Vienna Circle’s unity-of-science agenda.  Hanson calls this “catalogue science” as opposed to “research science”.  The logical axiomatizing reductionist fascination is not validated by the history of science.  Great developmental episodes in the history of science such as the development of quantum physics have had the opposite effect of fragmenting science, i.e., classical physics cannot be made a logical extension of quantum mechanics.  Attempts to resolve this fragmentation in physics had exercised both Popper and Bohm.  But while fragmentation has occasioned the communication constraint and thus provoked opposition to a discovery, attempts to resolve it have delayed but not halted the empirical advancement of science in its history.  The only criterion for scientific criticism that is acknowledged by the contemporary realistic neopragmatist is the empirical criterion.  Eventually realistic empirical pragmatism prevails.

However, physical reductionism as opposed to mere axiomatic logical reductionism represents discoveries in science and does more than just add semantical coherence.  Simon and his associates developed discovery systems that produced physical reductions in chemistry.  Three such systems, named STAHL, DALTON and GLAUBER, are described in Simon’s Scientific Discovery.  System STAHL, named after the German chemist Georg Ernst Stahl (1659-1734), was developed by Jan Zytkow.  It creates a type of qualitative law that Simon calls “componential”, because it describes the hidden components of substances.  STAHL replicated the development of both the phlogiston and the oxygen theories of combustion.  System DALTON, named after the chemist John Dalton (1766-1844), creates structural laws in contrast to STAHL, which creates componential laws.  Like the historical Dalton the DALTON system does not invent the atomic theory of matter.  It employs a representation that embodies the hypothesis and incorporates the distinction between atoms and molecules invented earlier by Amedeo Avogadro (1776-1856).

System GLAUBER was developed by Pat Langley in 1983.  It is named after the seventeenth-century chemist Johann Rudolph Glauber (1604-1668), who contributed to the development of the acid-base theory.  Note that the componential description does not invalidate the higher-order description.  Thus the housewife who combines baking soda and vinegar and then observes a reaction yielding a salt residue may validly and realistically describe the vinegar and soda (acid and base) and their observed reaction in the colloquial terms she uses in her kitchen.  The colloquial description is not invalidated by her inability to describe the reaction in terms of the chemical theory of acids and bases.  Both descriptions are semantically significant, and both semantic components together realistically describe an ontology.
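Simon’s actual systems are described in Scientific Discovery; the fragment below is not any of them.  It is only a hedged toy sketch, with invented reaction facts and an invented grouping rule, of how a GLAUBER-like program might form substance classes and a qualitative law of the acid-alkali-salt kind mentioned in the text.

```python
# Hedged toy illustration only: NOT Langley's GLAUBER.  The qualitative facts,
# the taste predicate and the grouping rule are invented for illustration.

reactions = [
    # (reactant_1, reactant_2, product) -- hypothetical qualitative facts
    ("HCl", "NaOH", "NaCl"),
    ("HCl", "KOH", "KCl"),
    ("HNO3", "NaOH", "NaNO3"),
    ("HNO3", "KOH", "KNO3"),
]
tastes = {"HCl": "sour", "HNO3": "sour", "NaOH": "bitter", "KOH": "bitter"}

# Form classes of substances from a shared qualitative property.
acids = {s for s, t in tastes.items() if t == "sour"}
alkalis = {s for s, t in tastes.items() if t == "bitter"}
salts = {product for _, _, product in reactions}

# Check whether the qualitative law "every acid reacts with every alkali to
# yield some salt" is supported by all the recorded facts.
law_holds = all(
    any((acid, alkali, salt) in reactions for salt in salts)
    for acid in acids for alkali in alkalis
)
print("acids:", acids, "alkalis:", alkalis, "salts:", salts)
print("Law 'acid + alkali -> salt' supported:", law_holds)
```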

The difference between logical and physical reductions is illustrated by the neopositivist Ernest Nagel in his distinction between “homogeneous” and “heterogeneous” reductions in his Structure of Science (1961).  The homogeneous reduction illustrates what Hanson called “catalogue science”, which is merely a logical reduction that contributes semantical coherence, while the heterogeneous reduction illustrates what Hanson called “research science”, which involves discovery and new empirical laws, which Nagel calls “correspondence rules”, that relate theoretical terms to observation terms.  In the case of the homogeneous reduction, which is merely a logical reduction with some of the laws operating as a set of axioms and the others as a set of conclusions, the semantical effect is merely an exhibition of semantical structure and a decrease in vagueness to increase coherence.  This can be illustrated by the reduction of Kepler’s laws describing the orbital motions of the planet Mars to Newton’s law of gravitation.

But in the case of the heterogeneous reduction there is not only an increase in coherence and a reduction of vagueness, but also the addition of correspondence rules that are universally quantified falsifiable empirical statements relating descriptive terms in the two laws to one another.  Nagel maintains that the correspondence rules are initially hypotheses that assign additional meaning, but which later become tested and nonfalsified empirical statements.  Nagel illustrates this heterogeneous type by the reduction of thermodynamics to statistical mechanics, in which a temperature measurement value is equated to a measured value of the mean of molecular kinetic energy by a correspondence rule.  Then further development of the test design makes it possible to calculate the temperature of the gas in some indirect fashion from experimental data other than the temperature value obtained by actually measuring the temperature of the gas.  Thus the molecular kinetic energy laws empirically explain the thermodynamic laws.  But contrary to Nagel’s positivism the correspondence rules do not relate theoretical terms to observation terms and do not give statistical mechanics any needed observational status, because statistical mechanics is already observational.  As Einstein said, “the theory decides what the physicist can observe”.
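The correspondence rule that Nagel cites can be stated in its standard textbook form for an ideal monatomic gas; the equation below is a familiar result of kinetic theory and is given here only as an illustration, not as a quotation from Nagel:

$$
\tfrac{3}{2}\, k_{B} T \;=\; \left\langle \tfrac{1}{2}\, m v^{2} \right\rangle
$$

Here $T$ is the thermodynamic temperature obtained by measurement, $k_{B}$ is Boltzmann’s constant, and the right-hand side is the mean translational kinetic energy of the gas molecules.  On Nagel’s account it is a universally quantified empirical statement of this kind that begins as a hypothesis and later becomes a tested and nonfalsified correspondence rule.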

In his “Explanation, Reduction and Empiricism” in Minnesota Studies in the Philosophy of Science (1962) Feyerabend with his wholistic view of the semantics of language altogether dismissed Nagel’s analysis of reductionism.  Feyerabend maintained that the reduction is actually a complete replacement of one theory together with its observational consequences with another theory with its distinctive observational consequences.  But the contemporary realistic neopragmatist can analyze the language of reductions by means of the componential semantics thesis applied to both theories and to their shared and consistent test designs.

 

 

 
