INTRODUCTION TO PHILOSOPHY OF SCIENCE


Chapter 4. Functional Topics

The preceding chapters have offered generic sketches of the principal twentieth-century philosophies of science, namely romanticism, positivism and pragmatism.  And they have discussed selected elements of the contemporary pragmatist philosophy of language for science, namely the object language and metalanguage perspectives, the synchronic and diachronic views, and the syntactical, semantical, ontological and pragmatic dimensions. 

Finally, at the expense of some repetition, this chapter integrates the philosophy of language into the four functional topics, namely (1) the institutionalized aim of basic science, (2) scientific discovery, (3) scientific criticism, and (4) scientific explanation.


4.01 Institutionalized Aim of Science

During the last approximately three hundred years empirical science has evolved into a social institution with its own distinctive and autonomous professional subculture of shared views and values.

The institutionalized aim of science is the cultural value system that regulates the scientist’s performance of basic research.

Idiosyncratic motivations of individual scientists are historically noteworthy, but are largely of anecdotal interest for philosophers of science, except when such idiosyncrasies have produced results that have initiated an institutional change.

The literature of philosophy of science offers various proposals for the aim of science.  The three modern philosophies of science mentioned above set forth different philosophies of language, which influence their diverse concepts of all four of the functional topics including the aim of science.


4.02 Positivist Aim

Early positivists aimed to create explanations having an objective basis in observations and to make empirical generalizations summarizing the individual observations.  They rejected all theories as speculative and therefore, in their view, unscientific.

The positivists proposed a foundational agenda based on their naturalistic philosophy of language.  Early positivists such as Mach proposed that science should aim for firm objective foundations by relying exclusively on observation, and should seek empirical generalizations that summarize the individual observations.  They deemed theories to be at best temporary expedients and too speculative to be considered appropriate for science.  However, the early positivist Pierre Duhem (1861-1916) admitted that physical theories are integral to science, and he maintained that their function is to summarize laws as Mach said laws summarize observations.  Duhem nevertheless denied that theories have a realistic or phenomenalist semantics, thereby avoiding the neopositivists’ later problems with theoretical terms.

Later neopositivists aimed furthermore to justify explanatory theories by logically relating the theoretical terms in the theories to observation terms that they believed are a foundational reduction base.

After the acceptance of Einstein’s relativity theory by physicists, the later positivists also known as “neopositivists” acknowledged the essential rôle that hypothetical theory must have in the aim of science.  Between the twentieth-century World Wars, Carnap and his fellows in the Vienna Circle group of neopositivists attempted to justify the meaningfulness of theories in science by logically relating the so-called theoretical terms in the theories to the so-called observation terms that they believed should be the foundational logical-reduction base for science. 

Positivists alleged the existence of “observation terms”, which are terms that reference only observable entities or phenomena.  Observation terms are deemed to have simple, elementary and primitive semantics and to receive their semantics ostensively and passively in perception.  Positivists furthermore called the particularly quantified sentences containing only such terms “observation sentences”, if issued on the occasion of observing.  For example the sentence “That crow is black”, uttered while the speaker is viewing a present crow, is an observation sentence.

Many of these neopositivists were also called “logical positivists”, because they attempted to use the symbolic-logic expressions fabricated by Russell and Whitehead to accomplish the logical reduction of theory language to observation language.  The logical positivists fantasized that this Russellian symbolic logic can serve philosophy as mathematics serves physics, and it became their idée fixe.  For decades the symbolic logic ostentatiously littered the pages of the journals Philosophy of Science and the British Journal for the Philosophy of Science with its chicken tracks, and rendered their ostensibly “technical” papers fit for the bottom of a birdcage.

These neopositivists were self-deluded, because in fact the truth-functional logic cannot capture the hypothetical-conditional logic of empirical testing in science.  The truth-functional truth table defines the conditional statement expressing a theory as true whenever the antecedent statement is false, no matter whether the consequent is true or false.  But in the practice of science a false antecedent statement means that execution of a test did not comply with the definition of initial conditions in the test design, thus invalidating the test; it is therefore irrelevant to the truth-value of the conditional statement that is the tested theory.  Consequently the aim of these neopositivist philosophers was not relevant to the aim of practicing research scientists.
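
A minimal sketch in Python can make the disparity concrete.  It is an editorial illustration rather than any system described here, and the function names are invented:

```python
# Sketch: the truth-functional conditional versus the verdict structure of
# an empirical test.  Illustrative only; names are invented.

def material_conditional(antecedent: bool, consequent: bool) -> bool:
    """Truth-functional 'if A then C': false only when A is true and C is false."""
    return (not antecedent) or consequent

def test_verdict(initial_conditions_met: bool, prediction_correct: bool) -> str:
    """An unrealized antecedent invalidates the test; it renders no verdict."""
    if not initial_conditions_met:
        return "invalid test - no verdict on the theory"
    return "not falsified" if prediction_correct else "falsified"

# With a false antecedent the truth table calls the conditional true...
print(material_conditional(False, False))   # True
# ...but the practice of science renders no verdict at all.
print(test_verdict(False, False))           # invalid test - no verdict on the theory
```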

Today their era of pretext has passed.  The truth-functional logic is not seriously considered by post-positivist philosophers of science, much less by practicing research scientists.  Scientists do not use symbolic logic or seek any logical reduction for so-called theoretical terms.  The extinction of positivism was in no small part due to the disconnect between the positivists’ philosophical agenda and the actual practices and values of research scientists.

For more about positivism readers are referred to BOOKs II and III at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available through hyperlinks in the web site to Internet booksellers.


4.03 Romantic Aim

For the romantics the aim of the social sciences is to develop explanations describing social-psychological intersubjective motives, in order to explain observed social interaction in terms of purposeful “human action” in society.

The romantics have a subjectivist social-psychological reductionist aim for the social sciences, which is thus also a foundational agenda, and one still enforced by many social scientists.  Both romantic philosophers and romantic scientists accordingly maintain that the sciences of culture differ fundamentally in their aim from the sciences of nature.  In the pragmatist view this foundational agenda has made academic sociology a cultural backwater.

Some romantics call their type of explanation “interpretative understanding” and others call it “substantive reasoning”.  Using this concept of the aim of social science they often say that an explanation must be “convincing” or must “make substantive sense” to the social scientist due to the scientist’s introspection upon his actual or imaginary personal experiences, especially when he is a participating member of the same culture as the social members he is investigating.  Some romantics advocate “hermeneutics”, a concept often associated with literary interpretation, which purportedly discloses hidden meaning in a text by re-experiencing the intersubjective mental experience of the text’s author.

A paradigmatic example of a romantic social scientist is Talcott Parsons (1902-1979), an influential American sociologist who taught at Harvard University.  In his Structure of Social Action (1937) he advocated a variation on the philosophy of the sociologist Max Weber, in which the vicarious understanding that Weber called “verstehen” is a criterion for criticism that the romantics believe trumps empirical evidence.  Verstehen sociology is also known as “folk sociology” or “pop sociology”.  Enforcing this “social action” criterion has obstructed the evolution of sociology into a modern empirical science in the twentieth century.  Cultural anthropologists furthermore reject verstehen as a fallacy of ethnocentrism.

An example of an economist whose philosophy of science is paradigmatically romantic is Ludwig von Mises (1881-1973), an Austrian School economist.  In his Human Action: A Treatise on Economics (1949) Mises proposes a general theory of human action that he calls “praxeology”, which employs a “method of imaginary constructions” suggestive of Weber’s ideal types.  He finds praxeology exemplified in both economics and politics.  Mises maintains that praxeology is deductive and a priori like geometry, and is therefore unlike natural science.  Praxeological theorems cannot be falsified, because they are certain.  All that is needed for deduction of praxeology’s theorems is knowledge of the “essence” of human action, which is known introspectively.  In his view experience merely directs the investigator’s interest to problems.

The 1989 Nobel-laureate econometrician Trygve Haavelmo (1911-1999) in his “Probability Approach in Econometrics” in Econometrica (July supplement, 1944) supplies the agenda ostensibly used by most econometricians today, and it is a widely accepted example of romanticism.  Econometricians do not reject the aim of prediction, simulation, optimization and policy formulation using statistical econometric models; with their econometric modeling agenda they enable it.  But they subordinate the selection of explanatory variables in their models to factors that are derived from economists’ heroically imputed maximizing-rationality theses, which identify the motivating factors explaining the decisions of economic agents such as buyers and sellers in a market.  Thus they exclude econometrics from discovery and limit its function to testing romantic “theory”.  In his Philosophy of Social Science (1995) Alexander Rosenberg (b. 1946) describes the economists’ theory of “rational choice”, i.e., the use of the maximizing-rationality theses, as “folk psychology formalized”.

However the “theoretical” economist’s rationality postulates have been relegated to the status of a fatuous cliché, because in practice the econometrician almost never derives his equation specification deductively from the rationality postulates expressed as preference schedules.  Instead he will select variables to produce statistically acceptable models that produce accurate predictions regardless of the rationality postulates.  In fact in Haavelmo’s seminal paper he wrote that the economist may “jump over the middle link” of the preference schedules, although he rejected determining equation specifications by statistics alone.

For more about the romantics including Parsons, Weber, Haavelmo and others readers are referred to BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available through hyperlinks in the web site to Internet booksellers.


4.04 More Recent Ideas

Most of the twentieth-century philosophers’ proposals for the aim of science are less dogmatic than those listed above and arise from examination of important developmental episodes in the history of the natural sciences.  Some noteworthy examples:

Einstein: Reflection on his relativity theory influenced Albert Einstein’s concept of the aim of science, which he set forth as his “programmatic aim of all physics” stated in his “Reply to Criticisms” in Albert Einstein: Philosopher-Scientist (1949). The aim of science in Einstein’s view is a comprehension as complete as possible of the connections among sense impressions in their totality, and the accomplishment of this comprehension by the use of a minimum of primary concepts and relations.  Einstein certainly did not reject empiricism, but he included an explicit coherence agenda in his aim of science.  His thesis also implies a uniform ontology for physics, and he accordingly found statistical quantum theory to be “incomplete” according to his aim.  His is a minority view among physicists today.

Popper: Karl R. Popper was an early post-positivist philosopher of science and was also critical of the romantics.  Reflecting on Arthur Eddington’s (1882-1944) historic 1919 solar-eclipse test of Einstein’s relativity theory, Popper proposed in his Logic of Scientific Discovery (1934) that the aim of science is to produce tested and nonfalsified theories having greater universality and more information content than any predecessor theories addressing the same subject.  Unlike the positivists’ view, his concept of the aim of science thus focuses on the growth of scientific knowledge.  And in his Realism and the Aim of Science (1983) he maintains that realism explains the possibility of falsifying test outcomes in scientific criticism.  The title of his Logic of Scientific Discovery notwithstanding, Popper denies that discovery can be addressed by either logic or philosophy, but says instead that discovery is a proper subject for psychology.  Cognitive psychologists today would agree.

Hanson: Norwood Russell Hanson, reflecting on the development of quantum theory, states in his Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science (1958) and in Perception and Discovery: An Introduction to Scientific Inquiry (1969) that inquiry in research science is directed to the discovery of new patterns in data to develop new hypotheses for deductive explanation.  He calls such practices “research science”, which he opposes to “completed science” or “catalogue science”, the mere re-arranging of established ideas into more elegant formal axiomatic patterns.  He follows Peirce, who called hypothesis formation “abduction”.  Today mechanized discovery systems typically search for patterns in data.

Kuhn: Thomas S. Kuhn, reflecting on the development of the Copernican heliocentric cosmology in his The Copernican Revolution: Planetary Astronomy in the Development of Western Thought (1957), maintained in his popular Structure of Scientific Revolutions (1962) that the prevailing theory, which he called the “consensus paradigm”, has institutional status.  He proposed that small incremental changes extending the consensus paradigm, to which scientists seek to conform, define the institutionalized aim of science, which he called “normal science”.  And he said that scientists neither desire nor aim consciously to produce revolutionary new theories, which he called “extraordinary science”.  This concept of the aim of science is thus a conformist agenda; Kuhn therefore defined scientific revolutions as institutional changes in science, which he excluded from the institutionalized aim of science.

Feyerabend: Paul K. Feyerabend, reflecting on the development of quantum theory in his Against Method (1975), proposed that each scientist has his own aim, and that contrary to Kuhn anything institutional is a conformist impediment to the advancement of science.  He said that historically successful scientists always “break the rules”, and he ridiculed Popper’s view of the aim of science, calling it “ratiomania” and “law-and-order science”.  Therefore Feyerabend proposed that successful science is literally “anarchical”, and, borrowing a slogan from the Marxist Leon Trotsky, he advocated “revolution in permanence”.

For more about the philosophies of Popper, Kuhn, Hanson and Feyerabend readers are referred to BOOKs V, VI and VII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available in the web site through hyperlinks to Internet booksellers.


4.05 Aim of Maximizing “Explanatory Coherence”

Thagard: Computational philosopher of science Paul Thagard proposes that the aim of science is “best explanation”, by which he means the explanation that maximizes the explanatory coherence of one’s overall set of beliefs.  This aim of science is thus explicitly a coherence agenda.

Thagard developed a computerized cognitive system ECHO, an acronym for “Explanatory Coherence by Harmony Optimization”, in order to explore the operative criteria in theory choice.  His computer system described in his Conceptual Revolutions (1992) simulated the realization of the aim of maximizing “explanatory coherence” by replicating various episodes of theory choice in the history of science.  In his system “explanation” is an undefined primitive term.  He applied ECHO to replicate theory choices in several episodes in the history of science including (1) Lavoisier’s oxygen theory of combustion, (2) Darwin’s theory of the evolution of species, (3) Copernicus’ heliocentric astronomical theory of the planets, (4) Newton’s theory of gravitation, and (5) Hess’ geological theory of plate tectonics.  It is surprising that such revolutionary developments can be described as maximizing coherence with overall beliefs.

In reviewing his historical simulations Thagard reports that ECHO indicates that the criterion making the largest contribution historically to explanatory coherence in scientific revolutions is explanatory breadth – the preference for the theory that explains more evidence than its competitors.  But he adds that the simplicity and analogy criteria are also historically operative although less important.  He maintains that the aim of maximizing explanatory coherence with these three criteria yields the “best explanation”.
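
The spirit of ECHO’s constraint network can be suggested in a short hypothetical Python sketch; Thagard’s actual system differs in many details, and the weights, decay rate and “data priority” input here are illustrative assumptions only:

```python
# Hypothetical sketch in the spirit of ECHO: hypotheses and evidence are
# network units, explanatory links are excitatory, contradictions are
# inhibitory, and repeated activation updates approximate maximizing
# explanatory coherence.  All numbers are illustrative assumptions.

units = ["E1", "E2", "H1", "H2"]      # two evidence items, two rival hypotheses
weights = {}                           # symmetric link weights

def link(a, b, w):
    weights[(a, b)] = weights[(b, a)] = w

link("H1", "E1", 0.1)    # H1 explains E1 (excitatory)
link("H1", "E2", 0.1)    # H1 also explains E2: greater explanatory breadth
link("H2", "E1", 0.1)    # H2 explains only E1
link("H1", "H2", -0.2)   # rival hypotheses contradict (inhibitory)

activation = {u: 0.01 for u in units}
decay = 0.05

for _ in range(200):                   # settle the network
    new = {}
    for u in units:
        net = sum(w * activation[dst]
                  for (src, dst), w in weights.items() if src == u)
        if u.startswith("E"):
            net += 0.05                # data priority: evidence gets external support
        a = activation[u] * (1 - decay)
        a += net * (1 - a) if net > 0 else net * (a + 1)
        new[u] = max(-1.0, min(1.0, a))
    activation = new

accepted = sorted(u for u in units if activation[u] > 0)
print(accepted)   # ['E1', 'E2', 'H1']: explanatory breadth lets H1 defeat H2
```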

“Explanationism”, maximizing the explanatory coherence of one’s overall set of beliefs, is inherently conservative.  The ECHO system appears to document the historical fact that the coherence aim is psychologically satisfying and occasions strong, and for some scientists nearly compelling, motivation for accepting coherent theories, while theories describing reality as incoherent with established beliefs are psychologically disturbing and are often rejected when first proposed.  But progress in science does not consist in maximizing the scientist’s psychological contentment.  Empiricism eventually overrides coherence when new evidence produces a conflict.  In fact defending coherence has historically had a reactionary effect.  For example Heisenberg’s revolutionary indeterminacy relations, which contradict microphysical theories coherent with established classical physics including Einstein’s general relativity theory, do not conform to ECHO’s maximizing-explanatory-coherence criterion.

For more about the philosophy of Thagard readers are referred to BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available in the web site through hyperlinks to Internet booksellers.


4.06 Contemporary Pragmatist Aim

The successful outcome (and thus the aim) of basic-science research is explanations made by developing theories that satisfy critically empirical tests, which theories are thereby made scientific laws that can function in scientific explanations and test designs.

The principles of contemporary pragmatism including its philosophy of language have evolved through the twentieth century beginning with the autobiographical writings of Heisenberg, one of the central participants in the historic development of quantum theory.  This philosophy is summarized in Section 2.03 above in three central theses: (1) relativized semantics, (2) empirical underdetermination and (3) ontological relativity, which are not repeated here.

For more about the philosophy of Heisenberg readers are referred to BOOKs II and IV at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available in the web site through hyperlinks to Internet booksellers.

The institutionally regulated practices of research scientists may be described succinctly in the pragmatist statement of the aim of science.  The contemporary research scientist seeking success in his research may consciously employ this aim as what some social scientists call a “rationality postulate”.  The institutionalized aim of science can be expressed as such a pragmatist rationality postulate as follows:

The institutionalized aim of science is to construct explanations by developing theories that satisfy critically empirical tests, which theories are thereby made scientific laws that can function in scientific explanations and test designs.

Pragmatically, rationality is not some incorrigible principle or intuitive preconception.  The contemporary pragmatist statement of the aim of science is a postulate in the sense of an empirical hypothesis about what has been and will be responsible for the historical advancement of basic-research science.  Therefore like any hypothesis it is destined to be revised at some unforeseeable future time, when due to some future developmental episode in basic science research practices are revised in some fundamental way.  Then some conventional practices deemed rational today might be dismissed by philosophers and scientists as misconceptions and perhaps even superstitions, as are the romantic and positivist beliefs today.  The aim of science is more elaborately explained in terms of all four of the functional topics as sequential steps in the development of explanations.

The institutionalized aim can also be expressed so as not to impute motives to the successful scientist, whose personal psychological motives may be quite idiosyncratic and even irrelevant.  Thus the contemporary pragmatist statement of the aim of science may instead be phrased as follows in terms of a successful outcome instead of a conscious aim imputed to scientists:

The successful outcome of basic-science research is explanations made by developing theories that satisfy critically empirical tests, which theories are thereby made scientific laws that can function in scientific explanations and test designs.

The empirical criterion is the only criterion acknowledged by the contemporary pragmatist, because it is the only criterion that accounts for the advancement of science.  Historically there have been other criteria, but whenever there has been a conflict, it is eventually the demonstrably superior empirical adequacy, often exhibited in practicality, that has enabled a new theory to prevail.  This is true even if the superior theory’s ascendancy has taken many years or decades, or even if it has had to be rediscovered, such as the heliocentric theory of the ancient Greek astronomer Aristarchus of Samos in the third century BCE.


4.07 Institutional Change

Change within the institution of science is change made under the regulation of the institutionalized aim of science, and may consist of new theories, new test designs, new laws and/or new explanations.

Change of the institution of science, i.e., institutional change, on the other hand is the historical evolution of scientific practices involving revision of the aim of science, which may be due to revision of its criteria for criticism, its discovery practices, or its concept of explanation.

Institutional change in science must be distinguished from change within the institutional constraint.  Philosophy of science examines both changes within the institution of science and historical changes of the institution itself.  But institutional change is often recognized only retrospectively due to the distinctively historical uniqueness of each episode and also due to the need for eventual conventionality for new basic-research practices to become institutionalized.  The emergence of artificial intelligence in the sciences may exemplify an institutional change in progress today.

In the history of science institutionally deviant practices that yielded successful results are initially recognized and accepted by only a few scientists.  As Feyerabend emphasized in his Against Method, in the history of science successful scientists have often broken the prevailing methodological rules.  But the successful departures eventually become conventionalized.  And that is clearly true of the quantum theory.  By the time they are deemed acceptable to the peer-reviewed literature, reference manuals, encyclopedias, student textbooks, academic mediocrities and hacks, and desperate academic plagiarists, the institutional change is complete and has become the received conventional wisdom.

Successful researchers have often failed to understand the reasons for their unconventional successes, and have advanced or accepted erroneous methodological ideas and philosophies of science to explain their successes.  One of the most historically notorious such misunderstandings is Isaac Newton’s “hypotheses non fingo”, his denial that his law of gravitation is a hypothesis.  Nearly three centuries later Einstein demonstrated otherwise.

Eventually Newton’s physics occasioned an institutional change in physicists’ concept of explanation.  Newton’s contemporaries Gottfried Leibniz (1646-1716) and Christiaan Huygens (1629-1695) had criticized Newton’s gravitational theory for admitting action at a distance.  Both were convinced that all physical change must occur through direct physical contact like colliding billiard balls.  Leibniz therefore described Newton’s concept of gravity as an “occult quality” and called Newton’s theory unintelligible.  But eventually Newtonian mathematical physics became institutionalized and paradigmatic of explanation in physics.  For example by the later nineteenth century the physicist Hermann von Helmholtz (1821-1894) said that to understand a phenomenon in physics means to reduce it to Newtonian laws.

In his Concept of the Positron (1963) Hanson proposes three stages in the process of the evolution of a new concept of explanation; he calls them the black-box, the gray-box, and the glass-box stages.  In the initial black-box stage there is an algorithmic novelty, a new formalism, which is able to account for all the phenomena for which an existing formalism can account.  Scientists use this technique, but they then attempt to translate its results into the more familiar terms of the prevailing orthodoxy, in order to provide “understanding”.  In the second stage, the gray-box stage, the new formalism makes superior predictions in comparison to older alternatives, but it is still viewed as offering no “understanding”.  Nonetheless it is suspected of having some structure in common with the reality it predicts.  In the final glass-box stage the success of the new theory will have so permeated the operation and techniques of the body of the science that its structure will also appear as the proper pattern of scientific inquiry.

Einstein was never able to accept the Copenhagen statistical interpretation and a few physicists today still reject it. Writing in 1962 Hanson said that quantum theory is in the gray-box stage, because scientists have not yet ceased to distinguish between the theory’s structure and that of the phenomena themselves.  This is to say that they did not practice ontological relativity.  But since Aspect, Dalibard, and Roger’s findings from their 1982 nonlocality experiments demonstrated empirically the Copenhagen interpretation’s semantics and ontology, the quantum theory-based evolution of the concept of explanation in physics has become institutionalized.


4.08 Philosophy’s Cultural Lag

There exists a time lag between the evolution of the institution of science and developments in philosophy of science, since the latter depend on the realization of the former.  A quarter of a century passed between Heisenberg’s philosophical reflections on the language of his indeterminacy relations in quantum physics and the emergence of the contemporary pragmatist philosophy of science in academic philosophy.  Heisenberg is not just one of the twentieth century’s greatest physicists, but he is also one of its greatest philosophers of language.  But even today academic philosophers almost never reference his philosophical writings.  Philosophers tend to be more doctrinaire than scientists, who use the empirical criterion.  Institutional change in philosophy is thus glacial. 


4.09 Cultural Lags among Sciences

Not only are there cultural lags between the institutionalized practices of science and philosophy of science, there are also cultural lags among the several sciences. 

Philosophers of science have preferred to examine physics and astronomy, because historically these have been the most advanced sciences since the historic Scientific Revolution benchmarked with Copernicus and Newton.  Institutional changes occur with lengthy time lags due to such impediments as intellectual mediocrity, technical incompetence, risk aversion, or vested interests in the conventional ideas of the received wisdom.  As Planck grimly wrote in his Scientific Autobiography (1949), a new truth does not triumph by convincing its opponents, but rather succeeds because its opponents eventually die off; this view is often paraphrased as science progressing “funeral by funeral”.

The newer social and behavioral sciences have remained institutionally retarded.  Naïve sociologists and even economists today are blithely complacent in their amateurish philosophizing about basic social-science research, often adopting prescriptions and proscriptions that contemporary philosophers of science recognize as anachronistic and counterproductive.  The result has been the emergence and survival of philosophical superstitions in these retarded social sciences, especially to the extent that they have looked to their own less successful histories to formulate their ersatz philosophies of science.

Currently most sociologists and economists still enforce a romantic philosophy of science, because they believe that sociocultural sciences must have fundamentally different philosophies of science than the natural sciences.  Similarly behaviorist psychologists continue to impose the anachronistic positivist philosophy of science.  In the view of the contemporary pragmatist philosophers these sciences are institutionally retarded, because they erroneously impose preconceived semantical and ontological commitments as criteria for scientific criticism.  Institutional retardation is revealed by the publishing decisions by the journal editors and their chosen referees, who with respect to contemporary modeling and artificial intelligence techniques are still a rear guard protecting the interests of sociology’s incompetents.  Hickey’s detailed exposé of the backwater institutional condition of American academic sociology is set forth in “Appendix II” and “Appendix III” to BOOK VIII at the free web site www.philsci.com and in the e-book Twentieth-Century Philosophy of Science: A History.

Pragmatists can agree with Popper, who in his critique of Kuhn in “Normal Science and its Dangers” in Criticism and the Growth of Knowledge (1970) said that science is “subjectless”, meaning that valid science is not defined by any particular semantics or ontology.  Pragmatists tolerate any semantics or ontology that romantics or positivists may include in their scientific explanations, theories and laws, but pragmatists recognize only the empirical criterion for criticism.


4.10 Scientific Discovery

“Discovery” refers to the development of new and empirically superior theories.

Much has already been said in the above discussions of philosophy of scientific language in chapter 3 about the pragmatic basis for the definition of theory language, about the semantic basis for the individuation of theories, and about state descriptions.  Those discussions will be assumed in the following comments about the mechanized development of new theories.

Discovery is the first step toward realizing the aim of science.  The problem of scientific discovery for contemporary pragmatist philosophers of science is to proceduralize and then to mechanize the development of universally quantified statements for empirical testing with nonfalsifying test outcomes, thereby making laws for use in explanations and test designs.  Contemporary pragmatism is consistent with the use of computerized discovery systems.


4.11 Discovery Systems

A mechanized discovery system produces a transition from an input-language state description containing currently available language to an output-language state description containing generated and tested new theories.

In the “Introduction” to his Models of Discovery (1977) Simon, one of the founders of artificial intelligence, wrote that dense mists of romanticism and downright know-nothingism have always surrounded the subject of scientific discovery and creativity.  The most significant development addressing the problem of scientific discovery has therefore been the relatively recent mechanized discovery systems in a new specialty called “computational philosophy of science”.

The ultimate aim of the computational philosopher of science is to facilitate the advancement of contemporary sciences by participating in and contributing to the successful basic-research work of the scientist.  The contemporary pragmatist philosophy of science thus carries forward the classical pragmatist John Dewey’s emphasis on participation.  Unfortunately few academic philosophers have the requisite computer skills, much less the needed working knowledge of an empirical science, for participation in basic research.  Hopefully that will change in future Ph.D. dissertations in philosophy of science, which are very likely to be interdisciplinary endeavors.

Every useful discovery system to date has contained procedures both for constructional theory creation and for critical theory evaluation for quality control of the generated output and for quantity control of the system’s otherwise unmanageably large output.  Theory creation introduces new language into the current state description to produce a new state description, while falsification in empirical tests eliminates language from the current state description to produce a new state description. Thus both theory development and theory testing enable a discovery system to offer a specific and productive diachronic dynamic procedure for linguistic change to advance empirical science.

The discovery systems do not merely implement an inductivist strategy of searching for repetitions of individual instances, notwithstanding that statistical inference is employed in some system designs.  The system designs are mechanized procedural strategies that search for patterns in the input information.  Thus they implement Hanson’s thesis in Patterns of Discovery that in a growing research discipline inquiry seeks the discovery of new patterns in data.  They also implement Feyerabend’s “plea for hedonism” in Criticism and the Growth of Knowledge (1971) to produce a proliferation of theories.  But while many are made by these systems, mercifully few are chosen thanks to the empirical testing routines in the systems to control for both quality and quantity of the outputted equations.
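
The generate-and-test pattern common to these systems can be suggested in a small runnable sketch; the coefficient grid, tolerance and toy data are illustrative assumptions rather than any particular system’s design:

```python
# Runnable sketch of the generate-and-test pattern: candidates are produced
# combinatorially, then empirical screening prunes the otherwise
# unmanageably large output.  All specifics are illustrative assumptions.

data = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9)]          # (x, y) observations

def candidates():
    """Generate: propose y = a*x + b over a small coefficient grid."""
    for a10 in range(-50, 51):
        for b10 in range(-50, 51):
            yield a10 / 10.0, b10 / 10.0

def passes_test(a, b, tol=0.15):
    """Test: reject any candidate whose prediction errs beyond tolerance."""
    return all(abs(a * x + b - y) <= tol for x, y in data)

surviving = [(a, b) for a, b in candidates() if passes_test(a, b)]
print(surviving)   # many are generated, few are chosen: clustered near y = 1.9x + 1.2
```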


4.12 Types of Theory Development

In his Introduction to Metascience Hickey distinguishes three types of theory development, which he calls theory extension, theory elaboration and theory revision.  This classification is vague and may be overlapping in some cases, but it suggests three alternative types of discovery strategies and therefore implies different discovery-system designs.

Theory extension is the use of a currently tested and nonfalsified explanation to address a new scientific problem.

The extension could be as simple as adding hypothetical statements to make a general explanation more specific for a new problem at hand.  Analogy is a special case of theory extension.  In his Computational Philosophy of Science (1988) Thagard describes this strategy for mechanized theory development, which consists in the patterning of a proposed solution to a new problem by analogy with a successful explanation originally developed for a different subject.  Using a system design based on this strategy, his discovery system called PI (an acronym for “Processes of Induction”) produced a rational reconstruction of the theory of sound waves by analogy with the description of water waves.  The system was developed for his Ph.D. dissertation in philosophy of science at the University of Toronto, Canada.

In his Mental Leaps: Analogy in Creative Thought (1995) Thagard further explains that analogy is a kind of nondeductive logic, which he calls “analogic”.  It firstly involves the “source analogue”, which is the known domain that the investigator already understands in terms of familiar patterns, and secondly involves the “target analogue”, which is the unfamiliar domain that the investigator is trying to explain.  Analogic is the strategy whereby the investigator understands the targeted domain by seeing it in terms of the source domain.  Analogic requires a “mental leap”, because the two analogues may initially seem unrelated; and it is a “leap”, because analogic, unlike deduction, is not conclusive.

It may be noted that if the output state description generated by analogy, as by the PI system, is radically different from anything previously seen by the affected scientific profession containing the target analogue, then the members of that profession may experience the communication constraint to the high degree that is usually associated with a theory revision.  The communication constraint is discussed below in Section 4.26.

Theory elaboration is the correction of a currently falsified theory to create a new theory by adding new factors or variables that correct the falsified universally quantified statements and erroneous predictions of the old theory. 

The new theory has the same test design as the old theory.  The correction is not merely an ad hoc exclusion of individual exceptional cases, but rather is a change in the universally quantified statements.  This process is often misrepresented as “saving” a falsified theory, but in fact it creates a new one.

For example the introduction of a variable for the volume quantity and the development of a constant coefficient for the particular gas could elaborate the law of Gay-Lussac (1778-1850) for gases into the combined gas law, which unites Gay-Lussac’s law, Boyle’s law and Charles’s law.  Similarly Friedman’s macroeconomic quantity theory might be elaborated into a Keynesian hyperbolic liquidity-preference function by the introduction of an interest rate, both to account for the cyclicality manifest in an annual time series describing the calculated velocity parameter and to display the liquidity-trap phenomenon, which actually occurred both in the Great Depression (1929-1933) and in the recent Great Recession (2007-2009).
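
In standard textbook notation (not the author’s) the gas-law elaboration may be sketched as follows:

```latex
% Gay-Lussac's law at fixed volume V (pressure proportional to temperature):
\frac{P}{T} = k_V
% Elaboration introduces the volume variable and a gas-specific constant
% (nR for an ideal gas), yielding the combined gas law:
\frac{PV}{T} = nR
\qquad\Longrightarrow\qquad
\frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2}
% Boyle's law (T constant), Charles's law (P constant) and Gay-Lussac's
% law (V constant) are recovered as special cases.
```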

Pat Langley’s BACON discovery system exemplifies mechanized theory elaboration.  It is named after the English philosopher Francis Bacon (1561-1626) who thought that scientific discovery can be routinized.  BACON is a set of successive and increasingly sophisticated discovery systems that make quantitative laws and theories from input measurements.  Langley designed and implemented BACON in 1979 as the thesis for his Ph.D. dissertation written in the Carnegie-Mellon department of psychology under the direction of Simon.  A description of the system is given in Simon’s Scientific Discovery: Computational Explorations of the Creative Processes (1987).

BACON uses Simon’s heuristic-search design strategy, which may be construed as a sequential application of theory elaboration.  Given sets of observation measurements for several variables, BACON searches for functional relations among the variables.  BACON has produced a rational reconstruction that simulated the discovery of several historically significant empirical laws including Boyle’s law of gases, Kepler’s third planetary law, Galileo’s law of motion of objects on inclined planes, and Ohm’s law of electrical current.
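
A toy rendition in Python of this heuristic search, following the published accounts of the Kepler example though greatly simplified, illustrates the strategy; the tolerance is an illustrative assumption:

```python
# Toy sketch of BACON-style heuristic search: when two terms rise together,
# form their ratio; when one rises as the other falls, form their product;
# stop when a derived term is nearly constant.  Greatly simplified.

D = [0.387, 0.723, 1.000, 1.524, 5.203]   # planetary distances (AU)
T = [0.241, 0.615, 1.000, 1.881, 11.862]  # orbital periods (years)

def nearly_constant(vals, tol=0.02):
    mean = sum(vals) / len(vals)
    return (max(vals) - min(vals)) / mean < tol

dt = [d / t for d, t in zip(D, T)]        # D and T rise together -> ratio D/T
d2t = [r * d for r, d in zip(dt, D)]      # D/T falls as D rises -> product D^2/T
d3t2 = [r * p for r, p in zip(dt, d2t)]   # D/T falls as D^2/T rises -> D^3/T^2

for name, vals in [("D/T", dt), ("D^2/T", d2t), ("D^3/T^2", d3t2)]:
    if nearly_constant(vals):
        print(name, "is invariant")       # prints for D^3/T^2: Kepler's third law
```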

Theory revision is the reorganization of currently available information to create a new theory.

The results of theory revision may be radically different from any current theory, and may thus be said to occasion a “paradigm change”.  It might be undertaken after repeated attempts at both theory extension and theory elaboration have failed.  The source for the input state description for mechanized theory revision presumably consists of the descriptive vocabulary from the currently untested theories addressing the problem defined by a test design.  The descriptive vocabulary from previously falsified theories may also be included as inputs to make an accumulative state description, because the vocabularies in rejected theories can be productively cannibalized for their scrap value.  In fact even terms and variables from tested and nonfalsified theories could also be included, just to see what new proposals come out; empirical underdetermination permits scientific pluralism, and reality is full of surprises.  Hickey notes that a mechanized discovery system’s newly outputted theory is most likely to be called revolutionary when the revision is great, because theory revision typically produces greater change to the current language state than does theory extension or theory elaboration, and the transition thus produces psychologically disorienting semantical dissolution.

Theory revision, the reorganization of currently existing information to create a new theory, is evident in the history of science.  The central thesis of historian of science Herbert Butterfield’s (1900-1979) Origins of Modern Science: 1300-1800 (1958, P. 1) is that the type of transition known as a “scientific revolution” was not brought about by new observations or additional evidence, but rather by transpositions in the minds of the scientists.  Specifically he maintains that the type of mental activity that produced the historic scientific revolutions is the “art of placing a known bundle of data in a new system of relations”.

Hickey found this same “art” in the history of economics.  The 1980 Nobel-laureate econometrician Lawrence Klein wrote in his Keynesian Revolution (1949, Pp. 13 & 124) that all the important parts of Keynes’ theory can be found in the works of one or another of his predecessors.  In other words Keynes put a known bundle of information into a new system of relations, such as his aggregate consumption function and his money-demand function with its speculative-demand component and the liquidity trap.  Thus Hickey’s theory-revising METAMODEL discovery system is applicable to the development of Keynes’ general theory.

In 1972 Hickey’s METAMODEL discovery system produced a rational reconstruction of the discovery of the Keynesian macroeconomic theory from U.S. statistical data available prior to 1936, the publication year of Keynes’ revolutionary General Theory of Employment, Interest and Money.  Hickey’s METAMODEL discovery system, described in his Introduction to Metascience (1976), is a mechanized generative grammar with combinatorial transition rules for producing longitudinal econometric models.  His mechanized grammar is a combinatorial finite-state generative grammar that satisfies the collinearity restraint for the regression-estimated equations and the formal requirements for executable multi-equation predictive models.  The system also tests for collinearity, statistical significance (Student t-statistic), serial correlation, goodness-of-fit and accurate out-of-sample retrodictions.
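
The general idea of combinatorial equation generation with statistical screening can be suggested in the following Python sketch.  It is an editorial illustration only, since the METAMODEL system itself is far more elaborate, producing multi-equation models and applying serial-correlation and out-of-sample tests; the data, thresholds and variable counts here are invented:

```python
# Sketch: combinatorially generate candidate regression equations, then
# screen them by t-statistics and goodness-of-fit.  Illustrative only.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n = 40
X = rng.normal(size=(n, 4))                      # candidate explanatory variables
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

def fit(cols):
    """OLS fit of y on the chosen columns plus an intercept."""
    A = np.column_stack([np.ones(n), X[:, cols]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = np.sum(resid**2) / (n - A.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))
    r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
    return beta / se, r2                         # t-statistics and R-squared

accepted = []
for k in (1, 2, 3):
    for cols in combinations(range(4), k):       # combinatorial generation
        t_stats, r2 = fit(list(cols))
        # screening: every regressor significant and a good overall fit
        if all(abs(t) > 2.0 for t in t_stats[1:]) and r2 > 0.9:
            accepted.append((cols, round(r2, 3)))
print(accepted)                                  # e.g. [((0, 2), 0.98)]
```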

He also used his METAMODEL system in 1976 to develop a post-classical macrosociometric functionalist model of the American national society with fifty years of historical time-series data.  The generated sociological model disclosed an intergenerational negative feedback that sociologists would call a “macrosocial integrative mechanism”, in which an increase in social disorder indicated by a rising homicide rate calls forth a delayed intergenerational stabilizing reaction by the socializing institution indicated by the high-school completion rate, which restores order by reinforcing compliance with criminal law.  Distinctively macrosocial outcomes are not disclosed by multiplying social-psychological behaviors n times.  But to the shock, chagrin and dismay of complacent academic sociologists, the model is not a social-psychological theory, and it panicked the editors of four peer-reviewed sociological journals.  They therefore rejected the paper that describes the model and its findings about the American national society’s dynamics and stability characteristics.  The paper is reprinted as “Appendix I” to BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History.

This macrosociometric model was not just a simulation of a past accomplishment.  It is an example of contemporary AI-developed theory revision, an excursion into new territory that is unfamiliar to the academic establishment of sociologists.  Consequently Hickey incurred the opposition often encountered in such excursions.  Hickey calls romantic sociology with its social-psychological reductionism “classical”, because his macrosociological quantitative functionalist theory supersedes the prevailing social-psychological reductionism and manifests a basic discontinuity in sociological thought, as evidenced by the criticisms by the orthodox journal referees.  He uses the term “classical” with the same meaning that sociologist Donald Black used in his address to the American Sociological Association in 1998, which was also reported in his “The Purification of Sociology” article in Contemporary Sociology.  Black proposed a scientific revolution in sociology in the manner described by Thomas Kuhn, and noted that sociology has never had such a revolution in its short history.  Black says that “purifying” sociology of its “classical” tradition is a necessary condition for its needed revolutionary advance.  He expects that this new purified sociology will differ so fundamentally from the prevailing classical sociology that most sociologists will undoubtedly resist it for the rest of their days, declaring it “incomplete, incompetent and impossible”.  He adds that classical sociology is all that sociologists have ever known, and that sociologists “worship dead gods of the past” while viewing disrespect as heresy.

The four peer-reviewed sociological journals that rejected Hickey’s paper were Sociological Methods and Research, the American Journal of Sociology, the American Sociological Review, and Social Indicators Research with an editor who actually refused even to disclose any criticisms of the paper.  Editors give their patronage what they want, and the decisions by these panicked editors represent an institutional failure of academic sociology.  The referee criticisms showed that the provincial academic sociologists’ a priori ontological commitments to romanticism and to social-psychological reductionism rendered the editors and their chosen referees invincibly obdurate, and also exhibited their Luddite mentality toward mechanized theory development. 

The referee criticisms and Hickey’s rejoinders are given in “Appendix II” to BOOK VIII at the free web site www.philsci.com and in the e-book Twentieth-Century Philosophy of Science: A History.  Hickey’s critique of the scientific status of academic sociology is also given in “Appendix III” to BOOK VIII at the free web site and in the history e-book.

Simon called the combinatorial system design a “generate-and-test” design.  In the 1980’s he had written that combinatorial procedures consume excessive computational resources for present-day electronic computers.  Hickey’s models were nevertheless small enough to run on an IBM RS/6000 computer.  Gordon E. Moore formulated a famous law appropriately called Moore’s Law, which states that the number of transistors that can be placed on a CPU chip, and thus the computing power, doubles approximately every two years.  Furthermore developments in quantum computing promise to overcome computational constraints, where such constraints are currently encountered.  The increase in throughput enabled by the quantum computer is extraordinary relative to the conventional electronic computer – even the supercomputer.  And the availability of practical quantum computing seems only a matter of time.  Google’s Research Lab in Santa Barbara, CA, recently announced in the scientific journal Nature that its computer scientists have achieved “quantum supremacy”.  The New York Times (24 October 2019) quoted Dr. John Martinis, project leader for Google’s “quantum supremacy experiment”, as saying that his group is now at the stage of trying to make use of this enhanced computing power.

In the mid-1980’s Hickey integrated his macrosociometric model into a Keynesian macroeconometric model to produce an institutionalist macroeconometric model, while he was Deputy Director and Senior Economist for the Indiana Department of Commerce.  The report of the findings was read to the Indiana Legislative Assembly by the Speaker of the House in support of Governor Orr’s “A-plus” successful legislative initiative for increased State-government spending for K-12 primary and secondary public education.


4.13 Examples of Successful Discovery Systems

There are several examples of successful discovery systems in use.  John Sonquist developed his AID system for his Ph.D. dissertation in sociology at the University of Chicago.  His dissertation was written in 1961, before Edward O. Laumann and the romantics, who would likely have rejected it, had taken over the University of Chicago sociology department.  He described the system in his Multivariate Model Building: Validation of a Search Strategy (1970).  The system has long been used at the Survey Research Center, Institute for Social Research, University of Michigan, Ann Arbor, MI.  Now modified as the CHAID system using the chi-squared (χ²) statistic, Sonquist’s discovery system is widely available commercially in both the SAS and SPSS software packages.  Its principal commercial application has been for list-processing scoring models for commercial market analysis and for creating credit-risk scores, as well as for academic investigations in social science.  It is not only the oldest mechanized discovery system, but is also the most widely used in practical applications to date.
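
The chi-squared split selection at the heart of AID/CHAID can be suggested in a few lines of Python; real CHAID also merges categories, applies Bonferroni adjustments and recursively partitions, and the data here are invented:

```python
# Sketch: score one candidate predictor's split by a chi-squared test of
# independence between predictor categories and a binary outcome.
from collections import Counter
from scipy.stats import chi2_contingency

# invented records: (predictor value, binary outcome, e.g. credit default)
records = [("renter", 1), ("renter", 1), ("renter", 0), ("renter", 1),
           ("owner", 0), ("owner", 0), ("owner", 1), ("owner", 0)]

def chi2_for_split(records):
    """Build the contingency table for one predictor and score the split."""
    counts = Counter(records)                     # (category, outcome) -> n
    cats = sorted({c for c, _ in counts})
    table = [[counts[(c, 0)], counts[(c, 1)]] for c in cats]
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

chi2, p = chi2_for_split(records)
print(f"chi2={chi2:.2f}, p={p:.3f}")   # lowest p across predictors wins the split
```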

Robert Litterman developed his BVAR (Bayesian Vector Autoregression) system for his Ph.D. dissertation in economics at the University of Minnesota.  He described the system in his Techniques for Forecasting Using Vector Autoregressions (1984).  The economists at the Federal Reserve Bank of Minneapolis have used his system for macroeconomic and regional economic analysis.  The State of Connecticut and the State of Indiana have also used it for regional economic analysis.
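
The core BVAR idea, shrinking vector-autoregression coefficients toward a random-walk prior (the “Minnesota prior”), can be sketched as follows; Litterman’s actual system uses richer lag- and variable-specific prior variances, and the tightness value and simulated data here are illustrative assumptions:

```python
# Sketch: a one-lag Bayesian VAR whose coefficients are shrunk toward a
# random-walk prior.  Illustrative only; real BVAR priors are more refined.
import numpy as np

rng = np.random.default_rng(1)
T, k = 120, 2                                     # observations, variables
Y = np.cumsum(rng.normal(size=(T, k)), axis=0)    # simulated random-walk series

X = np.column_stack([np.ones(T - 1), Y[:-1]])     # intercept + one lag
y = Y[1:]

lam = 0.2                                         # prior tightness (assumed)
prior_mean = np.vstack([np.zeros((1, k)),         # intercepts shrink to zero
                        np.eye(k)])               # own lag shrinks to 1: random walk

# Posterior mean under a conjugate normal prior with precision (1/lam^2) I:
precision = X.T @ X + (1 / lam**2) * np.eye(X.shape[1])
B = np.linalg.solve(precision, X.T @ y + (1 / lam**2) * prior_mean)

forecast = np.concatenate([[1.0], Y[-1]]) @ B     # one-step-ahead forecast
print(forecast)
```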

Having previously received an M.A. degree in economics, Hickey had intended to develop his METAMODEL artificial-intelligence discovery system for a Ph.D. dissertation in philosophy of science at the University of Notre Dame, South Bend, IN.  The Reverend Chairman opened by denying that he wanted to play God, then questioned Hickey’s seriousness, accused him of having a bad attitude, threatened that if he persisted with his ideas he could never succeed with their faculty, and issued his ultimatum: get reformed or get out.  Notre Dame is a Roman Catholic school, but Hickey was no recanting Galileo; he got out.  Notre Dame will always be better at football than philosophy.

Hickey then enrolled as a nondegree student at San Jose City College in San Jose, CA, a two-year associate-arts degree community college.  There using an IBM 370 mainframe computer he studied FORTRAN and then developed his computerized METAMODEL discovery system.   For the next thirty years he used his discovery system occupationally, working as a research econometrician in both business and government.  He also used it successfully for econometric market analysis and for risk analysis for various business corporations including USX/United States Steel Corporation, BAT(UK)/Brown and Williamson Company, Pepsi/Quaker Oats Company, Altria/Kraft Foods Company, Allstate Insurance Company, and TransUnion LLC.  In 2004 TransUnion’s Analytical Services Group purchased a perpetual license to use his METAMODEL system for their consumer credit risk analyses using their proprietary TrenData aggregated quarterly time series extracted from their national database of consumer credit files.  Hickey used the models generated by his discovery system to forecast payment delinquency rates, bankruptcy filings, average balances and other consumer borrower characteristics that indicate risk exposure for lenders.  And he also used his system for Quaker Oats, Kraft Foods and Brown & Williamson Co. to discover the sociological and demographic factors responsible for the secular long-term market dynamics of their processed food products and other nondurable consumer goods.

In 2007 Michael Schmidt, a Ph.D. student in computational biology at Cornell University, and his dissertation director, Hod Lipson, developed their EUREQA system at Cornell University’s Artificial Intelligence Lab.  The system automatically develops predictive analytical models from data using a strategy they call an “evolutionary search” to find invariant relationships, which converges on the simplest and most accurate equations fitting the inputted data.  They report that the system has been used by many business corporations, universities and government agencies including Alcoa, California Institute of Technology, Cargill, Corning, Dow Chemical, General Electric, Amazon, Shell and NASA.
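
An evolutionary search of this general kind can be suggested in a toy Python sketch; EUREQA itself is far more sophisticated, and the operators, mutation scheme and complexity penalty here are illustrative assumptions:

```python
# Toy sketch of evolutionary symbolic regression: evolve expression trees,
# score by squared error plus a complexity penalty, keep the fittest.
import random

random.seed(0)
data = [(x / 10.0, (x / 10.0) ** 2 + 1.0) for x in range(-20, 21)]  # target: y = x*x + 1

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_expr(depth=3):
    if depth == 0 or random.random() < 0.3:          # leaf: variable or constant
        return random.choice(["x", round(random.uniform(-2, 2), 1)])
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(e, x):
    if e == "x": return x
    if isinstance(e, float): return e
    op, a, b = e
    return OPS[op](evaluate(a, x), evaluate(b, x))

def size(e):
    return 1 if not isinstance(e, tuple) else 1 + size(e[1]) + size(e[2])

def fitness(e):        # accuracy traded against simplicity
    return sum((evaluate(e, x) - y) ** 2 for x, y in data) + 0.05 * size(e)

def mutate(e):         # replace a random subtree with a fresh random one
    if not isinstance(e, tuple) or random.random() < 0.3:
        return random_expr(2)
    op, a, b = e
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

population = [random_expr() for _ in range(200)]
for _ in range(100):
    population.sort(key=fitness)
    survivors = population[:50]                      # elitist selection
    population = (survivors
                  + [mutate(random.choice(survivors)) for _ in range(120)]
                  + [random_expr() for _ in range(30)])   # immigrants keep diversity
best = min(population, key=fitness)
print(best, round(fitness(best), 3))   # tends toward an expression equivalent to x*x + 1
```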

For more about discovery systems and computational philosophy of science readers are referred to BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available in the web site through hyperlinks to Internet booksellers.


4.14 Scientific Criticism

Criticism pertains to the criteria for the acceptance or rejection of theories.  The only criterion for scientific criticism that is acknowledged by the contemporary pragmatist is the empirical criterion.

The philosophical literature on scientific criticism has little to say about the specifics of experimental design, as might be found in various college-level science laboratory manuals.  Most often philosophical discussion of criticism pertains to the criteria for acceptance or rejection of theories and more recently to the effective decidability of empirical testing that has been called into question due to the wholistic semantical thesis. 

In earlier times when the natural sciences were called “natural philosophy” and social sciences were called “moral philosophy”, nonempirical considerations operated as criteria for the criticism and acceptance of descriptive narratives.  Even today some philosophers and scientists have used their semantical and ontological preconceptions as criteria for the criticism of theories including preconceptions about causality or specific causal factors.  Such semantical and ontological preconceptions have misled them to reject new empirically superior theories.  In his Against Method Feyerabend noted that the ontological preconceptions used to criticize new theories have often been the semantical and ontological claims expressed by previously accepted and since falsified theories. 

What historically has separated the empirical sciences from their origins in natural and moral philosophy is the empirical criterion.  This criterion is responsible for the advancement of science and for its enabling practicality in application.  Whenever in the history of science there has been a conflict between the empirical criterion and any nonempirical criteria for the evaluation of new theories, it is eventually the empirical criterion that decides theory selection.

Contemporary pragmatists accept relativized semantics, scientific realism, and ontological relativity, and they therefore reject all prior semantical or ontological criteria for scientific criticism including the romantics’ mentalistic ontology requiring social-psychological or any other kind of reductionism.

 
