INTRODUCTION TO PHILOSOPHY OF SCIENCE


Chapter 4. Functional Topics

The preceding chapters have offered generic sketches of the principal twentieth-century philosophies of science, namely romanticism, positivism and neopragmatism.  And they have discussed selected elements of the contemporary realistic neopragmatist philosophy of language for science, namely the object language and metalanguage perspectives, the synchronic and diachronic views, and the syntactical, semantical, ontological and pragmatic dimensions. 

Finally, at the expense of some repetition, this chapter integrates the philosophy of language into the four sequential functional topics, namely (1) the institutionalized aim of basic science, (2) scientific discovery, (3) scientific criticism, and (4) scientific explanation.


4.01 Institutionalized Aim of Science

The institutionalized aim of science is the cultural value system that regulates the scientist’s performance of basic research.

Over approximately the last three hundred years empirical science has evolved into an institution with its own distinctive and autonomous professional subculture of shared naturalistic views and values.

Idiosyncratic motivations of individual scientists are historically noteworthy, but are largely of anecdotal interest for philosophers of science, except when such idiosyncrasies have episodically produced results that initiated institutional change.

The literature of philosophy of science offers various proposals for the aim of science.  The three modern philosophies of science mentioned above set forth different philosophies of language, which influence their diverse concepts of all four of the functional topics including the aim of science.


4.02 Positivist Aim

Early positivists aimed to create explanations having an objective basis in observations and to make empirical generalizations summarizing the individual observations.  They rejected all theories as speculative and therefore as unscientific.

The positivists proposed a foundational agenda based on their naturalistic philosophy of language.  Early positivists such as Mach proposed that science should aim for firm objective foundations by relying exclusively on observation, and should seek only empirical generalizations that summarize the individual observations.  They deemed theories to be at best temporary expedients and too speculative to be appropriate for science.  However, the early positivist Pierre Duhem (1861-1916) admitted that mathematical physical theories are integral to science, and he maintained that their function is to summarize laws, just as Mach said laws summarize observations.  But Duhem denied that theories have either a realistic or a phenomenalist semantics, thus avoiding the later neopositivists’ problems with theoretical terms.

Later neopositivists aimed furthermore to justify explanatory theories by logically relating the theoretical terms in the theories to observation terms that they believed are a foundational reduction base.

After the acceptance of Einstein’s relativity theory by physicists, the later positivists also known as “neopositivists” acknowledged the essential rôle that hypothetical theory must have in the aim of science.  Between the twentieth-century World Wars, Carnap and his fellows in the Vienna Circle group of neopositivists attempted to justify the semantics of theories in science by logically relating the so-called theoretical terms in the theories to the so-called observation terms they believed should be the foundational logical-reduction base for science. 

Positivists alleged the existence of “observation terms”, which are terms that reference only observable entities or phenomena.  Observation terms are deemed to have simple, elementary and primitive semantics and to receive their semantics ostensively and passively in perception.  Positivists furthermore called the particularly quantified sentences containing only such terms “observation sentences”, if issued on the occasion of observing.  For example the sentence “That crow is black”, uttered while the speaker is viewing a present crow, is an observation sentence.

Many of these neopositivists were also called “logical positivists”, because they attempted to use the symbolic-logic notation fabricated by Russell and Whitehead to accomplish the logical reduction of theory language to observation language.  The logical positivists fantasized that this Russellian symbolic logic can serve philosophy as mathematics serves physics, and it became their idée fixe.  For decades these chicken tracks ostentatiously littered the pages of Philosophy of Science and the British Journal for the Philosophy of Science, rendering their ostensibly “technical” papers fit for the bottom of a birdcage.

These neopositivists were self-deluded, because in fact a truth-functional logic cannot capture the hypothetical-conditional logic of empirical testing in science.  For example the truth-functional truth table says that if the conditional statement’s antecedent is false, then the conditional statement expressing the theory is defined as true no matter whether the consequent is true or false.  But in the practice of science a false antecedent statement means that execution of a test did not comply with the definition of initial conditions in the test design, thus invalidating the test; it is therefore irrelevant to the truth-value of the conditional statement that is the tested theory.  Consequently the aim of these neopositivist philosophers was not relevant to the aim of practicing research scientists or to contemporary realistic neopragmatist philosophy of science.  The truth-functional logic is not seriously considered by post-positivist philosophers of science, much less by practicing research scientists, and scientists do not use symbolic logic or seek any logical reduction for so-called theoretical terms.  The extinction of positivism was in no small part due to the disconnect between the positivists’ philosophical agenda and the actual practices and values of research scientists.
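
The contrast can be sketched in a few lines of Python.  This is only an illustration of the two readings of a false antecedent; the function names are invented for the example:

    # The truth-functional ("material") conditional is false only when the
    # antecedent is true and the consequent is false.
    def material_conditional(antecedent, consequent):
        return (not antecedent) or consequent

    for a in (True, False):
        for c in (True, False):
            print(a, c, material_conditional(a, c))
    # Both rows with a false antecedent print True: the formalism counts
    # the conditional as true no matter what the consequent says.

    # In empirical-test practice a false antecedent instead means that the
    # test's initial conditions were not realized, so the trial is invalid
    # and says nothing about the tested theory.
    def test_verdict(initial_conditions_realized, prediction_borne_out):
        if not initial_conditions_realized:
            return None  # invalid execution: no verdict on the theory
        return "not falsified" if prediction_borne_out else "falsified"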

For more about positivism readers are referred to BOOK II and BOOK III at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.


4.03 Romantic Aim

On the romantic view the aim of the social sciences is to develop explanations describing social-psychological intersubjective motives, in order to explain observed social interaction in terms of purposeful “human action” in society.

The romantics have a subjectivist social-psychological reductionist aim for the social sciences, which is thus also a foundational agenda.  This thesis of the aim of the social sciences is still enforced by many social scientists.  Thus both romantic philosophers and romantic social scientists maintain that the sciences of culture differ in their aim from the sciences of nature.

Some romantics call their type of explanation “interpretative understanding” and others call it “substantive reasoning”.  Using this concept of the aim of social science they often say that an explanation must be “convincing” or must “make substantive sense” to the social scientist due to the scientist’s introspection upon his actual or imaginary personal experiences, especially when he is a participating member of the same culture as the social members he is investigating.  Some romantics advocate “hermeneutics”, which originated with the theologian Friedrich Schleiermacher (1768-1834).  The concept is often associated with literary criticism.  It purportedly discloses hidden meaning in a text by vicariously re-experiencing the intersubjective mental experience of the text’s author.

An example of a romantic social scientist is Talcott Parsons (1902-1979), an influential American sociologist who taught at Harvard University.  In his Structure of Social Action (1937) he advocated a variation on the philosophy of the sociologist Max Weber, in which vicarious understanding, which Weber called “verstehen”, is a criterion for criticism that the romantics believe trumps empirical evidence.  Verstehen sociology is also known as “folk sociology” or “pop sociology”.  Enforcing this “social action” criterion obstructed the evolution of sociology into a modern empirical science in the twentieth century.  Cultural anthropologists furthermore reject verstehen as a fallacy of ethnocentrism.

An example of an economist whose philosophy of science is paradigmatically romantic is Ludwig von Mises (1881-1973), an Austrian School economist.  In his Human Action: A Treatise on Economics (1949) Mises proposes a general theory of human action that he calls “praxeology” that employs the “method of imaginary constructions”, which suggests Weber’s ideal types.  He finds praxeology exemplified in both economics and politics.  Mises maintains that praxeology is deductive and a priori like geometry, and is therefore unlike natural science.  Praxeological theorems cannot be falsified, because they are certain.  All that is needed for deduction of praxeology’s theorems is knowledge of the “essence” of human action, which is known introspectively.  On his view experience merely directs the investigator’s interest to problems.

The 1989 Nobel-laureate econometrician Trygve Haavelmo (1911-1999) in his “The Probability Approach in Econometrics” in Econometrica (July supplement, 1944) supplies the romantic agenda ostensibly used by most econometricians today.  Econometricians do not reject the aim of prediction, simulation, optimization and policy formulation using statistical econometric models; with their econometric modeling agenda they enable it.  But they subordinate the selection of explanatory variables in their models to factors that are derived from economists’ heroically imputed maximizing rationality theses, which identify the motivating factors explaining the decisions of economic agents such as buyers and sellers in a market.  As Mary S. Morgan laments in her History of Econometric Ideas (1990), the econometricians following Haavelmo exclude econometrics from discovery and limit its function to testing theory.  In his Philosophy of Social Science (1995) Alexander Rosenberg (b. 1946) describes the economists’ theory of “rational choice”, i.e., the use of the maximizing rationality theses, as “folk psychology formalized”.

However, the “theoretical” economist’s rationality postulates have been relegated to the status of a fatuous cliché, because in practice the econometrician almost never derives his equation specification deductively from the rationality postulates expressed as preference schedules.  Instead he will select variables to produce statistically acceptable models that produce accurate predictions regardless of the rationality postulates.  In fact Haavelmo wrote in his seminal paper that the economist may “jump over the middle link” of the preference schedules, although he rejected determining equation specifications by statistics.

For more about the romantics including Parsons, Weber, Haavelmo and others readers are referred to BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available through hyperlinks in the web site to Internet booksellers.


4.04 More Recent Ideas

Most of the twentieth-century philosophers’ proposals for the aim of science are less dogmatic than those listed above and arise from examination of important developmental episodes in the history of the natural sciences.  For example:

Einstein: Reflection on his relativity theory influenced Albert Einstein’s concept of the aim of science, which he set forth as his “programmatic aim of all physics” stated in his “Reply to Criticisms” in Albert Einstein: Philosopher-Scientist (1949). The aim of science in Einstein’s view is a comprehension as complete as possible of the connections among sense impressions in their totality, and the accomplishment of this comprehension by the use of a minimum of primary concepts and relations.  Einstein certainly did not reject empiricism, but he included an explicit coherence agenda in his aim of science.  His thesis implies a uniform ontology for physics, and he accordingly found statistical quantum theory to be “incomplete” according to his aim.  His is a minority view among physicists today.

Popper: Karl R. Popper was an early post-positivist philosopher of science and was also critical of the romantics.  Reflecting on Arthur Eddington’s (1882-1944) historic 1919 solar-eclipse test of Einstein’s relativity theory in physics, Popper proposed in his Logic of Scientific Discovery (1934) that the aim of science is to produce tested and nonfalsified theories having greater universality and more information content than any predecessor theories addressing the same subject.  Unlike the positivists’ view his concept of the aim of science thus focuses on the growth of scientific knowledge.  And in his Realism and the Aim of Science (1983) he maintains that realism explains the possibility of falsifying test outcomes in scientific criticism.  The title of his Logic of Scientific Discovery notwithstanding, Popper denies that discovery can be addressed by either logic or philosophy, but says instead that discovery is a proper subject for psychology.  Cognitive psychologists today would agree.

Hanson: Norwood Russell Hanson, reflecting on the development of quantum theory, states in his Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science (1958) and in Perception and Discovery: An Introduction to Scientific Inquiry (1969) that the aim of inquiry in research science is directed to the discovery of new patterns in data to develop new hypotheses for deductive explanation.  He calls such practices “research science”, which he opposes to “completed science” or “catalogue science”, the mere re-arranging of established ideas into more elegant formal axiomatic patterns.  He follows Peirce, who called hypothesis formation “abduction”.  Today mechanized discovery systems typically search for patterns in data.

Kuhn: Thomas S. Kuhn, reflecting on the development of the Copernican heliocentric cosmology in his The Copernican Revolution: Planetary Astronomy in the Development of Western Thought (1957), maintained in his popular Structure of Scientific Revolutions (1962) that the prevailing theory, which he called the “consensus paradigm”, has institutional status.  He proposed that small incremental changes extending the consensus paradigm, to which scientists seek to conform, define the institutionalized aim of science, which he called “normal science”.  And he said that scientists neither desire nor aim consciously to produce revolutionary new theories, which he called “extraordinary science”.  This concept of the aim of science is thus a conformist agenda; Kuhn therefore defined scientific revolutions as institutional changes in science, which he excludes from the institutionalized aim of science.

Feyerabend: Paul K. Feyerabend reflecting on the development of quantum theory in his Against Method (1975) proposed that each scientist has his own aim, and that contrary to Kuhn anything institutional is a conformist impediment to the advancement of science.  He said that historically successful scientists always “break the rules”, and he ridiculed Popper’s view of the aim of science calling it “ratiomania” and “law-and-order science”.  Therefore Feyerabend proposes that successful science is literally “anarchical”, and borrowing a slogan from the Marxist Leon Trotsky, Feyerabend advocates “revolution in permanence”.

For more about the philosophies of Popper, Kuhn, Hanson and Feyerabend readers are referred to BOOK V, BOOK VI and BOOK VII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History available at Internet booksellers through hyperlinks in the web site.


4.05 Aim of Maximizing “Explanatory Coherence”

Thagard: Computational philosopher of science Paul Thagard proposes that the aim of science is “best explanation”.  This thesis refers to an explanation that aims to maximize the explanatory coherence of one’s overall set of beliefs.  The aim of science is thus explicitly a coherence agenda.

Thagard developed a computerized cognitive system ECHO, an acronym for “Explanatory Coherence by Harmony Optimization”, in order to explore the operative criteria in theory choice.  His computer system, described in his Conceptual Revolutions (1992), simulated the realization of the aim of maximizing “explanatory coherence” by replicating various episodes of theory choice in the history of science.  In his system “explanation” is an undefined primitive term.  He applied his system ECHO to replicate theory choices in several episodes in the history of science including (1) Lavoisier’s oxygen theory of combustion, (2) Darwin’s theory of the evolution of species, (3) Copernicus’ heliocentric astronomical theory of the planets, (4) Newton’s theory of gravitation, and (5) Hess’ geological theory of plate tectonics.  It is surprising that these developments are described as maximized coherence with overall beliefs.
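
The settling mechanics that such a system relies on can be suggested with a toy constraint network in Python.  This is a drastically simplified caricature of the connectionist procedure Thagard describes, with hypothetical propositions and hand-set weights, not his ECHO code:

    # Propositions are nodes; positive weights join coherent pairs (a
    # hypothesis explaining evidence), negative weights join incoherent
    # pairs (rival hypotheses).  Evidence nodes are clamped at 1.0.
    weights = {("E1", "H1"): 0.4,   # hypothesis H1 explains evidence E1
               ("E2", "H1"): 0.4,   # H1 also explains E2 (greater breadth)
               ("E2", "H2"): 0.3,   # rival H2 explains only E2
               ("H1", "H2"): -0.6}  # H1 and H2 contradict each other
    evidence = {"E1", "E2"}
    nodes = {n for pair in weights for n in pair}
    act = {n: 1.0 if n in evidence else 0.01 for n in nodes}

    def net_input(node):
        return sum(w * act[b if node == a else a]
                   for (a, b), w in weights.items() if node in (a, b))

    for _ in range(200):  # update activations until the network settles
        new = {}
        for n in nodes:
            if n in evidence:
                new[n] = 1.0
                continue
            net = net_input(n)
            a = act[n] * 0.95  # decay toward zero
            a += net * ((1 - act[n]) if net > 0 else (act[n] + 1))
            new[n] = max(-1.0, min(1.0, a))
        act = new
    print(act)

Run as written, the broader hypothesis H1 settles at a positive activation (accepted) and its narrower rival H2 at a negative one (rejected), the toy analogue of preferring the theory with greater explanatory breadth.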

In reviewing his historical simulations Thagard reports that ECHO indicates that the criterion making the largest contribution historically to explanatory coherence in scientific revolutions is explanatory breadth – the preference for the theory that explains more evidence than its competitors.  But he adds that the simplicity and analogy criteria are also historically operative although less important.  He maintains that the aim of maximizing explanatory coherence with these three criteria yields the “best explanation”.

“Explanationism”, maximizing the explanatory coherence of one’s overall set of beliefs, is inherently conservative.  The ECHO system appears to document the historical fact that the coherence aim is psychologically satisfying and occasions strong, and for some scientists nearly compelling, motivation for accepting coherent theories, while theories describing reality as incoherent with established beliefs are psychologically disturbing and are often rejected when first proposed.  But progress in science does not consist in maximizing the scientist’s psychological contentment.  Empiricism eventually overrides coherence when new evidence creates a conflict.  In fact defending coherence has historically had a reactionary effect.  For example Heisenberg’s revolutionary indeterminacy relations, which contradict microphysical theories coherent with established classical physics including Einstein’s general relativity theory, do not conform to ECHO’s maximizing-explanatory-coherence criterion.

For more about the philosophy of Thagard readers are referred to BOOK VIII at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.


4.06 Contemporary Pragmatist Aim

The successful outcome (and thus the aim) of basic-science research is explanations made by developing theories that satisfy critically empirical tests, which theories are thereby made scientific laws that can function in scientific explanations and test designs.

The principles of contemporary realistic neopragmatism including its philosophy of language have evolved through the twentieth century beginning with the autobiographical writings of Heisenberg, one of the central participants in the historic development of quantum theory.  This philosophy is summarized in Section 2.03 above in three central theses: (1) relativized semantics, (2) empirical underdetermination and (3) ontological relativity, which are not repeated here.

For more about the philosophy of Heisenberg readers are referred to BOOK II and BOOK IV at the free web site www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.

The institutionally regulated practices of research scientists may be described succinctly in the realistic neopragmatist statement of the aim of science.  The contemporary research scientist seeking success in his research may consciously employ this aim as what some social scientists call a “rationality postulate”.  The institutionalized aim of science can be expressed as such a realistic neopragmatist rationality postulate as follows:

The institutionalized aim of science is to construct explanations by developing theories that satisfy critically empirical tests, which theories are thereby made scientific laws that can function in scientific explanations and test designs.

Pragmatically, rationality is not some incorrigible principle or intuitive preconception.  The contemporary realistic neopragmatist statement of the aim of science is a postulate in the sense of an empirical hypothesis about what has been and will be responsible for the historical advancement of basic-research science.  Therefore like any hypothesis it is destined to be revised at some unforeseeable future time, when, due to some future developmental episode in basic science, research practices are revised in some fundamental way.  Then some conventional practices deemed rational today might be dismissed by philosophers and scientists as misconceptions and perhaps even superstitions, as the romantic and positivist beliefs are dismissed today.  The aim of science is more elaborately explained below in terms of all four of the functional topics as sequential steps in the development of explanations.

The institutionalized aim can also be expressed so as not to impute motives to the successful scientist, whose personal psychological motives may be quite idiosyncratic and even irrelevant.  Thus the contemporary realistic neopragmatist statement of the aim of science may instead be phrased as follows in terms of a successful outcome instead of a conscious aim imputed to scientists:

The successful outcome of basic-science research is explanations made by developing theories that satisfy critically empirical tests, which theories are thereby made scientific laws that can function in scientific explanations and test designs.

The empirical criterion is the only criterion acknowledged by the contemporary realistic neopragmatist, because it is the only criterion that accounts for the advancement of science.  Historically there have been other proposed criteria, but whenever there has been a conflict, eventually it is demonstrably superior empirical adequacy, often exhibited in practicality, that has enabled a new theory to prevail.  This is true even if the superior theory’s ascendancy has taken many years or decades, or even if it has had to be rediscovered, such as the heliocentric theory of the ancient Greek astronomer Aristarchus of Samos, who lived in the third century BCE.


4.07 Institutional Change

Change within the institution of science is change made under the regulation of the institutionalized aim of science, and may consist of new theories, new test designs, new laws and/or new explanations.

Change of the institution of science, i.e., institutional change, on the other hand is the historical evolution of scientific practices involving revision of the aim of science, which may be due to revision of its criteria for criticism, its discovery practices, or its concept of explanation.  

Institutional change in science must be distinguished from change within the institutional constraint.  Philosophy of science examines both changes within the institution of science and historical changes of the institution itself.  But institutional change is often recognized only retrospectively due to the distinctively historical uniqueness of each episode and also due to the need for eventual conventionality for new basic-research practices to become institutionalized.  The emergence of artificial intelligence in the sciences may exemplify an institutional change in progress today.

In the history of science institutionally deviant practices that yielded successful results were initially recognized and accepted by only a few scientists.  As Feyerabend emphasized in his Against Method, in the history of science successful scientists have often broken the prevailing methodological rules.  But the successful departures eventually became conventionalized.  And that is clearly true of the quantum theory.  By the time they are deemed acceptable to the peer-reviewed literature, reference manuals, encyclopedias, student textbooks, academic mediocrities and hacks, and desperate plagiarizing academics, the institutional change is complete and has become the received conventional wisdom.

Successful researchers have often failed to understand the reasons for their unconventional successes, and have advanced or accepted erroneous methodological ideas and philosophies of science to explain their successes.  One of the most historically notorious such misunderstandings is Isaac Newton’s “hypotheses non fingo”, his denial that his law of gravitation is a hypothesis.  Two centuries later Einstein demonstrated otherwise.

Newton’s contemporaries Gottfried Leibniz (1646-1716) and Christiaan Huygens (1629-1695) had criticized Newton’s gravitational theory for admitting action at a distance.  Both of these contemporaries of Newton were convinced that all physical change must occur through direct physical contact like colliding billiard balls, as René Descartes (1596-1650) had believed.  Leibniz therefore described Newton’s concept of gravity as an “occult quality” and called Newton’s theory unintelligible.  But eventually Newtonian mathematical physics became institutionalized and paradigmatic of explanation in physics.  For example by the later nineteenth century the physicist Hermann von Helmholtz (1821-1894) said that to understand a phenomenon in physics means to reduce it to Newtonian laws.

In his Concept of the Positron (1963) Hanson proposes three stages in the process of the evolution of a new concept of explanation; he calls them the black-box, the gray-box, and the glass-box stages.  In the initial black-box stage, there is an algorithmic novelty, a new formalism, which is able to account for all the phenomena for which an existing formalism can account.  Scientists use this technique, but they then attempt to translate its results into the more familiar terms of the prevailing orthodoxy, in order to provide “understanding”.  In the second stage, the gray-box stage, the new formalism makes superior predictions in comparison to older alternatives, but it is still viewed as offering no “understanding”.  Nonetheless it is suspected as having some structure that is in common with the reality it predicts.  In the final glass-box stage the success of the new theory will have so permeated the operation and techniques of the body of the science that its structure will also appear as the proper pattern of scientific inquiry.  

Einstein was never able to accept the Copenhagen statistical interpretation of quantum mechanics, and a few physicists today still reject it.  Writing in 1962 Hanson said that quantum theory was in the gray-box stage, because scientists had not yet ceased to distinguish between the theory’s structure and that of the phenomena themselves.  This amounts to saying that they did not practice ontological relativity.  But since Aspect, Dalibard, and Roger’s findings from their 1982 nonlocality experiments demonstrated empirically the Copenhagen interpretation’s semantics and ontology, the quantum theory-based evolution of the concept of explanation in physics has become institutionalized.


4.08 Philosophy’s Cultural Lag

There often exists a time lag between an evolution in the institution of science and developments in philosophy of science, since the latter depend on the realization of the former. 

And there are also retarding sociological impediments.  Approximately a quarter of a century passed between Heisenberg’s philosophical reflections on the language of his indeterminacy relations in quantum physics and the emergence of the contemporary realistic neopragmatist philosophy of science in academic philosophy.  Heisenberg is not just one of the twentieth century’s greatest physicists; he is also one of its greatest philosophers of language.  But even today academic philosophers are still mute about Heisenberg’s philosophical writings; they treat him as a mere layman in philosophy who is unworthy of reference by serious academic philosophers.  These tribal academics cling to their positivist thesis of operational definitions, use only textbooks written by other academic philosophers, publish only journal articles written by other academic philosophers, and reference only books written by other academic philosophers – all while ignoring works having superior intrinsic merit that are written by nonacademics such as Heisenberg.  With the incestuous institutional values of a mediaeval occupational guild academic philosophers stake out their turf, construct their territorial defenses, barricade themselves in their silos and enforce their monopoly status in philosophy.


4.09 Cultural Lags among Sciences

Not only are there cultural lags between the institutionalized practices of science and philosophy of science, there are also cultural lags among the several sciences. 

Philosophers of science have preferred to examine physics and astronomy, because historically these have been the most advanced sciences since the historic Scientific Revolution benchmarked with Copernicus and Newton.  Institutional changes occur with lengthy time lags due to such impediments as intellectual mediocrity, technical incompetence, risk aversion, or vested interests in the conventional ideas of the received wisdom.  As Planck (1858-1947) grimly wrote in his Scientific Autobiography (1949), a new truth does not triumph by convincing its opponents, but rather succeeds because its opponents have died off; or as his point is often paraphrased, science progresses “funeral by funeral”.

The younger social and behavioral sciences have remained institutionally retarded.  Naïve sociologists and even economists today are blithely complacent in their amateurish philosophizing about basic social-science research, often adopting prescriptions and proscriptions that contemporary philosophers of science recognize as anachronistic and counterproductive.  The result has been the emergence and survival of philosophical superstitions in these retarded social sciences, especially to the extent that they have looked to their own less successful histories to formulate their ersatz and erroneous philosophies of science.

Thus currently most sociologists and economists still enforce a romantic philosophy of science, because they erroneously believe that sociocultural sciences must have fundamentally different philosophies of science than the natural sciences.  Similarly behaviorist psychologists continue to impose the anachronistic positivist philosophy of science.  These sciences are institutionally retarded, because they erroneously impose preconceived semantical and ontological commitments as criteria for scientific criticism.  Realistic neopragmatists can agree with Popper, who in his critique of Kuhn in “Normal Science and its Dangers” in Criticism and the Growth of Knowledge (1970) said that science is “subjectless”, meaning that valid science is not defined by any particular semantics or ontology.  Realistic neopragmatists tolerate any semantics or ontology that romantics or positivists may include in their scientific explanations, theories and laws, but realistic neopragmatists recognize only the empirical criterion for criticism.


4.10 Scientific Discovery

Discovery is the construction of new and empirically more adequate theories. 

Discovery is the first step toward realizing the aim of science.  The problem of scientific discovery for contemporary realistic neopragmatist philosophers of science is to proceduralize and then to mechanize the development of universally quantified statements for empirical testing with nonfalsifying test outcomes, thereby making laws for use in explanations and test designs.  Contemporary realistic neopragmatism is consistent with the use of computerized discovery systems.


4.11 Discovery Systems

A mechanized discovery system produces a transition from an input-language state description containing currently available language to an output-language state description containing generated and tested new theories.

The ultimate aim of the computational philosopher of science is to facilitate the advancement of contemporary sciences by participating in and contributing to the successful basic-research work of the scientist.  The contemporary realistic neopragmatist philosophy of science thus carries forward the classical pragmatist Dewey’s emphasis on participation.  Unfortunately few academic philosophers have the requisite computer skills to write AI systems, much less the needed working knowledge of an empirical science for participation in basic research.  Hopefully that will change in future Ph.D. dissertations in philosophy of science, which are very likely to be interdisciplinary endeavors.

Every useful discovery system to date has contained procedures both for constructional theory creation and for critical theory evaluation for quality control of the generated output and for quantity control of the system’s otherwise unmanageably large output.  Theory creation introduces new language into the current state description to produce a new state description, while falsification in empirical tests eliminates language from the current state description to produce a new state description. Thus theory development and theory testing together enable a discovery system to be a specific and productive diachronic dynamic procedure for linguistic change to advance empirical science.
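
That two-sided procedure can be pictured as a single state-transition function.  The following Python skeleton is only a schematic of the two procedures working together; the generator and the test predicate are hypothetical placeholders, not any particular system’s design:

    def discovery_transition(state, generate, passes_empirical_test):
        # state: the input-language state description (a set of statements)
        # generate: constructional procedure proposing candidate theories
        # passes_empirical_test: critical procedure giving quality and
        #   quantity control over the generated output
        candidates = generate(state)            # theory creation adds language
        survivors = {t for t in candidates if passes_empirical_test(t)}
        falsified = {t for t in state if not passes_empirical_test(t)}
        return (state - falsified) | survivors  # the output state description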

The discovery systems do not merely implement an inductivist strategy of searching for repetitions of individual instances, notwithstanding that statistical inference is employed in some system designs.  The system designs are mechanized procedural strategies that search for patterns in the input information.  Thus they implement Hanson’s thesis in Patterns of Discovery that in a growing research discipline inquiry seeks the discovery of new patterns in data.  They also implement Feyerabend’s “plea for hedonism” in Criticism and the Growth of Knowledge (1970) to produce a proliferation of theories.  But while many theories are made by these systems, mercifully few are chosen, thanks to the empirical testing routines that control both the quality and the quantity of the outputted equations.


4.12 Types of Theory Development

In his Introduction to Metascience Hickey distinguishes three types of theory development, which he calls theory extension, theory elaboration and theory revision.  This classification is vague and the types may overlap in some cases, but it suggests three alternative types of discovery strategies and therefore implies different discovery-system designs.

Theory extension is the use of a currently tested and nonfalsified explanation to address a new scientific problem.

The extension could be as simple as adding hypothetical statements to make a general explanation more specific for a new problem at hand. Analogy is a special case of theory extension.  When physicists speak of “models”, they are referring to analogies.  In his Computational Philosophy of Science (1988) Thagard describes this strategy for mechanized theory development, which consists in the patterning of a proposed solution to a new problem by analogy with a successful explanation originally developed for a different subject.  Using his system design based on this strategy his discovery system called PI (an acronym for “Process of Induction”) produced a rational reconstruction of the theory of sound waves by analogy with the description of water waves.  The system was his Ph.D. dissertation in philosophy of science at the University of Toronto, Canada.

In his Mental Leaps: Analogy in Creative Thought (1995) Thagard further explains that analogy is a kind of nondeductive logic, which he calls “analogic”.  It firstly involves the “source analogue”, which is the known domain that the investigator already understands in terms of familiar patterns, and secondly involves the “target analogue”, which is the unfamiliar domain that the investigator is trying to explain.  Analogic is the strategy whereby the investigator understands the targeted domain by seeing it in terms of the source domain.  Analogic requires a “mental leap”, because the two analogues may initially seem unrelated.  And the mental leap is also a “leap”, because analogic is not conclusive like deduction.
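
The source-to-target transfer that “analogic” performs can be suggested with a toy mapping in Python.  The schema entries are hypothetical and only loosely modeled on the water-wave and sound-wave episode, not Thagard’s PI code:

    # Source analogue: a familiar domain described as role/filler pairs.
    source = {"medium": "water", "excitation": "dropped stone",
              "propagation": "expanding ripples", "behavior": "interference"}

    # Partial correspondence discovered between the two domains.
    correspondence = {"water": "air", "dropped stone": "vibrating string",
                      "expanding ripples": "expanding wavefronts"}

    # Analogic transfers the source's relational pattern to the target,
    # carrying over unmapped structure ("interference") as a new hypothesis.
    target = {role: correspondence.get(filler, filler)
              for role, filler in source.items()}
    print(target)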

It may be noted that if the output state description generated by an analogy system such as PI is radically different from anything previously seen by the affected scientific profession containing the target analogue, then the members of that profession may experience the communication constraint to the high degree that is usually associated with a theory revision.  The communication constraint is discussed below (see Section 4.26).

Theory elaboration is the correction of a currently falsified theory to create a new theory by adding new factors or variables (and also perhaps removing some) that correct the falsified universally quantified statements and erroneous predictions of the old falsified theory. 

The new theory has the same test design as the old theory.  The correction is not merely ad hoc, excluding individual exceptional cases, but rather is a change in the universally quantified statements.  This process is often misrepresented as “saving” a falsified theory, but reflection on the basis for individuating theories reveals that in fact it creates a new one (see above, Section 3.48).

For example the introduction of a variable for the volume quantity and the development of a constant coefficient for the particular gas could elaborate Gay-Lussac’s (1778-1850) law for gases into the ideal-gas law.  Similarly 1976 Nobel-laureate Milton Friedman’s (1912-2006) macroeconomic quantity theory might be elaborated into a Keynesian hyperbolic liquidity-preference function by the introduction of an interest rate, both to account for the cyclicality manifest in an annual time series describing the calculated velocity parameter and to display the liquidity-trap phenomenon, which actually occurred in the Great Depression of 1929-1933, in the Great Recession of 2007-2009 and in the recession due to the coronavirus epidemic beginning in 2020.
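
Schematically, a textbook-style reconstruction (not the historical derivations) shows the elaboration adding a variable and a constant coefficient to the universally quantified equation:

    \[
    P = k\,T \;\;\text{(Gay-Lussac: pressure varies with temperature at fixed volume)}
    \quad\longrightarrow\quad
    P\,V = n\,R\,T \;\;\text{(volume variable and gas constant added)}
    \]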

Pat Langley’s BACON discovery system exemplifies mechanized theory elaboration.  It is named after the English philosopher Francis Bacon (1561-1626), who thought that scientific discovery can be routinized.  BACON is a set of successive and increasingly sophisticated discovery systems that make quantitative laws and theories from input measurements.  Langley designed and implemented BACON in 1979 as the thesis for his Ph.D. dissertation written in the Carnegie-Mellon department of psychology under the direction of Herbert Simon.  A description of the system is given in Simon’s Scientific Discovery: Computational Explorations of the Creative Processes (1987).

BACON uses Simon’s heuristic-search design strategy, which may be construed as a sequential application of theory elaboration.  Given sets of observation measurements for many variables, BACON searches for functional relations among the variables.  BACON has produced rational reconstructions that simulated the discovery of several historically significant empirical laws including Boyle’s law of gases, Kepler’s third planetary law, Galileo’s law of motion of objects on inclined planes, and Ohm’s law of electrical current.
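
As a caricature of that heuristic search (not Langley’s actual BACON code), the following Python sketch recovers Kepler’s third law from rounded textbook data by searching small integer powers for an invariant ratio:

    import itertools

    # Rounded textbook data: mean orbital radius D (AU) and period P (years).
    planets = {"Mercury": (0.387, 0.241), "Venus": (0.723, 0.615),
               "Earth": (1.000, 1.000), "Mars": (1.524, 1.881),
               "Jupiter": (5.203, 11.862), "Saturn": (9.539, 29.457)}

    def nearly_constant(values, tol=0.02):
        return (max(values) - min(values)) / max(values) < tol

    # Heuristic search over small integer exponents for an invariant D**a / P**b.
    for a, b in itertools.product(range(1, 4), repeat=2):
        if nearly_constant([D**a / P**b for D, P in planets.values()]):
            print(f"invariant: D^{a}/P^{b} is constant")  # finds a=3, b=2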

Theory revision is the reorganization of currently available information to create a new theory.

The results of theory revision may be radically different from any current theory addressing the same subject, and may thus be said to occasion a “paradigm change”.  It might be undertaken after repeated attempts at both theory extension and theory elaboration have failed.  The source for the input state description for mechanized theory revision presumably consists of the descriptive vocabulary from the currently untested theories addressing the problem defined by a test design.  But the descriptive vocabulary from previously falsified theories may also be included as inputs to make an accumulative state description, because the vocabularies in rejected theories can be productively cannibalized for their scrap value.  In fact even terms and variables from tested and nonfalsified theories could also be included, just to see what new proposals come out; empirical underdetermination permits scientific pluralism, and reality is full of surprises.  Hickey notes that a mechanized discovery system’s newly outputted theory is most likely to be called revolutionary if the revision is great, because theory revision typically produces greater change to the current language state than does theory extension or theory elaboration, thus producing psychologically disorienting semantical dissolution due to the transition.

Theory revision, the reorganization of currently existing information to create a new theory, is evident in the history of science.  The central thesis of Cambridge historian of science Herbert Butterfield’s (1900-1979) Origins of Modern Science: 1300-1800 (1958, P. 1) is that the type of transition known as a “scientific revolution” was not brought about by new observations or additional evidence, but rather by transpositions in the minds of the scientists.  Specifically he maintains that the type of mental activity that produced the historic scientific revolutions is the “art” of placing a known bundle of data in a new system of relations.  Hickey found this “art” in the history of economics: 1980 Nobel-laureate econometrician Lawrence Klein (1920-2013) wrote in his Keynesian Revolution (1949, Pp. 13 & 124) that all the important parts of Keynes’ theory can be found in the works of one or another of his predecessors.  In other words Keynes put a known bundle of information into a new system of relations, relations such as his aggregate consumption function and his money-demand function with its speculative-demand component and the liquidity trap.  Thus Hickey saw that his theory-revising METAMODEL discovery system could simulate the development of Keynes’ revolutionary general theory.

Therefore using his METAMODEL discovery system in 1972 Hickey produced a rational reconstruction of the development of the Keynesian macroeconomic theory from U.S. statistical data available prior to 1936, the publication year of Keynes’ revolutionary General Theory of Employment, Interest and Money.  The input information consisted of variables found in the published literature of macroeconomics up to 1935, and the corresponding statistical data are published in the U.S. Department of Commerce releases titled Historical Statistics of the United States: Colonial Times to 1970 and Statistical Abstract of the United States.  Hickey’s METAMODEL discovery system described in his Introduction to Metascience is a mechanized generative grammar with combinatorial transition rules for producing longitudinal econometric models.  His mechanized grammar is a combinatorial finite-state generative grammar that satisfies the collinearity constraint for the regression-estimated equations and the formal requirements for executable multi-equation predictive models.  The system tests for statistical significance (Student t-statistics), for serial correlation (Durbin-Watson statistic), for goodness-of-fit and for accurate out-of-sample retrodictions.
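
The generate-and-test logic of such a system can be suggested generically in Python.  The statistical thresholds and interfaces below are illustrative assumptions for the sketch, not Hickey’s actual METAMODEL design:

    import itertools
    import numpy as np

    def ols(y, X):
        """OLS coefficients, t-statistics and residuals (assumes full rank)."""
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        sigma2 = resid @ resid / (len(y) - X.shape[1])
        return beta, beta / np.sqrt(np.diag(sigma2 * XtX_inv)), resid

    def durbin_watson(resid):
        return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

    def generate_and_test(y, candidates, names, holdout=4):
        """Enumerate small regressor combinations; keep specifications that
        pass the statistical screens and an out-of-sample retrodiction check.
        (Thresholds here are illustrative; a fuller system would also screen
        candidate combinations for collinearity before fitting.)"""
        kept, n = [], len(y)
        for k in (1, 2, 3):
            for combo in itertools.combinations(range(len(names)), k):
                X = np.column_stack([np.ones(n)] + [candidates[i] for i in combo])
                beta, t, resid = ols(y[:n - holdout], X[:n - holdout])
                if np.all(np.abs(t) > 2.0) and 1.5 < durbin_watson(resid) < 2.5:
                    err = y[n - holdout:] - X[n - holdout:] @ beta
                    if np.sqrt(np.mean(err ** 2)) < 0.5 * np.std(y):
                        kept.append([names[i] for i in combo])
        return kept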

He also used his METAMODEL system in 1976 to develop a post-classical macrosociometric neofunctionalist model of the American national society with fifty years of historical time-series data. The generated sociological model disclosed an intergenerational negative feedback that sociologists would call a “macrosocial integrative mechanism”, in which an increase in social disorder indicated by a rising homicide rate calls forth a delayed intergenerational stabilizing reaction by the socializing institution indicated by the high school completion rate, which in turn restores order by reinforcing compliance with criminal law.   The paper is reprinted as “Appendix I” to BOOK VIII at the free web site www.philsci.com and in the e-book Twentieth-Century Philosophy of Science: A History.

This macrosociometric model was not just a simulation of a past episode. It is an example of contemporary AI-developed theory revision, an excursion into new territory that is still both unfamiliar and intimidating to academic sociologists, because it is beyond their competence and threatens their dogmas.

Consequently Hickey incurred the rejection often encountered by such pioneering excursions. To the shock, chagrin and dismay of complacent academic sociologists the model is not a social-psychological theory, and the paper panicked the editors of four peer-reviewed sociological journals, to which he submitted his paper.  The referee criticisms and Hickey’s rejoinders are given in “Appendix II”, and his consequent critique of the retarded condition of academic sociology is given in “Appendix III”.  Both appendices are in BOOK VIII at the free web site www.philsci.com and in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.  Hickey fully expects that some desperate and mediocre academic sociologist will plagiarize his ideas without referencing his books or his web site.

          The scientific revolution in sociology against “classical” sociology demanded by University of Virginia sociologist Donald Black’s address at the American Sociological Association’s annual meeting, published in Contemporary Sociology as “The Purification of Sociology”, requires recognition of the distinctive characteristics of macrosociology.  Contrary to the reductionist conventional wisdom of current sociologists, distinctively macrosocial outcomes are not disclosed by multiplying social-psychological (i.e. microsociological) behaviors n times.  Hickey calls romantic sociology with its social-psychological reductionism “classical”, because his macrosociological quantitative functionalist theory supersedes the prevailing social-psychological reductionism, and manifests a basic difference between macro and micro levels of sociological explanation, which resembles the difference between macro and micro levels in economics.  Failure to recognize this difference is to commit the fallacy of composition, as 1970 Nobel-laureate economist Paul Samuelson (1915-2009) explains in his ubiquitous undergraduate textbook Economics.  Hickey’s macrosociometric model with its intergenerational negative feedback is such a distinctively macrolevel theory.

Hickey calls academic sociologists “classical” with the same meaning as Black, who said that “purifying” sociology of its “classical” tradition is a necessary condition for its needed revolutionary advance.  Black expects that this new purified sociology will differ so fundamentally from the prevailing classical sociology that most sociologists will undoubtedly resist it for the rest of their days, declaring it “incomplete, incompetent and impossible”.  And he adds that sociology has never had a revolution in its short history, that classical sociology is all that sociologists have ever known, and that sociologists “worship dead gods of the past” while viewing disrespect as heresy.  Hickey believes that instead of “purify” he might have said “purge”.

Simon called the combinatorial system design for theory revision (like the design of Hickey’s METAMODEL) a “generate-and-test” design.  In the 1980’s Simon proposed his “heuristic-search” design, because he believed that combinatorial procedures consume excessive computational resources on present-day electronic computers.  But Hickey’s generate-and-test system was small enough to operate on IBM 370 and IBM RS/6000 computers.  Gordon E. Moore formulated a famous observation appropriately known as “Moore’s Law”, which states that the number of transistors that can be placed on a CPU chip doubles approximately every two years, a compounded exponential growth rate in computing power.  Furthermore developments in quantum computing promise to overcome computational constraints, where such capacity constraints are currently encountered.  The increase in throughput that will be enabled by the quantum computer is extraordinary relative to the conventional electronic computer, including the electronic supercomputer.  The availability of practical quantum computing seems only a matter of time.  The New York Times (24 October 2019) reported that Google’s Research Lab in Santa Barbara, CA, announced in the scientific journal Nature that its computer scientists had achieved “quantum supremacy”.  The article also quoted John Martinis, project leader for Google’s “quantum supremacy experiment”, as saying that his group is now at the stage of trying to make use of this enhanced computing power.


4.13 Examples of Successful Discovery Systems

There are several examples of successful discovery systems in use.  John Sonquist developed his AID system for his Ph.D. dissertation in sociology at the University of Chicago.  His dissertation was written in 1961, before Laumann and his romantics, who would likely have rejected it, had taken over the University of Chicago sociology department.  Sonquist described the system in his Multivariate Model Building: Validation of a Search Strategy (1970).  The system has long been used at the Survey Research Center, Institute for Social Research, University of Michigan, Ann Arbor, MI.  Now modified as the CHAID system using the chi-squared (χ²) statistic, Sonquist’s discovery system is widely available commercially in both the SAS and SPSS software packages.  Its principal commercial application has been for list-processing scoring models for commercial market analysis and for creating credit-risk scores, as well as for academic investigations in social science.  It is not only the oldest mechanized discovery system, but is also the most widely used in practical applications to date.
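
The core step of a CHAID-style search can be sketched in Python with pandas and SciPy; this is a schematic of the split-selection idea, not Sonquist’s program:

    import pandas as pd
    from scipy.stats import chi2_contingency

    def best_split(predictors: pd.DataFrame, outcome: pd.Series):
        """One split selection: choose the categorical predictor whose
        cross-tabulation with the outcome is most significant by chi-squared."""
        best_name, best_p = None, 1.0
        for name in predictors.columns:
            table = pd.crosstab(predictors[name], outcome)
            _, p, _, _ = chi2_contingency(table)
            if p < best_p:
                best_name, best_p = name, p
        return best_name, best_p

    # A full CHAID run would first merge statistically similar categories,
    # then split recursively on each child segment until no significant
    # predictor remains, yielding the familiar segmentation tree.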

Robert Litterman developed his BVAR (Bayesian Vector Autoregression) system for his Ph.D. dissertation in economics at the University of Minnesota.  He described the system in his Techniques for Forecasting Using Vector Autoregressions (1984).  The economists at the Federal Reserve Bank of Minneapolis have used his system for macroeconomic and regional economic analysis.  The State of Connecticut and the State of Indiana have also used it for regional economic analysis.
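
The Bayesian shrinkage idea behind such a system can be suggested with a one-equation caricature in Python.  The scalar tightness parameter and random-walk prior mean are simplifying assumptions; Litterman’s Minnesota prior is more elaborate, tightening with lag length and across variables:

    import numpy as np

    def bvar_equation_posterior(y, X, prior_mean, tightness):
        """Posterior-mean coefficients for one VAR equation under a
        Minnesota-style prior shrinking toward prior_mean (e.g., 1.0 on the
        variable's own first lag and 0.0 elsewhere: the random-walk prior)."""
        k = X.shape[1]
        A = X.T @ X + tightness * np.eye(k)
        return np.linalg.solve(A, X.T @ y + tightness * prior_mean)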

Hickey originally developed his METAMODEL discovery system to simulate the development of J. M. Keynes’ general theory.  For the next thirty years he used his discovery system occupationally as a research econometrician in both business and government.  While he was Deputy Director and Senior Economist for the Indiana Department of Commerce in the mid-1980’s, he integrated his macrosociometric model of the American national society into a Keynesian macroeconometric model and produced an institutionalist macroeconometric model of the American national economy.  He described the model and its findings in “The Indiana Economic Growth Model” in Perspectives on the Indiana Economy (March, 1985).  The model showed that increased funding for public education improves the economy by increasing labor productivity, and that a consequent increase in high school graduation rates improves social stability by mitigating crime rates.  A report of the findings was read to the Indiana Legislative Assembly by the Speaker of the House in support of Governor Orr’s successful “A-plus program” legislative initiative for an increase of $300 million in State-government spending for K-12 primary and secondary public education.  

Hickey also used his METAMODEL system for market analysis and for risk analysis for various corporations including USX/United States Steel Corporation, BAT (UK)/Brown and Williamson Company, Pepsi/Quaker Oats Company, Altria/Kraft Foods Company, Allstate Insurance Company, and the TransUnion LLC credit bureau.  In 2004 TransUnion purchased a perpetual license for his system to analyze their proprietary TrenData aggregated quarterly time series extracted from their national database of consumer credit files.  While in TransUnion’s Analytical Services Department Hickey used the models he generated with his discovery system to forecast payment delinquency rates, bankruptcy filings, average balances and other consumer borrower characteristics indicating risk exposure for lenders.  He also used his system for Quaker Oats, Kraft Foods and Brown & Williamson Companies to analyze the sociology, economics and demographics responsible for the secular market dynamics of their processed food products and other nondurable consumer goods.  Findings from this METAMODEL discovery system earned Hickey promotions and substantial bonuses from several of his employers.

In 2007 Michael Schmidt, a Ph.D. student in computational biology at Cornell University, and his dissertation director, Hod Lipson, developed their system EUREQA at Cornell University’s Artificial Intelligence Lab.  The discovery system automatically develops predictive analytical models from data using a strategy they call an “evolutionary search” to find invariant relationships, which converges on the simplest and most accurate equations fitting the inputted data.  They report that the system has been used by many business corporations, universities and government agencies including Alcoa, California Institute of Technology, Cargill, Corning, Dow Chemical, General Electric Corporation, Amazon, Shell Corporation and NASA.
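
An evolutionary search of this general kind can be caricatured in a few lines of Python.  This toy (mu + lambda) search over a single basis function, with mutation but no crossover or complexity penalty, illustrates the strategy and is not the Schmidt-Lipson algorithm:

    import random
    import numpy as np

    x = np.linspace(0, 4 * np.pi, 200)
    y = 2.0 * np.sin(x) + 0.5          # synthetic "data" to be recovered

    def random_candidate():
        f = random.choice([np.sin, np.cos, np.tanh, np.square])
        return (f, random.uniform(-3, 3), random.uniform(-1, 1))

    def mutate(cand):
        f, a, b = cand
        return (f, a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))

    def error(cand):
        f, a, b = cand
        return float(np.mean((a * f(x) + b - y) ** 2))

    # (mu + lambda) evolutionary search: mutate survivors, keep the fittest.
    population = [random_candidate() for _ in range(50)]
    for _ in range(200):
        population += [mutate(c) for c in population]
        population = sorted(population, key=error)[:50]
    f, a, b = population[0]
    print(f.__name__, round(a, 2), round(b, 2))  # converges toward: sin 2.0 0.5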

For more about discovery systems and computational philosophy of science readers are referred to BOOK VIII at the free web site  www.philsci.com or in the e-book Twentieth-Century Philosophy of Science: A History, which is available at Internet booksellers through hyperlinks in the web site.


