BOOK VIII - Page 10

The “Last Sociologist”

In March 2001 Lawrence Summers, formerly U.S. Treasury Secretary, holder of a Harvard University Ph.D. in economics, and a Harvard faculty member who had received tenure at the remarkably young age of twenty-eight, was appointed Harvard’s twenty-seventh president.  His was not a caretaker administration.  In his first year as president the changes made by this nephew of Nobel-laureate economists Paul Samuelson and Kenneth Arrow occasioned no little controversy.  In “Roiling His Faculty, New Harvard President Reroutes Tenure Track” the Wall Street Journal (11 Jan. 2002) reported that Summers attempted to make tenure accessible to younger faculty members and to avoid “extinct volcanoes”, those “graybeard” professors who receive tenure for past accomplishments but whose productive years are behind them.  The threatening implications of Summers’ administrative changes were not overlooked in Harvard’s sociology department.  One unnamed faculty member was quoted by the Wall Street Journal as saying that a “prejudice” for younger over older candidates amounts to a prejudice for mathematical and statistical approaches, such as those of Summers’ own field of economics, over historical or philosophical approaches in sociology.  The old guard is not leaving quietly, even as it is pushed toward the exits.

A published example of sociologists’ resistance to change appeared four months later in a New York Times op-ed-page article (19 May 2002) with the apocalyptic title “The Last Sociologist” by Harvard sociology professor Orlando Patterson.  Essentially Patterson’s article is a defense of the German romantic dualism between the natural and the cultural sciences, i.e., Naturwissenschaft and Kulturwissenschaft, with its doctrine that sociology is the subjective interpretative understanding of culture.  The “Last Sociologist” article amounts to a reactionary jeremiad in defense of romanticism.  Patterson complains that in their anxiety to achieve the status of economists contemporary sociologists have adopted a style of scholarship that mimics the methodology and language of the natural sciences, a style he describes as focused on building models, formulating laws, and testing hypotheses against data generated by measurement.  He alleges that the methods of natural science are “inappropriate” and “distorting”.

Patterson illustrates the kind of scholarship that characterizes his romantic vision of the golden age of sociology by referencing such books as The Lonely Crowd by David Riesman, Patterson’s mentor, whom he describes as discarded and forgotten by his discipline of sociology, and The Presentation of Self in Everyday Life by Erving Goffman, a Riesman contemporary.  Patterson writes that these authors followed in an “earlier tradition”, and he claims that their style of sociology was driven firstly by the significance of the subject and secondly by an epistemological emphasis on understanding the nature and meaning of social behavior.  Contrary to Patterson, such authors have no monopoly on such aims.  But Patterson’s plea defies parody; imagine a Harvard University physicist appealing in the New York Times for pursuit of basic research in the physics of an earlier tradition!

Patterson goes on to say that this understanding is of a type that can only emerge from the interplay of the author’s own views with those of the people being studied.  This is classic verstehen.  Patterson laments that today’s sociologists “eschew” explanation of human values, meanings, and beliefs.  He claims that sociologists disdain as reactionary any attempt to demonstrate how culture explains behavior, while their models emphasize the organizational aspects of culture, with the result that little or nothing is learned from sociology about literature, art, music, or religion even by those who purport to study these areas.  It is therefore unsurprising that in his article “How Sociologists Made Themselves Irrelevant” in the Chronicle of Higher Education Patterson laments that sociologists were excluded from contributing to social-policy efforts such as President Obama’s “My Brother’s Keeper” initiative of 2014.

But it must be conceded to Patterson that such articles as his “Last Sociologist” betray his recognition that the romantic agenda, which dominated Harvard sociology in the days of Parsonsian classical sociology, is now a spent force and is in its twilight. Changes at Harvard have begun thanks in no small part to inevitable (and blessed) attrition.  The Wall Street Journal article reported that Summers’ hiring policies received the support of Harvard’s governing board, and that hiring is an area that could prove to be his most enduring legacy.  And given that Harvard is the cradle of both classical and contemporary pragmatisms, Summers’ influence augurs well for future academic sociology at Harvard even after Summers’ departure.

Such nostalgia as Patterson’s notwithstanding, American society needs an empirical quantitative sociology that enables forecasting, optimization and simulation for policy formulation, even if academic sociologists are still too technically incompetent and philosophically reactionary either to produce such work or to accept it when it is served up to them.  Thus American academic sociology remains a missed opportunity, because the Federal Government offers a huge watershed of neglected sociologically relevant longitudinal data, some of which may be conveniently found in the recently published six-volume Historical Statistics of the United States (Cambridge University Press, 2010).  Most fundamentally a scientific sociology requires substituting the pragmatic empirical criterion for the romantic semantical and ontological criteria of criticism.  American academic sociology might soon graduate to the status of a modern empirical science were sociologists like Patterson, Nathanson and Kimmel actually doomed dinosaurs.  But notwithstanding their published laments they are not the “last sociologists”.  The criticisms displayed in Appendix II were written by referees like Patterson, who complain that Hickey “eschews substantive reasoning”, and whose criticisms were accepted by like-minded editors who rejected Hickey’s quantitative empirical macrosociological theory.

But the genie is out of the bottle.

Interestingly Donald Black, a professor of sociology at the University of Virginia, has called for a scientific revolution in sociology.  In 1998 he read a paper at an American Sociological Association meeting in San Francisco, which was later published in Contemporary Sociology (2000) under the title “The Purification of Sociology”.  Referencing Kuhn’s Structure of Scientific Revolutions Black maintains that modern sociology is still classical, because its theory is classical, and that no scientific revolution can be expected in sociology until it abandons the classical tradition to which it still clings.  He states that sociology is classical, because its explanations of social behavior are (1) teleological, i.e., in terms of means and goals, (2) psychological, i.e., in terms of subjective mental motivations, and (3) individualistic, i.e., in terms of individual persons.

Black calls the needed revolutionary sociology “pure sociology”, because these three characteristics of classical sociology will be recognized as nonessential.  He says that “purifying” sociology of its classical tradition is a necessary condition for its needed revolutionary advance.  He expects that this new purified sociology will differ so fundamentally from the prevailing classical sociology, that most sociologists will undoubtedly resist it for the rest of their days – declaring it “incomplete, incompetent and impossible”.  He adds that sociology has never had a revolution in its short history, that classical sociology is all that sociologists have ever known, and that sociologists “worship dead gods of the past” while viewing disrespect as heresy.  With respect to the requirement for romantic social-psychological reductionism Black should have said, “purging” instead of “purifying”, because romantic sociology is the sustaining ideology of the academic-sociology guild.

Such exhortations as Black’s are almost never effective in the absence of actual development of the needed revolutionary theory.  Hickey’s 1978 paper exhibited in Appendix I is explicitly a post-classical theory of just the kind that Black describes in his article, and Hickey says as much in his paper.  And Hickey’s theory has been rebuffed by the kind of “classical” sociologists that Black criticizes, as exhibited in Appendix II.  Hickey adds that another necessary condition for progressive change is the passing of the old generation.  There will be no scientific revolution in academic sociology until a new generation becomes so rebelliously disenchanted with the status quo that it rejects the complacent old guard’s dogmas.  Today the most promising ideas are contemporary pragmatism and mechanized analysis.

Yet not all of today’s academic sociologists are apish troglodytes that drag their knuckles as they walk.  The computational revolution has been active in sociology for half a century.  Below the reader will find a description of a truly pioneering computerized discovery system developed by John Sonquist, a Promethean vanguard who blazed the path to a real science of sociology for the twenty-first century.

Sonquist on Simulating the Research Analyst with AID

John A. Sonquist (b. 1931) received a Ph.D. in sociology from the University of Chicago.  Sonquist was a professor of sociology and the Director of the Sociology Computing Facility at the University of California, Santa Barbara.  Previously he was for many years on the faculty of the University of Michigan at Ann Arbor, where he was Head of the Computer Services Facility for the University’s prestigious Institute for Social Research.  He is also a past chairman of the Association for Computing Machinery.  For his Ph.D. dissertation he developed a computerized discovery system called the AID system.  “AID” is an acronym for “Automated Interaction Detector”.  Today descriptions of the AID system can be found in many marketing-research textbooks in chapters discussing data-analysis techniques for hypothesis development.

The AID system is widely used for marketing-list scoring and also for risk scoring by financial lending institutions, by all three of the major national credit bureaus, Experian, Equifax and TransUnion, and by the Fair Isaac consulting firm.  The AID system performs a type of statistical analysis often called “segmentation modeling”, but with reference to a dependent variable, which serves as a relevance criterion for the chosen segments.  Sonquist’s system, which is described in his Multivariate Model Building (1970), uses a well-known statistical segmentation method called “one-way analysis of variance”.  Jay Magidson of Statistical Innovations, Inc. has developed a variation of AID based on the equally well-known segmentation method called chi-squared (“χ2”) analysis; the system is called CHAID (Chi-squared Automatic Interaction Detector), and versions of it are commercially available in the SPSS statistical software package and from Statistical Innovations as SI-CHAID.

In the “Preface” of his Multivariate Model Building Sonquist says that his interest in such a system started with a conversation with Professor James Morgan, in which the question was asked whether a computer could ever replace the research analyst himself, as well as replacing many of his statistical clerks.  He writes that they discarded as irrelevant the issue of whether or not a computer can “think”, and instead explored the question of whether or not the computer might simply be programmed to make some of the decisions ordinarily made by the scientist in the course of handling a typical analysis problem as well as performing the computations.  Developing such a computer program required firstly examining the research analyst’s decision points, his alternative courses of action, and his logic for choosing one rather than another course, and then secondly formalizing the decision-making procedure and programming it with the capacity to handle many variables instead of only a few.

An early statement of this idea was published in Sonquist’s “Simulating the Research Analyst” in Social Science Information (1967).  In this earlier work Sonquist observes that data processing systems and many information retrieval systems are nothing but an extension of the analyst’s pencil and lack really complex logical capabilities.  But he adds that there also exist information retrieval systems that are much more sophisticated, because simulating the human being retrieving information is one of the objectives of the system designer.  These more sophisticated information retrieval applications combine both a considerable data processing capability and logic for problem solving, such that the whole system is oriented toward the solution of a class of problems without human intervention. 

Sonquist then argues that such a combination of capabilities need not be limited to information retrieval, and that major benefits can be gained from the construction of a new type of simulation program, one in which the phenomenon simulated is the research analyst attempting to “make sense” out of his data.  The phrase “make sense”, which is a characteristic locution of the verstehen romantics, is placed in quotation marks by Sonquist.  But there is no evidence that he is advocating the verstehen philosophy of scientific criticism, because on the verstehen view a computer cannot “make sense” of social data, since it is not human and therefore cannot empathize with the human social participants.  He says instead that an important function of the research analyst in the social sciences is the construction of models which fit the observed data at least reasonably well, and that this approach to the analysis of data can be likened to curve fitting rather than to the testing of clearly stated hypotheses deduced from precise mathematical formulations.  He offers his own AID system as an example of a system that simulates such model construction by the research analyst.

Sonquist and Morgan initially published their idea in their “Problems in the Analysis of Survey Data, and a Proposal” in Journal of the American Statistical Association (June 1963).  The authors examine a number of problems in interviewing and survey research analysis of the joint effects of explanatory factors on a dependent variable, and they maintain that reasonably adequate techniques have been developed for handling most of them except the problem of interaction.  “Interaction” means the existence of an interrelating influence among two or more variables that explain a dependent variable, such that the effects on the dependent variable are not independent and additive.  This is a problem that statisticians call “collinearity”, which is contrary to the situation that is assumed by the use of other multivariate techniques, such as multiple classification analysis and multiple linear regression.  In multiple regression each variable associated with an estimated coefficient is assumed to be statistically independent, so that the effects of each variable on the dependent variable can be isolated and treated as additive.  In “Finding Variables That Work” in Public Opinion Quarterly (Spring, 1969) Sonquist notes that interaction among explanatory variables in a regression can be represented by combining them multiplicatively prior to statistical estimation to eliminate collinearity.  This is also called creating cross products.
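Sonquist’s cross-product remedy can be sketched as follows.  This is an illustrative reconstruction, not Sonquist’s own example: the predictor names and coefficients are hypothetical, and NumPy’s least-squares routine stands in for whatever regression package a researcher might use.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two hypothetical explanatory variables.
education = rng.normal(12.0, 3.0, n)
experience = rng.normal(10.0, 4.0, n)

# The dependent variable includes an interaction effect: the influence
# of education on income depends on experience, and vice versa.
income = (2.0 * education + 1.5 * experience
          + 0.8 * education * experience + rng.normal(0.0, 1.0, n))

# Represent the interaction by a multiplicative cross-product term,
# so that each estimated coefficient is once again separate and additive.
X = np.column_stack([education, experience,
                     education * experience, np.ones(n)])
coefs, *_ = np.linalg.lstsq(X, income, rcond=None)
print(coefs[:3])  # approximately [2.0, 1.5, 0.8]
```

Without the third column the regression would misattribute the joint effect to the two separate predictors; with it, the estimation recovers each effect, which is the point of “creating cross products”.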

But there still remains the prior problem of discovering the interacting variables.  One technique for detecting collinearity is to develop a correlation matrix for the independent variables, to determine which ones are actually not independent.  A factor analysis will also accomplish this determination.  The AID discovery system may be used in conjunction with such techniques as regression or multiple classification, in order to detect and identify interaction effects and to assist equation specification for regression.  The AID system also resembles an earlier statistical technique called “cluster analysis”, because it too combines and segments the observations into groups.  But the AID system is distinctive in that it is an analysis procedure that uses a dependent variable as a criterion.
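The correlation-matrix check just described can be sketched in a few lines; the three predictors here are synthetic stand-ins, and the 0.8 cutoff is an arbitrary illustrative threshold, not a standard from the literature.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)  # nearly a linear function of x1
x3 = rng.normal(size=n)                   # genuinely independent

# Correlation matrix of the nominally "independent" variables.
corr = np.corrcoef(np.vstack([x1, x2, x3]))

# Flag any pair correlated strongly enough to threaten the additivity
# assumption of multiple regression.
flagged = [(i, j) for i in range(3) for j in range(i + 1, 3)
           if abs(corr[i, j]) > 0.8]
print(flagged)  # [(0, 1)]
```

Here the check flags the first pair of predictors as non-independent, the signal that the analyst must respecify the equation before estimation.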

In The Detection of Interaction Effects: A Report on a Computer Program for the Optimal Combinations of Explanatory Variables (1964, 1970) and in Searching for Structure: An Approach to Analysis of Substantial Bodies of Micro-data and Documentation for a Computer Program (1971, 1973) Sonquist and Morgan describe their algorithm as implemented in the AID computer program used at the University of Michigan’s Survey Research Center.  The program answers the question: what dichotomous split on which single predictor variable will render the maximum improvement in the ability to predict values of the dependent variable?  The program divides a sample of at least one thousand observations through a series of binary splits into a mutually exclusive series of subgroups.  Each observation is a member of exactly one of these subgroups.  The subgroups are chosen such that at each step in the procedure the arithmetic means of the two subgroups account for more of the total sum of squares (i.e., reduce the predictive error more) than the means of any other pair of subgroups.  This is achieved by maximizing a statistic called the “between-group sum of squares”.  The procedure is iterative and terminates when further splitting into subgroups is unproductive.
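The split-selection step just described can be sketched in miniature.  This is a simplified reconstruction, not Sonquist and Morgan’s program: it searches every predictor for the dichotomous split that maximizes the between-group sum of squares, recurses until no split yields a worthwhile gain, and uses illustrative data, sample size, and stopping parameters of the present writer’s own choosing.

```python
import numpy as np

def best_split(X, y, min_size=25):
    """Return (bss, predictor index, threshold) for the dichotomous split
    that maximizes the between-group sum of squares of y, or None."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) < min_size or len(right) < min_size:
                continue  # control parameter: group too small in the sample
            bss = (len(left) * (left.mean() - y.mean()) ** 2
                   + len(right) * (right.mean() - y.mean()) ** 2)
            if best is None or bss > best[0]:
                best = (bss, j, t)
    return best

def aid(X, y, min_gain=1000.0):
    """Recursive binary segmentation; stops when splitting is unproductive."""
    split = best_split(X, y)
    if split is None or split[0] < min_gain:
        return {"mean": y.mean(), "n": len(y)}  # a final grouping
    bss, j, t = split
    mask = X[:, j] <= t
    return {"var": j, "threshold": t,
            "left": aid(X[mask], y[mask], min_gain),
            "right": aid(X[~mask], y[~mask], min_gain)}

# Synthetic demonstration: income jumps at age forty-five.
rng = np.random.default_rng(2)
age = rng.uniform(20.0, 70.0, 400)
income = np.where(age < 45.0, 30.0, 50.0) + rng.normal(0.0, 2.0, 400)
tree = aid(age.reshape(-1, 1), income)
print(tree["threshold"])  # near 45
```

The predicted value for an observation is then the `mean` of the final group into which it falls, exactly as the authors describe.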

The authors illustrate the algorithm with a tree diagram displaying a succession of binary splits for an analysis of personal income using data categories representing age, race, education, occupation, and length of time in present job.  When the total sample is examined, the maximum reduction in the unexplained sum of squares is obtained by splitting the sample into two new groups consisting of persons under sixty-five years of age and persons aged sixty-five and over.  Both of these groups may contain some nonwhites and varying degrees of education and occupation groups.  The group that is sixty-five and over is not further divided, because control parameters in the system detect that the number of members in the group is too small in the sample.  It is therefore a final grouping.  The other group is further subdivided by race into white and nonwhite persons.  The nonwhite group is not further subdivided, because it is too small in the sample, but the system further subdivides the white group into persons with college education and persons without college education.  Each of these latter is further subdivided.  The college-educated group is split by age into those under forty-five years and those between forty-six and sixty-five.  Neither of these subgroups is further subdivided in the sample.  Those with no college are further subdivided into laborers and nonlaborers, and the latter are still further split by age into those under thirty-five and those between thirty-six and sixty-five.  The variable representing length of time in current job is never selected, because at each step there existed another variable that was more useful in explaining the variance remaining in that particular group.  The predicted value of an individual’s income is the mean value of the income of the final group of which the individual is a member.  Such in overview is AID.

Sonquist offers little by way of philosophical commentary.  Unlike sociologists such as Parsons and Lundberg, he does not develop a philosophy of science, much less a philosophy of language.  But there is little need for him to philosophize, since applications of his AID system are seldom philosophically controversial among sociologists.  In his applications there is typically no conflict between the data inputted to his system and the mentalistic ontology required by romantic sociologists, when the system is used to process data collected by interviewing and survey research consisting of verbal responses revealing respondents’ mental states such as attitudes, expectations or preferences.  In such applications a conflict occurs only with those extreme romanticists requiring the verstehen truth criterion.

In his 1963 paper, “Problems in the Analysis of Survey Data”, Sonquist considers the problem that occurs when “theoretical constructs” are not the same as the factors that the sociologist is able to measure, even when the survey questions are attitudinal or expectational questions, and when the measurements that the sociologist actually uses, often called “proxy variables” or “indicators”, are not related to the theoretical constructs on a simple one-to-one basis.  This is a problem that occurs only in cases in which a theory pre-exists empirical analysis, and in this circumstance Sonquist advocates a rôle for the AID system, in which the system’s empirical analyses are used for the resolution of problems involving interaction detection, problems which theory cannot resolve, or which must be addressed either arbitrarily or by making untestable assumptions.

Later he considers the rôle of discovery systems in the development of theory, and the influence of Robert K. Merton is evident.  In Multivariate Model Building he states in the first chapter that he is not attempting to deal with the basic scientific problems of conceptualizing causal links or with latent and manifest functions, but only with the apparent relations between measured constructs and their congruence with an underlying causal structure.  He defines a “theory” as a set of propositions that describes at the abstract level the functioning of a social system, and proposes that in the inductive phase ex post facto explanations of the relationships found within the data may form a basis for assembling a set of interrelated propositions, which he calls a “middle-range theory”, that describes the functioning of a specific aspect of a social system.  The AID system facilitates the inductive phase by identifying interacting variables, so that mathematical functions relating sociological variables are well specified for statistical estimation.

Sonquist draws upon an introductory text, An Introduction to Logic and Scientific Method, written in 1934 by two academic positivist philosophers of science, Morris R. Cohen and Ernest Nagel.  Cohen (1880-1947) received a Ph.D. from Harvard in 1906, and Nagel (1901-1985) studied under Cohen at City College of New York and received a Ph.D. from Columbia University in 1931.  The relevant chapter in the book is titled “The Method of Experimental Inquiry”, which examines the experimental “methods” for discovering causal relationships, methods advanced by Francis Bacon and later elaborated by John S. Mill.  These Baconian experimental methods are anything but romanticist: the two authors define the search for “causes” to mean the search for some invariant order among different sorts of elements or factors, and the book gives no suggestion that the social sciences should receive any distinctive treatment.  Since all discovery systems search for invariant relations, the attractiveness of the Baconian treatment for scientists such as Sonquist is self-evident. 

The propositions that Sonquist views as constituting middle-range sociological theory, and that following Cohen and Nagel express a causal relationship, have the linguistic form: X1...Xn implies Y.  The researcher’s task in Sonquist’s view is to relate the causal proposition to a mathematical functional form, which is statistically estimated.  He concludes that a well-specified, statistically estimated mathematical function with a small and random error term expresses a causal relationship, understood as the sufficient condition for an invariant relationship between the dependent or caused variable and the set of independent variables.

In “Computers and the Social Sciences” and “’Retailing’ Computer Resources to Social Scientists” in American Behavioral Scientist (1977) Sonquist and Francis M. Sim discuss the inadequate social organization in universities for the effective utilization of computer resources, especially by social scientists, who they report are described derisively by other academicians as “the marginal computer users”.  The authors present some arguments for changing the professional rôles and social organization of computing in social science departments.  Hickey maintains that while the authors’ reorganization proposals may offer benefits, the underutilization of computer resources and systems analysis by social scientists cannot be remedied by such measures as academic reorganization, so long as the prevailing philosophy of science is still romanticism.  Reorganizing rôles can do no more for sociology than could rearranging the deck chairs for the sinking R.M.S. Titanic.

Examination of Sonquist’s writings in their chronological order suggests that, as he attempted to expand the discovery function of his system, he found that he had to move progressively further away from the romanticism prevailing in contemporary academic sociology.  He would have been better served by the contemporary pragmatist philosophy of science than by disinterring the 1930s positivist views of Cohen and Nagel.  Both positivism and romanticism give a semantically based definition of “theory” and ontologically based criteria for scientific criticism.  On the pragmatist view “theory” is defined by the pragmatics of language, i.e., by its function, in what Hanson called “research science” as opposed to “catalogue science”.  And the pragmatist realism practiced by Galileo, Einstein and Heisenberg and formulated as “ontological relativity” by Quine bases every causal claim exclusively on the empirical adequacy of a tested theory.  Discovery systems therefore make causal theories.

Comment and Conclusion

Pragmatism vs. Romanticism

At the opening of the twentieth century the prevailing philosophy of science was positivism with its philosophy of language.  Positivism was based on reflection on Newtonian physics.  The appearance of relativity theory and then of quantum theory revised physics, and in due course revised philosophy of science to produce the contemporary pragmatism, which appeared as a critique of positivism.  Contemporary pragmatism also differs fundamentally from romanticism, and ironically for the same reasons: the pragmatist theses of relativized semantics and ontological relativity.  These ideas about language have their origin in Heisenberg’s conversation with Einstein in 1926 and in Heisenberg’s own reflections on quantum theory the next year.

Romanticism has an a priori commitment to a mentalistic semantics and ontology as a criterion for scientific criticism, such that any proposed explanation not describing mental states is rejected out of hand regardless of its demonstrated empirical adequacy.  Pragmatism on the other hand accepts only empirical criteria for scientific criticism, and rejects all prior semantics and ontologies as criteria for scientific criticism.  Thus pragmatism permits but does not require mentalistic semantics and ontologies.  This difference is due to different concepts of the aim of science.  Romanticism defines the aim of cultural science as the development of explanations having semantics that describe mentalistic ontologies, a semantics that romantics call “interpretative understanding”.  By contrast pragmatism does not define the aim of social science in terms of any specific semantics or ontology.  Like Popper, who said that science is “subjectless”, pragmatists will accept as a law any theory that operates in an explanation that has been empirically tested and not falsified, regardless of its semantics or ontology.

Pragmatism vs. Psychologism

Is computational philosophy of science conceived as cognitive psychology a viable agenda for twenty-first-century philosophy of science?  Simon recognized the lack of empirical evidence needed to warrant claims that his and his colleagues’ computational cognitive systems model the structures and processes of the human mind or brain.  In fact he admitted that in some cases the historical discoveries replicated by the discovery systems described in his Scientific Discovery were actually performed differently from the way in which the systems replicated them.  Recognition of this deviation amounts to a falsification of the cognitive-psychology claims.  Yet Simon did not explicitly reject his colleagues’ discovery systems as empirically falsified psychology.  Rather the psychological claims were tacitly ignored, while he and his colleagues including Langley continued to develop their systems without independent empirical research into psychology to guide new system development.  Simon had a conflict of aims.

Others have also found themselves confronted with this conflict.  In “A Split in Thinking among Keepers of Artificial Intelligence” the New York Times (18 Jul. 1993) reported that scientists attending the annual meeting of the American Association of Artificial Intelligence expressed disagreement about the goals of artificial intelligence.  Some maintained the traditional view that artificial-intelligence systems should be designed to simulate intuitive human intelligence, while others maintained that the phrase “artificial intelligence” is merely a metaphor that has become an impediment, and that AI systems should be designed to exceed the limitations of intuitive human intelligence.  The article notes that the division has fallen along occupational lines with the academic community preferring the psychology goal and the business community expressing the pragmatic goal.  It also notes that large AI systems have been installed in various major American corporations.

This alignment is incidental, since the academic community need not view artificial intelligence exclusively as an agenda for psychology.  But the alignment is understandable, since the business community financially justifies investment in artificial-intelligence systems pragmatically as it does every other investment including computer-system investments.  Business has no interest in faithful replicas of human limitations such as the computational constraint described in Simon’s thesis of bounded rationality or the semantical impediment described by Hanson and called the “cognition constraint” by Hickey.  This same pragmatic justification applies in basic-scientific research, because scientists will not use AI systems to replicate the human limitations.  They will use AI to transcend these limitations, in order to enhance performance.  Artificial intelligence may have outgrown its original home in academic psychology.  The functioning of discovery systems to facilitate basic research is more adequately described as constructional language-processing systems with no psychological claims.

The relation between the psychological and the linguistic perspectives can be illustrated by way of analogy with man’s experience with flying.  Since primitive man first saw a bird spread its wings and escape the hunter by flight, mankind has been envious of birds’ ability to fly.  This envy is illustrated in ancient Greek mythology by the character Icarus, who escaped from the labyrinth of Crete with wings made of feathers and wax.  But Icarus flew too close to the hot sun, so that he fell from the sky as the wax melted, and then drowned in the Aegean Sea.  The fatally flawed choice of materials notwithstanding, the basic design concept was a plausible one in imitation of the evidently successful flight capability of birds.  Call this design concept the “wing-flapping” technology.  In fact in the 1930s there was a company called Gray Goose Airways, which claimed to have developed a wing-flapping aircraft called an “ornithopter”.  But pity the investor who holds equity shares in Gray Goose Airways today, because his stock certificates are good only for folding paper toy-glider airplanes.  A contemporary development of the wing-flapping technology might serve well for an ornithological investigation of how birds fly, but it is not the technology used for modern flight, which has evolved quite differently.

When proposed imitation of nature fails, pragmatic innovation prevails, in order to achieve the practical aim.  Therefore when asking how a computational philosophy of science should be conceived, it is necessary first to ask about the aim of basic science, and then to ask whether computational philosophy of science is adequately characterized as “normative cognitive psychology”, as Thagard would have it.  Contemporary pragmatist philosophy of science views the aim of basic science as the production of a linguistic artifact having the status of an “explanation”, which includes law language that was earlier a proposed theory and that has not been falsified when tested.  The aim of a computational philosophy of science is in turn derivative from the aim of science: to enhance scientists’ research practices by developing and employing mechanized procedures capable of achieving the aim of basic science.  The computational philosopher of science should feel at liberty to employ any technology that achieves this aim, with or without any help from psychology.

Since a computer-generated explanation is a linguistic artifact, the computer system may be viewed as a constructional language-processing system.  Psychology or neurology may or may not suggest some tentative hypotheses to this end.  But the aim of basic science does not require reducing a computational philosophy of science to the status of a specialty in either psychology or neurology, any more than the aim of aerospace science need be reduced to a specialty in ornithology.  Thus to construe computational philosophy of science as normative cognitive psychology is to have lost sight of the aim of basic science.  And to date attempts at a cognitive psychology of science appear to have offered basic science no better prospects for improvement of research practices than the Icarus wing-flapping technology offered human flight.  In retrospect the thesis that it should might be labeled the “Icarus fallacy”.  In computational philosophy of science “cognitive psychology” and “artificial intelligence” are as inessential to basic science as “engineering ornithology” is to manned flight.

It is furthermore noteworthy that to date the developers of the practical and successful discovery systems have been practicing researchers in the sciences for which they developed those systems.  They have created systems that have produced serious and responsible proposals for advancing the contemporary state of the empirical sciences in which they work.  To date none have been cognitive psychologists.  Those fruitful discovery systems are Sonquist’s AID system, Litterman’s BVAR system, and Hickey’s METAMODEL system.  And while their developers have not been cognitive psychologists, neither have they been academic philosophers.

Sonquist was a practicing research sociologist.  His unfamiliarity with contemporary philosophy of science led him to turn to 1930s-vintage positivism in order to evade the romanticism prevailing in academic sociology.  Pragmatism would have served him better.  Now known as the CHAID system, Sonquist’s system is the most widely used of all discovery systems.

For Litterman, evading the romantic philosophy was easier, even though he is the economist who developed his BVAR system under teachers at the University of Minnesota who were rational-expectations advocates.  Ironically, their economic “theory” notwithstanding, those economists had rejected Haavelmo’s structural-equation agenda, thereby rendering romanticism inoperative for determining the equation specifications in econometric model construction.  Litterman would have had a better understanding of the significance and value of his work for economics had he understood the contemporary pragmatist philosophy of science.  He would not have viewed the theories outputted by his system as “atheoretical”.  At this writing the Minneapolis Federal Reserve Bank still uses his system.

Hickey was more fortunate, since he is both an Institutionalist econometrician and a contemporary pragmatist philosopher of science.  In the thirty years following his development of the METAMODEL discovery system, he applied it to market analysis of both consumer and industrial products, to consumer credit-risk analysis, to macroeconomic business-cycle analysis and regional economics, and to macrosociology in an Institutionalist macroeconometric model for economic-development analysis.

The practical discovery systems developed by Sonquist, Litterman, and Hickey also reveal a distinctive strategy.  Their designs, procedures, and computer languages are mechanized automations of the analytic practices actually used by researchers in their respective sciences.  The difference between these systems and those developed by Simon, Thagard, and other cognitive psychologists echoes the philosophical issue between the ordinary-language and the ideal-language philosophers earlier in the twentieth century.  What may be called the ordinary-language computational philosophy-of-science approach is based on the analytical techniques that are ordinary in the respective sciences, and its applications have advanced new findings.

Computational philosophy of science is the wave of the future that has arrived, and information technology predictably grows exponentially over time.  Some philosophers of science will make needed adjustments in their views.  But most others will never acquire the necessary computer skills to contribute to this new line of development, and they will supply the profession’s abundant share of latter-day Luddites for a generation or more.  Possibly the philosophers’ psychologistic turn has been in reaction against the doctrinaire nominalism built into the Orwellian newspeak that is the Russellian symbolic logic.  Yet nothing precludes a linguistic computational philosopher of science who views the discovery systems as language-processing systems from recognizing a three-level semantics enabling philosophers to speak about semantics without having to make psychologistic claims.  Cognitive psychology of science is still merely a promissory note, and science awaits evidence of its cash value.
