# HERBERT SIMON, PAUL THAGARD, PAT LANGLEY AND OTHERS ON DISCOVERY SYSTEMS

## BOOK VIII - Page 6

**Mitchell’s Institutionalist Critique**

Haavelmo’s agenda had its Institutionalist critics long
before the rational-expectations advocates and data-mining
practitioners.
Morgan also notes in her
*History of Econometric Ideas* that some economists, including
the Institutionalist economist Wesley Clair Mitchell (1874-1948),
opposed Haavelmo’s approach.
Mitchell had an initiating rôle in founding the
prestigious National Bureau of Economic Research (N.B.E.R.),
where he was the Research Director for twenty-five years. In 1952 the National
Bureau published a biographical memorial volume titled
*Wesley Clair Mitchell: The Economic Scientist* edited by Arthur
Burns, a long-time colleague and later a Federal Reserve Board
Chairman.

Mitchell’s principal interest was the business cycle, and
in 1913 he published a descriptive analysis titled
*Business Cycles*. Haavelmo’s proposal
to construct models based on existing economic theory may be
contrasted with a paper by Mitchell titled “Quantitative
Analysis in Economic Theory” in the
*American Economic Review* (1925).
Mitchell predicted that quantitative and statistical
analyses in economics would result in a radical change in the
content of economic theory from the prevailing type such as may
be found in the works of classical economist Alfred Marshall. Mitchell said that
instead of interpreting the data in terms of subjective motives,
which are assumed as constituting an explanation and which are
added to the data, quantitative economists may either just
disregard motives, or more likely they may regard them as
problems for investigation rather than assumed explanations and
draw any conclusions about them from the data. Thus while Simon’s thesis
of bounded rationality is a radical departure from the
neoclassical optimizing concept of rationality, Mitchell’s is
much more radical, because he dispensed altogether with such
imputed motives.

In his “Prospects of Economics” in Tugwell’s
*Trend of Economics*
(1924) Mitchell also said that economists would have a special
predilection for the study of institutions, because institutions
standardize behavior thus enabling generalizations and
facilitating statistical inferences. He prognosticated in 1924
that as data becomes more available, economics would become a
quantitative science less concerned with puzzles about economic
motives and more concerned about the objective validity of its
account of economic processes.
While many neoclassical economists view Mitchell’s
approach as atheoretical, Mitchell had a very erudite knowledge
of economic theories as evidenced in the monumental two-volume
work *Types of Economic
Theory* (ed. Dorfman, 1967).

Mitchell’s principal work setting forth the findings from
his empirical investigations is his
*Measuring Business Cycles*
co-authored with Arthur F. Burns and published by the National
Bureau in 1946. This
five-hundred-page oversized book contains no
regression-estimated Marshallian supply or demand equations. Instead it reports on the
authors’ examination of more than a thousand time series
describing the business cycle in four industrialized national
economies, namely the U.S., Britain, France and Germany. The authors explicitly
reject the idea of testing business cycle theories, of which
there were then a great many.
They state that they have surveyed such theories in an
effort to identify which time series may be relevant to their
interest. Their
stated agenda is to concentrate on a systematic examination of
the cyclical movements in different economic activities as
measured by historical time series, and to classify these data
with respect to their phasing and amplitude. They hoped to trace
causal relations exhibited in the sequence that different
economic activities represented by the time series reveal in the
cycle’s critical inflection points. To accomplish this they
aggregate the individual time series so that the economic
activities represented are not so atomized that the cyclical
behavior is obscured by perturbations due to idiosyncrasies of
the small individual units.

The merits and deficiencies of the alternative
methodologies used by the Cowles Commission group and the
National Bureau were argued in the economics literature in the
late 1940’s. In
*Readings in Business
Cycles* (1965) the American Economic Association has
reprinted selections from this contentious literature. Defense of Haavelmo’s
structural-equation approach was given by 1975 Nobel-laureate
economist Tjalling C. Koopmans, who wrote a review of Mitchell’s
*Measuring Business Cycles*
in the *Review of Economic
Statistics* in 1947 under the title “Measurement without
Theory.” Koopmans
compared Burns and Mitchell’s findings to Kepler’s laws in
astronomy and he compared Haavelmo’s approach to Newton’s theory
of gravitation. He
notes that Burns and Mitchell’s objective is merely to make
generalizing descriptions of the business cycle, while the
objective of Haavelmo’s structural-equation approach is to
develop “genuine explanations” in terms of the behavior of
groups of economic agents, such as consumers, workers,
entrepreneurs, etc., who with their motives for their actions
are the ultimate determinants of the economic variables. Then he adds that unlike
Newton, economists today already have a systematized body of
theory of man’s behavior and its motives, and that such theory
is indispensable for a quantitative empirical economics. He furthermore advocates
use of the Neyman-Pearson statistical inference theory, and
calls Burns and Mitchell’s statistical techniques “pedestrian.”

The approach of Burns and Mitchell was defended by
Rutledge Vining, who wrote a reply to Koopmans in the
*Review of Economics and
Statistics* in 1949 under the title “Koopmans on the Choice
of Variables to be Studied and the Methods of Measurement.” Vining argues that Burns
and Mitchell’s work is one of discovery, search, and hypothesis
seeking rather than one of hypothesis testing, and that even
admitting that observation is always made with some theoretical
framework in mind, such exploratory work cannot be confined to
theoretical preconceptions having the prescribed form that is
tested by use of the Neyman-Pearson technique. He also argues that the
business cycle of a given category of economic activity is a
perfectly acceptable unit of analysis, and that many statistical
regularities observed in population phenomena involve social
“organisms” that are distinctively more than simple algebraic
aggregates of consciously economizing individuals. He says that the
aggregates have an existence over and above the existence of
Koopmans’ individual units and their characteristics may not be
deducible from the behavior characteristics of the component
units.

Koopmans wrote “Reply” in the same issue of the same journal. He admitted that hypothesis seeking is still an unsolved problem at the very foundations of statistical theory, and that it is doubtful that all hypothesis-seeking activity can be described and formalized as a choice from a pre-assigned range of alternatives. But he stands by his criticism of Burns and Mitchell’s statistical measures, because he says that science has historically progressed by restricting the range of alternative hypotheses, and he advocates crucial experiments. He claims that crucial experiments deciding between the wave and particle theories of light in physics were beneficial to the advancement of physics before the modern quantum theory rejected the dichotomy. He also continues to adhere to his view that it is necessary for economics to seek a basis in theories of individual decisions, and says that he cannot understand what Vining means by saying that the aggregate has an existence apart from its constituent components, and that it has behavior characteristics of its own that are not deducible from the behavior characteristics of the components. He maintains that individual behavior characteristics are logically equivalent to those of the group, and that there is no opening wedge for essentially new group characteristics.

In the same issue of the same journal Vining wrote “A Rejoinder”, in which he said that it is gratuitous for anyone to specify any particular entity as necessarily the ultimate unit for a whole range of inquiry in an unexplored field of study. The question is not a matter of logic, but of fact; the choice of unit for analysis is an empirical matter. Some philosophers have called Koopmans’ thesis “methodological individualism”. Students of elementary logic will recognize Koopmans’ reductionist requirement as an instance of the fallacy of composition, in which one attributes to a whole the properties of its components. Thus just as the properties of water waves cannot be described exclusively or exhaustively in terms of the physical properties of the constituent water molecules, so too the economic waves of the business cycle cannot be described exclusively or exhaustively in terms of the behavior of participant individuals. Both types of waves may be described as “real”, even if the reality is not easily described as an “entity”.

As it happens in the history of post-World War II
economics, a reluctant pluralism has prevailed. For many years the U.S.
Department of Commerce, Bureau of Economic Analysis (B.E.A.)
published the National Bureau’s business cycle
leading-indicators with selections of its many cyclical time
series and charts in their monthly
*Survey of Current Business*, which is the Federal agency’s principal
monthly periodical.
In 1996 the function was also taken over by the Conference
Board, which calculates and releases the monthly Index of
Leading Indicators based on Mitchell’s approach. The index has been
occasionally reported in national media such as
*The Wall Street Journal*. On the other hand the
Cowles Commission’s structural-equation agenda has effectively
conquered the curricula of academic economics; today in the
universities empirical economics has become synonymous with
“econometrics” in the sense given to it by Haavelmo.

Nevertheless the history of economics has taken its
revenge on Koopmans’ reductionist agenda. Had the Cowles Commission
implemented their structural-equation agenda in Walrasian
general equilibrium theory, the reductionist agenda would have
appeared to be vindicated.
But the macroeconomics that was actually used for
implementation was not a macroeconomics that is just an
extension of Walrasian microeconomics; it was the Keynesian
macroeconomics. Even
before Smith’s *Wealth of
Nations* economists were interested in what may be called
macroeconomics in the sense of a theory of the overall level of
output for a national economy.
With the 1871 marginalist revolution economists had
developed an economic psychology based on the classical
rationality thesis of maximizing behavior, which enabled
economists to use differential calculus to express and develop
their theory. And
this in turn occasioned the mathematically elegant Walrasian
general equilibrium theory that affirmed that the rational
maximizing behavior of individual consumers and entrepreneurs
would result in the maximum level of employment and output for
the whole national macroeconomy.
The Great Depression of the 1930’s debunked this
optimism, and Keynes’ macroeconomic theory offered an
alternative thesis of the less-than-full-employment equilibrium. This created a
distinctively macroeconomic perspective, because it made the
problem of determining the level of total output and employment
a different one than the older problem of determining the most
efficient interindustry resource allocation in response to
consumer preferences as revealed by relative prices.

This new macro perspective also brought certain other
less obvious novelties.
Ostensibly the achievement of Keynes’ theory was to
explain the less-than-full-employment equilibrium by the
classical economic psychology that explains economic behavior in
terms of the heroically imputed maximizing rationality theses. The economic historian
Mark Blaug of the University of London writes in his
*Economic History and the
History of Economics* that Keynes’ consumption function is
not derived from individual maximizing behavior, but is instead
a bold inference based on the known relationship between
aggregate consumer expenditures and aggregate national income.
Supporters as well as critics of Keynes knew there is a problem
in deriving a theory in terms of communities of individuals and
groups of commodities from the classical theory set forth in
terms of individuals and single commodities.

For example in Keynes’ macroeconomic theory saving and
investment behaviors have a different outcome than in
microeconomic theory, a difference known as “the paradox of
saving”. When the
individual increases his saving he assumes his income will be
unaffected by his action.
But when the aggregate population seeks to increase its
savings, consumption is thereby reduced and consequently the
incomes of others and perhaps themselves will be affected, such
that in the aggregate savings are reduced. Thus a motivated attempt
to increase saving by individuals causes a reduction of their
savings. In his
*Keynesian Revolution* the 1980 Nobel-laureate econometrician Lawrence
Klein called attempts to derive aggregate macroeconomic
relations from individual microeconomic decisions “the problem
of aggregation”, and he notes that classical economists have
never adequately solved this problem. One of the reasons that
the transition to Keynes’ macroeconomic theory is called the
“Keynesian Revolution” is recognition of a distinctive macro
perspective that is not reducible to the psychological
perspective in microeconomics, the rationality postulate that is
its economic psychology.
An evident example is Keynes’ “law of consumption”, which
he called a psychological law, a law that is
*ad hoc* with no
relation to the classical rationality postulate. Sociologists do
not yet recognize any distinctively macro perspective and still
require motivational analyses.

Joseph Schumpeter, a Harvard University economist of the
Austrian school and a critic of Keynes, was one of those older
economists who were immune from contagious Keynesianism. In his
*History of Economic
Analysis* he regarded Walrasian general equilibrium analysis as
the greatest achievement in the history of economics. And in his review of
Keynes’ *General Theory*
in *Journal of the American
Statistical Association* (1936) he described Keynes’
“Propensity to Consume” as nothing but a
*deus ex machina* that
is valueless if we do not understand the “mechanism” of changing
situations in which consumers’ expenditures fluctuate, and he
goes on to say that Keynes’ “Inducement to Invest”, his
“Multiplier”, and his “Liquidity Preference”, are all an Olympus
of such hypotheses which should be replaced by concepts drawn
from the economic processes that lie behind the surface
phenomena. In other
words this expositor of the Austrian school of marginalist
economics regarded Keynes’ theory as hardly less atheoretical
than if Keynes had used data analysis. Schumpeter would accept
only a macroeconomic theory that is an extension of
microeconomics.

But economists could not wait for the approval of dogmatists like Schumpeter, because the Great Depression had made them desperately pragmatic. Keynesian economics became the principal source of theoretical equation specifications for macroeconometric modeling. In 1955 Klein and Goldberger published their Keynesian macroeconometric model of the U.S. national economy, which later evolved into the elaborate WEFA macroeconometric model of hundreds of equations. And this is not the only large Keynesian macroeconometric model; there are now many others, such as the DRI model, now the DRI-WEFA model, the Moody’s model and the Economy.com model. These have spawned a successful for-profit information-consulting industry marketing to both business and government. But there are considerable differences among these large macroeconometric models, and these differences are not decided by reference to purported derivations from rationality postulates or microeconomic theory, even though some econometricians still ostensibly subscribe to Haavelmo’s structural-equation programme and include relative prices in their equations. The criterion that is effectively operative in the choice among the alternative business-cycle models is unabashedly pragmatic; it is their forecasting performance that enables these consulting firms to profit and stay in business.

1970 Nobel-laureate economist Paul Samuelson wrote
in *Keynes’ General Theory:
Reports of Three Decades* that it is impossible for modern
students to realize the full effect of the “Keynesian
Revolution” upon those brought up in the orthodox classical
tradition. He noted
that what beginners today often regard as trite and obvious was
to us puzzling, novel and heretical. He added that Keynes’
theory caught most economists under the age of thirty-five with
the unexpected virulence of a disease first attacking and
decimating an isolated tribe of South Sea Islanders, while older
economists [like Schumpeter] were immune.

**Muth’s Rational-Expectations Agenda**

After Muth’s papers, interest in the rational-expectations hypothesis died, and the rational-expectations literary corpus was entombed in the tomes of the profession’s periodical literature for almost two decades. Then unstable national macroeconomic conditions including the deep recession of 1974 and the high inflation of the 1970’s created embarrassments for macroeconomic forecasters who relied upon the large structural-equation macroeconometric models based on Keynes’ theory. These large models had been gratifyingly successful in the 1960’s, but their structural breakdown in the 1970’s occasioned a more critical attitude toward them and a proliferation of alternative views. One consequence was the disinterment and revitalization of interest in the rational-expectations hypothesis.

Most economists today attribute these economic events of the 1970’s to the sudden quadrupling of crude oil prices in October 1973 imposed by the Organization of Petroleum Exporting Countries (O.P.E.C.). But some economists chose to ignore the fact that the quadrupling of oil prices had induced pervasive and perverse cost-push inflation, which propagated throughout the nation’s transportation system from local delivery trucks to sea-going container ships and thus affected every product that the system carries. Commercial econometric consulting firms addressed this problem by introducing oil prices into their macroeconometric models, a solution mentioned by Haavelmo in his 1944 paper; they had to be pragmatic to retain their clients. These conditions were exacerbated by Federal fiscal deficits that were relatively large for the time and by the Federal Reserve Board’s permissive monetary policies under the chairmanship of Arthur Burns, which stimulated demand-pull inflation. These macroeconomic policy actions became targets of criticism, in which the structural-equation type of models containing such fiscal and monetary policy variables was attacked using the rational-expectations hypothesis.

1995 Nobel-laureate economist Robert E. Lucas (b. 1937)
criticized the traditional structural-equation type of
econometric model.
He was for a time at Carnegie-Mellon, and had come from the
University of Chicago, to which he has since returned. Lucas’ “Econometric
Policy Evaluation: A Critique” in
*The Phillips Curve and
Labor Markets* (1976) states, on the basis of Muth’s papers,
that any change in policy will systematically alter the
structure of econometric models, because it changes the optimal
decision rules underlying the statistically estimated structural
parameters in the econometric models. Haavelmo had addressed
the same type of problem in his discussion of the
irreversibility of economic relations, and his prescription for
all occasions of structural breakdown is the addition of the
missing variables responsible for the failure. Curiously, however, in
his presidential address to the American Economic Association in
2003, five years before the onset of the Great Recession, Lucas
proclaimed that macroeconomics has succeeded, because its
central problem of depression prevention has been solved. And in October 2008 with
the onset of the Great Recession he is quoted by
*Time* magazine as
saying that everyone is a Keynesian in a foxhole.

2011 Nobel-laureate Thomas J. Sargent, an economist at
the University of Minnesota and an advisor to the Federal
Reserve Bank of Minneapolis joined Lucas in the
rational-expectations critique of structural models in their
jointly authored “After Keynesian Macroeconomics” (1979)
reprinted in their
*Rational Expectations and Econometric Practice* (1981). They state that Keynes’
verbal statement of his theory set forth in his
*General Theory* (1936)
does not contain reliable prior information as to what variables
should be excluded from the explanatory right-hand side of the
structural equations of the macroeconometric models based on
Keynes’ theory. This
is a facile statement since Keynes’ theory stated what
explanatory factors should be included. Sargent furthermore
stated that neoclassical theory of optimizing behavior almost
never implies either the exclusionary restrictions the authors
find suggested by Keynes or those imposed by modern large
macroeconometric models.
The authors maintain that the parameters identified as
structural by current structural-equation macroeconometric
methods are not in fact structural, and that these models have
not isolated structures that remain invariant. This criticism of the
structural-equation models is perhaps better described as
specifically criticism of the structural-equation models based
on Keynesian macroeconomic theory. The authors tacitly leave
open the possibility that non-Keynesian structural-equation
business-cycle econometric models could nevertheless be
constructed that would not be used for policy analysis, and
which are consistent with the authors’ rational-expectations
alternative.

But while Lucas and Sargent offer the non-Keynesian theory that business fluctuations are due to errors in expectations resulting from unanticipated events, they do not offer a new structural-equation model. They reject the use of expectations measurement data, and propose a distinctive type of rational-expectations macroeconometric model.

**Rejection of Expectations Data and Evolution of VAR Models**

The rejection of the use of expectations measurement data
antedates Muth’s rational-expectations hypothesis. In 1957 University of
Chicago economist Milton Friedman set forth his permanent income
hypothesis in his *Theory
of the Consumption Function*.
This is the thesis for
which he was awarded the Nobel Prize in 1976, and in his Nobel
Lecture, published in the
*Journal of Political Economy* (1977), he expressed approval of
the rational-expectations hypothesis and explicitly referenced
the contributions of Muth, Lucas and Sargent. In the third chapter of
his book, “The Permanent Income Hypothesis”, he discusses the
semantics of his theory and of measurement data. He states that the
magnitudes termed “permanent” are *ex ante* “theoretical
constructs”, which he maintains cannot be observed directly for
an individual consumer.
He says that only actual income expenditures and receipts
during some definite period can be observed, and that these
observed measurements are
*ex post* empirical data, although verbal
*ex ante* statements made by the consumer about his future
expenditures may supplement these
*ex post* data. Friedman explains that
his theoretical concept of permanent income is understood to
reflect the effect of factors that the income earner regards as
determining his capital value, i.e., his subjective estimate of
a discounted future income stream.

Friedman subdivides total measured income into a
permanent part and a transitory part. He says that in a large
group the empirical data tend to average out, so that their mean
average or expected value is the permanent part, and the
residual transitory part has a mean average of zero. In another statement he
says that permanent income for the whole community can be
regarded as a weighted average of current and past incomes
adjusted by a secular trend, with the weights declining as one
goes back further in time.
When this type of relationship is expressed as an
empirical model, it is a type known as an autoregressive model,
and it is the type that is very strategic for representation of
the rational-expectations hypothesis in the
**VAR** type of model in
contrast to the structural-equation type of econometric model.
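Friedman’s weighted-average formulation can be illustrated with a short sketch; the decay rate and the income figures are illustrative assumptions, not Friedman’s estimates:

```python
# Illustrative sketch of permanent income as a weighted average of
# current and past measured incomes, with weights declining
# geometrically as one goes back in time. The decay parameter and the
# income series are hypothetical.

def permanent_income(incomes, decay=0.6):
    """incomes runs oldest to newest; the newest gets the largest weight."""
    weights = [decay * (1 - decay) ** k for k in range(len(incomes))]
    weighted = sum(w * y for w, y in zip(weights, reversed(incomes)))
    return weighted / sum(weights)       # normalize the truncated weights

measured = [50_000.0, 52_000.0, 51_000.0, 58_000.0]   # oldest ... newest
permanent = permanent_income(measured)
transitory = measured[-1] - permanent    # residual transitory part
```

The residual between the latest measured income and the computed permanent part is the transitory part, which by construction averages out toward zero over a large group.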

Muth does not follow Friedman’s neopositivist
dichotomizing of the semantics of theory and observation. In his
rational-expectations hypothesis he simply ignores the idea of
establishing any correspondence by analogy or otherwise between
the purportedly unobservable theoretical concept and the
statistical concept of expected value, and heroically makes the
statistical concept of “expected value” the literal meaning of
“psychological expectations.”
In 1960 Muth published “Optimal Properties of
Exponentially Weighted Forecasts” in
*American Statistical
Association Journal.*
He referenced this paper in his “Rational Expectations”
paper, but this paper contains no reference to empirically
gathered expectations data.

Muth says that Friedman’s determination of permanent income is “vague”, and he proposes instead that an exponentially weighted-average of past observations of income can be interpreted as the expected value of the income time series. He develops such an autoregressive model, and shows that it produces the minimum-variance forecast for the period immediately ahead for any future time period, because it gives an estimate of the permanent part of measured income. The exponentially weighted average type of model had been used instrumentally for forecasting in production planning and inventory planning by business firms, but economists had not thought that such autoregressive models have any economic significance. Muth’s identification of the statistical concept of expected value with subjective expectations in the minds of the population gave the autoregressive forecasting models a new – and imaginative – economic relevance. Ironically, however, the forecasting success or failure of these models does not test the rational-expectations hypothesis, because they have no relation to the neoclassical theory based on maximizing rationality theses with or without expectations.
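The exponentially weighted average Muth analyzed can be computed recursively, which is how exponential smoothing was used in business forecasting practice; the smoothing constant and series below are assumed values for illustration:

```python
# Illustrative sketch of an exponentially weighted forecast (simple
# exponential smoothing): each new forecast is a weighted average of the
# latest observation and the previous forecast, which unwinds into an
# exponentially declining weighted average of all past observations.
# The smoothing constant and the income series are hypothetical.

def smoothed_forecast(series, alpha=0.3):
    forecast = series[0]                     # initialize at the first value
    for y in series[1:]:
        forecast = alpha * y + (1 - alpha) * forecast
    return forecast                          # forecast for the next period

income = [100.0, 103.0, 101.0, 106.0, 104.0]
next_period = smoothed_forecast(income)
```

On Muth’s reading, the value returned is not merely a planning instrument but the expected value of the income series, i.e., the permanent part of measured income.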

Nearly two decades later there occurred the development
of a more elaborate type of autoregressive model called the
“vector autoregression” or “**VAR**” model set forth by
Thomas J. Sargent in his “Rational Expectations, Econometric
Exogeneity, and Consumption” in
*Journal of Political
Economy* (1978).
Building on the work of Friedman, Muth and Lucas, Sargent
developed a two-equation linear autoregressive model for
consumption and income, in which each dependent variable is
determined by multiple lagged values of all of the variables in
the model. This is
called the “unrestricted vector autoregression” model. It implements Muth’s
thesis that expectations depend on the structure of the entire
economic system, because all factors in the model enter into
consideration by all economic participants in all their economic
roles. The **VAR** model dispenses
with Haavelmo’s autonomy concept, since there is no attempt to
identify the factors determining the preferences of any
particular economic group, because on the rational-expectations
hypothesis everyone considers everything.
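The structure of such an unrestricted model can be sketched as follows; the single lag, the “true” coefficients, and the simulated data are illustrative assumptions, a simplification of Sargent’s specification:

```python
# Illustrative sketch of an unrestricted two-variable VAR(1): each
# dependent variable is regressed by ordinary least squares on lagged
# values of all variables in the model. Coefficients and data are
# hypothetical.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.3],           # lag matrix used to simulate
                   [0.2, 0.6]])          # (eigenvalues 0.8, 0.3: stationary)
T = 200
data = np.zeros((T, 2))                  # columns: consumption, income
for t in range(1, T):
    data[t] = A_true @ data[t - 1] + rng.normal(scale=0.1, size=2)

Y = data[1:]                             # current values (left-hand side)
X = data[:-1]                            # lagged values (right-hand side)
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = coef.T                           # estimated VAR(1) lag matrix
```

Note that no variable is excluded from any equation: every lagged variable appears on the right-hand side of both equations, which is what makes the model “unrestricted”.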

In his “Estimating Vector Autoregressions Using Methods
Not Based On Explicit Economic Theories” in
*Federal Reserve Bank of
Minneapolis Quarterly Review* (Summer, 1979), Sargent
explains that the **VAR**
model is not constructed with the same procedural limitations
that must be respected for construction of the
structural-equation model.
Construction of the structural-equation model requires
firstly that the relevant economic theory be referenced as prior
information, and assumes that no variables may be included in a
particular equation other than those variables for which there
is a theoretical justification.
This follows from Haavelmo's premise that the probability
approach in econometrics is merely a testing method based upon
application of the Neyman-Pearson statistical inference
technique to equations having their specifications determined
*a priori* by economic
theory. But when the
rational-expectations hypothesis is implemented with the
**VAR** model, the situation changes because expectations are viewed as
conditioned on past values of all variables in the system and
may enter all the decision functions. Therefore the semantics
of the **VAR** model describes the much wider range of factors
considered by the economic participants, a range that Simon
deems humanly impossible.
Rational-expectations thus makes the opposite assumption
more appropriate, namely that in general it is likely that
movements of all variables affect behavior of all other
variables, and all the econometrician’s decisions in
constructing the model are guided by the statistical properties
and performance characteristics of the model rather than by
*a priori* theory. Sargent
also notes that **VAR**
models are vulnerable to Lucas’ critique, and that these models
cannot be used for policy analyses. The objective of the
**VAR** model is
principally accurate forecasting.

2011 Nobel-laureate Christopher A. Sims of Princeton
University makes criticisms of structural-equation models
similar to those made by Lucas and Sargent. Sims, a colleague of
Sargent while at the University of Minnesota, advocates the
rational-expectations hypothesis and the development of
**VAR** models in his
“Macroeconomics and Reality” in
*Econometrica* (1980). He also states that the
coefficients of the **VAR**
models are not easily interpreted for their economic meaning,
and he proposes that economic information be developed from
these models by simulating the occurrence of random shocks and
then observing the reaction of the model. Sims thus inverts the
relation between economic interpretation and model construction
advanced by Haavelmo: instead of beginning with the theoretical
understanding and then imposing its structural restrictions on
data in the process of constructing the equations of the
empirical model, Sims firstly constructs the
**VAR** model from data, and then develops an understanding of economic
structure from simulation analyses with the model. He thus uses
**VAR** model interpretation for discovery rather than just for testing.
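The shock simulation Sims describes amounts to tracing an impulse response; a minimal sketch, assuming a given VAR(1) coefficient matrix rather than an estimated one:

```python
# Illustrative impulse-response sketch: feed a one-time unit shock into
# a VAR(1) and observe how both variables react over later periods.
# The coefficient matrix is an assumed example, not an estimated model.
import numpy as np

A = np.array([[0.5, 0.3],               # assumed VAR(1) lag matrix
              [0.2, 0.6]])
horizon = 8
response = np.zeros((horizon, 2))
response[0] = [1.0, 0.0]                # unit shock to the first variable
for h in range(1, horizon):
    response[h] = A @ response[h - 1]   # propagate the shock forward
# response[h] is the effect of the shock h periods after impact
```

In a stationary model the traced responses die out over the horizon, and their shape and timing are what the modeler reads as the economic structure implicit in the estimated coefficients.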

In the *Federal Reserve Bank of Minneapolis Quarterly Review* (Winter, 1986)
Sims states that **VAR**
modelers have been using these models for policy analysis in
spite of caveats about the practice. Not surprisingly this
policy advisor to a Federal Reserve Bank does not dismiss such
models for policy analysis and evaluation. He says that use of any
models for policy analysis involves making economic
interpretations of the models, and that predicting the effects
of policy actions thus involves making assumptions for
identifying a structure from the
**VAR** model. For this
purpose he uses shock simulations with the completed model. But
shock simulations admit to more than one structural form for the
same **VAR** model, and he offers no procedure for choosing among alternative
structures.

**Litterman’s BVAR Models and Discovery System**

In his “Forecasting with Bayesian Vector Autoregression:
Four Years of Experience” in the
*1984 Proceedings of the American Statistical Association*, also
written as a *Federal
Reserve Bank of Minneapolis Working Paper*, Robert Litterman,
at the time a staff economist for the Federal Reserve Bank of
Minneapolis, who has since moved to Wall Street, says that the
original idea to use a **VAR** model for macroeconometric forecasting at the Minneapolis
Federal Reserve Bank came from Sargent. Litterman’s own
involvement, which began as a research assistant at the Bank,
was to write a computer program to estimate
**VAR** models and to forecast with them.
He reports that the initial forecasting results with this
unrestricted **VAR** model were so disappointing that a simple
univariate autoregressive time-series model could have done a
better job, and it was evident that the unrestricted
**VAR** models were not successful.
In his “Are Forecasting Models Usable for Policy Analysis?” Litterman noted
that the unrestricted **VAR**
model is overparameterized, i.e., it attempts to fit too many
variables to too few observations. This overparameterization
of regression models is a well-known and elementary error. Avoiding it led to his
development of the Bayesian
**VAR** model, which
became the basis for Litterman’s doctoral thesis titled
*Techniques for Forecasting Using Vector Autoregression *(University
of Minnesota, 1980).

In the Bayesian vector autoregression or “**BVAR**” model, there is a
prior matrix that is included in the ordinary least squares
estimation of the coefficients of the model, and the parameters
that are the elements in this prior matrix thereby influence the
values of the estimated coefficients. This prior matrix is an
*a priori* imposition on the model, as economic theory is in the
conventional structural-equation econometric model described
by Haavelmo, because it has the desired effect of restricting
the number of variables in the model. But the prior matrix is
systematically revised as part of the constructional procedure.
Litterman argues that in the construction of structural-equation
models the economist rarely attempts to justify the exclusion of
variables on the basis of economic theory. He says that the use of
such exclusionary restrictions does not allow a realistic
specification of *a priori*
knowledge. His
Bayesian specification, on the other hand, includes all
variables in the system at several time lags, but it also
includes the prior matrix indicating uncertainty about the
structure of the economy.
Like Sargent, Litterman is critical of the adequacy of
conventional macroeconomic theory, and he maintains that
economists are more likely to find the regularities needed for
better forecasts in the data than in some
*a priori* economic
theory. Thus his
objective is explicitly discovery by data analysis.
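The mechanics of including a prior matrix in least-squares estimation may be illustrated with a single toy equation. In the sketch below, which is an illustrative simplification and not Litterman's actual estimator, the prior enters the normal equations as a precision matrix, so that every candidate variable remains in the equation but is shrunk toward its prior mean of zero rather than excluded outright; the names `b_prior`, `tau`, and `sigma` are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy single equation: y depends only on the first of five candidate regressors.
X = rng.normal(size=(40, 5))
y = 0.9 * X[:, 0] + rng.normal(scale=0.5, size=40)

b_prior = np.zeros(5)   # prior mean: every coefficient near zero
tau = 0.2               # prior std dev: how loosely the zeros are held
sigma = 0.5             # residual std dev (taken as known for the sketch)

# The prior enters the normal equations as a precision matrix, so no
# variable is excluded; each is shrunk toward its prior mean instead.
P = np.eye(5) / tau**2
b_post = np.linalg.solve(X.T @ X / sigma**2 + P,
                         X.T @ y / sigma**2 + P @ b_prior)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

The posterior coefficients lie between the data's least-squares estimates and the prior means, which is how the prior restricts the model without the all-or-nothing exclusions of the structural-equation approach.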

The difficult part of constructing
**BVAR** models is
constructing a realistic prior matrix, and Litterman describes
his procedure in *Specifying Vector Autoregression for Macroeconomic Forecasting*,
a Federal Reserve Bank of Minneapolis Staff Report published in
1984. His prior
matrix, which he calls the “Minnesota prior”, suggests with
varying degrees of uncertainty that all the coefficients in the
model except those for the dependent variables’ first lagged
values are close to zero.
The varying degrees of uncertainty are indicated by the
standard deviations calculated from benchmark out-of-sample
retrodictive forecasts made with simple univariate models, and
the degrees of uncertainty are assumed to decrease as the time
lags increase. The
parameters in the prior matrix are calculated from these
standard deviations and from “hyperparameter” factors that vary
along a continuum that indicates how likely the coefficients on
the lagged values of the variables deviate from a prior mean of
zero.
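The lag-decay structure just described can be written down directly. The function below is a simplified, illustrative rendering of the Minnesota-prior idea, not Litterman's published specification: prior means put each variable on a random walk (own first lag centered at 1, everything else at 0), and prior standard deviations tighten as the lag deepens, scaled by an overall tightness hyperparameter; the names `minnesota_prior` and `lam` are assumed for the sketch.

```python
import numpy as np

def minnesota_prior(n_vars, n_lags, lam=0.2, sigmas=None):
    """Prior means and standard deviations for the lag coefficients of a
    small BVAR, in the spirit of the 'Minnesota prior': own first lag
    centered on 1.0 (a random walk), everything else on 0.0, with the
    uncertainty tightening as the lag deepens."""
    if sigmas is None:
        sigmas = np.ones(n_vars)      # residual scales for each variable
    means = np.zeros((n_lags, n_vars, n_vars))
    stds = np.zeros((n_lags, n_vars, n_vars))
    for lag in range(1, n_lags + 1):
        for i in range(n_vars):       # equation
            for j in range(n_vars):   # lagged regressor
                means[lag - 1, i, j] = 1.0 if (lag == 1 and i == j) else 0.0
                # lam sets overall tightness; dividing by the lag makes
                # deeper lags more certainly zero; the sigma ratio puts
                # cross-variable terms on a comparable scale.
                stds[lag - 1, i, j] = lam / lag * sigmas[i] / sigmas[j]
    return means, stds

means, stds = minnesota_prior(n_vars=2, n_lags=3)
```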

One extreme of this continuum is the univariate
autoregressive model, and the opposite extreme is the
multivariate unrestricted **VAR** containing all the variables in each equation of the
model. By varying
such hyperparameters and by making successive out-of-sample
retrodictive forecasts, it is possible to map different prior
distributions to a measure of forecasting accuracy according to
how much multivariate interaction is allowed. The measure of accuracy
that Litterman uses is the log determinant of the covariance
matrix of the out-of-sample retrodictive forecast errors for the whole
**BVAR** model. Forecast
errors measured in this manner are minimized in a search along
the continuum between univariate and unrestricted
**VAR** models. Litterman calls this
procedure a “prior search”, which resembles Simon’s
heuristic-search procedure in that it is recursive, but
Litterman’s is explicitly Bayesian. The procedure has been
made commercially available in a computer system called by a
memorable acronym, “**RATS**”,
which is marketed by VAR Econometrics Inc., Minneapolis, MN. This system also contains
the ability to make the shock simulations of the type that Sims
proposed for economic interpretation of the
**BVAR** models.
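Litterman's prior search may be caricatured as a one-dimensional grid search over a tightness hyperparameter, scoring each setting by out-of-sample forecast errors. The sketch below is a deliberately tiny stand-in for the procedure, not the RATS implementation, and its names (`fit_bvar1`, `lam`) are assumptions: a small `lam` gives a tight prior near the univariate random-walk extreme, a large `lam` approaches the unrestricted VAR, and the search picks the setting that minimizes the log determinant of the holdout forecast-error covariance.

```python
import numpy as np

def fit_bvar1(data, lam):
    """Posterior-mean coefficients of a BVAR(1), one equation at a time.
    Small lam = tight prior (near-univariate random walks); large lam =
    loose prior (approaches the unrestricted VAR)."""
    n = data.shape[1]
    X, Y = data[:-1], data[1:]
    A = np.zeros((n, n))
    for i in range(n):
        prior_mean = np.eye(n)[i]   # own lag toward 1, others toward 0
        P = np.eye(n) / lam**2      # prior precision (noise scale absorbed in lam)
        A[i] = np.linalg.solve(X.T @ X + P, X.T @ Y[:, i] + P @ prior_mean)
    return A

def log_det_error(A, data):
    """Accuracy measure in Litterman's spirit: log determinant of the
    covariance of one-step out-of-sample forecast errors."""
    errors = data[1:] - data[:-1] @ A.T
    return np.log(np.linalg.det(np.cov(errors.T)))

# Simulate a two-variable system with genuine cross-variable effects,
# then search a grid of tightness settings on a holdout sample.
rng = np.random.default_rng(2)
A_true = np.array([[0.7, 0.2], [0.1, 0.6]])
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)
train, hold = y[:200], y[200:]

grid = [0.01, 0.1, 1.0, 10.0]
scores = {lam: log_det_error(fit_bvar1(train, lam), hold) for lam in grid}
best = min(scores, key=scores.get)
```

Because the simulated system has real cross-variable effects, the search favors looser settings over the near-univariate extreme, which is the kind of regularity-hunting along the continuum that the prior search automates.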

Economists typically do not consider the
**VAR** or
**BVAR** models to be economic theories or “theoretical models”. The concept of theory in
economics, such as may be found in Haavelmo’s paper, originates
in the romantic philosophy of science, according to which the
language of theory must describe the rational decision-making
process in the economic participants’ attempts to maximize
utility or profits.
In other words the semantics of the theory must describe the
motivating mental deliberations of the economic participants
whose behavior the theory explains, and this amounts to the
*a priori* requirement
for a mentalistic ontology.
The opposing view is that of the positivists, or more
specifically the Behaviorists, who reject all theory in this
sense, except that behaviorists do not make economic models. Both views are similar in
that they have semantic concepts of theory.

The contemporary pragmatists on the other hand admit any
semantics/ontology into theory, but reject all
*a priori* semantical
and/or ontological criteria for scientific criticism, whether
mentalistic or antimentalistic, even when these criteria are
built into such metalinguistic terms as “theory” and
“observation.”
Contemporary pragmatists instead define theory language on the
basis of its use or function in scientific research, and not on
the basis of its semantics or ontology: according to the
pragmatist view theory language is that which is proposed for
testing. Theory is
distinguished by the hypothetical attitude of the scientist
toward a proposed solution to a problem. Therefore, according to
the contemporary pragmatist philosophy of science, Litterman’s
system is a discovery system, because it produces economic
theories, i.e., models proposed for testing.

Ironically the rejection of the structural-equation type
of econometric model by rational-expectations advocates is a
*de facto*
implementation of the contemporary pragmatist philosophy of
science. Sargent
described rational expectations with its greater fidelity to
the maximizing postulates as a “counterrevolution” against the
*ad hoc* aspects of the
Keynesian revolution.
But from the point of view of the prevailing romantic
philosophy of science practiced in economics, their
accomplishment in creating the
**BVAR **model is a radical revolution in the philosophy and
methodology of economics, because ironically there is actually
no connection between the rational-expectations thesis and the **BVAR** model. Rational expectations
play no rôle in the specification of the
**BVAR** model. Empirical tests of the
model could not test the rational-expectations “hypothesis” even
if it actually were an empirical hypothesis instead of merely an
economic dogma. And
their exclusion of empirical expectations measurement data
justifies denying that the model even describes any mental
expectations experienced by the economic participants. The rational-expectations
hypothesis associated with the
**BVAR** models is merely
a decorative discourse, a fig leaf giving the pragmatism of the
**BVAR** models a
fictitious decency for romantics.

The criterion for scientific criticism that is actually
operative in the **BVAR**
model is perfectly empirical; it is forecasting performance. And it is to this
criterion that Litterman appeals. In
*Forecasting with Bayesian Vector Autoregressions: Four Years of Experience*
he describes the performance of a monthly national economic
**BVAR** model constructed for the Federal Reserve Bank of Minneapolis. He reports that during
the period 1981 through 1984 this
**BVAR** model
demonstrated superior performance in forecasting the
unemployment rate and the real GNP during the 1982 recession,
which up to that time was the worst recession since the Great
Depression of the 1930’s.
The **BVAR **model
made more accurate forecasts than three leading structural
models at the time: Data Resources (DRI), Chase Econometrics,
and Wharton Associates (WEFA).
However, he also reports that the
**BVAR **model did not
make a superior forecast of the inflation rate as measured by
the annual percent change in the GNP deflator.

Thereafter Litterman continued to publish forecasts from
the** BVAR** model in the
Federal Reserve Bank of Minneapolis
*Quarterly Review*. In the
Fall 1984 issue he forecast that the 1984 slowdown was a
short pause in the post-1982 recovery, and that the national
economy would exhibit above-average growth rates in 1985 and
1986. A year later
in the Fall 1985 issue he noted that his
**BVAR** model forecast
was overshooting the actual growth rates for 1985, but
he also stated that his model was more accurate than the three
large leading structural-equation models named above. In the Winter 1987 issue
two of his sympathetic colleagues on the Federal Reserve Bank of
Minneapolis research staff, William Roberds and Richard Todd,
published a critique reporting that the
**BVAR** model forecasts
of the real GNP and the unemployment rate were overshooting
measurements of actual events, and that competing structural
models had performed better for 1986. Several economists working
in regional economics have been experimenting with
**BVAR** modeling of
state economies. Such models have been used by the District
Federal Reserve Banks of Dallas (Gruben and Donald, 1991),
Cleveland (Hoehn and Balazsy, 1985), and Richmond (Kuprianov and
Lupoletti, 1984), and by the University of Connecticut (Dua and
Ray, 1995). Only
time will tell whether or not this new type of modeling
survives.

Reports in the Minneapolis Bank’s
*Quarterly Review *
contain descriptions of how the
**BVAR** national
economic model is revised as part of its continuing development. In the Fall 1984 issue
the model is described as having altogether forty-six
descriptive variables and equations, but it has a “core” sector
of only eight variables and equations, which receives no
feedback from the remainder of the model. This core sector must
make accurate forecasts in order for the rest of the model to
function accurately.
When the **BVAR** model
is revised, the important changes are those made to the
selection of variables in this core sector. Reliance on this small
number of variables is the principal weakness of this type of
model. It is not a
vulnerability that is intrinsic to this type of model, but
rather is a concession to computational limits of the computer,
because construction of the Bayesian prior matrix made great
demands on the computer resources available at the time. In contrast the
structural-equation models typically contain hundreds of
different descriptive variables interacting most often as
simultaneous-block-recursive models. Improved computer
hardware design will enable the
**BVAR** models to be larger and contain more driving variables in the
core. But in the
meanwhile they must perform heroic feats with very small amounts
of descriptive information as they compete with the much larger
structural-equation models containing much greater amounts of
feedback information.

Unlike Simon’s simulations of historically significant
scientific discoveries, Litterman does not separate the merit of
his computerized discovery procedures for constructing his
**BVAR** models from the scientific merit of the **BVAR** models he makes with his Bayesian-based discovery
system. Litterman is
not recreating what Russell Hanson called “catalogue-science”,
but is operating at the frontier of “research science.” Furthermore, the approach
of Litterman and colleagues is much more radical than that of
the conventional economist, who needs only to propose some new
“theory”, and then apply conventional structural-equation
econometric modeling techniques.
The **BVAR**
technique has been made commercially available for microcomputer
use, but still the econometrician constructing the **BVAR** model must learn
statistical techniques that he was likely not taught in his
professional education. Many
economists fail to recognize the pragmatic character of the
**BVAR** models, and
reject the technique out of hand, since they reject the
rational-expectations hypothesis.

The bottom-line lesson from the rational-expectations economists’ succession of pragmatic modeling experiments is that data-driven model construction can produce more accurate forecasting models than traditional structural-equation modeling, with its presumptuous *a priori* romantic “theory” that still haunts the halls of academic economics.
