THOMAS KUHN ON REVOLUTION AND PAUL FEYERABEND
BOOK VI - Page 8
Feyerabend’s Philosophy of Science
Of the four functional topics considered in philosophy of science, the place to begin an overview of Feyerabend’s philosophy of science is with scientific criticism.
Given Feyerabend’s critique of Popper, it might be said at the outset, at the risk of oversimplification, that Popper’s philosophy of criticism admits that test-design statements can be revised, but takes as its point of departure acceptance of and agreement about test-design language as a necessary condition for decidable criticism and thus for progress in science. Kuhn and Feyerabend, on the other hand, choose to examine the practices of criticism and the conditions for progress where test-design statements are revised or anomalies are ignored, such that tests are nonfunctional as decision procedures. Central to Kuhn’s and Feyerabend’s philosophies is the thesis that the choice of scientific theories is not fully decidable empirically, and this thesis is the basis for their attacks on Popper’s falsificationism or “critical rationalism”.
But Feyerabend and Kuhn also differ. Feyerabend attacks Kuhn’s sociological thesis of how the empirical undecidability is resolved. The arbitrariness in criticism permitted by this empirical indeterminacy has been described in various ways. Conant called it “prejudice”, Kuhn called it “paradigm consensus”, and Feyerabend called it “tenacity”. Conant was simply dismayed by the phenomenon he observed in the history of science, but he took it more seriously than did his contemporaries, the positivist philosophers, who preferred to dismiss it as simply unscientific. Conant found that prejudice is too frequently practiced by contributing scientists to be dismissed so easily. He also explicitly admitted the strategic rôle of his own prejudices in his preference for an historical examination of science.
Kuhn did not merely accept prejudice as a frequent fact in the history of science. He saw it as integral to science due to a sociological function that it performs within a scientific community, a function that is a condition for scientific progress. Prejudice, which Kuhn had earlier referred to as the “problem of scientific beliefs”, is the sociologically enforced consensus about a paradigm that is necessary for the scientific community to function effectively and efficiently for solving detailed technical problems referred to by Kuhn as “puzzles”. Without the consensus the community could not marshal its limited resources for the exploration or “articulation” of the promises of the paradigm. In Kuhn’s concept of science professional discipline becomes synonymous with conformity to the prevailing view defined by the paradigm. The phase during which this conformity is a criterion for criticism and is effectively enforced by social control is “normal” science.
Feyerabend rejects Kuhn’s thesis that prejudice functions by virtue of a sociologically enforced uniformity. In Feyerabend’s view any such uniformity is indicative of stagnation rather than progress. Instead, prejudice understood as the principle of tenacity is strategically functional, because it has just the opposite effect from the one Kuhn described: it promotes diversity and theoretical pluralism, which in Feyerabend’s view are necessary conditions for scientific progress. It might be said that Feyerabend views Kuhn’s sociological thesis of normal science as an instance of the fallacy of composition, the fallacy of incorrectly attributing to a whole the properties had by its component parts: just as houses need not have the rectangular shape of their component bricks, so too whole scientific professions need not have the monomaniacal prejudices of their individual members. The prejudice or tenacity practiced by the individual member scientist performs a function that would not obtain if his whole profession unanimously shared his prejudice, his tenaciously held view.
The process by which the individual scientist’s tenacity is strategically functional is counterinduction. Its functional contribution occurs due to Thesis I, which says that theory supplies the concepts for observation. Tenacious development of a chosen theory results in the articulation of new facts, which enhance empirical criticism. New facts produced by counterinduction can both falsify currently accepted theories and revitalize previously falsified theories. The revitalization may occur because the new facts occur in sciences that are auxiliary to the falsified theory. This possibility of revitalization justifies the scientist’s prejudicial belief in a falsified theory, his apparently “irrational” rejection of falsifying factual evidence.
Aim of Science
Feyerabend’s views on scientific criticism lead to the topic of the aim of science. Popper has a well-defined and explicit thesis of the aim of science. The aim of science in Popper’s view is the perpetual succession of conjectures and refutations, in which each successive conjecture or theory can explain both what had been explained by its falsified predecessor and the anomalous cases that falsified the predecessor. The new theory is therefore more general than its predecessor, while it also replaces and corrects its falsified predecessor. Popper saw the process of refutation as involving a deductive procedure having the logical form of modus tollens. And because it is a procedure in deductive logic, it is not subject to cultural or historical change. Popper admits that application of the logic, in the sense of experimental identification of the falsifying instances, may be problematic and may take several years. But he maintains that the logic of falsification isolates the conditions for scientific progress, and that it represents adequately how science has proceeded historically, when it has proceeded successfully. He maintains that this procedure may be said to have become institutionalized, but its validity, which is guaranteed by deductive logic, does not depend on its institutional status. Its validity is ahistorical, and will never be invalidated by historical or institutional change; it is tradition independent.
Both Kuhn and Feyerabend deny that Popper’s vision of the development of science is historically faithful. The principal deficiency in the Popperian vision is its optimistic assessment of the decidability of falsification. Not only do they view the range of nondecidability of scientific criticism to be greater than Popper thinks, but they also view it as having an integral rôle in the process of scientific development. This nondecidability gives the scientist a range of latitude, which he is free to resolve by his strategic choices. Kuhn and Feyerabend disagree on which aims influence these choices, but they agree that they are historical or institutional in nature and may change. Furthermore, such changes involve semantical changes, which introduce an additional dimension to the scientist’s freedom of choice, when they involve an incommensurable semantic discontinuity.
Kuhn views incommensurable change as characteristic only of occasional scientific revolutions, with sociologically enforced consensus resisting such change and defining the aim of science during the interrevolutionary periods of normal science. Feyerabend also views incommensurable changes as infrequent, but he does not regard the interim periods as an enforced consensus contributing to scientific progress; instead he views normal science as Kuhn defined it as an impediment to progress. He therefore advocates a much more individualistic aim of science, which he refers to as scientific anarchy. Ironically both Popper and Feyerabend explicitly invoke Trotsky’s refrain “revolution in permanence”, but their meanings are diametrically opposed. Popper means perpetual conjectures and refutations occurring within an ahistorical institutionalized logical framework for conclusive refutation, while Feyerabend means perpetual institutional change with no controlling tradition-independent framework.
Feyerabend’s discussion of scientific explanation contains much more criticism of other philosophers’ views than elaboration of his own views. From the outset of his professional career he criticized the deductive-nomological concept of scientific explanation and the logical reductionism advocated by the logical positivists. Initially Feyerabend also considered Bohr’s concept of explanation to be a “higher kind of positivism”, but he later preferred to view Bohr as a kind of historicist philosopher, due to Bohr’s distinctive relationalist interpretation of complementarity in quantum theory. As it happens, Bohr was so naïvely eclectic a philosopher that positivist, neo-Kantian and historicist characterizations can all find support in his works.
For nearly the first two decades of his career Feyerabend subscribed to Popper’s philosophy of science, which contains a concept of scientific explanation requiring universal statements. Popper’s philosophy of explanation also contains the idea of deeper levels of explanation, where the depth is determined by the scope or extent of universality of the explanation. Initially Popper proposed his thesis of verisimilitude, according to which the deeper explanations are said to be closer to the truth. Later he no longer mentioned verisimilitude, but he continued to describe explanations as having greater or lesser depth according to the extent of their universality. And he also continued to describe the universal laws and theories occurring in explanations as having greater or lesser corroboration, because science cannot attain truth in any timeless sense of truth.
After Hanson had persuaded Feyerabend to reconsider the merits of the Copenhagen interpretation of quantum theory, Feyerabend rejected Popper’s concept of explanation by logical deduction from universal laws, and instead accepted historicism. He was led to this conclusion both by his incommensurability thesis and by the nonuniversalist implications he found in Bohr’s relationalist interpretation of quantum theory. Popper had stated that scientific theories are merely conjectures that may be highly corroborated, but may never be true in any timeless sense. Feyerabend agrees but furthermore says that theories have an even more historical character, since the complementarity thesis in quantum theory demonstrates their regional character. Complementarity makes quantum theory nonuniversal at all times, because it is conditional upon mutually exclusive experimental circumstances; it is not even temporarily universal. Feyerabend thus concluded that universal science, i.e., science containing universal laws and theories, is only apparently universal, and that it is actually a special and recent historical tradition.
Feyerabend’s historicist philosophy of scientific explanation is in need of greater elaboration. For example he never related his views to the genetic type of explanation that is characteristic of historicism. Although this type of explanation had been dismissed by positivists as merely an elliptical deductive-nomological explanation, it was discussed seriously by Hanson in “The Genetic Fallacy Revisited” in American Philosophical Quarterly (1967). Hanson distinguishes different levels of language, one for historical fact and one for conceptual analysis. He says that the distinction differentiates history of science from philosophy of science, and that the genetic fallacy consists of attempting to argue from premises in the historical level to conclusions in the analytical level. It is clear that given his distinction between the theoretical and historical traditions and the way he relates them, Feyerabend would not admit Hanson’s “genetic fallacy” thesis.
The topic of discovery may be taken to refer either to the development of new theories or to the development of new facts. Feyerabend’s thesis of counterinduction is a thesis of the development of new facts. Thesis I enables the scientist to use the concepts supplied by new theory to make revised observations. Counterinduction is a thesis of observation according to the artifactual philosophy of the semantics of language, which Feyerabend set forth in his Thesis I. It is unfortunate that Feyerabend never examined Heisenberg’s use of Einstein’s aphorism for reinterpreting the Wilson cloud chamber observations as an example of counterinduction. But Feyerabend virtually never references anything written by Heisenberg, and it is unlikely that he had an adequate appreciation for the differences between Heisenberg’s and Bohr’s philosophies of quantum theory.
Feyerabend addresses the problem of developing new theories in “Creativity” in his Farewell to Reason. In this brief article he takes issue with what other philosophers have often called the heroic theory of invention, the idea that creativity is a special and personal gift. He criticizes Einstein for maintaining a variation on the heroic thesis. Einstein wrote that theory development is a free creation, in the sense that it is a conscious production from sense impressions. And he renders Einstein as saying that theories are “fictions”, which are unconnected with these sense impressions, even though theories purport to describe a hidden and objective world. Feyerabend maintains that at no time does the human mind freely select special bundles of experience from the labyrinth of sense impressions, because sense impressions are late theoretical constructs and not the beginnings of knowledge.
Feyerabend expresses much greater sympathy for Mach’s treatment of scientific discovery. Mach advanced the idea of instinct, which Feyerabend contrasts with Einstein’s idea of free creation. Feyerabend renders Mach as offering an analysis of the discovery process, according to which instinct enables a researcher to formulate general principles without a detailed examination of relevant empirical evidence. Instinct seems not as such to be inherent, but rather is the result of a long process of adaptation, to which everyone is subjected. Many expectations are disappointed during this process of adaptation, and the human mind retains the results of consequently altered behavior. These daily confirmations and disappointments greatly exceed the number of planned experiments. They are used to correct the results of experiments, which are in need of correction because they can be distorted by alien circumstances. Feyerabend says that according to Mach empirical laws developed from principles proceeding from instinct are better than laws developed from experiment.
In concluding his discussion of the topic of creativity Feyerabend advocates a return to wholeness, in which human beings are viewed as inseparable parts of nature and society, and not as independent architects. He rejects as conceited the view that some individuals have a divine gift of creativity. Feyerabend therefore apparently subscribes to the social theory of invention, as would be expected of a historicist.
Comments and Conclusion
Consider firstly Kuhn’s attempts at linguistic analysis. As mentioned above Kuhn postulates a structured lexical taxonomy, which he also calls a conceptual scheme, and maintains that it is not a set of beliefs. He calls it instead an “operating mode” of a “mental module” prerequisite to having beliefs, a “module” that supplies and bonds what is possible to conceive. He also says that this taxonomic module is prelinguistic and possessed by animals, and he calls himself a post-Darwinian Kantian, because like the Kantian categories the lexicon supplies preconditions of possible experience, while unlike Kantian categories the lexicon can and does change. But Kuhn’s woolly Darwinist neo-Kantianism is a needless deus ex machina for explaining the cognition and communication constraints associated with meaning change through theory development and criticism.
There certainly exists what may be called a conceptual scheme, but it is beliefs that bond and structure it. And what they bond and structure are the components of complex meanings for association with the sign vehicle, morpheme or individual descriptive term. The elementary components are semantic values. These complexes of components function as do Kuhn’s “cluster of criteria” for referencing individuals including contrast sets of terms that he says each language user associates with a descriptive term. Their limits on what can be conceived are Pickwickian, because when empirical testing or informal experience occasions a reconsideration of one or several beliefs, the falsifying outcome can always be expressed with the existing vocabulary and its semantics by articulating the contradiction to the theory’s prediction. The empirically based contradiction partly disintegrates the bonds and structures due to belief in the theory, but not those due to the statements of test design. Semantical reintegration by the formation of new hypotheses is constrained psychologically by language habit. Formulating new hypotheses that even promise to solve the new scientific problem is a task that often demands high intelligence and fertile imagination. And the greater the disintegration due to more extensive rejection of current beliefs, the more demanding the task of novel hypothesizing.
Two reasons for incommensurability can be distinguished in Kuhn’s literary corpus. The first is due to semantic values that are unavailable in the language of an earlier theory but that are contained in the language of a later one. The second reason for incommensurability is the semantic restructuring of the taxonomic lexicon. However, only the first reason seems to compel anything that might be called incommensurability in the sense of inexpressibility. When the language of a later theory contains descriptive vocabulary that enables distinguishing features of the world for which an earlier theory’s language supplies no semantic values, it seems clearly impossible to express those distinctions in the earlier theory’s language. Obvious examples may include features of the world that are distinguishable with the aid of microscopes, telescopes, X-rays or other observational instruments not available at the time the earlier theory was formulated, but which supply semantics that is expressed in the language of a later theory. However, even for some of these novelties Hanson recognized “phenomenal seeing”, which may supply some semantical continuity.
This reason for incommensurability can be couched in terms of semantic values, because the meanings attached to descriptive terms are not atomistic; they are composite and have component parts that can be exhibited as predicates in universally quantified affirmations. Belief in the universal affirmation “every raven is black” makes the phrase “black ravens” redundant, thereby indicating that the idea of blackness is a component part of the meaning of the concept of raven. However, all descriptive terms, including the term “black”, also have composition, because each has a lexical entry in a unilingual dictionary. The smallest distinguishable features available to the language user in his descriptive vocabulary are not exclusively or uniquely associated with any descriptive term, but are elementary semantical components of descriptive language. These elementary distinguishable features of the world recognized in the semantics of a language at a given point in time are its “semantic values.” Thus semantic incommensurability may occur when theory change consists of the introduction of new semantic values not formerly contained in the language of an earlier theory addressing the same subject.
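The compositional point about the raven example can be sketched in predicate-logic notation. The symbolization is my illustration, not notation used by Kuhn or Feyerabend:

```latex
% Belief in the universal affirmation makes blackness a
% component part of the meaning of "raven":
\forall x\,\bigl(\mathrm{Raven}(x) \rightarrow \mathrm{Black}(x)\bigr)

% Under that belief the phrase "black raven" is redundant,
% since the conjunction collapses to the subject term alone:
\mathrm{Raven}(x) \wedge \mathrm{Black}(x) \;\equiv\; \mathrm{Raven}(x)
```

On this sketch, each universally predicated concept exhibits one component part of the subject term’s composite meaning.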
Kuhn’s second reason for incommensurability, lexicon restructuring, does not occasion incommensurability prohibiting expressibility; there is no missing semantics, but instead there is only the reorganization of previously available semantic values. The reorganization is due to the revision of beliefs, which may be extensive and result in correspondingly difficult adjustment not only for the developer of the new theory formulating the new set of beliefs but also for the members of the cognizant profession who must assimilate the new theory. The composite meanings associated with each descriptive term common to both old and new theories are disintegrated to a greater or lesser degree into their elementary semantic values, and then are reintegrated by the statements of the new theory. And concomitant to this restructuring, the users’ old language habits must be overcome and new ones acquired. An ironic aspect of this view is that semantic incommensurability due to the introduction of new semantic values occurs in developmental episodes that appear least revolutionary, while those involving extensive reorganization, which thus appear most revolutionary, introduce no new semantic values and thus have no semantic incommensurability.
In his “Commensurability, Comparability and Communicability”, Kuhn says that if a scientist moving forward in time experiences revolutions, his gestalt switches will ordinarily be smaller than the historian’s, because what the historian experiences as a single revolutionary change will usually have been spread over a number of such changes during the interim historical development of the science. And Kuhn immediately adds that it is not clear that those small incremental changes need have had the character of revolutions, although he retains his wholistic thesis of gestalt switch for revolutionary cases. Clearly the time intervals in the forward movement of the theory-invention must be incremental, subject only to the time it took the inventing scientist to formulate his new theory, while the time intervals in the comparative retrospection may be as lengthy as the historian chooses, such as the very lengthy interval considered by Kuhn in his “Aristotle experience” comparing the physics of Aristotle and Newton.
But more than duration of time interval is involved in the forward movement. On the one hand the recognition and articulation of any new semantic values and on the other hand the disintegration and reintegration of available semantic values in the meaning complexes in a lexical restructuring are seldom accomplished simultaneously, since the one process is an impediment to the accomplishment of the other. Attempted reintegration of disintegrating semantics is probably the worst time to attempt introduction of new semantic values. Throwing new semantic values into the existing confusion of conceptual disorientation could only exacerbate and compound the difficulties involved in conceptual reintegration and restructuring. For this reason scientists will attack one of these problems at a time.
Furthermore as noted above new semantic values can at times be articulated with existing descriptive vocabulary, as Hanson exhibited with his thesis of “phenomenal seeing” exemplified by the biologist describing a previously unobserved microbe seen under a microscope for the first time, and for which there is as yet no classification. Then later the product of phenomenal-seeing description may be associated with a new “kind word”, i.e., a descriptive term that functions as a label for classification of the new phenomenon. And the new “kind word” may then later acquire still more semantics by incorporation into a larger context. Scientific revolutions are reorganizations of available semantic values, and incommensurability due to new semantic values is not found in revolutions except in the periods created by the historian’s sweeping retrospective choices of time intervals for comparison. In the forward movement the new semantic values (or “kind words” based on them) introduced into the current language may be accommodated by a relevant currently accepted law by the extension of that law. Or their introduction may subsequently occasion a modification of the current law by elaborating it into a new and slightly different theory. And new semantic values may eventually lead to revolutionary revisions of current law.
Turn next to the philosophy of Feyerabend, which is more elaborate than Kuhn’s. Feyerabend began with an agenda for modern microphysics: to show how a realistic microphysics is possible. Initially the conditions that he believed a realist microphysics must satisfy were taken from Popper’s philosophy of science, and these conditions are contained in Popper’s idea of universalism. However, there is an ambiguity in Popper’s “universalism”, and that ambiguity was not only brought into Feyerabend’s agenda while he accepted Popper’s philosophy; it was also operative in his philosophy after he rejected Popper’s philosophy, because he rejected universalism in both senses. The first meaning of “universal” refers to the greater scope that a new theory should have relative to its predecessors, and the second meaning refers to the universal logical quantification of general statements. Feyerabend’s acceptance of Bohr’s interpretation of the quantum theory led him to reject universalism in both of Popper’s senses, and consequently to advance his radical historicist philosophy of science.
Feyerabend had adequate reason to reject universalism in Popper’s first sense, the sense of greater scope. If it is not actually logically reductionist, as Feyerabend sometimes says, it does gratuitously require an inclusiveness that demands that a new theory explain the domain of the older one. But there are historic exceptions that invalidate such a demand. Feyerabend notes explicitly in his Against Method for example that Galileo’s theory of motion is less universal than Aristotle’s doctrine of the four types of cause, which explained qualitative change as well as mechanical motion.
With respect to Popper’s second sense of universalism Feyerabend believes that his Thesis I with its dependence on universal logical quantification cannot be applied to quantum theory due to Bohr’s semantical thesis of complementarity, which is duality expressed with inconsistent classical concepts. Feyerabend thus finds incommensurability within quantum theory, and he therefore rejects universalism in the sense of universal logical quantification. This rejection involves a semantical error that is made by many philosophers including both the positivists and the Copenhagen physicists. That semantical error consists of implicitly regarding the meanings of descriptive terms or variables, or even larger units of language, as unanalyzable wholes. A semantical metatheory of meaning description that enables analysis of semantical composition of the meanings of the descriptive terms is needed to see how universal logical quantification is consistent with duality without Bohr’s complementarity.
Below are some preliminary considerations for such a semantical analysis, which might serve for a modification of Feyerabend’s Thesis I. Since Quine’s critique of analytic truth, and notwithstanding the fact that Quine rejected analyticity altogether, the analytic-synthetic distinction may still be viewed as a pragmatic one instead of a semantic one, such that any universally quantified descriptive statement believed to be true may be viewed as both analytic and synthetic instead of dichotomously as one or the other, i.e., it may be viewed as what Quine calls an “analytical hypothesis”. The laws found in physics and in many other sciences use mathematical syntax, where universal quantification is expressed implicitly by letting the numeric variables have no measurement values; the variables await assignment of their measurement values by execution of a measurement procedure or by evaluation in an equation from other variables having measurement values already assigned. Furthermore the universality in mathematical language is claimed only for measurement instances; it makes no ontological reference to entities. The following analysis applies to mathematically expressed language, but for the sake of simplicity the analysis is here given in terms of categorical statements, because such statements have explicit syncategorematic quantifiers.
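The point about implicit quantification in mathematical syntax can be illustrated with a familiar law. The example and its symbolization are mine, chosen only for illustration:

```latex
% A law stated in mathematical syntax, with numeric variables
% having no measurement values and thus implicitly universal:
F = m\,a

% Read as quantified over measurement instances, not entities:
% for every triple of measurement values (F, m, a) obtained by
% executing the relevant measurement procedures,
\forall F\,\forall m\,\forall a\;\bigl[\,F = m\,a\,\bigr]
```

The variables are satisfied only when measurement values are assigned by execution of a measurement procedure or by evaluation from other assigned variables, which is why the universality claimed is for measurement instances alone.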
Consider next a list of universally quantified affirmations having the same subject term, and which are believed to be true. The concepts associated with the descriptive terms predicated of the common subject by the several categorical affirmations in the list exhibit a composition or complexity in the meaning of the subject term. The meaning of the subject term may therefore be said to have component parts consisting of the predicating concepts, and its meaning thus need not be viewed wholistically.
Consider in turn the relations that may obtain among the concepts that are universally predicated in the believed universal affirmations having the common subject term. These predicate terms may or may not be related to each other by other universal statements. If any of the predicate concepts are related to one another by universally quantified negative statements, then the common subject term in the statements in the list is equivocal, and the predicate concepts related to one another by universal negations are parts of different meanings of the equivocal subject term. Otherwise the subject term common to the statements in the list is univocal, whether or not the predicate concepts are related to one another by universally quantified affirmations, and the predicate concepts are different component parts of the one meaning of the univocal subject term.
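The test for equivocation just described can be put schematically. The notation is my sketch, not the author’s:

```latex
% Let S be the common subject term, with predicates P and Q
% occurring in believed universal affirmations:
\forall x\,\bigl(S(x) \rightarrow P(x)\bigr), \qquad
\forall x\,\bigl(S(x) \rightarrow Q(x)\bigr)

% If the predicate concepts are related by a believed
% universal negation,
\forall x\,\bigl(P(x) \rightarrow \neg\, Q(x)\bigr),

% then "S" is equivocal: P and Q are parts of different
% meanings of the shared subject term. Absent such a negation,
% "S" is univocal, and P and Q are component parts of its one
% meaning, whether or not P and Q are themselves related by
% universal affirmations.
```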
Terms are either univocal or equivocal; concepts are relatively clear or vague. All concepts are always more or less vague, but vagueness may be reduced by adding or excluding semantic values. Adding universal affirmations to the list of universally quantified affirmations having the same subject term believed to be true reduces the vagueness in their common subject term by clarifying the meaning of the shared subject term with respect to the added predicate concepts that contain the added semantic values. Asserting universal negations relating concepts predicated of the common subject also clarifies the meaning of the subject term by showing equivocation and thus excluding semantic values. And asserting universal affirmations relating the concepts predicated of the common subject, clarifies the meaning of the subject term by revealing additional structure in the meaning of the common univocal subject term, and making a deductive system.
Semantics of Experiments
Now consider science: In all scientific experiments the relevant set of universal statements is dichotomously divided into a subset of universal statements that is presumed true for the test and the remainder subset of universal statements that is explicitly proposed for testing. The division is pragmatic. The former subset is called “test-design statements” and the latter subset is called “theory statements”. The test-design statements identify the subject of the test and the test procedures, and are presumed true for the test.
Consider a descriptive term that is a subject term in any one of the universal statements in the above-mentioned set, and that is common to both the test-design statements and the theory statements in the divided set. The dual analytic-synthetic nature of all the universal statements means that the common subject term has part of its semantics supplied by the concepts that are predicated of it in the test-design subset of statements. This part of the common subject term’s semantics remains unchanged through the test, so long as the division between theory and test-design statements remains unchanged. The proponents and advocates of the theory remainder-set of statements presumably believe that the theory statements are true with enough conviction to warrant empirical testing. But their belief does not carry the same high degree of conviction that they have invested in the test-design statements.
Before the execution of a test of the theory, all scientists interested in the test outcome agree that the universally quantified test-design statements, and also the particularly quantified language that describes the test’s initial conditions and its outcome with semantics defined in the universally quantified test-design statements, are believed true independently of the theory. Thus if the test outcome shows an inconsistency between the characterization supplied by the test-outcome statements and the characterization made by the theory’s prediction statements, the interested scientists agree that it is the theory that is to be viewed as falsified and not the universally quantified test-design statements. This independence of test-design and test-outcome statements is required for the test to be contingent, and it precludes the test-design statements from either implying or denying the theory to be tested or any alternative theory that addresses the same problem. Therefore for the cognizant scientific profession the semantical parts defined by the test-design statements before test execution leave the test-design’s constituent terms effectively vague, because test-design statements are silent with respect to any theory’s claims.
Notwithstanding that the originating proposer and supporting advocates of the theory may have such high confidence in their theory that for them the theory may supply part of the semantics for its constituent terms even before testing, they have nonetheless agreed that in the event of a falsifying test outcome the test-design language trumps the theory. This amounts to saying that functionally the theory does not define any part of the semantics of its constituent terms that are common to the test design. In other words the test-design statements assume the vague semantical status that Heisenberg called the physicist’s “everyday” concepts.
After the test is executed in accordance with its test design, the particularly quantified test-outcome statements and the theory’s particularly quantified prediction statements are either consistent or inconsistent with one another (after discounting empirical underdetermination not attributable to failure to execute the test in accordance with the agreed test design). In other words they either characterize the same observed or measured instances or they do not. If the test outcome is an inconsistency between the test-outcome description and the theory’s prediction, then the theory is falsified. And since the theory is therefore no longer believed to be true, it cannot contribute to the semantics of any of its constituent descriptive terms, even for the proposer and advocates of the theory.
But if the test outcome is not a falsifying inconsistency between the theory’s prediction and the test-outcome description, then for each term common to the theory and test design the semantics contributed by the universally quantified test-design and theory statements are component parts of the univocal meaning complex of each shared descriptive term, and they identify the same instances. The additional characterization supplied by the semantics of the tested and nonfalsified theory statements thereby resolves the vagueness that the meaning of the common descriptive terms had before the test, especially for those who did not share the conviction held by the theory’s proposers and advocates.
Nonfalsified theory redefines the test design
In some sciences such as physics a theory’s domain may include the test-design domain for the theory. As stated above, before such a theory’s test is executed and before the test outcome is known, the test-design language must be vague about the tested theory’s domain, in order for the test to be independent of the theory’s description. But if after the test the outcome is known to be nonfalsification of the tested theory, then the nonfalsified theory has become a law, and the domain of the test-design language may at least in principle be describable with the language of the nonfalsified theory, now a law. This application of the tested and nonfalsified theory to its test domain changes the semantics of the test-design statements by still further resolving the vagueness in the test-design language.
While the vagueness in the concept associated with the common subject term is reduced by a nonfalsifying test outcome, the vagueness in the concepts predicated of the subject term by the two sets of statements is not necessarily resolved merely by the nonfalsifying test outcome relating the predicate concepts to one another. Resolution of the vagueness in these predicate concepts requires additional universal statements relating the predicates in the tested and nonfalsified theory and the test-design statements. This would be the case if the statements formerly used as independent test-design statements were revised after the test, so that they could be incorporated into a deductive system and thus derived from the nonfalsified theory. The resulting deductive system makes the universally quantified test-design statements logical consequences of the new laws due to the theory having been tested and not falsified. But this loss of independence of the test-design statements is no longer important for the test, since the nonfalsifying test outcome is known. This amounts to deriving from the theory a new set of laws applicable to the functioning of the apparatus and physical procedures of an experiment described by the test-design statements.
In 1925 when rejecting positivism Einstein told Heisenberg that the physicist must assume that this can be done. Einstein argued that it is in principle impossible to base any theory on observable magnitudes alone, because in fact the very opposite occurs: it is the theory that decides what the physicist can observe. Einstein argued that when the physicist claims to have observed something new, he is actually saying that while he is about to formulate a new theory that does not agree with the old one, he nevertheless must assume that the new theory covers the path from the phenomenon to his consciousness and functions in a sufficiently adequate way that he can rely upon it and can speak of observations. The claim to have introduced nothing but observable magnitudes is actually to have made an assumption about a property of the theory that the physicist is trying to formulate. But Einstein required only an assumption, not an actual deductive derivation.
Feyerabend’s universality criterion
Feyerabend’s first criterion of universality, set forth in his Thesis I, requires that the test-design laws, which describe the macrophysical experimental setup, be incorporated into a deductive system consisting of the microphysical quantum theory, in a manner analogous to the incorporation of Kepler’s empirical laws into Newton’s theory enabled by the approximate nature of Kepler’s laws.
As it happens, contrary to Bohr’s instrumentalist thesis and to Heisenberg’s closed-off-theories doctrine but consistent with Heisenberg’s pragmatic semantical views, the microphysical phenomena can be described with the semantics of the quantum theory and without classical concepts. This is what Heisenberg did when he construed the observed tracks in the Wilson cloud chamber using his quantum theory. But he offered no quantum description of the functioning of the macrophysical apparatus, i.e., the Wilson cloud chamber, by means of laws logically derived from the quantum theory.
Since the 1990s there has been a successful replacement of the traditional language with its classical concepts by a new language better adapted to the mathematics of quantum theory. In his Understanding Quantum Mechanics (1999) Princeton University physicist Roland Omnès reports that recent conceptual developments using the Hilbertian framework have enabled all the features of classical physics to be derived directly from Copenhagen quantum physics. And he says that this mathematics of quantum mechanics is a “universal language of interpretation” for both microphysical and macrophysical description. This new language accomplishes what Bohr’s “complementarity” use of classical concepts cannot. Furthermore the deductive relationship has not only resolved the vagueness in the semantics of Heisenberg’s “everyday” language, but because it is deductive, it has still further resolved the vagueness in the semantics of the vocabulary of both macrophysics and microphysics.
Alternative to relativism and deductivism
Contrary to Feyerabend, relativism is not the exclusive alternative to deductivism. The choice between classical and derived quantum macrophysical descriptions is a false dichotomy. The universal test-design statements, such as those describing the experimental setup, need not say anything about the fundamental constitution of matter; that is what the microphysical theory describes. The pretest independent test-design statements are vague with respect to any microphysics, and Heisenberg’s term “everyday” is appropriate to describe the vague concepts associated with these terms. After a nonfalsifying test the semantics supplied by the quantum theory provides further resolution of the concepts associated with the terms common to both test-design and theory statements. The vagueness in the “everyday” concepts is never resolved into classical concepts. The whole meaning complex constituting each concept is more properly called a “quantum” concept, given that the quantum theory is not falsified, because the quantum theory resolves vagueness by adding the quantum-theory-defined meaning parts to each whole meaning complex. And it is for this reason that Heisenberg was able to use quantum concepts when he described the observed free electron in the Wilson cloud chamber, since those concepts were resolved by the quantum context supplied by his matrix mechanics and later by his indeterminacy relations.