
Report of the Committee on Accounting Theory Construction and Verification
Source: The Accounting Review, Vol. 46, Committee Reports: Supplement to Volume XLVI of The Accounting Review (1971), pp. 51+53-79
Published by: American Accounting Association
Stable URL: http://www.jstor.org/stable/244058


Report of the Committee on Accounting Theory Construction and Verification

CONTENTS

Theory Construction and Verification: An Overview
The Nature of Theories
A Tentative Metatheory for Accounting
Prediction in Decision Models
Accounting Measurement: Its Construction and Verification
An Illustrative Case
Concluding Remarks and Inferences


I. THEORY CONSTRUCTION AND VERIFICATION: AN OVERVIEW

The process of theory construction and verification has for many years been the subject of analysis from a variety of points of view by scholars interested in the philosophy of science. Only recently has it become a topic for research and discussion in accounting. This interest has manifested itself in diverse forms, ranging from somewhat casual statements seeking to identify the relationship between accounting theory and accounting practice to more abstract metaphysical and philosophical musings.

Opinions differ concerning the way in which theories are constructed. It seems that many theories come about by "intuitive flashes." There are "laws of nature" (observed regularities) which are discovered by induction. However, seldom are the theories which connect and explain these laws in a coherent and general way discovered by induction. Thus, the process of theory construction is imperfectly understood. Clues to this lack of understanding are given by the language used to describe the process. For example, we characterize a great theoretician as a "genius," which implies that his intellect is beyond our understanding or that we could not have duplicated the process. The "intuitive flash" description implies a lack of understanding of the mental processes by which the discovery is made.

Even though we do not know how theories are constructed, we can note that a new theory usually arises as a result of "anomalies" in the old theory.1 Scientific theories provide certain "expectations" or "predictions" about phenomena, and when these expectations occur, they are said to

1 Thomas S. Kuhn, The Structure of Scientific Revolutions (The University of Chicago Press, 1962), Ch. VI.

"confirm" the theory.2 When unexpected results occur, they are considered to be anomalies which eventually require a modification of the theory or the construction of a new theory. The purpose of the new theory or the modified theory is to make the unexpected expected, to convert the anomalous occurrence into an expected and explained occurrence.

The question of verification of theories, or theory validation, has received even less attention in accounting than has theory construction. Indeed, other than the convenient surmise made by some that accounting practice and verification of theory are essentially synonymous, few serious attempts have been made to identify that which constitutes verification. That this is a significant deficiency is evident when one reflects upon the wide variety of possible interpretations attached to this concept by Professor Machlup:

Verification in research and analysis may refer to many things, including the correctness of mathematical and logical arguments, the applicability of formulas and equations, the trustworthiness of reports, the authenticity of documents, the genuineness of artifacts or relics, the adequacy of reproductions, translations and paraphrases, the accuracy of historical and statistical accounts, the corroboration of reported events, the completeness in the enumeration of circumstances in a concrete situation, the reproducibility of experiments, the explanatory or predictive value of generalizations.3

2 Several different terms are used for this process. Rudolf Carnap, "Testability and Meaning," Philosophy of Science, Vol. 4, 1937, distinguishes between testable, verifiable, and confirmable. "Verifiable" refers to the intersubjective perception of observers concerning a proposition which specifies an individual occurrence. "Confirmable" refers to a collection of propositions (a theory). It is the "verification" of the singular propositions which is taken to be the "confirmation" of the theory which contains those propositions.

3 Fritz Machlup, "The Problem of Verification in Economics," The Southern Economic Journal (July, 1955), p. 1.


Whichever of these forms, if any, verification in accounting assumes, it depends essentially upon a metatheoretical or metaphysical concept of truth. Furthermore, for those disciplines which purport to be based on some type of underlying theory, it appears that there exist but two major classifications of distinctive truth criteria. Truth of a mathematical-type theory is predicated upon logical consistency. Truth of a theory in the physical or social sciences is based upon correspondence between deduced results and observable phenomena. Most of the apparent variations from these two basic verification goals in Machlup's enumeration are in fact merely instances of the application of theoretical propositions (if an underlying theory indeed exists) for which some assurance of "accuracy" is desired, not explicit attempts to verify the theory. For example, "the adequacy of reproductions, translations and paraphrases" is a goal of the reproducer, translator, or paraphraser as well as the recipient(s) of these operations. Whether they are adequate or accurate depends, it seems, both on the skill of the operator (practitioner) and the validity of the methodology or principles of operation which he uses to perform his task. It is the set of principles that is or should be the product of the theoretical structure, and these principles, not the individual applications of them by craftsmen of the trade, are the focus of the verification process. Cast in terms of the second of the basic truth criteria: do these principles (deduced results), when applied under controlled experimental conditions, produce results (observable phenomena) which are in harmony with the central objectives of the discipline? It is in this same context that we will attempt to outline in more specific terms the verification process as it might be conceived of in accounting.

Section II presents a brief review of the nature of theories as a basis for the tentative accounting metatheory outlined in Section III. The major implications and potential problems of this framework are examined in Sections IV and V. A comprehensive illustration is presented in Section VI to provide an integrated view of the application of these concepts to a specific measurement problem (interperiod tax allocation). Finally, in Section VII, some concluding remarks and inferences are offered.

II. THE NATURE OF THEORIES

A "theory" is, first of all, a set of sentences. Theories are expressed in a language and therefore the study of language is pertinent to the study of theories. Indeed, much of the philosophy of science is nothing but a study of language, although it is a study of the language peculiar to the scientist. Morris,4 Carnap5 and others have distinguished three areas in the study of language: Syntactics, Semantics and Pragmatics.

Syntactics is the study of the relations of signs to signs. By themselves syntactical propositions have no empirical content. The disciplines of mathematics and logic are syntactical in that a mathematical or logical proposition says nothing whatsoever about the real world. Such propositions are often referred to as "analytic propositions" and are characterized as being logically true, as opposed to being empirically true. For example, "If all electrons have magnetic moments and particle x has no magnetic moment, then particle x is not an electron" is an analytic proposition. One does not need to know the meaning of "electron" or "magnetic moment" to see that this proposition is true. It is true by virtue of the form of the sentence and the agreed upon way in which the logical constants (if-then, and, not) are used. One could replace the descriptive terms with nonsense words without

4 Charles Morris, Foundations of the Theory of Signs (University of Chicago, 1938).

5 Rudolf Carnap, Introduction to Semantics (Harvard University Press, 1942).


altering the truth value, e.g., "If all bzrs have wales and x has no wale, then x is not a bzr."6 As another example, the proposition "(a + b)² = a² + 2ab + b²" is analytic. It is true by virtue of the agreed upon rules of algebra which specify the way in which the signs of algebra may be arranged and manipulated.

Also, statements which are true by virtue of the meanings of their constituent terms are analytic; e.g., "a bachelor is an unmarried adult male" is true by virtue of the meanings of "bachelor," "unmarried," etc.7 Note that analytic propositions require a prior commitment to a set of rules or definitions. "A bachelor is an unmarried adult male" is analytically true because of the prior linguistic commitment to define "bachelor" in a particular way. "Fifteen is one-half of thirty" is analytically true because of a prior commitment to the rules of arithmetic.

Semantics is the study of the relation of signs to objects or events. If signs are to have referents in the real world, it is necessary to have rules or understandings as to the linkage between a particular sign and a particular object or event. These are called "semantical rules" and it is these rules that give empirical meaning to the signs. It is possible, of course, to define a particular sign by showing how it relates to other signs, but it is impossible to define all signs in this way. More precisely, the definition of signs by reference to other signs can be done

6 See Rudolf Carnap, Philosophical Foundations of Physics (Basic Books, Inc., 1966), Chs. 26-28 for a more complete discussion. The electron example appears on p. 266.

7 Arthur Pap, An Introduction to the Philosophy of Science (The Free Press of Glencoe, 1962) distinguishes between "broadly analytic" and "strictly analytic" propositions. He argues that broadly analytic propositions like "a bachelor is an unmarried male" can be reduced to strictly analytic propositions with the help of appropriate definitions (p. 97). The distinction and the disagreement about the possibility of reduction are not pertinent to this study.

only on pain of circularity and lack of empirical meaning.8

It is generally agreed, especially in connection with metrical notions, that the semantical rules should be in the nature of operational definitions. The selection of a particular sign to refer to a particular object or event is quite arbitrary initially, but subsequent use of that sign requires careful specification of the operations by which we link the sign to the object or event.

Given the semantical rules for linking signs to objects or events and the syntactical rules for linking signs to signs, propositions with empirical content can be formed. In contrast to the analytic propositions discussed above, these "empirical propositions" are intended to say something about the real world and therefore their truth value is contingent upon observation.9 For example, the proposition "John is a bachelor" is intended to say something about a thing in the real world and "John" is the sign for that thing. The truth value of that proposition can be either true or false without contradiction.

Thus, the truth value of analytic and empirical propositions is discovered by different procedures. Analytic propositions are proved by the use of syntactical rules.

8 "Definitions of terms by reference to other terms belonging to the same language system (for example, the language of physics) are internal definitions, and one might, by using this kind of definition alone, build a whole ingrown language whose terms referred to each other but to nothing else. Definitions which go outside the language system to something else (perception, for instance) are external definitions, and they are required if the whole system is to mean anything." Peter Caws, The Philosophy of Science (D. Van Nostrand Company, Inc., 1965), p. 46.

9 There are several different names for the contrasting propositions. Since they are contingent upon observations, what we have called "empirical propositions" are called "contingent propositions" by some philosophers. Others contrast "a priori statements" with "empirical statements." Followers of Kant contrast "analytic" with "synthetic." Earlier writers were making the same distinction when they spoke of "necessary" and "accidental" truth. In this paper we will generally use the terms "analytic" and "empirical" to make the distinction.


They are either true (valid) or contradictory. Empirical propositions are verified by operations of observation. They are either true (conform with the observations) or false.

Pragmatics is the study of the relation of signs to the users of those signs. Different signs invoke different responses from a particular user even though those signs are intended to have the same referent. Different users may interpret the same sign in different ways.

This three-part division of the study of language provides the basis for a two-part division of the sciences. In general, sciences may be classified as empirical or nonempirical. Examples of the latter are logic and mathematics. These sciences are composed exclusively of analytic propositions and therefore do not depend upon empirical findings for their truth value. The empirical sciences have for their purpose the explanation and prediction of occurrences in the real world. The propositions of empirical science, therefore, are said to be true only if they correctly explain or predict some empirical phenomena.

Despite the empirical test for the assignment of truth value, the theories of empirical science are not composed entirely of propositions which can be verified by observations. Instead, a theory in empirical science is composed of a combination of analytic and empirical propositions. The syntactical or logical part of a theory can be abstracted and studied in isolation from the empirical part of that theory. This process is usually called the "axiomatization" or "formalization" of a theory and the result is called an "axiomatic" or "formal" system. Of course the formal system per se is not an empirical theory. In order for the formal system to function as a theory of empirical science the semantical rules must be added.

These ideas are often illustrated by using geometry as an example. Geometry qua mathematics is a formal system. It says nothing about the real world and its theorems can be deductively derived from its axioms. It has a set of primitive terms, e.g., "point," "straight line," which have no empirical meaning. By semantical rules these primitive terms are given empirical meaning, e.g., a point is taken to mean the intersection of cross-hairs or the tip of a surveyor's stake; a straight line is taken to mean a taut string or the path of light rays. In this way geometry becomes an empirical theory. It is now a theory of physics and as such it says something about the real world. The outputs of geometry qua physics are intended to be empirical propositions and their truth value can be verified by observations. For example, the length of a hypotenuse can be calculated from the theory. The theory specifies the relevant empirical inputs (the lengths of the two sides) and it specifies the way in which those inputs are manipulated. The output is the length of the hypotenuse, and that output can be verified, within the limits of measurement error, by a separate empirical operation.
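This geometric illustration can be condensed into a short sketch; in the Python fragment below, which is ours rather than part of the original argument, the measured lengths and the tolerance are hypothetical:

    import math

    def predicted_hypotenuse(a, b):
        # The formal system: the Pythagorean theorem manipulating the inputs.
        return math.sqrt(a**2 + b**2)

    # Semantical rules: the inputs are the measured lengths of the two sides
    # of a physical right triangle (hypothetical values).
    side_a, side_b = 3.02, 3.98
    prediction = predicted_hypotenuse(side_a, side_b)

    # Verification: a separate empirical operation measures the hypotenuse
    # directly; agreement within measurement error verifies the proposition.
    measured = 5.01     # hypothetical direct measurement
    tolerance = 0.05    # assumed bound on measurement error
    print(abs(prediction - measured) <= tolerance)  # True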

The last step, the verification of the output, is indispensable if the theory is intended to be empirical. It is not required that every proposition in an empirical theory be verifiable; there are many terms which operate within the formal system that are not subject to observation. (These are often called "theoretical terms" in contrast to "observational terms.") However, an empirical theory must have some propositions that are verifiable. The verification of these individual propositions is taken as a test of the theory. If the propositions are found to be true, then the theory is said to be "validated" or "confirmed."10

10 Carl Hempel, Philosophy of Natural Science (Prentice-Hall, Inc., 1966), p. 30, makes this point as follows:

But if a statement or set of statements is not testable at least in principle, in other words, if it has no test implications at all, then it cannot be significantly proposed or entertained as a scientific hypothesis or theory, for no conceivable empirical finding can then accord or conflict with it. In this case, it has no bearing whatever on empirical phenomena, or, as we will also say, it lacks empirical import.

He then discusses an article entitled "Gravity and Love as Unifying Principles" and characterizes the view that physical bodies are attracted by love as a "pseudo-hypothesis" because:

No specific empirical findings of any kind are called for by this interpretation; no conceivable observational or experimental data can confirm or disconfirm it. In particular, therefore, it has no implications concerning gravitational phenomena; consequently, it cannot possibly explain those phenomena or render them "intelligible." To illustrate this further, let us suppose someone were to offer the alternative thesis that physical bodies gravitationally attract each other and tend to move toward each other from a natural tendency akin to hatred, from a natural inclination to collide with and destroy other physical objects. Would there be any conceivable way of adjudicating these conflicting views? Clearly not. Neither of them yields any testable implications; no empirical discrimination between them is possible. Ibid., p. 31.


Hempel11 presents all of this in a simple diagram:

    O1, O2  --(3.2)-->  s(x1) = w1/v1
                                           s(x1) < s(x2)  --(3.3)-->  O5
    O3, O4  --(3.2)-->  s(x2) = w2/v2

    Data described in       Systematic connection effected by       Prediction in
    terms of observables    statements making reference to          terms of
                            nonobservables                          observables

The arrow represents a deductive inference and the number above the arrow refers to a particular sentence by which the deduction is effected. These sentences are:

(3.2) Def. s(xi) = w(xi)/v(xi)

which is the definition of the specific gravity of a body xi, where wi and vi refer to the weight and volume of that body; and

"(3.3) A solid body floats on a liquid if its specific gravity is less than that of the liquid."

which refers to the expression s(x1) < s(x2) in the diagram; i.e., (3.3) can be restated as "If s(x1) < s(x2), then x1 floats on x2."

11 Carl Hempel, Aspects of Scientific Explanation (The Free Press, 1965), p. 181. Minor changes were made in the symbols for clarity and to correct a typographical error.

The particular values of weight and volume are obtained by "appropriate operational procedures," i.e., "in terms of the directly observable outcomes of specified measuring procedures," and the symbols O1, O2, O3, O4 are observation terms connecting O1 to w1, O2 to v1, etc. O1 through O4 are the observational inputs to the formal system. O5 is the observational output. It is also necessary that the output be operationally defined so that it can be verified.

This diagram of an empirical theory, albeit an overly simple theory, is thought to capture the essentials of any empirical theory. Hempel intends it to be general and shows how more complex theories (Newtonian physics) can be cast in a similar schematic.12 Margenau's "construct" and "percept" diagrams are almost identical. He says that theories "attain validity through empirical confirmation. This process represents a circuit, traceable in either sense, from perception (observation) via rules of correspondence to constructs and back along some other route to perception."13

In summary, a theory of empirical science may be divided into two parts: (1) A Formal System which is composed of abstract

12 Ibid., p. 185.

13 Henry Margenau, The Nature of Physical Reality (McGraw-Hill Book Company, Inc., 1950). The diagrams appear on pp. 85, 93 and 106, and the quotation on p. 121.


symbols and a set of syntactical rules for manipulating those symbols; and (2) An Interpretation of the formal system which connects certain symbols to observations via semantical rules. The propositions in the formal system are analytic in the sense that they are deduced from axioms or definitions. The propositions of the interpreted theory are intended to be empirical and they must be tested by observations. The semantical rules are connected with two different kinds of observations: (1) inputs and (2) outputs. In order for a theory to be complete, the kinds of observations to be made and the measurement rules must be specified. These are the empirical inputs to the formal system. Those inputs are then manipulated according to the syntactical rules. The outputs of the formal system are connected via semantical rules to observations. If the observations are as specified by the formal system, then that particular proposition is said to be verified.

This may be diagrammed as follows:

    [Diagram: on the theory plane, inputs undergo syntactical manipulation to yield outputs; semantical rules link both the inputs and the outputs to the observation plane below.]

The formal system (on the theory plane) is concerned exclusively with the syntactical rules. The inputs and outputs are in the form of abstract symbols. The semantical rules provide specific values for the inputs in a particular case, i.e., numerals are inserted in place of the abstract symbols. These numerals are manipulated in the same way as the abstract symbols are manipulated, i.e., in accordance with the syntactical rules. Then the outputs are also specific numerals instead of symbols and they provide, via semantical rules, an expectation or prediction of an observable occurrence. If the occurrence is in fact observed then that particular case is said to be verified. If enough cases are verified the theory is said to be confirmed.
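The cycle just summarized can be sketched as a small program. The following Python fragment is our illustration, not part of Hempel's or Margenau's apparatus; the measurement functions, the formal system, and the tolerance are hypothetical stand-ins:

    def verify_case(measure_inputs, formal_system, measure_output, tolerance):
        # One verification cycle: semantical rules supply numerals for the input
        # symbols, syntactical rules manipulate them, and the predicted output is
        # compared with a separate observation.
        numerals = measure_inputs()            # semantical rules: observations -> numerals
        prediction = formal_system(*numerals)  # syntactical rules: manipulation
        observation = measure_output()         # separate observation of the output
        return abs(prediction - observation) <= tolerance

    # Hypothetical use, with F = m * a as the formal system:
    case_verified = verify_case(
        measure_inputs=lambda: (2.0, 3.0),   # measured mass and acceleration
        formal_system=lambda m, a: m * a,    # the syntactical manipulation
        measure_output=lambda: 6.1,          # force measured by a separate operation
        tolerance=0.2,
    )
    print(case_verified)  # True: this case is verified; enough such cases confirm the theory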

III. A TENTATIVE METATHEORY FOR ACCOUNTING

The above outline of an empirical theory provides a natural division of research activities. Different researchers concentrate on the different parts of the theory, i.e., on the semantic, syntactic and pragmatic elements. This specialization has, in fact, occurred in accounting.

In the syntactics area, we have the entire history of trying to develop "concepts" or "principles." The idea behind stating such concepts or principles is to allow one to deduce the correct procedure in a specific case. Thus, the concepts or principles are intended to form a deductive, if not a


completely axiomatized, system. Recent efforts toward prescribing a new formal system include Mattessich14 and Moonitz and Sprouse.15 Those efforts could be contrasted

14 Richard Mattessich, Accounting and Analytical Methods (Richard D. Irwin, Inc., 1964).

15 Maurice Moonitz, "The Basic Postulates of Accounting," Accounting Research Study No. 1 (American Institute of Certified Public Accountants, 1961), and Robert T. Sprouse and Maurice Moonitz, "A Tentative Set of Broad Accounting Principles for Business Enterprises," Accounting Research Study No. 3 (American Institute of Certified Public Accountants, 1962).


to the attempts to formally describe the extant system by Ijiri16 and Sterling.17

The pragmatics area includes the whole of the "behavioral" research in accounting. Accounting reports are signs which affect the behavior of people, and a number of people have attempted to test this effect. Recent efforts include the Beaver18 and Ball and Brown19 studies on the effects of price and volume changes occasioned by the issuance of earnings reports. A complete summary of such behavioral studies is given by Williams and Griffin.20

Less research effort has been put into the semantic area. On the input side, there is a long history of discussions about such things as the feasibility and objectivity of various measures. However, little testing of various measures under such criteria has been carried out. Recently, McDonald21 has completed a test of the feasibility and objectivity of various value measures. We believe that more effort should be directed toward the measurement of the inputs and that competing measurement schemes ought to be subjected to rigorous tests.

The semantic connections on the output side are even more neglected. The extant theory of accounting, the theory of matching expenses with revenues, has not been confirmed. It is difficult to know just what one would mean by "confirmation" in connection with the extant theory. The problem is that the individual outputs of the current theory of matching expenses and revenues

16 Yuji Ijiri, The Foundations of Accounting Measurement (Prentice-Hall, Inc., 1967).

17 Robert R. Sterling, "An Abstract of Accounting Theory," The Annals of Economics and Management Science (No. 5, 1968).

18 William H. Beaver, "The Information Content of Annual Earnings Announcements," Empirical Research in Accounting: Selected Studies, 1968.

19 Ray Ball and Philip Brown, "An Empirical Evaluation of Accounting Income Numbers," Journal of Accounting Research, Vol. 6, No. 2.

20 Thomas H. Williams and Charles H. Griffin, "On the Nature of Empirical Verification in Accounting," Abacus, December, 1969, pp. 157-78.

21 Daniel L. McDonald, "A Test Application of the Feasibility of Market Based Measures in Accounting," Journal of Accounting Research, Vol. 6, No. 1.

are not, in many if not most instances, empirical propositions; rather, they are analytic. The outputs of accounting may be described as a function of the inputs: Vi = f(q1, q2, . . .). The q's are the inputs which are manipulated according to the prescribed functional form and Vi is the output. For example, in the case of a depreciable asset, the inputs are the quantity of dollars expended, the quantity of years of service life, and the quantity of dollars expected to be received upon retirement of that asset. The function is either straight line or one of the forms of accelerated depreciation. The output, Vi, is then either the "unexpired cost" or the "expired cost." In the case of inventories, the inputs are the quantities, the dollars expended, and the number of units, and the functional form is either LIFO or FIFO or some kind of average. The output is again the unexpired cost of the inventory or the cost of goods sold. The difficulty is that an output such as "unexpired cost" is not subject to a separate measurement operation by direct reference to a real world counterpart and, therefore, is not verifiable in the sense described in the previous section. One can attempt to assure oneself of the accuracy of the inputs and of the accuracy of the calculation (in itself, a limited form of verification), but one cannot separately measure such analytic outputs of the system. While there are instances of output from the present system which are empirical and therefore independently verifiable (e.g., the cash position), most of the output is analytic (in the sense described above) and thus cannot be independently verified. Without the possibility of verifying the bulk of the individual outputs of the system, it is impossible to confirm the theory in the way that we have used "confirm" in the previous section.
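For concreteness, this functional character can be sketched in code. The Python fragment below assumes straight-line depreciation and uses hypothetical figures:

    def straight_line_unexpired_cost(cost, salvage, service_life_years, years_elapsed):
        # Vi = f(q1, q2, ...): the output is produced by a prescribed functional
        # form operating on the inputs; it has no real-world referent of its own.
        annual_expense = (cost - salvage) / service_life_years
        return cost - annual_expense * years_elapsed

    # Hypothetical inputs: dollars expended, expected salvage, years of service life.
    book_value = straight_line_unexpired_cost(cost=10_000.0, salvage=1_000.0,
                                              service_life_years=9, years_elapsed=4)
    print(book_value)  # 6000.0: verifiable only as a calculation, not by separate
                       # measurement of an "unexpired cost" in the world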

Given our inability to confirm extant accounting theory, we have several lines open to us. First, we could claim that we are a non-empirical science and thus avoid the



problem of confirmation. Second, we could attempt to alter the theory so that the outputs would be subject to direct verification. Third, we could expand the "theory of accounting" to include decision theories. To put it another way, we could conceive of accounting as being the measurement-communication function of the decision process.22 This latter option appears preferable to us at the present time, and our conception of it is presented in Diagram 1.

    [Diagram 1. Accounting as a Measurement-Communication Function: on the theory plane, the accounting (measurement-communication) function feeds prediction models, which in turn feed decision models; solid lines denote direct flows of symbolic representation, and dashed lines denote verification links (to the extent they exist) back to the observation plane.]

The inputs to the accounting system are observations made on real world phenomena. The outputs from the accounting system are both empirical (e.g., the measure of cash position) and analytic (e.g., "net income"). Only the empirical outputs can be directly verified by independent reference back down to the observation plane. The

22 For a more complete analysis of the possible interpretations of the nature of accounting and the character of its outputs (in terms of real world referents), see Sterling, "On Theory Construction and Verification," The Accounting Review, July, 1970, pp. 449-54, and Williams and Griffin, op. cit., pp. 150-54.

analytic outputs must somehow be verified through the prediction models and the decision models in which they, along with the empirical outputs, are used. The outputs from prediction models are primarily empirical (e.g., future stock prices), although it is possible that some may be analytic. Decision models utilize both empirical and theoretical outputs from both the accounting system and the prediction models. The outputs of the decision models purport to satisfy the objective, or goal, criteria inherent in the models, and these criteria refer to observable phenomena in the observation plane; thus a link, albeit a complicated one which depends on the "accuracy" of the inputs and outputs through three system models, is established back to the real world, and a testable proposition emerges.

Under this scheme, the outputs of the accounting system which are analytic as opposed to empirical present special verification problems. Such accounting outputs are similar to the notion of specific gravity in Hempel's example. We can observe that one


body floats on another, but we cannot directly observe the specific gravity of a body. Instead, the specific gravity is calculated from the weight and volume. Thus the specific gravity is an analytic term. Accounting terms such as depreciation expense, income, working capital, etc. are similarly analytic in that they are calculated from certain inputs, are not subject to separate measurement, and are not verifiable by reference to real world phenomena. Rather, it is the entire theory plane (the measurement-communication function, the prediction models, and the decision model in their interdependent network) that would be subject to confirmation. The decision theory outputs refer to the plane of observation and thus we could choose among competing measurement models, for a given decision model and a given set of prediction models, on the basis of which one best enabled us to achieve the stated goals. The decision model would specify what properties are to be measured. The accounting function would be to measure those properties that are specified by the decision model. The example given by Hempel is one example of a "decision model." If someone wants to know whether one body will float on another, then he can apply this model. The model precisely specifies what properties are to be measured and how they are to be manipulated. Weight and volume are to be measured, and a host of other properties are not to be measured. Thus, the decision model provides us with a definition of relevance. Those properties that are specified by a decision model are relevant to that model. Those properties that are not specified by a decision model are irrelevant to that model. The same is true with economic decision models. Certain properties will be specified and therefore should be measured by accountants. Other properties not specified should not be measured because they are irrelevant.
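Recast as a toy program, Hempel's floating-body model makes the point about relevance explicit; the Python below is a sketch of ours, with hypothetical measurements:

    def specific_gravity(weight, volume):
        # An analytic term: calculated from the inputs, not separately observed.
        return weight / volume

    def will_float(w_body, v_body, w_liquid, v_liquid):
        # The model names the relevant properties (weight and volume) and how to
        # manipulate them; color, temperature, cost, etc. are irrelevant because
        # the model never calls for them.
        return specific_gravity(w_body, v_body) < specific_gravity(w_liquid, v_liquid)

    # Hypothetical measurements of a wooden block and a sample of water:
    print(will_float(w_body=0.6, v_body=1.0, w_liquid=1.0, v_liquid=1.0))  # True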

Even the outputs of the accounting system which are empirical require reference to decision models. While such empirical outputs can be confirmed by separate direct measures, why should they have been produced at all? Because, and only if, they are required as an input for some decision model (including herein the general objective of identifying the present state of a system, as there is some underlying decision model implicit in such requests). Thus for empirical as well as for analytic measures, their specification as required inputs for decision models is the signal of their relevance.

Under this interpretation, there is in a certain sense no separate theory of accounting. Every theory calls for measurements, and accounting is simply the measurement activity for certain kinds of decision models. In the physical sciences the development of measurements is inextricably entwined with the construction of theories. After the theory has been established and the measurement operations are well defined, then the measurement activity can be turned over to technicians. During the development stage, however, the very concepts that one seeks to measure are strongly influenced by the theory that one is trying to establish. Sometimes this is expressed by noting that the very act of measurement requires a commitment to a particular theory. Compare, for example, the measurements in relativistic time and space to the measurements "of the same phenomena" in absolute time and space. It is for this reason that we believe that accountants qua metricians must become involved in the construction, refinement and elaboration of the decision models. Churchman has made the same point. For similar reasons, he has argued that operations researchers qua theoreticians must become involved with the development of measurements.23 Whether it is the metricians

23 C. West Churchman, "A Systems Approach to Measurement in Business Firms," unpublished paper presented at Accounting Colloquium I, University of Kansas, April, 1969.


who ought to become theoreticians or the theoreticians who ought to become metricians is not important. What is important is that theory construction and measurement development must be fused.24 Part of our previous problems has been the result of conceiving of them as separable activities.

In short, theory construction and measurement development are inseparable. The theory specifies, in a conceptual sense, what is to be measured, how the measurements are to be manipulated, and what measurable outcomes one can expect. This implies that the theory is constrained by what can be measured. It also implies that the measurements interact with the theory in that the predicted occurrences will be either verified or falsified by separate measurements. Some of the measurements may be considered anomalous and result in the modification or rejection of the theory. Similar types of observations might be made about the interaction of communicated outputs and the users (i.e., the personification of the decision models in all their infinite variety) of these measures. Thus, if accounting is to be a measurement-communication process, then the "theory of accounting" might be thought of as the set of substantive propositions that relate accounting measurements to decision models (semantic area) and decision making (pragmatic area).

The issue now becomes: what are the substantive propositions of concern to accounting? The answer will also help to resolve other issues of interest, such as: (a) how is the subset, accounting theory, isolated from all other substantive propositions, and (b) what are the sources of accounting theory?

According to A Statement of Basic Accounting Theory, accounting is

24 See Carl G. Hempel, Fundamentals of Concept Formation in Empirical Science (The University of Chicago Press, 1967), p. 46, for an excellent example of a concept, "haze," which has many desirable features (e.g., uniformity, precision, feasibility, objectivity, freedom from bias, etc.), but which does not meet the indispensable requirement of being specified by some theory.

The process of identifying, measuring, and communicating economic information to permit informed judgments and decisions by users of the information.25

It could be argued that the modifier economic is either redundant or needlessly restrictive. It could also be argued that the above definition does not satisfactorily delineate the boundaries of accounting, in the sense that virtually all disciplines can be viewed as providing a body of knowledge to act as a basis for informed judgments by man. However, the crucial element in the definition is the emphasis on decision making as the ultimate criterion upon which to evaluate accounting as a language. The structuring of the accounting system should be judged in the light of its ability to permit substantive propositions that will facilitate decision making.

A full application of this criterion will require a complete specification of the decision settings faced by all potential users of accounting data. Such a specification includes:

(1) an optimization rule, decision criterion, and/or goals of the decision maker (and constraints, if any),

(2) all feasible acts available to the decision maker,

(3) all possible events or states that may occur over the decision horizon,

(4) probability distributions relating to the set of possible events, and

(5) a set of payoffs, conditional upon the state and act.

The problem of specification in each of these areas poses a formidable task, and even a casual glance at the state of the art suggests that much remains to be done. Thus, one of the main areas for the development of accounting theory involves the specification of the decision processes of potential users.
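The five elements above can be collected into a small sketch of a decision setting; every act, state, probability, and payoff in the following Python fragment is hypothetical, and an expected-payoff criterion is assumed:

    # Hypothetical decision setting: (1) criterion = maximize expected payoff,
    # (2) feasible acts, (3) possible states, (4) state probabilities, and
    # (5) payoffs conditional upon state and act.
    acts = ["expand", "hold"]
    states = ["boom", "recession"]
    probabilities = {"boom": 0.6, "recession": 0.4}
    payoffs = {("expand", "boom"): 120.0, ("expand", "recession"): -40.0,
               ("hold", "boom"): 50.0, ("hold", "recession"): 10.0}

    def expected_payoff(act):
        return sum(probabilities[s] * payoffs[(act, s)] for s in states)

    best_act = max(acts, key=expected_payoff)
    print(best_act, expected_payoff(best_act))  # expand 56.0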

25 A Statement of Basic Accounting Theory, p. 1.


The sources of accounting theory conceived of in this perspective are those disciplines that provide some knowledge of the decision processes of users, including such fields as operations research, decision theory, the behavioral sciences, economics, finance, and other functional fields. Much of the earlier theory development in accounting was really a theory of income determination, and the source of such theory was economics. Now that the emphasis has shifted to a broader frame of reference, the sources of accounting theory will broaden as well. One additional area has to be added to the list. Since accounting is largely a measurement process, measurement theory is also a source of knowledge that will affect why and how accounting measures (i.e., questions such as reliability, objectivity, freedom from bias, etc.).

If accounting theory can be defined as those substantive propositions that relate accounting measurements to decision models and decision making, then two broad areas having operational and confirmable significance for accounting researchers can be identified: (1) the predictive power of accounting measurements and (2) the behavioral implications of accounting measurements.

Predictive ability refers to the use of accounting measurements in models which predict events of interest to decision makers. This is an important area of study for two major reasons: (a) prediction is an inherent part of decision making, a necessary condition before a decision can be made, and (b) predictive ability can be studied even though only part of the decision process is known or specified.26 However, thus far the predictive ability approach has been an essentially impersonal approach to the information needs for decisions; it has ignored the behavioral interactions of the

26 For a complete discussion of this area, see Beaver, Kennelly, and Voss, "Predictive Ability as a Criterion for the Evaluation of Accounting Data," The Accounting Review, October, 1968.

accounting data and the decision maker. As a result, another body of research has already begun to emerge that focuses upon both directions of this interaction (i.e., the effect of accounting measures on decision makers' behavior, and the effect of decisions upon the accounting data).27

Both of these research areas need a more completely specified metatheoretical framework. The Committee will, however, further examine only the first of these areas (predictive power) in this context. Hopefully, this analysis will provide some assistance to others seeking to develop a unified structure for behavioral studies.

IV. PREDICTION IN DECISION MODELS

Relationship between Prediction and Decision Models

Research into predictive ability concerns itself with the issue: what are the parameters of the decision process, and to what extent is accounting helpful in providing assessments of those parameters? Although this issue pervades all aspects of the decision process, the predictions that are probably most relevant are the following:

a. Predictions of future events or states (or probability distributions of them).

b. Predictions of alternative courses of action.

27 In Williams and Griffin, op. cit., pp. 154-78, most of the published empirical research projects in accounting through 1968 were classified under the following categories:

I. Effect of Accounting Measurements on Users (Internal and External)

II. Relationship between Accounting Measurements and Selected Dependent Variables

III. Behavior Patterns of Accounting Measurements

IV. Effect of Users on Accounting Measurements

Since classes II and III are essentially subdivisions of the "predictive power" area, and classes I and IV are subdivisions of the "behavioral implications" area, this provides one indication of the scope of our basic dichotomy.


c. Predictions of outcomes or payoffs that will occur given the future event and the future action.

These three types of predictions are illustrated in the following table using several different decision models:


Illustrative Decision Models

Predictions of future events:
  Investment decision: future price of shares; future value of GNP.
  Loan decision: firm failure or non-failure.
  Linear programming product mix model: cost and revenue coefficients.

Predictions of alternative courses of action:
  Investment decision: list of possible investments.
  Loan decision: list of possible loans.
  Linear programming product mix model: list of products; list of constraints.

Predictions of outcomes:
  Investment decision: rate of return.
  Loan decision: losses or gains.
  Linear programming product mix model: minimum cost or maximum profit under linear constraints.

Two-Stage Process of Prediction and Decision

It is now necessary to talk in terms of prediction models as well as decision models. The relationship can be portrayed as in Diagram 2. Since all decision models require predictions of the independent variables, prediction models of some type are required. The selection of both the decision model and the prediction model(s) is to some extent limited by the data available. At the same time the final model selection dictates the kinds of information which must be made available in the data bank.

The final operation of the decision model then requires goal specification, decision model specification, predictions (at least of the independent variables) from the operation of the prediction model, and perhaps (but not always) information directly from the data bank. It can be argued, of course, that no data enters the decision operation directly as data, but only as implicit predictors.
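A minimal sketch of this two-stage flow follows, with a hypothetical trend predictor and reorder rule of our own standing in for real prediction and decision models:

    def prediction_model(past_demand):
        # Stage 1: predict next-period demand by naive trend extrapolation.
        trend = (past_demand[-1] - past_demand[0]) / (len(past_demand) - 1)
        return past_demand[-1] + trend

    def decision_model(predicted_demand, on_hand):
        # Stage 2: the decision model consumes the prediction, not the raw data.
        return max(0.0, predicted_demand - on_hand)  # quantity to order

    history = [100.0, 104.0, 110.0]  # hypothetical data-bank records
    order = decision_model(prediction_model(history), on_hand=40.0)
    print(order)  # 75.0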

It is not always easy to distinguish a prediction model from a decision model. For example, in the linear programming product

mix model, it is not clear whether the model is designed to predict what mix will minimize cost or maximize profit, or whether it is to decide the mix that will be produced. In large part this difficulty arises because in the final analysis decisions are made in the mind of an individual. Even if the linear programming product mix model is built into the system, such that the mix is automatically set by the results of the model and it appears clear that the model is a decision model, someone decided to allow the model to make the decision. That decision was in turn based on confidence in the predictive capacity of the linear programming model.

Historically Accounting Information Systems Have Been Relatively Prediction Free and Decision Free

Of course many accounting numbers can emerge only after some types of prediction have been made (e.g., depreciation). Yet it is clear that few accountants have argued that the numbers that emerge from the accounting process are predictors of anything in particular. Net income is taken to be some form of statement about the past, but not necessarily a particularly good predictor of anything in the future. Even if some individuals are assumed to use accounting information as predictors for some decision model, there has been little deliberate attempt to enumerate or specify the decision models that might be using the information.


    [Diagram 2. Relation Between Decision Models and Prediction Models: in the operating phase, goals and the data bank feed the development and operation of the prediction models and the decision model, with predictions flowing from the former to the latter; in the evaluation phase, reports of outcomes and measures of the independent variables are used to compare predictions with actual results, to evaluate the prediction and decision models, and to evaluate goal accomplishment.]


It is in this sense that the accounting information system has tended to be prediction free and decision free.

Methods of Verifying Predictions

Logical Verification. Logical verification requires a specific enumeration of assumptions and critical analysis of the subsequent logical and mathematical propriety. For example, given the assumptions of the linear programming model, its mathematical propriety can be established.

Empirical Tests. Comparison of subsequent events with the prior prediction forms a compelling verification of predictions. But it is by no means clear how one compares the relative predictive capacity of two models. For example, assume model A yields predictions which on the average are reasonably correct but have a wide dispersion. Model B produces predictions which are on the average less correct, but they are relatively consistent and tightly clustered. Many would rank the models on the basis of mean square error, but this in itself assumes a particular kind of loss function associated with using the predictions.
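A toy comparison makes the dependence on the loss function visible; the error series in the following Python sketch are hypothetical:

    # Hypothetical prediction errors of two models against the same outcomes.
    errors_a = [0.0, 0.0, 0.0, 10.0]  # usually exact, occasionally far off
    errors_b = [4.0, 4.0, 4.0, 4.0]   # consistently off by a moderate amount

    def mse(errors):
        return sum(e * e for e in errors) / len(errors)

    def mae(errors):
        return sum(abs(e) for e in errors) / len(errors)

    print(mse(errors_a), mse(errors_b))  # 25.0 vs 16.0: mean square error prefers B
    print(mae(errors_a), mae(errors_b))  # 2.5 vs 4.0: an absolute-error loss prefers A

Which model is "better" thus depends entirely on the loss function the decision maker attaches to prediction errors.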

Verification in any Absolute Sense is Impossible. Our concern must be with the degree of confirmation of predictions, prediction models, and decision models. Verification and degrees of confirmation have long been a problem in the philosophy of science. The problem has been extensively discussed and is far from being solved. The interested reader can consult several of the references in Section II of this report for more information on this topic.

Verification of Predictions and Verification of Models

If predictions can be verified, it serves to verify the prediction model used to arrive at the predictions. Among other things this will require formal records of predictions and of subsequent outcomes.

Verification of decision outcomes is concerned with the verification of the decision model. When decision models are being verified, it is important that the outcomes which occur be compared with the outcomes that would be predicted by the decision model given the actual observed independent variables (not those predicted), and the actual action taken (not that which was planned or dictated by the model).
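A sketch of that comparison discipline, with hypothetical names and figures of our own:

    def model_payoff(action, state):
        # Payoff the decision model implies, evaluated at the action actually
        # taken and the independent variables actually observed (hypothetical rule).
        margin = state["price"] - state["unit_cost"]
        return margin * (1000 if action == "expand" else 400)

    observed_state = {"price": 5.0, "unit_cost": 4.0}  # actual values, not predictions
    actual_action = "expand"                            # action actually taken
    actual_outcome = 930.0                              # payoff actually realized

    implied = model_payoff(actual_action, observed_state)
    print(implied, actual_outcome)  # 1000.0 vs 930.0: the discrepancy bears on the
                                    # decision model itself, since input-prediction
                                    # error has been held aside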

Problems

Attempts to verify predictions and decision models encounter numerous problems, of which the following are most important. Most prediction models and decision models involve many independent variables. If the outcomes do not agree with the predictions or the payoffs do not agree with anticipations, there is considerable uncertainty as to whether the error is caused by specification inadequacies in the model or by measurement variation in the input or the output. This problem could possibly be isolated if the measurement problems on the independent variables could be solved. But this in fact is the second problem. If a prediction has been made and a subsequent observation has been made and there is a difference, it is easy to jump to the conclusion that the prediction was wrong. However, it is equally conceivable that the measurement or observation of the event or occurrence was incorrect and the prediction model is in fact valid. Finally, there is an interactive effect between models and measurement. A given model may be entirely appropriate if the measurements involved are precise. On the other hand, if the measurements are not sufficiently accurate, there still may be a model that they would have fit, but that model was not in fact the one used. This question of "proper" measurements, which is of primary concern to accountants, is the focus of the next section of the report.


V. ACCOUNTING MEASUREMENT: ITS CONSTRUCTION AND VERIFICATION

The ultimate goal of "accounting theory" is to be able to construct accounting measurement systems and to evaluate competing ones. The numerous measurement controversies in external reporting are prominent examples of the need for such evaluations. The purpose of this section is to discuss: (a) why measurement problems occur, (b) how alternative measurement systems can be verified or evaluated, and (c) what difficulties arise in attempting to conduct the verification process.

Problems of Measurement

Problems of measurement arise from at least three sources. First, the accountant may be unable (or unwilling) to specify the decision process of the user. Hence, he is unable to determine even what is to be measured, let alone how it is to be measured. Second, even if the parameters of the decision process (i.e., the independent variables in the decision model, which are the dependent variables in the prediction process) are known, there may be several alternative sets of independent variables that could be used in the prediction process. Third, even once the prediction model is specified at some conceptual level, the independent variables may be operationally defined (i.e., measured) in more than one way. The net result of these three factors is an unlimited number of measurement alternatives, and the need arises for a method of evaluating (i.e., verifying) the competing measures.

Verification of Measurements

Part of the verification process is the "audit," which is the set of procedures for verifying conformity to a specific measurement rule.28 However, we are also concerned with verification in a broader sense, which involves a determination of which measurement rule ought to be adopted.

28Part of the verification process also involves application of tests of logical propriety. However, this aspect of the measurement function is not discussed in detail here.

In the light of the discussion in earlier sections, the measurement alternatives must be evaluated in terms of their relative predictive ability. Specifically, predictive power refers to the relative predictive ability of an independent variable, when measured under each of the alternative measurement rules. A similar criterion is often used in economics to choose among alternative measurements. One such example is how best to measure the money supply. The economist has a theoretical construct of "money," defined in terms of factors such as general acceptability, store of value, and unit of exchange, among other factors. However, when he attempts to operationalize the concept, there are several alternatives available: currency only, currency plus demand deposits, and currency plus demand and time deposits are but a few of the alternatives. The economist will resolve this issue by examining how well the money supply predicts other economic variables (e.g., GNP), and he will select the measure of the money supply with the highest predictive power.
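A minimal sketch of this selection procedure follows; the series are fabricated, and a real study would use out-of-sample forecast error rather than the in-sample fit computed here.

```python
# Rank several operational definitions of "money" by how well each
# predicts GNP; all figures below are invented for illustration.
import statistics

gnp = [100.0, 104.0, 109.0, 113.0, 118.0, 124.0]
candidates = {
    "currency only":            [20.0, 20.5, 21.5, 22.0, 22.8, 23.9],
    "currency + demand dep.":   [50.0, 52.1, 54.6, 56.5, 59.1, 62.0],
    "currency + demand + time": [80.0, 82.0, 86.5, 88.0, 93.0, 96.5],
}

def r_squared(x, y):
    # Goodness of fit of a simple least-squares regression of y on x.
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

for name, money in candidates.items():
    print(f"{name}: R^2 vs GNP = {r_squared(money, gnp):.3f}")
# The measure with the highest predictive power would be selected.
```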

This measurement issue is essentially the same as the one the accountant faces when attempting to define the best measure of income, or liquid assets, or debt, or any other accounting concept.

Note that the previous sections further implicitly indicate that a complete evaluation in terms of predictive ability involves assigning a loss function to the prediction errors, which requires a consideration of the decision process of the user. In other words, the evaluation of alternative accounting measurements involves a cost-benefit analysis, where the benefits of each alternative are measured in terms of the decision process of the user.



Problems of Verification

Several problems arise in attempting to verify the measurements:

(1) One difficulty is the inability to specify the prediction and/or decision models relevant for users of accounting data.

(2) The evaluation of alternative measures is conditional upon the prediction and decision models assumed when conducting the analysis. Different predictive and/or decision models may produce a different ranking of alternative measurements.

(3) If certain accounting measurements are currently available while others are not, there is a potential bias in favor of the currently reported method. The current method of measurement may be influencing the decision model being used and/or some of the dependent variables in the prediction equations (e.g., market price of shares). This influence may be difficult to specify. In addition, it may be difficult to determine what the effect would be if the measurement method were changed.

(4) If different measures are optimal for different purposes (i.e., different decisions and/or predictions), the problem of satisfying competing user needs arises. Making such trade-off decisions in an attempt to reach some social optimum is a very difficult task.

VI. AN ILLUSTRATIVE CASE

The purpose of this section is to integrate the previous discussion in terms of a single illustration. The illustration chosen was the measurement issue concerning interperiod tax allocation. In order to construct and verify a theory (or theories) for interperiod tax allocation, some decision-prediction context must be specified. In this regard, it is interesting to note that previous "theoretical" discussions of the issue (e.g., APB Opinion No. 11) offer little help. As was pointed out earlier, most theory development has taken place in a decision-free and prediction-free context, and interperiod tax allocation is no exception.

The decision context chosen here is that of a manager of an investment fund.

The Decision Rule

Sharpe29 and Jensen30 have suggested three potential goals of an investment fund.

(1) The fund's portfolio should be a member of the efficient set of portfolios. Efficiency is related to the concept of diversification, and is relatively easily achieved by increasing the number of securities in the portfolio and by selecting from more than one industry. The efficient set consists of those portfolios which possess the highest expected return for a given level of risk or possess the least risk for a given level of expected return. It can be shown that the optimal portfolio for a risk-averse investor must be a member of the efficient set, since all other portfolios are strictly dominated by some member of the efficient set (with respect to risk and/or return).

(2) The fund portfolio's relative position on the risk spectrum should be stationary over time. Although initially an investment manager may be indifferent as to the relative riskiness of the fund's portfolio, subsequently he should attempt to maintain the relative risk level as stationary as possible. The reason is that the fund has attracted a clientele whose utility function is maximized by holding the fund's portfolio. Substantial shifts in the riskiness of the portfolio will cause the clientele to incur unnecessary transactions costs to alter their holdings such that they are optimal in the light of the new risk position of the fund.

29William F. Sharpe, "Mutual Fund Performance," Journal of Business, XXXIX, Part 2 (January, 1966), 119-138.

30Michael C. Jensen, "Risk, the Pricing of Capital Assets, and the Evaluation of Investment Portfolios," Journal of Business, XLII (April, 1969), 167-247.



(3) The fund may wish to provide superior analytical skills that will permit the detection of securities that are selling above or below their "intrinsic value." However, these skills are provided at a cost (e.g., salaries of the security analysts plus incremental brokerage fees incurred relative to a buy-and-hold policy), and the cost may exceed the benefits (e.g., the abnormal returns received from investing in the undervalued securities). While goals (1) and (2) are appropriate for any investment fund, goal (3) is to be pursued only if special conditions exist.

Let us assume with respect to the fund manager in the illustration that he does not pursue goal (3) and that he operates a sufficiently large fund that goal (1) is trivial to achieve. Hence, goal (2), stationarity of risk over time, is the dominant goal.

The adoption of goal (2) requires that risk be defined in some meaningful manner. Portfolio theory provides a rich literature on risk measurement.

Risk Measurement

In recent years, Sharpe and others31 have extended the earlier work of Markowitz32 to a simplified portfolio model (hereafter referred to as the diagonal model) and to a capital asset pricing model, which determines the equilibrium prices for all securities.33 Markowitz defined the riskiness of a portfolio of assets in terms of the variance of the portfolio's return, σ²(R_p).34 Under certain conditions the variance is the appropriate measure of risk. The conditions are:

31William F. Sharpe, "Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk," Journal of Finance, XIX (September, 1964), 425-442; John Lintner, "The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets," Review of Economics and Statistics, XLVII (February, 1965), 13-37; John Lintner, "Security Prices, Risk and Maximal Gains from Diversification," Journal of Finance, XX (December, 1965), 587-616; Jan Mossin, "Equilibrium in a Capital Asset Market," Econometrica, XXXIV (October, 1966), 768-783.

32Harry M. Markowitz, Portfolio Selection: Efficient Diversification of Investments. New York: Wiley, 1959.


1. That the utility function of the investor have the following properties:
   a. the first derivative be positive, and
   b. the second derivative be negative (i.e., a risk-averse utility function for wealth); and
2. That the return distributions of the individual securities are stable with a finite variance (i.e., a normal distribution).

The use of the variance is not as restrictive as it might first appear. Empirical evidence35 has shown that return distributions are adequately characterized as symmetric and that at the portfolio level the variance is highly correlated with other popular dispersion measures, such as the mean absolute deviation, the semi-standard deviation, the range, the interquartile range, and higher central moments of the distribution.

The variance of a portfolio's return is the sum of two terms. For convenience and without loss of generality, assume that an equal dollar amount is invested in each security. It can be shown that

σ²(R_p) = (1/N)·[average σ²(R_i)] + [(N - 1)/N]·[average σ(R_i, R_j)]   (1)

= (1/N)·(average variance) + [(N - 1)/N]·(average covariance)

where

σ²(R_p) = variance of the portfolio's return,
average σ²(R_i) = mean of the variances of the individual securities in the portfolio, Σ σ²(R_i)/N,
average σ(R_i, R_j) = mean of the covariances of each individual security with every other security, Σ Σ σ(R_i, R_j)/[N(N - 1)] over all i ≠ j,
N = number of securities in the portfolio.

As N increases, the first term, (1/N)·[average σ²(R_i)], converges to zero, and the second term, [(N - 1)/N]·[average σ(R_i, R_j)], converges to the average covariance among the securities that comprise the portfolio [i.e., as N → ∞, σ²(R_p) → average σ(R_i, R_j)]. For a diversified portfolio (i.e., where N is large), a security's contribution to the risk of the portfolio is measured by its average covariance with all other securities in the portfolio, not its variance.36
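A short numerical sketch of equation (1), with assumed magnitudes for the average variance and average covariance, shows the convergence directly:

```python
# As N grows, the variance of an equally weighted portfolio approaches
# the average covariance, not the average variance (equation (1)).
avg_variance = 0.090    # mean variance of individual security returns (assumed)
avg_covariance = 0.018  # mean pairwise covariance of returns (assumed)

def portfolio_variance(n):
    return (1.0 / n) * avg_variance + ((n - 1.0) / n) * avg_covariance

for n in [1, 2, 10, 50, 200, 1000]:
    print(f"N = {n:5d}: var(Rp) = {portfolio_variance(n):.5f}")
# The printed values tend to 0.018, the assumed average covariance.
```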

Because the concept of covariance is so crucial to an understanding of risk, it will be described in intuitive terms. Essentially, the covariance reflects the extent to which security returns move together. Two security returns are said to have positive covariance if, when one security's ex post return is larger than its expected value, the other security's ex post return is also larger than expected. Two securities are said to possess negative covariance with one another if, when one security's return is above its expected value, the other return is below its expected value.

36The covariance is defined as E{[R_i - E(R_i)][R_j - E(R_j)]}.


A security's return may have a high variance, but, if it has low covariance (ideally, negative covariance) with other securities, it is not really a risky security to hold, because its addition to the portfolio will tend to reduce the variance of the portfolio's return. Empirically, there may be positive association between a security's variance and its average covariance with other securities. However, a priori there is no obvious reason why such association would have to exist.

One limitation of the Markowitz model is the enormous amount of parameter estimation required in order to assess the variance of return for a portfolio. For an N security portfolio, the number of variance and covariance estimates is N(N + 1)/2, which for N = 1,000 is 500,500. In order to reduce the number of parameters to estimate, Sharpe37 has offered the diagonal model, which specifies the following relationships:

R_i = α_i + β_i R_M + ε_i   (2)

where

E(ε_i) = 0, σ(R_M, ε_i) = 0, σ(ε_i, ε_j) = 0 for i ≠ j,

R_i = return on security i,
R_M = return on all other capital assets in the market (hereafter referred to as the "market return"),
ε_i = an individualistic factor reflecting that portion of security i's return which is not a linear function of R_M,
α_i, β_i = intercept and slope associated with the linear relationship.

37William F. Sharpe, "A Simplified Model for Portfolio Analysis," Management Science (January, 1963), 277-293.


The model asserts that a security's return can be decomposed into two elements, a systematic component (β_i R_M), which reflects common movement of a single security's return with the average return of all other securities in the market, and an individualistic component (α_i + ε_i), which reflects that residual portion of a security's return that moves independently of the market-wide return. Intuitively, a motivation for the model can be provided by viewing events as being classified into one of two categories: (1) those events that have economy-wide impacts, which are reflected in the returns of all securities; and (2) those events which have an impact only upon one particular security. Dichotomizing events in this fashion is obviously highly abstractive. In fact, a third class of events immediately comes to mind: industry-wide events. However, previous empirical studies38 suggest that the omission of an explicit industry factor in the equation is not a serious misspecification of the model.

Within the context of the diagonal model, the variance of portfolio return is defined as

σ²(R_p) = (1/N)·[average σ²(ε_i)] + (average β)²·σ²(R_M)   (3)

where

average σ²(ε_i) = mean of the variances of the individualistic factors, Σ σ²(ε_i)/N,
average β = mean of the β_i's, Σ β_i/N,
σ²(R_M) = variance of the market return, R_M.

38Eugene Fama, Lawrence Fisher, Michael Jensen, and Richard Roll, "The Adjustment of Stock Prices to New Information," Report No. 6715, Center for Mathematical Studies in Business and Economics, University of Chicago, January, 1967; Ray Ball and Philip Brown, "An Empirical Evaluation of Accounting Income Numbers," Journal of Accounting Research (Autumn, 1968), 159-178; Philip Brown and Ray Ball, "Some Preliminary Findings on the Association between the Earnings of a Firm, Its Industry, and the Economy," Journal of Accounting Research, supplement to Autumn, 1967 issue, 55-77.


For N = 1 (i.e., for an individual security),

σ²(R_i) = σ²(ε_i) + β_i²·σ²(R_M)   (3a)

Note that, analogous to the Markowitz model, the variance is composed of two elements. As N increases, the first term goes to zero and the portfolio variance becomes equal to the second term, (average β)²·σ²(R_M). σ²(R_p) will differ among portfolios solely according to the magnitude of the average β, the average of the β_i's of the securities comprising the portfolio. Hence, an individual security's contribution to the riskiness of the portfolio is measured by its β_i, not σ²(ε_i).

As equation (3a) shows, the variance of a security's return can differ from that of other securities because of one of two factors, either σ²(ε_i) or β_i. The first factor is referred to as the individualistic or avoidable risk of a security, because that risk can be driven to zero through diversification (i.e., by increasing N). The risk-averse investor (i.e., one who prefers less risk to more risk, for a given expected return) will select a portfolio where the individualistic risk is essentially zero. Such a portfolio is known as an efficient portfolio. The β_i is the systematic or unavoidable risk of the security and measures the security's sensitivity to market-wide events. It is called the systematic or unavoidable risk because it is that portion of the variance of the security's return that cannot be diversified away by increasing the number of securities in the portfolio.
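The decomposition can be sketched numerically; the parameter values below are assumptions chosen only for illustration.

```python
# Equations (3) and (3a) in miniature: individualistic (avoidable) risk is
# driven toward zero by diversification, while systematic (unavoidable)
# risk, governed by the average beta, remains.
mean_indiv_var = 0.040   # mean variance of the individualistic factors (assumed)
mean_beta = 1.1          # average beta of the securities held (assumed)
market_var = 0.025       # variance of the market return (assumed)

def diagonal_portfolio_variance(n):
    avoidable = (1.0 / n) * mean_indiv_var
    unavoidable = (mean_beta ** 2) * market_var
    return avoidable, unavoidable

for n in [1, 10, 100, 1000]:
    a, u = diagonal_portfolio_variance(n)
    print(f"N = {n:4d}: avoidable = {a:.5f}, unavoidable = {u:.5f}, "
          f"total = {a + u:.5f}")
```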

As would be expected, β_i bears a direct relationship to the concept of covariance. In particular, it can be shown that, if security returns are normally distributed,


β_i = σ(R_i, R_M)/σ²(R_M)   (4)

where

σ(R_i, R_M) = covariance of security i's returns with the market returns,
σ²(R_M) = variance of the market return.

Hence our previous statement that the security's riskiness be measured in terms of its covariance is entirely compatible with using β_i as a measure of security riskiness. The following statements can be made concerning the magnitude of β_i: (1) the larger the value, the greater the riskiness of the security; and (2) a β_i of one implies an "average" riskiness.

Remember that the original motivation for the diagonal model was to reduce the number of parameters to estimate. The variance of a portfolio, using the diagonal model, requires the estimation of 2N + 1 parameters, which for N = 1,000 is 2,001 (as compared with 500,500 for the Markowitz model).
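The two counts are easy to verify directly:

```python
# The full Markowitz model needs N(N + 1)/2 variance-covariance estimates;
# the diagonal model needs only 2N + 1 parameters.
for n in [10, 100, 1000]:
    markowitz = n * (n + 1) // 2
    diagonal = 2 * n + 1
    print(f"N = {n:5d}: Markowitz {markowitz:>7d} vs diagonal {diagonal:>5d}")
# N = 1000 reproduces the figures in the text: 500,500 versus 2,001.
```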

Another advantage of the diagonal model is that it can be extended to the more general cases where security return distributions are characterized by the Stable family of distributions, of which the normal distribution is a special case. This is an important property because there is considerable evidence39 that the distributions of security returns most closely conform to those members of the Stable family which have finite expected values but infinite variances and covariances. Fama40 has shown that the β_i can still be interpreted as a measure of risk, even in cases where the covariance and variance, strictly speaking, are undefined.

39Eugene Fama, "The Behavior of Stock-Market Prices," op. cit.; Benoit Mandelbrot, "The Variation of Certain Speculative Prices," Journal of Business, XXXVI (October, 1963), 394-419; Richard Roll, "The Efficient Market Model Applied to U.S. Treasury Bill Rates." Unpublished Ph.D. dissertation, University of Chicago, 1968.

40Eugene Fama, "Portfolio Analysis in a Stable Paretian Market," Management Science (January, 1965), 404-419.


Sharpe and others4" have extended the earlier work on portfolio models to capital asset pricing models, which determine the equilibrium prices for all securities in the market. Essentially the models start from the assumption that investors are generally risk averse and show that, in equilibrium, capital assets will be priced such that

E(R_i) = R_F(1 - β_i) + β_i E(R_M)   (5)

where

E(R_i) = expected return of asset i,
R_F = rate of return on a riskless asset,
E(R_M) = expected return on the "market" portfolio,
β_i = σ(R_i, R_M)/σ²(R_M).

The capital asset model states that the only variable which determines differential expected returns among securities is the systematic risk coefficient, β_i. The model further asserts that there is a linear relationship between β_i and expected return, such that the greater the risk the higher the expected return.
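Equation (5) in executable form, with an assumed riskless rate and expected market return:

```python
# Expected return is linear in the systematic risk coefficient beta.
def expected_return(beta, riskless_rate, expected_market_return):
    return riskless_rate * (1.0 - beta) + beta * expected_market_return

r_f, e_rm = 0.04, 0.10  # assumed riskless rate and expected market return
for beta in [0.0, 0.5, 1.0, 1.5]:
    print(f"beta = {beta:.1f}: E(Ri) = {expected_return(beta, r_f, e_rm):.3f}")
# beta = 1 yields the market's expected return; beta = 0 the riskless rate.
```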

Note that the variability of the individualistic component of return does not enter into the pricing of capital assets, since that component can be eliminated through diversification. Although the models were originally developed under the assumption of finite variance and covariance, Fama42 has shown that the results extend to the broader class of Stable distributions with finite expected values but infinite variances and covariances.

"William F. Sharpe, "Capital Asset Prices: A the- ory of Market Equilibrium under Conditions of Risk," op. cit.; John Lintner, op. cit.; and Jan Mossin, op. cit.

42Eugene Fama, "Risk, Return, and Equilibrium," Report No. 6831, Center for Mathematical Studies in Business and Economics, University of Chicago, June, 1968.


Empirical assessments of α_i and β_i can be obtained from a time series, least-squares regression of the following form:

R_it = a_i + b_i R_Mt + e_it   (6)

where R_it and R_Mt are ex post returns for security i and the market, respectively, and where e_it is the disturbance term in the equation. King's study of monthly security returns43 found that, on the average, approximately 52 per cent of the variation in an individual security's return could be explained by its comovement with a market-wide index of return. The percentage has been secularly declining since 1926, and, for the final 101 months of the study (ending with December, 1960), the proportion explained was 30 per cent.
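A minimal sketch of such an estimation, on a fabricated six-observation series; a real study would use far more observations and a broad market index.

```python
# Least-squares estimates of a_i and b_i in equation (6).
import statistics

r_m = [0.020, -0.010, 0.035, 0.005, -0.025, 0.015]  # market returns (assumed)
r_i = [0.028, -0.008, 0.044, 0.010, -0.030, 0.021]  # security i returns (assumed)

mean_m, mean_i = statistics.mean(r_m), statistics.mean(r_i)
b_i = (sum((m - mean_m) * (s - mean_i) for m, s in zip(r_m, r_i))
       / sum((m - mean_m) ** 2 for m in r_m))
a_i = mean_i - b_i * mean_m
print(f"b_i (estimated beta) = {b_i:.3f}, a_i = {a_i:.4f}")
```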

The assessment of β_i from a time series regression assumes that β_i was stationary during that period. Evidence by Blume44 and by Jensen45 suggests that stationarity does exist, especially at the portfolio level. The empirical evidence46 also indicates that the resulting equation conforms well to other assumptions of the linear regression model (i.e., linearity, serial independence of the disturbance terms, and homoscedasticity), with one exception. The distribution of the estimated residuals is leptokurtic (i.e., has fatter tails than would be expected under normality). This departure from normality is consistent with Fama's findings47 that security returns are members of the Stable family of distributions with finite means but infinite variances.

43Benjamin King, "Market and Industry Factors in Stock Price Behavior," Journal of Business, XXXIX, Part II (January, 1966), 139-190.

44Marshall Blume, "The Assessment of Portfolio Performance." Unpublished Ph.D. dissertation, University of Chicago, 1968.

4"Michael C. Jensen, op. cit. 4"Eugene Fama, Lawrence Fisher, Michael Jensen,

and Richard Roll, op. cit. 47 Eugene Fama, "The Behavior of Stock Market

Prices," op. cit.

However, Wise48 has shown that for Stable distributions with finite expected values, least-squares estimates of β_i are unbiased and consistent, although not efficient.

Verification of the Decision Model

In sum, portfolio theory provides a measure of risk that has both theoretical and empirical support. Note, however, that the portfolio models are abstractions of the underlying reality. They condense everything into a mean-variance of return decision criterion. In particular, they abstract from the following important aspects:

(1) They convert the N-period consumption-investment decision of the investor into a one-period model. Fama,49 Hakansson,50 and Mossin51 have shown that, under general properties of a risk-averse utility function, the investor will behave in a manner consistent with one-period maximization. However, much research remains to be done in reconciling the one-period models to the multi-period approach.

(2) The models abstract from the fact that the marginal utility of a dollar of income may differ depending on the state in which the income is received. The state preference approach52 views a single security as a complex bundle of claims to income in various states, with the price per dollar of income varying with each state.

48John Wise, "Linear Estimation for Linear Regression Systems Having Infinite Variances." Unpublished paper presented at Berkeley-Stanford Mathematical Economics Seminar, October, 1963.

49Eugene Fama, "Multi-Period Consumption-Investment Decisions," Report No. 6830. Chicago: University of Chicago Center for Mathematical Studies in Business and Economics, 1968.

50Nils H. Hakansson, "Optimal Investment and Consumption Strategies for a Class of Utility Functions," Working Paper 101. Los Angeles, Calif.: Western Management Science Institute, 1966.

51Jan Mossin, "Optimal Multiperiod Portfolio Policies," Journal of Business, XLI (April, 1968), 215-229.

52See J. Hirshleifer, "Investment Decision Under Uncertainty: Choice-Theoretic Approaches," Quarterly Journal of Economics, LXXIX (November, 1965), 509-536.


In the simplest case there are two states, boom and depression, with the price for a dollar of income in depression being higher than the conditional price for boom. If states are defined in terms of overall indices of economic activity (e.g., GNP or the Dow-Jones Index), then it is appealing to believe that the portfolio model (which implicitly defines states in terms of R_M) is consistent with a state-preference approach.

However, there are certain bundles of claims that are not tied to an economy-wide index, but to the individual consumer's other sources of wealth. All insurance policies fall into this category. While the purchase of insurance can be explained relatively easily by a state-preference approach, it does not fit into a diagonal model or Sharpe capital asset pricing model framework. Hence, the precise relationship between the two approaches remains largely unexplored in any formal manner.

A verification of the decision model here would involve a comparison of the payoffs from decisions made under a Markowitz-Sharpe approach versus those from decisions made under a state-preference approach. However, although the state-preference theory is richer conceptually, it lacks operationality. In particular, the conditional prices of a dollar in each state are unobservable. It would be virtually impossible, even ex post, to empirically verify the degree of conformity of the abstract model to the more complete approach.

Ultimately our evaluation of the interperiod tax allocation measurement controversy is conditional upon the assumption that the Markowitz-Sharpe approach is a relevant decision context (i.e., that it is an adequate surrogate of the true underlying process, for the purposes at hand).

Remainder of Decision Context

Although the discussion of the decision rule has been somewhat lengthy, the rest of the dimensions of the process can be easily set forth.

(1) States or events: the future β (beta) of the portfolios.

(2) all "feasible" acts available to the decision makers-all possible combinations of N securities. Even for N = 100, the num- ber of possible portfolios is extremely large.

(3) Probability distributions relating to the set of possible events and which are conditional upon the act chosen: the probability distributions of each portfolio's β. A probability distribution would be needed for each possible portfolio (act).

(4) The set of payoffs: the set of payoffs would be defined in terms of the loss in utility of the fund holders because the actual β was not the expected β. (This will be discussed in greater detail later.)

It is clear that specific enumeration of all elements in dimensions (1) through (4) would be an enormous task. Hence, a great deal of simplification occurs. Some distributional assumption may be made about β, so that specification of only the sufficient statistics is necessary (e.g., normality with a given mean and standard deviation). This implicitly defines both (1) and (3). The number of "feasible" acts can be greatly reduced by requiring that a portfolio be a member of the efficient set in order for it to be "feasible." There are algorithms available for selecting the efficient set from a large set of feasible alternatives.53 The specification of the payoff matrix can be simplified by imposing some generalizable functional form on the utility function.

Questions can be raised, of course, concerning what the effect is of making these simplifications of the process, which involve an abstraction of the underlying decision and prediction process. Such questions can be answered only by verifying the decision and prediction model used.

53See Markowitz, op. cit. and Donald E. Farrar, The Investment Decision Under Uncertainty, Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1962.



The Prediction Process

As indicated in Section III, prediction occurs at all levels, (1) through (4). Prediction at levels (1) and (3) is of primary concern, because it is here that the issue of interperiod tax allocation arises.

The decision model indicates that β is a parameter on which predictions must be provided. The prediction of β is equivalent to assigning a probability distribution on β for a given portfolio for a future period. The probabilities will be conditional probabilities, conditional upon the values of the independent variables (i.e., predictors) currently observed.

There are several prediction models available. One simple model would be to use an extreme form of a naive model and to forecast that the β in period t + 1 will be equal to the value of β in period t.

A time series analysis of past data would indicate the forecast error generated by such a model.

One limitation of such a model is that the ex post observed β measures the ex ante "true" β with error. An alternative forecast model would involve the use of instrumental variables to remove error from the observed β.54 Several accounting-based variables, such as the payout ratio, the debt-asset ratio, and the variability of earnings, have been offered by the financial statement analysis literature as reflecting risk. Such measures could be used as instrumental variables. In fact, this procedure has been found to provide forecasts of β that were "superior" to those provided by the simple, naive model.

However, the measurement of these accounting variables is sensitive to interperiod tax allocation alternatives. One method of evaluating (i.e., verifying) interperiod tax allocation is in terms of the ability of each alternative to provide accounting variables that result in "superior" forecasts of β. It would be possible, using past time series data, to empirically examine the relative predictive ability of deferral and nondeferral within this context.

54For a precise description of this model, see William Beaver, Paul Kettler, and Myron Scholes, "The Association Between Market Determined and Accounting Risk Measures," Accounting Review, October, 1970.

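A schematic version of such a test, on fabricated data, might compare the naive model against a least-squares model built on one accounting variable; rerunning the comparison with that variable measured under deferral and under nondeferral would be the proposed verification. All figures, and the choice of the payout ratio as the instrument, are illustrative assumptions.

```python
import statistics

beta_t  = [0.80, 1.10, 1.30, 0.95, 1.20]  # observed betas, period t (assumed)
payout  = [0.60, 0.35, 0.20, 0.50, 0.30]  # accounting variable, period t (assumed)
beta_t1 = [0.85, 1.05, 1.25, 1.00, 1.15]  # observed betas, period t+1 (assumed)

def mse(pred, actual):
    return statistics.mean((p - a) ** 2 for p, a in zip(pred, actual))

# Naive model: forecast beta in t+1 with the observed beta in t.
naive_mse = mse(beta_t, beta_t1)

# Instrumental-variable model: regress beta in t+1 on the payout ratio.
mx, my = statistics.mean(payout), statistics.mean(beta_t1)
slope = (sum((x - mx) * (y - my) for x, y in zip(payout, beta_t1))
         / sum((x - mx) ** 2 for x in payout))
intercept = my - slope * mx
iv_mse = mse([intercept + slope * x for x in payout], beta_t1)

print(f"naive MSE = {naive_mse:.4f}, accounting-variable MSE = {iv_mse:.4f}")
# Note this is an in-sample fit; a real study would hold out later periods.
```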

However, there are several difficulties associated with the use of such a prediction model.

(1) The dependent variable of interest is the "true" β in some future period. However, this variable is never directly observable, because it is an ex ante concept, and only ex post β's can be observed. In other words, it can be shown that some forecast model using the observed β_t (b_t) can be used to forecast the observed β_{t+1} (b_{t+1}) with some degree of forecast success. However, not only is b_t measuring β_t with error, but b_{t+1} is also measuring β_{t+1} with error. Hence, the "true" dependent variable is never observable. The dependent variable used is always a surrogate for the underlying event of interest. This is, in general, true, and any evaluation of the prediction model is conditional upon the adequacy of the surrogate dependent variable chosen.

(2) It is possible to talk about forecast performance in a decision-free context, using "classical" error measures. However, forecasting error should ultimately be evaluated in terms of the loss function associated with the forecast error. The loss function on the error is defined in terms of the payoff matrix of the decision model. In particular, a forecast error in β will result in the fund's owners being on a suboptimal point of their utility function.

Given that the payoff matrix is defined in terms of the utility function of the fund owners, several problems arise.

(A) It would be extremely difficult to specify the utility function of any of the owners.


(B) Since the owners are likely to have heterogeneous utility functions, how are the functions aggregated to obtain an overall measure of the "loss" associated with a prediction error in β? The implication is that it may be virtually impossible to specify the loss function associated with the prediction errors. Hence, a complete evaluation is unobtainable.


(3) One interesting implication of a portfolio decision model is that it specifies the nature of the prediction errors of concern. Prediction errors at the individual security level are not of concern per se, but prediction errors at the portfolio level are. Hence, it is possible to distinguish between two types of errors, individualistic and systematic; a simulation sketch following this list makes the distinction concrete. The individualistic errors are uncorrelated across individual securities and hence can be diversified away at the portfolio level. Systematic errors (correlated errors across firms) cannot be diversified away and are of concern.

(4) The results of any predictive ability study are conditional upon the prediction model used. This includes not only the independent variables, but also the functional form assumed (e.g., linearity). It is always possible that another untested model would have produced different results. For example, suppose exhaustive testing indicated that the "best" set of instrumental variables did not include any accounting variables at all. If accounting data are not needed in the prediction process, then any measurement controversy concerning the accounting variables becomes irrelevant in this context.
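The simulation referenced in item (3), with assumed error magnitudes: a common error component survives aggregation into a portfolio, while uncorrelated individualistic errors average out.

```python
import random
import statistics

random.seed(1)

def portfolio_error(n_securities, systematic_sd, individualistic_sd, trials=2000):
    errs = []
    for _ in range(trials):
        common = random.gauss(0.0, systematic_sd)  # hits every security alike
        per_security = [common + random.gauss(0.0, individualistic_sd)
                        for _ in range(n_securities)]
        errs.append(statistics.mean(per_security))  # equally weighted portfolio
    return statistics.pstdev(errs)

for n in [1, 10, 100]:
    sd = portfolio_error(n, systematic_sd=0.05, individualistic_sd=0.20)
    print(f"N = {n:3d}: portfolio-level error s.d. = {sd:.4f}")
# The dispersion falls toward 0.05 (the systematic part) but no further.
```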

The Measurement Process

The preceding material outlined how a "theory" of interperiod tax allocation might be constructed and verified. The construction of such a theory went far beyond the traditional arguments over the validity of premises, such as "costs attach," "matching costs with revenues," or the "roll over" principle. Ultimately, the traditional arguments follow from different premises. What is at issue is not the logic involved in drawing conclusions from the premises; rather, the crucial issue is the validity of the premises themselves.


Hence, an extended "theory" for inter- period tax allocation incorporated the uses to which the accounting measurements are to be put. This in turn involved a specifica- tion of a decision and prediction context within which to discuss interperiod tax allocations.

In evaluating the interperiod tax allocation issue within this context, any inferences drawn are not only subject to the qualifications cited earlier regarding the decision and prediction models, but also to some additional, special problems:

(1) A major problem is that measurement of the dependent variable (e.g., the ex post β) may be influenced by the measurement method currently used by the accounting reporting system. In particular, the fact that tax deferral is the acceptable, more visible alternative may imply that investors (and security prices) respond to that reported number. Hence, the observed β's, computed from market prices, may be forecasted better under allocation, simply because it is the reported alternative. It is difficult to empirically determine what the effect would be if the reported measurement method were changed.

(2) Because the "theory" has been con- structed and tested within a specific deci- sion-prediction context, there may be some legitimate concern about the generality of the conclusions of such research. Only repli- cation of the study across several contexts can answer that question. However, inabil- ity to generalize may merely reflect the state of nature, not an inadequacy in the theory.

(3) If different measures are optimal for different purposes, the problem of satisfying competing user needs arises. Any policy-making group (e.g., the APB) faces the difficult task of making trade-off decisions in an attempt to reach some social optimum.


VII. CONCLUDING REMARKS & INFERENCES

Until recently, accounting theory largely consisted of a set of competing a priori arguments about the relative merits of alternative accounting measurements. Currently there is a growing body of research that is attempting to establish an empirical tradition in accounting theory development and verification. A full understanding of such research requires that the purpose of such research be viewed within the context of the history of the scientific method.

It is possible for the human mind to generate an unlimited number of competing hypotheses. One method of verifying which hypothesis is "correct" is to subject the hypotheses to tests of logical propriety. Although such tests may reduce the number of alternatives, they usually will not lead to the selection of the "correct" hypothesis, because more than one hypothesis may pass the tests of logic.

The crucial feature of hypotheses is that they are abstractions of the real world. It would be impossible (and probably unnecessary) to have a theory that included every single factor that affects the behavior of the dependent variable under study. Thus the choice among alternative hypotheses must be determined by which abstraction better captures the relevant aspects of reality. It is impossible to resolve such an issue on an a priori basis. There is a need for some external method of verifying whether or not an abstraction is a "good" one.

The verification process is provided by empirical tests, which examine the ability of a hypothesis to generate operational implications that lead to predictions about observable phenomena. Hence alternative hypotheses are evaluated in terms of their relative ability to predict the event of interest (i.e., the dependent variable).

From a utilitarian viewpoint, the justification for all knowledge is that it provides a basis for action. It is generally accepted that no decision can be made without, at least implicitly, making a prediction. Thus the use of a predictive ability criterion in empirical research can be justified on a utilitarian basis that presumes the better the prediction, the better the decisions.

However, for the most part, disciplines that have employed the scientific method and its empirical tests have done so in a "decision-free" context; that is, without the explicit introduction of a decision process that the predictions are going to service. For example, an astronomer or a meteorologist will evaluate alternative hypotheses on some predictive criterion without ever specifying the decision process of the navigators who must rely upon these two disciplines in order to navigate optimally.

By the same token, many of the empirical studies in accounting have examined the predictive relationships of accounting measurements with other variables of interest, without explicitly specifying the decision process of the user of the data. However, problems arise because of the nonspecification of the decision process. Beaver, Kennelly, and Voss cite at least two problems. First, without a knowledge of the loss function of the errors in predictions, it may be impossible even to rank alternative measurements in terms of predictive ability. A specification of the loss function would require explicit introduction of the decision process. Second, even if an ordinal ranking were possible according to predictive ability, ordinal relationships are insufficient if the "better" measurement alternative involves a higher cost. The evaluation must then be conducted in terms of a cost-benefit analysis; that is, the incremental benefit must be at least equal to the incremental cost. Measurement of the benefit requires a specification of the decision process.

Until the decision processes are completely specified, the empirical, predictive studies must be regarded as providing part of the knowledge we need for a meaningful evaluation of alternative accounting information systems, but more information about the decision process is needed before the evaluation will be complete.



One "interim" solution is the construc- tion of a data base, which would include all potentially relevant information. However, since there is no way to restrict the number of acceptable decision models and since the number of predictive models conceivable is unlimited, there is no upper bound at the a priori level on the amount of potentially relevant data. Since these data are being stored at some cost and because the size of the data base would rapidly approach in- finity, some implicit assessments of benefits would have to be made. Thus, the data base approach does not provide us with an op- portunity for ignoring or postponing the cost-benefit analysis that must be conducted in order to meaningfully evaluate alterna- tive information systems.

As our metatheoretical framework in Section III requires, we must ultimately evaluate most information issues in accounting within the context of specified decision models. The bulk of the outputs of the accounting system, residing as they do in the theory plane, have no meaning (relevance) independent of the related decision models. While the predictive ability criterion is somewhat independent of specific, explicitly defined decision models, it too depends in the final analysis upon at least some broad enumeration of conceptual variables of interest and import to the general class of decision makers.

The crucial feature to note about decision models is that they, like hypotheses, are abstractions of a much more complex process. A complete specification would require:

(1) the optimization rule, decision criterion, or goals of the decision-maker,

(2) all feasible acts available to the decision-maker,

(3) all possible events or states that may occur over the decision horizon,

(4) probability distributions relating to the set of possible events (note such distributions must be generated for each possible act as well),

(5) a set of payoffs, conditional upon the state and act.

This type of specification poses a formidable task, and even a casual glance at the current state of the art would suggest that much remains to be done. In fact, the complexity of the underlying process is so great that it is unreasonable to believe that we will ever attain a complete specification of the process, any more than our hypotheses are complete specifications of predictive relationships.

Abstraction of the complete process may take place at any of the five levels cited earlier. For example, deterministic models abstract from the fact that the world is stochastic. Even models that incorporate the stochastic nature of the decision variables will often use simulation, which is an abstraction of the underlying statistical process, rather than attempting to empirically assess what the underlying process really is. The set of feasible acts or set of possible events may be restricted to a manageable, finite number, when in fact the number may be much larger. The optimization rule or the set of payoffs may be approximated by some functional form that can be handled analytically, so that an optimal act can be determined with the use of a relatively simple algorithm. Thus abstraction can occur in the decision as well as the prediction model.

Because these models are abstractions, there is an unlimited number of ways of abstracting and hence an unlimited number of decision models that may occur to the human mind. What is needed is some method of verification of the decision models, to see if they are meaningful abstractions of the real process and to choose among competing models of the same process. A necessary, but not sufficient, test is the verification of logical and mathematical propriety. However, as in the case of competing hypotheses, this may reduce the number of models, but it will be unable to tell how adequate the remaining acceptable models are. A second approach would be an attempt to see which model best predicted decision-makers' actions. However, this approach has several difficulties. How is a loss function assigned to the prediction errors? One answer would be to assign the loss function implied by the decision models being tested. However, this is obviously a circular approach. Even if it is possible to adequately predict the decision-makers' actions, all that is accomplished is that we have identified the surrogate (abstraction) the decision-maker is using, but we still do not know how adequate that surrogate is. To resolve both of these difficulties, a knowledge of the "true" process is needed. Yet we do not have such knowledge.

While these "gaps" in our body of knowl- edge are formidable, they can be narrowed. This is the research challenge in accounting. Hopefully, this report has defined a frame- work within which meaningful, theoretical and empirical research can be conducted.
