
IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 40, NO. 4, NOVEMBER 1993

Multi-Auditor Cooperation: A Model of Distributed Reasoning

Ai-Mei Chang, Member, IEEE, Andrew D. Bailey, Jr., and Andrew B. Whinston

Abstract-Management and analysis of complex systems frequently involves groups of managers/experts, each working on different parts of the system, cooperating and coordinating among themselves to manage/analyze the system as a whole. We propose a distributed reasoning approach to render this process of multi-agent cooperation more efficient and effective. We consider, as an example of such a group process, the auditing process of assessing the reliability of an internal control system, where each auditor makes default assumptions about parts of the system being analyzed by other team members. We model the coordination process using an assumption-based truth maintenance system, which explicitly represents such assumptions, their possible retraction on subsequent contrary evidence, and changes in the auditors' terminal opinions. We show how the resolution of opinions among auditors can be achieved through a process of evidence sharing facilitated by the model.

Index Terms/Key Phrases: Multi-agent cooperation, default reasoning, truth maintenance systems, group decision making, and audit support system.

I. INTRODUCTION

Distributed problem-solving techniques are increasingly recommended for multi-agent cooperation in a distributed environment where a group of experts (or expert systems) are working together trying to complete a large, complex task [1]-[3]. Many prototype distributed problem solvers have been built during recent years [4]. Recent research efforts are aimed at building hybrid and distributed reasoning tools to enable a set of independent knowledge-based systems to act as a set of cooperating agents, working together to solve a problem [5]. We present a distributed reasoning approach to model the process of multi-auditor cooperation on assessing the reliability of an internal control system, with the intent of rendering the process more efficient and effective than the present methods of cooperation.

The main purpose of internal controls in a financial information system is to enhance the system's reliability in preventing, detecting, and correcting errors, irregularities, and fraud in the system [6].

Manuscript received March 1, 1992; revised October 1992. Review of this manuscript was processed by Editor E. Geisler.

A. M. Chang is with the Department of MIS, University of Arizona, Tucson, AZ 85721.

A. D. Bailey is with the Department of Accounting, University of Arizona, Tucson, AZ 85721.

A. B. Whinston is with the Department of Management Science and Information Systems, University of Texas at Austin, Austin, TX 78712-1175.

IEEE Log Number 9210937.

The American Institute of CPA's requires auditors to review and understand the internal control system as a matter of audit standards. The results of this review are used to plan the rest of the audit. Where the internal control system design is sound, the auditors will plan to perform tests of controls to assess the reliability of the internal control system. The more reliable the system, the less extensive are the expensive substantive tests the auditors need to conduct. In typical medium- and large-sized firms, the testing and evaluation task is usually large and complex enough that it requires a team effort. An efficient evaluation process requires that the task be completed in as short a time as possible. To facilitate an efficient process, the audit is usually so designed that the task is reduced to a series of subtasks with strong intra-subtask linkages and relatively simple inter-subtask linkages [7]. With each subtask under the audit responsibility of a different team member, such an audit design is particularly efficient, as the subtasks can, under certain conditions, be processed concurrently by the team members.

The degree to which concurrent processing can be carried out by team members depends on the interdependencies between subtasks. Interdependencies may require frequent interaction between the team members. For example, an auditor evaluating the reliability of the purchase system may require information on the validity of the store's requisition process, which is under study by a different auditor. Depending on the situation, the auditor may have to wait for a significant period of time before he receives the appropriate information from the other auditor. Such delays are inefficient and not uncommon with traditional methods of cooperation.

Conventionally, auditors use questionnaires, flow charts, and tests of controls for the evaluations that comprise their subtasks and, based on their analyses, they interact with other team members (face-to-face and/or through memos and telephone) to share their results and arrive at an overall evaluation of the various subsystems. In practice, the joint evaluation process is performed in two alternative modes: 1) audit teams and 2) audit groups [8]. Audit teams involve a process in which individual auditors in an organizational hierarchy make interdependent judgments, usually in a sequential and iterative manner, yielding a decision that results from a thorough review by more senior auditors. Audit groups, in contrast, are nonhierarchical and make collective decisions in a simultaneous, rather than strictly sequential, manner.

0018-9391/93$03.00 © 1993 IEEE


Behavioral literature suggests that in many cases decisions made in groups/teams can lead to superior performance compared to that obtained by unaided individual effort [8]. There are potential problems with current audit team and group methods that can result in not only less efficient but also less effective decisions: e.g., domination by an individual unrelated to the individual's skill in making the judgment; social and time pressures pressing the team or group to accept inferior judgments; lack of structured discourse, leading to discussions focused on less pertinent issues and a waste of valuable time; and lack of proper documentation of the decision process, hindering thorough review by senior audit personnel [9]. If the benefits of a group/team process are to be realized, these potential problems need to be minimized.

Any decision aid that supports the team/group cooperation process should, for efficiency and effectiveness purposes, have the following characteristics: 1) it should support concurrent processing of audit subtasks with a minimum of interdependencies between the subtasks; 2) it should provide a structure for interactions between auditors that minimizes unnecessary and irrelevant interactions by focusing the auditor interaction on identifying conflicts¹ between individual audit analyses and the reasons for such conflicts; 3) it should provide clear documentation of the decision process for review purposes; and 4) it should be able to support both hierarchical (audit team) and nonhierarchical (audit group) types of interactions.

We present a distributed reasoning approach that aims to fulfill the above objectives in supporting the cooperation process between auditors in assessing the reliability of an internal control system. In our model of individual subtask analyses, each auditor incorporates default assumptions to minimize interdependencies with other auditors' subtasks. Thus an auditor requiring information on the validity of the stores requisition process, which is under study by a different auditor, may simply assume that the stores requisition process is sound and use that assumption as a basis for the analysis of his own subtask. Such assumptions are default assumptions in the sense that auditors generally do not expect to receive evidence from other auditors that would contradict them. Thus the incorporation of default assumptions obviates the need for frequent interactions between auditors and eliminates the consequent delay. Auditors can proceed with their subtasks under the presumption that exceptions to the default assumptions will not generally occur. However, an eventual interaction among auditors will be necessary to confirm or negate default values. In case such exceptions do occur, the auditors will have to retract their assumptions. Thus, for efficient concurrent processing of the assessment problem, default assumptions are necessary and useful.

¹We define “conflict” as a disagreement or incompatibility between audit opinions (beliefs) in the aggregate sense. A conflict may arise due to direct disagreements between elements of individual beliefs or due to incompatibilities in their implications. We use the term “contradiction” to denote disagreements between elements of beliefs.

However, this does not eliminate the need for auditors to interact among themselves and in the process cover what they consider to be critical defaults with facts from other subtask analyses.² Our decision aid will support the use of default assumptions and the interactions necessary to confirm or negate default assumptions.

In addition to making default assumptions, an auditor checks for certain conditions (e.g., the presence of audit trails in the purchasing system), uses some rules (e.g., combining information about requisitions, bids, approvals, and purchase orders), and arrives at certain propositions regarding the reliability of the system (e.g., only valid purchases are processed in the purchasing system) [10]. As each auditor arrives at an individual opinion concerning the subsystem he is studying, the process of evidence sharing and consensus building begins. We model the above process of arriving at an individual auditor opinion and information sharing using a cognitively consistent assumption-based truth maintenance system (ATMS) [11]. The ATMS provides a reasonable means for retracting default assumptions when faced with contradictions and for maintaining consistency in the knowledge base of an auditor.

The use of the ATMS allows auditors to be largely independent of the rest of the team over periods of time. The ability to act independently and concurrently results in a more efficient audit process. More importantly, it contributes to a more effective audit by forcing auditors to explicitly state their assumptions in the analyses and to tie the assumptions and observations together using inference rules. This creates clear explanations for inferences, a necessary condition for the ultimate integration of the audit during the cooperation stage. During the cooperation stage, the auditors must ensure that their default assumptions are not contradicted by the empirical observations of other team members. The cooperation process requires a narrowing down of the areas of conflict, identification of areas where further testing is required, and the development of the explanation for the consensus opinion that emerges from the process. We propose a formal model and protocol for evidence sharing between the auditors. The model guides the interaction between auditors by identifying conflicts and their reasons, and then focuses on resolving such conflicts. This will assure that the audit is not only efficient but also effective in arriving at a defensible consensus opinion. We have developed a preliminary prototype system to help the cooperation process and act as a decision support for auditors to store knowledge, exchange evidence, and move towards consensus. The system can support both hierarchical and nonhierarchical types of auditor interactions. While the design of our system is largely motivated by the audit application, it also differs from other previously developed designs in its method of reasoning and in providing explanations. This is discussed later in the paper.

²It may not be necessary to explicitly confirm or negate all default assumptions with another member of the team. The degree of coverage depends upon the degree of resolution required. This point is addressed later in the paper.



In addition to being efficient and effective, our distributed reasoning approach can benefit the process of multi-auditor cooperation in a number of ways. First, it provides a cognitively consistent, nonprobabilistic alternative to multi-auditor decision making, thus removing many of the problems associated with a Bayesian, probabilistic approach [12]. Default logic is particularly suited for modeling the low-probability events that are common in auditing practice. Second, it provides a formal explanation for the consensus that is obtained and provides documentation of the consensus process. This supports the review of the decision by senior auditors. It is also extremely useful in providing a defensible position in support of an audit report, especially important in today's litigious environment [13]. Third, it forms a part of the information system supporting the audit process and is a step in the direction of assessing the reliability of systems using artificial intelligence techniques.

We next examine the individual auditor's process of arriving at an opinion using the augmented-ATMS model. We then describe the example auditing problem, the issues involved in multi-agent cooperation, and the resolution of audit opinions using our approach. Finally, we discuss our implementation and prototype and conclude with a discussion of our continuing research.

II. ATMS MODEL OF INDIVIDUAL AUDITOR'S OPINION

An appreciation of an auditor's cognitive process in arriving at an opinion about internal control system reliability is necessary before developing a model. The process of opinion formation starts with the auditor making certain empirical observations regarding the structure of the internal control system he is studying. This is derived from the auditor's mental model of what aspects of the system need to be observed. For example, an auditor studying the stores subsystem may want to check whether all receipts from vendors are supported by appropriate purchase orders. An auditor may make many such observations during his study of a system's soundness. In addition, the auditor may also make certain assumptions regarding other aspects of the system that are not directly under his observation. For instance, an auditor reviewing the stores subsystem may assume that the process of releasing purchase orders (under study by another auditor) is sound and that all purchase orders are valid orders. Such default assumptions are important for several reasons. First, they obviously affect an auditor's opinion; retracting these assumptions may change the opinion. Second, if the audit is to be performed as a team, they are a necessary part of performing concurrent activities. Third, important default assumptions should ultimately be reconciled among the team members to avoid arriving at internally incompatible opinions, or commonly agreed upon terminal opinions based on an incompatible and indefensible understanding of the problem.

In addition to the observations and default assumptions, the auditor uses inference rules derived from past experience, firm practice manuals, professional auditing standards, and audit textbooks. Using these rules, observations, and default assumptions, each auditor in a team arrives at a series of propositions (beliefs or opinions) regarding the reliability of the system component he or she studies. The resulting beliefs should ultimately be in agreement with the observations and default assumptions at both the individual and team or group levels. If this is not the case, the auditors will review the default assumptions and revise them in an attempt to produce a compatible result. This may lead to the need for extended audit effort in order to obtain additional observational data.

We believe that the cognitive aspect of audit opinion formation can best be captured by modeling and organizing the knowledge of the auditor using the augmented ATMS [11]. The ATMS is a representation scheme for storing reasoning knowledge about various propositions that an agent (human or computer) derives. Associated with each proposition is an assumption set, called an environment, that contains the minimal assumptions under which the proposition can be believed. The ATMS uses rules as its elementary units of knowledge and draws conclusions by combining rules to form proofs (or explanations). If there is a new observation that contradicts a proposition arrived at on the basis of previously held assumptions, the ATMS calls for a revision of the appropriate assumption set in such a way that the result is most plausible and in agreement with the updated observation set.

Organizing the knowledge of an auditor using the ATMS formalizes the decision-making process of the auditor. It is particularly suited for the reliability assessment of internal control systems, as the problem is well structured in terms of premises, assumptions, and inference rules used to arrive at propositions. Our model consists of the following parts: a rule base, {K}, consisting of causal and logical rules; a premise set, {I}, consisting of concrete facts and empirical information; and a default assumption set, {Z}, containing the assumptions of the auditor. Based on these components, propositions (beliefs), {P}, are derived. The beliefs, premises, and assumptions, together with their relationships via rules, constitute a belief structure. The ATMS maintains, for each propositional node in the system, a list of minimal sets of assumptions (environments), called the label L(P), under which the corresponding proposition can be proved or explained. The label L(P) can yield only three possible truth values for the proposition P: believed, disbelieved, and unknown. If any environment in L(P) is believed, then P is believed; if any environment in L(¬P) is believed, then P is disbelieved; if we can confirm neither L(P) nor L(¬P), then P is unknown. The ATMS can perform three useful functions in organizing an auditor's knowledge:

1) Producing explanations: Once a proposition P is believed by the auditor, the ATMS can retrace the justification paths and identify the argument of proof justifying that belief, as well as the default assumptions upon which it is founded. This is a useful function, as the defensibility of beliefs is important to auditors. Litigation settings are only the most extreme examples of the need for defensibility.

2) Managing conflicts: Contradictions between the beliefs and reality (observations) are viewed as signals that the currently held set of default assumptions should be modified. New sets of assumptions that are compatible and maximal (i.e., containing a minimal set of exceptions) are then generated using the ATMS.

3) Guiding the acquisition of new information: If a certain proposition is in an unknown state, then the label L(P) provides clues as to the information required to render it 'believed' or 'disbelieved.' That is, if a confirmation of assumption Z is all that is missing from one set in L(¬P), while the confirmation of ¬Z is missing from some set in L(P), then a test leading to the confirmation or denial of Z should be devised. That is, the ATMS provides clear guidelines for the evidence search involved in an extended audit.
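To make the label semantics concrete, the following minimal Python sketch evaluates the three-way truth-value rule. The set-based encoding and all names (truth_value, label_p2, the Z identifiers) are illustrative assumptions of ours, not the paper's implementation.

# Minimal sketch of ATMS label evaluation. A proposition P carries a label
# L(P): a list of minimal environments (assumption sets) under which P can
# be proved. The truth value follows the three-way rule in the text.
# All names here are illustrative, not taken from the paper.

def truth_value(label_p, label_not_p, believed_assumptions):
    """Return 'believed', 'disbelieved', or 'unknown' for proposition P.

    label_p / label_not_p: lists of environments (frozensets of assumption
    IDs) for P and for its negation. An environment holds if every
    assumption in it is currently believed.
    """
    def holds(label):
        return any(env <= believed_assumptions for env in label)

    if holds(label_p):
        return "believed"
    if holds(label_not_p):
        return "disbelieved"
    return "unknown"

# Example: P2 ("PO is valid") provable from environment {Z4, Z5};
# if Z4 is retracted, P2 becomes unknown.
label_p2 = [frozenset({"Z4", "Z5"})]
print(truth_value(label_p2, [], {"Z4", "Z5"}))  # believed
print(truth_value(label_p2, [], {"Z5"}))        # unknown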

III. RELIABILITY ASSESSMENT PROBLEM

We begin this section with a brief description of a typical internal control system and then discuss the reliability assessment and cooperation process [14]. A high-level representation of a firm's internal accounting control system is presented in Figs. 1 and 2. The figures take advantage of the common audit subsystems representation format, with subsystems for Stores, Purchasing, Sales, Accounts Payable, Accounts Receivable, Cash Receipts, and Cash Disbursements.

Figure 1's Venn diagram clearly indicates the degree of shared information between subsystems, while the arcs connecting the subsystems (represented by boxes in Fig. 2) indicate the direction, sequence, and information content flows between subsystems. For example (following the circled numerical sequence), the Stores department initiates a Requisition Order (RO) based on inventory reorder requirements. Copies of the RO are sent to Purchasing and to Accounts Payable. The copy sent to Purchasing will initiate action in Purchasing designed to acquire the requisitioned goods. The copy sent to Accounts Payable simply informs them that a purchase activity is in process and that sufficient documentation will eventually arrive, if all goes well, to support preparation of a Check Request (CR). Later, when Purchasing sends the CR with the voucher after receiving a Vendor Invoice (VI), Accounts Payable will compare these documents to the Receiving Report (RR) and RO received from Stores and, upon agreement, will complete the CR and forward it and the documents to the Cash Disbursements subcycle. This completes the Purchasing and Accounts Payable subcycles, the acquisition side of the Inventory subcycle (embedded in Stores in Figs. 1 and 2), and the link to the Cash Disbursements subcycle.

Sales, Stores, and Accounts Receivable subcycles are related in a similar fashion to the information associated with Customers, Goods, Sales Invoices (SI), Shipping Reports (SR), and Remittance Advices (RA). The completion of the sales activities in these subcycles will complete the use side of the Inventory subcycle, which, when combined with the acquisition activities above, will result in the inventory balance for the firm. In a similar fashion, they complete the Cash Receipts subcycle, which, when combined with the activities in the Cash Disbursements subcycle, results in the firm's cash balance (see Fig. 2).

Fig. 1. Information shared between subsystems of the internal control system.

Fig. 2. High-level representation of a firm's internal accounting control system, showing direction, sequence, and information content flows between subsystems. Circled numerals provide an indication of the order of document/data flows.

Legend (Figs. 1 and 2): RO = Requisition Order; RR = Receiving Report; PO = Purchase Order; VI = Vendor Invoice; RA = Remittance Advice; SI = Sales Invoice; CO = Customer Order; SR = Shipping Report; CR = Check Request; CB = Customer Billing.


We assume in the discussion that follows that the internal control system being assessed is that of a midsize or larger firm and that the task is large and complex enough to require a team effort. There are three aspects of multi-agent cooperation that require special focus [2]: (1) the way in which auditors interact to solve the problem, (2) the manner in which the workload is distributed among them, and (3) how results are integrated for communication outside the team.


We will first examine these issues by describing the reliability assessment process and then discuss how the process is formalized through our modeling efforts.

We assume, for now, that interaction is of a nonhierarchical type (audit group). That is, multiple auditors of similar "rank" are assigned to aspects of the internal control system with an expectation of a consensus report to the supervising manager. The assessment process starts with the audit team developing an understanding of the client's internal control system in order to make decisions concerning the planning of the audit. The auditors' understanding is based on experience across many firms and specific information about the firm under audit. Figures 1 and 2 would aid the auditors in understanding the client's system. More detailed information about the system would also be available. If the audit team decides that the system is well designed and, if functioning properly, can be relied upon to produce sound financial statements, they will test the system to establish its operating characteristics. Because these systems are large and complex in a real firm, it is common for a team of auditors to break up the evidence collection task. The manner in which the task is broken up does not pose any special problem in an audit application. The common means of accomplishing this disaggregation is to follow the subsystem lines, as this partition leads to subsystems with strong intra-subsystem linkages and relatively simple inter-subsystem linkages. Thus one or more auditors would concentrate on the Stores subsystem while others concentrate on the Accounts Receivable subsystem. In the following discussion we assume that one auditor is assigned to each subsystem.

We begin with an individual audit opinion formation. Figure 3 is an ATMS representation (belief structure) of an auditor's view of the internal control system evaluation process incorporating default assumptions. If we consider the Purchasing subsystem, the auditor begins by making the default assumption, Z4, that the Requisition Order received from the Stores department is valid, and the assumption that he has a complete population of valid RO's (Z5). These two default assumptions in the Purchasing subsystem will be the topic of consideration by an auditor in the Stores area. We will return to these default assumptions and their relationship to the Stores subsystem shortly. In addition, the auditor also assumes that PO's are sent to the Vendor and to Stores (Z6) and that the available PO's constitute a complete population (Z7).

The auditor in Purchasing would sample information sources (such as a client database) regarding the receipt of RO's from Stores, the receipt of proper bids, and the preparation of PO's, and establish (based on the sample observations) that the bids or catalogue prices match items represented on the RO received from Stores (I2, I3, and I4). The auditor would then combine (K1) the information obtained (I2, I3, and I4) with his default assumptions to conclude that the Purchase Orders (PO) prepared by the firm are valid (denoted by the proposition P2). This process will continue within the Purchasing subsystem until proposition P4 is established and the auditor concludes that the Purchasing subsystem operates to produce accurate and reliable information.

Fig. 3. ATMS representation of the auditor's view of the internal control system evaluation process (Purchasing subsystem). Node labels include: bids obtained from valid vendor; Purchase Order (PO) prepared for each RO received from Stores; RO is valid; PO sent to vendor and Stores; available RO's represent a complete population; available PO's represent a complete population; PO is valid; Vendor Invoice (VI) received; RR received from Stores; RR is valid; available VI's represent a complete population; voucher prepared; available vouchers represent a complete population; valid purchases accepted and recorded in accounts payable.

If the default assumptions made by the auditor responsible for Purchasing are invalid, then the conclusions he derives using these assumptions will also be invalid. The only way to obtain information concerning the validity of these default assumptions is for the Purchasing subsystem auditor to communicate with the auditor responsible for Stores. The auditor responsible for Stores also makes a set of default assumptions (Z1, Z2, Z3), gathers sample observations (I1, I2), and combines the default assumptions and observations (K1) to arrive at proposition P1. Proposition P1 asserts that the RO's prepared in Stores are valid (see Fig. 4). This information, if communicated to the auditor responsible for Purchasing, confirms the default assumption Z4. Should the Stores auditor conclude "not P1" and fail to communicate it to the auditor responsible for Purchasing, the Purchasing auditor may incorrectly conclude P2 and thus bias the subsequent propositions. While these problems may be discovered elsewhere in the processing, the risk that the client purchased and paid for inappropriate inventory is increased.
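The two belief structures just described can be encoded compactly. The sketch below is an illustrative Python rendering of the Purchasing auditor's rule K1 under our reconstructed identifiers (Z4, Z5, I2-I4, P2); the data-structure choices are ours, not the paper's.

# Illustrative encoding of the Purchasing auditor's belief structure.
# A rule K combines observations (I's) and default assumptions (Z's)
# into a proposition (P); firing K1 yields P2, "PO is valid".

assumptions = {"Z4": "RO received from Stores is valid",
               "Z5": "Available RO's represent a complete population"}
observations = {"I2": "RO's received from Stores (sampled)",
                "I3": "Proper bids received and PO's prepared",
                "I4": "Bids/catalogue prices match items on the RO"}
rules = {"K1": {"needs": {"I2", "I3", "I4", "Z4", "Z5"},
                "concludes": ("P2", "Purchase Orders prepared are valid")}}

def fire(rules, held):
    # A rule fires when every observation and assumption it needs is held.
    return {pid: text
            for rule in rules.values() if rule["needs"] <= held
            for pid, text in [rule["concludes"]]}

held = set(assumptions) | set(observations)
print(fire(rules, held))             # P2 is derived
print(fire(rules, held - {"Z4"}))    # retract Z4: P2 no longer derivable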

Similar scenarios occur throughout the internal control system. Auditors who analyze a specific subsystem will tend to treat certain propositions to be established in other subsystems as default assumptions for their own model. Some mechanism is necessary, if the auditors are to complete the audit in a consistent manner, to assure adequate communication of key information among the auditors.


Fig. 4. ATMS representation of the auditor's view of the internal control system evaluation process (Stores subsystem).

Auditing firms have dealt with this problem by creating rather elaborate mechanisms for reducing audits to workable components and then bringing the information obtained by diverse team members together for integration and evaluation. However, with the increasing complexity of audits and the advent of large-scale client computer systems, it has become worthwhile to consider more formal models of the process with a view toward automating it and thus gaining assurance of appropriate communications throughout the audit. Thus, our research efforts are primarily directed towards building a formal model of the process using a distributed reasoning approach. In the next section we discuss the issues involved in multi-auditor cooperation and develop a protocol for evidence sharing.

IV. MULTI-AUDITOR COOPERATION

The main objective of multi-auditor cooperation is to arrive at a consensus opinion in the most efficient and effective manner. The criteria of efficiency and effectiveness translate into the following goals for interaction: 1) to provide a structure for interaction that eliminates repetitive, circular, and unproductive arguments and thereby minimizes the frequency of interaction; and 2) to focus on identifying conflicts between individual beliefs and the reasons for such conflicts so that they can be resolved.

Thus, only productive interactions are to be allowed. Organizing an individual auditor's belief structure via an ATMS model helps the interaction process by providing clear explanations for beliefs and identifying the assumptions upon which they are based. The ATMS facilitates easy retraction of assumptions when a contradiction is detected. The interaction process between the auditors is facilitated through the use of an interface which helps in efficiently forwarding queries and answers to the appropriate agents. The complexity of the interaction process itself depends on the degree of resolution of individual opinions that is desired. Therefore, we first examine the relationships between auditors' belief structures and their implications for the degree of resolution in audit opinions.

A. Relationships Between Belief Structures

Arriving at a consensus opinion essentially involves the resolution of the auditors' belief structures. Therefore, it is necessary to understand the ways in which belief structures can relate to each other with respect to a particular proposition P. In the case of an ATMS, the relationships can be classified on the basis of how their label sets relate to each other: three important relationships are disjoint, overlap, and inconsistency. Let us assume that two auditors' belief structures provide explanations for the same proposition P. Their respective label sets are made up of default assumptions (Z's). If the label sets have a null intersection, that is, if there are no common default assumptions between them, their relationship is disjoint. Disjoint belief structures provide different perspectives or explanations for the same proposition. This could be an example of incomplete theories or knowledge; that is, an auditor may not know or have all the relevant rules in his belief structure (knowledge is incomplete). If the intersection of the label sets is not null and is a strict subset of each of the label sets, then the relationship is called overlap. In this instance, there are a few common default assumptions between the two belief structures; however, one belief structure is not a subset of the other. It is possible in the case of disjoint or overlap relationships that there may be directly contradicting default assumptions between the belief structures. However, the premises and default assumptions in each belief structure still lead to the same proposition P.

If there exist common premises and/or default assumptions which lead to proposition P in one belief structure and proposition ¬P in another belief structure, the two belief structures are said to be inconsistent with respect to proposition P. A relationship that is not inconsistent is called consistent. It is also to be noted that whereas disjoint and overlap are mutually exclusive, inconsistency is not mutually exclusive with these relationships. Inconsistency arises because of contradictions in the rules between the belief structures. In the case of the auditing application, since each auditor derives his rules from similar firm practice manuals, professional auditing standards,


and audit textbooks, we will assume that inconsistencies do not exist. Thus we limit our study to the consistent environment.
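The disjoint/overlap/inconsistency distinctions reduce to set operations on assumption sets. In the Python sketch below, each explanation is simplified to a single assumption set (the paper's labels are sets of such sets); this simplification and all names are our assumptions.

# Sketch of the label-set comparison for two auditors' explanations of the
# same proposition P. Each explanation is reduced to one set of default
# assumptions; with full labels the same tests apply per environment.

def relate(a, b):
    """Classify two assumption sets as 'disjoint', 'overlap', or other."""
    common = a & b
    if not common:
        return "disjoint"        # no shared assumptions: different perspectives
    if common < a and common < b:
        return "overlap"         # some shared assumptions, neither subsumed
    return "subset/equal"        # one explanation contains the other

def inconsistent(env_p, env_not_p):
    """Inconsistent w.r.t. P: common assumptions support both P and not-P."""
    return bool(env_p & env_not_p)

print(relate({"Z1", "Z2"}, {"Z3"}))          # disjoint
print(relate({"Z1", "Z2"}, {"Z2", "Z3"}))    # overlap
print(inconsistent({"Z1", "Z2"}, {"Z1"}))    # True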

In a consistent environment, conflict in audit opinions arises because either:

1) there is a contradiction in the sets of premises (I's) and/or default assumptions (Z's) between the belief structures; or

2) the rule bases of the belief structures are incomplete.

Therefore, in a consistent environment, the conflict in auditor opinions can be resolved by resolving the contradictions between the belief structures' sets of premises and/or assumptions, and/or by adding more rules to make the rule bases more complete. This assertion is not proven, but is an expected outcome of the exchange of knowledge between the auditors. It is expected (see [5]) that resolution of all disagreements between two agents in a consistent world will be achieved by one agent informing the other of some explicit knowledge that the other is missing. It is to be noted that even when there is no conflict in opinions, there could be contradictions between the premise and/or assumption sets of two belief structures.

B. Resolving Belief Structures

Resolution of belief structures involves some form of partial or full exchange of information between the auditors. A full exchange of information will involve all auditors on a job sharing with each other (or with a team leader) their default assumptions, empirical observations, and their logic (K's) in arriving at an opinion. Although such situations are possible, they are not thought to be very common, for the following reasons: 1) it is expensive in terms of the time involved in auditors familiarizing themselves with others' work; 2) in many instances, such full exchange of information may not be needed to resolve conflicts; 3) auditors may not have the expertise to comment on other auditors' areas of work, in which case it may unnecessarily increase the frequency of interaction and decrease its effectiveness; and 4) auditors may not want to share their analyses and open them up for scrutiny unless the situation demands it. Thus one of the major goals of interaction is partial sharing of information to the extent that it helps in resolving conflicting opinions. Only in extreme cases would resolution of conflicts require or result in a full sharing of assumptions, observations, and rules.

Two scenarios are possible in an auditing situation. In the first scenario, each auditor expresses the same opinion on the reliability of the internal control system, or each auditor expresses an opinion on his individual portion of a larger common problem and their opinions taken together are in agreement. In these cases, the auditors may not pursue their joint problem further, as they have a sufficient consensus opinion. Note, however, that there may still be differences in the sets of empirical observations (I's) and the default assumptions (Z's), and/or the belief structures may have rule bases (K's) which are incomplete. If the auditors attempt to produce a joint document describing or summarizing all the observations, assumptions, and procedures supporting their opinion, this may lead to contradictions and, subsequently, to a full exchange of information to resolve the contradictions.

In the second scenario, each auditor expresses a different opinion on a common problem, or each auditor expresses an opinion on his individual portion of a larger common problem and the opinions are in conflict when taken as a whole. Full exchange of information would identify and resolve all contradictions. However, if the situation requires only a partial sharing of information, to the extent necessary to resolve the conflict in opinions, it may be preferred as it is more efficient.³ Based on our general model of individual belief structures, the reasons for conflict in opinions may be either: 1) contradictions in premises or empirical observations (I's) and the default assumptions (Z's); or 2) incomplete rule bases (however, each rule is assumed to be correct). Our assumption is that the conflict in opinions can be resolved through evidence sharing if one or more of the above conditions is resolved.

V. EVIDENCE SHARING

A. Distributed Reasoning Model

We start with a description of our model of distributed reasoning (see Fig. 5). Consider a group of physically dispersed auditors. Each auditor acts as a problem solver, makes inferences about the reliability of the subsystem he is dealing with, and organizes the knowledge in an ATMS, which functions as a cache for all the inferences along with the observations, default assumptions, and rules used in making them. In the event that contradictions arise, the ATMS is called upon to produce changes in the knowledge base (such as retracting assumptions incompatible with the new set of premises). Each auditor has only limited knowledge of the global internal control system, being restricted to the local subsystem. Each auditor arrives at a proposition regarding the subsystem he is dealing with, generally prior to sharing his belief with others. The process of evidence sharing is facilitated by a communication interface which acts as an intermediary between the auditors. An auditor can pass a message to other auditors through this intermediary or directly. A message can be a query requesting a response, or an assertion, which may or may not result in a response.

The communication interface is multi-functional. First, it has a data base for storing information about the auditors (meta-level knowledge) [15]: their identifications, their addresses, the subsystems they are working on, and the key words associated with each subsystem (domain-specific knowledge).

³There is a tradeoff involved in a full or partial exchange of information. If all incompatibilities are followed up and resolved, the result is an effective but probably inefficient audit. If they are not resolved, the result is a potentially ineffective audit, exposing the firm to significant litigation and losses. Our model allows both cases, and the decision on this tradeoff remains with the audit team.


Fig. 5. The distributed reasoning system.

Since the auditors initially communicate with each other via the interface, the interface uses this knowledge to direct queries or assertions to the appropriate recipient. Second, it functions as a meta-expert in judging whether a consensus opinion has been attained. In this respect, it can be considered a blackboard, where intermediate results are stored and checked to see whether a consensus has been obtained. Each auditor, upon arriving at a local opinion, asserts his terminal belief to the interface. (This can also occur after a belief is revised on the basis of evidence shared with other auditors.) The interface uses its rules and inference mechanism to check whether all the terminal beliefs are in agreement and whether a consensus opinion can be generated. If so, it transmits the summary of all propositions, along with the consensus status, to each auditor for review. It then assists the auditors in cooperating on preparing a joint report documenting the consensus. If there is no consensus, the interface relays the summary of all terminal propositions to each auditor for further action (described later). The communication interface can also incorporate goals regarding the degree of resolution (requiring a partial or full exchange of information) that is required in a particular application. Based on these goals, the interface determines when such a resolution has been achieved.

It is to be noted in the above discussion that we have assumed an audit group deliberating on an issue to arrive at a consensus, where all auditors are of equal rank. Here, the interface acts mainly as a facilitator, using the knowledge in its meta-expert system to check whether consensus has been obtained. It is also possible to place this interface under the control of a manager, who can specify and change the goals and rules, review the decisions and opinions of individual auditors, and make an overall judgment about the reliability of the internal control system. In this case, the interface acts as a decision support for the manager in reviewing and supervising audit team members. The interface could still perform the function of directing auditor queries to appropriate recipients automatically using the meta knowledge, without the manager having to participate in the process. Thus our model supports both hierarchically organized audit teams and nonhierarchically organized audit groups.

B. ATMS

Each auditor's reasoning system can be conceptualized as shown in Fig. 5. The ATMS acts as a cache for rules, propositions, assumptions, assumption sets, and truth maintenance rules. The ATMS interfaces with the communication interface and other agents using the protocols in the communication base, and with the user through the user interface. The ATMS creates and maintains inference histories for each proposition using a data structure known as a "node." The ATMS has three kinds of nodes: proposition nodes, assumption nodes, and assumption sets. Each proposition node has the following form:

[proposition-ID, justification, label-pointer, status, communication-set-pointer]

Proposition-ID is a unique identifier of the proposition; justification is the inference history for the proposition and points to the rule from which the proposition was derived; label-pointer points to the assumption sets, at least one of which must be compatible for the corresponding proposition to be believed; status indicates whether the proposition is believed or not; and the communication-set-pointer points to the list of auditors to whom the proposition has been communicated. Any assertion that an auditor receives from other auditors (a communicated assertion), be it a proposition or a premise, is always stored in that auditor's system as a proposition, with the justification marked as External and the label-pointer pointing to the auditor (or the communication interface) from whom the communication was received. Some of the proposition nodes are designated as intermediate nodes, where assertions, newly input by the auditor or communicated from others, are initially stored until their status is confirmed.
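The node layout above transcribes naturally into a record type. The following Python dataclass is a sketch that follows the field names in the text; everything else (types, defaults, the example values) is an assumption for illustration.

# Sketch of a proposition node; field names follow the text above.
from dataclasses import dataclass, field

@dataclass
class PropositionNode:
    proposition_id: str            # unique identifier of the proposition
    justification: str             # rule the proposition was derived from,
                                   # or "External" for communicated assertions
    label_pointer: list            # assumption sets; at least one must be
                                   # compatible for the node to be believed
    status: str = "unknown"        # believed / disbelieved / unknown
    communication_set: list = field(default_factory=list)
                                   # auditors the proposition was sent to
    intermediate: bool = False     # pending confirmation if communicated

# A communicated assertion is stored with justification "External" and the
# label pointing at the sender, as described in the text:
p = PropositionNode("P1", "External", ["Auditor-Stores"], intermediate=True)
print(p.status)  # unknown until confirmed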

The assumption base stores each assumption node with a unique identifier, a status label indicating whether the assumption is believed or not, a justification label indicating any basis for the assumption, and a communication-set-pointer. In our implementation, premises, or observations (I's), are also stored as assumptions (assumptions which have a likelihood of being true equal to one). The assumption base also contains the assumption sets for the propositions. An assumption set for a proposition points to the assumptions upon which the believability of the proposition (its status) depends.


The status of an assumption set can be either compatible or incompatible. An assumption set is compatible if all the propositions derived from it are compatible with each other and with the assumptions (that is, all the propositions can be believed at the same time). If all the propositions derived from an assumption set cannot be believed at the same time, that is, if they are incompatible, then the status of the assumption set is incompatible. If an assumption set is marked incompatible, then any superset of it is also marked incompatible. A proposition has the status "believed" if at least one of its assumption sets is compatible, and the status "disbelieved" if all its assumption sets are incompatible.

Assumptions and propositions, once entered in the system, are never deleted. Depending on their believability, their status is changed using nonmonotonic reasoning. We use truth maintenance rules to change the status. Any new assertion communicated to the system from other agents can be a premise (I), a proposition (P), or an assumption (Z). A communicated premise is stored as a regular proposition node, with the justification marked External and the label-pointer recording the agent's and the premise's identification. This premise may contradict the existing data base. The contradictions are of two types: contradicting assumptions and contradicting propositions. First, the communicated premise is matched against the existing assumptions to check whether it contradicts any of them. This is accomplished using pattern matching rules. A pattern names a group of facts which constitute a contradiction. If the pattern successfully matches against the data base (indicating a contradiction), then the status of the assumption involved is changed from "believed" to "disbelieved" and the corresponding assumption set(s) are made incompatible. This will in turn change the status of the corresponding propositions. Most contradictions in our application are of this type. Sometimes, the communicated premise may contradict a proposition or a set of propositions taken together. In this case, the assumption set which is the union of the assumption sets of all the propositions involved is marked incompatible. If an assumption set is incompatible, it implies that one or more of the assumptions in the set is to be disbelieved. In the present implementation, if such a situation occurs, the system will require the auditor to intervene and choose the appropriate assumption(s) to disbelieve. Thus the truth maintenance rules contain procedures to change the status of assumptions, assumption sets, and propositions. They also invoke the communication protocol to propagate the changes in beliefs to the communication interface and other interested auditors.
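The retraction behavior described above can be sketched as follows; the contradiction test is reduced to a direct status flip, and all data structures are illustrative assumptions rather than the paper's implementation.

# Sketch of truth-maintenance propagation: a contradicted assumption is
# flipped to "disbelieved", every assumption set containing it is marked
# incompatible (which also covers its supersets, since they contain the
# same assumption), and proposition statuses are re-derived.

def retract(assumption, status, assumption_sets, propositions):
    """Disbelieve `assumption` and propagate to sets and propositions."""
    status[assumption] = "disbelieved"
    for set_id, members in assumption_sets.items():
        if assumption in members:
            status[set_id] = "incompatible"
    for prop_id, label in propositions.items():
        believed = any(status.get(s) != "incompatible" for s in label)
        status[prop_id] = "believed" if believed else "disbelieved"

status = {"Z4": "believed", "E1": "compatible", "P2": "believed"}
assumption_sets = {"E1": {"Z4", "Z5"}}     # P2's only environment
propositions = {"P2": ["E1"]}

# Stores auditor reports "not P1": the premise contradicts Z4, so Z4 is
# retracted and P2's status changes.
retract("Z4", status, assumption_sets, propositions)
print(status["P2"])  # disbelieved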

A proposition communicated to the system from other agents is stored as an intermediate proposition node pending confirmation. If the proposition contradicts any of the assumptions directly but contradicts no other proposition, then the assumption is "disbelieved" and the communicated proposition is made a regular proposition node.

If the communicated proposition directly contradicts a proposition in the system, then the truth maintenance rules invoke the communication protocol to obtain the label of the communicated proposition in order to check its validity. The label is checked element by element against the system's data base. If the communicated proposition is finally confirmed, it is made a regular proposition node. A communicated assumption is stored as an intermediate proposition node and checked against the system's premises and propositions for contradictions. If it is contradicted (or confirmed), the communication protocol is invoked to transmit the result to the appropriate agent. Thus, to maintain the integrity of the system's data base, only communicated premises can directly affect the data base. Communicated propositions can affect the assumptions directly if they are the only points of contradiction; otherwise they have to be confirmed before any changes are made to the data base. Communicated assumptions can never affect the system's data base.

C. Communication Protocol

The communication base contains the communication protocol and a data base which stores details of messages exchanged with other agents. The communication protocol is a set of rules that govern how messages are transmitted and received. Messages can be of two types: queries and assertions. There are rules for receiving and storing assertions, for raising queries, for sending assertions, and for receiving and handling queries. Assertions that are received could be premises, propositions, assumptions, or labels. Each element is stored as a proposition node, regular or intermediate, depending upon the nature of the assertion, as discussed earlier. Each element of a label is stored separately. Queries raised in the system could be requests for labels to confirm the validity of propositions, or requests to confirm assumptions. Queries also arise as a result of truth maintenance operations or the auditor's purposeful actions. Assertions transmitted to other agents could arise in response to their queries or as a result of truth maintenance operations. If the recipient of a query or an assertion is known, then the query or assertion is sent directly to that agent. Otherwise it is routed through the communication interface, which will help determine the appropriate recipient based on its meta knowledge.

On receipt of a query, a query handling routine searches the assumption and proposition nodes to check whether an assertion in response to the query can be given (such as providing the label set of a proposition or confirmation/contradiction for an assumption). If so, an assertion is transmitted, with the communication-set pointer indicating the agent to whom the assertion was sent. If not, a response indicating failure to find an answer to the query is sent.
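A query handler of the kind just described might look as follows in outline; the node store and helper functions are assumptions for the sketch, not actual routines:

    /* Illustrative query-handling routine. */
    #include <stddef.h>

    typedef struct Node Node;   /* assumption or proposition node (opaque) */

    Node       *find_node(const char *id);       /* search the two node stores */
    const char *label_of(const Node *n);         /* label set of a proposition */
    const char *confirmation_of(const Node *n);  /* status of an assumption    */
    void        send_assertion(const char *agent, const char *body);

    void handle_query(const char *agent, const char *node_id, int wants_label)
    {
        Node *n = find_node(node_id);
        if (n == NULL) {
            /* no answer found: report failure rather than stay silent */
            send_assertion(agent, "no-answer");
            return;
        }
        /* answer with the label set or with confirmation/contradiction;
           the communication-set pointer would record the recipient (not shown) */
        send_assertion(agent, wants_label ? label_of(n) : confirmation_of(n));
    }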

D. Interaction Algorithm Schema

Nonmonotonic reasoning is achieved in the system through the execution of truth maintenance rules. The frequency of resolution depends on how often the system is consulted by the auditor and on the frequency of messages received. An interaction cycle begins with the acceptance of incoming assertions (from the auditor or other agents) and the creation of new proposition nodes. Next, the truth maintenance rules are executed to detect contradictions and revise assumptions, assumption sets, and proposition status. Next, the current set of beliefs is checked against all inference rules and the appropriate rules are executed. This leads to the creation of new propositions and assumption sets. The firing of the above rules may have raised queries or assertions to be transmitted to other agents. These queries and assertions are transmitted to the appropriate agents and/or the communication interface. The incoming queries are processed next by the query handling routines. The cycle is complete when assertions are sent in response to the queries, and processing starts again with new incoming assertions.
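The interaction cycle can be summarized as a loop. The following C sketch names each phase with a hypothetical routine, in the order given above; it is a schematic outline, not the implemented control loop:

    /* Illustrative outline of one interaction cycle. */
    void accept_incoming_assertions(void);  /* create new proposition nodes    */
    void run_truth_maintenance(void);       /* detect contradictions, revise   */
    void fire_inference_rules(void);        /* derive propositions, label sets */
    void transmit_pending_messages(void);   /* queries/assertions raised above */
    void process_incoming_queries(void);    /* query-handling routines         */
    int  system_active(void);

    void interaction_loop(void)
    {
        while (system_active()) {
            accept_incoming_assertions();
            run_truth_maintenance();
            fire_inference_rules();
            transmit_pending_messages();
            process_incoming_queries();  /* cycle ends when replies are sent */
        }
    }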

E. Communication Interface: Summary

When the system is initialized, the auditors need have no knowledge of what subsystems the other auditors are dealing with. Once they have arrived at their individual terminal propositions, they transmit the information to other agents through the communication interface, which contains the meta-level knowledge of the goals of each auditor and the subsystem each is assigned. The interface uses its meta-level knowledge (key words associated with each subsystem) to infer the auditors to whom the message pertains. It then forwards the message to the appropriate auditor(s). If it is not able to make this inference (because of a lack of domain specific knowledge), it broadcasts the message to all auditors for a response. Generally a message transmitted through the interface is also stored in its knowledge base for sharing. The auditors and the interface communicate with each other using a common language. An assertion is always prefixed with a code (P, I, Z, or L(P)) to indicate what kind of assertion the auditor is making and is followed by the statement of the assertion (such as "PO is valid"). The interface stores these assertions with the appropriate reference to the auditor making them. This enables the interface to match assumptions in one auditor's ATMS model to keywords in another auditor's model as pertaining to the same statement (such as "PO is valid"). As communication progresses, auditors learn more about the other auditors' subtasks and can then communicate directly. Direct communication minimizes the number of messages transmitted within the system.
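A possible rendering of the interface's keyword-based routing, including the broadcast fallback, is sketched below in C; the directory structure and helper functions are illustrative assumptions rather than the interface's actual meta-knowledge representation:

    /* Illustrative keyword routing with broadcast fallback. */
    #include <string.h>

    typedef struct {
        const char  *name;
        const char **keywords;   /* meta knowledge: subsystem key words */
        int          n_keywords;
    } AuditorEntry;

    void forward(const char *auditor, const char *msg);
    void broadcast(const char *msg, const AuditorEntry *dir, int n);

    /* Forward to every auditor whose key words occur in the message;
       if none match, broadcast and let the auditors respond. */
    void route(const char *msg, const AuditorEntry *dir, int n)
    {
        int i, k, matched = 0;
        for (i = 0; i < n; i++)
            for (k = 0; k < dir[i].n_keywords; k++)
                if (strstr(msg, dir[i].keywords[k]) != NULL) {
                    forward(dir[i].name, msg);
                    matched = 1;
                    break;
                }
        if (!matched)
            broadcast(msg, dir, n);
    }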

The communication interface functions as a blackboard by allowing information sharing among auditors. This is accomplished by storing the terminal propositions of all auditors, the changes in beliefs effected thus far, and the status of the consensus process, and then transmitting relevant information to the auditors at the appropriate time or in response to their queries. The auditors are required to transmit their terminal propositions and any changes in them as a result of direct evidence sharing with other auditors.

Fig. 6. Auditor Interaction-1. [Figure: the query/assertion exchange between Auditor 1 and Auditor 2 over the conflict on P2, using Auditor 2's rule K12: I13 ∧ I14 ∧ P11 → ¬P2; the exchange ends with the conflict resolved.]

For efficient control of communication via the interface, it is necessary to have a priority scheme for handling the messages. Ideally, the interface can embed, in the form of rules in its expert system, the best possible strategies for arriving at consensus. Depending on these rules, messages which pertain to important issues could be given priority to aid an effective and efficient resolution process. In our current implementation, we have limited ourselves to a FIFO rule.
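A FIFO rule of this kind reduces to a simple queue. The ring-buffer sketch below is an illustration of the discipline (not the implementation); a priority scheme would replace the oldest-first choice in dequeue with a rule-driven one:

    /* Illustrative FIFO message queue (zero-initialize a MsgQueue before use). */
    #include <stddef.h>

    #define QCAP 64

    typedef struct {
        const char *msgs[QCAP];
        int head, tail, count;
    } MsgQueue;

    int enqueue(MsgQueue *q, const char *m)
    {
        if (q->count == QCAP) return 0;   /* queue full */
        q->msgs[q->tail] = m;
        q->tail = (q->tail + 1) % QCAP;
        q->count++;
        return 1;
    }

    const char *dequeue(MsgQueue *q)
    {
        const char *m;
        if (q->count == 0) return NULL;   /* nothing pending */
        m = q->msgs[q->head];
        q->head = (q->head + 1) % QCAP;
        q->count--;
        return m;                         /* oldest message first */
    }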

F. Some Examples

We discuss the evidence-sharing process in detail below with respect to our auditing example. The role of the interface is suppressed in the following discussion to focus more on the revision of knowledge bases that ultimately leads toward consensus at the auditor level. In the first two examples, we assume that auditors have knowledge of who they are dealing with and are communicating directly.

Consider two auditors, A1 and A2, sharing evidence regarding a conflict on proposition P2 in Fig. 6 (their ATMS structures are not shown; instead, their rules, default assumptions, and premises are represented using propositional logic). A1's knowledge is represented in the database as shown in Tables I-V (for the sake of brevity, only the ID's are shown for the assumptions, premises, and propositions).

Figure 6 presents the interaction between the two agents using queries and assertions. Auditor 2 asserts the proposition ¬P2. This is stored as an intermediate proposition node by Auditor 1 (PI1).


TABLE I

TABLE II
Proposition Nodes
Prop-ID   Justification   Label-Pointer   Status     Comm. Set. Pointer
P1        Rule K1         21              Believed
P2        Rules K2, K3    22, 23          Believed   Com. Int.
P8        External A2                     Believed
...       ...             ...             ...        ...

TABLE III
Assumption Nodes
Assump-ID   Status     Justification
I1          Believed
I2          Believed
Z1          Believed
Z3          Believed
...         ...        ...

TABLE IV
Assumption Set ID   Assumption Set     Status       Comm. Set. Pointer
21                  (Z1, I1, I2)       Compatible
22                  (Z2, I3, L(P1))    Compatible
23                  (Z3, I4, I5)       Compatible
...                 ...                ...          ...

TABLE V
Proposition Nodes (Intermediate)
Prop-ID   Justification      Label-Pointer   Status        Comm. Set. Pointer
PI1       External A2: ¬P2                   Disbelieved
PI2       External A2: Z11                   Disbelieved
PI3       External A2        L(P11)          Believed

TABLE VI
Proposition Nodes of A2
Prop-ID   Justification     Label-Pointer   Status        Comm. Set. Pointer
P11       Rule K11          72              Believed
¬P2       Rule K12          73              Disbelieved   Com. Int.
P3        External A1: I6                   Believed
...       ...               ...             ...           ...

During the truth maintenance cycle, A1 finds a conflict with proposition P2 and consequently queries Agent 2 on ¬P2's label. Auditor 2 returns the label L(¬P2), which is stored element by element by A1 (see proposition node P8 and intermediate proposition nodes PI2 and PI3). In the next truth-maintenance cycle, these elements are compared against the data base; Agent 1 finds that I6 contradicts PI2 (Z11 in A2), updates the data base, and asserts I6 to Agent 2, who revises his knowledge base accordingly. The justification arrow for the I6 knowledge (stored as P3 in A2's base in Table VI) points toward Agent 1, who supplied the information.

If we assume that the conflict is resolved at this point, the process stops. Note that there could still be a contradiction between Z11 and Z2. However, the current resolution is "satisficing," as all data shared between the two agents are compatible. If there are still contradictions between propositions appearing in the label sets, then labels for these propositions are exchanged. This recursive process ends when the contradiction is resolved by revision/addition of default assumptions or premises. It is conceivable that all such contradictions may need to be resolved before any resolution is obtained. This will result in a "full" resolution.
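The recursion just described can be summarized as follows. The sketch is a schematic C outline under the assumption that helper routines exist for finding contradicted label elements, exchanging labels, and revising defaults; none of these names come from the actual system:

    /* Illustrative outline of the recursive label-exchange process. */
    #include <stddef.h>

    typedef struct Prop Prop;            /* a proposition node (opaque) */

    Prop *next_contradicted_prop(void);  /* label element still in conflict  */
    void  exchange_labels(Prop *p);      /* obtain L(p) from the other agent */
    void  revise_defaults(Prop *p);      /* retract/add assumptions, premises */
    int   shared_data_compatible(void);  /* no conflicts in shared data      */

    /* Exchange labels for contradicted propositions until either all shared
       data are compatible (a "satisficing" resolution) or every
       contradiction has been removed (a "full" resolution). */
    void resolve(int stop_when_satisficed)
    {
        Prop *p;
        while ((p = next_contradicted_prop()) != NULL) {
            exchange_labels(p);
            revise_defaults(p);
            if (stop_when_satisficed && shared_data_compatible())
                return;                  /* "satisficing" resolution */
        }
        /* loop exhausted: "full" resolution */
    }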

It is also possible that evidence sharing in itself may not resolve the conflict. In Fig. 7, we reconsider the same two belief structures. Let us assume that the conflict between P2 and ¬P2 is not resolved even after sharing all the labels for P2, P1, P11, and ¬P2. In this case it is possible for each agent to identify a minimal set of assumptions which, when retracted, will resolve the conflict. For example, if P2 is true, then the set of minimal candidates for retraction for Agent 2 consists of two singleton sets, each containing one of the assumptions supporting ¬P2. That is, making the status of either (or both) of these assumptions "disbelieved" will result in the negation of ¬P2. Such sets of minimal candidates for retraction are defined as minimum cutsets. Similarly, if ¬P2 is true, then the set of minimal candidates for Agent 1 is:

{Z1, Z3} and {Z2, Z3}.

Thus to negate P2, the assumption Z3 and at least one of Z1 and Z2 need retraction. To resolve the conflict, the individual systems will require the intervention of human auditors to devise tests to check the correctness of the assumptions in the above sets of minimal candidates. Thus the process of evidence sharing helps in identifying the conflict areas and guides the acquisition of new information.
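Under the reconstruction above (P2 supported by the environments {Z1, Z2, I1, I2, I3} and {Z3, I4, I5}, with only the assumptions Z1, Z2, Z3 retractable), Agent 1's minimum cutsets can be computed as minimal hitting sets over the assumption parts of the environments. The following self-contained C program is an illustration of that computation rather than the system's code; it enumerates the cutsets and prints {Z1, Z3} and {Z2, Z3}:

    /* Worked sketch: enumerate Agent 1's minimum cutsets for negating P2. */
    #include <stdio.h>

    #define NA 3                     /* retractable assumptions Z1, Z2, Z3 */
    static const char *name[NA] = { "Z1", "Z2", "Z3" };
    /* env[e] is a bitmask of the assumptions appearing in environment e */
    static const int env[2] = { 0x1 | 0x2,   /* {Z1, Z2} from environment 22 */
                                0x4 };       /* {Z3}     from environment 23 */

    static int hits_all(int set)   /* does `set` intersect every environment? */
    {
        int e;
        for (e = 0; e < 2; e++)
            if ((set & env[e]) == 0) return 0;
        return 1;
    }

    int main(void)
    {
        int set, sub;
        for (set = 1; set < (1 << NA); set++) {
            int minimal = hits_all(set);
            /* minimal iff no proper subset also hits every environment */
            for (sub = (set - 1) & set; minimal && sub; sub = (sub - 1) & set)
                if (hits_all(sub)) minimal = 0;
            if (minimal) {
                int i;
                printf("cutset: {");
                for (i = 0; i < NA; i++)
                    if (set & (1 << i)) printf(" %s", name[i]);
                printf(" }\n");      /* prints {Z1 Z3} and {Z2 Z3} */
            }
        }
        return 0;
    }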

We now apply the evidence sharing process to the auditing example discussed in Section II using Figs. 3 and 8. Figure 3 represents the ATMS model of the auditor dealing with the Purchasing subsystem (Auditor P) and Fig. 8 represents the model of the auditor analyzing the Stores subsystem (Auditor S). Auditor S's system has arrived at the proposition ¬P3, that is, "acquisition of goods is not valid" (the status of P3 is "disbelieved"). The reason for this proposition is that PO and RO do not tally when compared: the observed premise is incompatible with Z7 and P1 taken together. This implies that either Z7 is "disbelieved" (PO is not valid) or P1 is "disbelieved" (RO is not valid). After performing the above inferencing, Auditor S's system sends a query to confirm the validity of Z7 through the communication interface. The interface directs the query to the appropriate auditor (in this case, Auditor P). Auditor P's system confirms Z7 (not shown in Fig. 9). The evidence sharing process after this confirmation is shown in Fig. 9. At this point Auditor S's system requests human intervention.


Fig. 7. Auditor Interaction-2. [Figure: Auditor 1's rules K1: I1 ∧ I2 ∧ Z1 → P1, K2: P1 ∧ Z2 ∧ I3 → P2, and K3: Z3 ∧ I4 ∧ I5 → P2 conflict with Auditor 2's ¬P2; the agents exchange the labels L(¬P2) and L(P2), generate minimum cutsets over their premises (I's) and assumptions (Z's), and the conflict remains unresolved: more tests are needed.]

Since P1 is the only other element involved, Auditor S prefers to attack the proposition P2 of Auditor P. Auditor P's system infers that proposition P2 (PO is valid) could be "disbelieved" only if one or more of the four assumptions supporting it are "disbelieved." To devise tests for these assumptions, the system requests Auditor P's intervention. Since the assumption "RO is valid" pertains to Auditor S's domain, Auditor P prefers to question Auditor S on its validity. Auditor S's system infers that P1 (RO is valid) could be "disbelieved" if one or more of the three assumptions supporting it is "disbelieved." Each auditor, having identified the assumptions that need to be checked, then devises tests to prove or disprove the assumptions. For example, Auditor S checks to see if the population of RO's is complete, the approvals are in order, and whether copies have gone to Purchasing and Accounts Payable. Auditor P checks whether the available PO's (and RO's received from Stores) constitute a complete population.

On testing the assumptions, let us assume that Auditor P observes that a set of PO's has been overlooked and that these PO's do not have appropriate RO's. This leads to the proposition that "PO is not valid" (¬P2), and thus Auditors P and S agree that proper procedures and controls are lacking in the system, perhaps resulting in lost, or excess and unwanted, inventory acquisition.

Fig. 8. Stores subsystem. [Figure: Auditor S's ATMS model; the assumption Z7 ("PO is valid") and the proposition P1 ("RO is valid") are not in agreement with the observed premise ("PO and RO do not tally"), yielding the conflict ¬P3, "acquisition of goods not valid."]

Fig. 9. Evidence sharing between auditors. [Figure: the exchange between Auditor S, the interface, and Auditor P; Auditor P's tests confirm ¬Z12, leading to ¬P2 and resolving the conflict.]


G. Use of the System: Summary

A description of how a human auditor interacts with the system can provide a better appreciation of the distributed reasoning system's use in practice. Although the system is designed to operate autonomously to a certain extent, it is necessary for an individual auditor to interact with the system for the following purposes: 1) to input rules, propositions, and assumptions based on the analysis performed on the internal control system; 2) to obtain confirmation of important assumptions; 3) to obtain information on assumptions for testing and possible retraction; 4) to help the system in selecting assertions to be challenged; and 5) to review progress on the consensus process.

An auditor communicates with a reasoning system through a user interface, which provides dialog boxes to enable the input of assumptions (including premises), rules, and propositions. The auditor is guided through a series of queries such as "Is the input an assumption (Z), a premise (I), a proposition (P), or a rule (K)?" Depending on the input, other details such as status and justification are recorded. A rule cannot be input until all the relevant assumptions are entered; in fact, propositions and rules have to be entered in series for proper storage of the information. Once the rules are entered, the system creates the appropriate assumption sets for the propositions.
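This entry discipline (antecedents first, then rules) is what lets the system build assumption sets incrementally: each new proposition's set is the union of the sets of its rule's antecedents. The C sketch below illustrates this under hypothetical names, treating premises and assumptions alike as label elements, as in Table IV:

    /* Illustrative rule entry: a rule is accepted only after its
       antecedents exist; the consequent's assumption set is the
       union of the antecedents' assumption sets. */
    #include <string.h>

    #define MAXN 32

    typedef struct {
        const char *id;         /* e.g. "Z1", "I3", "P1" */
        unsigned assumptions;   /* bitmask over entered assumptions/premises */
    } Entry;

    static Entry table[MAXN];
    static int n_entries = 0;

    static Entry *lookup(const char *id)
    {
        int i;
        for (i = 0; i < n_entries; i++)
            if (strcmp(table[i].id, id) == 0) return &table[i];
        return 0;
    }

    void enter_assumption(const char *id, int bit)
    {
        table[n_entries].id = id;
        table[n_entries].assumptions = 1u << bit;
        n_entries++;
    }

    /* Returns 0 (rejected) if any antecedent is missing, as the dialog does. */
    int enter_rule(const char *consequent, const char **ante, int n)
    {
        unsigned set = 0;
        int i;
        for (i = 0; i < n; i++) {
            Entry *e = lookup(ante[i]);
            if (e == 0) return 0;          /* antecedent not yet entered */
            set |= e->assumptions;         /* union of assumption sets   */
        }
        table[n_entries].id = consequent;
        table[n_entries].assumptions = set;  /* proposition's assumption set */
        n_entries++;
        return 1;
    }

In this sketch, entering I1, I2, and Z1 and then the rule K1 would give P1 the assumption set {I1, I2, Z1}, matching set 21 in Table IV.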

An auditor can wait for the system to perform in due course or can invoke the communication protocol to raise queries about the confirmation of important assumptions made in the ATMS model. The system also requires input from the auditor when there are assumptions that need to be retracted to maintain truth in the data base. The auditor devises tests to confirm or contradict these assumptions and then informs the system of the results accordingly. Also, as discussed in the example, when assertions (mainly propositions) are made by other agents in the course of conflict resolution, the system may need guidance on which assertions to challenge. An auditor's intuition about which information to challenge for a quick resolution is not captured in our implementation. The auditor plays an important role in guiding the course of the consensus process; if the system is not guided, the selection is made randomly. Since the system is visualized as a decision aid to the auditor and not as a replacement for the auditor, our philosophy in system development is to involve auditors in important decisions where the auditors' experience and intuition can contribute significantly. The more routine tasks are performed autonomously by the system. Also, the belief revision strategies used in the design of the system are domain-knowledge dependent, unlike the strategies in de Kleer [11]. Thus, in developing the truth maintenance rules, the auditors' expertise is used.

The ability of the system to function autonomously in interacting with other agents, receiving and sending information for conflict resolution, and maintaining the integrity of the data base can help auditors interact asynchronously to some extent. Once an auditor enters his analysis in his system, he can send queries to confirm important assumptions and attend to other tasks. The system can perform the routine exchanges and needs help only in retracting assumptions and in selecting assertions to challenge. It is conceivable that a face-to-face meeting between auditors may be required in some cases. In such instances, the individual systems can provide explanations to support arguments in the discourse.

VI. CONCLUSION

The design of our distributed system is largely motivated by the auditing application, a setting requiring characteristics different from the designs reported in previous DAI research. The work most closely related to ours involves situations where agents with different knowledge collaborate to produce a solution satisfying to both. Previous research by Bond [16] and Huhns et al. [17], among others, has focused on these types of issues. However, their frameworks do not include the ATMS for nonmonotonic reasoning. Other researchers [5], [18] have focused on developing distributed truth maintenance models which provide nonmonotonic reasoning capabilities. Huhns [18] has developed a truth maintenance algorithm for distributed single-context truth maintenance systems that guarantees local consistency for each agent and global consistency for data shared by all of the agents. Our application makes similar demands. Mason and Johnson [19] have developed a similar distributed ATMS for multi-agent reasoning in seismic monitoring. In their system, agents exchange facts, assumption sets, and inconsistent assumption sets. Unlike our application, the communicated facts can affect only the shared database and not the private, individual beliefs. Their model also allows agents to disagree on shared data. Thus, while there is local consistency for each agent in their model, there may not be consistency over shared data.

The description of our system, its usage, and the examples clearly highlights the potential of our distributed reasoning approach in the auditing domain. The ATMS model provides structure to the analysis performed by each auditor and forces them to think in terms of the rules used in making inferences. The assumptions are made explicit in the model so that the auditors are forced to reckon with them and to obtain confirmation from other auditors. The use of default assumptions minimizes the interdependencies between auditor subtasks during evaluation, which enables efficient analysis of the subtasks. The structure provided by the ATMS model is carried through in the interactions. Interactions focus only on the elements of the belief structures, with the goal of identifying conflicts, contradictions, and their reasons. The system restricts the diversions, repetitions, and circular arguments that are common in face-to-face meetings. At the same time, auditors are involved and consulted on important decisions, such as devising tests, retracting assumptions, and, most importantly, guiding the consensus process. This ensures that the auditors' experience and intuition are used positively in the resolution process.


The asynchronous use of the system, even in its current limited form, can increase the efficiency of the consensus process by saving valuable time. Auditors can specify the degree of resolution that is required and control the consensus process accordingly. The most important benefit is that the system provides clear documentation of the consensus process and of how the consensus was arrived at. This is very useful for purposes of reviewing the audit at a later stage, either by senior auditors during the audit or in defending against later litigation events. The design of the system is also flexible enough to accommodate both hierarchical and non-hierarchical interactions.

The present implementation of our system is in C and supports the interaction of four auditors. We have considered only simplistic resolution problems. A worst-case scenario of conflict resolution, with a two-agent system, will involve full sharing of all assumptions, premises, and propositions. Full sharing significantly increases the necessary interaction between agents. As the complexity of truth maintenance in a system is NP-complete [16], the addition of agents to the distributed system increases the frequency of interactions exponentially. Thus, with larger groups it is necessary to limit the acceptable interactions in such a way that accepted forms of cooperation for conflict resolution can still occur. Identification of accepted forms of cooperation in audit settings is an important part of our ongoing research.

REFERENCES

[1] K. S. Decker, "Distributed problem-solving techniques: A survey," IEEE Trans. Syst., Man, Cybern., vol. SMC-17, no. 5, pp. 729-740, Sept./Oct. 1987.
[2] R. Davis and R. G. Smith, "Negotiation as a metaphor for distributed problem solving," Artificial Intell., vol. 20, pp. 63-109, 1983.
[3] B. Chandrasekharan, "Natural and social system metaphors for distributed problem solving: Introduction to the issue," IEEE Trans. Syst., Man, Cybern., vol. SMC-11, no. 1, pp. 1-5, Jan. 1981.
[4] M. N. Huhns, Ed., Distributed Artificial Intelligence. San Mateo, CA: Pitman/Morgan Kaufmann, 1987.
[5] N. Arni et al., "Overview of RAD: A hybrid and distributed reasoning tool," MCC Tech. Rep. ACT-RA-098-90, Mar. 1990.
[6] S. Yu and J. Neter, "A stochastic model of the internal control system," J. Accounting Res., pp. 273-295, Autumn 1973.
[7] A. A. Arens and J. K. Loebbecke, Auditing: An Integrated Approach, 5th ed. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[8] I. Solomon, "Multi-auditor judgment/decision making research," J. Accounting Lit., vol. 6, pp. 1-25, 1987.
[9] J. E. Boritz, "The going concern assumption: Accounting and auditing implications," CICA Rep., Toronto, Canada, 1991.
[10] R. Meservy, A. D. Bailey, Jr., G. Duke, and P. Johnson, "Internal control evaluation: A computational model of the review process," Auditing: A J. Practice and Theory, pp. 44-74, Fall 1986.
[11] J. de Kleer, "An assumption-based truth maintenance system," Artificial Intell., vol. 28, pp. 127-162, 1986.
[12] R. Dacy and B. Ward, "On the fundamental nature of professional opinions: The traditional, Bayesian, and epistemic methods of attestation," in Auditing Research Symposium 1984, A. R. Abdel-khalik and I. Solomon, Eds. Urbana, IL: Univ. of Illinois, 1985, pp. 63-91.
[13] Z. Palmrose, "Litigation and independent auditors: The role of business failures and management fraud," Auditing: A J. Practice and Theory, pp. 90-103, Spring 1987.
[14] A. D. Bailey, Jr., G. Duke, J. Gerlach, C. Ko, R. Meservy, and A. B. Whinston, "TICOM and the analysis of internal controls," The Accounting Rev., pp. 186-201, Apr. 1985.
[15] J.-Y. D. Yang, M. N. Huhns, and L. M. Stephens, "An architecture for control and communications in distributed artificial intelligence," IEEE Trans. Syst., Man, Cybern., vol. SMC-15, pp. 318-326, May/June 1985.
[16] A. H. Bond, "The cooperation of experts in engineering design," in Distributed Artificial Intelligence, Volume II, L. Gasser and M. N. Huhns, Eds. London: Pitman, 1989, pp. 463-486.
[17] M. N. Huhns, L. M. Stephens, and R. D. Bonnell, "Control and cooperation in distributed expert systems," in Proc. IEEE Southeastcon, Orlando, FL, Apr. 1983, pp. 241-245.
[18] M. N. Huhns and D. M. Bridgeland, "Multiagent truth maintenance," IEEE Trans. Syst., Man, Cybern., Dec. 1991.
[19] C. L. Mason and R. R. Johnson, "DATMS: A framework for distributed assumption based reasoning," in Distributed Artificial Intelligence, Volume II, L. Gasser and M. N. Huhns, Eds. London: Pitman, 1989, pp. 293-318.

Ai-Mei Chang holds a Bachelor's degree in computer science and mathematics and a Ph.D. in management information systems from Purdue University.

She is Assistant Professor of Management Information Systems in the College of Business and Public Administration, University of Arizona. Her research interests include computer-supported collaborative systems, distributed artificial intelligence and expert systems, and decision support systems: theory, design, and development issues. She has published articles in IEEE Transactions on Systems, Man, and Cybernetics, Decision Support Systems, and the Journal of Organizational Computing.

She is a member of the IEEE Computer Society, the Association for Computing Machinery, and the Institute of Management Science.

Andrew D. Bailey, Jr. received two degrees at the University of Minnesota before earning the Ph.D. from Ohio State University in 1971.

He is presently a Professor of Accounting and Management Information Systems and Head of the Department of Accounting at the University of Arizona. He was previously a Professor of Accounting at Ohio State University, and was a faculty member at the Universities of Maine, Minnesota, Iowa, and Purdue. He was also a visiting professor at the University of Queensland, Australia, and Otago University in New Zealand. His research interests include auditing and statistics and auditing in a computerized environment.

Dr. Bailey is a CPA, CIA, and CMA. He received the National Gold Medal on the CMA Examination. He was Vice President of the American Accounting Association and past Chairman of the Auditing Section of the AAA, and was recently nominated as President-Elect of the national association.

Andrew B. Whinston was born in June 1936 in New York City. He received the B.A. at the University of Michigan, and the M.S. and Ph.D. from the Graduate School of Industrial Administration, Carnegie-Mellon University, in 1957, 1960, and 1962, respectively.

His work experience includes faculty positions at Yale, the University of Virginia, and Purdue, and he is currently Professor of Information Systems, Economics, and Computer Science, and holds the Hugh Roy Cullen Centennial Chair in Business Administration at the University of Texas at Austin. His research focuses on organizational information systems, economics of information systems, and decision support systems. He has published over 200 papers and 16 books, and is the editor of Decision Support Systems (published by North-Holland) and the Journal of Organizational Computing (published by Ablex).

