
Dynamic Slicing of Parallel Message-Passing Programs

Mariam Kamkar, Patrik Krajina, Peter Fritzson

Department of Computer and Information Science, Linköping University, S-581 83 Linköping, Sweden

Email: marka@ida.liu.se, paw@ida.liu.se, petm@ida.liu.se. Fax: +46 13 282666. Phone: +46 13 281000

Abstract

As software applications grow larger and more complex, program maintenance activities such as adding new functionality, debugging, and testing consume an increasing amount of available resources for software development. This is especially true for distributed systems communicating via message passing. In order to cope with this increased complexity, programmers need effective computer-supported methods for decomposition and dependence analysis of programs, to understand dependencies between different parts of software systems and to find the sources of errors.

Program slicing is one method for such decomposition and dependence analysis. A program slice with respect to a specified variable at some program point consists of those parts of the program which may directly or indirectly affect the value of that variable at the particular program point.

In this paper we present an algorithm for dynamic slicing of distributed parallel programs and some results from an implementation for a parallel MIMD computer.

1 Introduction

As software applications grow larger and more complex, program maintenance activities such as adding new functionality, debugging, and testing consume an increasing amount of available resources for software development. In order to cope with this increased complexity, programmers need effective computer-supported methods for decomposition and dependence analysis of programs [6, 5, 13, 8].

Program slicing is one method for such decomposition and dependence analysis. A program slice with respect to a specified variable at some program point consists of those parts of the program which may directly or indirectly affect the value of that variable at the particular program point.


This is useful for understanding dependences within programs. A static program slice [15] is computed through static data and control flow analysis and is valid for all possible executions of the program. Static slices are often imprecise, i.e., they contain unnecessarily large parts of the program. In contrast, dynamic program slicing [9] considers only a particular execution of a program. The main application of dynamic program slicing is hence in program debugging, since debugging analyzes one particular execution of a program: the one that shows the existence of a bug. Given an incorrect value of a variable of interest, dynamic program slicing can present a dynamic slice to the debugger (human or system) for further investigation.

As an example, consider the program in Figure 1 (I), which computes the sum of the integers 1 to n. Figure 1 (II) shows a static slice of this program with respect to the value of variable sum that is computed at the statement printf("%i\n", sum). This slice consists of all statements in the program that are needed to compute the final value of sum in any execution; in this example the static slice is equal to the original program. Figure 1 (III) shows a dynamic slice of the program in Figure 1 (I) with respect to the final value of sum for the specific test case n=0; since the loop body never executes when n=0, only the initialization of sum and the printf statement are included.

The concept of dynamic slicing was originally introduced for sequential programs (for a survey of program slicing techniques see [7, 14]). Distributed programs introduce several problems which do not exist in sequential programs (e.g., non-reproducible behavior, non-deterministic selection of communication events, etc.). Several methods for dynamic slicing of distributed programs have been proposed [10, 3, 2]. Previously we have developed an interprocedural dynamic slicing algorithm which works at the procedure abstraction level [11], and which here has been generalized to handle communicating, message-passing parallel programs.


Figure 1: (I) A program computing the sum of the integers 1 to n; (II) its static slice with respect to the value of sum at the printf statement; (III) its dynamic slice with respect to the final value of sum for the test case n=0.

(I)   main()
      { int N, i, sum;
        iread(0, &N);
        sum = 0;
        i = 1;
        while (i <= N) {
          sum += i;
          i++;
        }
        printf("%i\n", sum);
      }

(II)  main()
      { int N, i, sum;
        iread(0, &N);
        sum = 0;
        i = 1;
        while (i <= N) {
          sum += i;
          i++;
        }
        printf("%i\n", sum);
      }

(III) main()
      { int N, i, sum;
        sum = 0;
        printf("%i\n", sum);
      }

In this paper we present an algorithm for dynamic slicing of distributed parallel programs, and some results from an implementation for a parallel MIMD computer. We introduce the notion of a Distributed Dynamic Dependence Graph (DDDG), which represents control, data and communication dependences in a distributed program. This graph is built at run-time and used to compute slices of the program through graph traversals.

The rest of the paper is structured as follows: Section 2 describes the programming language under consideration; Section 3 defines dynamic slices for distributed programs; Section 4 presents a general overview of the parallel distributed dynamic slicer; Section 5 gives some results from an implementation for the parallel MIMD computer; Section 6 describes the construction of the Distributed Dynamic Dependence Graph (DDDG) used in the slicing algorithm, which is introduced in Section 7; related work and conclusions are discussed in Sections 8 and 9.

2 The Programming Language under Consideration

In the current implementation we are using the ANSI-C programming language both as the slicing system implementation language and as the supported language for the software to be analyzed, but similar languages, like Pascal, Fortran, Modula-2, etc., could also be supported using the same technique. In this implementation, the following C data types and program constructs can be handled: all kinds of variables, including scalars, arrays, records, pointers, dynamic structures, global variables, etc.; assignment statements; conditional (if, switch) statements; loops (for, while, do); function calls (with/without return values, in and out parameters); type casting; and message handling by send and receive statements.

The syntax and semantics of the send and receive statements are as follows:

SendMessage(TopId, ProcId, Data, Size): Execution of a send statement results in the transmission of the message stored in Data to the process identified by ProcId.

RecvMessage(TopId, ProcId, Data, Size): Execution of a receive statement causes the assignment of a sent message to Data. The receive statement is blocking.
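To illustrate how these primitives are used, the following sketch mirrors the two-process squaring example discussed in Section 3 (Figure 2). The C prototypes below are assumptions on our part, since the paper shows only the call syntax:

/* A minimal sketch of a message exchange with the primitives
 * above; the prototypes are assumed (the paper gives only the
 * call form), and the process bodies are simplified. */
void SendMessage(int top_id, int proc_id, void *data, int size);
void RecvMessage(int top_id, int proc_id, void *data, int size);

/* Process with ProcId 0: sends x to process 1, then blocks
 * until the reply arrives. */
void process0(int top_id)
{
    int x = 17;
    SendMessage(top_id, 1, (void *) &x, (int) sizeof(x));
    RecvMessage(top_id, 1, (void *) &x, (int) sizeof(x));
}

/* Process with ProcId 1: blocks until x arrives from process 0,
 * squares it, and sends the result back. */
void process1(int top_id)
{
    int x;
    RecvMessage(top_id, 0, (void *) &x, (int) sizeof(x));
    x = x * x;
    SendMessage(top_id, 0, (void *) &x, (int) sizeof(x));
}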

Currently the implemented dynamic slicer does not support macros, include files, libraries, or shared variables, i.e., global variables shared between processes.

3 Dynamic Slices for Distributed Programs

A distributed program consists of a set of processes that communicate through message-passing primitives. As in sequential programs, there exist data and control dependences between statements in a distributed program [4]. There is a data dependence between two statements whenever a variable appearing in one statement may have an incorrect value if the two statements are reversed. For example, given the following sequence of statements, s2 is data dependent on s1.

s1) x = y + z
s2) w = x / a

A control dependence exists between a statement and the predicate whose value immediately controls the execution of that statement. In the following sequence of statements, s2 is control dependent on the predicate p in s1.

s1) if (p) then
s2)   x = y + z
    endif

There is an additional dependence relation in distributed programs, which arises as the result of communication between processes through send and receive statements. A receive statement is communication dependent on its send statement partner. In the following pair of statements, s2 is communication dependent on its communicating partner s1; statement s2 receives a message into &x from the process with ProcId 1, sent by statement s1 through the message denoted by &x.

s1) SendMessage(top_id, 1, (void*) &x, sizeof(x));
s2) RecvMessage(top_id, 1, (void*) &x, sizeof(x));

Given a distributed program P = {p1, p2, ..., pn}, where pi is a process in P, we define a distributed dynamic slice on program P to be ProgSlice = {<p1, s1>, <p2, s2>, ..., <pn, sn>}, where si is a subset of the statements of process pi and contains all statements which directly or indirectly affect the computation of a specific statement (called the slicing criterion) in a specific process.

A distributed slicing criterion is a non-empty set defined to be:

C = {<si, xi> | si is a selected statement in process pi with test case xi}.

The criterion contains at least one element and at most as many elements as there are processes in the program, i.e., one element per process.

Figure 2 (I) shows a simple example program which consists of two processes, process Send_Receive with ProcId 0 and process Receive_Send with ProcId 1. This program computes the square of the value of variable x using these processes. Process 0 initializes x with the integer 17 and then sends it to process 1 for further computation. Process 1, after receiving the value 17, computes its square and returns it to process 0. Process 0 receives the square of 17 and prints it out. Considering statement 11 in process 0 (which prints out the square of 17) as a slicing criterion, the statements which participate in the computation of the square of 17 are statements 8, 9, 10, and 11 in process 0 and statements 8, 10, and 11 in process 1, i.e., all the highlighted statements in Figure 2 (I). As a result, for the program Prog, consisting of two processes:

Prog = {Send_Receive, Receive_Send}

the distributed slice will be:

ProgSlice = {<Send_Receive, {8, 9, 10, 11}>, <Receive_Send, {8, 10, 11}>}

In order to analyze communication behavior in distributed programs, we have also considered information about communication events in a program and its slice. For a distributed program Prog = {p1, p2, ..., pn}, the set ProgCom of all communication events is:

ProgCom = {(<p, rec>, <q, snd>) | rec in process p is communication dependent on snd in process q}.

Figure 2 (II) shows the communication events of the program in Figure 2 (I). A communication slice ComSlice is a subset of ProgCom which consists of those communication events that appear in the ProgSlice of the program. In Figure 2 (II), the program communication events are as follows:

ProgCom = {(<Receive_Send, 8>, <Send_Receive, 9>), (<Send_Receive, 10>, <Receive_Send, 11>)}

The ComSlice in this example is the same as the ProgCom of the program.

4 General Overview of the Dynamic Slicer

We have implemented the Dynamic Slicer shown in Figure 3 on a parallel MIMD machine, a Parsytec GC PowerPlus-128 machine with 128 PowerPC 601 processors and a communication network including 256 T805 transputers.

To construct the Distributed Dynamic Dependence Graph (DDDG), the slicer system first instruments the distributed program with calls to data and communication dependence collecting routines. These routines construct the Data and Communication Dependence Graph (DataComDep) at run time, while the Control Dependence Graph (ContDep) is constructed at compile time. Then the instrumented program is compiled into object code (as a future solution, the instrumentation can be built into the compiler, thereby eliminating the need for a separate instrumented source code file). When the object code is executed, the DataComDep is created on-the-fly and dumped into a trace file.

The above-mentioned steps are performed for and by each process. The slicing can then be done by the slicing system, which accesses all trace files, source code files and control dependence files through the file system.
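As a rough sketch of what this instrumentation amounts to, an assignment and a send might be instrumented as follows. The trace-routine names trace_use, trace_def and trace_send are hypothetical, since the paper does not name its instrumentation API:

/* Hypothetical trace routines; each call appends dependence
 * information for one statement instance to the per-process
 * trace that becomes the DataComDep graph. */
void trace_use(int stmt, void *var);   /* records a use of var */
void trace_def(int stmt, void *var);   /* records a def of var */
void trace_send(int stmt, int top_id, int dest);
void SendMessage(int top_id, int proc_id, void *data, int size);

void example(int top_id)
{
    int i = 1, sum = 0, x = 17;

    /* Original statement:  sum += i;  instrumented by hand.
     * The uses create DataDep edges from this statement
     * instance to the latest definitions of i and sum. */
    trace_use(8, &i);
    trace_use(8, &sum);
    trace_def(8, &sum);
    sum += i;

    /* A send is wrapped so that the run-time system can later
     * match it with the corresponding receive, yielding a
     * ComDep edge at the receiving process. */
    trace_send(9, top_id, 1);
    SendMessage(top_id, 1, (void *) &x, (int) sizeof(x));
}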

4.1 The Slicer System

Automatic Instrumenting & Instrumented Code

The automatic instrumentor instruments the target C program. This part has not been implemented yet; the instrumentation is currently done by hand.

Input: Target C program
Output: Instrumented target C program and a file containing the ContDep


Figure 2: (I) An example program which computes the square of the value of variable x, using two processes, Send_Receive with ProcId 0 and Receive_Send with ProcId 1. (II) Communication dependences between processes Send_Receive and Receive_Send.

Compiling & Object Code

An ANSI-C compiler from Motorola which cross-compiles to the Parsytec GC is used.

Input: Instrumented target C program
Output: Object code

Executing & Trace File

The program runs on the Parsytec GC and generates a trace file containing the DataComDep.

Input: None
Output: The DataComDep as a trace file
User Input: User data
User Output: User data

The Slicer

The slicing criterion, which is a statement in a specific process, is sent from the slicer GUI through a socket. A sequential algorithm is used and the slice is sent back to the GUI. It is written in sequential C.

Input: DDDG: the DataComDep in a trace file and the file containing the ContDep
Socket Input: Slicing criterion
Socket Output: Statements and messages in the slice


Figure 3: General overview of the Dynamic Slicer system.

The GUI

This is the graphical user interface of the slicer. The slicing criterion is defined here and sent through a socket to the slicer, and the computed slice returned by the slicer is presented graphically. It has been implemented in sequential C, employing the Motif user interface component library.

Socket Input: Statements and messages in the slice
Socket Output: Slicing criterion
User Input: Slicing criterion
User Output: Graphical view of the statements and messages in the slice

5 Performance Measurements

The scalability and performance of the implemented slicing system have been measured when applied to a small parallel array processing application running on the Parsytec GC/PowerPlus-128 MIMD computer with 128 processors and a node memory of 32 Mbyte. We have measured the latency for a 1-byte message on this computer to be approximately 140 microseconds. The data rate between two pairs of processors varies between 2 and 3.5 Mbyte/second depending on the message size. On the other hand, a given node can send data to 4 neighbors simultaneously. A 6-way parallel file system is also available on this machine, with a measured performance of approximately 10 Mbyte/second for both reading and writing.

The results for the small array application are shown in Table 1. It was not possible to run the application on only one processor with instrumentation on, since the produced trace overflowed the node memory of 32 Mbyte. Here we have only measured the elapsed time without and with instrumentation, since producing the trace is the dominant cost of performing dynamic slicing. The computation of the actual slice is currently done on a workstation in a post-mortem manner, and is independent of the number of processors which produced the trace file. We are planning to replace this sequential slicing computation by a parallel version in the future.

Table 1: Scalability when slicing a small array application

Processors | Time without instr. (ms) | Speedup (without instr.) | Time with instr. (ms) | Speedup (with instr.)
         1 |                     2475 |             1 (assigned) |                     - |                     -
         2 |                     1275 |                     1.94 |                  5281 |          2 (assigned)
         4 |                      658 |                     3.76 |                  3417 |
         8 |                      353 |                     7.01 |                  2481 |                  4.26
        16 |                      218 |                    11.35 |                  2024 |                  5.75
        32 |                      149 |                    16.61 |                  1786 |                  5.91


Figure 4: Speedup curves for a small array application, (1) without instrumentation and (2) including instrumentation to produce trace data for dynamic slicing using the slicer system.

Figure 4 shows the two speedup curves produced from the data in Table 1.

The scalability of the "pure", un-instrumented application is reasonable up to approximately 16 processors, where the latency and message-passing time start to become significant compared to the computing work done on each processor. Surprisingly enough, the scalability of the instrumented application is worse, despite the fact that more work is done on each processor, which might have been expected to result in better scalability. The explanation is that the sequential work performed by the master node (the 1.5-second part of all the times shown in the column for time with instrumentation) increased relatively more than in the slaves because of the instrumentation, since many simple assignment statements were instrumented in the master.

The tracing overhead introduced by the instrumentation varies from a factor of 4 on 2 processors to a factor of 12 on 32 processors. This is a high, but not unreasonable, price that may be worth paying for locating some hard-to-find bugs.

6 Distributed Dynamic Dependence Graph (DDDG)

In this section we discuss a data structure called the Distributed Dynamic Dependence Graph (DDDG), which is used in computing distributed dynamic slices. The DDDG consists of two graphs, the Control Dependence Graph (ContDep) and the Data and Communication Dependence Graph (DataComDep).

6.1 Control Dependence Graph (ContDep)

A Control Dependence Graph, ContDepG = (CDG1, CDG2, ..., CDGn), of a distributed program Prog = (p1, p2, ..., pn) is a set of subgraphs CDGi = (Vi, Ci), where Vi is the set of vertices corresponding to simple statements and predicate expressions in process pi, and Ci is the set of edges that reflect control dependences between vertices in Vi. The Control Dependence Graph is constructed at compile time, since control dependences do not depend on a particular execution. Control dependences from this graph are then used at run-time, together with data and communication dependences, to include dependent statements when computing slices.
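As a concrete illustration, consider the program of Figure 1 (I) with its statements numbered s1) iread(0, &N); s2) sum = 0; s3) i = 1; s4) while (i <= N); s5) sum += i; s6) i++; s7) printf("%i\n", sum); (this numbering is assumed here for illustration). Its single-process control dependence subgraph then contains the edges

    s5 -> s4    (sum += i executes only if the while predicate holds)
    s6 -> s4    (i++ likewise)

while s1, s2, s3, s4 and s7 are control dependent only on the entry of main.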

6.2 Data and Communication Dependence Graph (DataComDep)

A Data and Communication Dependence Graph of a distributed program Prog = (p1, p2, ..., pn) is a distributed graph DataComDepG = (DCDG1, DCDG2, ..., DCDGn), where DCDGi represents the data and communication dependences for process pi.

The DataComDep graph is constructed on-the-fly during the execution of the program and contains data and communication dependences. It can be constructed in several ways. We present three methods:

1. Complete statement instance-based graph
2. Reduced statement instance-based graph
3. Statement-based graph

The selected method

For the first prototype implementation of the Dynamic Slicer, we have used the first method, the complete statement instance-based graph. The reason for this choice is that the first method gives accurate slices and is simpler to implement than the second. Also, since most of the implementation work needed for method 1 can be reused as part of a future implementation of the second method, method 1 was a natural choice for a first implementation.

The definition of the Data and Communication Dependence Graph for each process is as follows:

DataComDep(p) = <V, E>, where
V = {s | s is a statement instance in process p}
E = a set of dependence edges, divided into two groups:

DataDep: a DataDep edge connects two statement instances si and sj in process p and shows that the computation of si is data dependent on sj.

ComDep: a ComDep edge connects a receive statement instance si in process p to a send statement instance sj in process q, to represent a communication dependence between si and sj.
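The paper does not prescribe a concrete representation for these trace nodes. As a sketch under that caveat, a node of the complete statement instance-based graph could be recorded along the following lines; all type and field names here are ours:

/* One dependence edge leaving a statement instance. For a
 * COM_DEP edge, proc_id/instance name the send instance in the
 * partner process; for a DATA_DEP edge, proc_id is the node's
 * own process. */
typedef enum { DATA_DEP, COM_DEP } DepKind;

typedef struct Dep {
    DepKind     kind;
    int         proc_id;
    long        instance;
    struct Dep *next;
} Dep;

/* One executed statement instance, as dumped to the trace. */
typedef struct Node {
    int  stmt_id;   /* static statement number in the source */
    long instance;  /* execution-order index in this process */
    Dep *deps;      /* list of outgoing DataDep/ComDep edges  */
} Node;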

7 An Incremental Dynamic Slicing Algorithm

Given the Distributed Dynamic Dependence Graph, the slice can be computed in two ways: by a parallel algorithm or by a sequential slicing algorithm. In this paper we consider a sequential algorithm for performing the slicing through a trace traversal, even though the construction of the trace has been done in parallel.

Definition 1: A → B means that A is dependent on B, by a dependence of type data or control.

Definition 2: (p, r) → (q, s) means that (receive) node r in process p is communication dependent on (send) node s in process q.

The general idea behind the algorithm is that all messages come in the form of (p, r) → (q, s), i.e., node pairs. This communication dependence is stored at the receive node, i.e., (q, s) is stored at (p, r).

A sequential program can easily be sliced incrementally, because all nodes of the DDDG can be stored in a linear linked list, and during a backward slicing they are data and/or control dependent only on earlier nodes (i.e., nodes executed earlier). For slicing of parallel programs, however, this cannot be done as easily. When the slicing algorithm reaches a send node (q, s) (see Definition 2 above), it is not yet clear whether this node will be included in the slice until the receive node (p, r) has been sliced. The algorithm therefore suspends the slicing of a process if a send node is reached and this message has not yet been encountered, i.e., the corresponding receive node has not yet been sliced. The algorithm computes accurate dynamic slices.
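A minimal sketch of this suspension scheme is given below, assuming the per-process traces are stored as arrays of (hypothetical) node records like those sketched in Section 6 and walked backwards. It illustrates the idea, not the authors' implementation:

#include <stdbool.h>

/* Hypothetical trace-node record; traces are per process, in
 * execution order, and are walked backwards during slicing. */
typedef struct {
    bool in_slice;      /* node has been demanded by the slice  */
    bool is_send;
    bool is_recv;
    bool partner_seen;  /* for sends: matching receive visited  */
    int  send_proc;     /* for receives: ComDep partner process */
    long send_inst;     /* ... and its statement instance       */
    int  ndeps;         /* intra-process data/control deps      */
    long dep_inst[8];
} TNode;

enum { MAXPROC = 64 };
extern TNode *trace[MAXPROC];
extern long   len[MAXPROC];

void slice(int nproc, int cproc, long cinst)
{
    long pos[MAXPROC];
    for (int p = 0; p < nproc; p++)
        pos[p] = len[p] - 1;
    trace[cproc][cinst].in_slice = true;   /* slicing criterion */

    bool progress = true;
    while (progress) {
        progress = false;
        for (int p = 0; p < nproc; p++) {
            while (pos[p] >= 0) {
                TNode *n = &trace[p][pos[p]];
                /* Suspend this process at a send whose matching
                 * receive has not yet been sliced. */
                if (n->is_send && !n->partner_seen)
                    break;
                if (n->is_recv) {
                    /* The ComDep is stored at the receive node:
                     * notify the send partner, and demand it if
                     * this receive is in the slice. */
                    TNode *s = &trace[n->send_proc][n->send_inst];
                    s->partner_seen = true;
                    if (n->in_slice)
                        s->in_slice = true;
                }
                if (n->in_slice)   /* pull in earlier deps */
                    for (int d = 0; d < n->ndeps; d++)
                        trace[p][n->dep_inst[d]].in_slice = true;
                pos[p]--;
                progress = true;
            }
        }
    }
    /* All nodes with in_slice set now form the distributed
     * dynamic slice. (Sends whose messages are never received
     * would block their process here; a complete implementation
     * needs an end-of-trace resolution step for them.) */
}

Within one process, every data or control dependence points to an earlier statement instance, so the backward walk always knows a node's final slice status before visiting it; the only forward-looking case is the send node, which is exactly where the walk suspends.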

8 Related Work

The concept of dynamic slicing was originally introduced for sequential programs. Distributed programs introduce several problems which do not exist in sequential programs (e.g., non-reproducible behavior, non-deterministic selection of communication events, etc.). Several methods for dynamic slicing of distributed programs have been proposed.

Duesterwald et al. [3] introduce a Distributed Dependence Graph for the representation of distributed programs. A distributed program consists of a set of processes; communication between processes is assumed to be synchronous, with non-deterministic send and receive statements. Their Distributed Dependence Graph contains a single vertex for each statement and control predicate in the program. Control dependences between statements are determined statically, while data and communication dependences are added to the graph at run-time. Slices are computed by determining the set of vertices from which the vertices specified in the criterion can be reached. This approach deals with non-determinism by replacing non-deterministic communication statements in the program with deterministic communication statements in the slice. The slicing algorithm may compute inaccurate slices in the presence of loops, due to the fact that a single vertex is used for all occurrences of a statement in the execution history.

Korel and Ferguson [10] extend the dynamic slicing method of Korel and Laski [9] to distributed programs with Ada-type rendezvous communication. They formalize the execution history as a distributed program path and introduce the notion of communication influence to capture the interdependence between tasks (in addition to the Definition-Use, Test-Control and Identity relations from the previous work). A dynamic slice is defined to be an executable projection of the original program that is obtained by deleting statements from it. The slice is only guaranteed to preserve the behavior of the program if the rendezvous in the slice occur in the same relative order as in the program. This approach therefore requires a mechanism for replaying rendezvous in the slice in the same relative order as they appeared in the original program.

Cheng [2] proposes a program representation, the Process Dependence Net (PDN), for concurrent programs. The PDN represents, in addition to the usual control and data dependences, selection, synchronization and communication dependences. Cheng's algorithm for computing dynamic slices is basically a generalization of the initial approach by Agrawal and Horgan [1], which computes a dynamic slice based on a static graph. Consequently, the slicing algorithm may compute inaccurate slices.

To summarize, Korel and Ferguson [10] and Duesterwald et al. [3] compute slices that are executable programs, but deal with non-determinism in different ways. The approaches presented in this paper and by Cheng [2] do not consider this problem, because the computed slices are not executable programs. The dynamic slicing methods of Duesterwald et al. [3] and Cheng [2] use static dependence graphs for computing dynamic slices. Although this is more space-efficient than the other approaches, the computed slices are inaccurate. The algorithms by Korel and Ferguson and the one used in our approach both require an amount of space that depends on the number of executed statements. Korel and Ferguson require their slices to be executable; therefore these slices contain more statements than those computed by our algorithm.

9 Conclusions

In this paper we have presented an algorithm for distributed dynamic slicing and a tool (the Dynamic Slicer) for performing slicing on parallel and distributed message-passing based applications. The slicer has an easy-to-use window-oriented user interface that clearly marks the parts of the program which contribute to some selected (faulty) computational result. The need for tools that help the programmer understand dependencies and interactions is more pressing for parallel software than for conventional sequential software. This is especially true for message-passing based software.

This is the first prototype implementation of the slicer, which gives a tracing overhead of a factor of 4 (on 2 processors) to a factor of 12 (on 32 processors) increase in execution time. Reasonable, but not perfect, scalability of the combination of slicer and application is currently achieved. A number of rather small adjustments to the tracing mechanism of the slicing tools are expected to improve the scalability properties.

However, to be useful for large applications, further improvements in the efficiency of the tracing part of the slicing mechanism are needed. For example, the parallel version of the slicing algorithm mentioned in this paper should be implemented, and static analysis of loops should be utilized to reduce the amount of tracing. Finally, the slicing tool itself should be integrated with a full parallel debugger, for example DETOP [12], which supports multi-processor, multi-thread and multi-program debugging. We have recently started a cooperation with the originators of DETOP towards this goal.

References

[1] Hiralal Agrawal and Joseph R. Horgan. Dynamic Program Slicing. In Proceedings of the ACM SIGPLAN '90 Conference on Programming Language Design and Implementation, pages 246-256, New York, June 1990.

[2] Jingde Cheng. Slicing Concurrent Programs - A Graph-Theoretical Approach. In Proceedings of the First International Workshop on Automated and Algorithmic Debugging, P. Fritzson, Ed., vol. 749 of Lecture Notes in Computer Science, Springer-Verlag, pages 223-240, 1993.

[3] Evelyn Duesterwald, Rajiv Gupta, and Mary Lou Soffa. Distributed Slicing and Partial Re-execution for Distributed Programs. In Proceedings of the Fifth Workshop on Languages and Compilers for Parallel Computing, New Haven, Connecticut, August 1992.

[4] Jeanne Ferrante, Karl J. Ottenstein, and Joe D. Warren. The Program Dependence Graph and its Use in Optimization. ACM Transactions on Programming Languages and Systems, 9(3):319-349, July 1987.

[5] Keith B. Gallagher and James Lyle. Using Program Slicing in Software Maintenance. IEEE Transactions on Software Engineering, 17(8):751-761, 1991.

[6] Susan Horwitz, Jan Prins, and Thomas Reps. Integrating Non-Interfering Versions of Programs. In Proceedings of the ACM SIGSOFT/SIGPLAN Symposium on Principles of Programming Languages, San Diego, California, January 1988.

[7] Mariam Kamkar. Interprocedural Dynamic Slicing with Applications to Debugging and Testing. Ph.D. thesis (No. 297), Department of Computer and Information Science, Linköping University, April 1993.

[8] Mariam Kamkar, Peter Fritzson, and Nahid Shahmehri. Interprocedural Dynamic Slicing with Applications to Interprocedural Data Flow Testing. In Proceedings of the IEEE Conference on Software Maintenance, CSM-93, San Diego, September 1993.

[9] Bogdan Korel and Janusz Laski. Dynamic Slicing of Computer Programs. The Journal of Systems and Software, 1990.

[10] Bogdan Korel and Roger Ferguson. Dynamic Slicing of Distributed Programs. Applied Mathematics and Computer Science, Vol. 2, No. 2, pages 199-215, 1992.

[11] Mariam Kamkar, Nahid Shahmehri, and Peter Fritzson. Three Approaches to Interprocedural Dynamic Slicing. In Proceedings of EUROMICRO '93 (Journal of Microprocessing and Microprogramming), Vol. 38, No. 1-5, September 1993. Published by North-Holland.

[12] Michael Oberhuber and Roland Wismüller. DETOP - An Interactive Debugger for PowerPC Based Multicomputers. In Proceedings of the ZEUS '95 Workshop on Parallel Programming and Computing, Linköping, Sweden, May 17-19, 1995. Published by IOS Press.

[13] Nahid Shahmehri, Mariam Kamkar, and Peter Fritzson. Semi-Automatic Bug Localization in Software Maintenance. In Proceedings of the IEEE Conference on Software Maintenance, CSM-90, San Diego, November 1990.

[14] Frank Tip. Generation of Program Analysis Tools. Ph.D. thesis (series 1995-5), Institute for Logic, Language and Computation, Amsterdam, Netherlands, 1995.

[15] Mark Weiser. Program Slicing. IEEE Transactions on Software Engineering, SE-10(4):352-357, July 1984.
