
82 IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 35, NO. 2, MAY 1988

Towards the Standardization of Performance Measures for Project Scheduling Heuristics

ADEDEJI B. BADIRU

Abstract-This paper presents performance measures that can be used to compare project scheduling heuristics. It has long been realized that a standard should exist for evaluating scheduling rules. Unfortunately, no such standard has been seriously addressed up until now. Quantitative measures relative to the project duration are proposed for consideration for the diagnostic analysis of activity sequencing heuristics in PERT networks with precedence and resource constraints. The work presented may help establish the basis for the standardization of performance measures for project scheduling heuristics. A project analyst may use the evaluation techniques presented in deciding which of the many available scheduling heuristics is most appropriate for a given project scenario. A comparative experiment indicates that heuristics perform almost equally well for small projects while their performances vary considerably for large projects. This leads to the conclusion that small projects of moderate complexity are not accurate performance measurement agents for scheduling heuristics.

I. INTRODUCTION

There has been considerable interest in project scheduling heuristics in PERT networks due to the impracticality of mathematically optimal approaches [9]. Several mathematical techniques have been developed over the years to generate optimal project schedules where the minimization of the project length is desired. Unfortunately, the optimal techniques are not generally used in practice because of the complexity involved in implementing them for realistic project configurations (see [10], [15]). Even using a computer to generate an optimal schedule is sometimes cumbersome because of the combinatorial nature of activity scheduling and the resulting high computer memory and CPU time requirements.

As a result, a more practical approach to project scheduling is the use of heuristics (see [18]). If the circumstances of a scheduling problem satisfy the underlying assumptions, a good heuristic will yield schedules that are feasible enough to work with. Davis and Patterson [3], using 83 test projects, found that heuristic results generally are 5-10 percent worse than optimal results. For most applications, this is satisfactory enough. The comparative analysis conducted by Davis and Patterson considered eight of the commonly used scheduling heuristics. They are: Minimum Job Slack, Resource Scheduling Method (i.e., minimum increase in project duration resulting when activity "a" follows activity "b"), Minimum Late Finish Time, Greatest Resource Demand, Greatest Resource Utilization, Shortest Imminent Operation, Most Jobs Possible, and Random Jobs. With the objective of minimizing project duration, the results produced by the heuristics were compared to optimal results generated by a "bounded enumeration" procedure. A major task in heuristic scheduling is to develop activity sequencing rules that support widely prevalent and realistic project situations. Numerous scheduling heuristics have been developed in recent years. Many of these are now being applied by practitioners to actual projects. But due to the several approaches available, a comparative analysis is always a necessity. Whitehouse [16], Holloway et al. [9], Davis and Patterson [3], Cooper [2], and Kurtulus and Davis [10] addressed such comparative analyses.

Manuscript received June 25, 1986; revised August 11, 1987. The author is with the School of Industrial Engineering, University of Oklahoma, Norman, OK 73019. IEEE Log Number 8820258.

Fisher [20] presents an extensive comparative analysis of optimization and heuristic scheduling methods. His worst-case analysis methodology establishes the maximum deviation from optimality that can occur when a specified heuristic is applied within a given problem class. Even though his presentation relates only to machine sequencing, the technique can easily be extended to activity sequencing for PERT networks. In a recent study, Russell [25] used 80 test problems to compare six scheduling heuristics with the objective of maximizing the net present value of project cash flows. Russell's work represents a good example of incorporating economic aspects into PERT network analysis. In this paper, we discuss the need for standardizing performance measures for scheduling heuristics, we present some of the frequently used heuristics, we develop some performance measures, and we perform a comparative analysis of some selected heuristics. The terms "scheduling heuristics," "scheduling rules," "priority rules," "sequencing rules," and "rules" are used interchangeably in this paper.

II. THE NEED FOR STANDARDIZATION

Because comparisons based solely on the project length can be misleading, it is desirable to develop some form of aggregate criterion for comparing rules. Similarly, comparisons based on inconsistent test problems often lead to erroneous conclusions. It is conceivable that heuristics found to be the best under one computer analysis using a set of test problems may turn out to be the worst under a different computer analysis using another set of test problems. This discrepancy may be due to the different ways by which the dissimilar computers and their associated program codes handle the different inherent structures of the test problems. Patterson and Roth [12] suggested a "clearing house" of test problems to evaluate the various scheduling approaches. The presentation in this paper is a means towards that goal. Some performance measures are presented for consideration for adoption when evaluating activity sequencing heuristics in PERT networks with precedence and resource constraints.

III. SOME PROJECT SCHEDULING HEURISTICS

Activity-Time (ACTIM) is one of the early project scheduling heuristics. It is the resource allocation priority rule used in Brooks' Algorithm (BAG). The algorithm, which resulted from the work of George H. Brooks, was extended by Whitehouse and Brown [17]. The original algorithm considered only the single-project, single-resource case, but it lends itself to practical extensions for the multiresource cases. The ACTIM scheduling heuristic represents the maximum time that an activity controls in the project network on any one path. It is computed for each project activity by subtracting the activity's latest start time from the critical path time as shown below:

ACTIM = (Critical Path Time) - (Activity Latest Start Time).

Activity-Resource (ACTRES) is a scheduling heuristic proposed by Bedworth [19]. It is a combination of the activity time and resource requirements. It is computed as:

ACTRES = (Activity Time)(Resource Requirement).

For multiple resources, the computation of ACTRES can be slightly modified to account for various resource types. For this purpose, the resource requirement can be replaced by a scaled sum of resource requirements over the different resource types. An example of the scaling method used in this paper is presented later in this section.

Time-Resources (TIMRES) is another priority rule proposed by Bedworth. It is composed of equally weighted portions of ACTIM and ACTRES. It is expressed as:

TIMRES = 0.5 (ACTIM) + 0.5 (ACTRES)

where ACTIM and ACTRES are calculated as previously discussed.
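These three rules can be sketched as below. The helper names and the sample activity data (durations, resource needs, latest start time, critical path time) are illustrative assumptions, not values from the paper; only the formulas follow the text.

```python
# Illustrative sketch of the ACTIM, ACTRES, and TIMRES priority values.

def actim(critical_path_time, latest_start):
    """Maximum time an activity controls on any one path."""
    return critical_path_time - latest_start

def actres(duration, resource_requirement):
    """Activity time weighted by its resource requirement."""
    return duration * resource_requirement

def timres(actim_value, actres_value):
    """Equally weighted blend of ACTIM and ACTRES."""
    return 0.5 * actim_value + 0.5 * actres_value

# Hypothetical activity: duration 4, resource need 2, latest start 3,
# in a network whose critical path time is 10.
a = actim(10, 3)   # 7
r = actres(4, 2)   # 8
t = timres(a, r)   # 7.5
```

An activity with a large TIMRES value would be given priority when resources are allocated.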

GENRES is one of the extensions of Brooks' algorithm proposed by Whitehouse and Brown [17]. In its computation, GENRES is, in effect, a modification of TIMRES with unequal weights assigned to ACTIM and ACTRES. The procedure for GENRES is a computer search technique whereby various values of w (between 0 and 1) are used in the computational expression shown below:

GENRES = w (ACTIM) + (1 - w) (ACTRES).

Resource Over Time (ROT) is a scheduling criterion proposed by Elsayed [7]. The ROT scheduling heuristic is simply the resource requirement divided by the activity time as given below:

ROT = (Resource Requirement) / (Activity Time).

The resource requirement in the above expression can be replaced by the scaled sum of resource requirements in the case of multiple resource types.
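The ROT rule and the scaled resource sum can be sketched as follows. The activity data are hypothetical; the scaling (dividing each resource type's requirement by the maximum any activity demands of that type, so units cancel before adding) follows the scaling idea described in this section.

```python
def rot(resource_requirement, duration):
    # ROT: resource requirement per unit of activity time (Elsayed's rule).
    return resource_requirement / duration

def scaled_resource_sum(requirements, max_required):
    # For multiple resource types, replace the single requirement with a
    # scaled sum: each type's requirement divided by the maximum demand
    # for that type across activities, so the units cancel before adding.
    return sum(x / y for x, y in zip(requirements, max_required))

# Hypothetical activity needing 3 units of type 1 and 1 unit of type 2,
# where the largest single-activity demands are 5 and 2 respectively.
s = scaled_resource_sum([3, 1], [5, 2])  # 0.6 + 0.5 = 1.1
priority = rot(s, 4)
```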

A comprehensive rule named Composite Allocation Factor (CAF) was developed by Badiru [1]. For each activity i, CAF is computed as a weighted and scaled sum of two components, the Resource Allocation Factor (RAF) and the Stochastic Activity Duration Factor (SAF), as follows:

CAFi = w (RAFi) + (1 - w) (SAFi)

where w is a weighting factor between 0 and 1. RAF is defined for each activity i as:

RAFi = (1/ti) Σ (j = 1 to R) (xij / yj)

where

xij = number of units of resource type j required by activity i,
yj = maximum units of resource type j required by any single activity,
ti = the expected duration of activity i, and
R = the number of resource types involved.

RAF is a measure of the expected resource consumption per unit time. A scaling procedure is used in such a way that the differences among the units of resource types are eliminated to obtain real numbers suitable for addition. The set of RAF values is itself scaled from 0 to 100 as follows:

Scaled RAFi = (Unscaled RAFi / max{RAFi}) (100).

This helps to eliminate the time-based unit. Thus, RAF is reduced to a dimensionless quantity. Resource-intensive activities have larger magnitudes of RAF and, as such, require greater attention in the scheduling process. To incorporate the stochastic nature of activity times in a project schedule, SAF is defined for each activity i as

SAFi = μi + σi/μi

where

μi = expected duration of activity i,
σi = standard deviation of the duration of activity i, and
σi/μi = coefficient of variation of the duration of activity i.

In order to eliminate the discrepancy in the units of the terms in the expression for SAF, the expected activity duration (in time units) is scaled to a dimensionless value between 0 and 50 to yield

Scaled μi = (Unscaled μi / max{μi}) (50).

The coefficient of variation is also scaled from 0 to 50. Thus, SAF ranges from 0 to 100. Since RAF and SAF are both on a scale of 0 to 100, CAF is also scaled from 0 to 100. It is on the basis of the magnitudes of CAF that an activity will be allocated resources and a time slot in the project schedule. An activity that lasts longer, consumes more resources, and varies more in duration will have a larger magnitude of CAF. Such an activity is given priority during the scheduling process.
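Putting the pieces together, the CAF computation might be sketched as below. The unscaled RAF expression and all activity data here are illustrative assumptions consistent with the definitions in the text, not the paper's verbatim procedure; only the 0-100 and 0-50 scalings follow the text directly.

```python
def scaled_raf(durations, demands, max_demand):
    # Unscaled RAF_i: per-unit-time resource consumption, each resource
    # type normalized by the maximum any activity demands of it (this
    # form is a reconstruction consistent with the definitions above).
    unscaled = [(1.0 / t) * sum(x / y for x, y in zip(row, max_demand))
                for t, row in zip(durations, demands)]
    peak = max(unscaled)
    return [100.0 * u / peak for u in unscaled]   # scaled 0 to 100

def scaled_saf(means, stds):
    # SAF_i = mu_i + sigma_i/mu_i, with each part scaled 0 to 50 so the
    # time units drop out and SAF spans 0 to 100.
    cov = [s / m for m, s in zip(means, stds)]
    m_peak, c_peak = max(means), max(cov)
    return [50.0 * m / m_peak + 50.0 * c / c_peak
            for m, c in zip(means, cov)]

def caf(raf_vals, saf_vals, w=0.5):
    # Weighted 0-100 composite; a larger CAF means schedule earlier.
    return [w * r + (1 - w) * s for r, s in zip(raf_vals, saf_vals)]

# Two hypothetical activities, two resource types (max demands 4 and 2).
r = scaled_raf([2.0, 5.0], [[4, 2], [2, 1]], [4.0, 2.0])  # [100.0, 20.0]
s = scaled_saf([10.0, 5.0], [1.0, 2.0])                   # [62.5, 75.0]
c = caf(r, s)                                             # [81.25, 47.5]
```

The first activity ends up with the larger CAF because its resource intensity dominates its lower duration variability.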

The issue of uncertainty in activity duration has been extensively addressed in the literature. MacCrimmon [22] presents a mathematical analysis of the magnitude and direction of errors introduced into a project schedule by the basic assumptions of the PERT model. He concludes that the PERT-calculated project mean is always biased optimistically. Rothkopf [21] suggests tackling tasks with nondeterministic durations by reducing the scheduling problem into equivalent deterministic problems. This involves developing an expectation function for the duration of each task. A related approach dealing with subdividing PERT activities is presented by Healy [24]. Donaldson [23] presents an alternative method of estimating the mean and variance of a PERT activity time. The refined estimates are utilized in scheduling activities in the usual procedure of PERT.

With the exception of CAF, there is no other PERT activity sequencing rule that explicitly considers the stochasticity of activity durations in ranking activities during the scheduling process. Other frequently used scheduling rules are summarized by Cooper [2]. He presents the following activity sequencing rules that give priority to the activity which:

1) has the earliest arrival time (first-come, first-served),
2) has the minimum late start time,
3) has the minimum early finish time,
4) has the minimum late finish time,
5) has least float,
6) has the shortest processing time,
7) has the longest processing time,
8) has most immediate successors,
9) has the greatest value of a random variable,
10) has most successors,
11) has least non-related jobs,
12) has least non-dependent jobs remaining,
13) has most successors per level following,
14) has least immediate successors,
15) has least successors,
16) can start first, considering resources,
17) has least float per successor,
18) has the largest path following, and
19) will finish first.
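In implementation, any of these rules reduces to a sort key over the set of currently eligible activities. A minimal serial dispatch loop, with a hypothetical four-activity network and rule 5 (least float) as the key, might look like:

```python
# Minimal serial dispatch sketch: repeatedly pick the eligible activity
# (all predecessors finished) with the best priority value.
# The network and the slack values are hypothetical.
preds = {"A": [], "B": [], "C": ["A"], "D": ["A", "B"]}
total_slack = {"A": 0, "B": 3, "C": 0, "D": 2}

done, order = set(), []
while len(done) < len(preds):
    eligible = [a for a in preds
                if a not in done and all(p in done for p in preds[a])]
    nxt = min(eligible, key=lambda a: total_slack[a])  # rule 5: least float
    order.append(nxt)
    done.add(nxt)
# order now holds the activities sequenced by the minimum-slack rule.
```

Swapping the `key` function is all that is needed to switch between the rules listed above.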

IV. PERFORMANCE MEASURES

In addition to comparing rules on the basis of the raw project durations they yield, the following aggregate measures are proposed. The first one is an evaluation of the ratio of the minimum project duration observed to the project duration obtained under each rule. For each rule m, the ratio under each test problem n is computed as:

ρmn = qn / PLmn

where

ρmn = the efficiency ratio for rule m under test problem n,
qn = min over m of {PLmn} = the minimum project duration observed for test problem n,
PLmn = the project duration for test problem n under rule m,
M = the number of scheduling rules considered, and
N = the number of test problems.
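The efficiency ratio computation can be sketched as follows; the duration matrix is hypothetical.

```python
def efficiency_ratios(durations):
    # durations[m][n]: project length produced by rule m on test problem n.
    # The efficiency ratio is q_n / PL_mn, where q_n is the minimum
    # duration observed for problem n across all rules considered.
    n_problems = len(durations[0])
    q = [min(row[n] for row in durations) for n in range(n_problems)]
    return [[q[n] / row[n] for n in range(n_problems)] for row in durations]

# Three hypothetical rules on two test problems.
rho = efficiency_ratios([[10, 20], [12, 20], [10, 25]])
# Rule 0 attains the observed minimum on both problems (ratios of 1.0).
```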

From the above definitions, the maximum value for the ratio is 1.0. Thus, it is alternately referred to as the Rule Efficiency Ratio. The value qn is, of course, not necessarily the global minimum project duration for test problem n. Rather, it represents the local minimum based on the particular scheduling rules considered. If the global minimum duration for a project is known (probably from a mathematical programming technique), then it should be used in the expression for ρmn. Rules can be compared on the basis of the absolute values of ρmn or on the basis of the sums of ρmn. The sums of ρmn over the index n are defined as

Σ (n = 1 to N) ρmn,  m = 1, 2, ..., M.

The use of the sums of ρmn is a practical approach to comparing scheduling rules. It is possible to have a scheduling rule that will consistently yield near-minimum project durations for all test problems. On the other hand, there may be another rule that performs extremely well for some test problems while it performs miserably for other problems. The sums help to average out the overall performance over all test problems. The other comparison measure proposed involves the calculation of the percentage deviations from the observed minimum project duration. The deviations are computed as:

δmn = [(PLmn - qn) / qn] (100),  m = 1, 2, ..., M;  n = 1, 2, ..., N

which denotes the percentage deviation from the minimum project duration for rule m under test problem n.
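The deviation measure can be sketched in the same way; the duration matrix is again hypothetical.

```python
def percent_deviations(durations):
    # durations[m][n]: project length under rule m for test problem n.
    # delta_mn = 100 * (PL_mn - q_n) / q_n: the percentage by which a
    # rule's schedule exceeds the observed minimum duration.
    n_problems = len(durations[0])
    q = [min(row[n] for row in durations) for n in range(n_problems)]
    return [[100.0 * (row[n] - q[n]) / q[n] for n in range(n_problems)]
            for row in durations]

delta = percent_deviations([[10, 20], [12, 20]])
# delta[1][0] is 20.0: rule 1 is 20 percent above the minimum on problem 0.
```

An analyst can then screen out any rule whose deviation exceeds a preset control limit, as discussed later in the paper.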

V. COMPARATIVE EXPERIMENT

To illustrate the use of the above performance measures, 30 test problems and 13 priority rules are used in an extensive analysis. The test projects range in size from 4 activities and 1 resource type to 80 activities and 10 resource types. Table I presents the characteristics of the projects. The rules used for the illustration include ACTIM, ACTRES, TIMRES, GENRES-1 (with w = 0.25), GENRES-2 (with w = 0.75), ROT, Minimum Total Slack (MTS) in the PERT network, Minimum Late Start Time (MLST) in PERT, Minimum Late Completion Time (MLCT) in PERT, Shortest PERT Duration (SPD), Longest PERT Duration (LPD), Highest Number of Immediate Successors (HNIS), and CAF.

While the test problems cannot be generalized to all scheduling problems, they are representative of the diversity encountered in project scenarios. Some of the problems are adapted from problems that have been presented in the literature ([3]-[5], [7], [16], [18], [19], [22], and [27]) while some are generated at random. The last problem (problem 30) is an actual construction project.

TABLE I
COMPLEXITIES OF TEST PROBLEMS

The structures of the test projects vary from very simple (problem 1 with two parallel paths) to very complex (problem 13 with intricate precedence relationships). For the purpose of the analysis presented here, we define a quantitative measure of the complexity of a project network as:

λ = p [ (1 - 1/L) Σ (i = 1 to L) ti + Σ (j = 1 to R) ( Σ (i = 1 to L) ti xij ) / zj ] / d

where

λ = project network complexity,
L = number of activities in the network,
ti = expected duration of activity i,
R = number of resource types,
xij = units of resource type j required by activity i,
zj = maximum units of resource type j available,
p = maximum number of immediate predecessors in the network, and
d = PERT duration of the project with no resource constraint.
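The complexity computation can be sketched as follows. The grouping of terms follows the textual explanation of the measure (p as a multiplier, the (1 - 1/L) fraction applied to the total time requirement, resource consumption relative to availability summed over resource types, and the unconstrained PERT duration d in the denominator); all input data are hypothetical.

```python
def network_complexity(t, x, z, p, d):
    # t[i]: expected duration of activity i; x[i][j]: units of resource
    # type j required by activity i; z[j]: maximum availability of
    # resource type j; p: max immediate predecessors; d: unconstrained
    # PERT duration. Inputs are illustrative.
    L = len(t)
    time_term = (1 - 1.0 / L) * sum(t)
    resource_term = sum(sum(t[i] * x[i][j] for i in range(L)) / z[j]
                        for j in range(len(z)))
    return p * (time_term + resource_term) / d

# Four activities, one resource type, hypothetical data.
lam = network_complexity([2, 3, 4, 1], [[1], [2], [1], [1]], [4], 2, 8)
```

Because d carries the same time units as the numerator, the result is dimensionless, as the text requires.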

The terms in the expression for the complexity are explained as follows: The maximum number of immediate predecessors p is a multiplicative factor that increases the complexity and potential for bottlenecks in a project network. The (1 - 1/L) term is a fractional measure (between 0.0 and 1.0) that indicates the time intensity or work content of the project. As L increases, the quantity (1 - 1/L) increases, and a larger fraction of the total time requirement (sum of ti) is charged to the network complexity. Conversely, as L decreases, the network complexity decreases proportionately with the total time requirement. The sum of (ti xij) indicates the time-based consumption of a given resource type j relative to the maximum availability. The term is summed over all the different resource types. Having the PERT duration in the denominator helps to express the complexity as a dimensionless quantity by canceling out the time units in the numerator.

In addition, it gives the network complexity per unit of total project duration. For the comparative analysis presented in this paper, the network complexity ranges from 2.99 for project number 3 to 69.82 for project 13.

Other network complexity measures have been proposed in the scheduling literature (see [12] and [27]). There is always a debate as to whether or not the complexity of a project can be accurately quantified. There are several quantitative and qualitative factors with unknown interactions that are present in any project network. As a result, any measure of project complexity should be used as a relative measure of comparison rather than as an absolute indication of the difficulty involved in scheduling a given project.

Table II presents the efficiency ratios (defined previously in Section IV) for the 13 rules (M = 13) using the 30 projects (N = 30). At the bottom of the table, the sum of ratios over all the test problems is presented for each rule. The maximum possible value for the sum is 30.0, which implies a situation where a rule yields the observed minimums for all 30 test problems. The efficiency ratios are much more revealing in comparing the scheduling rules. The values easily identify which rules yielded the minimum durations and which rules came close to yielding the minimums. For example, in problems 1, 16, 18, and 21, all the rules performed equally well in producing the minimum project durations. It is noted that all four problems are small, with sizes ranging from 4 to 9 activities. In problem 10, which consists of 40 activities, only the MLCT rule yielded the minimum duration. The rule closest to yielding the minimum is GENRES-2 with a ratio of 0.998, while the rule farthest from the minimum is SPD with a ratio of 0.898. Ranking the rules on the basis of the sums of efficiency ratios yields the following order: CAF, ROT, MLCT, ACTIM, MLST, GENRES-2, TIMRES, MTS, HNIS, ACTRES, GENRES-1, SPD, LPD.

Davis [4] reported that the performance of a scheduling approach can sometimes deteriorate with an increase in project size. As a result, a further comparison of the rules in Table II was done on the basis of 8 large projects out of the 30 test problems. Table III presents the partial sums of rule


TABLE II
COMPARISON OF RULE EFFICIENCY RATIOS ρmn

[The individual ratio entries for the 30 test problems are not legible in the scan. The sums of the efficiency ratios over all 30 problems (maximum possible 30.0), by rule, are:]

ACTIM 28.895; ACTRES 28.051; TIMRES 28.505; GENRES-1 27.979; GENRES-2 28.752; ROT 29.05; MTS 28.465; MLST 28.895; MLCT 28.90; SPD 27.827; LPD 27.411; HNIS 28.303; CAF 29.477.

TABLE III
PARTIAL SUMS OF RULE EFFICIENCIES FOR EIGHT LARGE PROJECTS

Rank  Scheduling Rule  Partial Sum
1     CAF              7.845
2     ROT              7.841
3     MLCT             7.779
4     ACTIM            7.683
5     MLST             7.617
6     SPD              7.615
7     GENRES-2         7.478
8     MTS              7.430
9     TIMRES           7.365
10    HNIS             7.287
11    GENRES-1         7.216
12    ACTRES           7.146
13    LPD              6.958

efficiency based on the 8 projects along with the rankings for the rules. The large projects are test problems 10, 12, 13, 15, 17, 20, 25, and 30. The sizes of the projects are shown in Table I and range from 35 to 80 activities. It is interesting to note that the ranking based on only the large projects is not significantly different from the ranking based on all 30 projects. In fact, the four highest ranking rules in Table II maintain the same ranks in Table III. We are, thus, led to the conclusion that the rules' ranks are controlled more by the values for the large projects. This is logical since the scheduling rules usually will produce identical results for small projects.

Table IV presents the percentage deviations from observed minimum project duration for each project-rule combination

(see Section IV for the computational expression). The lowest possible deviation is 0.0 percent, which indicates a situation where the duration under consideration is equal to the observed minimum project duration. Very valuable information can be derived from the contents of Table IV. For example, in problem 3 (shown in Table V), the scheduling rules either did very well or very poorly. There are only two deviation levels, namely 0.0 and 26.92 percent. If a deviation of 26.92 percent is considered too high to be acceptable, then the rules associated with that level of deviation can be judged as being ineffective. Thus, an analyst can preset the acceptable level of deviation, say at 10 percent, and then evaluate scheduling rules on the basis of that control limit. A comparison of the scheduling rules on the basis of raw project duration PL instead of efficiency ratios yields different ranks for the rules under each test problem.

An aggregate analysis of the raw project durations over the 30 test problems yields the following ranking of the 13 scheduling rules: SPD, ROT, HNIS, CAF, ACTRES, MTS, LPD, GENRES-2, ACTIM, GENRES-1, MLST, MLCT. Table VI presents the resource-constrained PERT durations for the test problems under the different priority rules. It is seen in the table that in some cases (problems 1, 5, 16, 18, 21, 24, and 26), the resource-constrained PERT duration is insensitive to the type of scheduling rule used. This will usually be the case for small projects where the options of selectively scheduling activities are limited. In several cases, the same minimum project duration was produced by different priority rules. The aggregate measure used for the ranking presented above was obtained by summing the raw project durations in Table VI for each rule and dividing by 30, which is the number of test problems used. While this averaging approach may not yield an accurate measure of comparison, it

Page 6: Towards the standardization of performance measures for project scheduling heuristics

BADIRU: STANDARDIZATION OF PERFORMANCE MEASURES FOR SCHEDULING HEURISTICS


TABLE IV
DEVIATIONS FROM THE OBSERVED MINIMUM PROJECT DURATIONS (percent)

Rule Number (m):     1      2      3      4       5       6      7      8      9      10     11     12     13
Rule:              ACTIM ACTRES TIMRES GENRES1 GENRES2  ROT    MTS    MLST   MLCT   SPD    LPD    HNIS   CAF
Sum:              124.25 231.59 172.92 242.01  145.15  108.33 176.13 124.25 124.37 128.24 308.90 196.57  59.42
Deviation:          4.14   7.72   5.76   8.07    4.84    3.61   5.87   4.14   4.15   4.28  10.30   6.55   1.98
Rank:                3     10      7     11       6       2      8      3      4      5     12      9      1

TABLE V
EXAMPLE FOR TEST PROBLEM 3
(Notations are presented in preceding sections)

Activity    PERT Estimates     Immediate       Resource
Number      a     m     b      Predecessors    x_i1   x_i2
1           1     3     5      -               1      0
2           0.5   1     3      -               1      1
3           1     1     2      -               1      1
4           2     3     6      Activity 1      2      0
5           1     3     4      Activity 2      1      0
6           1.5   2     2      Activity 3      4      2

Initial Resource Availability: Z_1 = 5, Z_2 = 2

does give a quick view of the relative effectiveness of the scheduling rules.

VI. SOLUTION TIME ANALYSIS

Using solution time as a performance measure, a further comparison of the scheduling heuristics was conducted. Computer CPU time (VAX 11/780) was recorded for each scheduling rule under each test problem. The following procedure is adopted for the solution time analysis. Let $T_{mn}$ denote the CPU time for scheduling rule $m$ under test problem $n$. We define the sum of CPU times as

$$\phi_n = \sum_{m=1}^{M} T_{mn}, \qquad n = 1, 2, \cdots, N$$

where $m$, $n$, $M$, and $N$ are as previously defined. Then

$$P_n = \frac{\phi_n}{M}$$

denotes the average time for solving problem $n$, where $M$ is the number of rules considered. The normalized solution time for rule $m$ under problem $n$ can then be denoted as

$$\alpha_{mn} = \frac{T_{mn}}{P_n}.$$

Each heuristic $m$ is ranked on the basis of the sum of its normalized solution times over all test problems. That is,

$$\delta_m = \sum_{n=1}^{N} \alpha_{mn}, \qquad m = 1, 2, \cdots, M.$$
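The solution-time normalization of Section VI can be sketched as follows; the CPU-time matrix T here is hypothetical, not the VAX 11/780 measurements from the study:

```python
def rank_by_normalized_time(T):
    """T[m][n] = CPU time of rule m on problem n.
    Computes phi_n = sum_m T_mn, P_n = phi_n / M,
    alpha_mn = T_mn / P_n, delta_m = sum_n alpha_mn,
    and ranks the rules by delta_m, smallest first."""
    M, N = len(T), len(T[0])
    P = [sum(T[m][n] for m in range(M)) / M for n in range(N)]
    delta = [sum(T[m][n] / P[n] for n in range(N)) for m in range(M)]
    return sorted(range(M), key=lambda m: delta[m]), delta

# Two rules over three problems; rule 0 is consistently twice as fast,
# so it ranks first regardless of problem-to-problem scale differences.
T = [[1.0, 2.0, 4.0],
     [2.0, 4.0, 8.0]]
order, delta = rank_by_normalized_time(T)
print(order)  # prints [0, 1]
```

Dividing by the per-problem average P_n is what keeps a single long-running test problem from dominating the aggregate, which is the point of the normalization.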


TABLE VI
COMPARISON OF RESOURCE-CONSTRAINED PERT DURATIONS FOR DIFFERENT PRIORITY RULES

[30-by-13 table of resource-constrained PERT durations: one row per test problem (1-30), one column per priority rule (ACTIM, ACTRES, TIMRES, GENRES1, GENRES2, ROT, MTS, MLST, MLCT, SPD, LPD, HNIS, CAF).]

The above performance measure yields the following ranking: ACTRES, ROT, HNIS, MTS, MLCT, LPD, MLST, SPD, ACTIM, TIMRES, GENRES1, GENRES2, CAF. It is obvious that the solution time of each scheduling rule depends on its computational complexity. The computations for ACTRES, ROT, and HNIS, for example, do not require a prior analysis of the PERT network. Since CAF considers several factors in the activity sequencing process, its computations are more complex than those of many other scheduling heuristics. The current proliferation of high-speed desktop computers and of efficient programming languages and practices should help alleviate the problem of excessive solution times for project network analysis. Moreover, the capability to network with large, powerful computers should make CPU considerations less of a critical issue.

VII. EXAMPLE FOR A SELECTED TEST PROBLEM

An analysis of one of the small test problems is presented in this section. The data for the problem are presented in Table V. Using the formulation for network complexity presented earlier, we obtain

$$p = 1, \qquad n = 6, \qquad d = 6.33$$

$$\sum_{i=1}^{n} t_i = 13.5, \qquad \sum_{i=1}^{n} t_i x_{i1} = 22.5, \qquad \sum_{i=1}^{n} t_i x_{i2} = 6.3.$$

Thus, X = 2.99 (refer to Table I, problem 3). Computations for the activity sequencing criterion for each of the six activities are carried out as discussed previously in Section III. For example, using the SPD rule, the activities are sequenced as Activity 3 (SPD = 1.2), Activity 2 (SPD = 1.3), Activity 6 (SPD = 1.9), Activity 5 (SPD = 2.8), Activity 1 (SPD = 3.0), and Activity 4 (SPD = 3.3). Using the HNIS rule, the activities are arranged as Activity 6 (HNIS = 1), Activity 5 (HNIS = 1), Activity 4 (HNIS = 1), Activity 3 (HNIS = 0), Activity 2 (HNIS = 0), and Activity 1 (HNIS = 0). Ties are broken arbitrarily. After arranging the activities in the order in which they would be considered for resource allocation, PERT schedules were generated and the project lengths were recorded. The comparisons for the various scheduling heuristics are as shown for problem 3 in Tables II, III, and IV.
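The sums used in the complexity computation can be reproduced with a short script. The activity data below follow the reconstruction of Table V (treat the resource coefficients as assumptions), and t_i = (a + 4m + b)/6 is the standard PERT expected activity time; the first sum reproduces the 13.5 above exactly, and the other two land within rounding of the reported 22.5 and 6.3:

```python
# (a, m, b, x_i1, x_i2) for the six activities of test problem 3,
# as reconstructed from Table V.
activities = [
    (1.0, 3, 5, 1, 0),
    (0.5, 1, 3, 1, 1),
    (1.0, 1, 2, 1, 1),
    (2.0, 3, 6, 2, 0),
    (1.0, 3, 4, 1, 0),
    (1.5, 2, 2, 4, 2),
]

# PERT expected duration t_i = (a + 4m + b) / 6 for each activity.
t = [(a + 4 * m + b) / 6.0 for a, m, b, _, _ in activities]
sum_t = sum(t)
sum_tx1 = sum(ti * x1 for ti, (_, _, _, x1, _) in zip(t, activities))
sum_tx2 = sum(ti * x2 for ti, (_, _, _, _, x2) in zip(t, activities))
print(round(sum_t, 2), round(sum_tx1, 2), round(sum_tx2, 2))
```

The small differences in the last two sums (22.58 versus 22.5, 6.25 versus 6.3) are consistent with the one-decimal rounding used in the text.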

VIII. SUMMARY AND CONCLUSIONS

The need for the standardization of performance measures for comparing scheduling heuristics has been addressed in this paper. Three quantitative measures related to the project duration are proposed: 1) the Rule Efficiency Ratio, which is the ratio of the minimum project duration observed during the comparative analysis to the project duration obtained from the rule under consideration; 2) the Sum of Rule Efficiency Ratios, which is the sum (partial or full) of the ratios computed for each test problem under each scheduling heuristic; and 3) the Percentage Deviation from Minimum, which measures the deviation of the project duration produced by a given heuristic from the overall minimum observed. An illustration of the procedure for using the proposed standards for comparative analysis is presented.
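A minimal sketch of the first two proposed measures, assuming each test problem is summarized as a mapping from rule name to project duration (the rule names and durations below are illustrative, not the paper's test data):

```python
def rule_efficiency_ratio(durations, rule):
    """Measure 1: ratio of the observed minimum project duration to the
    duration obtained from the rule under consideration (1.0 is best)."""
    return min(durations.values()) / durations[rule]

def sum_of_efficiency_ratios(problems, rule):
    """Measure 2: sum of a rule's efficiency ratios over the test
    problems (a partial sum over a subset is also admissible)."""
    return sum(rule_efficiency_ratio(p, rule) for p in problems)

# Hypothetical durations for two small test problems.
problems = [{"RULE_A": 10.0, "RULE_B": 12.0},
            {"RULE_A": 5.0, "RULE_B": 5.0}]
print(sum_of_efficiency_ratios(problems, "RULE_A"))  # prints 2.0
```

A rule that attains the observed minimum on every test problem scores a ratio of 1.0 each time, so its sum equals the number of problems considered; lower sums indicate weaker aggregate performance.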

The work presented is a first step towards the standardization of performance measures for project scheduling heuristics. The 30 test problems used can form a component of a standard "clearing house" of test problems for evaluating scheduling heuristics in resource-constrained PERT networks. For such standardization, it is recommended that large and small projects be categorized into separate test groups. This recommendation is based on the experimental results, which indicate that heuristic performance levels are controlled more by the large projects when a mixture of large and small test projects is used. A practical extension of the work in this paper may be the development of a knowledge-based system that would incorporate the standard test problems and performance measures. A project manager would then only need to input his or her project scenario and let the system perform the analysis necessary to recommend a heuristic for scheduling the project.

For present practical applications, a project analyst may use the evaluation measures presented to study his or her project structure and then select the most suitable scheduling heuristic for implementation on the actual project. The performance measures presented yield reasonably accurate measurements of the relative performances of scheduling heuristics for any given problem. Small problems can be analyzed by hand with some computational effort; large projects will invariably require computer analysis. Custom modifications and extensions can be incorporated into the proposed procedures to satisfy specific applications and interests. In conclusion, the approach presented in this paper can significantly enhance the planning function in a project management environment. Prior analysis and selection of the most effective scheduling heuristic for a given project can help minimize the schedule changes and delays often encountered in impromptu scheduling practices. With further study and refinement, the procedures presented may be standardized for general use in project scheduling functions.

REFERENCES

[1] A. B. Badiru, "A project scheduling heuristic under stochastic time and resource constraints (STARC)," doctoral dissertation, Univ. Central Florida, Orlando, FL, 1984.
[2] D. F. Cooper, "Heuristics for scheduling resource-constrained projects: An experimental investigation," Manag. Sci., vol. 22, pp. 1186-1194, July 1976.
[3] E. W. Davis and J. H. Patterson, "A comparison of heuristic and optimum solutions in resource-constrained project scheduling," Manag. Sci., vol. 21, pp. 944-955, Apr. 1975.
[4] E. W. Davis, "Project scheduling under resource constraints: Historical review and categorization of procedures," AIIE Trans., vol. 5, pp. 297-317, Aug. 1971.
[5] E. W. Davis, "Project network summary measures constrained-resource scheduling," AIIE Trans., vol. 7, pp. 132-142, June 1975.
[6] S. E. Elmaghraby, Activity Networks: Project Planning and Control by Network Models. New York: Wiley, 1977.
[7] E. A. Elsayed, "Algorithms for project scheduling with resource constraints," Int. J. Prod. Res., vol. 19, 1982.
[8] L. G. Fendley, "Toward the development of a complete multi-project scheduling system," J. Ind. Eng., vol. 19, pp. 505-515, Oct. 1968.
[9] C. A. Holloway, R. T. Nelson, and V. Suraphongschai, "Comparison of a multi-pass heuristic decomposition procedure with other resource-constrained project scheduling procedures," Manag. Sci., vol. 25, pp. 862-872, Sept. 1979.
[10] I. Kurtulus and E. W. Davis, "Multi-project scheduling: Categorization of heuristic rules performance," Manag. Sci., vol. 28, pp. 161-172, Feb. 1982.
[11] J. J. Moder and C. R. Phillips, Project Management with CPM and PERT, 2nd ed. New York: Van Nostrand, 1970.
[12] J. H. Patterson and G. W. Roth, "Scheduling a project under multiple resource constraints: A zero-one programming approach," AIIE Trans., vol. 8, pp. 449-455, Dec. 1976.
[13] D. T. Phillips and G. L. Hogg, "Stochastic network analysis with resource constraints, cost parameters and queueing capabilities using GERTS methodologies," Comput. Ind. Eng., vol. 1, pp. 13-25, 1976.
[14] A. B. Pritsker, L. J. Walters, and P. M. Wolfe, "Multi-project scheduling with limited resources: A zero-one programming approach," Manag. Sci., vol. 16, Sept. 1969.
[15] R. Slowinski, "Two approaches to problems of resource allocation among project activities: A comparative study," J. Oper. Res. Soc., vol. 31, pp. 711-723, Aug. 1980.
[16] G. E. Whitehouse, "A comparison of computer search heuristics to analyze activity networks with limited resources," Project Manag. Quart., vol. 14, pp. 35-39, June 1983.
[17] G. E. Whitehouse and J. R. Brown, "GENRES: An extension of Brooks' algorithm for project scheduling with resource constraints," Comput. Ind. Eng., Dec. 1979.
[18] J. D. Wiest, "Some properties and schedules for large projects with limited resources," Oper. Res., vol. 12, pp. 395-418, May-June 1964.
[19] D. D. Bedworth and J. E. Bailey, Integrated Production Control Systems: Management, Analysis, Design. New York: Wiley, 1982.
[20] M. L. Fisher, "Worst-case analysis of heuristic algorithms," Manag. Sci., vol. 26, pp. 1-17, Jan. 1980.
[21] M. H. Rothkopf, "Scheduling with random service times," Manag. Sci., vol. 12, pp. 707-713, May 1966.
[22] K. R. MacCrimmon and C. A. Ryavec, "An analytical study of the PERT assumptions," Oper. Res., vol. 12, pp. 16-37, Jan.-Feb. 1964.
[23] W. A. Donaldson, "The estimation of the mean and variance of a 'PERT' activity time," Oper. Res., vol. 13, pp. 382-385, 1965.
[24] T. L. Healy, "Activity subdivision and PERT probability statements," Oper. Res., vol. 9, pp. 341-348, May-June 1961.
[25] R. A. Russell, "A comparison of heuristics for scheduling projects with cash flows and resource restrictions," Manag. Sci., vol. 32, pp. 1291-1300, Oct. 1986.
[26] J. H. Patterson, "Project scheduling: The effects of problem structure on heuristic performance," Naval Res. Logistics Quart., vol. 23, no. 1, pp. 95-123, 1976.
[27] S. E. Elmaghraby and W. S. Herroelen, "On the measurement of complexity in activity networks," Eur. J. Oper. Res., vol. 5, pp. 223-234, 1980.