
ORIGINAL ARTICLE

The two-stage assembly flowshop scheduling problem with bicriteria of makespan and mean completion time

Ali Allahverdi & Fawaz S. Al-Anzi

Received: 17 September 2006 / Accepted: 17 January 2007 / Published online: 22 February 2007
© Springer-Verlag London Limited 2007

Abstract In this paper, we address the two-stage assembly flowshop scheduling problem with a weighted sum of makespan and mean completion time criteria, known as bicriteria. Since the problem is NP-hard, we propose heuristics to solve the problem. Specifically, we propose three heuristics: simulated annealing (SA), ant colony optimization (ACO), and self-adaptive differential evolution (SDE). We have conducted computational experiments to compare the performance of the proposed heuristics. It is statistically shown that both SA and SDE perform better than ACO. Moreover, the experiments reveal that SA, in general, performs better than SDE, while SA consumes less CPU time than both SDE and ACO. Therefore, SA is shown to be the best heuristic for the problem.

Keywords Assembly flowshop . Bicriteria . Makespan . Mean completion time . Heuristic

1 Introduction

Consider the situation where there are n jobs such that each job has more than two operations. The first m operations of a job are performed at the first stage in parallel and the final operation is conducted at the second stage. Each of the m operations of a job at the first stage is performed by a different machine, and the last operation, on the machine at the second stage, may start only after all m operations at the first stage are completed. Each machine can process only one job at a time. The described problem is known as a two-stage assembly flowshop scheduling problem with m operations at the first stage and one operation at the second stage. It should be noted that the problem reduces to the two-machine flowshop scheduling problem when there is only one machine at the first stage, i.e., m = 1.

Makespan and mean completion time are two commonly used performance measures in the scheduling literature. Minimizing makespan is important in situations where a simultaneously received batch of jobs is required to be completed as soon as possible. For example, a multi-item order submitted by a single customer needs to be delivered in the minimal possible time. The makespan criterion also increases the utilization of resources. There are other real-life situations in which each completed job is needed as soon as it is processed. In such situations, one is interested in minimizing the mean completion time of all jobs, rather than minimizing makespan. This objective is particularly important in real-life situations where reducing inventory or holding cost is of primary concern.

The assembly flowshop scheduling problem was introduced independently by Lee et al. [22] and Potts et al. [31]. The two-stage assembly scheduling problem has many applications in industry. Potts et al. [31] described an application in personal computer manufacturing where central processing units, hard disks, monitors, keyboards, etc., are manufactured at the first stage, and all of the required components are assembled to customer specification at a packaging station (the second stage). Lee et al. [22]

Int J Adv Manuf Technol (2008) 37:166–177
DOI 10.1007/s00170-007-0950-y

A. Allahverdi (*)
Department of Industrial and Management Systems Engineering, Kuwait University, P.O. Box 5969, Safat, Kuwait
e-mail: [email protected]

F. S. Al-Anzi
Department of Computer Engineering, Kuwait University, P.O. Box 5969, Safat, Kuwait
e-mail: [email protected]

described another application in a fire engine assembly plant. The body and chassis of fire engines are produced in parallel in two different departments. When the body and chassis are completed and the engine has been delivered (purchased from outside), they are fed to an assembly line where the fire engine is assembled. Another application is in the area of query scheduling on distributed database systems, as discussed by Allahverdi and Al-Anzi [6].

Lee et al. [22] considered the problem with m = 2, while Potts et al. [31] considered the problem with an arbitrary m. Both studies addressed the problem with respect to makespan minimization, and both proved that the problem with this objective function is NP-hard in the strong sense for m = 2. Lee et al. [22] discussed a few polynomially solvable cases and presented a branch and bound algorithm. Moreover, they proposed three heuristics and analyzed their error bounds. Potts et al. [31] showed that the search for an optimal solution may be restricted to permutation schedules. They also showed that any arbitrary permutation schedule has a worst-case ratio bound of two, and they presented a heuristic with a worst-case ratio bound of 2 − 1/m. Hariri and Potts [19] also addressed the same problem, developed a lower bound, and established several dominance relations. They also presented a branch and bound algorithm incorporating the lower bound and dominance relations. Another branch and bound algorithm was proposed by Haouari and Daouas [17]. Sun et al. [37] also considered the same problem with the same makespan objective function and proposed heuristics to solve the problem. Allahverdi and Al-Anzi [6] obtained a dominance relation for the same problem when setup times are considered as separate from processing times. They also proposed two evolutionary heuristics (a particle swarm optimization and a tabu search) and proposed a simple and yet efficient algorithm with negligible computational time.

Tozkapan et al. [38] considered the two-stage assembly scheduling problem with the total weighted flowtime performance measure. They showed that permutation schedules are dominant for the problem with this performance measure. They developed a lower bound and a dominance relation, and utilized the bound and dominance relation in a branch and bound algorithm. It should be noted that the performance measures of flowtime and completion time are equivalent when jobs are ready at time zero. It should also be noted that the total and mean completion times are equivalent performance measures. Al-Anzi and Allahverdi [3] considered the same problem with the total completion time criterion. They obtained optimal solutions for two special cases and proposed a simulated annealing (SA) heuristic, a tabu search heuristic, and a hybrid tabu search heuristic. They compared their heuristics with existing ones and showed that their hybrid tabu search heuristic is the best.

The research mentioned so far addressed only the single criterion of either makespan or mean completion time, while the majority of real-life problems requires the decision maker to consider both criteria before arriving at a decision. The problem with both makespan and mean completion time has not been addressed for the considered two-stage assembly scheduling problem and is the topic of the current paper.

As explained earlier, the problem reduces to the two-machine flowshop scheduling problem when m = 1. There has been some effort to address the flowshop problem with multiple criteria. Two different approaches might be distinguished for multiple-criteria problems: those where all efficient solutions are generated and then trade-offs are made between the solutions, and those where a single objective function is constructed by the integration of all of the relevant criteria, usually by forming a weighted linear combination of them. The former approach is used by many researchers, including Sayin and Karabati [34], Rajendran [32], and Ho and Chang [18], while the latter approach is utilized by some other researchers, including Nagar et al. [27], Sivrikaya-Serifoglu and Ulusoy [35], Yeh [40, 41], Lee and Chou [21], Allahverdi [5], Allahverdi and Aldowaisan [8], and Yeh and Allahverdi [42]. The latter approach is taken in this paper.

In this paper, we address the two-stage assembly flowshop scheduling problem with a weighted sum of makespan and mean completion time. The problem is described in the next section, and the three proposed heuristics are explained in Sect. 3. A comparison of the heuristics on randomly generated problems is performed in Sect. 4, while possible future research directions are provided in Sect. 5.

2 Problem definition

We assume that n jobs are simultaneously available at time zero and that preemption is not allowed, i.e., any started operation has to be completed without interruptions. Each job consists of a set of m + 1 operations. The first m operations are performed at stage one on m parallel machines, while the last operation is performed at stage two on the assembly machine. We use the following notation:

t[i, j]  Operation time of the job in position i on machine j, i = 1,..., n, j = 1,..., m

p[i]  Operation time of the job in position i on the assembly machine

C[i]  Completion time of the job in position i

Note that job k is complete once all of its operations t[k, j] (j = 1,..., m) and p[k] are completed, where the operation p[k] may start only after all operations t[k, j] (j = 1,..., m) have been completed. Potts et al. [31] and Tozkapan et al. [38] showed that permutation schedules are dominant with respect to the makespan and total flowtime (completion time) criteria, respectively. Therefore, permutation schedules are also dominant for the problem addressed in this paper. Thus, we restrict our search for the optimal solution to permutation schedules. In other words, the sequence of jobs on all of the machines, including the assembly machine, is the same.

It can be shown that the completion time of the job in position j is as follows [3]:

$$C_{[j]} = \max\left\{ \max_{k=1,\ldots,m}\left\{ \sum_{i=1}^{j} t[i,k] \right\},\; C_{[j-1]} \right\} + p_{[j]}, \qquad \text{where } C_{[0]} = 0.$$

The mean completion time is $MCT = \frac{1}{n}\sum_{i=1}^{n} C_{[i]}$ and the makespan is $C_{\max} = C_{[n]}$.

If the weight given to the makespan is denoted by α, then the objective function (OF) is defined by the following equation:

$$OF = \alpha\, C_{\max} + (1 - \alpha)\, MCT$$

where 0 ≤ α ≤ 1. Notice that, when α = 1, the problem reduces to the single criterion of Cmax, while it reduces to the single criterion of mean completion time when α = 0. The objective is to find a schedule which yields a minimum value of OF.

The described problem admits no known polynomial-time algorithm, since the problem for α = 0 (i.e., when the objective is to minimize the mean completion time) is NP-hard, even for the two-machine flowshop scheduling problem, i.e., when m = 1 (see Gonzalez and Sahni [15]). Therefore, in the next section, we present three heuristics for solving the problem.
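As a concrete illustration, the recurrence for C[j] and the bicriteria objective can be sketched in Python; the function and argument names below are ours, not the paper's:

```python
def objective(seq, t, p, alpha):
    """Weighted bicriteria objective OF = alpha*Cmax + (1-alpha)*MCT
    for a permutation schedule of the two-stage assembly flowshop.

    seq     : job permutation (0-based job indices)
    t[j][k] : stage-1 operation time of job j on machine k
    p[j]    : stage-2 (assembly) operation time of job j
    alpha   : weight given to the makespan, 0 <= alpha <= 1
    """
    m = len(t[seq[0]])
    stage1 = [0.0] * m   # cumulative load of each stage-1 machine
    c_prev = 0.0         # completion time of the previous job on the assembly machine
    total = 0.0
    for j in seq:
        for k in range(m):
            stage1[k] += t[j][k]
        # assembly may start only after all m stage-1 operations are done
        # and after the assembly machine has finished the previous job
        c_prev = max(max(stage1), c_prev) + p[j]
        total += c_prev
    cmax = c_prev
    mct = total / len(seq)
    return alpha * cmax + (1 - alpha) * mct
```

With α = 1 the value is the makespan alone; with α = 0 it is the mean completion time alone.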

3 Heuristics

The three proposed heuristics are described in this section. One heuristic is ant colony optimization (ACO), which is briefly described in Sect. 3.1. Another heuristic is simulated annealing (SA), which is explained in Sect. 3.2, and the last heuristic is self-adaptive differential evolution (SDE), which is presented in Sect. 3.3. A comparison of these heuristics is performed in a later section.

3.1 Ant colony optimization (ACO)

The first ACO heuristic, called the ant system [13], was inspired by studies of the behavior of ants. Ants communicate among themselves through pheromone, a substance they deposit on the ground in variable amounts as they move about. It has been observed that the more ants use a particular path, the more pheromone is deposited on that path and the more attractive it becomes to other ants seeking food. If an obstacle is suddenly placed on an established path leading to a food source, the ants will initially go right or left in a seemingly random manner, but those choosing the side that is, in fact, shorter will reach the food more quickly and will make the return journey more often. The pheromone on the shorter path will, therefore, be more strongly reinforced and will eventually become the preferred route for the stream of ants.

ACO has been successfully used to solve many complex problems, including scheduling problems, e.g., Blum [12], Liao and Juan [23], and Gutjahr and Rauner [16]. For our scheduling problem, we start to solve the two-stage assembly flowshop scheduling problem using ACO by constructing a complete weighted directed graph with n nodes, where each node represents one of the jobs to be scheduled. Every node (job) is directly connected to all of the other nodes (jobs) by n − 1 outgoing edges. We can picture that the ACO simulates the behavior of real ants moving on this weighted directed complete graph and, hence, is able to solve this combinatorial optimization problem by finding the best tour over all nodes (jobs). The tour can start at any node and follows the proper directed edges. An ant should visit each node only once in its tour. Obviously, the tour is complete when it has n nodes and n − 1 directed edges. The basic algorithm of the ACO is outlined as follows:

Begin Heuristic
    Represent the underlying problem by a weighted directed connected graph
    Set initial pheromone for every edge
    Repeat until the stopping criterion is satisfied
        For each ant do
            Randomly select a starting node
            Repeat until a complete tour is fulfilled
                Move to the next node according to a node transition rule
            End Repeat
        End For
        For each edge do
            Update the pheromone intensity using a pheromone updating rule
        End For
    End Repeat
    Output the best global tour
End Heuristic


The initialization step of the ACO includes two parts: the problem's graph representation and the initial pheromone setting. First, the underlying problem is represented by a complete weighted directed graph G(V, E); each directed edge (i, j) has an associated weight w(i, j) using an adjacency matrix representation. Second, similar to the strategy used by Dorigo and Gambardella [14], where every edge is given a constant quantity of initial pheromone, our proposed method initializes the pheromone on all edges to zero. The heuristic positions one ant on an arbitrary first node, and then the ant moves to the next unvisited node with some probability that is proportional to the weight of its connecting edge. The process is repeated until every node is visited. In the world of real ants, shorter paths will retain more pheromone; analogously, in the ACO, the paths corresponding to better solutions should receive more pheromone and become more attractive. The knowledge of the better solution becomes more mature as the ACO algorithm ages. This should be reflected in giving more weight towards later iterations in the ACO's life.

3.1.1 Transition rule

During the running session, the ants moving on the graph travel from node to node following the directed edges to complete the tour of all nodes. Because no node can be visited twice, we put the nodes that have already been visited in a list and mark them as inaccessible to prohibit the ants from visiting any node more than once. For the kth ant on node i, the selection of the next node j to follow is determined according to a probability (by a random process) that is proportional to the weight wij, where wij is the current pheromone intensity on edge (i, j). The weight wij is updated according to the pheromone updating rule. The probability of the kth ant following an edge linking to an unvisited node j from node i is computed according to the following:

$$p^{k}_{ij} = e^{(w_{ij} - w_{\max})}$$

where wmax is the maximum weight of all of the wij's over all of the candidate unvisited nodes. The candidate nodes to be visited (unvisited nodes) are circularly scanned from the smallest job number to the largest and back to the first, iteratively. For each node j in the candidate list, we randomly generate a number x between 0 and 1. If the number x generated is smaller than or equal to this probability, i.e., x ≤ p^k_ij, then node j is chosen as the one to follow next in the tour.
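This transition rule can be sketched in Python as follows; the function name is ours, and the pheromone matrix w is assumed to be indexed by job number:

```python
import math
import random

def select_next(i, unvisited, w):
    """Transition rule sketch: scan the unvisited jobs circularly from the
    smallest number to the largest, accepting node j with probability
    p = exp(w[i][j] - wmax), where wmax is the largest candidate weight."""
    cand = sorted(unvisited)
    wmax = max(w[i][j] for j in cand)
    while True:  # circular scan; the wmax node is accepted with probability 1
        for j in cand:
            if random.random() <= math.exp(w[i][j] - wmax):
                return j
```

Since the node attaining wmax has acceptance probability exp(0) = 1, the circular scan always terminates.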

3.1.2 Updating pheromone

Once all of the ants have completed their tours, the intensity of pheromone on each edge is updated by the pheromone updating rule:

$$w_{ij} = w_{ij} + \frac{current\_iteration}{max\_iterations} \cdot e^{(best\_objective\_function - objective\_function)}$$

where the ratio current_iteration/max_iterations represents the maturity of the aging of the ACO algorithm, such that max_iterations is decided by the stopping criteria. The term e^(best_objective_function − objective_function) represents the quality of the solution compared to the best solution achieved so far. Also, an evaporation rate is reflected on all edges, including edges that did not participate in the current solution. This is done by applying the following updating rule after all ants have completed their tours in a cycle of the algorithm:

$$w_{ij} = (1 - r) \cdot w_{ij}$$

where r is the evaporation rate of pheromone intensity and is chosen between 0 and 1. The r value has been tested for the range from 0.9 to 0.99 with an increment of 0.01. The best r value was found to be 0.98.
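The two updating rules can be sketched together as follows (a minimal Python sketch; the function and argument names are ours, and of and best_of denote the current and best-so-far objective values of a minimization problem, so the exponent is at most zero):

```python
import math

def update_pheromone(w, tour, of, best_of, iteration, max_iterations, r=0.98):
    """Reinforce the edges used in `tour` in proportion to the algorithm's
    maturity (iteration/max_iterations) and the solution quality
    exp(best_of - of), then evaporate every edge at rate r."""
    for i, j in zip(tour, tour[1:]):
        w[i][j] += (iteration / max_iterations) * math.exp(best_of - of)
    # evaporation applies to all edges, including unused ones
    for row in w:
        for j in range(len(row)):
            row[j] *= (1 - r)
```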

3.1.3 Number of ants and stopping criteria

We tested ant populations of size n, 2n, and 3n. The number of ants selected for our problem is the same as the number of jobs n to be scheduled, since it achieved an acceptable performance within a reasonable CPU time. The stopping criterion of the ACO could be a maximum number of iterative cycles, a specified CPU time limit, or a maximum number of cycles between two improvements of the global best solution. In this paper, we use a given number of iterative cycles as the stopping criterion. This number is set to 1,000n after testing a range of stopping criteria: 100n, 500n, 1,000n, 1,500n, and 2,000n.

3.2 Simulated annealing heuristic (SA)

SA has been used to solve scheduling problems, e.g., Sadegheih [33], Low [25], and Mika et al. [26]. The main idea behind the proposed SA heuristic is to have a number of iterations where, in every iteration of the heuristic, there is a single random pair exchange in the sequence (other neighborhoods are used in different versions of SA). If the exchange improves the objective function, then it accepts the exchange and the new sequence is preserved. If the objective function does not improve, then it is only allowed to accept the exchange with some small probability p. As the number of iterations increases, the probability p with which the heuristic is allowed to accept an exchange that does not improve the objective function is reduced exponentially. This reduction in the probability is usually expressed as a function of a start temperature (T1) that is reduced by a cooling factor to reach a final (freezing) temperature. Notice that the temperature cooling factor used in the proposed heuristic is exponential. However, a general SA heuristic need not have an exponential cooling factor. This technique of reducing the probability of accepting non-improving exchanges has proven to be very useful in escaping local optima during the course of the search for the global optimum.

First, the initial sequences that are used in SA are explained. One initial sequence is obtained by ordering all of the jobs in increasing order of p[i]. This initial sequence is called S1. It is expected that, when the assembly machine dominates the first-stage machines (i.e., when processing times on the assembly machine are larger than those of the first-stage machines), ordering the jobs based on shortest processing time (SPT) on the assembly machine will yield a good solution. The SPT rule is known to perform well in general for the mean completion time criterion. The second initial sequence is obtained by considering the case where the first-stage machines dominate the assembly machine. In this case, the sequence is obtained by ordering the jobs in increasing order of $\max_{k=1,\ldots,m}\{t[i,k]\}$, which is called S2. A third sequence is obtained by ordering the jobs in increasing order of $\max_{k=1,\ldots,m}\{t[i,k]\} + p[i]$, where both stages are taken into account. This sequence is called S3.

The following is an algorithmic description of the heuristic:

Begin Heuristic
    Let T1 = 0.1
    Select the best sequence among S1, S2, and S3 as the current sequence
    While T1 ≥ 0.0001
    Begin
        Repeat 50 times
        Begin
            Let L1 = value of the objective function with the current sequence
            Pick two random positions j and k
            Let L2 = value of the objective function after the swap
            If L2 < L1 Then accept the swap
            If L2 > L1 Then accept the move with probability f, where

            $$d = \left| \frac{L2 - L1}{L1} \right|, \qquad f = e^{-d/T1}$$

        End Repeat
        Let T1 = T1 * 0.98
    End While
End Heuristic

Setting the parameters for the proposed SA heuristic is essential in achieving a good performance. An initial estimate for the best value of a given parameter is obtained by changing the values of that parameter while keeping all other parameters constant. We used the following values as initial estimates of the parameters: T1 = 0.5, 0.1, and 0.01; cooling factor = 0.99, 0.98, 0.97, 0.96, and 0.95; and final temperature = 0.005, 0.001, 0.0005, 0.0001, 0.00005, and 0.00001. Once these initial values are determined, the method of factorial experimental design (three values for each parameter: the initial best value of that parameter, one value above, and one value below that value) is used to fine-tune the values of the parameters. After these experiments, the parameters for the SA heuristic are set as follows: the initial temperature T1 is set to 0.1, the cooling factor is set to 0.98, the final temperature is set to 0.0001, and the number of iterations per fixed temperature is set to 50, since no significant improvement has been observed beyond this value.
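The SA loop with the reported parameter values can be sketched in Python as follows; the cost function is supplied by the caller, the names are ours, and the acceptance probability exp(−d/T1) for a worsening swap is an assumption in line with standard SA:

```python
import math
import random

def simulated_annealing(seq, cost, t1=0.1, cool=0.98, t_final=0.0001, inner=50):
    """SA sketch: single random pair exchange as the neighborhood move;
    a worsening swap of relative size d = |L2 - L1| / L1 is kept with
    probability exp(-d / T1); T1 cools geometrically by `cool`."""
    cur = list(seq)
    best, best_val = list(cur), cost(cur)
    t = t1
    while t >= t_final:
        for _ in range(inner):
            l1 = cost(cur)
            j, k = random.sample(range(len(cur)), 2)
            cur[j], cur[k] = cur[k], cur[j]
            l2 = cost(cur)
            if l2 > l1:
                d = abs(l2 - l1) / max(l1, 1e-12)  # guard against l1 == 0
                if random.random() >= math.exp(-d / t):
                    cur[j], cur[k] = cur[k], cur[j]  # reject the worsening swap
            if cost(cur) < best_val:
                best, best_val = list(cur), cost(cur)
        t *= cool
    return best, best_val
```

For the paper's problem, cost would be the bicriteria objective OF evaluated on the permutation.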

3.3 Self-adaptive differential evolution (SDE) heuristic

Evolutionary algorithms (EAs) are general-purpose stochastic search methods simulating natural selection and evolution in the biological world. EAs differ from other optimization methods, such as hill-climbing and SA, in the fact that EAs maintain a population of potential (or candidate) solutions rather than a single solution to a problem.

In general, all EAs work as follows: a population of individuals is randomly initialized, where each individual represents a potential solution to the problem. The quality of each solution is evaluated using a fitness function. A selection process is applied during each iteration of an EA in order to form a new population. The selection process is biased towards the fitter individuals in order to increase their chances of being included in the new population. Individuals are altered using unary transformation (mutation) and higher-order transformation (crossover). This procedure is repeated until convergence is reached. The best solution found is expected to be a near-optimum solution.

The unary and higher-order transformations are referred to as evolutionary operators. The two most frequently used evolutionary operators are:

– Crossover (or recombination), where parts from two (or more) individuals are combined to generate new individuals. The main objective of crossover is to explore new areas of the search space.

– Mutation, which modifies an individual by a small random change to generate a new individual. The main objective of mutation is to add some diversity by introducing more genetic material into the population in order to avoid being trapped in a local optimum. Generally, mutation is applied with a low probability. However, some problems require using mutation with a higher probability. A preferred strategy is to start with a high probability of mutation and decrease it over time, which is initially biased towards more exploration of the search space and then focuses on exploitation in later generations.

Due to their population-based nature, EAs can avoid being trapped in a local optimum and, consequently, have the ability to find globally optimal solutions. Thus, EAs can be viewed as global optimization algorithms.

Storn and Price [36] proposed a new EA called differential evolution (DE). DE is similar to genetic algorithms (GAs) in that a population of individuals is used to search for an optimal solution. The main difference between GAs and DE is that, in GAs, mutation is the result of small perturbations to the genes of an individual, while in DE, mutation is the result of arithmetic combinations of individuals. At the beginning of the evolution process, the mutation operator of DE favors exploration. As evolution progresses, the mutation operator favors exploitation. Hence, DE automatically adapts the mutation increments (i.e., the search step) to the best value based on the stage of the evolutionary process.

DE has been successfully applied to solve a wide range of optimization problems, including scheduling [29]. In short, DE is now generally considered a reliable, accurate, robust, and fast optimization technique. However, the user has to find the best values for the problem-dependent control parameters used in DE. Finding the best values for the control parameters is a time-consuming task.

A new version of DE has been proposed by Omran et al. [28], where the control parameters are self-adaptive. This new version is called self-adaptive differential evolution (SDE). To the best of our knowledge, SDE has been used for scheduling problems only by Al-Anzi and Allahverdi [2].

DE, in general, does not make use of a mutation operator that depends on some probability distribution function; instead, it introduces a new arithmetic operator which adapts individuals on the basis of differences between randomly selected pairs of individuals in a neighborhood.

For a parent i of generation t, an offspring i′ is created in the following way. Randomly select three individuals from the current population, namely, i1, i2, and i3, with i1 ≠ i2 ≠ i3 ≠ i and i1, i2, i3 ∈ U(1,..., s), where s is the population size of a neighborhood. Select a random number r ∈ U(1,..., n), where n is the number of genes (jobs) of a single chromosome (sequence). Then, for all parameters j = 1,..., n:

$$x'_{i,j} = \begin{cases} x_{i1,j} + F\,(x_{i2,j} - x_{i3,j}) & \text{if } Random(0,1) < Pr_i \text{ or } j = r \\ x_{i,j} & \text{otherwise} \end{cases}$$

In the above equation, Pri is the probability of reproduction (with Pri ∈ [0, 1]), F is a scaling factor with F ∈ (0, n), and x_{i,j}, x′_{i,j}, x_{i1,j}, x_{i2,j}, and x_{i3,j} indicate the jth job of the parent, the offspring, and the three individuals in the neighborhood, respectively. Thus, each offspring consists of a stochastic linear combination of three randomly chosen individuals when Random(0, 1) < Pri; otherwise, the offspring inherits directly from the parent. Even when Pri = 0, at least one of the parameters of the offspring will differ from the parent (forced by the condition j = r).
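The mutation operator can be sketched with the standard DE/rand/1 arithmetic form on real-valued genes; this is a simplification of the paper's setting (which applies the operator to job sequences), and the names are ours:

```python
import random

def de_offspring(pop, pr, i, F=0.5):
    """DE/rand/1 offspring sketch for parent i: gene j becomes
    x_{i1,j} + F*(x_{i2,j} - x_{i3,j}) when Random(0,1) < pr[i] or j == r;
    otherwise it is inherited directly from the parent."""
    n = len(pop[i])
    # three distinct individuals, all different from the parent
    i1, i2, i3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    r = random.randrange(n)  # at least gene r is always recombined
    return [
        pop[i1][j] + F * (pop[i2][j] - pop[i3][j])
        if j == r or random.random() < pr[i]
        else pop[i][j]
        for j in range(n)
    ]
```

For permutations, the recombined vector would still need to be mapped back to a feasible job sequence, which this sketch omits.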

The mutation process described above requires that the population consist of more than three individuals. After the completion of the mutation process, the next step is to select the new generation. For each parent of the current population, the parent is replaced with its offspring if the fitness of the offspring is better; otherwise, the parent is carried over to the next generation. There are different strategies for DE based on the individual being perturbed, the number of individuals used in the mutation process, and the type of crossover used. We adopt the strategy that was reported by Babu and Jehan [11] as the most successful one.

The DE may not perform well if the parameters are not finely tuned [24]. According to Lampinen and Zalinka [20], the stagnation problem may occur with small values of s, mainly due to the lack of diversity. The stagnation problem occurs when the optimization algorithm stagnates before finding a globally optimal solution, even though the population is diverse.

A new version of DE has been proposed by Abbass [1] and later modified by Omran et al. [28], where the control parameters are self-adaptive. This new version is called self-adaptive differential evolution (SDE). In Omran et al.'s approach [28], the parameter Random(0, 1) is generated for each variable from a normal distribution with a mean of 0.5 and a standard deviation of 0.25; whenever a negative number is obtained, it is regenerated in order to obtain a non-negative number. Each individual i has its own probability of reproduction Pri. The parameter Pri is first initialized for each individual in the population from a uniform distribution between 0 and 1. The mutation probabilities are computed according to the following equation:

Pri = Pri1 + (Pri2 − Pri3) * Random(0, 1), where i1, i2, i3 ∈ U(1,..., s).
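The self-adaptive update above can be sketched as follows (hypothetical function name; clamping the result into [0, 1] is our assumption, since the recombination can step outside that range):

```python
import random

def update_pr(pr):
    """Recombine each individual's reproduction probability Pr_i from
    three randomly chosen individuals' values, as in SDE."""
    s = len(pr)
    out = []
    for _ in range(s):
        i1, i2, i3 = (random.randrange(s) for _ in range(3))
        p = pr[i1] + (pr[i2] - pr[i3]) * random.random()
        out.append(min(1.0, max(0.0, p)))  # assumed clamp to a valid probability
    return out
```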

Our proposed SDE heuristic considers a population(VApop) of given sequences (chromosomes). The initialpopulation is randomly generated. In order to produce thenext generations, two sequences are randomly selected

Int J Adv Manuf Technol (2008) 37:166–177 171

from VApop as parents to produce two offspring. These twooffspring are produced by swapping (crossover) sub-sequences of equal lengths among two parents. It isimportant to note that both offspring result in feasibleschedules. To do so, consider the following two sequencesof X and Y, where X={x1, x2,..., xi,..., xj,..., xn} and Y={y1,y2,..., yi,..., yj,..., yn}. The two segments of xi,..., xn and yi,...,yn are said to be compatible if they include the same subsetof jobs, but not necessarily in the same order. Twosequences X and Y are called compatible if they have twocompatible segments. The process of generating offspringfrom a given population is repeated Vcp times.
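Treating the compatible segments as suffixes, as in the definition above, a feasibility-preserving crossover can be sketched like this (the function name is ours):

```python
def crossover(x, y):
    """Swap compatible suffixes of two job sequences. Suffixes x[i:]
    and y[i:] are compatible when they contain the same set of jobs,
    so swapping them yields two feasible schedules (permutations)."""
    n = len(x)
    for i in range(1, n):  # earliest position allowing a swap
        if set(x[i:]) == set(y[i:]):
            return x[:i] + y[i:], y[:i] + x[i:]
    return x[:], y[:]      # only the trivial (whole-sequence) swap exists
```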

The process of generating the offspring is repeated for a given number of generations (Vg). Then, y sequences of the population (VApop) are replaced with the best y sequences from the set of offspring sequences. At the second stage, each sequence in the new population is mutated with the probability Pri for each individual i, as explained earlier. At the end of the given number of generations, a sequence with the best value of the objective function is accepted as the solution of the proposed heuristic.

The steps of the SDE heuristic are as follows:

Begin Heuristic
  Initialize a population, VApop, of random sequences
  Randomly initialize the mutation probability Pri of each sequence i in VApop
  Compute the OF of each sequence in VApop
  Order the sequences in VApop according to OF, from the best to the worst
  Repeat steps (i) to (vi) for Vg times:
  (i) Set the neighborhood size s to 1/Vg of the total population size
  (ii) Repeat steps (a) to (d) for Vcp times:
      (a) Randomly choose two different compatible parents to crossover
      (b) Select compatible segments in the two parents
      (c) Swap the segments
      (d) Save the new sequences in VAchild and compute the OF of each
  (iii) Order VAchild with respect to OF
  (iv) Replace the worst y sequences of VApop with the best y sequences in VAchild, maintaining order with respect to OF
  (v) Mutate each sequence i in VApop as follows:
      (a) Select a random position k between 1,..., n
      (b) For each job j in position [j] in sequence i, do the following:
          If j=k, or with probability Pri: select three random sequences i1, i2, i3 in the neighborhood s of sequence i, where i1≠i2≠i3≠i:
          Let jobs j1, j2, j3 be in position [j] of sequences i1, i2, i3, respectively; compute j4=(j1+j2+j3)/3
          Replace the job at position [j] in sequence i with j4 (fixing the inconsistency in sequence i by moving the other (original) job of value j4 in sequence i to the position vacated by job j)
          If sequence i after the mutation has a better objective function, then update the probability
              Pri = Pri1 + (Pri2 − Pri3) * Random(0, 1)
          Else reject the mutation
  (vi) Compute the OF and order VApop
  Store the best solution from VApop as the final solution
  Improve the final solution by applying a pairwise exchange procedure
End Heuristic
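Step (v) above, averaging the jobs found at one position in three neighboring sequences and repairing the result back into a permutation, can be sketched as follows. The acceptance test against the objective function and the Pri update are omitted here, and rounding the average to the nearest job index is our assumption:

```python
import random

def mutate(seq, neighbours, pr):
    """Position-averaging mutation with repair: the job at each
    triggered position is replaced by the rounded average of the jobs
    three neighbouring sequences hold there; a swap keeps the result
    a valid permutation."""
    s = seq[:]
    n = len(s)
    k = random.randrange(n)                  # forced position (j = k)
    lo, hi = min(s), max(s)
    for j in range(n):
        if j == k or random.random() < pr:
            s1, s2, s3 = random.sample(neighbours, 3)
            j4 = round((s1[j] + s2[j] + s3[j]) / 3)
            j4 = min(max(j4, lo), hi)        # keep j4 an existing job id
            pos = s.index(j4)
            s[j], s[pos] = s[pos], s[j]      # repair: swap, don't overwrite
    return s
```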

It should be noted that the two parents used to perform the crossover operation are scanned from left to right, and we stop at the earliest position at which a swap can be made; that is, the scan continues until the positions scanned so far in both sequences (parents) contain the same set of jobs, not necessarily in the same order. This is the so-called single-point crossover. It should also be noted that if y is less than the total number of offspring, then the remaining offspring are discarded. On the other hand, if y is greater than the total number of offspring, we temporarily reduce y to the number of offspring (which allows more parents to survive into the next generation).

Setting the parameters of the proposed SDE heuristic is essential for achieving good performance. An initial estimate of the best value of a given parameter is obtained by changing the values of that parameter while keeping all other parameters constant. We used the following values as initial estimates of the parameters: VApop=n, 2n,..., 5n; Vg=n, 2n,..., 5n; Vcp=n, 2n,..., 5n; y=1/6, 2/6,..., 5/6; and Pri=0.100, 0.105, 0.110,..., 0.200, since Pri is usually taken between 0.1 and 0.2. Once these initial values are determined, the method of factorial experimental design (three values for each parameter: the initial best value of that parameter, one value above it, and one value below it) is used to fine-tune the parameter values. After this experimentation, the parameters of the SDE heuristic are set as follows: VApop is set to 2n, Vg is set to 4n, Vcp is set to 2n, y is set to 3/6, and Pri is set to 0.135.
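The fine-tuning stage, a full factorial design over the chosen levels of each parameter, can be sketched as below; `evaluate` is a hypothetical callback returning the average objective value of the heuristic under a given parameter setting:

```python
import itertools

def factorial_sweep(evaluate, levels):
    """Full factorial design: try every combination of the given
    parameter levels and return the best-scoring setting."""
    names = sorted(levels)
    best_setting, best_score = None, float("inf")
    for combo in itertools.product(*(levels[k] for k in names)):
        setting = dict(zip(names, combo))
        score = evaluate(setting)
        if score < best_score:
            best_setting, best_score = setting, score
    return best_setting
```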

4 Computational results

The proposed heuristics of ACO, SDE, and SA were implemented in C under the GCC-3.4.2 compiler using the built-in math library. The machine used was a Sun Fire V880 with four 900-MHz CPU processors running the Solaris Version 9.0 operating system with 8 GB of RAM. To measure the effectiveness of the heuristics, we compared the performance of the three heuristics against each other and against a random solution.

The processing times were randomly generated from a uniform distribution [1, 100] on all m machines at the first stage, as well as on the assembly machine at the second stage. In the scheduling literature, most researchers have used this distribution in their experimentation, e.g., Wang et al. [39], Pan and Chen [30], Al-Anzi and Allahverdi [4], and Allahverdi and Al-Anzi [7]. The reason for using a uniform distribution with a wide range is that the variance of this distribution is large, and if a heuristic performs well with such a distribution, it is likely to perform well with other distributions.

Problem data were generated for different numbers of jobs: 20, 40, 60, and 80. The experimentation was conducted for 2, 4, 6, or 8 machines at the first stage. We chose the values of the weight α to be 0.2, 0.4, 0.6, or 0.8. Weight values of less than 0.5 give more weight to the total completion time criterion, whereas values of more than 0.5 give more weight to the Cmax criterion.
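For reference, the bicriteria objective OF = α·Cmax + (1 − α)·(mean completion time) can be computed from the two-stage assembly dynamics described in the introduction; the following is a possible sketch (function and variable names are ours):

```python
def objective(seq, t, p, alpha):
    """Weighted bicriteria objective for a two-stage assembly flowshop:
    t[k][j] is job j's time on first-stage machine k, p[j] its time on
    the second-stage assembly machine; jobs are processed in seq order."""
    m = len(t)
    stage1 = [0.0] * m        # completion time on each first-stage machine
    c2 = 0.0                  # completion time on the assembly machine
    completions = []
    for j in seq:
        for k in range(m):
            stage1[k] += t[k][j]
        ready = max(stage1)   # assembly may start once all m parts are done
        c2 = max(c2, ready) + p[j]
        completions.append(c2)
    cmax = completions[-1]
    mean_ct = sum(completions) / len(completions)
    return alpha * cmax + (1 - alpha) * mean_ct
```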

We compared the performance of the heuristics using three measures: average percentage error (Error), standard deviation (Std), and the percentage of times that the best solution is obtained (NOB) out of 30 replicates. The percentage error is defined as 100*(OF of the heuristic − OF of the best heuristic)/(OF of the best heuristic).
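The Error and NOB measures can then be computed per combination as sketched below (function name ours; Std would be the standard deviation of the same per-replicate errors):

```python
def error_stats(of_values, best):
    """Average percentage error and NOB over replicates, as defined in
    the text: 100*(OF - OF_best)/OF_best, and the share of replicates
    in which the heuristic matched the best solution found."""
    errors = [100.0 * (v - b) / b for v, b in zip(of_values, best)]
    avg_error = sum(errors) / len(errors)
    nob = 100.0 * sum(e == 0 for e in errors) / len(errors)
    return avg_error, nob
```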

There are 64 combinations of the different values of n, m, and α. Thirty replicates were generated for each combination; therefore, a total of 1,920 instances were generated and evaluated. The results of the computational experiments are summarized in Tables 1 and 2 and Figs. 1, 2, 3, 4, 5, and 6. Note that the average error of the random solution was very large (always 100) compared with the other heuristics, and it is therefore not reported in the tables and figures.

Table 1 Computation results for n=20 and n=40

n   m  α     ACO                     SDE                     SA
             NOB   Error    Std      NOB   Error    Std      NOB   Error    Std

20  2  0.2   86.7  0.5873  0.0210    83.3  0.1511  0.0040    93.3  0.0297  0.0010
    2  0.4   76.7  1.2689  0.0400    73.3  0.6256  0.0150    90.0  0.3289  0.0110
    2  0.6   66.7  1.1783  0.0210    70.0  0.1789  0.0030    83.3  0.4992  0.0140
    2  0.8   66.7  3.4560  0.0720    80.0  0.5188  0.0140    90.0  0.1496  0.0050
    4  0.2   50.0  1.6534  0.0330    60.0  1.3888  0.0220    86.7  0.0638  0.0010
    4  0.4   43.3  2.9067  0.0380    66.7  0.6021  0.0120    90.0  0.2441  0.0100
    4  0.6   66.7  2.4312  0.0590    63.3  0.8310  0.0190    83.3  0.3667  0.0100
    4  0.8   63.3  0.9404  0.0220    76.7  0.3761  0.0090    83.3  0.4239  0.0170
    6  0.2   46.7  2.9950  0.0450    53.3  1.8601  0.0370    86.7  0.5723  0.0220
    6  0.4   53.3  2.9211  0.0450    60.0  1.8822  0.0370    76.7  0.8499  0.0200
    6  0.6   60.0  2.2107  0.0370    66.7  0.4923  0.0100    73.3  0.4437  0.0090
    6  0.8   63.3  1.6807  0.0370    60.0  0.9900  0.0190    86.7  0.3365  0.0160
    8  0.2   40.0  4.9737  0.0750    40.0  3.1820  0.0500    83.3  0.8327  0.0210
    8  0.4   50.0  2.9679  0.0410    60.0  0.9967  0.0240    73.3  0.4194  0.0110
    8  0.6   43.3  3.7342  0.0550    53.3  1.3951  0.0230    66.7  0.3246  0.0060
    8  0.8   60.0  2.1824  0.0430    70.0  1.3057  0.0310    76.7  0.6135  0.0140

40  2  0.2   56.7  1.6937  0.0320    70.0  0.3478  0.0070    73.3  0.3520  0.0090
    2  0.4   40.0  2.4787  0.0350    63.3  0.6308  0.0130    90.0  0.1748  0.0050
    2  0.6   73.3  1.3431  0.0320    70.0  0.4155  0.0080    80.0  0.3288  0.0080
    2  0.8   70.0  0.8190  0.0210    73.3  0.3434  0.0080    90.0  0.1582  0.0050
    4  0.2   30.0  4.1673  0.0520    56.7  1.0745  0.0190    73.3  0.3689  0.0090
    4  0.4   33.3  3.4138  0.0520    56.7  0.7929  0.0110    76.7  0.4635  0.0100
    4  0.6   46.7  2.2900  0.0350    56.7  0.7729  0.0120    73.3  0.2235  0.0060
    4  0.8   50.0  3.2558  0.0570    63.3  0.8626  0.0280    83.3  0.3459  0.0100
    6  0.2   40.0  3.7157  0.0540    53.3  1.3197  0.0270    76.7  0.4414  0.0150
    6  0.4   43.3  3.4049  0.0650    53.3  0.7018  0.0130    76.7  0.4694  0.0190
    6  0.6   46.7  2.4260  0.0470    63.3  0.5708  0.0100    73.3  0.6739  0.0210
    6  0.8   36.7  3.8941  0.0480    56.7  0.8365  0.0130    66.7  0.5037  0.0100
    8  0.2   46.7  2.8077  0.0500    50.0  1.7836  0.0260    76.7  0.6990  0.0200
    8  0.4   33.3  4.5073  0.0580    53.3  1.1048  0.0160    60.0  1.0696  0.0210
    8  0.6   40.0  4.4242  0.0610    40.0  1.5125  0.0210    80.0  0.2205  0.0080
    8  0.8   36.7  3.6942  0.0550    56.7  1.2437  0.0310    60.0  0.2873  0.0060

Int J Adv Manuf Technol (2008) 37:166–177 173

As indicated by Figs. 1 and 2, SDE and SA perform much better than ACO. Figure 1 indicates that the errors of SDE and SA are smaller than that of ACO. These results were statistically tested by using a t-test. More specifically, the following hypothesis test was conducted for all 64 combinations:

– Null hypothesis: the average error of SDE = the average error of ACO

Table 2 Computation results for n=60 and n=80

n   m  α     ACO                     SDE                     SA
             NOB   Error    Std      NOB   Error    Std      NOB   Error    Std

60  2  0.2   53.3  1.5436  0.0320    76.7  0.2188  0.0050    73.3  0.2466  0.0050
    2  0.4   60.0  1.1664  0.0280    66.7  0.3997  0.0070    90.0  0.0967  0.0030
    2  0.6   43.3  2.3253  0.0360    66.7  0.7340  0.0130    73.3  0.2372  0.0070
    2  0.8   60.0  1.4121  0.0240    70.0  0.4589  0.0090    80.0  0.1007  0.0020
    4  0.2   33.3  2.5682  0.0410    60.0  0.4995  0.0070    63.3  0.3437  0.0060
    4  0.4   36.7  3.1718  0.0540    60.0  0.7381  0.0150    80.0  0.3026  0.0060
    4  0.6   40.0  3.4525  0.0470    53.3  1.0188  0.0170    80.0  0.0800  0.0020
    4  0.8   40.0  2.8961  0.0510    53.3  0.5748  0.0110    76.7  0.2317  0.0060
    6  0.2   33.3  3.1158  0.0460    53.3  0.5624  0.0090    73.3  0.2853  0.0070
    6  0.4   30.0  4.3776  0.0590    36.7  1.0534  0.0120    73.3  0.1939  0.0040
    6  0.6   33.3  2.0278  0.0280    50.0  0.4848  0.0070    60.0  0.4576  0.0090
    6  0.8   23.3  1.6640  0.0170    53.3  0.7759  0.0110    76.7  0.3506  0.0080
    8  0.2   40.0  3.8554  0.0600    63.3  0.6068  0.0120    46.7  0.7948  0.0140
    8  0.4   36.7  2.8958  0.0430    50.0  1.0059  0.0150    46.7  0.8053  0.0120
    8  0.6   30.0  3.6654  0.0530    56.7  0.8052  0.0190    43.3  1.0975  0.0180
    8  0.8   23.3  3.0074  0.0480    53.3  0.8909  0.0140    63.3  0.4886  0.0080

80  2  0.2   50.0  1.6301  0.0370    60.0  0.3799  0.0060    86.7  0.0424  0.0010
    2  0.4   56.7  1.5826  0.0350    66.7  0.1760  0.0030    80.0  0.1438  0.0030
    2  0.6   46.7  2.3872  0.0450    60.0  0.5033  0.0090    76.7  0.1685  0.0040
    2  0.8   53.3  1.0526  0.0230    60.0  0.4180  0.0070    73.3  0.1984  0.0030
    4  0.2   46.7  1.2114  0.0230    53.3  0.4881  0.0070    73.3  0.2868  0.0070
    4  0.4   40.0  2.3101  0.0270    60.0  0.6231  0.0110    63.3  0.3372  0.0060
    4  0.6   46.7  1.2399  0.0230    56.7  0.6050  0.0090    66.7  0.3530  0.0080
    4  0.8   40.0  1.5642  0.0300    56.7  0.7775  0.0110    70.0  0.2432  0.0060
    6  0.2   33.3  2.0822  0.0320    33.3  0.8114  0.0100    60.0  0.3922  0.0060
    6  0.4   43.3  1.9540  0.0330    43.3  0.8902  0.0100    76.7  0.2120  0.0050
    6  0.6   50.0  1.0807  0.0150    63.3  0.5127  0.0100    60.0  0.6232  0.0100
    6  0.8   43.3  1.7999  0.0240    46.7  1.0438  0.0160    83.3  0.1810  0.0040
    8  0.2   33.3  3.3443  0.0530    63.3  0.6712  0.0140    63.3  0.6025  0.0100
    8  0.4   23.3  3.6107  0.0500    40.0  0.8919  0.0130    70.0  0.4621  0.0110
    8  0.6   26.7  4.4775  0.0590    26.7  0.6985  0.0070    63.3  0.4591  0.0090
    8  0.8   43.3  2.7539  0.0510    50.0  0.6486  0.0120    70.0  0.1480  0.0030

Fig. 1 Overall error for different values of α

Fig. 2 Overall standard deviation for different values of α


– Alternative hypothesis: the average error of SDE < the average error of ACO

The null hypothesis was rejected for all 64 combinations at the 1% significance level. This implies that the average error of SDE is, indeed, statistically smaller than that of ACO.
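The paired t statistic behind these tests can be sketched as follows; the critical value quoted in the comment (about 2.46 for 29 degrees of freedom, one-sided, at the 1% level) is taken from standard t tables and should be verified against one:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """One-sided paired t statistic for H0: mean(a) = mean(b) against
    H1: mean(a) < mean(b). With 30 replicates (29 degrees of freedom),
    H0 is rejected at the 1% level when t < -2.46 (approximate table
    value, an assumption to be checked)."""
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))
```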

Similarly, a comparison of SA and ACO was conducted as:

– Null hypothesis: the average error of SA = the average error of ACO

– Alternative hypothesis: the average error of SA < the average error of ACO

The null hypothesis was rejected for all 64 combinations at the 1% significance level. Therefore, the average error of SA is, indeed, statistically smaller than that of ACO.

Furthermore, a comparison of SDE and SA was conducted as:

– Null hypothesis: the average error of SA = the average error of SDE

– Alternative hypothesis: the average error of SA < the average error of SDE

The null hypothesis was rejected for 56 combinations; for the remaining eight combinations, it was not rejected at the 1% significance level. Therefore, in general, SA performs better than SDE. In order to evaluate a heuristic, one has to consider not only the error but also the CPU time. The CPU times of the heuristics are given in Figs. 5 and 6. As can be seen, SA takes less time than both ACO and SDE. In particular, the CPU time of SDE grows for larger numbers of jobs, since its time complexity is higher than that of the other two methods, and it may therefore not be appropriate for large values of n when computational time is important. As a result, the best heuristic is SA.

Figures 1 and 2 also indicate the performance of the heuristics for different values of α; their performance, in general, shows the same behavior across α values, as can be seen from the figures. The performance of all of the heuristics becomes slightly worse as m becomes larger, as indicated by Fig. 3. On the other hand, the performance of the heuristics, in general, is not affected by the number of jobs (n), as shown by Fig. 4.

Fig. 3 Overall error for different values of m

Fig. 4 Overall error for different values of n

Fig. 5 Average CPU time for different values of n

Fig. 6 Average CPU time for different values of m

5 Summary and future research

The two-stage assembly scheduling problem was addressed with a weighted sum of makespan and mean completion time criteria. Three heuristics were proposed: simulated annealing (SA), ant colony optimization (ACO), and self-adaptive differential evolution (SDE). Extensive computational experiments were conducted in order to evaluate the performance of the proposed heuristics. The experiments showed that both SA and SDE perform much better than ACO, and that SA, in general, performs better than SDE. The computational time of SA was also less than that of both SDE and ACO. Therefore, SA was shown to be the best heuristic, followed by SDE. However, SDE may not be suitable for large problems, since it requires more computational time.

One possible direction for future research would be to address the problem with respect to a weighted sum of makespan and maximum lateness, or a weighted sum of mean completion time and maximum lateness.

Setup times were ignored in this paper. However, for some scheduling environments, it may not be realistic to ignore setup times [9, 10]. Therefore, another research direction would be to address the problem of this paper with setup times explicitly considered.

Acknowledgments This research was supported by the Kuwait University Research Administration, grant no. EI02/05.

References

1. Abbass HA (2002) The self-adaptive Pareto differential evolution algorithm. In: Proceedings of the 4th IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii, May 2002, pp 831–836

2. Al-Anzi FS, Allahverdi A (2006) A self-adaptive differential evolution heuristic for two-stage assembly scheduling problem to minimize maximum lateness with setup times. Eur J Oper Res (in press)

3. Al-Anzi FS, Allahverdi A (2006) A hybrid tabu search heuristic for the two-stage assembly scheduling problem. Int J Oper Res 3(2):109–119

4. Al-Anzi FS, Allahverdi A (2004) A hybrid simulated annealing heuristic for multimedia object requests scheduling problem. Int J Comput Appl 26(4):207–212

5. Allahverdi A (2003) The two- and m-machine flowshop scheduling problems with bicriteria of makespan and mean flowtime. Eur J Oper Res 147(2):373–396

6. Allahverdi A, Al-Anzi FS (2006) Evolutionary heuristics and an algorithm for the two-stage assembly scheduling problem to minimize makespan with setup times. Int J Prod Res 44(22):4713–4735

7. Allahverdi A, Al-Anzi FS (2002) Using two-machine flowshop with maximum lateness objective to model multimedia data objects scheduling problem for WWW applications. Comput Oper Res 29(8):971–994

8. Allahverdi A, Aldowaisan T (2002) No-wait flowshops with bicriteria of makespan and total completion time. J Oper Res Soc 53(9):1004–1015

9. Allahverdi A, Gupta JND, Aldowaisan T (1999) A review of scheduling research involving setup considerations. Omega Int J Manag Sci 27(2):219–239

10. Allahverdi A, Ng CT, Cheng TCE, Kovalyov MY (2007) A survey of scheduling problems with setup times or costs. Eur J Oper Res (in press)

11. Babu BV, Jehan MML (2003) Differential evolution for multi-objective optimization. In: Proceedings of the 5th IEEE Congress on Evolutionary Computation (CEC 2003), Canberra, Australia, December 2003, vol 4, pp 2696–2703

12. Blum C (2005) Beam-ACO—hybridizing ant colony optimization with beam search: an application to open shop scheduling. Comput Oper Res 32(6):1565–1591

13. Colorni A, Dorigo M, Maniezzo V (1991) Distributed optimization by ant colonies. In: Varela FJ, Bourgine P (eds) Proceedings of the 1st European Conference on Artificial Life, Paris, France, December 1991

14. Dorigo M, Gambardella LM (1997) Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans Evol Comput 1(1):53–66

15. Gonzalez T, Sahni S (1978) Flow shop and job shop schedules. Oper Res 26(1):36–52

16. Gutjahr WJ, Rauner MS (2007) An ACO algorithm for a dynamic regional nurse-scheduling problem in Austria. Comput Oper Res 34(3):642–666

17. Haouari M, Daouas T (1999) Optimal scheduling of the 3-machine assembly-type flow shop. RAIRO Rech Opér 33(4):439–445

18. Ho JC, Chang Y-L (1991) A new heuristic for the n-job, m-machine flow-shop problem. Eur J Oper Res 52(2):194–202

19. Hariri AMA, Potts CN (1997) A branch and bound algorithm for the two-stage assembly scheduling problem. Eur J Oper Res 103(3):547–556

20. Lampinen J, Zelinka I (2000) On stagnation of the differential evolution algorithm. In: Proceedings of the 6th International MENDEL Conference on Soft Computing, Brno, Czech Republic, June 2000, pp 76–83

21. Lee C-E, Chou F-D (1998) A two-machine flowshop scheduling heuristic with bicriteria objective. Int J Ind Eng 5(2):128–139

22. Lee C-Y, Cheng TCE, Lin BMT (1993) Minimizing the makespan in the 3-machine assembly-type flowshop scheduling problem. Manag Sci 39(5):616–625

23. Liao CJ, Juan HC (2006) An ant colony optimization for single-machine tardiness scheduling with sequence-dependent setups. Comput Oper Res (in press)

24. Liu J, Lampinen J (2002) A fuzzy adaptive differential evolution algorithm. In: Proceedings of the 2002 IEEE International Region 10 Conference on Computers, Communications, Control and Power Engineering (TENCON 2002), Beijing, China, October 2002, pp 606–611

25. Low C (2005) Simulated annealing heuristic for flow shop scheduling problems with unrelated parallel machines. Comput Oper Res 32(8):2013–2025

26. Mika M, Waligóra G, Weglarz J (2005) Simulated annealing and tabu search for multi-mode resource-constrained project scheduling with positive discounted cash flows and different payment models. Eur J Oper Res 164(3):639–668

27. Nagar A, Haddock J, Heragu SS (1996) A combined branch-and-bound and genetic algorithm based approach for a flowshop scheduling problem. Ann Oper Res 63(3):397–414

28. Omran MGH, Salman A, Engelbrecht AP (2005) Self-adaptive differential evolution. In: Proceedings of the International Conference on Computational Intelligence and Security (CIS 2005), Xi'an, China, December 2005, pp 192–199

29. Onwubolu G, Davendra D (2006) Scheduling flow shops using differential evolution algorithm. Eur J Oper Res 171(2):674–692

30. Pan C-H, Chen J-S (1997) Scheduling alternative operations in two-machine flow-shops. J Oper Res Soc 48(5):533–540

31. Potts CN, Sevast'janov SV, Strusevich VA, Van Wassenhove LN, Zwaneveld CM (1995) The two-stage assembly scheduling problem: complexity and approximation. Oper Res 43(2):346–355

32. Rajendran C (1995) Heuristics for scheduling in flowshop with multiple objectives. Eur J Oper Res 82(3):540–555

33. Sadegheih A (2006) Scheduling problem using genetic algorithm, simulated annealing and the effects of parameter values on GA performance. Appl Math Model 30(2):147–154

34. Sayin S, Karabati S (1999) A bicriteria approach to the two-machine flow shop scheduling problem. Eur J Oper Res 113(2):435–449

35. Sivrikaya-Serifoglu F, Ulusoy G (1998) A bicriteria two-machine permutation flowshop problem. Eur J Oper Res 107(2):414–430

36. Storn R, Price K (1995) Differential evolution—a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical report TR-95-012, International Computer Science Institute, Berkeley, California

37. Sun X, Morizawa K, Nagasawa H (2003) Powerful heuristics to minimize makespan in fixed, 3-machine, assembly-type flowshop scheduling. Eur J Oper Res 146(3):498–516

38. Tozkapan A, Kirca O, Chung C-S (2003) A branch and bound algorithm to minimize the total weighted flowtime for the two-stage assembly scheduling problem. Comput Oper Res 30(2):309–320

39. Wang MY, Sethi SP, Van de Velde SL (1997) Minimizing makespan in a class of reentrant shops. Oper Res 45(5):702–712

40. Yeh W-C (1999) A new branch-and-bound approach for the n/2/flowshop/αF+βCmax flowshop scheduling problem. Comput Oper Res 26(13):1293–1310

41. Yeh W-C (2001) An efficient branch-and-bound algorithm for the two-machine bicriteria flowshop scheduling problem. J Manuf Syst 20(2):113–123

42. Yeh W-C, Allahverdi A (2004) A branch-and-bound algorithm for the three-machine flowshop scheduling problem with bicriteria of makespan and total flowtime. Int Trans Oper Res 11(3):341–359
