
Optimization Methods for the Single-Machine Problem

Chapter 3

Elements of Sequencing and Scheduling, by Kenneth R. Baker

Byung-Hyun Ha

R4

2

Outline

Introduction

Adjacent pairwise interchange methods

A dynamic programming approach

Dominance property

A branch and bound approach

Mixed integer programming formulation from Baker and Trietsch, 2009

Summary

3

Introduction

Different scheduling procedures for different measures
• F-problem and L-problem: SPT sequencing
• U-problem: Algorithm 1 of Ch. 2
• T-problem: ??

• NP-hard (no efficient algorithm is currently known)

Optimization methods
• Dynamic programming approach
• Branch and bound approach
• Approximation schemes, ...

Using optimization problem solvers
• Problems are typically described by a mathematical programming model.
• The solvers (e.g., CPLEX, LINDO) employ branch-and-cut, branch-and-price algorithms, etc.

4

Introduction

Typical scheduling problems: the usual penalty-function form

• gj(t) -- a penalty incurred when job j completes at time t

• assumed to be nondecreasing in t

Maximum penalty problem
• to minimize the maximum gj(t), e.g., the Tmax-problem

Total penalty problem
• to minimize the sum of gj(t), e.g., the T-problem

Maximum penalty problems
Theorem 1
• When the objective is to minimize the maximum penalty, job i may be assigned the last position in sequence if gi(P) ≤ gk(P) for all jobs k ≠ i, where P is the total processing time of all jobs.

Application of Theorem 1
• Tmax-problem -- gj(t) = max{0, t – dj}
• Weighted quadratic tardiness -- gj(t) = wj(max{0, t – dj})^2
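Theorem 1 suggests a simple greedy procedure that fills the sequence from the back: among the unscheduled jobs, pick one whose penalty at the current total remaining processing time is smallest and fix it in the last open position. The sketch below is my own minimal illustration of that idea (the function name and the representation of the penalties as a list of callables are assumptions, not from the book):

```python
# Minimal sketch of the greedy rule implied by Theorem 1,
# assuming each g_j is a nondecreasing penalty function of completion time.
def min_max_penalty_sequence(p, penalties):
    """p[j] -- processing time of job j; penalties[j](t) -- penalty g_j(t).
    Returns a sequence (list of job indices) minimizing the maximum penalty."""
    remaining = set(range(len(p)))
    P = sum(p)                        # completion time of whichever job goes last
    sequence = []
    while remaining:
        # Theorem 1: a job with the smallest penalty at time P may go last.
        last = min(remaining, key=lambda j: penalties[j](P))
        sequence.append(last)
        remaining.remove(last)
        P -= p[last]                  # the next-to-last job completes at the new P
    sequence.reverse()                # the sequence was built back to front
    return sequence

# Example: Tmax-problem with the three-job data used later in the deck
# (p = 1, 2, 3; d = 4, 2, 3); g_j(t) = max(0, t - d_j).
p = [1, 2, 3]
d = [4, 2, 3]
penalties = [lambda t, dj=dj: max(0, t - dj) for dj in d]
print(min_max_penalty_sequence(p, penalties))   # [1, 2, 0], i.e., sequence 2-3-1
```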

5

Adjacent Pairwise Interchange Methods

Thrust of adjacent pairwise interchange methods
• Finding a sequence for which every adjacent pairwise interchange leads to poorer performance
• F-problem: optimal by this method
• T-problem: not sufficient to identify optimality!

Three-job example

There does not exist a transitive (optimal) sequencing rule for the T-problem.

Job j    1    2    3
pj       1    2    3
dj       4    2    3

All six sequences and their total tardiness:
1-2-3: T = 4    1-3-2: T = 5    2-1-3: T = 3
2-3-1: T = 4    3-1-2: T = 4    3-2-1: T = 5

3-1-2 (T = 4) is a local optimum under adjacent pairwise interchange, although the global optimum is 2-1-3 (T = 3).
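The claim can be checked by brute force. The following short sketch (an added illustration, not from the book) enumerates all six sequences of the three-job example and flags those that no adjacent pairwise interchange can improve:

```python
from itertools import permutations

p = {1: 1, 2: 2, 3: 3}          # processing times of the three-job example
d = {1: 4, 2: 2, 3: 3}          # due dates

def total_tardiness(seq):
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += max(0, t - d[j])
    return total

def neighbors(seq):
    """All sequences reachable by one adjacent pairwise interchange."""
    for i in range(len(seq) - 1):
        s = list(seq)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield tuple(s)

for seq in permutations(p):
    T = total_tardiness(seq)
    local_opt = all(total_tardiness(n) >= T for n in neighbors(seq))
    print(seq, "T =", T, "(local optimum)" if local_opt else "")
# The output shows T(3-1-2) = 4 is a local optimum even though T(2-1-3) = 3 is smaller.
```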

6

Dynamic Programming Approach

Additive form of regular measures: Z = Σ_{j=1..n} gj(Cj)

For total tardiness: gj(Cj) = max{0, Cj – dj}

We can easily formulate a dynamic program to find an optimal sequence.

Notation
• X -- set of all jobs; J -- subset of X
• p(J) -- total time required to process the jobs in J

Dynamic programming formulation: G(J) = min_{j∈J} {G(J – {j}) + gj(p(J))}, where G(∅) = 0

• minimum penalty for the subproblem consisting of the jobs in J

• gj(p(J)) -- penalty incurred by job j when it is the last job in J

• G(J – {j}) -- minimum penalty incurred by remaining jobs

Then the minimum total penalty is G(X).

[Diagram: the jobs in J are scheduled first, with some job j in the last position of J completing at time p(J); the jobs in X – J follow.]
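The recursion translates almost line for line into memoized Python. The sketch below is my own illustration; it keys subproblems by frozenset rather than by the integer labels discussed on the later slides:

```python
from functools import lru_cache

def dp_schedule(p, d):
    """Subset dynamic program G(J) = min_{j in J} { G(J - {j}) + g_j(p(J)) }
    for total tardiness, i.e., g_j(t) = max(0, t - d[j])."""
    jobs = frozenset(p)

    @lru_cache(maxsize=None)
    def G(J):
        if not J:
            return 0, ()                       # G(empty set) = 0
        pJ = sum(p[j] for j in J)              # p(J): completion time of the last job in J
        best = None
        for j in J:                            # try each job of J in the last position
            cost, seq = G(J - {j})
            cost += max(0, pJ - d[j])          # g_j(p(J))
            if best is None or cost < best[0]:
                best = (cost, seq + (j,))
        return best

    return G(jobs)

# Running three-job example: p = (1, 2, 3), d = (4, 2, 3)
p = {1: 1, 2: 2, 3: 3}
d = {1: 4, 2: 2, 3: 3}
print(dp_schedule(p, d))    # (3, (2, 1, 3)) -- total tardiness 3 with sequence 2-1-3
```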

7

Dynamic Programming Approach

Example -- X = {1, 2, 3}, gj(Cj) = max{0, Cj – dj}

G(X) = G({1,2,3}) = min_{j∈{1,2,3}} {G({1,2,3} – {j}) + gj(p({1,2,3}))}
= min{G({2,3}) + g1(p({1,2,3})), G({1,3}) + g2(p({1,2,3})), G({1,2}) + g3(p({1,2,3}))}
= min{2 + 2, 0 + 4, 0 + 3} = 3; sequence: 2-1-3

G({2,3}) = min_{j∈{2,3}} {G({2,3} – {j}) + gj(p({2,3}))}
= min{G({3}) + g2(p({2,3})), G({2}) + g3(p({2,3}))} = min{0 + 3, 0 + 2} = 2

G({1,3}) = min_{j∈{1,3}} {G({1,3} – {j}) + gj(p({1,3}))}
= min{G({3}) + g1(p({1,3})), G({1}) + g3(p({1,3}))} = min{0 + 0, 0 + 1} = 0

G({1,2}) = min_{j∈{1,2}} {G({1,2} – {j}) + gj(p({1,2}))}
= min{G({2}) + g1(p({1,2})), G({1}) + g2(p({1,2}))} = min{0 + 0, 0 + 1} = 0

G({1}) = G(∅) + g1(p1) = 0
G({2}) = G(∅) + g2(p2) = 0
G({3}) = G(∅) + g3(p3) = 0

Job j    1    2    3
pj       1    2    3
dj       4    2    3

8

Dynamic Programming Approach

Exercise (p. 3.6) X = {1, 2, 3, 4}, gj(Cj) = max{0, Cj – dj}

Computational effort
• In proportion to n·2^n
• Number of subsets to be considered: 2^n
• Finding G(J) for each subset J: O(n) evaluations
• For comparison, the U-problem takes O(n log n); complete enumeration grows in proportion to n!

Job j    1    2    3    4
pj       5    6    9    8
dj       9    7   11   13

9

Dynamic Programming Approach

Computer implementation: labeling scheme

• Label of a set of jobs J -- L(J) = Σ_{k∈J} Lk, where Lk = 2^(k–1) is the label of job k

Overall procedure
1. Set b(i) = 0 for i = 1, ..., |X|, and G(0) = 0.

2. Loop

2-1. Find smallest integer j for which b(j) = 0.

If all b(j) = 1, then stop.

2-2. Set b(j) = 1.

2-3. For all i < j, set b(i) = 0.

2-4. Let J = {j | b(j) = 1}.

2-5. Set G(L(J)) = min_{j∈J} {G(L(J – {j})) + gj(p(J))}.
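Steps 2-1 through 2-4 are simply binary counting on the vector b, so each subset J is reached only after all of its own subsets, and its label L(J) is the corresponding integer. A small illustrative sketch (mine, not the book's):

```python
def subsets_in_label_order(n):
    """Generate (label, subset) pairs so that every subset of J appears
    before J itself; equivalent to steps 2-1 to 2-4 with L_k = 2**(k-1)."""
    b = [0] * (n + 1)                     # b[1..n]; b[0] unused
    while True:
        # 2-1: find the smallest j with b(j) = 0; stop if none exists
        j = next((i for i in range(1, n + 1) if b[i] == 0), None)
        if j is None:
            return
        b[j] = 1                          # 2-2
        for i in range(1, j):             # 2-3: reset all lower entries
            b[i] = 0
        J = {i for i in range(1, n + 1) if b[i] == 1}   # 2-4
        label = sum(2 ** (k - 1) for k in J)            # L(J)
        yield label, J

for label, J in subsets_in_label_order(3):
    print(label, sorted(J))
# 1 [1] / 2 [2] / 3 [1, 2] / 4 [3] / 5 [1, 3] / 6 [2, 3] / 7 [1, 2, 3]
```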

10

Dominance Properties

Dominance properties reduce the number of alternatives to be considered.

Those regarding schedules
• Schedules without inserted idle time constitute a dominant set.

Those involving relationships between jobs
Theorem 2
• In the Tw-problem, suppose that there exists a job k for which dk ≥ p(X). Then job k may be assigned the last position in sequence.

Theorem 3 (Emmons, 1969) (HW 3)
• In the T-problem, there is an optimal schedule in which job j follows job i if one of the following conditions is satisfied:

(a) pi ≤ pj and di ≤ max{dj, p(Bj) + pj}

(b) di ≤ dj and dj ≥ p(Ai') – pj

(c) dj ≥ p(Ai')

where

• Ai -- set of jobs to follow job i in an optimal schedule; i.e., “after” set

• Ai' -- complement set of Ai , i.e., Ai' = X – Ai

• Bi -- set of jobs to precede job i in an optimal schedule; i.e., “before” set
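One way to use Theorem 3 computationally is to apply conditions (a)-(c) repeatedly, letting the known before/after sets grow until nothing new can be deduced. The sketch below is my own illustration under that reading of the theorem (it omits transitive closure of the relations, which only makes it more conservative); it is not the book's implementation:

```python
def emmons_precedences(p, d):
    """Repeatedly apply conditions (a)-(c) of Theorem 3 (Emmons, 1969).
    Returns a set of ordered pairs (i, j) meaning 'job i precedes job j
    in some optimal schedule'.  p and d are dicts keyed by job."""
    jobs = list(p)
    P = sum(p.values())                     # p(X), total processing time
    prec = set()                            # known precedence relations

    def p_of(S):                            # total processing time of a set of jobs
        return sum(p[k] for k in S)

    changed = True
    while changed:
        changed = False
        for i in jobs:
            A_i = {j for (a, j) in prec if a == i}       # "after" set of i
            pAi_comp = P - p_of(A_i)                     # p(A_i')
            for j in jobs:
                if i == j or (i, j) in prec or (j, i) in prec:
                    continue
                B_j = {a for (a, b) in prec if b == j}   # "before" set of j
                cond_a = p[i] <= p[j] and d[i] <= max(d[j], p_of(B_j) + p[j])
                cond_b = d[i] <= d[j] and d[j] >= pAi_comp - p[j]
                cond_c = d[j] >= pAi_comp
                if cond_a or cond_b or cond_c:
                    prec.add((i, j))
                    changed = True
    return prec

# With the five-job data used later (p = 4, 3, 7, 2, 2; d = 5, 6, 8, 8, 17),
# the deduced relations include (1, 3), (2, 3) and (4, 3).
```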

11

Dominance Properties

Computer implementation: modified labeling scheme

• Renumbering jobs so that i < j whenever job i dominates job j

• Nj = {i | i < j}

• L'(J) = Σ_{k∈J} L'k, where L'k = L'(Nk) – L'(Bk ∩ Nk) + 1

Overall procedure1. Set b(i) = 0 for i = 1..|X|, G(0) = 0

2. Loop

2-1. Find smallest integer j for which b(j) = 0.

If all b(j) = 1, then stop.

2-2. Set b(j) = 1.

2-3. For i = j–1, ..., 1, if b(i) = 1 and b(k) = 0 for all k ∈ Ai, set b(i) = 0.

2-4. Let J = {j | b(j) = 1}.

2-5. Set G(L'(J)) = min_{j∈J: Aj ∩ J = ∅} {G(L'(J – {j})) + gj(p(J))}.

Example of p. 3.12 -- 5 jobs
• A1 = {2}, A2 = ∅, A3 = {4, 5}, A4 = ∅, A5 = ∅
• L'1 = 1, L'2 = 1, L'3 = 3, L'4 = 3, L'5 = 6

The number of subsets to be evaluated is not 2^5 = 32 but only 15.

12

Branch and Bound Approach

Iteration of two fundamental procedures: branching and bounding

Branching
• Partitioning a large problem into subproblems, i.e., replacing the original problem by a set of new problems that are
(a) mutually exclusive and exhaustive subproblems of the original,
(b) partially solved versions of the original, and
(c) smaller problems than the original

[Tree diagram: the original problem P(0) is partitioned into subproblems P(1), P(2), ..., P(n); each of these is partitioned in turn (e.g., P(2) into P(12), P(32), ..., P(n2)), and so on down to subproblems P(S).]

13

Branch and Bound Approach

Bounding
• Curtailing the enumeration process by calculating a lower bound on the optimal solution of a given subproblem
• Suppose Z is the value of a known solution and the lower bound of a certain subproblem is not less than Z; then the subproblem need not be considered any further.

Fathomed branches
• Branches of a subproblem such that, no matter how the remainder of the subproblem is solved, the resulting solution can never have a value better than a known solution

Active subproblems
• Subproblems that have been encountered in the branching process but that have not been eliminated by dominance properties and whose own subproblems have not yet been generated

Termination condition
• A solution appears at the head of the active list, which is the list of active subproblems ranked by lower bound.

14

Branch and Bound Approach

Example: a branch and bound procedure for the T-problem
Notation

• s -- partial sequence of jobs from among the n jobs originally in the problem
• js -- partial sequence in which s is immediately preceded by job j
• s' -- the complement of s; js' -- the complement of js
• p(s') = Σ_{j∈s'} pj ; p(js') = p(s') – pj
• P(s) -- subproblem with a sequence that ends with the partial sequence s; P(0) -- the original problem
• vs = Σ_{j∈s} Tj ; vjs = max{0, p(s') – dj} + vs
• bs -- lower bound for subproblem P(s)

15

Branch and Bound Approach

Example: a branch and bound procedure for the T-problem (cont'd)
Algorithm 1 -- Branch and Bound

1. (Initialization) Place P(0) on the active list. Set v0 = 0 and p(0') = Σj pj .

2. Remove the first subproblem, P(s), from the active list. Let k denote the number of jobs in the partial sequence s. If k = n, stop: the complete sequence s is optimal. Otherwise, test Theorem 2 for P(s). If the property holds, go to Step 3; otherwise go to Step 4.

3. Let job j be the job with the latest due date in s'. Create the subproblem P(js) with

p(js') = p(s') – pj ; vjs = vs ; and bjs = vs .

Place P(js) on the active list, ranked by its lower bound. Return to Step 2.

4. Create (n – k) subproblems P(js), one for each j in the set s'. For P(js), let

p(js') = p(s') – pj ; vjs = vs + p(s') – dj ; and bjs = vjs .

Place each P(js) on the active list, ranked by its lower bound bjs . Return to Step 2.
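Algorithm 1 can be prototyped with a priority queue standing in for the active list (best-first search, i.e., jumptracking), using bs = vs as the lower bound. The sketch below is my own illustration, not the book's code; jobs are indexed from 0 internally and reported 1-based:

```python
import heapq

def branch_and_bound_total_tardiness(p, d):
    """Best-first version of Algorithm 1.  A node carries the partial
    sequence s that will END the schedule, its tardiness v_s (used as the
    lower bound b_s), and p(s') = total time of the jobs not yet sequenced."""
    n = len(p)
    jobs = tuple(range(n))
    active = [(0, sum(p), ())]                       # (b_s, p(s'), s) for s = empty
    while active:
        b_s, p_rest, s = heapq.heappop(active)       # Step 2: best bound first
        if len(s) == n:
            return b_s, [j + 1 for j in s]           # complete sequence is optimal
        s_comp = [j for j in jobs if j not in s]     # s', the unscheduled jobs
        latest = max(s_comp, key=lambda j: d[j])
        if d[latest] >= p_rest:                      # Step 3: Theorem 2 applies,
            s_comp = [latest]                        # so only branch on that job
        for j in s_comp:                             # Step 4: create P(js)
            v_js = b_s + max(0, p_rest - d[j])       # job j completes at p(s')
            heapq.heappush(active, (v_js, p_rest - p[j], (j,) + s))
    return None

# Five-job example from the next slide: p = (4, 3, 7, 2, 2), d = (5, 6, 8, 8, 17)
print(branch_and_bound_total_tardiness([4, 3, 7, 2, 2], [5, 6, 8, 8, 17]))
# -> (11, [1, 2, 4, 3, 5]): total tardiness 11 with sequence 1-2-4-3-5
```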

16

Branch and Bound Approach

Example -- T-problem

Job j    1    2    3    4    5
pj       4    3    7    2    2
dj       5    6    8    8   17

[Search tree: P(0) branches into P(1), ..., P(5) with lower bounds 13, 12, 10, 10, 1. The best-bound node P(5) is branched next, giving P(15), P(25), P(35), P(45) with bounds 12, 11, 9, 9, and so on. The search terminates when the complete sequence P(12435) -- sequence 1-2-4-3-5 with total tardiness 11 -- reaches the head of the active list.]

17

Branch and Bound Approach

A branch and bound procedure for the T-problem (cont'd)
Options
• Lower bounds
  • bs = vs , or
  • bs = vs + min_{j∈s'} {max{0, p(s') – dj}}, or
  • ...
• Trial solutions for bounding
  • a schedule obtained during branching, or
  • one obtained by pursuing the tree to the bottom as rapidly as possible, or
  • one obtained by a heuristic, such as the MDD rule (see the sketch after this list), or
  • ...
• Branching
  • jumptracking, or
  • backtracking, or
  • ...
• ...

Enumeration of all the feasible solutions, implicitly!
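The MDD (modified due date) rule referenced above is a convenient source of an initial trial solution: at each step, among the unscheduled jobs, schedule the one with the smallest modified due date max{dj, t + pj}, where t is the current time. A minimal sketch of that rule (my own illustration, not the book's code):

```python
def mdd_sequence(p, d):
    """Modified-due-date dispatching rule: a quick trial solution for the
    T-problem, useful as an initial upper bound in branch and bound."""
    remaining = set(range(len(p)))
    t, seq = 0, []
    while remaining:
        # modified due date of job j at time t is max(d_j, t + p_j)
        j = min(remaining, key=lambda j: max(d[j], t + p[j]))
        seq.append(j)
        remaining.remove(j)
        t += p[j]
    return seq

def total_tardiness(seq, p, d):
    t = total = 0
    for j in seq:
        t += p[j]
        total += max(0, t - d[j])
    return total

p, d = [4, 3, 7, 2, 2], [5, 6, 8, 8, 17]          # five-job example above
seq = mdd_sequence(p, d)
print([j + 1 for j in seq], total_tardiness(seq, p, d))
# -> [1, 2, 4, 3, 5] 11, which happens to be optimal for this instance
```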

18

Mixed Integer Programming Formulation

Example: T-problem by sequence-position decisions
Notation

• n -- number of jobs

• pj , dj -- processing time and due date of job j

• xjk = 1 if job j is assigned to the kth position in the sequence; xjk = 0 otherwise

• tk -- tardiness of kth job in sequence

Formulation
• Minimize Σ_{k=1..n} tk
• Subject to
  Σ_{j=1..n} xjk = 1   for all positions k
  Σ_{k=1..n} xjk = 1   for all jobs j
  Σ_{j=1..n} pj Σ_{u=1..k} xju – Σ_{j=1..n} dj xjk ≤ tk   for all positions k
  xjk = 0 or 1   for all jobs j and positions k
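The formulation can be handed directly to an off-the-shelf MIP solver. The sketch below uses the PuLP modeling library with its bundled CBC solver (an assumption on my part; any MIP modeler would do) and the running three-job data (pj = 1, 2, 3; dj = 4, 2, 3):

```python
import pulp

p = {1: 1, 2: 2, 3: 3}          # processing times
d = {1: 4, 2: 2, 3: 3}          # due dates
jobs = positions = range(1, 4)

prob = pulp.LpProblem("T_problem_positional", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(j, k) for j in jobs for k in positions], cat="Binary")
t = pulp.LpVariable.dicts("t", positions, lowBound=0)

prob += pulp.lpSum(t[k] for k in positions)                       # minimize sum of t_k
for k in positions:
    prob += pulp.lpSum(x[(j, k)] for j in jobs) == 1              # one job per position
for j in jobs:
    prob += pulp.lpSum(x[(j, k)] for k in positions) == 1         # one position per job
for k in positions:
    # completion time of position k minus due date of the job placed there
    prob += (pulp.lpSum(p[j] * x[(j, u)] for j in jobs for u in positions if u <= k)
             - pulp.lpSum(d[j] * x[(j, k)] for j in jobs) <= t[k])

prob.solve()
order = sorted(((k, j) for j in jobs for k in positions if x[(j, k)].varValue > 0.5))
print([j for _, j in order], pulp.value(prob.objective))   # expect [2, 1, 3] and 3.0
```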

19

Mixed Integer Programming Formulation

Example: T-problem by sequence-position decisions (cont'd)
Instantiation of the T-problem with 3 jobs

• Minimize

• t1 + t2 + t3

• Subject to

• x11 + x21 + x31 = 1, x12 + x22 + x32 = 1, x13 + x23 + x33 = 1

• x11 + x12 + x13 = 1, x21 + x22 + x23 = 1, x31 + x32 + x33 = 1

• 1x11 + 2x21 + 3x31 – (4x11 + 2x21 + 3x31) ≤ t1

• 1(x11 + x12) + 2(x21 + x22) + 3(x31 + x32) – (4x12 + 2x22 + 3x32) ≤ t2

• 1(x11 + x12 + x13) + 2(x21 + x22 + x23) + 3(x31 + x32 + x33) – (4x13 + 2x23 + 3x33) ≤ t3

• x11, x21, x31, x12, x22, x32, x13, x23, x33 = 0 or 1

A solution (may not be optimal)

• x11 = 0, x21 = 0, x31 = 1, x12 = 1, x22 = 0, x32 = 0, x13 = 0, x23 = 1, x33 = 0 (i.e., sequence 3-1-2, with total tardiness 4)

Job j    1    2    3
pj       1    2    3
dj       4    2    3

20

Mixed Integer Programming Formulation

Example: T-problem by precedence decisions
Notation

• n -- number of jobs

• pj , dj -- processing time and due date of job j

• yij = 1 if job i is scheduled before job j in the sequence, for jobs i ≠ j; yij = 0 otherwise

• sj , tj -- start time and tardiness of job j

• M -- big number

Formulation
• Minimize Σ_{j=1..n} tj
• Subject to
  si + pi ≤ sj + M(1 – yij)   for all jobs i ≠ j
  sj + pj ≤ si + M·yij   for all jobs i ≠ j
  sj + pj – dj ≤ tj   for all jobs j
  yij = 0 or 1   for all jobs i ≠ j
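For comparison, here is a sketch of the precedence formulation in the same assumed PuLP style, again with the three-job data (pj = 1, 2, 3; dj = 4, 2, 3). Only one binary variable per unordered pair is kept (i < j), a standard simplification of the constraint pair above, and M is set to the total processing time, which is large enough here. This is my own illustration, not a supplied solution to the exercise:

```python
import pulp

p = {1: 1, 2: 2, 3: 3}                      # processing times
d = {1: 4, 2: 2, 3: 3}                      # due dates
jobs = list(p)
M = sum(p.values())                         # big-M; the total processing time suffices

prob = pulp.LpProblem("T_problem_precedence", pulp.LpMinimize)
pairs = [(i, j) for i in jobs for j in jobs if i < j]
y = pulp.LpVariable.dicts("y", pairs, cat="Binary")     # y[i,j] = 1 if i precedes j
s = pulp.LpVariable.dicts("s", jobs, lowBound=0)        # start times
t = pulp.LpVariable.dicts("t", jobs, lowBound=0)        # tardiness

prob += pulp.lpSum(t[j] for j in jobs)                  # minimize total tardiness
for (i, j) in pairs:                                    # disjunctive (either/or) pairs
    prob += s[i] + p[i] <= s[j] + M * (1 - y[(i, j)])   # i before j when y = 1
    prob += s[j] + p[j] <= s[i] + M * y[(i, j)]         # j before i when y = 0
for j in jobs:
    prob += s[j] + p[j] - d[j] <= t[j]                  # tardiness definition

prob.solve()
print(sorted(jobs, key=lambda j: s[j].varValue),        # sequence by start time
      pulp.value(prob.objective))                       # expect [2, 1, 3] and 3.0
```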

Exercise: instantiate the T-problem with 3 jobs.

Job j    1    2    3
pj       1    2    3
dj       4    2    3

21

Summary

Sequencing and scheduling: notoriously difficult problems
• Relatively few situations can be analyzed by exploiting special structure.
• General-purpose techniques for optimal solutions: this chapter
• Heuristic methods for relatively good solutions: the next chapter

Some options for efficiency of general methods
Dynamic programming approach
• Importance of efficient computer implementation
• Labeling schemes and set generation algorithms
• Dominance properties
  • Solving the Tw-problem with up to 30 jobs (Schrage and Baker, 1978)
  • Solving the T-problem with up to 100 jobs (Potts and Van Wassenhove, 1982)
Branch and bound approach
• Lower bound calculation
• Initial trial solutions
• Dominance check
• Branching mechanism

Mixed integer programming approach

