Generalizing the Edge-Finder Rule for the Cumulative Constraint

Vincent Gingras, Université Laval, Québec, QC, Canada
[email protected]

Claude-Guy Quimper, Université Laval, Québec, QC, Canada
[email protected]

Abstract

We present two novel filtering algorithms for the CUMULATIVE constraint based on a new energetic relaxation. We introduce a generalization of the Overload Check and Edge-Finder rules based on a function computing the earliest completion time for a set of tasks. Depending on the relaxation used to compute this function, one obtains different levels of filtering. We present two algorithms that enforce these rules. The algorithms use a novel data structure that we call Profile and that encodes the resource utilization over time. Experiments show that these algorithms are competitive with the state-of-the-art algorithms, achieving stronger filtering with a faster runtime.

1 Introduction

Scheduling consists of deciding when a set of tasks needs to be executed on a shared resource. Applications can be found in economics [Buyya et al., 2005] or in industrial sequencing [Harjunkoski et al., 2014].

Constraint programming is an efficient way to solve scheduling problems. Many powerful filtering algorithms that prune the search space have been introduced for various scheduling problems [Baptiste et al., 2001]. These algorithms are particularly well suited to the cumulative problem, in which multiple tasks can be executed simultaneously on a cumulative resource. Among these algorithms, we note the Time-Table [Beldiceanu and Carlsson, 2002], the Energetic Reasoning [Lopez and Esquirol, 1996], the Overload Check [Wolf and Schrader, 2006], the Edge-Finder [Mercier and Van Hentenryck, 2008] and the Time-Table Edge-Finder [Vilím, 2011].

Constraint solvers call filtering algorithms many times during the search, hence the need for them to be fast and efficient. Since cumulative scheduling problems are NP-hard, these algorithms rely on a relaxation of the problem in order to run in polynomial time. In this paper, we introduce a novel relaxation that yields stronger filtering when applied in conjunction with known filtering algorithms.

In the next section, we formally define the Cumulative Scheduling Problem (CuSP). Then, we present two state-of-the-art filtering rules: the Overload Check and the Edge-Finder. We generalize these rules so that they become functions of the earliest completion time of a set of tasks. We introduce a novel function that computes an optimistic value of the earliest completion time for a set of tasks, based on a more realistic relaxation of the CuSP. Along with this function, we present a novel data structure, named Profile, used to compute the function. We introduce two algorithms that enforce the generalized rules using our novel function. Finally, we present experimental results obtained while solving CuSP instances from two different benchmark suites.

2 The Cumulative Scheduling Problem

We consider the scheduling problem where a given set of tasks I = {1, ..., n} must be executed, without interruption, on a cumulative resource of capacity C. A task i ∈ I has an earliest starting time est_i ∈ Z, a latest completion time lct_i ∈ Z, a processing time p_i ∈ Z^+, and a resource consumption value, commonly referred to as its height, h_i ∈ Z^+. The energy of a task i is given by e_i = p_i h_i. We denote the earliest completion time of a task by ect_i = est_i + p_i and its latest starting time by lst_i = lct_i − p_i. Some of these parameters can be generalized to a set of tasks Ω ⊆ I:

    est_\Omega = \min_{i \in \Omega} est_i \qquad lct_\Omega = \max_{i \in \Omega} lct_i \qquad e_\Omega = \sum_{i \in \Omega} e_i

Let S_i be the starting time of task i, with domain dom(S_i) = [est_i, lst_i]. The constraint CUMULATIVE([S_1, ..., S_n], C) is satisfied if the total resource consumption of the tasks executing at any time t does not exceed the resource capacity C, which is expressed as:

    \forall t : \sum_{i \in I,\ S_i \le t < S_i + p_i} h_i \le C \qquad (1)

A solution to the CUMULATIVE constraint is a solution to the Cumulative Scheduling Problem (CuSP). In addition to satisfying the CUMULATIVE constraint, one usually aims at optimizing an objective function, such as minimizing the makespan, i.e. the time at which all tasks are completed. Such scheduling problems are NP-hard [Garey and Johnson, 1979]; it is therefore NP-hard to remove every inconsistent value from the domains of the starting time variables S_i. However, there exist many powerful filtering algorithms running in polynomial time for the CUMULATIVE constraint.
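To make condition (1) concrete, here is a short Python sketch (ours, not from the paper) that checks a candidate assignment of starting times against the capacity; the Task fields mirror the notation est_i, lct_i, p_i and h_i above.

from dataclasses import dataclass

@dataclass
class Task:
    est: int  # earliest starting time
    lct: int  # latest completion time
    p: int    # processing time
    h: int    # resource consumption (height)

def cumulative_ok(tasks, starts, C):
    """Check condition (1): at every time t, the total height of the
    tasks running at t does not exceed the capacity C."""
    if not tasks:
        return True
    horizon = range(min(t.est for t in tasks), max(t.lct for t in tasks))
    for t in horizon:
        load = sum(task.h for task, s in zip(tasks, starts)
                   if s <= t < s + task.p)
        if load > C:
            return False
    return True

# Two tasks of length 2 and height 1 on a resource of capacity 1.
tasks = [Task(0, 4, 2, 1), Task(0, 4, 2, 1)]
print(cumulative_ok(tasks, [0, 2], C=1))  # True: the tasks run back to back
print(cumulative_ok(tasks, [0, 1], C=1))  # False: they overlap at t = 1

Filtering algorithms never enumerate time points this way; the sketch only spells out what a solution must satisfy.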

To execute in polynomial time, these algorithms rely on a relaxation of the original problem that generally revolves around a task property referred to as elasticity [Baptiste et al., 2001]. A task i becomes fully elastic if we allow its resource consumption to fluctuate (and even to be interrupted), as long as the amount of resource consumed in the interval [est_i, lct_i) is equal to its energy e_i.

3 Preliminaries

We present two filtering algorithms based on an energetic relaxation that we later improve using a novel relaxation.

3.1 Overload Check

The Overload Check is a test that detects inconsistencies in the problem and triggers backtracks in the search tree. The Overload Check rule enforces the condition that the energy required by a set of tasks Ω cannot exceed the capacity of the resource over the time interval [est_Ω, lct_Ω):

    \exists \Omega \subseteq I : C(lct_\Omega - est_\Omega) < e_\Omega \Rightarrow \text{fail} \qquad (2)

This condition is necessary for the existence of a feasible solution to the problem. [Wolf and Schrader, 2006] present an algorithm enforcing this rule that runs in O(n log n) time. More recently, [Fahimi and Quimper, 2014] presented an Overload Check algorithm that runs in O(n) time, using a data structure named Timeline. Although initially conceived for the DISJUNCTIVE constraint, the algorithm can be adapted to the CUMULATIVE constraint while maintaining its O(n) running time.
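For illustration only (this is neither the O(n log n) nor the O(n) algorithm cited above), rule (2) can be checked by brute force: it suffices to consider the intervals [est_i, lct_j) and the tasks whose whole window lies inside them. A sketch reusing the hypothetical Task class from the earlier snippet:

def overload_check_naive(tasks, C):
    """Brute-force version of rule (2): fail (return False) if some
    interval [est_i, lct_j) cannot hold the energy of the tasks it contains."""
    for i in tasks:
        for j in tasks:
            lo, hi = i.est, j.lct
            if lo >= hi:
                continue
            # energy of the tasks whose whole window lies in [lo, hi)
            energy = sum(t.p * t.h for t in tasks
                         if t.est >= lo and t.lct <= hi)
            if energy > C * (hi - lo):
                return False  # overload: no feasible schedule exists
    return True

Restricting rule (2) to such task intervals loses nothing, since a violating set Ω can always be replaced by all the tasks contained in [est_Ω, lct_Ω).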

3.2 Edge-Finder

The Edge-Finder algorithm filters the starting time variables. The algorithms by [Vilím, 2009] and [Kameugne et al., 2014] proceed in two phases: the detection and the adjustment. The detection phase detects end-before-end temporal relations between the tasks.

Instantiating the generalized Overload Check rule (6) with the fully-elastic earliest completion time ect^F gives

    \exists \Omega' \subseteq I : ect^F_{\Omega'} > lct_{\Omega'} \Rightarrow \text{fail} \qquad (9)

which is equivalent to

    \exists \Omega' \subseteq I : e_{\Omega'} > C(lct_{\Omega'} - est_{\Omega'}) \Rightarrow \text{fail} \qquad (10)

However, since ect^F_Ω ≤ ect_Ω, rule (6) detects more failure cases than its fully-elastic relaxed version. This suggests finding stronger relaxations for the function ect than ect^F.
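For comparison with the Profile-based value introduced later, the fully-elastic earliest completion time can be computed with an energy-envelope argument in the style of Vilím. Assuming ect^F_Ω is the earliest time by which the whole energy of Ω can be consumed when every task may use any amount of the resource from est_i onwards, it equals ⌈Env(Ω)/C⌉ with Env(Ω) = max over nonempty Θ ⊆ Ω of C · est_Θ + e_Θ, and the maximum is always attained by taking all tasks whose est is at least some threshold m. A naive Python sketch of that computation (ours, not the paper's code):

def ect_fully_elastic(tasks, C):
    """Fully-elastic earliest completion time of a set of tasks:
    ceil(Env / C) where Env = max over thresholds m of
    C*m + (energy of the tasks with est >= m)."""
    if not tasks:
        return float("-inf")
    env = max(C * m + sum(t.p * t.h for t in tasks if t.est >= m)
              for m in {t.est for t in tasks})
    return -(-env // C)  # integer ceiling of env / C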


From [Vilím, 2009], we generalize the Edge-Finder detection rule. A precedence is detected when a set of tasks Ω, executing along with a task i ∉ Ω, cannot meet its deadline:

    \forall \Omega \subset I, \forall i \in I \setminus \Omega : ect_{\Omega \cup \{i\}} > lct_\Omega \Rightarrow \Omega \lessdot i \qquad (11)

Similarly, ect^F_{Ω∪{i}} > lct_Ω implies the detection condition ect^H_{Ω∪{i}} > lct_Ω. Consider the instance with C = 2 and four tasks whose parameters ⟨est_i, lct_i, p_i, h_i⟩ are ⟨0, 4, 2, 1⟩, ⟨1, 4, 1, 2⟩, ⟨1, 4, 1, 2⟩, and ⟨1, 4, 1, 2⟩. Only the Overload Check based on the horizontally-elastic relaxation fails: the total energy is 8 = C(lct_Ω − est_Ω), so rule (2), and hence its fully-elastic counterpart, does not fail. In the instance with C = 2 and the tasks x: ⟨0, 5, 2, 1⟩, y: ⟨1, 5, 2, 1⟩, z: ⟨1, 5, 2, 2⟩, and w: ⟨1, 10, 2, 1⟩, the precedence {x, y, z} ⋖ w is detected only under the horizontally-elastic relaxation.
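The second instance can be checked mechanically: the generalized rule (11) only needs some relaxed earliest-completion-time function, so different relaxations plug into the same loop. The following Python sketch (ours; restricted, like the Detection algorithm presented below, to left cuts Θ = {t : lct_t ≤ lct_j}) takes that function as a parameter:

def detect_precedences_naive(tasks, C, ect_fn):
    """Enforce the generalized detection rule (11) on left cuts:
    for a left cut omega and a task i outside it, if
    ect_fn(omega + [i], C) > lct_omega then every task of omega
    must end before i ends (omega ⋖ i)."""
    precedences = []
    for j in tasks:
        omega = [t for t in tasks if t.lct <= j.lct]  # left cut of j
        lct_omega = max(t.lct for t in omega)
        for i in tasks:
            if i in omega:
                continue
            if ect_fn(omega + [i], C) > lct_omega:
                precedences.append((tuple(omega), i))
    return precedences

Passing ect_fully_elastic reproduces fully-elastic detection; passing a function that computes ect^H from the Profile would yield the stronger detection illustrated by the second example.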

Algorithm 1: ScheduleTasks(Θ, c)
 2   for all time points t do t.Δmax ← 0, t.Δreq ← 0
 4   for i ∈ Θ do
 5       increment T_{est_i}.Δmax and T_{est_i}.Δreq by h_i
 6       decrement T_{lct_i}.Δmax and T_{ect_i}.Δreq by h_i
 7   t ← P.first, ov ← 0, ect ← −∞, S ← 0, h_req ← 0
 9   while t.time ≠ lct_Θ do
11       t.ov ← ov
12       l ← t.next.time − t.time
13       S ← S + t.Δmax
14       h_max ← min(S, c)
15       h_req ← h_req + t.Δreq
16       h_cons ← min(h_req + ov, h_max)
17       if 0 < ov < (h_cons − h_req) · l then
18           l ← max(1, ⌊ov / (h_cons − h_req)⌋)
20           t.insertAfter(t.time + l, t.capacity, 0, 0)
21       ov ← ov + (h_req − h_cons) · l
23       t.capacity ← c − h_cons
24       if t.capacity < c then ect ← t.next.time
25       t ← t.next
26   t.ov ← ov
27   m ← ∞
29   while t ≠ P.first and m > 0 do
30       m ← min(m, t.ov)
32       t.ov ← m
33       t ← t.previous
34   return ect, ov
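Algorithm 1 sweeps the Profile, a doubly linked list of time points that carry the increments Δmax and Δreq, the residual capacity of the interval up to the next point, and the overflow ov. As a rough illustration only (field and method names are ours, chosen to mirror the pseudocode; this is not the authors' implementation), the structure could look as follows in Python:

class TimePoint:
    """One node of the Profile: a time point, the increments of available
    (dmax) and required (dreq) height starting at that time, the residual
    capacity of the interval to the next node, and the carried overflow."""
    def __init__(self, time, capacity, dmax=0, dreq=0):
        self.time = time
        self.capacity = capacity
        self.dmax = dmax
        self.dreq = dreq
        self.ov = 0
        self.prev = None
        self.next = None

    def insert_after(self, time, capacity, dmax, dreq):
        """Split the current interval by inserting a new time point right
        after this one (the role played by line 20 of ScheduleTasks)."""
        node = TimePoint(time, capacity, dmax, dreq)
        node.prev, node.next = self, self.next
        if self.next is not None:
            self.next.prev = node
        self.next = node
        return node

def build_profile(tasks, C):
    """Initial Profile: one node per distinct est, ect and lct value plus
    a sentinel, i.e. at most 3n + 1 points before any insertion."""
    times = sorted({v for t in tasks for v in (t.est, t.est + t.p, t.lct)})
    times.append(times[-1] + 1)  # sentinel after the last deadline
    nodes = [TimePoint(tp, C) for tp in times]
    for a, b in zip(nodes, nodes[1:]):
        a.next, b.prev = b, a
    return nodes[0]  # P.first

Only line 20 of ScheduleTasks adds further nodes, at most one per task, which is where the 4n + 1 bound of Lemma 1 below comes from.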

Lemma 1. The Profile contains at most 4n + 1 time points.

Proof. There is initially one time point for every distinct value of est, ect and lct, plus one sentinel. One additional time point can be created per task being scheduled on the Profile, when its energy is fully spent (line 20). ⬛

Lemma 2. ScheduleTasks runs in O(n) time.

Proof. The loops on lines 2, 4, 9, and 29 iterate over time points. By Lemma 1, they execute O(n) times. ⬛

Theorem 3. The Profile obtained with ScheduleTasks complies with the horizontally-elastic relaxation.

Proof. A task i can only be executed during the interval [est_i, lct_i) and can consume at most h_i units of the resource at any time point in this interval. This property is enforced with the h_max value. An amount of e_i energy is spent for each task i, as the height of task i is added to the h_req value for the interval [est_i, ect_i), which has a length of p_i. ⬛

6 Overload Check

We present an algorithm that enforces the Overload Check rule based on the horizontally-elastic relaxation, i.e.:

    \exists \Omega \subseteq I : ect^H_\Omega > lct_\Omega \Rightarrow \text{fail} \qquad (18)

Algorithm 2: OverloadCheck(I, C)
 1   Θ ← ∅
 2   for i ∈ I in ascending order of lct_i do
 3       Θ ← Θ ∪ {i}
 4       ect, ov ← ScheduleTasks(Θ, C)
 5       if ect > lct_i or ov > 0 then fail

The algorithm OverloadCheck is essentially the same as Vilím's [2009], except that the value of ect is computed using ScheduleTasks. The algorithm also fails if some overflow was left unspent beyond time lct_Θ.

Lemma 3. OverloadCheck runs in O(n²) time.

Proof. The linear-time algorithm ScheduleTasks is called n times. ⬛
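In the same hypothetical Python setting as the earlier sketches, Algorithm 2 is just this loop, with schedule_tasks standing in for the Profile sweep of Algorithm 1 (assumed, not reimplemented here):

def overload_check_he(tasks, C, schedule_tasks):
    """Horizontally-elastic Overload Check (rule 18), following Algorithm 2.
    schedule_tasks(theta, C) is assumed to return (ect, ov) as computed
    by Algorithm 1 on a fresh Profile."""
    theta = []
    for i in sorted(tasks, key=lambda t: t.lct):
        theta.append(i)
        ect, ov = schedule_tasks(theta, C)
        if ect > i.lct or ov > 0:
            return False  # inconsistency detected: trigger a backtrack
    return True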

7 Edge-Finder Detection

We introduce an algorithm that enforces the Edge-Finder rule (11) based on the horizontally-elastic relaxation.

Like Vilím's [2009] algorithm, Detection iterates over all tasks in non-increasing order of lct. On each iteration, the function ScheduleTasks schedules the left cut Θ of the current task on an empty Profile. DetectPrecedences then tests the tasks in Λ for precedence detection. The function DetectPrecedences returns all tasks j ∈ Λ for which rule (11) detects Θ ⋖ j: when the energy required by task j exceeds the energy e still available before time lct, it infers that task j cannot finish before time lct, i.e. Θ ⋖ j.

Algorithm 3: Detection(I, C)
 …
 6   if … > 0 then fail
 8   for h ∈ {h_i | i ∈ Λ} do
 9       Λ_h ← {i ∈ Λ | h_i = h}
10       Ω ← DetectPrecedences(Θ, Λ_h, h, lct_Θ)
11       Prec ← Prec ∪ {Θ ⋖ j | j ∈ Ω}

Algorithm 4: DetectPrecedences(Θ, Λ_h, h, lct)
 1   for all time points t do t.Δmax ← 0
 2   for i ∈ Θ do
 3       decrement T_{est_i}.Δmax by h_i
 4       increment T_{lct_i}.Δmax by h_i
 5   minest ← min_{i ∈ Λ_h} est_i
 6   t ← getNode(lct).previous
 7   Ω ← ∅, e ← 0, ov ← 0, h_max ← h
 8   while t.time ≥ minest do
 9       l ← t.next.time − t.time
10       h_max ← h_max + t.next.Δmax
11       c ← min(t.capacity, h_max − (C − t.capacity))
12       e ← e + l · min(c, h) + max(0, min(ov, (h − c) · l))
13       ov ← max(0, ov + l · (c − h))
15       Ω ← Ω ∪ {j ∈ Λ_h | est_j = t.time, e_j − min(0, h_j (ect_j − lct)) > e}
17       t ← t.previous
18   return Ω

Algorithm 5: Adjustment(Prec, C)
 1   for Θ …

Figure 2: Runtimes and backtracks comparison (scatter plots of runtimes, in seconds, and numbers of backtracks, in thousands, under the Static, DomOverWDeg, and ImpactBasedSearch branching heuristics).

The horizontally-elastic relaxation limits to h_i the amount of energy spent by a task i at any given time, which shifts energy later on the schedule. The fully-elastic relaxation consumes the bottom part of the resource entirely and packs the remaining energy as soon as possible on the upper part. The horizontally-elastic relaxation might not fully consume the bottom part and does not necessarily pack the remaining energy at the earliest time on the upper part. Consider the instance with C = 3 and five tasks whose parameters ⟨est_i, lct_i, p_i, h_i⟩ are x: ⟨0, 4, 2, 1⟩, y: ⟨1, 4, 1, 3⟩, z: ⟨2, 4, 1, 3⟩, w: ⟨2, 4, 1, 1⟩, and v: ⟨1, 10, 3, 1⟩. We get adj^F_v = 2 < adj^H_v = 3 for the precedence {x, y, z, w} ⋖ v.

References

[Baptiste and Le Pape, 2000] Philippe Baptiste and Claude Le Pape. Constraint propagation and decomposition techniques for highly disjunctive and highly cumulative project scheduling problems. Constraints, 5(1-2):119–139, 2000.

[Baptiste et al., 2001] P. Baptiste, C. Le Pape, and W. Nuijten. Constraint-Based Scheduling: Applying Constraint Programming to Scheduling Problems, volume 39. Springer Science & Business Media, 2001.

[Beldiceanu and Carlsson, 2002] Nicolas Beldiceanu and Mats Carlsson. A new multi-resource cumulatives constraint with negative heights. In Principles and Practice of Constraint Programming – CP 2002, pages 63–79, 2002.

[Boussemart et al., 2004] Frédéric Boussemart, Fred Hemery, Christophe Lecoutre, and Lakhdar Sais. Boosting systematic search by weighting constraints. In ECAI, volume 16, page 146, 2004.

[Buyya et al., 2005] Rajkumar Buyya, Manzur Murshed, David Abramson, and Srikumar Venugopal. Scheduling parameter sweep applications on global grids: a deadline and budget constrained cost–time optimization algorithm. Software: Practice and Experience, 35(5):491–512, 2005.

[Fahimi and Quimper, 2014] Hamed Fahimi and Claude-Guy Quimper. Linear-time filtering algorithms for the disjunctive constraint. In Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 2637–2643, 2014.

[Garey and Johnson, 1979] Michael R. Garey and David S. Johnson. Computers and Intractability. W. H. Freeman, 1979.

[Gay et al., 2015] Steven Gay, Renaud Hartert, and Pierre Schaus. Simple and scalable time-table filtering for the cumulative constraint. In Principles and Practice of Constraint Programming, pages 149–157, 2015.

[Harjunkoski et al., 2014] Iiro Harjunkoski, Christos T. Maravelias, Peter Bongers, Pedro M. Castro, Sebastian Engell, Ignacio E. Grossmann, John Hooker, Carlos Méndez, Guido Sand, and John Wassick. Scope for industrial applications of production scheduling models and solution methods. Computers & Chemical Engineering, 62:161–193, 2014.

[Kameugne et al., 2014] Roger Kameugne, Laure Pauline Fotso, Joseph Scott, and Youcheu Ngo-Kateu. A quadratic edge-finding filtering algorithm for cumulative resource constraints. Constraints, 19(3):243–269, 2014.

[Kolisch and Sprecher, 1997] Rainer Kolisch and Arno Sprecher. PSPLIB – a project scheduling problem library: OR software – ORSEP operations research software exchange program. European Journal of Operational Research, 96(1):205–216, 1997.

[Lopez and Esquirol, 1996] Pierre Lopez and Patrick Esquirol. Consistency enforcing in scheduling: A general formulation based on energetic reasoning. In 5th International Workshop on Project Management and Scheduling (PMS'96), 1996.

[Mercier and Van Hentenryck, 2008] Luc Mercier and Pascal Van Hentenryck. Edge finding for cumulative scheduling. INFORMS Journal on Computing, 20(1):143–153, 2008.

[Refalo, 2004] Philippe Refalo. Impact-based search strategies for constraint programming. In Principles and Practice of Constraint Programming – CP 2004, pages 557–571. Springer, 2004.

[Vilím, 2009] Petr Vilím. Edge finding filtering algorithm for discrete cumulative resources in O(kn log n). In Principles and Practice of Constraint Programming – CP 2009, pages 802–816. Springer, 2009.

[Vilím, 2011] Petr Vilím. Timetable edge finding filtering algorithm for discrete cumulative resources. In Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, pages 230–245. Springer, 2011.

[Wolf and Schrader, 2006] Armin Wolf and Gunnar Schrader. O(n log n) overload checking for the cumulative constraint and its application. In Declarative Programming for Knowledge Management, pages 88–101. Springer, 2006.


