
Franciszek Seredynski, Damian Kurdej
Polish Academy of Sciences and Polish-Japanese Institute of Information Technology

APPLYING LEARNING CLASSIFIER SYSTEMS for MULTIPROCESSOR SCHEDULING PROBLEM

Motivations

• New scheduling algorithms are proposed nearly every day
• In the light of
– the NP-hardness of the scheduling problem, and
– the No Free Lunch theorem concerning metaheuristics
this situation may last forever, at least until quantum computers appear

• Can we use the knowledge gained from experience with already known scheduling algorithms (a hyperheuristics approach)?

• We will use GA-based Learning Classifier Systems (LCS) to extract some of this knowledge and use it in the scheduling process

• Multiprocessor Scheduling Problem
• The idea of LCS
• The concept of LCS-based scheduling
• Experimental results
• Conclusions


Multiprocessor system: an undirected, unweighted graph Gs = (Vs, Es), called a system graph.

Parallel program: a weighted, directed, acyclic graph GP = (VP, EP), called a precedence task graph or a program graph.

The purpose of scheduling is to distribute the tasks among the processors in such a way that the precedence constraints are preserved and the response time T (the total execution time) is minimized.

T = f (allocation, scheduling_policy = const)
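A minimal sketch of what the formula above means, assuming a simple fixed list-scheduling policy: given a program graph, an allocation of tasks to processors and a fixed task order, the response time T is the largest task finish time. The task and communication weights below are illustrative only, not taken from the figure.

from collections import defaultdict

# illustrative program graph: computation weights of tasks and communication
# weights of precedence edges (assumed values)
tasks = {"t1": 1, "t2": 2, "t3": 3, "t4": 5}
edges = {("t1", "t2"): 5, ("t1", "t3"): 3, ("t2", "t4"): 5, ("t3", "t4"): 5}
alloc = {"t1": "P1", "t2": "P1", "t3": "P2", "t4": "P2"}   # a candidate allocation

def response_time(tasks, edges, alloc, order):
    """Schedule tasks in the given topological order; a task starts when its
    processor is free and the data from all its predecessors have arrived."""
    proc_free = defaultdict(float)   # earliest time each processor becomes idle
    finish = {}                      # finish time of each task
    for t in order:
        ready = 0.0
        for (u, v), comm in edges.items():
            if v == t:
                # communication delay counts only if the predecessor runs elsewhere
                ready = max(ready, finish[u] + (comm if alloc[u] != alloc[t] else 0))
        finish[t] = max(ready, proc_free[alloc[t]]) + tasks[t]
        proc_free[alloc[t]] = finish[t]
    return max(finish.values())      # T: the total execution (response) time

print(response_time(tasks, edges, alloc, order=["t1", "t2", "t3", "t4"]))   # 13.0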

Examples of a precedence task graph (a) and a system graph (b).


MULTIPROCESSOR SCHEDULING PROBLEM

Problem formulation

• Given a set of program graph instances
• Given a multiprocessor system
• Given a number of scheduling algorithms (heuristics) solving instances of the scheduling problem with some efficiency

• Is it possible to train an LCS to match a given instance of the scheduling problem with the scheduling algorithm from this set that is best for it, i.e. that minimizes the total execution time?

Idea of GA-based Learning Classifier System (LCS)

A Learning Classifier System consists of three cooperating components: a decision system, an evaluation system, and a system for the discovery of new rules. It interacts with its environment in a loop:

• the environment sends its current state as a message, e.g. 10100
• the decision system answers with an action, e.g. "turn right"
• the environment returns a reward for that action, e.g. 120

Classifier (rule) in classical LCS

• The structure of a classifier:
– Condition part C
– Action A
– Strength S

• Strength S is used
– when a classifier is selected from a set of classifiers to perform an action
– when the GA creates new rules

Example classifier: #011 : 01 : 43 (C : A : S)
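A minimal sketch of such a classical classifier, assuming a ternary condition alphabet {0, 1, #} and strength-proportional selection; the class and names are illustrative, not the authors' implementation.

from dataclasses import dataclass
import random

@dataclass
class Classifier:
    condition: str    # string over {'0', '1', '#'}; '#' matches either bit
    action: str
    strength: float

    def matches(self, message: str) -> bool:
        return all(c == '#' or c == m for c, m in zip(self.condition, message))

population = [Classifier("#011", "01", 43.0), Classifier("1#00", "10", 12.0)]
message = "1011"
matching = [cl for cl in population if cl.matches(message)]
# strength-proportional (roulette-wheel) choice of the acting classifier
chosen = random.choices(matching, weights=[cl.strength for cl in matching])[0]
print(chosen.action)    # '01' here, since only the first classifier matches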

Classifier in XCS

• C: condition part
• a: action
• p: expected reward (prediction)
• ε: prediction error
• f: fitness
• exp: experience of the classifier
• ts: the most recent time step at which the GA was applied to this classifier
• as: estimated size of the action sets [A] in which the classifier appears
• num: numerosity of the classifier

Example classifier:
010##0##### : 0 : 1000 : 2.504 : 0.77 : 499 : 19924 : 146.76 : 109
(C : a : p : ε : f : ts : exp : as : num)
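The same record can be written as a small data structure; a sketch only, with the field order and the example values taken from the slide (the trailing underscore in as_ merely avoids the Python keyword).

from dataclasses import dataclass

@dataclass
class XCSClassifier:
    condition: str    # C: ternary condition
    action: int       # a
    p: float          # expected reward (prediction)
    epsilon: float    # prediction error
    f: float          # fitness
    ts: int           # time stamp of the last GA application in an action set
    exp: int          # experience: how many times it appeared in an action set
    as_: float        # estimated size of the action sets it appears in
    num: int          # numerosity: number of micro-classifiers it represents

example = XCSClassifier("010##0#####", 0, 1000.0, 2.504, 0.77, 499, 19924, 146.76, 109)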

XCS interacting with the environment (one decision cycle):

1. The detector reads the environment state, e.g. 0011.
2. The classifiers of the population [P]

   C : a : p : ε : f
   #011 : 01 : 43 : .01 : 99
   #0## : 11 : 11 : .13 : 9
   001# : 01 : 27 : .05 : 52
   #0#1 : 11 : 18 : .24 : 3
   11## : 00 : 32 : .02 : 92
   1#01 : 10 : 24 : .17 : 15
   ...

   whose conditions match the state form the match set [M] (here the first four); if no classifier matches, covering creates one.
3. A prediction array PA holds the fitness-weighted predicted reward for each action: 00: -, 01: 37.49, 11: 12.75, 10: -.
4. An action is selected and the matching classifiers advocating it form the action set [A] (here #011:01:43:.01:99 and 001#:01:27:.05:52).
5. The effector sends the action (01) to the environment.
6. The environment returns a reward ρ.
7. Reinforcement updates the previous action set [A]-1 (here 11##:00:32:.02:92).
8. The GA is applied within the action set.
9. Subsumption removes classifiers covered by more general, accurate ones.
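A minimal runnable sketch of steps 2-4 above (match set, prediction array, action set), using the population from the diagram; it reproduces the slide's prediction-array values of 37.49 for action 01 and 12.75 for action 11. The fitness-weighted averaging is standard XCS, but the code itself is only an illustration.

from collections import defaultdict

def match(condition, state):
    return all(c == '#' or c == s for c, s in zip(condition, state))

# population [P]: (condition C, action a, prediction p, fitness f)
population = [
    ("#011", "01", 43.0, 99.0),
    ("#0##", "11", 11.0, 9.0),
    ("001#", "01", 27.0, 52.0),
    ("#0#1", "11", 18.0, 3.0),
    ("11##", "00", 32.0, 92.0),
    ("1#01", "10", 24.0, 15.0),
]

state = "0011"                                          # message from the detector
M = [cl for cl in population if match(cl[0], state)]    # match set [M]

# prediction array PA: fitness-weighted average prediction per action
num = defaultdict(float)
den = defaultdict(float)
for cond, act, p, f in M:
    num[act] += p * f
    den[act] += f
PA = {act: num[act] / den[act] for act in num}

action = max(PA, key=PA.get)                            # exploit: best predicted action
A = [cl for cl in M if cl[1] == action]                 # action set [A]
print(PA)                                               # {'01': 37.49..., '11': 12.75}
print(action, A)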

Features of XCS

• Creates a population of classifiers
• Processes messages received from the environment
• Applies a GA to evolve the classifiers
• Sends actions to the environment
• Learns, generalizes and modifies the set of classifiers

Our problem

• Given 200 program graph instances created on the basis of the 15-tree graph: the training set
• Each instance is a tree with different random task and communication weights
• A two-processor system is considered
• Given 5 scheduling heuristics
• We want to train the LCS to select, for each instance of the scheduling problem, the scheduling heuristic that provides the best possible solution

Set of list algorithms

• ISH (Insertion Scheduling Heuristic)
• MCP (Modified Critical Path)
• STF (Shortest Time First)
• LTF (Longest Time First)
• our own list algorithm: the priority of a task depends on the size of its subgraph
• We know how each algorithm performs (its response time) on the set of scheduling instances

XCS-based scheduling system

1. XCS receives information about an instance of the scheduling problem (the program graph plus the system graph).
2. XCS selects the best heuristic from the set of available heuristics.
3. The program graph and the system graph become the input data of the selected scheduling algorithm.
4. The scheduling algorithm delivers a solution (a Gantt diagram).
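A hedged sketch of this normal-operation loop. The two heuristics here are trivial stand-ins (simple sorts for STF and LTF) just to make the loop runnable, and the names schedule_instance and xcs_select are placeholders, not the authors' API.

def stf(tasks):    # Shortest Time First: schedule shorter tasks first (stand-in)
    return sorted(tasks, key=tasks.get)

def ltf(tasks):    # Longest Time First: schedule longer tasks first (stand-in)
    return sorted(tasks, key=tasks.get, reverse=True)

HEURISTICS = {"STF": stf, "LTF": ltf}   # ISH, MCP and the subgraph-based rule omitted

def schedule_instance(tasks, signature, xcs_select):
    """Step 1: the signature is the XCS message; step 2: XCS picks a heuristic;
    steps 3-4: the chosen heuristic receives the instance and returns a schedule."""
    name = xcs_select(signature)            # e.g. the action of the winning classifier
    return HEURISTICS[name](tasks)

# toy usage: a fixed selector standing in for a trained XCS
order = schedule_instance({"t1": 1, "t2": 2, "t3": 3, "t4": 5},
                          signature="0" * 43,
                          xcs_select=lambda sig: "STF")
print(order)    # ['t1', 't2', 't3', 't4']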

Program graph signature: the basic information concerning the program graph

• The LCS receives from the environment a signature of the program graph
• The signature encodes some static properties of the program graph:
– comm/comp: the average communication-to-computation time ratio of the program graph (3 bits)
– information about the distribution of tasks with given computational requirements (12 bits)
– information about the distribution of the communication time requirements between tasks (12 bits)
– information about the critical path, based on an evaluation of comp/comm (16 bits)
• The length of the signature: 43 bits
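A sketch of how such a 43-bit signature could be assembled. Only the field layout (3 + 12 + 12 + 16 = 43 bits) comes from the slide; the bucket boundaries, the 3-bit ratio scale and the four-buckets-of-3-bits histograms are assumptions made purely for illustration.

def bits(value, width):
    return format(value, "0{}b".format(width))

def histogram_bits(weights):
    """12 bits: four weight buckets, 3 bits per bucket count (assumed binning)."""
    buckets = [0, 0, 0, 0]
    for w in weights:
        buckets[min(w // 5, 3)] += 1
    return "".join(bits(min(b, 7), 3) for b in buckets)

def signature(task_weights, comm_weights, critical_path_bits):
    ratio = sum(comm_weights) / sum(task_weights)        # averaged comm/comp
    field1 = bits(min(int(ratio * 2), 7), 3)             # 3-bit comm/comp bucket (assumed scale)
    return (field1
            + histogram_bits(task_weights)               # task-weight distribution
            + histogram_bits(comm_weights)               # communication-weight distribution
            + critical_path_bits)                        # 16-bit critical-path field

sig = signature([1, 2, 3, 5], [5, 3, 5, 5], critical_path_bits="0111010000000000")
assert len(sig) == 43
print(sig)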

Distribution of tasks with given computational requirements / distribution of communication time requirements

Coding information concerning the critical path, based on evaluation of comp/comm

• Computing the ratios on the critical path: ratios[0] = 1/4, ratios[1] = 5/3, ratios[2] = 1/3
• Normalization: ratios[0] = 3/27, coded as 01; ratios[1] = 20/27, coded as 11; ratios[2] = 4/27, coded as 01
• The coded signal concerning the critical path: 0111010000000000
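A small sketch reproducing this worked example. The normalization (dividing each ratio by their sum) follows the slide; the 2-bit code thresholds at 1/3 and 2/3 are an assumption that happens to reproduce the slide's codes, since the exact mapping is not stated; the unused part of the 16-bit field is padded with zeros.

from fractions import Fraction

ratios = [Fraction(1, 4), Fraction(5, 3), Fraction(1, 3)]   # comp/comm ratios on the critical path
normalized = [r / sum(ratios) for r in ratios]              # -> 3/27, 20/27, 4/27

def code(r):
    # assumed 2-bit coding: low (<= 1/3) -> 01, medium -> 10, high (> 2/3) -> 11
    if r <= Fraction(1, 3):
        return "01"
    if r <= Fraction(2, 3):
        return "10"
    return "11"

signal = "".join(code(r) for r in normalized).ljust(16, "0")
print(signal)    # 0111010000000000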

Training the LCS: the number of correct matchings of scheduling algorithms to instances as a function of the number of training cycles (plot).

Training the LCS: the population size of rules as a function of the number of cycles (plot).

Training: summary of experiments

• The untrained system correctly matched heuristics with scheduling instances in 40-50% of cases
• The system was able to learn to match heuristics to instances correctly in 100% of cases
• This means that information about the matching process was extracted during learning
• The classifiers contain this information, and a generalization of the rules was observed during learning
• Learning is a costly process, but the gained information can then be used in scheduling

LCS-based scheduling system: normal operation mode

• The testing set is obtained by modifying instances (program graphs) from the training set
• All computation and communication weights were scaled by 10
• Next, the weights of k tasks or communications were changed by a constant d (a sketch of this modification follows below)
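A hedged sketch of that test-instance generation, under the assumption that the k modified weights are chosen at random and increased by d (the slide states neither how the weights to modify are picked nor the sign of the change).

import random

def make_test_instance(task_weights, comm_weights, k, d, rng=random.Random(0)):
    """Scale all weights by 10, then change k randomly chosen weights by d."""
    tasks = {t: w * 10 for t, w in task_weights.items()}
    comms = {e: w * 10 for e, w in comm_weights.items()}
    keys = [("task", t) for t in tasks] + [("comm", e) for e in comms]
    for kind, key in rng.sample(keys, k):
        if kind == "task":
            tasks[key] += d
        else:
            comms[key] += d
    return tasks, comms

tasks, comms = make_test_instance({"t1": 1, "t2": 2, "t3": 3, "t4": 5},
                                  {("t1", "t2"): 5, ("t1", "t3"): 3,
                                   ("t2", "t4"): 5, ("t3", "t4"): 5},
                                  k=1, d=1)
print(tasks, comms)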

Experiment: k=1, d=1

Experiment: k=2, d=2

Experiment: k=3, d=3

Normal operation mode: summary of experiments

Correct matchings of heuristics to scheduling instances | Number k of modified weights (tasks or communications) | Difference d between the initial weight value and the value after modification
90% | 1 | 1
80% | 2 | 2
75% | 3 | 3

Conclusions

• An LCS has been proposed to learn the optimal matching of scheduling algorithms to instances
• Instances were represented by specially designed signatures
• During the learning process, knowledge about the matching was extracted in the shape of LCS rules and then generalized
• Creating the signatures is one of the most crucial issues in the proposed approach
• The performance of the system also depends on many parameters of the LCS
• We believe that the encouraging results of the experiments open new possibilities in developing hyperheuristics