Applied Mathematics and Computation 221 (2013) 257–267
An effective modified binary particle swarm optimization (mBPSO) algorithm for multi-objective resource allocation problem (MORAP)
0096-3003/$ - see front matter © 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.amc.2013.06.039
* Corresponding author. E-mail address: [email protected] (K. Fan).
Kun Fan *, Weijia You, Yuanyuan Li
School of Economics and Management, Beijing Forestry University, 35 Qinghua East Road, Haidian, Beijing 100083, PR China
Article info
Keywords: Binary particle swarm optimization (BPSO); Multi-objective resource allocation problem (MORAP); Algorithm; Pareto optimal solutions; Example simulation
Abstract
A modified binary particle swarm optimization (mBPSO) algorithm is proposed for solving the multi-objective resource allocation problem (MORAP). First, a generation mechanism for initial particles is established to guarantee that the algorithm begins its search for optimal particles in the feasible solution space. Second, we develop an update mechanism for iterative particles, which includes setting up a memory array, modifying the Sig function and verifying the constraint condition, to ensure that the regenerated particles satisfy the constraints and that the algorithm converges quickly. Third, a selection mechanism for pbesti and gbest based on a dynamic neighborhood strategy is proposed to ensure that the algorithm finds Pareto optimal solutions. Comparing the simulation results of our mBPSO with those of the hGA and ACO published in the references, we find that the proposed mBPSO outperforms both. Finally, the effectiveness of the different improvements is analyzed, and the synergism effect and convergence behavior of the mBPSO algorithm demonstrate its good performance.
1. Introduction
The resource allocation problem (RAP) is the process of distributing limited resources among various projects so as to optimize a certain objective. Resources may be raw materials, capital, machinery and equipment, labor or food, and objectives include profit maximization, cost minimization, quality optimization, and so on. For instance, plant distribution [1] allocates limited products among plants to minimize the total cost, and water resources allocation [2] requires that a certain amount of water be purposefully left in or released into an aquatic ecosystem to maintain its condition. Job shop scheduling [3] allocates time for work orders on different types of production equipment to minimize delivery time or maximize equipment utilization. In addition, software testing [4,5] maximizes reliability by allotting testing resources to program modules, and public services resource allocation [6] pursues effectiveness, efficiency and equity while balancing the desired needs of different management levels. Many resource allocation problems in the world remain to be studied.
Because the number of optimization goals differs across problem scenarios, resource allocation problems include the single-objective RAP (SORAP) and the multi-objective RAP. The SORAP optimizes a single goal, such as benefit maximization or cost minimization, while the MORAP seeks to optimize a set of goals simultaneously. With multiple objectives, a solution optimal for all objectives does not necessarily exist because of incommensurability and conflict among the objectives [7].
Usually, there exists a group of solutions to the MORAP that cannot simply be compared with each other. Such solutions, called non-dominated solutions or Pareto optimal solutions, cannot improve any objective value without deteriorating some other objective value [7].
In the past few years, there has been a boom in applying various approaches to many different RAP and MORAP optimization problems. The analytic hierarchy process combined with an artificial neural network algorithm [8] was proposed to put forward a reasonable budget allocation, while Ko & Lin [9] employed a neural network to treat portfolio selection as a resource allocation problem. A data envelopment analysis (DEA) model [10] was proposed for resource allocation; an important advantage of this model is that the decision makers' preferences can be incorporated into the resource reallocation. Rachmawati & Srinivasan [11] proposed a fuzzy evolutionary algorithm (EA) employing fuzzy representation and reasoning for student project allocation involving fuzzy objectives. Grid systems were used in [12] to consider the MORAP and scheduling problem in a grid computing environment. A memetic algorithm [13] was presented for solving project resource allocation problems, where the resource requirement of each project concerns a number of monetary units and never exceeds the amount of capital available. Osman et al. [14] used a general genetic algorithm (GA) to solve the MORAP, while Lin & Gen [15] proposed a multi-objective hybrid genetic algorithm (mo-hGA) approach based on a multistage decision making model to obtain a set of Pareto solutions. On the other hand, the ant colony optimization algorithm (ACO) was modified by Chaharsooghi & Kermani [16] to obtain Pareto solutions of the same MORAP as [15]. Yin & Wang [17] employed the particle swarm optimization (PSO) paradigm and presented a hybrid execution plan to solve the nonlinear MORAP with integer decision variable constraints.
Building on previous work, this paper presents a new algorithm for solving the MORAP based on the binary particle swarm optimization algorithm (BPSO) developed by Kennedy & Eberhart [18]. The motivations of our research are threefold.
• Existing swarm-based methods for the RAP and MORAP mainly include ACO [16] and PSO [17]; to the best of our knowledge, no previous work has applied BPSO to the MORAP. Encouraged by our successful application of BPSO in [19] to a class of single-objective job shop scheduling problems, we employ a modified binary particle swarm optimization (mBPSO) for solving the MORAP.
• The multi-objective hybrid genetic algorithm (mo-hGA) and the modified ant colony optimization (ACO) proposed in [15] and [16], respectively, have been shown to be efficient for the MORAP, and we compare the performance of our mBPSO with them by solving the same MORAP.
• We propose several strategies for handling Pareto optimal solutions, including an initial particle generation mechanism, an iterative particle update mechanism and a best particle selection mechanism. These techniques ensure that the algorithm searches for optimal particles in the feasible solution space and expedite the search. At the same time, these strategies are not only useful in our method but also beneficial to any other problem whose solutions are "0-1" matrices.
The remainder of this paper is organized as follows. Section 2 formulates the addressed MORAP. Section 3 presents the modified BPSO algorithm for tackling the MORAP in detail. Section 4 reports the comparative performance of the proposed mBPSO against the hybrid genetic algorithm and ant colony optimization, together with a convergence analysis. Conclusions are drawn in Section 5.
2. Mathematical formulation
In order to make a clear comparison with the genetic algorithm and the ant colony optimization algorithm, we also focus on the multi-stage decision making model for the multi-objective human resource allocation problem and use the same mathematical model as [15,16].
Notations

Indices:
  i  index of job, i = 1, 2, ..., N
  j  number of workers, j = 0, 1, 2, ..., M

Parameters:
  N     total number of jobs
  M     total number of workers
  c_ij  cost of job i when j workers are assigned
  e_ij  efficiency of job i when j workers are assigned

Decision variables:
  X_ij = 1 if j workers are assigned to job i, and 0 otherwise.
\max \sum_{i=1}^{N} \sum_{j=0}^{M} e_{ij} X_{ij},   (1)

\min \sum_{i=1}^{N} \sum_{j=0}^{M} c_{ij} X_{ij},   (2)

s.t. \sum_{i=1}^{N} \sum_{j=0}^{M} j\,X_{ij} \le M,   (3)

\sum_{j=0}^{M} X_{ij} = 1, \quad \forall i,   (4)

X_{ij} = 0 \text{ or } 1, \quad \forall i, j.   (5)
The objective function (1) maximizes the total efficiency over all jobs, and the objective function (2) minimizes the total cost of all workers. Constraint (3) ensures that no more workers can be assigned than the total number of workers available. Constraint (4) ensures that each job i is assigned workers exactly once.
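To make the constraints concrete, the feasibility test implied by (3)-(5) can be sketched as a small routine. This is an illustration of ours (the paper's implementation was in Delphi 7); the function name and the row-list representation are not from the paper.

```python
def feasible(X, M):
    """Check Constraints (3)-(5) for a 0-1 matrix X whose columns are
    the worker counts j = 0..M (so each row has M + 1 entries)."""
    if any(bit not in (0, 1) for row in X for bit in row):    # Constraint (5)
        return False
    if any(sum(row) != 1 for row in X):                       # Constraint (4)
        return False
    total_workers = sum(j * bit for row in X for j, bit in enumerate(row))
    return total_workers <= M                                 # Constraint (3)
```

For example, with M = 3 the particle [[0,1,0,0], [0,0,1,0]] is feasible since it assigns 1 + 2 = 3 ≤ M workers, while [[0,0,0,1], [0,0,0,1]] would request 6 workers and is rejected.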
This human resource allocation problem is a multi-objective programming problem whose solutions are usually characterized as Pareto optimal solutions, which are not dominated by any other solution. When solution x is strictly better than solution y in at least one objective and not worse than y in the others, we say that y is dominated by x. Formally, given M maximization objective functions f_j(x), j = 1, 2, ..., M, a solution x is said to dominate y, denoted x ≻ y, iff f_i(x) ≥ f_i(y) for all i = 1, 2, ..., M and f_j(x) > f_j(y) for some j ∈ {1, 2, ..., M} [17]. Decision makers tend to prefer Pareto optimal solutions in multi-objective problems because a solution that is not Pareto optimal can still be improved in at least one objective without sacrificing quality in any other. Therefore, this paper focuses on developing an efficient method for searching for Pareto optimal solutions.
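The dominance relation defined above can be written as a one-line test (our sketch; the name `dominates` is not from the paper):

```python
def dominates(fx, fy):
    """True iff objective vector fx dominates fy when every objective is
    to be maximized: fx is no worse everywhere and strictly better somewhere."""
    return (all(a >= b for a, b in zip(fx, fy))
            and any(a > b for a, b in zip(fx, fy)))
```

For the MORAP one can compare, e.g., (efficiency, −cost) pairs so that both components are maximized.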
Considering the form of the solution X (a "0-1" matrix) and the constraints, we improve the binary particle swarm optimization (BPSO) algorithm to solve the MORAP in this article. In implementations of BPSO algorithms, how to update the particles and enable them to fly toward the Pareto front are two of the major research issues. In this paper, several modules for handling Pareto optimal solutions are added, including an initial particle generation mechanism, an iterative particle update mechanism and a best particle selection mechanism.
3. Modified binary particle swarm optimization for MORAP
3.1. Review of BPSO
The particle swarm optimization (PSO) algorithm, based on swarm intelligence theory, is an evolutionary computation technique. The algorithm was first proposed by Kennedy and Eberhart in 1995 [20]; they later developed a discrete binary version of PSO in 1997, namely binary particle swarm optimization (BPSO), which has been used to solve combinatorial optimization problems in practice.
The particle swarm optimization algorithm is inspired by the behavior of bird flocking and fish schooling and is based on the principle that social sharing of information among members of a group offers an evolutionary advantage [21]. When a flock of birds searches for food at random and the region contains only one piece of food, the simplest and most effective strategy for finding it is to search the area surrounding the bird nearest to the food. The PSO algorithm draws its inspiration from this model and is used to solve optimization problems. In recent years, many successful applications of BPSO have been reported, ranging from tuning the structure and parameters of a neural network [22], gene selection and classification [23], feature selection [24] and engineering electromagnetics [25] to job shop scheduling [19].
The BPSO algorithm proceeds similarly to PSO: given an optimization function f(X) with decision variables X, BPSO searches for the optimal solution X* by iteratively evolving a swarm of candidate solutions. Each candidate solution corresponds to a bird's position in the search space, and the bird is called a particle. Besides a position, each particle has its own velocity and fitness. The former determines the direction and distance of the particle's flight, while the latter is determined by the optimized function f(X) and serves as the performance evaluation of the particle within the swarm. Every particle remembers and follows the current optimal particle while searching the solution space. At the same time, each iteration is not completely random: once BPSO finds a better solution, it builds on that solution to find the next better one.
The BPSO algorithm first randomly generates an initial swarm of s particles (potential solutions). In each iteration, particle i (1 ≤ i ≤ s) updates itself by tracking two "extreme values". One is the best position found by the particle itself, called the personal best location pbesti; the other, in the global version of BPSO, is the best position found by the whole swarm, called the global best location gbest. In the local version of BPSO, the second "extreme value" is instead the neighbors' best position lbesti, attained by the particles within a topological neighborhood.
After finding the two extreme values above, the particle updates its velocity and position by Eq. (6) and Formula (7), respectively. With particle i expressed as a D-dimensional vector, the update equations for position X_i = (x_{i1}, x_{i2}, ..., x_{iD})^T and velocity V_i = (v_{i1}, v_{i2}, ..., v_{iD})^T are [19]:

v_{id}^{k+1} = w\,v_{id}^{k} + c_1\,\mathrm{rand}_1^{k}\,(pbest_{id}^{k} - x_{id}^{k}) + c_2\,\mathrm{rand}_2^{k}\,(gbest_{d}^{k} - x_{id}^{k}),   (6)

\text{if } \rho_{id}^{k+1} < \mathrm{Sig}(v_{id}^{k+1}) \text{ then } x_{id}^{k+1} = 1, \text{ else } x_{id}^{k+1} = 0.   (7)

In Eq. (6), v_{id}^{k} and x_{id}^{k} are the velocity and position of the dth dimension of particle i in the kth iteration, respectively. The velocity represents the flight distance of particle i from its current position. The cognition learning rate c_1 and social learning rate c_2 are constants that regulate the maximal step size of flight towards the personal best particle and the global best particle, respectively. If the maximal step size is too small, particles may fly away from the target area, while too big a step size causes them to fly suddenly to, or over, the target area. In addition, rand_1^{k} and rand_2^{k} are random real numbers drawn from U(0,1). The variable w is called the inertia weight, whose role is to balance global and local search.

In Formula (7), the function Sig(v_{id}^{k+1}) is a sigmoid limiting transformation, Sig(v_{id}^{k+1}) = 1/(1 + exp(−v_{id}^{k+1})), and ρ_{id}^{k+1} is a quasi-random number selected from a uniform distribution on [0,1]. Obviously, x_{id}, pbest_{id} and gbest_{d} take values in {0, 1}.
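Eqs. (6) and (7) can be sketched directly. The following is an illustration of ours, not the paper's Delphi code; the velocity clamp at ±4.0 anticipates the saturation limit given later in Section 3.5.

```python
import math
import random

def sig(v):
    # standard sigmoid limiting transformation used in Formula (7)
    return 1.0 / (1.0 + math.exp(-v))

def bpso_update(x, v, pbest, gbest, w=0.7, c1=1.49445, c2=1.49445, vmax=4.0):
    """One BPSO update of a binary position vector x and velocity v,
    following Eqs. (6) and (7) dimension by dimension."""
    new_x, new_v = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * random.random() * (pbest[d] - x[d])
              + c2 * random.random() * (gbest[d] - x[d]))
        vd = max(-vmax, min(vmax, vd))   # clamp to avoid sigmoid saturation
        new_v.append(vd)
        new_x.append(1 if random.random() < sig(vd) else 0)
    return new_x, new_v
```

Note that this plain update ignores the MORAP constraints; Section 3.4.2 modifies it so that regenerated particles stay feasible.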
3.2. The encoding scheme
Because the decision variable X_ij is 1 or 0, we use the binary particle swarm optimization (BPSO) algorithm to solve the multi-objective human resource allocation problem. The encoding scheme of a particle X is the same as that of a solution, namely using 0 or 1 to denote the allocation state of each job: X_ij = 1 indicates that j workers are assigned to job i, and otherwise X_ij = 0. Thus a particle X is a "0-1" matrix with N row vectors, of size N × M (see Eq. (8)). Since each job i can be assigned workers only once (Constraint (4)), each row of X contains exactly one 1 and M − 1 zeros.
X = \begin{bmatrix}
0 & 1 & \cdots & 0 & \cdots & 0 & 0 \\
1 & 0 & \cdots & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & \cdots & 1 & 0 \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & \cdots & 1 & 0
\end{bmatrix}_{N \times M}.   (8)
3.3. Fitness function
The fitness function measures the quality of particles. This paper directly adopts the model's objective functions as the fitness function of the algorithm:
F(X) = \begin{cases}
f_1 = \max \sum_{i=1}^{N} \sum_{j=0}^{M} e_{ij} X_{ij}, \\
f_2 = \min \sum_{i=1}^{N} \sum_{j=0}^{M} c_{ij} X_{ij}.
\end{cases}   (9)
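Evaluating Eq. (9) for an encoded particle is a direct double sum over the 0-1 matrix. A minimal sketch (ours; the function name and arguments are illustrative):

```python
def fitness(X, e, c):
    """Evaluate particle X (rows = jobs, columns = worker counts) against
    Eq. (9): f1 = total efficiency, f2 = total cost."""
    f1 = sum(e[i][j] for i, row in enumerate(X)
             for j, bit in enumerate(row) if bit)
    f2 = sum(c[i][j] for i, row in enumerate(X)
             for j, bit in enumerate(row) if bit)
    return f1, f2
```

With toy tables e = [[0, 5], [0, 7]] and c = [[0, 2], [0, 3]], the particle [[0, 1], [0, 1]] evaluates to (f1, f2) = (12, 5).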
3.4. Improved algorithm
In order to solve the MORAP, several improvements to BPSO are proposed.
3.4.1. Generation mechanism for initial particles
Since randomly generated initial particles generally do not satisfy Constraint (4), it is necessary to design a new strategy for generating the initial particle swarm, so that the algorithm can begin its search for the optimal particle in the feasible solution space.
At the beginning of task allocation, the probability of each job being assigned to any worker is 1/M, and each job can be assigned workers only once (i.e. each row of the particle contains only one 1), so a "generator" can be developed to randomly and equiprobably generate the location (i.e. the column number) of the 1 in each row. In our implementation, the "generator" is realized by a function that generates a random real number in [0,1] and, if the number falls between (j − 1)/M and j/M, sets column j of row i to 1 and all other columns to 0. Each row thus obtains exactly one 1 and M − 1 zeros, so an initial particle satisfying the constraint is generated. The same mechanism is used to generate the whole initial particle swarm (s particles), and all particles are feasible initial solutions of the objective function.
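The "generator" can be sketched as follows. This is our reading: drawing the column index uniformly is equivalent to the interval construction in the text, and since the text leaves open whether the zero-worker column takes part in the draw, we include it here as an assumption.

```python
import random

def generate_particle(N, M):
    """Generate one feasible initial particle: each row (job) gets exactly
    one 1, with the column (worker count 0..M) chosen equiprobably.
    Equivalent to drawing r ~ U[0,1] and selecting column j when r falls
    in the interval ((j-1)/M, j/M]."""
    X = []
    for _ in range(N):
        row = [0] * (M + 1)
        row[random.randint(0, M)] = 1   # equiprobable column choice
        X.append(row)
    return X
```

Every generated row satisfies Constraint (4) by construction, so the swarm starts inside the feasible region.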
3.4.2. Update mechanism for iterative particles
If initial particles update themselves directly with Eq. (6) and Formula (7), the regenerated particles may no longer satisfy Constraint (4). Therefore, we introduce a memory array and modify the Sig function to ensure that particles update effectively.
(1) When each element X_ij of particle X updates its position x_ij, a one-dimensional array called the "memory array" records the indices of rows that already contain a 1 (meaning job i has already been assigned workers). When updating X_ij, the algorithm first checks, via the array, whether row i already contains a 1 among X_i1 to X_i(j−1). If there is no 1 (i.e. the memory array does not yet contain i) and the condition ρ_{id}^{k+1} < Sig(v_{id}^{k+1}) holds, x_ij is updated to 1 and i is recorded into the array. If the two conditions are not satisfied simultaneously, x_ij is updated to 0. In other words, once job i has been assigned in any of the first j − 1 columns, it cannot be assigned again in later columns. The memory array greatly improves the effectiveness of the algorithm; a comparison of results with and without it is given in Section 4.3.
(2) When the particle X is large, the Sig function is modified as:

\mathrm{Sig}(v_{id}^{k+1}) = \frac{1}{(M - j) + \exp(-v_{id}^{k+1})},   (10)

where M is the total number of columns of a particle (solution matrix), i.e. the total number of workers, and j is the column number when X_ij is being updated, i.e. the index of a worker. The reasons for modifying the Sig function are as follows:
• At the beginning of the allocation, the probability that each job is assigned to any worker is equal, namely 1/M.
• After the first worker has been assigned a job, the probability that each remaining job is assigned to any of the other workers is again equal (1/(M − 1)), and so on by analogy.
• It avoids the situation in which too many jobs are assigned to the earlier workers while the later workers receive no jobs.
(3) After completing the position and velocity update of each particle, a judgment condition tests whether the particle meets Constraint (4). If the constraint is met, the program exits the loop; otherwise the particle is re-updated until the constraint condition is satisfied. Updating is more efficient with the "memory array" and the modified Sig function, and verifying the constraint condition eliminates any remaining infeasible solutions.
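Steps (1)-(3) can be sketched for a single row of the particle. This is our illustration: a per-row flag plays the role of the memory array (stopping after the first 1), and the retry loop stands in for the constraint re-check; the fallback assignment is our own safeguard, not part of the paper.

```python
import math
import random

def modified_sig(v, M, j):
    # Eq. (10): the (M - j) term damps the firing probability for early
    # columns so jobs are not piled onto the first workers
    return 1.0 / ((M - j) + math.exp(-v))

def update_row(v_row, M, max_tries=100):
    """Update one row (one job): as soon as a column fires, the rest of the
    row stays 0 (the memory-array idea), so Constraint (4) holds within
    the row. If no column fires in a pass, the row is re-updated."""
    for _ in range(max_tries):
        row = [0] * (M + 1)
        for j in range(M + 1):
            if random.random() < modified_sig(v_row[j], M, j):
                row[j] = 1
                return row          # memory: stop once the row holds a 1
    row = [0] * (M + 1)
    row[M] = 1                      # fallback safeguard (ours, not in the paper)
    return row
```

Note that at j = M the threshold becomes exp(v), so for non-negative velocities the last column always fires if it is reached, which keeps the retry loop short in practice.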
3.4.3. Selection mechanism for pbesti and gbest
Obviously, the basic BPSO algorithm is not suitable for directly solving multi-objective optimization problems, which do not have an absolute global optimal solution. Consequently, we further modify the BPSO algorithm to find multiple optimal solutions and locate the Pareto front.
The concept of Pareto optimality was put forward by Vilfredo Pareto in the 19th century [26]. For Pareto optimal solutions, it is not possible to improve any objective value without deteriorating some other objective value. The Pareto optimum usually yields a group of solutions, called non-inferior or non-dominated solutions, instead of a single solution.
In this paper, the fitness value space is two-dimensional, so the Pareto front is the boundary of the fitness value region, which is a set of continuous or discontinuous lines and/or points. For a maximization problem, the boundary is located at the upper right side of the fitness space (see Fig. 1). If the first fitness value F1 is fixed and only the second objective function F2 is optimized, the final solution will "drop" onto the boundary line containing the Pareto front [27].
Hu & Eberhart [27] proposed a dynamic neighborhood PSO algorithm to solve multi-objective optimization problems and obtained good results. PSO and BPSO share the same algorithmic theory and update principle, and the only difference
Fig. 1. An example of a problem with two objective functions (maximization problem). The Pareto front is marked with the solid line.
between them is the representation of the position. Therefore, we adopt the idea of the dynamic neighborhood and develop the following improvements to the BPSO algorithm for solving the MORAP.
(1) Calculate the distance between the current particle and the other particles in the fitness value space of the first objective function (f1).
(2) Find the nearest l particles as the neighbors of the current particle based on the distance calculated above.
(3) Find the personal optimal particle among these l + 1 particles by calculating the fitness value of the second objective function (f2).
(4) Update the personal best location pbesti. When either of the two values (f1 and 1/f2) of the new particle is higher than those of the current particle, pbesti is updated immediately.
(5) Update the global best location gbest. If and only if the two values (f1 and 1/f2) of the new particle are both higher than those of the current particle, gbest is updated.
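Steps (1)-(3) of the dynamic-neighborhood selection can be sketched as follows (our illustration; the function name and the list-of-tuples fitness layout are assumptions):

```python
def neighborhood_best(fits, i, l=2):
    """Among the l particles nearest to particle i in the f1 (efficiency)
    dimension, together with i itself, return the index of the one with
    the best (lowest) cost f2. fits[k] = (f1_k, f2_k)."""
    others = sorted((k for k in range(len(fits)) if k != i),
                    key=lambda k: abs(fits[k][0] - fits[i][0]))
    neigh = others[:l] + [i]
    return min(neigh, key=lambda k: fits[k][1])
```

Because each particle's neighborhood is recomputed from the current fitness values every iteration, the neighborhoods drift along the front, which is what spreads the swarm over multiple Pareto solutions.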
Through the above three mechanisms for improving BPSO, the modified algorithm successfully finds multiple optimal solutions and locates the Pareto front.
3.5. Parameter control
For this empirical study, we set most of the parameters according to Refs. [27,28].

1. Set the neighbor size l = 2.
2. Fix the first fitness value f1 while optimizing the second fitness function f2.
3. In traditional PSO, the population size of the swarm is often set between 10 and 40. However, in the multi-objective environment a larger population size is preferred. Through numerous experiments, we set the population size s = 40 and the maximum iteration count K = 50.
4. In Eq. (6), appropriate cognition and social learning rates c1 and c2 accelerate convergence without easily leading to a local best solution. Both are set to 1.49445. The inertia weight w = 0.5 + (Rnd/2.0), so w is a random number between 0.5 and 1.0. In addition, rand_1^k ~ U(0,1) and rand_2^k ~ U(0,1).
5. In Formula (7), to avoid the sigmoid function reaching saturation, v_{id}^k is limited to between −4.0 and +4.0.
3.6. Algorithm process
The algorithm process of the proposed mBPSO is presented in Fig. 2.
Fig. 2. The algorithm process of the proposed mBPSO: Begin → initialize xi, vi, pbesti = xi (generation mechanism for initial particles) → evaluate each particle's fitness (Eq. (9)) and find the initial gbest → set iteration number k = 1 → update xi (Eq. (7)) and vi (Eq. (6)) (update mechanism for iterative particles) → calculate each particle's fitness (Eq. (9)) and update pbesti, gbest (selection mechanism for pbesti and gbest) → k = k + 1 → if k ≤ K, repeat the update step; otherwise output.
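The overall flow can be condensed into a toy end-to-end sketch. This is heavily simplified and ours alone: a particle is stored as the chosen column per row (which keeps Constraint (4) by construction), rows are pulled toward pbest/gbest by discrete re-sampling instead of the velocity update of Eqs. (6)-(7), and pbest/gbest use the scalarization f1 − f2 instead of the dynamic-neighborhood Pareto selection of Section 3.4.3. It illustrates the loop structure of Fig. 2, not the full mBPSO.

```python
import random

def mbpso_sketch(e, c, s=20, K=30):
    """Simplified swarm loop over the MORAP data e (efficiency) and c (cost),
    both indexed [job][worker count]."""
    N, M1 = len(e), len(e[0])                    # M1 = M + 1 columns
    def fit(x):
        return (sum(e[i][x[i]] for i in range(N)),   # f1: total efficiency
                sum(c[i][x[i]] for i in range(N)))   # f2: total cost
    def score(x):
        f1, f2 = fit(x)
        return f1 - f2                           # crude scalarization (ours)
    xs = [[random.randrange(M1) for _ in range(N)] for _ in range(s)]
    pb = [x[:] for x in xs]                      # personal bests
    gb = max(pb, key=score)[:]                   # global best
    for _ in range(K):
        for p in range(s):
            for i in range(N):
                r = random.random()              # pull rows toward pbest/gbest
                if r < 0.4:
                    xs[p][i] = pb[p][i]
                elif r < 0.8:
                    xs[p][i] = gb[i]
                else:
                    xs[p][i] = random.randrange(M1)
            if score(xs[p]) > score(pb[p]):
                pb[p] = xs[p][:]
        gb = max(pb + [gb], key=score)[:]
    return gb, fit(gb)
```

Replacing the scalarized comparison with the Pareto rules of steps (4)-(5), and the re-sampling with the velocity-driven update of Section 3.4.2, recovers the structure of the actual mBPSO.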
4. Example simulation and results analysis
In this section, we present simulation results comparing several competing algorithms for the MORAP. The general properties of the proposed mBPSO algorithm, namely synergism and convergence, are then analyzed. The mBPSO was coded in Borland Delphi 7, and all experiments were conducted on a PC with a 1.73 GHz CPU and 2.0 GB of RAM.
4.1. Simulation data
The same mathematical model, extracted from [15], is considered in this paper; we therefore use their simulation data set to evaluate the performance of the proposed mBPSO and compare our algorithm with the existing hybrid genetic algorithm [15] and ant colony optimization algorithm [16]. The problem is to allocate 10 workers to a set of four jobs, and the simulation data in Tables 1 and 2 give the expected efficiency and cost.
4.2. Comparative performances
Tables 3-5 show the Pareto solutions calculated by hGA, ACO and the proposed mBPSO, respectively. The hGA and ACO obtained only 6 and 4 Pareto solutions, respectively, while mBPSO obtained 8. Fig. 3 shows that mBPSO dominates 67% of the Pareto solutions constructed by hGA (Solutions 1, 2, 5 and 6 of Table 3), and none of the Pareto solutions generated by mBPSO is dominated by hGA. At the same time, two Pareto solutions obtained by mBPSO (Solutions 4 and 6 of Table 5) dominate Solution 4 of ACO (see Table 4), and two Pareto solutions of mBPSO (Solutions 5 and 8) coincide with Solutions 1 and 3 of ACO. In addition, mBPSO provides more Pareto solutions to the MORAP than ACO. This means that decision makers have more choices in allocating resources: they can choose to focus on overall efficiency or on cost. Therefore, the proposed mBPSO outperforms the hGA and ACO in solving the multi-objective resource allocation problem.
The above results confirm that the BPSO (or PSO) algorithm performs better than other evolutionary algorithms, including GA and ACO. The reasons include the following: (1) the evolution Eq. (6) and Formula (7) of BPSO contain a "select" mechanism in which the personal best location (the parent generation) is replaced by the current position (the filial generation) of the particle only when the fitness value of the current position is better than that of the best position the particle itself has experienced; (2) the BPSO algorithm retains and uses information about both position and velocity (i.e. the rate of change of the position) in the evolutionary process, while the other evolutionary algorithms retain and use only position information; (3) each particle in the BPSO algorithm flies in a good direction according to the group's experience in each generation, whereas in evolutionary programming mutation proceeds in arbitrary directions through a random function. Therefore, BPSO and PSO exhibit more desirable features than the other evolutionary algorithms.
4.3. The effectiveness analysis of the improved methods
The above comparative results show that mBPSO performs well, mainly owing to the improvements to the algorithm, including the memory array and the modified Sig function. In order to verify the effectiveness of the improved methods, the non-dominated solutions obtained by combinations of the different methods are listed in Table 6 and Fig. 4.
In Table 6 and Fig. 4, "A" represents the generation mechanism for initial particles and "E" the selection mechanism for pbesti and gbest, while "B" (the memory array), "C" (modifying the Sig function) and "D" (verifying the constraint condition) belong to the update mechanism for iterative particles. No non-dominated solutions can be obtained without the memory array and the modified Sig function, which shows that these two improvements play a key role in enabling the mBPSO algorithm to solve the MORAP.
The MORAP solved by "A + B + D + E" and "A + C + D + E" yielded 7 and 4 Pareto solutions, respectively. As can be seen in Fig. 4 and Table 6, mBPSO (i.e. "A + B + C + D + E") dominates 100% of the Pareto solutions constructed by "A + B + D + E" (i.e. without modifying the Sig function). Although two solutions are the same in mBPSO and "A + C + D + E" (i.e. without the memory array), the latter obtains only 4 Pareto solutions, located mainly in the upper left of the Pareto front. Obviously, the quality of the Pareto solutions generated by "A + C + D + E" is worse than that of mBPSO.
Table 1. Expected cost c_ij.

Job | Number of workers j
    |  0   1   2   3   4   5    6    7    8    9   10
 1  | 41  38  46  32  78  76   72   84   80   92   96
 2  | 45  54  36  55  87  82   90  132   97  121  134
 3  | 36  43  68  56  72  59   32   67   86   88  100
 4  | 46  78  88  64  90  80  120  104   96   86  120
Table 2. Expected efficiency e_ij.

Job | Number of workers j
    |  0   1   2   3   4   5    6    7    8    9   10
 1  |  0  37  42  50  54  56   58   65   72   80   95
 2  |  0  49  55  59  62  67   73   80   87   95  102
 3  |  0  45  49  57  64  77   88   92  100  105  110
 4  |  0  60  67  72  79  83   88   97  102  110  120
Table 3. The non-dominated (Pareto) solutions by hGA.

Solution k | |X1| |X2| |X3| |X4| | Overall cost | Overall efficiency
 1         |  3    2    1    4  |     201      |       229
 2         |  0    2    6    2  |     197      |       210
 3         |  3    2    5    0  |     173      |       182
 4         |  3    1    6    0  |     164      |       175
 5         |  1    1    6    2  |     212      |       241
 6         |  0    1    6    3  |     191      |       209
Table 4. The non-dominated (Pareto) solutions by ACO.

Solution k | |X1| |X2| |X3| |X4| | Overall cost | Overall efficiency
 1         |  3    2    1    3  |     175      |       222
 2         |  1    2    6    0  |     152      |       180
 3         |  3    2    0    0  |     150      |       105
 4         |  1    1    6    1  |     202      |       234
Table 5. The non-dominated (Pareto) solutions by mBPSO.

Solution k | |X1| |X2| |X3| |X4| | Overall cost | Overall efficiency
 1         |  3    3    5    5  |     226      |       269
 2         |  3    2    5    5  |     207      |       265
 3         |  3    2    5    3  |     191      |       254
 4         |  3    2    3    3  |     188      |       234
 5         |  3    2    1    3  |     175      |       222
 6         |  3    2    0    3  |     168      |       178
 7         |  3    2    1    0  |     157      |       151
 8         |  3    2    0    0  |     150      |       105
Fig. 3. The hGA and ACO compared with mBPSO: overall efficiency plotted against 1/overall cost for the Pareto solutions of hGA, ACO and mBPSO.
Table 6. The non-dominated solutions of the combinations of the different improved methods (entries are overall cost / overall efficiency).

Solution | A + D + E | A + B + D + E | A + C + D + E | A + B + C + D + E
 1       |    –/–    |   212 / 241   |   253 / 277   |    226 / 269
 2       |    –/–    |   200 / 226   |   251 / 271   |    207 / 265
 3       |    –/–    |   197 / 220   |   226 / 269   |    191 / 254
 4       |    –/–    |   193 / 216   |   207 / 265   |    188 / 234
 5       |    –/–    |   189 / 210   |      –/–      |    175 / 222
 6       |    –/–    |   170 / 174   |      –/–      |    168 / 178
 7       |    –/–    |   157 / 150   |      –/–      |    157 / 151
 8       |    –/–    |      –/–      |      –/–      |    150 / 105

Note: A: generation mechanism for initial particles; B: the memory array; C: modifying the Sig function; D: verifying the constraint condition; E: selection mechanism for pbesti and gbest.
Fig. 4. The non-dominated solutions of the combinations of the different improved methods (overall efficiency vs. 1/overall cost for A + B + D + E, A + C + D + E and A + B + C + D + E).
In short, all of the improvements to BPSO are important, especially the memory array and the modified Sig function, which guarantee the algorithm's high efficiency.
4.4. Synergism
Because BPSO is a multi-agent optimization program, its two important attributes, synergism and convergence, must be investigated to ensure that the algorithm is well developed. The synergism effect means that, in the same computational time, the algorithm achieves a better result by using multiple agents, and good convergence behavior assures us that the target of the multiple agents is to optimize the objective values [17].
The program is executed with different swarm sizes (3, 5, 10, 20, 40 and 80) to investigate the synergism effect of mBPSO: for a fixed computational time, the larger the total number of Pareto optimal solutions found, the better the synergism effect. Because we use the dynamic neighborhood strategy with neighbor size l = 2, the swarm size s must be at least 3. As shown in Fig. 5, the number of Pareto solutions found by swarms with fewer than 20 particles is not acceptable compared with swarms of 20 or more particles. A swarm with three or five particles discovers only one Pareto solution, and a swarm with 10 particles finds two, whereas a swarm with 20 particles is able to find seven. Swarms with more than 20 particles yield only one more Pareto solution than a swarm of 20. Therefore, the proposed mBPSO exhibits a good synergism effect when the swarm size is at least 20 particles, which conforms to most existing PSO applications [17].

Fig. 5. Number of found Pareto solutions by using particle swarms of different size (number of found Pareto solutions plotted against swarm size).

Fig. 6. The convergence curve of Pareto solution 4 generated by mBPSO (Overall efficiency = 234, Overall cost = 188).
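The gbest selection under the dynamic neighborhood strategy (following Hu and Eberhart [27]) can be sketched as follows: each particle locates its l nearest neighbors by distance in one objective and takes the particle that is best on the other objective as its gbest. A minimal sketch, assuming f1 (overall cost) defines nearness and f2 (overall efficiency, to be maximized) selects the winner; the function name and this objective assignment are illustrative assumptions:

```python
def dynamic_neighborhood_gbest(particles, f1, f2, l=2):
    """For each particle, choose a gbest among its l nearest neighbors.

    Nearness is measured by distance in the first objective (f1, e.g. overall
    cost); among those neighbors and the particle itself, the one with the
    best second objective (f2, e.g. overall efficiency) is taken as gbest.
    """
    gbests = []
    for i, p in enumerate(particles):
        # l nearest neighbors of particle i in f1-space, excluding i itself
        others = [q for j, q in enumerate(particles) if j != i]
        neighbors = sorted(others, key=lambda q: abs(f1(q) - f1(p)))[:l]
        # gbest = neighbor (or self) with the best f2
        gbests.append(max(neighbors + [p], key=f2))
    return gbests
```

With l = 2 every particle needs at least two other particles to compare against, which is why the swarm size s must be at least 3.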
4.5. Convergence
To observe the convergence behavior of the proposed algorithm, the program (written in Delphi 7) uses two Charts to automatically track the two objective values of the global best location, i.e. f1(gbest) and f2(gbest). Fig. 6 shows the convergence curves of a Pareto solution generated by mBPSO (Solution 4 of Table 5). Because we use 40 particles for optimization, the global best location of the initial particle swarm already reaches a good value, so f1(gbest) and f2(gbest) update themselves only a few times during the evolution; hence the curves are ladder-like. Overall, the proposed mBPSO algorithm converges well.
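The ladder-like behavior can be reproduced in any environment by logging the best-so-far value of an objective after every iteration; because gbest improves only occasionally, the logged series is a non-decreasing step function. A minimal sketch (the function name and the input format are illustrative):

```python
def track_gbest(objective_stream):
    """Track the best-so-far objective value over a stream of evaluations.

    Each element of `objective_stream` is the best overall-efficiency value
    found at that iteration; gbest only ever improves, so the tracked curve
    is a non-decreasing step ("ladder") function, as in Fig. 6.
    """
    history = []
    best = float("-inf")
    for value in objective_stream:
        if value > best:  # gbest updates itself only a few times
            best = value
        history.append(best)
    return history
```

For instance, `track_gbest([200, 210, 205, 230, 230])` returns `[200, 210, 210, 230, 230]`: the dip at the third iteration does not lower the tracked curve.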
5. Conclusion
The multiple-objective resource allocation problem (MORAP) seeks an allocation of resources to a number of activities that optimizes a set of objectives simultaneously while satisfying the resource constraints [17]. MORAP has been applied in many fields such as product allocation, water resource allocation, project budgeting, job shop scheduling, software testing, and so on. For these different applications, researchers have to put forward corresponding problem
formulations. In this paper we proposed a modified binary particle swarm optimization (mBPSO) for the multi-objective human resource allocation problem (a class of MORAP). Firstly, to guarantee that the algorithm begins its search in the feasible solution space, a generation mechanism for the initial particles is established. Secondly, to assure that the regenerated particles meet the constraints and that the algorithm converges quickly, we develop an update mechanism for the iterative particles which includes setting up the memory array, modifying the Sig function and verifying the constraint condition. Thirdly, to ensure that the algorithm can find Pareto optimal solutions, we propose a selection mechanism for pbesti and gbest based on the dynamic neighborhood strategy. Finally, we compare the simulation results of the proposed mBPSO with those of hGA [15] and ACO [16] applied to the same problem; the comparison shows that the proposed mBPSO outperforms both. The effectiveness of the improved methods for the BPSO algorithm is also analyzed, and the results show that all of the improvements are important, especially the memory array and the modified Sig function, which guarantee the algorithm's high efficiency. Moreover, the analysis shows that the proposed mBPSO exhibits a good synergism effect and good convergence behavior.
Acknowledgments
The helpful comments and suggestions of the anonymous referees are much appreciated by the authors. This research is supported by the Youth Elite Project for Beijing Universities and the Fundamental Research Funds for the Central Universities (Nos. TD2012-05, JGTD2013-01). The Innovation and Industry Development Funds for the Core Area of Haidian District, Beijing (No. K2012003S), the Ministry of Education Humanities and Social Science project (Nos. 11YJAZH098 and 12YJAZH090) and the Beijing Forestry University Young Scientist Fund (No. BLX201127) are also acknowledged.
References
[1] Y.C. Hou, Y.H. Chang, A new efficient encoding mode of genetic algorithms for the generalized plant allocation problem, J. Inf. Sci. Eng. 20 (5) (2004) 1019–1034.
[2] Z.F. Yang, T. Sun, B.S. Cui, B. Chen, G.Q. Chen, Environmental flow requirements for integrated water resources allocation in the Yellow River Basin, China, Commun. Nonlinear Sci. Numer. 14 (5) (2009) 2469–2481.
[3] G. Zhang, X. Shao, P. Li, L. Gao, An effective hybrid particle swarm optimization algorithm for multi-objective flexible job-shop scheduling problem, Comput. Ind. Eng. 56 (4) (2009) 1309–1318.
[4] Y.S. Dai, M. Xie, K.L. Poh, B. Yang, Optimal testing-resource allocation with genetic algorithm for modular software systems, J. Syst. Softw. 66 (1) (2003) 47–55.
[5] O. Berman, M. Cutler, Resource allocation during tests for optimally reliable software, Comput. Oper. Res. 31 (11) (2004) 1847–1865.
[6] X. Li, J. Cui, A comprehensive DEA approach for the resource allocation problem based on scale economies classification, J. Syst. Sci. Complex 21 (4) (2008) 532–549.
[7] C.M. Fonseca, P.J. Fleming, An overview of evolutionary algorithms in multi-objective optimization, Evol. Comput. 3 (1995) 1–16.
[8] Y.C. Tang, An approach to budget allocation for an aerospace company – fuzzy analytic hierarchy process and artificial neural network, Neurocomputing 72 (2009) 3477–3489.
[9] P.C. Ko, P.C. Lin, Resource allocation neural network in portfolio selection, Expert Syst. Appl. 35 (1–2) (2008) 330–337.
[10] J. Wu, Q. An, S. Ali, L. Liang, DEA based resource allocation considering environmental factors, Math. Comput. Model. 11 (2011) 983–993.
[11] L. Rachmawati, D. Srinivasan, A hybrid fuzzy evolutionary algorithm for a multi-objective resource allocation problem, in: Proceedings of the Fifth International Conference on Hybrid Intelligent Systems (HIS'05), Rio de Janeiro, Brazil, 2005, pp. 55–60.
[12] C. Li, L. Li, A new optimal approach for multiple optimisation objectives grid resource allocation and scheduling, Int. J. Syst. Sci. 39 (12) (2008) 1127–1138.
[13] A.H.L. Chen, Applying memetic algorithm in multi-objective resource allocation among competing projects, J. Softw. 5 (8) (2010) 802–809.
[14] M.S. Osman, M.A. Abo-Sinna, A.A. Mousa, An effective genetic algorithm approach to multiobjective resource allocation problems (MORAPs), Appl. Math. Comput. 163 (2) (2005) 755–768.
[15] C. Lin, M. Gen, Multiobjective resource allocation problem by multistage decision-based hybrid genetic algorithm, Appl. Math. Comput. 187 (2) (2007) 574–583.
[16] S.K. Chaharsooghi, A.H.M. Kermani, An effective ant colony optimization algorithm (ACO) for multi-objective resource allocation problem (MORAP), Appl. Math. Comput. 200 (1) (2008) 167–177.
[17] P.Y. Yin, J.Y. Wang, Optimal multiple-objective resource allocation using hybrid particle swarm optimization and adaptive resource bounds technique, J. Comput. Appl. Math. 216 (1) (2008) 73–86.
[18] J. Kennedy, R.C. Eberhart, A discrete binary version of the particle swarm algorithm, in: Proceedings of the World Multiconference on Systemics, Cybernetics and Informatics, Piscataway, NJ, 1997, pp. 4104–4109.
[19] K. Fan, R. Zhang, G. Xia, An improved particle swarm optimization algorithm and its application to a class of JSP problem, in: Proceedings of the IEEE International Conference on Grey Systems and Intelligent Services, Nanjing, China, 2007, pp. 1628–1633.
[20] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942–1948.
[21] M.J. Shirazi, R. Vatankhah, M. Boroushaki, H. Salarieh, A. Alasty, Application of particle swarm optimization in chaos synchronization in noisy environment in presence of unknown parameter uncertainty, Commun. Nonlinear Sci. Numer. Simul. 17 (2) (2012) 742–753.
[22] L. Zhao, F. Qian, Tuning the structure and parameters of a neural network using cooperative binary-real particle swarm optimization, Expert Syst. Appl. 38 (5) (2011) 4972–4977.
[23] L. Chuang, C. Yang, K. Wu, C. Yang, Gene selection and classification using Taguchi chaotic binary particle swarm optimization, Expert Syst. Appl. 38 (10) (2011) 13367–13377.
[24] L. Chuang, C. Yang, J. Li, Chaotic maps based on binary particle swarm optimization for feature selection, Appl. Soft Comput. 11 (1) (2011) 239–248.
[25] J. Nanbo, Y. Rahmat-Samii, Hybrid real-binary particle swarm optimization (hPSO) in engineering electromagnetics, IEEE Trans. Antennas Propag. 58 (12) (2010) 3786–3794.
[26] C.A. Coello Coello, A comprehensive survey of evolutionary-based multiobjective optimization techniques, Knowl. Inf. Syst. 1 (3) (1999) 269–308.
[27] X. Hu, R. Eberhart, Multiobjective optimization using dynamic neighborhood particle swarm optimization, in: Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, 2002, pp. 1677–1681.
[28] W. Yang, Q. Li, Survey on particle swarm optimization algorithm, Chin. Eng. Sci. 5 (6) (2004) 87–94.