Design Optimization Using Fast Annealing Evolution Algorithms B. P. Wang and D. T. Pham
Department of Mechanical and Aerospace Engineering, University of Texas at Arlington
Box 19023, Arlington, Texas. [email protected]
Abstract:
The Fast Annealing Evolution Algorithm (FAEA) is a new global optimization technique that has been used to solve the energy minimization problem in computational chemistry. FAEA combines the idea of adaptive simulated annealing (ASA) with the population concept of genetic algorithms, and has been shown to be effective for unconstrained minimization problems. In this paper, we make some modifications to FAEA, and we extend FAEA to constrained problems and to problems with multiple minima. The capabilities of the proposed algorithms are demonstrated by several numerical examples.
1. Introduction:
The objective functions of complex design optimization problems are often non-convex. This implies the existence of many local minima and makes it difficult to locate the desired global minimizer. In recent years, stochastic global optimization techniques have attracted a lot of attention due to their ease of implementation and their ability to find near-globally-optimal solutions. Unlike classical gradient-based algorithms, which accept only improved designs during the solution process, these global methods will accept some non-descending solutions. This feature allows the global methods to escape from being trapped at a local minimum. There are two types of stochastic global optimization techniques: methods such as simulated annealing work with a single design, while methods such as genetic algorithms work with a population (i.e., a group of designs).
The simulated annealing (SA) method is a generalization of a Monte Carlo approach for minimizing multivariable functions. SA is derived from the process of heating and then slowly cooling a material to obtain a strong crystalline structure (e.g., a metal). The simulated annealing process lowers the temperature by slow stages until the system "freezes" and no further changes occur [10]. At each temperature, the simulation must proceed long enough for the system to reach a steady state or equilibrium. The Metropolis criterion is used in SA for acceptance of a solution. The disadvantage of standard SA is that it is time consuming to find the optimum when the standard Boltzmann distribution function is used. A more advanced version of SA [7] is adaptive simulated annealing (ASA), proposed by A. L. Ingber. ASA uses a new distribution function to choose sampling points. This new distribution function has a fatter tail compared with the distribution functions previously used in SA. With its fatter tail, ASA permits easier access to test local minima in the search for the desired global minimum. Ingber also redefined the temperature-updating scheme. Due to these modifications, ASA converges
44th AIAA/ASME/ASCE/AHS Structures, Structural Dynamics, and Materials Conference, 7-10 April 2003, Norfolk, Virginia
AIAA 2003-1699
Copyright © 2003 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
very fast. However, for hard problems, such as problems with multiple minima where the difference between the global minimum and the local minima is small, ASA sometimes converges to a local minimum.
W. Cai and X. Shao combined the population-based method, EA, with the annealing technique of adaptive simulated annealing, ASA (also known as VFA, the very fast annealing method), to form the Fast Annealing Evolutionary Algorithm (FAEA). FAEA has been applied to a set of standard test functions and compared with some other stochastic methods. The application of FAEA to optimize Lennard-Jones clusters was investigated by W. Cai and X. Shao [3].
In this paper, we first modify Cai's FAEA for unconstrained problems. Our modifications include a temperature-updating scheme based on the specified maximum number of function evaluations. Additionally, we modify the Metropolis acceptance criterion. This variant will be referred to as FAEAU. Next, we modify FAEAU for solving constrained optimization problems; this is called FAEAC in the paper. Finally, we also extend FAEAU to solve problems with multiple minima; this is identified as MS-FAEA in the paper.
The remainder of the paper is structured as follows. Section 2 summarizes the FAEAU, FAEAC, and MS-FAEA algorithms. The capabilities of these algorithms are demonstrated by numerical examples presented in Section 3. Section 4 concludes the paper. The problems used in the numerical examples are given in Appendix A.
2. Algorithms:
Simulated annealing was essentially introduced as a Monte Carlo importance-sampling technique for large-dimensional problems. The basic simulated annealing (SA) algorithm can be described by the following pseudo code:
T = Tstart
WHILE T > Tmin
    FOR i = 1 TO Ns
        Perform Metropolis Algorithm
    END FOR
    Reduce T
END WHILE
The ability of SA to obtain the global minimum stems from the fact that, in the Metropolis algorithm, design points with increasing objective function are accepted with some probability. This allows the algorithm to avoid being trapped at a local minimum. The following pseudo code is the Metropolis algorithm:
T = fixed value or 'temperature'
Generate a new design and the associated objective function, Enew
Evaluate the change in objective function: ∆E = Enew - Eold
IF ∆E < 0
    Downhill move: accept it
ELSE
    Uphill move: possible acceptance
    Accept with probability P = exp(-∆E / T)
    Accept the new design if P > r, where r is a random number in (0, 1)
Update the design if accepted
END IF
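The Metropolis acceptance test above can be sketched in Python; this is a minimal illustration (the function name and the injectable random source are ours, not from the paper):

```python
import math
import random

def metropolis_accept(e_new, e_old, T, rng=random.random):
    """Metropolis criterion: always accept a downhill move; accept an
    uphill move with probability P = exp(-dE/T)."""
    dE = e_new - e_old
    if dE < 0:
        return True                    # downhill move: accept it
    # uphill move: possible acceptance with probability P = exp(-dE/T)
    return math.exp(-dE / T) > rng()
```

At a high temperature nearly every uphill move passes the test; as T approaches zero the rule degenerates to pure descent.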
Ingber developed adaptive simulated annealing (ASA) to improve the performance of the original SA. In ASA, the design-variable distribution function and the temperature-updating scheme are changed to Eqs. (1) and (4) below.
The generating distribution and the new design point are given by

g(y) = Π_{i=1}^{D} 1 / [2(|yi| + Ti) ln(1 + 1/Ti)]   (1)

yi = sgn(ui - 1/2) · Ti · [(1 + 1/Ti)^|2ui - 1| - 1]   (2)

xi_new = xi_old + yi (UBi - LBi)   (3)

where UBi and LBi are the upper and lower bounds and ui = rand[0, 1] is a random number with uniform distribution. The temperature-updating scheme is

Ti(k) = T0i exp(-ci k^(1/D))   (4)

ci = mi exp(-ni/D)   (5)

dk = exp(ni);  k = k + dk   (6)
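Eqs. (2)-(4) can be illustrated with a short Python sketch (the function names are ours; for simplicity a single shared temperature T is used for all variables instead of the per-variable Ti of the equations):

```python
import math
import random

def asa_step(x_old, T, lb, ub, rng=random.random):
    """One ASA move: draw y_i from the fat-tailed generating distribution
    of Eq. (2) and scale by the variable range as in Eq. (3)."""
    x_new = []
    for xo, lo, hi in zip(x_old, lb, ub):
        u = rng()                                  # u_i = rand[0, 1]
        y = math.copysign(1.0, u - 0.5) * T * (
            (1.0 + 1.0 / T) ** abs(2.0 * u - 1.0) - 1.0)
        x_new.append(xo + y * (hi - lo))           # Eq. (3)
    return x_new

def asa_temperature(T0, c, k, D):
    """Annealing schedule of Eq. (4): T(k) = T0 * exp(-c * k**(1/D))."""
    return T0 * math.exp(-c * k ** (1.0 / D))
```

Since |yi| ≤ 1, a single move is at most one full variable range; for small T most draws are tiny while occasional draws remain large, which is the fat tail that lets ASA escape local minima.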
For some hard minimization problems, ASA may still converge to a local minimum. To overcome this difficulty, Cai introduced FAEA, which is a population-based implementation of ASA. In this paper, Cai's schemes are modified somewhat. The new algorithm follows.
Modified FAEA for Unconstrained Problems (FAEAU)
1. Prepare a set of annealing parameters: m, n, ns, npop, and Imax. Calculate Texit, c, and dk. (Here m and n are annealing parameters, ns is the Markov chain constant, T0 is the initial temperature (usually equal to 1), and npop is the population size.)
2. Generate an initial population, evaluate the function, and take the best solution as the current best-known solution.
3. Calculate the current temperature T(k).
4. Generate a new solution for every individual by Eq. (3) based on the current T(k). If a new solution is out of bounds, regenerate it.
5. Evaluate the function.
6. Check acceptance of the new solution based on the Metropolis criterion, with
   H = exp(-(fnew - fold)/T),  U = rand(0, 1)
   A. If H > U, accept:
      1. If the solution is better than or equal to the best-known solution, set the new solution as the best-known solution.
      2. If the accepted solution is worse than the best-known solution, accept the point only.
   B. Else, do not accept:
      1. Update the solution as follows: X = Xbest + ys/D, where ys = y*(UB - LB).
7. Until the iteration counters reach ns and npop, go to 4.
8. If T(k) ≤ Texit, stop; otherwise go to 3.
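Steps 1-8 can be sketched in Python as follows. This is a sketch under assumptions: the stopping temperature Texit, the bounds handling of the pull-toward-best update, and the bookkeeping are our reading of the steps, not the authors' code:

```python
import math
import random

def faeau(f, lb, ub, npop=3, ns=5, m=60, n=10, imax=2000, T0=1.0,
          t_exit=1e-6, seed=0):
    """Sketch of the modified FAEA for unconstrained problems (FAEAU)."""
    rng = random.Random(seed)
    D = len(lb)

    def draw_y(T):
        # Fat-tailed ASA generating distribution, Eq. (2)
        r = rng.random()
        return math.copysign(1.0, r - 0.5) * T * (
            (1.0 + 1.0 / T) ** abs(2.0 * r - 1.0) - 1.0)

    c = m * math.exp(-n / D)                      # Eq. (5)
    I = max(1, imax // (ns * npop))               # Eq. (8)
    dk = (math.log(T0 / t_exit) / c) ** D / I     # Eq. (7)

    pop = [[rng.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(npop)]
    fit = [f(x) for x in pop]
    best_f, best_x = min(zip(fit, pop))

    k, T = 0.0, T0
    while T > t_exit:
        T = T0 * math.exp(-c * k ** (1.0 / D))    # Eq. (4)
        for _ in range(ns):
            for j in range(npop):
                # Step 4: ASA move (Eq. 3), resampled until inside the bounds
                while True:
                    cand = [x + draw_y(T) * (u - l)
                            for x, l, u in zip(pop[j], lb, ub)]
                    if all(l <= x <= u for x, l, u in zip(cand, lb, ub)):
                        break
                fc = f(cand)
                dE = fc - fit[j]
                # Step 6: modified Metropolis criterion H = exp(-dE/T) vs U(0,1)
                if dE <= 0 or math.exp(-dE / T) > rng.random():
                    pop[j], fit[j] = cand, fc
                    if fc <= best_f:
                        best_f, best_x = fc, cand
                else:
                    # Rejected: pull toward the best point, X = Xbest + ys/D
                    # (clipping to the bounds is our assumption)
                    pop[j] = [min(max(xb + draw_y(T) * (u - l) / D, l), u)
                              for xb, l, u in zip(best_x, lb, ub)]
                    fit[j] = f(pop[j])
        k += dk
    return best_f, best_x
```

On a smooth test function such as a two-variable sphere, this sketch typically reaches the neighborhood of the minimum within the budget set by Imax.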
Note that in this algorithm, we use the following equation for dk to update the temperature in ASA:

dk = [ln(T0/Texit) / ci]^D / I   (7)

I = Imax / (ns · npop)   (8)
It should be noted that this new scheme controls the decrease in temperature according to the specified maximum number of function evaluations, Imax.
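To see that Eq. (7) lands the schedule exactly on Texit after the I temperature updates allowed by Imax, substitute k = I·dk into Eq. (4); the parameter values below are illustrative only:

```python
import math

# Illustrative annealing parameters (assumed values, not from the paper)
T0, T_exit, D = 1.0, 1e-6, 2
m, n = 60, 10
c = m * math.exp(-n / D)                     # Eq. (5)
ns, npop, Imax = 5, 3, 3000
I = Imax / (ns * npop)                       # Eq. (8): available T updates
dk = (math.log(T0 / T_exit) / c) ** D / I    # Eq. (7)

# After I updates, k = I*dk, and Eq. (4) returns exactly T_exit:
k_final = I * dk
T_final = T0 * math.exp(-c * k_final ** (1.0 / D))
```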
Extension of FAEA for constrained minimization problems (FAEAC)
To extend the capability of FAEA to constrained problems, we use the Augmented Lagrange method [2] together with a feasibility method added to the acceptance criterion. The Augmented Lagrange method solves a constrained problem as an unconstrained one. The feasibility method is a technique that reduces the number of constraint violations and leads the solution toward the feasible region. In our FAEA, the feasibility method is used in the acceptance criterion.
Consider the constrained optimization problem:
Find X to minimize f(X)
subject to: gs(X) ≤ 0
he(X) = 0
The Augmented Lagrange method is summarized as follows:
Find X to minimize

F = f(X) + Σ_{s=1}^{t} λs gs(X) + Σ_{e=1}^{c} μe he(X)

where f(X) is the objective function, g(X) are the inequality constraint functions, h(X) are the equality constraint functions, and λ and μ are penalty parameters.
We follow the standard Augmented Lagrange Method to solve this problem [2].
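The function F as written above can be evaluated directly. The sketch below implements the penalty function exactly as printed; standard augmented Lagrangian implementations also carry quadratic penalty terms and multiplier updates, which are omitted here:

```python
def augmented_lagrange_F(f, gs, hs, lam, mu, X):
    """F = f(X) + sum_s lam_s * g_s(X) + sum_e mu_e * h_e(X), with one
    multiplier per inequality constraint g_s and equality constraint h_e."""
    return (f(X)
            + sum(l * g(X) for l, g in zip(lam, gs))
            + sum(u * h(X) for u, h in zip(mu, hs)))
```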
The feasibility method is summarized as follows:
1. If the number of violations of g(X) for the current point is less than that for the best-known solution, and the sum of h(X) for the current point is less than that for the best-known solution, then the current point is accepted as the best-known solution.
2. If the number of violations of g(X) for the current point is equal to that for the best-known solution, and the sum of h(X) for the current point is equal to that for the best-known solution, then check acceptance of the new solution with the Metropolis criterion applied to the Augmented Lagrange function F:
   1. If accepted:
      a. If the solution is better than or equal to the best-known solution, set the new solution as the best-known solution.
      b. If the accepted solution is worse than the best-known solution, accept the point only.
   2. If not accepted:
      a. Update the solution as follows: X = Xbest + ys/D.
3. Otherwise, the current point is discarded and updated as follows: X = Xbest + ys/D.
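The feasibility rules can be sketched with two small helpers (the violation tolerance tol is our assumption; the rule structure follows steps 1-3 above):

```python
def count_violations(g_vals, h_vals, tol=1e-6):
    """Bookkeeping used by the feasibility method: the number of violated
    inequalities g(X) <= 0 and the summed magnitude of the equality
    residuals h(X) = 0."""
    n_g = sum(1 for g in g_vals if g > tol)
    sum_h = sum(abs(h) for h in h_vals)
    return n_g, sum_h

def feasibility_compare(cand, best):
    """Compare (violation count, equality sum) of a candidate against the
    best-known point: 'better' triggers rule 1, 'tie' falls through to the
    Metropolis test on F (rule 2), and 'worse' discards the point (rule 3)."""
    (ng_c, sh_c), (ng_b, sh_b) = cand, best
    if ng_c < ng_b and sh_c < sh_b:
        return 'better'
    if ng_c == ng_b and sh_c == sh_b:
        return 'tie'
    return 'worse'
```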
The following is the proposed FAEAC for constrained optimization problems:
1. Prepare a set of annealing parameters: m, n, ns, npop, and Imax. Calculate Texit, c, and dk.
2. Generate an initial population, evaluate the function, and take the best solution as the current best-known solution.
3. Calculate the current temperature T(k).
4. Generate a new solution for every individual by Eq. (3) based on the current T(k). If a new solution is out of bounds, regenerate it.
5. Evaluate the Lagrange function F.
6. Check feasibility and acceptance based on the feasibility method summarized above.
7. Until the iteration counters reach ns and npop, go to 4.
8. If T(k) ≤ Texit, stop; otherwise go to 3.
Extension to problems with multiple minima (MS-FAEA)
Both algorithms above are population based but maintain only a single best solution. The algorithm is now modified so that it acts like parallel annealing: each individual maintains its own best solution. This variant is identified as the Multiple Solution Fast Annealing Evolution Algorithm (MS-FAEA). With this approach, the algorithm can find multiple optimal solutions. Only unconstrained problems are investigated with MS-FAEA.
The MS-FAEA algorithm is summarized below:
1. Prepare a set of annealing parameters: m, n, ns, npop, and Imax. Calculate Texit, c, and dk.
2. Generate an initial population, evaluate the function, and take the best solution as the current best-known solution.
3. Calculate the current temperature T(k).
4. Generate a new solution for every individual by Eq. (3) based on the current T(k). If a new solution is out of bounds, regenerate it.
5. Evaluate the function.
6. Check acceptance of each new solution based on the Metropolis criterion:
   A. If accepted:
      1. If the solution fj is better than or equal to the best-known solution for individual j, set the new solution as the best-known solution for j (j = 1, …, npop).
      2. If the accepted solution fj is worse than individual j's best-known solution, accept the point only.
   B. If not accepted:
      1. Update the solution as follows: Xj = Xbest,j + ys/D.
7. Until the iteration counters reach ns and npop, go to 4.
8. If T(k) ≤ Texit, stop; otherwise go to 3.
3. Numerical Examples:
The capabilities of the various FAEA algorithms are demonstrated by the following numerical examples. The functions used are listed in Appendix A. Three types of problems are presented: unconstrained examples, constrained examples, and problems with multiple global minima.
Unconstrained Examples:
Case 1. Two-dimensional examples
To illustrate FAEAU for unconstrained minimization, we solved the problems defined by functions f1 to f9 in Appendix A. Two-dimensional cases are presented here so that FAEA can be illustrated graphically. The results are summarized in Table 1. It should be noted that a population of 3 is used in all cases.
Function | Global minimum F(X*) | X* | FAEA F(X) | FAEA X | Feval | Npop
f1 | -1.0316 | +/-[0.09, -0.71] | -1.0316 | +/-[0.0907, -0.7127] | 505 | 3
f2 | 0 | [1, 1] | 8.9278E-4 | [0.901, 0.911] | 10000 | 3
f3 | -1.031628 | N/A | | +/-[-0.0899, 0.7126] | 1010 | 3
f4 | N/A | N/A | 4.934E-8 | [3.0000, 2.0001] | 1010 | 3
   |     |     | 4.8152E-8 | [-2.8051, 3.1313] | |
   |     |     | 1.9324E-4 | [-3.7785, -3.2848] | |
   |     |     | 1.1105E-6 | [3.5845, -1.5845] | |
f5 | N/A | N/A | -5.2328 | [-4.4537, -4.4531] | 505 | 3
f6 | 0 | [0, 0] | 7.6147E-8 | [-0.807, 2.035]*1E-5 | 1010 | 3
f7 | -837.9658 | [420.97, 420.97] | -837.966 | [420.9705, 420.97] | 1200 | 3
f8 | 0 | [0, 0] | 0 | [-9.462, 1.367]*1E-8 | 5000 | 3
f9 | -8.88E-16 | [0, 0] | 9.2321E-4 | [0.778, 3.162]*1E-4 | 1010 | 3
Table 1: Solutions for the nine functions above with 2 variables.
The data in Table 1 show that the results obtained by FAEAU are very close to the global solutions. FAEAU has a hard time finding the global minimum of function f8. This is due to (1) the large range of the search space (-600 to 600) and (2) the many local minima whose function values are close to the global minimum. Even with 10,000 function evaluations, the algorithm sometimes converges to a local minimum. If the range of the function is decreased, the global minimum is obtained more easily and with fewer function evaluations. From the table, FAEAU does better with a small search range; it requires more function evaluations for problems with larger ranges.
Case 2. High-dimension examples
We use functions f6 to f9 to demonstrate the capability of FAEAU to solve unconstrained problems with many variables. Specifically, we present the results for problems with 10 to 100 variables.
Rastrigin's F. / Schwefel's F. / Griewangk's F. / Ackley's F. (each entry: F(X), X, Int.)
n = 10: 1.57E-4, 420.9, 74802 | 1.1033, |1|, 80000 | 0.0097, 420.9, 1501 | 5.05E-6, |0.006|, 16000
n = 30: 0.0075, |0.01|, 12000
n = 50: 0.0778, |0.01|, 80000 | 0.2937, 420-421, 50000
n = 100: 3.9025, 420-421, 400000 | 1.5338, |0.8|, 78600
Table 2. Solutions of four functions with many variables. |x| = [-x, x].
Table 2 shows the results of using FAEAU to find the global minima of four multi-modal functions with 10 to 100 variables. All of the results shown in Table 2 are at or near the true global minimum of f = 0. This shows that FAEAU is capable of handling problems with many parameters.
The results for functions f1 and f2 are shown in Figures 1 to 4. Figure 1 shows all the points used by FAEAU to evaluate the objective function; the heavy circles are the 3 points of the initial population. Figure 2 shows the points accepted as the minimum. The minimum points start outside the optimum area and move into it very quickly. The same can be said of Figures 3 and 4 for f2 (the Banana function).
Of the 9 test functions, f8 (Griewank's function) is the hardest for obtaining the global minimum when the maximum number of function evaluations is set to 5000. This is because the function has very many local minima whose function values are close to the global minimum. Because the values are so close, FAEAU cannot distinguish between the local and global minima. Figure 5 shows the surface of Griewank's function over the search range; the surface looks smooth. Figure 6 shows a zoomed-in plot whose range is 60 times smaller than the search range. The plot shows the function has many local minima. One global minimum is at X = 0, where f = 0. The values at the local minima are also very close to zero. Since the global and local minima are so close, the search is sometimes trapped in one of these local minima.
Griewank's function (f8) was solved many times; some of the results are summarized in Figures 7 to 14. Figures 7 to 10 are for runs that land on the global minimum, and Figures 11 to 14 are for a run that lands on a local minimum. As shown in these figures, the method converges fast and eventually becomes a local search, and for two variables it appears to approach the final point from four directions.
The effect of population size on the FAEAU search is shown in Figures 15 and 16. Figure 15 is for a population size of 1, which is the same as simple annealing. Figure 16 shows the result for a population of 20. As shown in Figure 15, the search points are not very diverse and do not cover the whole area; as shown in Figure 16, the search points cover much more area, almost the entire space. Thus it appears that increasing the population size greatly increases the chance of converging to the global minimum.
[Scatter plot over -2 ≤ x ≤ 2, -1 ≤ y ≤ 1; panel title: x^2(4 - 2.1x^2 + x^4/3) + xy + y^2(-4 + 4y^2)]
Figure 1. All points used to evaluate the function f1. (505 + 3 initial)
[Scatter plot over -2 ≤ x ≤ 2, -1 ≤ y ≤ 1; panel title: x^2(4 - 2.1x^2 + x^4/3) + xy + y^2(-4 + 4y^2)]
Figure 2. Points accepted as a minimum point for function f1. (39 + 3 initial)
[Scatter plot over -6 ≤ x, y ≤ 6; panel title: 100(y - x^2)^2 + (1 - x)^2]
Figure 3. All points used to evaluate the Banana function f2. (1513 + 3 initial)
[Scatter plot over -6 ≤ x, y ≤ 6; panel title: 100(y - x^2)^2 + (1 - x)^2]
Figure 4. Points accepted as a minimum point for the Banana function f2. (99 + 3 initial)
Figure 5. Surface and contour plot of the Griewank’s Function f8.
Figure 6. Detailed Surface and contour plot of Griewank’s Function f8 for smaller ranges
Figure 7. All points used to calculate the function f8. (3001 + 3 initial)
Figure 8. Zoomed-in view of all points used to calculate the function f8.
[Scatter plot over -600 ≤ x1, x2 ≤ 600; initial points marked]
Figure 9. All points accepted as a minimum, f8 (134; f = 5.5511e-016).
[Contour plot over -10 ≤ x1, x2 ≤ 10; initial points marked]
Figure 10. Contour with points accepted as minima, f8 (zoomed in; f = 5.5511e-016).
Figure 11. All points for the failed run, f8. (3001; f = 0.0047)
Figure 12. Zoomed-in detail of all points. (3001; f = 0.0047)
[Scatter plot over -600 ≤ x1, x2 ≤ 600; initial points marked]
Figure 13. Points accepted as minima (171; f = 0.0074) for the failed run, f8.
[Contour plot over -10 ≤ x1, x2 ≤ 10; initial points marked]
Figure 14. Results of the failed run landing on a local minimum, f8 (zoomed in). (171; f = 0.0074)
Figure 15. Results for a population of 1: the points do not cover the whole area.
Figure 16. Results for a population of 20: the points cover much more area.
        Problem 1 (P1)   Problem 2 (P2)   Problem 3 (P3)   Problem 4 (P4)
F       -14.9998         682.2200 (R1)    25.6463 (R1)     7100.7000
                         680.7314 (R2)    25.3276 (R2)
Feval   30000            200020 (R1)      50000 (R1)       300000
                         300050 (R2)      150000 (R2)
P1: X = [1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1]
P2: X = [2.1150, 1.8750, -0.6774, 4.6066, -0.6463, 1.1311, 1.4629] (R1)
    X = [2.3287, 1.9391, -0.3570, 4.3866, -0.5735, 1.0359, 1.5762] (R2)
P3: X = [2.0611, 2.6237, 8.7942, 5.0363, 0.9454, 1.2411, 1.1166, 9.6652, 7.8033, 7.8878]
P4: X = [620.81466, 1018.72811, 5461.18140, 185.34639, 281.67183, 214.62202, 303.54313, 381.61972]
Table 3. Results for four linear and nonlinear constrained problems. R1 = run 1, R2 = run 2 (an attempt to obtain a better solution).
Constrained Examples:
The capability of FAEAC for constrained problems is summarized in Table 3. For problems 1, 2, and 3, FAEAC has no trouble getting into the feasible region and converging close to the best-known solution. However, for problem 4, FAEAC has trouble getting into the feasible region with a low number of function evaluations, and it is also unable to get close to the best-known solution. If more function evaluations were used, FAEAC might obtain a better solution. FAEAC finds the feasible region faster when more than one annealing point is used (a population of more than one).
Multiple minima Example:
The capability of MS-FAEA to find multiple minima is demonstrated using f3, f4, f10, and f11. The following parameters are used in the computation: m = 60, n = 10, and ns = 8; Imax = 1041 (1000 attempted) plus 10 initial points for f3, Imax = 3041 (3000 attempted) plus 10 initial for f4, Imax = 1521 (1500 attempted) plus 10 initial for f10, and Imax = 4001 (4000 attempted) plus 10 initial for f11. These functions have two or more global minima. Ten runs were conducted; all ten times the code found the global minima.
f3:  -0.9974  -0.6948  -1.0166  -1.0208  -1.0310  -1.0295  -0.9309  -0.2093  -0.0254   1.0170
x1:  -0.1849  -0.1925  -0.0326   0.0413   0.0908  -0.0797  -0.2374  -1.7100  -1.5908   1.6523
x2:   0.7221  -0.7498   0.7257  -0.6951  -0.7212   0.6973   0.6715   0.8188   0.6934  -0.3568
Best-known solution: f3 = -1.0316285

f4:   0.0253   0.0005   3.0123   0.1999   0.0016   0.0006   0.0000   0.0026   0.0120   0.2663
x1:   2.9737  -3.7823   3.2850   3.0657  -2.8018   3.5858   3.5844  -2.7961  -3.7659   2.9633
x2:   2.0303  -3.2839   1.6639   1.8971   3.1257  -1.8542  -1.8491   3.1309  -3.2715   2.1320
Best-known solution: f4 = 0 (?)

f10:  0.8207   0.4097   0.5110   0.4500   0.3993   0.4032   0.4045   0.4006   0.3979   0.4565
x1:   9.2878   3.1836  -3.2501   3.0919   9.4405  -3.1748   3.1146   3.1653   3.1422   3.0628
x2:   1.7849   2.3001  12.7754   2.1135   2.5031  12.3515   2.3523   2.2547   2.2709   2.5070
Best-known solution: f10 = 5/(4π)

f11: -186.7011 -186.7309 -186.7309 -172.0752 -186.7232 -186.7224 -186.7202 -186.5106 -186.6936 -186.7288
x1:    -1.4283    5.4829   -7.0835    4.8699    5.4829    5.4833   -0.7981    5.4913   -7.0862   -7.0845
x2:    -7.0818    4.8581   -7.7082    5.4021   -1.4233   -1.4270    4.8580   -7.7031    4.8611   -1.4251
Best-known solution: f11 = -186.7309
Table 4. Results for four problems with multiple global minima.
Figure 17. MS-FAEA results for f3: left, final result plotted over the full range; right, zoomed-in plot.
Figure 18. Contour plot with results for f4, all 4 global minima are found.
Figure 19. Results of MS-FAEA for f10: left, contours with three global minima; right, surface-contour plot with the global minima.
Figure 20. Contour plot for f11. MS-FAEA finds some of the 18 global minima.
As shown in Table 4 and Figures 17 to 20, MS-FAEA is capable of finding multiple global minima. The price is that it requires more function evaluations than the conventional method (searching for a single global minimum). The results in Table 4 show that most of the solutions are very close to the known minimum values. For f3, f4, and f10, MS-FAEA finds the locations of all global minima. However, for f11 only a few of the 18 known global minima were found, because a population of only 10 points was used.
4. Concluding Remarks:
In this paper, we have modified and extended the Fast Annealing Evolution Algorithm (FAEA) for the solution of unconstrained and constrained minimization problems. For unconstrained problems, we proposed FAEAU, which contains the following major modifications to adaptive simulated annealing (ASA):
1. A temperature-updating scheme that depends on the total number of function evaluations.
2. A new Metropolis acceptance criterion.
For constrained problems, we developed FAEAC, which uses the Augmented Lagrange multiplier method together with a feasible-region method to handle the constraints. We also proposed
MS-FAEA, which can be used to solve problems with multiple optima. The proposed algorithms are illustrated by many numerical examples. These preliminary results show that population-based simulated annealing can be very effective for solving design optimization problems.
For real engineering design problems, where each function evaluation requires a lengthy numerical solution, the proposed algorithms would not be very useful due to the large number of function evaluations required to obtain a converged solution. However, the proposed algorithms could be coupled with a response surface model (RSM) to increase their overall effectiveness. We are currently investigating a combined FAEAC/RSM algorithm to solve some design problems. We are also performing a systematic study of the effect of the various parameters on the performance of FAEAC. The results will be reported in a future paper.
Appendix A
The functions used in the numerical examples are listed in this Appendix. These include eleven unconstrained functions and four constrained problems. Known solutions of the constrained problems are also included.
Unconstrained Problems:
Function 1:
f1 = (4 - 2.1x1^2 + x1^4/3)x1^2 + x1x2 + (-4 + 4x2^2)x2^2;  -2 ≤ x1 ≤ 2;  -1 ≤ x2 ≤ 1
fmin at (0.09, -0.71) and (-0.09, 0.71)
Function 2: Banana function: [8]
f2 = 100(x2 - x1^2)^2 + (1 - x1)^2;  -6 ≤ xi ≤ 6
fmin = 0 at xi = 1
Function 3: Camelback function: [9]
f3 = 4x1^2 - 2.1x1^4 + x1^6/3 + x1x2 - 4x2^2 + 4x2^4;  -5 ≤ xi ≤ 5
fmin = -1.031628
Function 4: Himmelblau's function: [8]
f4 = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2;  -10 ≤ xi ≤ 10
Function 5: Modified Himmelblau's function: [8]
f5 = 0.01 Σ_{i=1}^{2} {[…]};  -6 ≤ xi ≤ 6
Function 6: Rastrigin's function: [9]
f6 = A·n + Σ_{i=1}^{n} [xi^2 - A cos(2π xi)];  -5.12 ≤ xi ≤ 5.12;  A = 8
fmin = 0 at xi = 0
Function 7: Schwefel's function: [9]
f7 = 418.982887·n - Σ_{i=1}^{n} xi sin(√|xi|);  -500 ≤ xi ≤ 500
fmin = 0 at xi = 420.9687 (n is the number of variables; without the 418.982887·n offset the minimum is -418.982887·n, which is the form reported in Table 1)
Function 8: Griewank's function: [9]
f8 = 1 + Σ_{i=1}^{n} xi^2/4000 - Π_{i=1}^{n} cos(xi/√i);  -600 ≤ xi ≤ 600
fmin = 0 at xi = 0
Function 9: Ackley's function: [8]
f9 = 20 + e - 20 exp(-0.2 √((1/n) Σ_{i=1}^{n} xi^2)) - exp((1/n) Σ_{i=1}^{n} cos(2π xi));  -30 ≤ xi ≤ 30
fmin = 0 at xi = 0
Function 10: Branin's function: [3]
f10 = (x2 - 5.1 x1^2/(4π^2) + 5 x1/π - 6)^2 + 10(1 - 1/(8π)) cos(x1) + 10;  -5 ≤ x1 ≤ 10;  0 ≤ x2 ≤ 15
fmin = 5/(4π) at (-π, 12.275), (π, 2.275), (3π, 2.475)
Function 11: Shubert's function: [3]
f11 = {Σ_{i=1}^{5} i cos[(i + 1)x1 + i]} · {Σ_{i=1}^{5} i cos[(i + 1)x2 + i]};  -10 ≤ xi ≤ 10
fmin = -186.7309
Constrained Problems:
Problem 1: [4]
f = 5 Σ_{i=1}^{4} xi - 5 Σ_{i=1}^{4} xi^2 - Σ_{i=5}^{13} xi
Subject to:
g1 = 2x1 + 2x2 + x10 + x11 - 10 ≤ 0;  g2 = 2x1 + 2x3 + x10 + x12 - 10 ≤ 0
g3 = 2x2 + 2x3 + x11 + x12 - 10 ≤ 0;  g4 = -8x1 + x10 ≤ 0
g5 = -8x2 + x11 ≤ 0;  g6 = -8x3 + x12 ≤ 0
g7 = -2x4 - x5 + x10 ≤ 0;  g8 = -2x6 - x7 + x11 ≤ 0
g9 = -2x8 - x9 + x12 ≤ 0
0 ≤ xi ≤ 1, i = 1,…,9;  0 ≤ xi ≤ 100, i = 10, 11, 12;  0 ≤ x13 ≤ 1
The reported global optimum: X* = [1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1]; f(X*) = -15.
Problem 2: [5]
f = (x1 - 10)^2 + 5(x2 - 12)^2 + x3^4 + 3(x4 - 11)^2 + 10x5^6 + 7x6^2 + x7^4 - 4x6x7 - 10x6 - 8x7
Subject to:
g1 = -127 + 2x1^2 + 3x2^4 + x3 + 4x4^2 + 5x5 ≤ 0
g2 = -282 + 7x1 + 3x2 + 10x3^2 + x4 - x5 ≤ 0
g3 = -196 + 23x1 + x2^2 + 6x6^2 - 8x7 ≤ 0
g4 = 4x1^2 + x2^2 - 3x1x2 + 2x3^2 + 5x6 - 11x7 ≤ 0
-10 ≤ xi ≤ 10, i = 1,…,7
The reported global optimum: X* = [2.330499, 1.951372, -0.4775414, 4.365726, -0.6244870, 1.038131, 1.594227]; f(X*) = 680.6300573.
Problem 3: [5]
f = x1^2 + x2^2 + x1x2 - 14x1 - 16x2 + (x3 - 10)^2 + 4(x4 - 5)^2 + (x5 - 3)^2 + 2(x6 - 1)^2 + 5x7^2 + 7(x8 - 11)^2 + 2(x9 - 10)^2 + (x10 - 7)^2 + 45
Subject to:
g1 = 105 - 4x1 - 5x2 + 3x7 - 9x8 ≥ 0
g2 = -10x1 + 8x2 + 17x7 - 2x8 ≥ 0
g3 = 8x1 - 2x2 - 5x9 + 2x10 + 12 ≥ 0
g4 = -3(x1 - 2)^2 - 4(x2 - 3)^2 - 2x3^2 + 7x4 + 120 ≥ 0
g5 = -5x1^2 - 8x2 - (x3 - 6)^2 + 2x4 + 40 ≥ 0
g6 = -x1^2 - 2(x2 - 2)^2 + 2x1x2 - 14x5 + 6x6 ≥ 0
g7 = -0.5(x1 - 8)^2 - 2(x2 - 4)^2 - 3x5^2 + x6 + 30 ≥ 0
g8 = 3x1 - 6x2 - 12(x9 - 8)^2 + 7x10 ≥ 0
-10 ≤ xi ≤ 10, i = 1,…,10
The reported global optimum: X* = [2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927]; f(X*) = 24.3062091.

Problem 4: [5]
f = x1 + x2 + x3
Subject to:
g1 = 1 - 0.0025(x4 + x6) ≥ 0
g2 = 1 - 0.0025(x5 + x7 - x4) ≥ 0
g3 = 1 - 0.01(x8 - x5) ≥ 0
g4 = x1x6 - 833.33252x4 - 100x1 + 83333.333 ≥ 0
g5 = x2x7 - 1250x5 - x2x4 + 1250x4 ≥ 0
g6 = x3x8 - 1250000 - x3x5 + 2500x5 ≥ 0
100 ≤ x1 ≤ 10000;  1000 ≤ xi ≤ 10000, i = 2, 3;  10 ≤ xi ≤ 1000, i = 4,…,8
The reported global optimum: X* = [579.3040, 1359.943, 5110.071, 182.0174, 295.5985, 217.9799, 286.4162, 395.5979]; f(X*) = 7049.330923.
References:
1. Pham, D.T. and Karaboga, D. ‘Intelligent Optimisation Techniques’, Springer-Verlag London Limited 2000, Great Britain.
2. Vanderplaats, Garret N. ‘Numerical Optimization Techniques for Engineering Design’, Vanderplaats Research & Development, Inc. Colorado Springs, CO, USA.
3. Cai, Wensheng and Shao, Xueguang. (2002). ‘A Fast Annealing Evolutionary Algorithm for Global Optimization’, Journal of Computational Chemistry, Vol. 23, No. 4. John Wiley & Sons, Inc.
4. Floudas, C. A. and Pardalos, P. M. 'A Collection of Test Problems for Constrained Global Optimization Algorithms.' Lecture Notes in Computer Science, vol. 455. Springer-Verlag.
5. Hock, W. and Schittkowski, K. (1981). 'Test Examples for Nonlinear Programming Codes.' Lecture Notes in Economics and Mathematical Systems, vol. 187. Springer-Verlag. ISBN 3-540-10561-1 and 0-387-10561-1.
6. Lampinen, Jouni (2001). ‘Solving Problems Subject to Multiple Nonlinear Constraints by the Differential Evolution.’ In: Radek Matousek and Pavel Osmera (eds.) (2001). Proceeding of MENDEL 2001, ‘7th International conference on Soft Computing.’ June 6-8, 2001, Brno, Czech Republic. Brno University of Technology, Faculty of Engineering, Institute of Automation and Computer Science, Brno (Czech Republic), pp. 50-57. ISBN 80-214-1894-X.
7. Ingber, L. ‘Adaptive Simulated Annealing (ASA): Lessons learned.’ Lester Ingber Research, McLean, VA, USA.
8. Natvarlal, Patel Chirag. ‘Differential Evolution a Method of Global Optimization.’ Master Thesis, The University of Texas at Arlington, May 2002.
9. Torn, A. and Zilinskas, A. 'Global Optimization.' Springer-Verlag: Berlin, 1989.
10. http://www.npac.syr.edu/REU/reu94/ramoldov/proposal/section3_2.html