A review of the methods for global optimization reveals that most have been developed for unconstrained problems. They need to be extended to general constrained problems because most engineering applications involve constraints. Some of the methods can be extended easily, while others need further work. It is also possible to transform a constrained problem into an unconstrained one using penalty or augmented Lagrangian methods and to solve the transformed problem. Some global optimization methods find all the local minimum points, while others find only a few of them. In any case, all of the methods require a very large number of calculations, so the computational effort to obtain a global solution is generally substantial. The methods for global optimization can be divided into two broad categories: deterministic and stochastic. Some deterministic methods are based on assumptions about the cost function that are not easy to verify; these methods are of limited use because they are not applicable to general problems. Other deterministic methods rely on heuristics that may not lead to the true global solution. Several stochastic methods have been developed as variations of pure random search. Some methods are useful only for discrete optimization problems, while others can be used for both discrete and continuous problems. The main characteristics of each method are identified and discussed. The selection of a method for a particular application depends on several attributes, such as the types of design variables, whether or not all local minima are desired, and the availability of gradients of all the functions.

Notation
- Number of equality constraints
- ( )^T : transpose of a vector
- A : a hypercubic cell in clustering methods
- Distance between two adjacent mesh points
- Probability that a uniform sample of size N contains at least one point in a subset A of S
- A(v, x) : aspiration level function
- A_ε : the set of points with cost function values less than f(x_G*) + ε; same as A_f(ε)
- A_f(ε) : the set of points at which the cost function value is within ε of f(x_G*)
- A_Φ(ε) : the set of points x with Φ[f(x)] smaller than ε
- A_N : the set of N random points
- A_q : the set of sample points with cost function values no greater than f_q
- α_Q : the contraction coefficient; -1 ≤ α_Q ≤ 0
- α_E : the expansion coefficient; α_E > 1
- α_R : the reflection coefficient; 0 < α_R ≤ 1
- A_x(ε) : the set of points within a distance ε of x_G*
- D : diagonal form of the Hessian matrix
- det( ) : determinant of a matrix
- d_j : a monotonic function of the number of failed local minimizations
- dt : infinitesimal change in time
- dx : infinitesimal change in design
- ε : a small positive constant
- η(t) : a real function called the noise coefficient
- η_0 : initial value for η(t)
- exp( ) : the exponential function
- f^(c) : the record; the smallest cost function value over X^(c)
- Φ[f(x)] : functional for calculating the volume fraction of a subset
- Second-order approximation to f(x)
- f(x) : the cost function
- An estimate of the upper bound of the global minimum
- f_E : the cost function value at x_E
- f_L : the cost function value at x_L
- f_opt : the current best minimum function value
- f_P : the cost function value at x_P
- f_Q : the cost function value at x_Q
- f_q : a function value used to reduce the random sample
- f_R : the cost function value at x_R
- f_S : the cost function value at x_S
- f_TFmin : a common minimum cost function value for several trajectories
- f_TFopt : the best current minimum value found so far for f_TFmin
- f_W : the cost function value at x_W
- G : minimum number of points in a cell (A) for the cell to be considered full
- Γ : the gamma function
- γ : a factor used to scale the global optimum cost in the zooming method
- δ : minimum distance assumed to exist between two local minimum points
- g_i(x) : constraints of the optimization problem
- H : the size of the tabu list
- H(x*) : the Hessian matrix of the cost function at x*
- h_j : half side length of a hypercube
- h_m : minimum half side length of the hypercubes in one row
- I : the identity matrix
- ILIM : a limit on the number of trials before the temperature is reduced
- J : the set of active constraints
- K : estimate of the total number of local minima
- k : iteration counter
- The number of times a clustering algorithm is executed
- L : Lipschitz constant, defined in Section 2
- L : the number of local searches performed
- λ_i : the corresponding pole strengths
- log( ) : the natural logarithm
- LS : local search procedure
- M : number of local minimum points found in L searches
- m : total number of constraints
- m(t) : mass of a particle as a function of time
- m( ) : the Lebesgue measure of a set
- Average cost value for a number of random sample points in S
- N : the number of sample points taken from a uniform random distribution
- n : number of design variables
- n(t) : nonconservative resistance forces
- n_c : number of cells; S is divided into n_c cells
- NT : number of trajectories
- π : pi (3.1415926...)
- P_i(j) : hypersphere approximating the j-th cluster at stage i
- p(x^(i)) : Boltzmann-Gibbs distribution; the probability of finding the system in a particular configuration
- p_g : a parameter corresponding to each reduced sample point, defined in (36)
- Q : an orthogonal matrix used to diagonalize the Hessian matrix
- ρ_i (i = 1, ..., K) : the relative size of the i-th region of attraction
- r_i(j) : radius of the j-th hypersphere at stage i
- R_x* : region of attraction of a local minimum x*
- r_j : radius of a hypersphere
- r : a critical distance; determines whether a point is linked to a cluster
- R^n : the set of n-tuples of real numbers
- A hyperrectangle set used to approximate S
- S : the constraint set
- A user-supplied parameter used to determine r
- s : the number of failed local minimizations
- T : the tabu list
- t : time
- T(x) : the tunneling function
- T_c(x) : the constrained tunneling function
- T_i : the temperature of a system at configuration i
- TLIMIT : a lower limit for the temperature
- TR : a factor between 0 and 1 used to reduce the temperature
- u(x) : a unimodal function
- V(x) : the set of all feasible moves at the current design
- v(x) : an oscillating small perturbation
- V(y^(i)) : Voronoi cell of the code point y^(i)
- v^(-1) : an inverse move
- v_k : a move; the change from the previous design to the current one
- w(t) : an n-dimensional standard Wiener process
- x : design variable vector of dimension n
- x^# : a movable pole used in the tunneling method
- x^(0) : a starting point for a local search procedure
- X^(c) : a sequence of feasible points {x^(1), x^(2), ..., x^(c)}
- x(t) : design vector as a function of time
- X* : the set of all local minimum points
- x* : a local minimum point for f(x)
- x*(i) : poles used in the tunneling method
- x_G* : a global minimum point for f(x)
- Transformed design space
- ẋ(t) : velocity vector of the particle as a function of time
- ẍ(t) : acceleration vector of the particle as a function of time
- x_C : centroid of the simplex excluding x_W
- x_c : a pole point used in the tunneling method
- x_E : an expansion point of x_R along the direction from x_C to x_R
- x_L : the best point of a simplex
- x_P : a new trial point
- x_Q : a contraction point
- x_R : a reflection point; the reflection of x_W through x_C
- x_S : the second-worst point of a simplex
- x_W : the worst point of a simplex
- The reduced sample point with the smallest function value in a full cell
- Y : the set of code points
- y^(i) : a code point; a point that represents all the points of the i-th cell
- z : a random number uniformly distributed in (0, 1)
- Z^(c) : the set of points x for which f^(c) - ε is smaller than f(x)
- [ ]_+ : max(0, ·)
- | | : absolute value
- ‖ ‖ : the Euclidean norm
- ∇f[x(t)] : the gradient of the cost function
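Several of the symbols above (N, S, f_opt, g_i(x), x_G*) come together in the simplest stochastic method discussed in the review, pure random search, combined with an exterior penalty to transform the constrained problem into an unconstrained one. A minimal sketch; the objective, constraint, bounds, and penalty parameter below are hypothetical, chosen only for illustration:

```python
import random

def penalized(f, gs, x, r=100.0):
    """Exterior penalty: f(x) + r * sum(max(0, g_i(x))^2).
    Turns the constrained problem into an unconstrained one; r is a
    user-chosen penalty parameter (hypothetical value here)."""
    return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in gs)

def pure_random_search(f, gs, bounds, N=10000, seed=0):
    """Draw N uniform samples over a hyperrectangle approximating S,
    track the best value f_opt and the best point, an estimate of x_G*."""
    rng = random.Random(seed)
    f_opt, x_opt = float("inf"), None
    for _ in range(N):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = penalized(f, gs, x)
        if fx < f_opt:
            f_opt, x_opt = fx, x
    return f_opt, x_opt

# Hypothetical example: minimize (x1 - 1)^2 + (x2 - 2)^2
# subject to x1 + x2 <= 2, written in the g(x) <= 0 form.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
gs = [lambda x: x[0] + x[1] - 2.0]
f_opt, x_opt = pure_random_search(f, gs, bounds=[(-5, 5), (-5, 5)])
```

As the review notes, such methods need a very large number of function evaluations: N must grow rapidly with the dimension n for the best sample to land near x_G*, which is why pure random search serves mainly as a baseline for the more refined stochastic methods.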